
Microsoft Ships Production-Ready Agent Framework 1.0 for .NET and Python

Microsoft just dropped Agent Framework 1.0. If you are suffering from AI fatigue, I do not blame you. We survived the October 2025 previews, where half the documentation resulted in 404s and the other half was hallucinated by Copilot. We watched Microsoft ship fifteen overlapping AI SDKs in the span of two years. We endured Semantic Kernel's identity crisis and AutoGen's academic, impenetrable API surface.

But it is April 2026, and the dust has settled. Microsoft has finally stopped throwing spaghetti at the wall. Agent Framework 1.0 is here, it unifies the best parts of Semantic Kernel and AutoGen, and, shockingly, it is actually production-ready. Here is the rundown of what they built, why the architecture makes sense, and how to use it without wanting to throw your laptop into the nearest body of water.

## The End of the Two-Language Cold War

For the past three years, the AI ecosystem has been a bifurcated mess. If you wanted to build data pipelines or write hacky prototypes, you used Python. If you had to maintain a monolithic enterprise backend that actually paid the bills, you used .NET. The Python guys had all the cool new libraries on day one, and the C# guys had to wait six months for a community port that crashed on edge cases.

Agent Framework 1.0 kills this dynamic. It ships under the unified `Microsoft.Agents.AI` namespace for both .NET and Python simultaneously. The core single-agent abstraction is identical across both runtimes: an agent receives context, thinks using a connected LLM, calls tools, and returns a structured response. The abstraction is clean enough to swap providers without touching your business logic.

## The Core Architecture: Goodbye, Spaghetti Prompts

The problem with most LLM wrappers is that they assume you want to write raw strings directly to an API endpoint. Agent Framework 1.0 treats agents as state machines. You define an agent, give it a system prompt, attach tools (which are just native functions), and wire it to a connector.
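To make the state-machine idea concrete, here is a toy version of that loop in plain Python. Everything in it (`MiniAgent`, `ToolCall`, the scripted model) is invented for illustration; it is not the framework's API, just the shape of the think/act cycle the framework manages for you:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical stand-ins for illustration -- NOT the framework's real types.
@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class MiniAgent:
    instructions: str
    model: Callable                     # (messages) -> ToolCall | str
    tools: dict[str, Callable] = field(default_factory=dict)

    def run(self, user_message: str) -> str:
        messages = [("system", self.instructions), ("user", user_message)]
        while True:
            step = self.model(messages)
            if isinstance(step, ToolCall):
                # Execute the native function and feed the result back in.
                result = self.tools[step.name](**step.args)
                messages.append(("tool", f"{step.name} -> {result}"))
            else:
                return step             # final response, loop terminates

# Scripted "model": first requests a tool, then answers from the tool result.
def scripted_model(messages):
    if not any(role == "tool" for role, _ in messages):
        return ToolCall("get_schema", {"table_name": "users"})
    return "The users table has columns: " + messages[-1][1].split("-> ")[1]

agent = MiniAgent(
    instructions="You are a database administrator.",
    model=scripted_model,
    tools={"get_schema": lambda table_name: "id, username, password_hash"},
)
print(agent.run("What columns are in the users table?"))
```

The real framework replaces the scripted model with an actual LLM call, but the control flow is the same: the loop only exits when the model stops asking for tools.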
The framework handles the JSON schema generation for tool calling, the retry logic for 503s, and the context window management. Let's look at the initialization.

### Python Implementation

Installation is standard:

```bash
pip install microsoft-agents-ai
pip install microsoft-agents-connectors-openai
```

The Python API is aggressively Pythonic. No weird Java-style builder patterns.

```python
import os

from microsoft.agents.ai import Agent, Tool
from microsoft.agents.connectors.openai import OpenAIConnector

# Define a native tool
@Tool(name="get_database_schema", description="Retrieves the schema for a given table")
def get_schema(table_name: str) -> dict:
    # Real code would hit Postgres here
    return {"columns": ["id", "username", "password_hash"]}

# Initialize the connector
connector = OpenAIConnector(
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Build the agent
agent = Agent(
    name="DBAdmin",
    instructions="You are a database administrator. Use tools to inspect schemas and answer questions.",
    connector=connector,
    tools=[get_schema]
)

response = agent.run("What columns are in the users table?")
print(response.content)
```

### .NET Implementation

On the C# side, Microsoft leaned heavily into `Microsoft.Extensions.DependencyInjection`. If you know ASP.NET Core, this feels like home.

```bash
dotnet add package Microsoft.Agents.AI
dotnet add package Microsoft.Agents.Connectors.OpenAI
```

```csharp
using Microsoft.Agents.AI;
using Microsoft.Agents.Connectors.OpenAI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateDefaultBuilder(args);

builder.ConfigureServices((context, services) =>
{
    // Register the connector
    services.AddOpenAIConnector(options =>
    {
        options.ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        options.Model = "gpt-4o";
    });

    // Register tools
    services.AddAgentTool<DatabaseTools>();

    // Register the agent
    services.AddAgent("DBAdmin", options =>
    {
        options.Instructions = "You are a database administrator. Use tools to inspect schemas.";
    });
});

var app = builder.Build();

var agent = app.Services.GetRequiredService<IAgentProvider>().GetAgent("DBAdmin");
var result = await agent.RunAsync("What columns are in the users table?");

Console.WriteLine(result.Content);
```

Notice the parity. You aren't learning two different mental models to deploy the same architecture across your microservices.

## Provider Support: The "Bring Your Own Model" Reality

Microsoft knows OpenAI is not the only game in town anymore. They built first-party connectors for the heavy hitters. You do not have to write custom HTTP clients just because your security team suddenly mandated Anthropic over Azure.

### Supported Connectors (v1.0 GA)

| Provider | Connector Package | Best For | Status |
| :--- | :--- | :--- | :--- |
| **Azure OpenAI** | `connectors-azure-openai` | Enterprise compliance, VNET integration | Tier 1 |
| **OpenAI** | `connectors-openai` | Bleeding-edge models (GPT-4.5, o3) | Tier 1 |
| **Anthropic** | `connectors-anthropic` | Coding tasks, massive context windows | Tier 1 |
| **Amazon Bedrock** | `connectors-bedrock` | AWS multi-model architectures | Tier 2 |
| **Google Gemini** | `connectors-gemini` | Multimodal input, massive context | Tier 2 |
| **Ollama** | `connectors-ollama` | Local development, air-gapped deployments | Tier 1 |
| **Microsoft Foundry** | `connectors-foundry` | Internal MS models, Phi-4 deployments | Tier 1 |

Swapping from OpenAI to a local Ollama instance for testing is literally a one-line dependency injection change. This alone saves weeks of refactoring when management decides API costs are too high.

## The Model Context Protocol (MCP) Integration

This is where the framework goes from "nice to have" to "mandatory." Writing custom tools for every single internal API your company uses is tedious. The Model Context Protocol (MCP) emerged late last year as the standard for connecting AI models to data sources. Agent Framework 1.0 supports MCP out of the box.
You do not write REST clients anymore. You point the framework at an MCP server, and the agent automatically discovers the available tools and context.

```json
// mcp-config.json
{
  "servers": {
    "internal-wiki": {
      "command": "node",
      "args": ["/opt/mcp/confluence-server/build/index.js"]
    },
    "github-repo": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Load that config into your Agent, and it suddenly knows how to search your Confluence wiki and read your GitHub PRs without writing a single line of custom tool code. This is how you scale agents across an enterprise without hiring an army of integration engineers.

## A2A: AutoGen Finally Grows Up

Single agents are cute for chat interfaces. Production systems require multi-agent orchestration. You need a researcher agent, a coder agent, and a reviewer agent arguing with each other until the tests pass. Microsoft took the wild, academic chaos of the AutoGen project and formalized it into the A2A (Agent-to-Agent) protocol.

A2A defines standard communication patterns:

1. **Sequential Chat:** Agent A finishes, hands output to Agent B.
2. **Group Chat:** Multiple agents in a room, with a deterministic routing manager deciding who speaks next.
3. **Hierarchical:** A manager agent delegates tasks to worker agents.

Instead of writing brittle while-loops to manage this, you define the topology and let the framework handle the state graph.
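To appreciate what that buys you, here is the brittle hand-rolled equivalent of pattern 2: a naive round-robin loop in plain Python. Every name here is invented for illustration, and the agents are scripted stand-ins rather than LLM calls; the point is the scaffolding (turn tracking, routing, stop conditions) that the framework absorbs for you.

```python
# A deliberately naive round-robin "group chat" -- the kind of brittle
# while-loop a framework-managed topology replaces.
# Agents are just callables: (transcript) -> reply string.

def run_group_chat(agents, initial_message, max_turns=10):
    transcript = [("user", initial_message)]
    turn = 0
    while turn < max_turns:
        name, agent = agents[turn % len(agents)]   # deterministic routing
        reply = agent(transcript)
        transcript.append((name, reply))
        if "APPROVED" in reply:                    # crude consensus check
            break
        turn += 1
    return transcript

# Scripted stand-ins for LLM-backed agents.
def coder(transcript):
    return "def parse(token): return token.split('.')"

def reviewer(transcript):
    last = transcript[-1][1]
    return "APPROVED" if "split" in last else "Rejected: handle malformed input"

log = run_group_chat([("Coder", coder), ("Reviewer", reviewer)],
                     "Write a script to parse JWT tokens.")
print(log[-1])  # ('Reviewer', 'APPROVED')
```

Every edge case (an agent that never approves, a router that needs priorities, token budgets) bloats this loop further, which is exactly the state-graph bookkeeping the framework takes over.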
```python
from microsoft.agents.ai import Agent
from microsoft.agents.ai.orchestration import GroupChat, RoundRobinRouter

# Assumes the `connector` from the earlier example; `run_tests` would be
# another @Tool-decorated native function.
coder = Agent(
    name="Coder",
    instructions="Write python code.",
    connector=connector,
    tools=[run_tests]
)
reviewer = Agent(
    name="Reviewer",
    instructions="Critique code for security flaws.",
    connector=connector
)

chat = GroupChat(
    agents=[coder, reviewer],
    router=RoundRobinRouter(max_turns=10)
)

final_state = chat.start(initial_message="Write a script to parse JWT tokens.")
```

The framework tracks the entire conversation history, manages token limits across the group, and halts when a consensus is reached (or when the token budget evaporates).

## The Bad and The Ugly

It is not all perfect. It is still a 1.0 release from Microsoft.

* **Telemetry is noisy:** By default, the OpenTelemetry integration spits out massive traces for every single LLM token chunk. You will blow out your Datadog budget in an hour if you do not aggressively filter the spans.
* **Python Typing:** The Python SDK relies heavily on Pydantic under the hood for schema generation, but the error messages when you mess up a nested type hint are completely incomprehensible.
* **State Persistence:** The default memory stores are ephemeral. If your container restarts mid-thought, the agent gets amnesia. You have to wire up Redis or Postgres manually to the `IMemoryStore` interface, and the documentation for this is currently sparse.

## Actionable Takeaways

If you are building AI features this year, stop writing raw `HttpClient` wrappers around the OpenAI REST API. You are building technical debt.

1. **Standardize on `Microsoft.Agents.AI`:** If you run a mixed .NET/Python environment, this is the only framework that treats both as first-class citizens. Stop making your C# devs use a Python sidecar just to run an agent.
2. **Adopt MCP immediately:** Stop writing custom tools for third-party SaaS apps. Find an open-source MCP server for the service and plug it straight into the framework.
3. **Local Dev with Ollama:** Use the built-in Ollama connector to run your test suites against local models (like `llama3` or `phi-4`). It will save you thousands of dollars in CI/CD API costs.
4. **Wrap the Telemetry:** Before pushing to production, configure your OpenTelemetry exporter to drop raw prompt payloads unless the log level is set to Debug. Your compliance officer will thank you.

Agent Framework 1.0 is boring, predictable, and enterprise-grade. In the AI space, "boring" is exactly what we need right now. Get to work.