Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF), Anchored by New Project Contributions Including Model Context Protocol (MCP), goose and AGENTS.md

The tech industry has a predictable reflex: when a new paradigm threatens to fragment into a million proprietary shards, we wrap it in a foundation. We've seen it with Cloud Native, we've seen it with JS frameworks, and now, predictably, we are seeing it with AI agents. The Linux Foundation just dropped the Agentic AI Foundation (AAIF).

Ordinarily, another consortium announcement is an excuse to hit archive and move on with your day. We don't need another steering committee debating bylaws while the actual code rots. But this one is different. The AAIF isn't just a corporate shell game. It's launching with three concrete, foundational primitives that might actually save us from a dystopian future of vendor-locked AI workflows: Anthropic's Model Context Protocol (MCP), Block's Goose framework, and OpenAI's brutally simple `AGENTS.md`.

Let's unpack why this matters, how it works under the hood, and why you should probably start refactoring your agent wrappers today.

## The API Wild West is Dead (Or Dying)

Up until yesterday, building an AI agent meant chaining yourself to a specific provider's ecosystem. You wrote function-calling schemas for OpenAI, rewrote them for Anthropic's tool use, and then glued it all together with a brittle LangChain script that broke every minor version bump.

It was unsustainable. Agents aren't just chatbots anymore; they are headless background processes executing code, querying databases, and touching production systems. We desperately needed a POSIX-like standard for AI context and execution.

Enter the AAIF. The founding members (Anthropic, Block, and OpenAI) aren't just tossing cash at the Linux Foundation. They are donating the actual infrastructure required to make interoperable AI agents a reality.

## Model Context Protocol (MCP): The Universal Serial Bus for AI

Anthropic's MCP is the heavy hitter of this announcement. If you haven't been paying attention, MCP is essentially USB for AI models.
It's an open standard that dictates how an AI model connects to external data sources and tools. Instead of writing custom API glue for every LLM, you write an MCP server. Any MCP-compliant client (which is rapidly becoming *all of them*) can connect to it, discover the tools, and execute them.

It uses standard JSON-RPC 2.0. It runs over `stdio` for local tools or SSE/HTTP for remote ones. It is boring, predictable, and heavily engineered for reliability.

### Building a Basic MCP Server

Let's look at what this actually means in practice. You don't need a massive SDK to expose a tool. You just need to speak the protocol.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "database-reader", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes.
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "query_db",
        description: "Run a read-only SQL query against the prod replica",
        inputSchema: {
          type: "object",
          properties: {
            sql: { type: "string" },
          },
          required: ["sql"],
        },
      },
    ],
  };
});

// Handle tool invocations.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "query_db") {
    const sql = request.params.arguments?.sql as string;
    // Execute safe query...
    return {
      content: [{ type: "text", text: `Results for: ${sql}` }],
    };
  }
  throw new Error("Tool not found");
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

You run this script. An agent spins it up as a child process, reads its stdout, and knows exactly what `query_db` does. No proprietary OpenAI or Anthropic SDKs required. The AAIF inheriting this means MCP is immune to vendor rug-pulls. It is now community infrastructure.
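Under the hood, the SDK calls above translate into plain JSON-RPC 2.0 messages on stdin/stdout. Here is a rough sketch of the wire traffic for the server above; the message shapes are simplified (consult the MCP spec for the exact schemas), but the method names `tools/list` and `tools/call` are part of the protocol:

```typescript
// What an MCP client writes to the server's stdin (one JSON object per message).
const listToolsRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
};

// Invoking the tool; the SQL string is just an illustrative argument.
const callToolRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/call",
  params: {
    name: "query_db",
    arguments: { sql: "SELECT 1" },
  },
};

// What the server from the snippet above would write back for that call.
const callToolResponse = {
  jsonrpc: "2.0" as const,
  id: 2,
  result: {
    content: [{ type: "text", text: "Results for: SELECT 1" }],
  },
};

console.log(JSON.stringify(callToolRequest));
```

The point is that there is no magic: any process that can read and write newline-delimited JSON can speak MCP.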
## Goose: The Local-First Execution Engine

While Anthropic gave us the protocol, Block (formerly Square) dropped Goose into the AAIF. Goose is an open-source, local-first AI agent framework.

Most "agent frameworks" are thinly veiled SaaS plays. They want your API keys, your data, and your compute. Goose is the antithesis of this. It runs on your metal, combining LLMs, local tools, and MCP servers into a unified execution environment. It's built for developers who want a reliable, trusted environment for agentic workflows, without piping their entire proprietary codebase through a third-party hosted orchestrator.

### Spinning up Goose

Goose doesn't abstract away the terminal; it embraces it.

```bash
# Install Goose
curl -fsSL https://github.com/block/goose/releases/latest/download/goose-linux-amd64 -o /usr/local/bin/goose
chmod +x /usr/local/bin/goose

# Initialize a local agent session with an MCP backend
goose session start --mcp-server "node ./database-reader.js" --model "local/llama-3-8b-instruct"
```

Once running, Goose acts as the orchestrator. It manages the context window, handles the tool-calling loop, and routes requests to the MCP servers. By donating this to the AAIF, Block has effectively commoditized the agent orchestrator layer. You don't need to pay a premium for execution; you just need Goose and your own compute.

## AGENTS.md: The Unsexy Fix to a Massive Problem

OpenAI's contribution is perhaps the most easily overlooked, yet arguably the most pragmatic. `AGENTS.md` is exactly what it sounds like: a text file.

We have `.gitignore` for source control. We have `Dockerfile` for containers. But until now, we haven't had a standard mechanism to tell an AI agent how to behave inside a specific repository. Agents hallucinate project conventions. They use the wrong testing framework. They ignore linting rules. They overwrite core files because they lack context.
`AGENTS.md` is a dedicated, predictable place to provide context and instructions to AI coding agents. It lives in your project root. Every compliant agent reads it before executing a single line of code.

### Anatomy of an AGENTS.md File

It's just markdown. But its standardized location means CI/CD pipelines, IDE plugins, and autonomous agents all know where to look.

```markdown
# AGENTS.md

## Identity
You are an elite TypeScript engineer working on the Stormap.ai core platform.

## Architecture Rules
- We use Next.js App Router exclusively. Do not generate `pages/` directory code.
- State management is done via Zustand. No Redux.
- All database interactions MUST go through the Prisma client located in `lib/db.ts`.

## Tool Usage
- If you need to modify the schema, use the `prisma_migrate` tool via MCP.
- Do NOT run `npm install` directly. Use `pnpm`.

## Testing
- Tests are written in Vitest. Run `pnpm test` after making logic changes.
```

When Goose (or any other agent) spins up in this directory, it parses `AGENTS.md`. It injects these rules into the system prompt. It constrains the agent's behavior to match your team's actual engineering standards. It's a dead-simple solution to the catastrophic failure mode of "AI wrote 500 lines of React using class components."

## The Old Way vs The AAIF Way

Let's look at the delta between how we built agents last month versus how we should build them today under the AAIF umbrella.

| Feature | The Wild West (Pre-AAIF) | The AAIF Standard |
| :--- | :--- | :--- |
| **Tool Integration** | Custom JSON schemas per LLM provider (OpenAI, Anthropic, Gemini). | **MCP**. One JSON-RPC standard over stdio/HTTP. Write once, run anywhere. |
| **Agent Orchestration** | Hosted SaaS platforms, LangChain spaghetti, opaque execution loops. | **Goose**. Open-source, local-first, verifiable execution loops on your own metal. |
| **Project Context** | Hacking massive prompt prefixes into your CLI tool or UI wrapper. | **AGENTS.md**. Standardized markdown file in the project root, read by all compliant clients. |
| **Governance** | Vendor whim. A deprecated endpoint breaks your entire product. | **Linux Foundation**. Open governance, neutral territory, predictable lifecycle management. |

## The Reality Check

Is this the silver bullet? Of course not.

MCP is still verbose, and writing robust servers requires heavy error handling because LLMs will inevitably pass garbage arguments to your tools. Goose is young and will face the same edge-case nightmares every local orchestrator faces when dealing with stateful, multi-step reasoning loops. And `AGENTS.md`? It relies on the assumption that the underlying model is smart enough to actually *follow* the markdown instructions, which anyone who has used an open-weight 7B model knows is a coin toss.

But it is a massive step in the right direction. It moves the battleground from "who has the best proprietary SDK" to "who has the smartest model and the fastest execution." By pushing these primitives into the Linux Foundation, Anthropic, Block, and OpenAI are admitting that the infrastructure layer needs to be free. The money isn't in the protocol; the money is in the intelligence.

## Actionable Takeaways

You don't need to wait for the AAIF to ratify a 500-page spec. The code is here today.

1. **Drop Proprietary Tool Calls:** Stop writing custom function-calling wrappers for OpenAI and Anthropic. Refactor your internal tools into MCP servers. Run them over `stdio` for your local dev tools and SSE for your internal microservices.
2. **Add `AGENTS.md` to Your Repos:** Create an `AGENTS.md` in your primary monorepo right now. Document your most painful, frequently hallucinated architectural rules.
3. **Evaluate Local Orchestration:** Pull down Goose. Point it at a local Llama 3 instance and your new MCP servers. See how much of your expensive, cloud-hosted agent workflow can be run entirely on an M-series Mac or a cheap Linux VPS.
4. **Decouple Your AI:** The entire point of the AAIF is interchangeability. If your current stack makes it hard to swap Claude for GPT-4 or a local model within 10 minutes, your architecture is flawed. Fix it using the new primitives.

The foundation has been laid. Stop fighting the API wars and start building the actual logic.
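As a concrete starting point for that last takeaway, here is a minimal sketch of what model-agnostic wiring can look like. The interface and class names are hypothetical, not from any real SDK, and real adapters would be async API calls; the sketch is synchronous for brevity. The point is that once tools live behind MCP and project rules live in `AGENTS.md`, swapping providers becomes a one-line change:

```typescript
// Hypothetical provider-agnostic interface. Names are illustrative,
// not from any real SDK; real adapters would be async.
interface ChatModel {
  complete(systemPrompt: string, userMessage: string): string;
}

// Each provider (Claude, GPT-4, a local Llama) gets a thin adapter.
// A stub stands in for a real API call here.
class StubModel implements ChatModel {
  constructor(private label: string) {}
  complete(systemPrompt: string, userMessage: string): string {
    return `[${this.label}] ${userMessage}`;
  }
}

// The agent core only ever sees the interface, so swapping providers
// is a one-line change at construction time.
function runAgent(model: ChatModel): string {
  const system = "Rules loaded from AGENTS.md would be injected here.";
  return model.complete(system, "Refactor the billing module.");
}

const output = runAgent(new StubModel("local/llama-3-8b"));
console.log(output); // [local/llama-3-8b] Refactor the billing module.
```

If swapping `StubModel` for a different adapter requires touching anything other than that one constructor call, the coupling the AAIF is trying to kill is still in your codebase.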