Enabling AI agents to discover, communicate, and collaborate across frameworks
The Agent2Agent (A2A) protocol is an open standard developed by Google that enables AI agents to communicate, collaborate, and delegate tasks to each other regardless of the frameworks, models, or vendors that power them. While the Model Context Protocol (MCP) addresses the challenge of connecting a single model to tools, A2A tackles the more complex problem of orchestrating multiple autonomous agents that need to work together.
The protocol was born from the recognition that the future of AI is not a single omniscient model but rather an ecosystem of specialized agents. An enterprise might have a research agent powered by one model, a coding agent powered by another, a data analysis agent using yet another, and a project management agent tied to internal systems. A2A provides the communication layer that lets these diverse agents discover each other, negotiate capabilities, delegate tasks, and exchange results.
A2A introduces a set of core concepts that map to how human organizations collaborate. Agent Cards serve as digital business cards -- JSON documents that describe an agent's identity, capabilities, skills, and endpoint information. Any agent can publish an Agent Card, and any other agent can discover and read it to understand what that agent can do. This discovery mechanism is essential for building dynamic multi-agent systems where agents come and go.
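To make the idea concrete, here is a minimal sketch of an Agent Card as a Python dict, along with a helper for checking advertised skills. The field names (`name`, `url`, `capabilities`, `skills`) are illustrative assumptions; consult the A2A specification for the normative JSON schema.

```python
# A hypothetical Agent Card -- field names are assumptions for illustration,
# not the normative A2A schema.
agent_card = {
    "name": "research-agent",
    "description": "Searches and summarizes technical literature.",
    "url": "https://agents.example.com/research",  # task endpoint (hypothetical)
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {"id": "lit-search", "description": "Find and rank relevant papers"},
    ],
}

def advertises_skill(card: dict, skill_id: str) -> bool:
    """Check whether an Agent Card lists a given skill in its skills array."""
    return any(s.get("id") == skill_id for s in card.get("skills", []))
```

A delegating agent would read a card like this during discovery and use a check like `advertises_skill` to decide whether the remote agent is a suitable target for a task.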
Tasks are the fundamental unit of work in A2A. When one agent needs something done, it creates a Task and sends it to a capable agent. Tasks have a lifecycle with defined states: submitted, working, input-needed, and completed. This lifecycle supports both simple request-response interactions (ask a question, get an answer) and complex collaborative workflows (multi-turn exchanges where the working agent requests additional information from the delegating agent).
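The lifecycle described above can be modeled as a small state machine. This is an illustrative model using the state names from the text, with a simple transition table assumed for the sketch; the spec defines the authoritative states and transitions.

```python
from enum import Enum

class TaskState(str, Enum):
    # Lifecycle states named in the text; exact wire values may differ.
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_NEEDED = "input-needed"
    COMPLETED = "completed"

# Assumed transitions: a linear flow with an optional input-needed detour,
# which captures both request-response and multi-turn collaboration.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING},
    TaskState.WORKING: {TaskState.INPUT_NEEDED, TaskState.COMPLETED},
    TaskState.INPUT_NEEDED: {TaskState.WORKING},
    TaskState.COMPLETED: set(),
}

def can_transition(frm: TaskState, to: TaskState) -> bool:
    """Return True if the lifecycle permits moving from `frm` to `to`."""
    return to in TRANSITIONS[frm]
```

The input-needed detour is what distinguishes A2A tasks from plain RPC calls: the working agent can pause and ask the delegating agent for more information before resuming.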
The protocol supports multiple communication patterns to accommodate different use cases. Synchronous request-response handles simple queries. Server-Sent Events enable real-time streaming of progress updates for long-running tasks. Push notifications allow agents to asynchronously alert others about state changes, enabling fire-and-forget delegation patterns. This flexibility means A2A can support everything from sub-second lookups to multi-day collaborative projects.
A2A is built on standard web technologies, making it accessible and easy to implement. The protocol defines HTTP-based endpoints for agent communication, with JSON as the data format. Agent Cards are served as JSON documents at a well-known URL, similar to how robots.txt or .well-known paths work on the web.
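Discovery can therefore be as simple as an HTTP GET against a conventional path. A sketch, assuming the card is served at `/.well-known/agent.json` (treat the exact path as an assumption and check the spec):

```python
import json
from urllib.parse import urljoin
from urllib.request import urlopen

# Assumed discovery path; the A2A specification defines the normative one.
WELL_KNOWN_PATH = "/.well-known/agent.json"

def agent_card_url(base_url: str) -> str:
    """Derive the discovery URL for an agent's card from its base URL."""
    return urljoin(base_url, WELL_KNOWN_PATH)

def fetch_agent_card(base_url: str) -> dict:
    """Fetch and parse the Agent Card (performs a network call)."""
    with urlopen(agent_card_url(base_url)) as resp:
        return json.loads(resp.read())
```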
The core architecture centers on the Task abstraction. A client agent sends a task request to a server agent's endpoint. The server agent processes the request and manages the task through its lifecycle. During processing, the server can send Messages (intermediate communications) and produce Artifacts (final outputs like documents, code, or data). The client can monitor task progress, provide additional input when requested, and retrieve results.
Communication follows a message-based pattern. Each Message contains one or more Parts, which can be text, structured data, or file references. This flexible content model allows agents to exchange rich, multimodal information. Artifacts similarly support multiple content types, enabling agents to produce diverse outputs.
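A sketch of that content model in code, with helper constructors for two Part types and a Message wrapper. The dict shapes are illustrative assumptions, not the normative schema:

```python
def text_part(text: str) -> dict:
    """A text Part (field names are illustrative, not the normative schema)."""
    return {"type": "text", "text": text}

def file_part(uri: str, mime_type: str) -> dict:
    """A file-reference Part pointing at external content."""
    return {"type": "file", "file": {"uri": uri, "mimeType": mime_type}}

def make_message(role: str, *parts: dict) -> dict:
    """A Message bundling one or more Parts from a given role."""
    return {"role": role, "parts": list(parts)}
```

Because a single Message can mix text, structured data, and file references, an agent can, for example, return a prose summary and the report it describes in one exchange.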
For real-time interactions, A2A defines a streaming endpoint using Server-Sent Events. The client initiates a task and receives a stream of events as the server agent works -- status updates, intermediate messages, and final artifacts all flow through the same stream. For asynchronous workflows, push notifications allow the server to notify the client when task states change, enabling decoupled operation where neither agent needs to maintain a persistent connection.
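On the wire, Server-Sent Events are newline-delimited `event:`/`data:` fields separated by blank lines. A minimal parser sketch (a simplified reading of the SSE format, sufficient for illustration):

```python
def parse_sse(stream_text: str) -> list[tuple[str, str]]:
    """Parse raw Server-Sent Events text into (event_type, data) pairs.

    Simplified: handles only `event:` and `data:` fields; real SSE also
    defines `id:`, `retry:`, and comment lines.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events
```

A client consuming an A2A task stream would dispatch on the event type: status updates advance a progress view, message events surface intermediate communications, and artifact events carry final outputs.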
The protocol intentionally does not dictate agent internals. An A2A-compatible agent can be powered by any model (Claude, GPT, Gemini, Llama, or custom models), use any framework (LangGraph, CrewAI, custom code), and connect to any tools (including via MCP). A2A only governs the external communication between agents, preserving implementation freedom.
The A2A ecosystem has grown rapidly since the protocol's release in early 2025. Google has provided reference implementations and SDKs, with community adoption growing across agent framework providers and enterprise development teams.
A2A is designed to complement the existing agent ecosystem rather than replace it. Frameworks like LangGraph, CrewAI, and AutoGen can implement A2A endpoints to enable their agents to communicate with agents built on other frameworks. This interoperability is A2A's primary value proposition -- breaking down the silos between different agent implementations.
The relationship between A2A and MCP is complementary and well-defined. MCP connects individual models to their tools (vertical integration), while A2A connects autonomous agents to each other (horizontal integration). An agent might use MCP internally to access databases and APIs, while using A2A externally to collaborate with other agents. Together, they form a comprehensive protocol stack for the agentic AI ecosystem.
To get started with A2A, begin by understanding the core concepts: Agent Cards, Tasks, Messages, and Artifacts. The A2A specification provides detailed documentation of each concept and its JSON schema.
To build an A2A server agent, implement the required HTTP endpoints: the Agent Card endpoint (for discovery), the task creation endpoint, and optionally the streaming and push notification endpoints. Use the reference implementation as a starting point and customize it with your agent's specific capabilities and logic.
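The shape of such a server can be sketched with the standard library alone. This toy "echo agent" serves a card at an assumed `/.well-known/agent.json` path and completes every task immediately at a hypothetical `/tasks` endpoint; the real protocol defines richer request and response schemas, so treat this purely as scaffolding.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Illustrative Agent Card; field names are assumptions, not the normative schema.
AGENT_CARD = {
    "name": "echo-agent",
    "description": "Completes any task immediately (toy example).",
    "capabilities": {"streaming": False, "pushNotifications": False},
}

class A2AHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Discovery endpoint: serve the Agent Card as JSON.
        if self.path == "/.well-known/agent.json":
            self._send_json(AGENT_CARD)
        else:
            self.send_error(404)

    def do_POST(self):
        # Task creation endpoint (hypothetical path): mark every task completed.
        if self.path == "/tasks":
            length = int(self.headers.get("Content-Length", "0"))
            task = json.loads(self.rfile.read(length) or b"{}")
            self._send_json({"id": task.get("id", "task-1"), "state": "completed"})
        else:
            self.send_error(404)

    def _send_json(self, obj):
        body = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging

def serve(port: int) -> HTTPServer:
    """Start the server on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), A2AHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_json(url: str) -> dict:
    """Small helper to GET and parse a JSON resource."""
    with urlopen(url) as resp:
        return json.loads(resp.read())
```

A real implementation would replace the instant "completed" response with actual task processing and lifecycle management, and add the streaming and push notification endpoints where needed.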
To build an A2A client agent that can discover and communicate with other agents, implement the client-side protocol: fetching and parsing Agent Cards, sending task requests, handling responses and streaming events, and managing task lifecycles.
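The client side reduces to two steps: fetch the card, then post a task. A sketch with the HTTP fetch made injectable so the flow can be exercised without a live agent; the payload shape is an illustrative assumption, not the normative schema.

```python
import json
from urllib.request import urlopen

def discover(base_url: str, fetch=None) -> dict:
    """Fetch and parse an agent's card.

    `fetch` maps a URL to a JSON string; it defaults to a real HTTP GET
    but can be replaced with a stub for offline use.
    """
    url = base_url.rstrip("/") + "/.well-known/agent.json"  # assumed path
    if fetch is None:
        fetch = lambda u: urlopen(u).read()
    return json.loads(fetch(url))

def build_task_request(task_id: str, text: str) -> dict:
    """Assemble a task-creation payload (illustrative shape, not normative)."""
    return {
        "id": task_id,
        "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
    }
```

From here, a full client would POST the payload to the endpoint advertised in the card, then poll or stream the task's lifecycle until it reaches a terminal state.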
The A2A specification and reference implementations are available on GitHub, with examples demonstrating common interaction patterns and integration with popular agent frameworks.