Empower your applications with Agent-to-Agent (A2A) protocol support.
A production-ready agent server built with LangGraph that implements the Agent-to-Agent (A2A) protocol. Perfect for building advanced, real-time AI workflows and integrating AI capabilities into your applications.
- 🔄 A2A Protocol Support: Full implementation of the Agent-to-Agent protocol.
- 🧠 LangGraph-powered Agent: Flexible agent architecture built on LangGraph's React pattern.
- 🔍 Extensible Tools: Easy integration of custom tools like search, database queries, and more.
- ⚡ Real-Time Streaming: Stream responses for live updates and interactive flows.
- 🗄️ Database-backed State Management: Maintain conversation state across sessions.
- 🌐 FastAPI HTTP Server: Robust HTTP server with proper A2A protocol endpoints.
1. **Clone the repo**

   ```bash
   git clone https://github.com/llmx-de/a2a_template_langgraph.git
   cd a2a_template_langgraph
   ```

2. **Install dependencies**

   ```bash
   pip install -e .
   # or with uv
   uv pip install -e .
   ```

3. **Set up PostgreSQL**

   Use the included Docker Compose file to start a PostgreSQL instance:

   ```bash
   docker-compose up -d postgres
   ```

4. **Configure**

   Create a `.env` file:

   ```
   OPENAI_API_KEY=your_openai_api_key
   HOST=0.0.0.0          # Optional, defaults to 0.0.0.0
   PORT=10000            # Optional, defaults to 10000
   OPENAI_MODEL=o4-mini  # Optional, defaults to o4-mini
   ```

5. **Run the server**

   ```bash
   python main.py
   ```
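As an illustration of the optional settings in the `.env` file above, here is a minimal sketch of reading them with the documented defaults. The actual loading code lives in the project (and may use python-dotenv or Pydantic settings instead); only the variable names and defaults are taken from this README.

```python
import os

# Defaults mirror the ones documented in the .env example above.
# The real server may load these differently; this is only a sketch.
HOST = os.environ.get("HOST", "0.0.0.0")
PORT = int(os.environ.get("PORT", "10000"))
OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "o4-mini")

print(f"Serving on {HOST}:{PORT} with model {OPENAI_MODEL}")
```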
1. **Clone the repo**

   ```bash
   git clone https://github.com/llmx-de/a2a_template_langgraph.git
   cd a2a_template_langgraph
   ```

2. **Configure**

   Create a `.env` file:

   ```
   OPENAI_API_KEY=your_openai_api_key
   OPENAI_MODEL=o4-mini  # Optional, defaults to o4-mini
   ```

3. **Build and run with Docker Compose**

   ```bash
   docker-compose up -d
   ```
This will:
- Start a PostgreSQL database
- Run database migrations automatically
- Start the A2A agent server
Your agent server will be live on http://localhost:10000.
- `GET /` or `GET /.well-known/agent.json`: returns the agent card information following the A2A protocol.
- `POST /`: sends a one-off task to the agent.
- `POST /send_task_subscribe`: streams a task to receive real-time responses.
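For illustration, a one-off task sent to `POST /` could be built like this. The payload shape below is an assumption based on typical A2A message structures; the authoritative models live in `a2a_service/types.py`, so check there before relying on these field names.

```python
import json
import uuid

# Illustrative A2A-style task payload; the field names here are
# assumptions -- see a2a_service/types.py for the real models.
payload = {
    "id": str(uuid.uuid4()),
    "message": {
        "role": "user",
        "parts": [{"type": "text", "text": "What is the A2A protocol?"}],
    },
}

body = json.dumps(payload)
# To actually send it (requires the server from the Quick Start running):
#   requests.post("http://localhost:10000/", data=body,
#                 headers={"Content-Type": "application/json"})
print(body)
```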
```
.
├── main.py                                # Entry point
├── a2a_service/                           # Core service package
│   ├── agent.py                           # LangGraph agent implementation
│   ├── server.py                          # A2A HTTP server
│   ├── database.py                        # Database connection setup
│   ├── types.py                           # Data types and models for the A2A protocol
│   ├── models/                            # Database models
│   │   └── db_models.py                   # SQLAlchemy database models
│   ├── task_managers/                     # Task management modules
│   │   ├── __init__.py                    # Base task manager interface
│   │   ├── db_task_manager.py             # DB-backed task manager
│   │   └── async_inmem_task_manager.py    # In-memory task manager
│   └── tools/                             # Agent tools
│       └── search.py                      # Web search tool
├── alembic/                               # Database migration scripts
├── alembic.ini                            # Alembic configuration
├── pyproject.toml                         # Project configuration & dependencies
├── docker-compose.yaml                    # Docker setup for PostgreSQL
└── README.md                              # This file
```
- Python 3.13+
- LangGraph for agent orchestration
- LangChain for LLM components
- FastAPI & Uvicorn for HTTP server
- PostgreSQL via SQLAlchemy & Alembic migrations
- OpenAI for LLM capabilities
- Pydantic for data validation and serialization
- Docker & Docker Compose for containerization and deployment
This project includes full Docker support for development and deployment:
- Dockerfile: Builds the A2A LangGraph Agent application
- docker-compose.yaml: Orchestrates the application with PostgreSQL and automatic migrations
- Automatic Database Migration: Runs Alembic migrations on startup
- Smart Entrypoint Script: Handles database waiting, migrations, and application startup
The Docker setup provides:
- Automatic database initialization and migrations
- Proper dependency ordering (database → migrations → application)
- Health checks to ensure services are ready before dependent services start
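The dependency-ordering and health-check pattern typically looks like the fragment below. This is a sketch, not the project's exact file; see the repository's `docker-compose.yaml` for the real configuration.

```yaml
# Sketch of the pattern only -- the real config is in docker-compose.yaml.
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  app:
    build: .
    depends_on:
      postgres:
        condition: service_healthy   # wait until the DB passes its health check
```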
Note: While the project recommends UV for local development, the Docker build uses pip for compatibility and reliability in containerized environments.
To add custom tools:
1. Create a new Python file in the `a2a_service/tools/` directory
2. Define your tool using the `@tool` decorator from LangChain
3. Import and add your tool to the agent's tools list in `main.py`
Example:

```python
# a2a_service/tools/my_tool.py
from langchain_core.tools import tool


@tool
def my_custom_tool(param: str) -> str:
    """Description of what this tool does.

    Args:
        param: Description of the parameter

    Returns:
        Description of the return value
    """
    # Your implementation here
    return f"Processed: {param}"
```
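The registration step above boils down to passing your callable in the list the agent is constructed with; internally the agent resolves tool calls by name. A plain-Python sketch of that name-to-callable dispatch (the real project uses LangChain `@tool` objects rather than bare functions):

```python
# Plain-Python sketch of tool registration and dispatch; in the real
# project the agent receives LangChain @tool objects via main.py.
def my_custom_tool(param: str) -> str:
    return f"Processed: {param}"

# The agent is built with a list of tools and resolves calls by name.
tools = [my_custom_tool]
registry = {fn.__name__: fn for fn in tools}

result = registry["my_custom_tool"]("hello")
print(result)  # Processed: hello
```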
Feel free to contribute or report issues!