An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
Features • Installation • Quick Start • Architecture • Examples
Important
This project is in the alpha stage.
Flags, configuration, behavior, and design may change significantly.
- Compatible with the official SDK: faithfully mirrors the Python implementation, adk-python
- Multi-Agent Architecture: Build hierarchical agent systems with LLM, Sequential, Parallel, and Loop agents
- Multi-Provider Support: Unified interface for Google Gemini, Anthropic Claude, and more via google.golang.org/genai
- Extensible Tools: Rich ecosystem of tools with automatic function calling and authentication
- Memory Systems: Long-term knowledge storage and retrieval with vector-based search
- Secure Code Execution: Multiple backends (built-in, container, local) with resource limits
- Streaming First: Real-time event streaming with Go 1.23+ iterators
- Session Management: Stateful conversation tracking with three-tier state management
- Smart Planning: Strategic planning with built-in and ReAct planners
- Authentication: Multi-scheme auth support (OAuth2, API Key, Basic, Bearer)
- Live Mode: Video/audio-based conversations for supported models
- Go 1.24 or higher
- API keys for your chosen LLM providers
go mod init your-project
go get github.com/go-a2a/adk-go
# For Google Gemini
export GEMINI_API_KEY="your-gemini-api-key"
# For Anthropic Claude
export ANTHROPIC_API_KEY="your-anthropic-api-key"
# Optional: For Google Cloud AI Platform
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
package main
import (
"context"
"fmt"
"log"
"github.com/go-a2a/adk-go/agent"
"github.com/go-a2a/adk-go/model"
"github.com/go-a2a/adk-go/session"
"github.com/go-a2a/adk-go/types"
)
func main() {
ctx := context.Background()
// Create a model
m, err := model.NewGoogleModel("gemini-2.0-flash-exp")
if err != nil {
log.Fatal(err)
}
defer m.Close()
// Create an LLM agent
llmAgent := agent.NewLLMAgent(ctx, "assistant",
agent.WithModel(m),
agent.WithInstruction("You are a helpful AI assistant."),
)
// Create session
sessionService := session.NewInMemoryService()
sess, _ := sessionService.CreateSession(ctx, "myapp", "user123", "session456", nil)
// Create invocation context
ictx := types.NewInvocationContext(sess, sessionService, nil, nil)
// Run the agent
for event, err := range llmAgent.Run(ctx, ictx) {
if err != nil {
log.Printf("Error: %v", err)
continue
}
// Handle events
if event.Message != nil {
fmt.Println("Agent:", event.Message.Text)
}
}
}
package main
import (
"context"
"fmt"
"math/rand"
"github.com/go-a2a/adk-go/agent"
"github.com/go-a2a/adk-go/model"
"github.com/go-a2a/adk-go/tool/tools"
"github.com/go-a2a/adk-go/types"
)
// Simple dice rolling function
func rollDice(ctx context.Context, sides int) (int, error) {
if sides <= 0 {
return 0, fmt.Errorf("dice must have at least 1 side")
}
return rand.Intn(sides) + 1, nil
}
func main() {
ctx := context.Background()
// Create model
m, _ := model.NewGoogleModel("gemini-2.0-flash-exp")
defer m.Close()
// Create function tool
diceTool := tools.NewFunctionTool("roll_dice", rollDice,
tools.WithDescription("Roll a dice with specified number of sides"),
tools.WithParameterDescription("sides", "Number of sides on the dice"),
)
// Create agent with tools
gameMaster := agent.NewLLMAgent(ctx, "game_master",
agent.WithModel(m),
agent.WithInstruction("You are a game master. Help users with dice rolls and games."),
agent.WithTools(diceTool),
)
_ = gameMaster // ... run the agent as in the Quick Start example
}
package main
import (
"context"
"github.com/go-a2a/adk-go/agent"
"github.com/go-a2a/adk-go/model"
)
func main() {
ctx := context.Background()
m, _ := model.NewGoogleModel("gemini-2.0-flash-exp")
defer m.Close()
// Create specialized agents
researcher := agent.NewLLMAgent(ctx, "researcher",
agent.WithModel(m),
agent.WithInstruction("You are a research specialist. Gather and analyze information."),
)
writer := agent.NewLLMAgent(ctx, "writer",
agent.WithModel(m),
agent.WithInstruction("You are a content writer. Create compelling content based on research."),
)
reviewer := agent.NewLLMAgent(ctx, "reviewer",
agent.WithModel(m),
agent.WithInstruction("You are a content reviewer. Ensure quality and accuracy."),
)
// Create sequential workflow
workflow := agent.NewSequentialAgent("content_pipeline",
agent.WithSubAgents(researcher, writer, reviewer),
agent.WithDescription("Complete content creation pipeline"),
)
_ = workflow // ... run the workflow as in the Quick Start example
}
ADK Go follows a hierarchical, event-driven architecture with strong type safety and extensibility:
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│      Agent      │    │      Model      │    │      Tools      │
│     System      │───▶│      Layer      │───▶│    Ecosystem    │
│                 │    │                 │    │                 │
├─────────────────┤    ├─────────────────┤    ├─────────────────┤
│ • LLMAgent      │    │ • Google Gemini │    │ • Function Tools│
│ • Sequential    │    │ • Anthropic     │    │ • Agent Tools   │
│ • Parallel      │    │ • Multi-provider│    │ • Auth Tools    │
│ • Loop          │    │ • Streaming     │    │ • Toolsets      │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│      Flow       │    │      Event      │    │     Session     │
│   Management    │    │     System      │    │   Management    │
│                 │    │                 │    │                 │
├─────────────────┤    ├─────────────────┤    ├─────────────────┤
│ • LLMFlow       │    │ • Streaming     │    │ • State Mgmt    │
│ • AutoFlow      │    │ • Real-time     │    │ • Memory        │
│ • SingleFlow    │    │ • Event Actions │    │ • Persistence   │
│ • Processors    │    │ • Deltas        │    │ • Three-tier    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
- LLMAgent: Full-featured agents powered by language models with tools, instructions, callbacks, planners, and code execution
- SequentialAgent: Executes sub-agents one after another; supports live mode with taskCompleted() flow control
- ParallelAgent: Runs sub-agents concurrently in isolated branches, merges event streams
- LoopAgent: Repeatedly executes sub-agents until escalation or max iterations

- Event-Driven Streaming: All operations use iter.Seq2[*Event, error] for real-time processing
- Hierarchical Composition: Agents form trees with parent/child relationships
- Interface-Driven Design: Core abstractions in types/ enable extensibility
- Functional Options: Configuration via WithXxx() functions
- Context Propagation: Rich context flows through all operations
- Type Safety with Flexibility: Strong typing while supporting dynamic LLM interactions
- Resource Management: Proper cleanup with Close() methods throughout
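The functional-options convention above can be sketched in plain Go. The `Config`, `Option`, and `WithInstruction` names below are illustrative stand-ins, not the real adk-go types:

```go
package main

import "fmt"

// Config holds settings assembled by options (illustrative only).
type Config struct {
	Name        string
	Instruction string
}

// Option mutates a Config, mirroring adk-go's WithXxx() convention.
type Option func(*Config)

// WithInstruction sets the agent's system instruction.
func WithInstruction(s string) Option {
	return func(c *Config) { c.Instruction = s }
}

// NewConfig applies options in order over sensible defaults.
func NewConfig(name string, opts ...Option) *Config {
	c := &Config{Name: name, Instruction: "default"}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := NewConfig("assistant", WithInstruction("You are helpful."))
	fmt.Println(c.Name, c.Instruction)
}
```

Because each option is just a function, callers can pass any subset in any order, and new options can be added without breaking existing call sites.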
// Create different agent types
llmAgent := agent.NewLLMAgent(ctx, "assistant", ...)
seqAgent := agent.NewSequentialAgent("workflow", ...)
parAgent := agent.NewParallelAgent("concurrent", ...)
loopAgent := agent.NewLoopAgent("repeater", ...)
// Multi-provider support
gemini, _ := model.NewGoogleModel("gemini-2.0-flash-exp")
claude, _ := model.NewAnthropicModel("claude-3-5-sonnet-20241022")
// Registry pattern
model.RegisterModel("custom-model", customModelFactory)
m, _ := model.GetModel("custom-model")
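The registry pattern typically boils down to a mutex-guarded map from name to factory. The sketch below is a hypothetical illustration of that shape, not the actual `model` package code; a string stands in for a real model value:

```go
package main

import (
	"fmt"
	"sync"
)

// factory constructs a model; string stands in for a real model type.
type factory func() string

// registry is a map-plus-RWMutex sketch of a RegisterModel/GetModel pair.
type registry struct {
	mu        sync.RWMutex
	factories map[string]factory
}

func newRegistry() *registry {
	return &registry{factories: make(map[string]factory)}
}

// Register associates a name with a factory, overwriting any previous entry.
func (r *registry) Register(name string, f factory) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.factories[name] = f
}

// Get builds a fresh instance from the named factory.
func (r *registry) Get(name string) (string, error) {
	r.mu.RLock()
	f, ok := r.factories[name]
	r.mu.RUnlock()
	if !ok {
		return "", fmt.Errorf("model %q not registered", name)
	}
	return f(), nil
}

func main() {
	r := newRegistry()
	r.Register("custom-model", func() string { return "custom-model-instance" })
	m, err := r.Get("custom-model")
	fmt.Println(m, err)
}
```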
// Function tools with automatic declaration generation
tool := tools.NewFunctionTool("my_function", myFunc,
tools.WithDescription("Description of the function"),
tools.WithParameterDescription("param", "Parameter description"),
)
// Custom tools
type CustomTool struct {
*tool.Tool
}
func (t *CustomTool) Run(ctx context.Context, args map[string]any, toolCtx *types.ToolContext) (any, error) {
// Tool implementation goes here; return the tool's result
var result any
return result, nil
}
// In-memory storage
memService := memory.NewInMemoryService()
// Vertex AI RAG (future)
ragService := memory.NewVertexAIRAGService(projectID, location)
// Store and retrieve memories
memService.AddSession(ctx, sessionID, "Important information")
memories, _ := memService.SearchMemories(ctx, "search query")
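To make the store/search flow above concrete, here is a deliberately naive, self-contained sketch. The `memoryStore` type and its methods are invented for illustration; the real in-memory service has its own API, and vector-based search ranks by embedding similarity rather than the substring match used here:

```go
package main

import (
	"fmt"
	"strings"
)

// memoryStore is an illustrative stand-in for an in-memory memory service.
type memoryStore struct {
	entries map[string][]string // sessionID -> stored texts
}

func newMemoryStore() *memoryStore {
	return &memoryStore{entries: make(map[string][]string)}
}

// Add records a piece of text against a session.
func (s *memoryStore) Add(sessionID, text string) {
	s.entries[sessionID] = append(s.entries[sessionID], text)
}

// Search returns every stored text containing the query, case-insensitively.
func (s *memoryStore) Search(query string) []string {
	var hits []string
	q := strings.ToLower(query)
	for _, texts := range s.entries {
		for _, t := range texts {
			if strings.Contains(strings.ToLower(t), q) {
				hits = append(hits, t)
			}
		}
	}
	return hits
}

func main() {
	s := newMemoryStore()
	s.Add("session456", "Important information about dice games")
	fmt.Println(s.Search("dice"))
}
```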
// Three-tier state management
state := map[string]any{
"app:theme": "dark", // Application-wide
"user:preference": "verbose", // User-specific
"temp:calculation": 42, // Session-temporary
}
sessionService := session.NewInMemoryService()
sess, _ := sessionService.CreateSession(ctx, appName, userID, sessionID, state)
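The three tiers are distinguished purely by key prefix. A hypothetical helper (not part of the adk-go API) makes the convention explicit, treating unprefixed keys as plain session state:

```go
package main

import (
	"fmt"
	"strings"
)

// scopeOf classifies a state key by its prefix, following the
// app:/user:/temp: convention; unprefixed keys are session-scoped.
func scopeOf(key string) string {
	switch {
	case strings.HasPrefix(key, "app:"):
		return "app"
	case strings.HasPrefix(key, "user:"):
		return "user"
	case strings.HasPrefix(key, "temp:"):
		return "temp"
	default:
		return "session"
	}
}

func main() {
	for _, k := range []string{"app:theme", "user:preference", "temp:calculation", "topic"} {
		fmt.Printf("%s -> %s\n", k, scopeOf(k))
	}
}
```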
codeAgent := agent.NewLLMAgent(ctx, "coder",
agent.WithModel(m),
agent.WithInstruction("You are a coding assistant. Write and execute code to solve problems."),
agent.WithCodeExecutor(codeexecutor.NewBuiltInExecutor()), // Use model's native execution
)
type APITool struct {
*tool.Tool
}
func (t *APITool) Run(ctx context.Context, args map[string]any, toolCtx *types.ToolContext) (any, error) {
// Request API key if not available
if !toolCtx.HasCredential("api_key") {
toolCtx.RequestCredential("api_key", &types.AuthConfig{
Type: types.AuthTypeAPIKey,
Description: "API key for external service",
})
return "Please provide your API key", nil
}
// Use the API key
apiKey := toolCtx.GetCredential("api_key")
_ = apiKey // ... make the API call with the key, then return its result
return nil, nil
}
for event, err := range agent.Run(ctx, ictx) {
if err != nil {
if errors.Is(err, context.Canceled) {
log.Println("Operation canceled")
break
}
log.Printf("Error: %v", err)
continue
}
// Process different event types
switch {
case event.Message != nil:
fmt.Printf("Message: %s\n", event.Message.Text)
case event.ToolCall != nil:
fmt.Printf("Tool call: %s\n", event.ToolCall.Name)
case event.Actions != nil && event.Actions.StateDelta != nil:
fmt.Printf("State update: %+v\n", event.Actions.StateDelta)
}
}
Run tests with API keys:
# Set API keys
export GEMINI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
# Run all tests
go test ./...
# Run specific tests
go test ./agent -run TestLLMAgent
# With coverage
go test -cover ./...
# Build
go build ./...
# Lint
go vet ./...
# Format
gofmt -w .
- Agents: Core agent interfaces and implementations (agent/)
- Flow: Request/response processing pipelines (flow/)
- Memory: Long-term storage and retrieval systems (memory/)
- Models: LLM provider integrations and abstractions (model/)
- Session: Conversation and state management (session/)
- Tools: Extensible tool system with function declarations (tool/)
- Types: Core interfaces and type definitions (types/)
adk-go/
├── agent/        # Agent implementations (LLM, Sequential, Parallel, Loop)
├── artifact/     # Artifact storage services (GCS, in-memory)
├── codeexecutor/ # Code execution backends (built-in, container, local)
├── example/      # Example implementations and utilities
├── flow/         # LLM processing pipelines and flows
├── memory/       # Memory storage systems (in-memory, Vertex AI RAG)
├── model/        # LLM provider integrations (Gemini, Claude, registry)
├── planner/      # Strategic planning components (built-in, ReAct)
├── session/      # Session management and state tracking
├── tool/         # Tool framework and implementations
├── types/        # Core interfaces and type definitions
└── internal/     # Internal utilities (pool, iterators, maps)
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run tests and linting
- Submit a pull request
- Use Go 1.24+ features including generics
- Follow standard Go conventions
- Use any instead of interface{}
- Include copyright headers in all files:
  // Copyright 2025 The Go A2A Authors
  // SPDX-License-Identifier: Apache-2.0
- Write comprehensive tests with github.com/google/go-cmp
- Issues: GitHub Issues
- Discussions: GitHub Discussions
This is a Go implementation of the Agent Development Kit (ADK), a toolkit for building, evaluating, and deploying sophisticated AI agents.
adk-go follows the same architectural principles as the Python implementation while leveraging Go's strengths: type safety, performance, and concurrency.
- Inspired by the Agent Development Kit for Python
- Built on top of google.golang.org/genai for unified LLM integration
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.