The Server SDK is a Python library that enhances your existing AI agents with platform capabilities. Whether you've built your agent with LangGraph, CrewAI, or custom logic, the SDK connects it to the Agent Stack platform, giving you instant access to runtime-configurable services, interactive UI components, and deployment infrastructure. Built on top of the Agent2Agent Protocol (A2A), the SDK wraps your agent implementation and adds functionality through A2A extensions. This lets your agent use platform services such as LLM providers, file storage, vector databases, and rich UI components, all without rewriting your core agent logic.

Documentation Index
Fetch the complete documentation index at: https://ibm-8c0b5b62-gpt-researcher-integration.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
What the SDK Provides
The Server SDK offers several key capabilities:
- Server wrapper: Simplified server creation and agent registration
- Extension system: Dependency injection for services (LLM, embeddings, file storage) and UI components (forms, citations, trajectory)
- Convenience wrappers: Simplified message types like AgentMessage that reduce boilerplate
- Context management: Built-in conversation history and state management
- Async generator pattern: Natural task-based execution with pause/resume capabilities
Core Concepts
Server and Agent Registration
The SDK uses a server-based architecture where you create a Server instance and register your agent function:
Asynchronous Generator Pattern
Agent functions are asynchronous generators that yield responses. This pattern aligns naturally with A2A's task model:
- One function execution = One A2A task
- Yielding data = Sending messages to the client
- Pausing execution = Waiting for user input

This lets your agent:
- Stream responses incrementally
- Yield multiple messages during a single task
- Handle long-running operations gracefully
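The pattern above can be shown with plain Python, independent of any SDK. One run of the generator corresponds to one task, and each yield is one message streamed to the client:

```python
import asyncio
from typing import AsyncGenerator

# Runnable illustration of the async-generator agent pattern:
# one execution of the function = one task; each yield = one message.
async def agent(question: str) -> AsyncGenerator[str, None]:
    yield "Thinking..."                 # intermediate status update
    await asyncio.sleep(0)              # stand-in for real async work
    yield f"Answer to: {question}"      # final message


async def main() -> list[str]:
    # The client consumes messages incrementally as they are yielded.
    return [chunk async for chunk in agent("What is A2A?")]


messages = asyncio.run(main())
print(messages)  # ['Thinking...', 'Answer to: What is A2A?']
```

Because the generator suspends at each yield, long-running work can emit progress updates instead of blocking until a single final response.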
Extension System
Agent Stack uses A2A extensions to extend the protocol with Agent Stack-specific capabilities. They enable your agent to access platform services and enhance the user interface beyond what the base A2A protocol provides. There are two types of extensions:

Dependency Injection Service Extensions
Service extensions use a dependency injection pattern where each run of the agent declares a demand that must be fulfilled by the client (consumer). The platform provides configured access to external services based on these demands:
- LLM Service: Language model access with automatic provider selection
- Embedding Service: Text embedding generation for RAG
- Platform API: File storage, vector databases, and platform services
- MCP: Model Context Protocol integration
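The demand/fulfillment idea behind these extensions can be sketched generically. The class and function names below are invented for illustration, not the SDK's actual API:

```python
# Illustrative sketch of the dependency-injection pattern behind service
# extensions: the agent declares a demand, and the runtime (the client
# consuming the agent) fulfills it with a configured service instance.
# All names here are hypothetical, not the SDK's real API.
class LLMDemand:
    """Marker declaring 'this agent run needs an LLM service.'"""


def fulfill(demand: object, registry: dict) -> object:
    """Resolve a declared demand to the service the consumer configured."""
    return registry[type(demand)]


# The consumer decides which concrete provider satisfies each demand.
registry = {LLMDemand: lambda prompt: f"[model output for: {prompt}]"}

llm = fulfill(LLMDemand(), registry)
print(llm("Summarize the report"))
```

The key property is inversion of control: the agent states what it needs, and provider selection and configuration stay on the platform side.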
UI Extensions
UI extensions add extra metadata to messages, enabling the Agent Stack UI to render more advanced interactive components:
- Forms: Collect structured user input through interactive forms
- Citations: Display source references with clickable inline links
- Trajectory: Visualize agent reasoning steps with execution traces
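Conceptually, a UI extension rides along with a message as structured metadata that the UI knows how to render. The field names in this sketch are invented for illustration and do not reflect the SDK's real message schema:

```python
# Conceptual sketch only: a message carrying citation metadata for the UI.
# Field names are hypothetical, not the SDK's actual schema.
message = {
    "text": "According to the report, revenue grew 12%.",
    "metadata": {
        "citations": [
            {
                "url": "https://example.com/report",  # placeholder source
                "start": 0,   # character range the citation covers
                "end": 23,
            }
        ]
    },
}

print(message["metadata"]["citations"][0]["url"])
```

A client that does not understand the extension can still display the plain text, which is what makes extensions backward-compatible with the base protocol.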