Agent Stack is open, self-hostable infrastructure for deploying AI agents built with any framework. Hosted by the Linux Foundation and built on the Agent2Agent Protocol (A2A), it gives you everything needed to move agents from local development to shared production environments, without vendor lock-in.

Documentation Index
Fetch the complete documentation index at: https://ibm-8c0b5b62-gpt-researcher-integration.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
What Agent Stack Provides
Everything you need to deploy and operate agents in production:

- Self-hostable server to run your agents
- Web UI for testing and sharing deployed agents
- CLI for deploying and managing agents
- Runtime services your agents can access:
  - LLM Service — switch between 15+ providers (Anthropic, OpenAI, watsonx.ai, Ollama) without code changes
  - Embeddings and vector search for RAG and semantic search
  - File storage — S3-compatible uploads and downloads
  - Document text extraction via Docling
  - External integrations via the MCP protocol (APIs, Slack, Google Drive, etc.) with OAuth
  - Secrets management for API keys and credentials
- SDK (agentstack-sdk) for standardized A2A service requests
- Helm chart for Kubernetes deployments with customizable storage, databases, and auth
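Because service requests are standardized on A2A, they ultimately travel as JSON-RPC 2.0 messages over HTTP. A minimal sketch of that wire format, assuming the `message/send` method and message-part structure from the public A2A specification — this is a hand-rolled illustration, not the payload agentstack-sdk actually emits:

```python
import json
import uuid


def build_send_message(text: str) -> str:
    """Build an A2A `message/send` JSON-RPC 2.0 request body.

    Field names follow the public A2A spec; the payload produced
    by agentstack-sdk may differ in detail.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                # A text part; A2A also defines file and data parts.
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    return json.dumps(payload)


body = build_send_message("Summarize this document")
print(json.loads(body)["method"])  # → message/send
```

POSTing a body like this to an agent's A2A endpoint is what "standardized service requests" means in practice; the SDK wraps this plumbing so your agent code never builds it by hand.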
How It Works
- Build your agent using agentstack-sdk
- Deploy with a single CLI command
- Users interact through the auto-generated web UI

- Development: Run locally with full services for rapid iteration
- Production: Deploy to Kubernetes via Helm and integrate with your infrastructure
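The auto-generated web UI can find a deployed agent because every A2A agent publishes a machine-readable Agent Card describing its name, endpoint, and skills. A hedged sketch of reading one — the well-known path and field names follow the A2A spec, and the sample card contents are invented for illustration:

```python
import json

# A sample Agent Card, as an A2A server would publish at a
# well-known path such as /.well-known/agent.json.
# All values below are invented for illustration.
RAW_CARD = json.dumps({
    "name": "research-agent",
    "url": "https://agents.example.com/research-agent",
    "version": "1.0.0",
    "skills": [{"id": "summarize", "name": "Summarize documents"}],
})


def list_skills(raw_card: str) -> list[str]:
    """Return the skill ids advertised by an agent card."""
    card = json.loads(raw_card)
    return [skill["id"] for skill in card.get("skills", [])]


print(list_skills(RAW_CARD))  # → ['summarize']
```

A client fetches this card once, then routes user messages to the `url` it advertises; that discovery step is what lets the web UI work with any agent you deploy, regardless of framework.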
Get Started
- Quickstart: Get up and running in one command
- Wrap Existing Agents: Deploy your existing agents to Agent Stack
- Build New Agents: Build your first agent with the SDK
- Deploy to Production: Deploy Agent Stack to Kubernetes for your team