Deploy specialized agents on your infrastructure—without building from scratch or using SaaS. AgentSystems combines a federated agent ecosystem, provider portability, and container isolation to create portable AI agents that run in your environment—laptop, server, or data center.
# Install on macOS or Linux - follow the setup prompts
curl -fsSL https://github.com/agentsystems/agentsystems/releases/latest/download/install.sh | sh
Git-based agent index using GitHub forks. No central gatekeepers—anyone can operate their own agent index alongside community indexes.
Agents integrate with OpenAI, Anthropic, Bedrock, and Ollama: write the agent once and run it with any supported provider. This is the "write once, run anywhere" moment for AI agents.
Each agent runs in its own Docker container with configurable network egress filtering, thread-scoped artifact storage, and hash-chained audit logs.
Teams need specialized AI agents—codebase migration, research synthesis, visual content analysis, structured data extraction—but face three bad options:
Sending data to third parties (managed SaaS agents)
Spending weeks of development per agent to build in-house (most teams lack ML expertise)
Hand-configuring networks, volumes, proxies, and API keys for each self-hosted agent
A single-command deployment handles networking, isolation, and audit logging
Run agents on-premises or air-gapped with configurable egress controls
Browse community agents instead of building from scratch
Discover, evaluate, and deploy agents with container isolation and audit logging
Git-based agent index where developers publish via GitHub forks. Anyone can run their own index alongside the community index. No central authority controls listing or distribution.
Each agent runs in its own Docker container with configurable egress filtering, thread-scoped artifact storage, hash-chained audit logs, and lazy startup.
Switch from OpenAI to Anthropic to Ollama through configuration. Run the same agent with different models and providers. Reduce vendor lock-in at the agent level.
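The portability pattern described above can be sketched as a small adapter layer: agent code talks to one interface, and configuration picks the backing provider. This is an illustrative sketch, not the actual AgentSystems configuration format; the class and function names are hypothetical, and real adapters would wrap the OpenAI, Anthropic, or Ollama SDKs.

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal interface every provider adapter implements."""

    def complete(self, prompt: str) -> str: ...


# Illustrative stub adapters -- real ones would call the provider SDKs.
class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class OllamaProvider:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"


PROVIDERS = {"openai": OpenAIProvider, "ollama": OllamaProvider}


def get_provider(name: str) -> ChatProvider:
    """Select a provider from configuration; agent code stays unchanged."""
    return PROVIDERS[name]()
```

Swapping providers then means changing one configuration value, not rewriting the agent.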
Laptop, server, Kubernetes—works where Docker runs.
Langfuse integration for tracing LLM calls and agent execution.
SHA-256 hash-chained logs for operation tracking.
Each request gets a unique thread ID with separate artifacts.
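The hash-chained audit log mentioned above works on a simple principle: each entry's SHA-256 hash covers both its own payload and the previous entry's hash, so editing any past record breaks every hash after it. A minimal sketch of that principle (the function names and entry format are illustrative, not the actual AgentSystems log schema):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry


def append_entry(log: list, record: dict) -> list:
    """Append a record whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return log


def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash depends on all prior entries, an auditor only needs the final hash to detect retroactive edits anywhere in the log.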
Concrete examples of what you can build with AgentSystems
Input: 20 PDFs → Output: Structured literature review with citations and theme analysis
Input: Legacy Python 2.7 codebase → Output: Python 3.12 implementation with migration notes
Input: Product pages from 10 websites → Output: Structured JSON with pricing, specs, availability
Input: CSV exports + template → Output: Formatted PDF with charts, tables, executive summary
Input: Bank statements from 5 institutions → Output: Standardized transaction records in unified format
Input: Codebase repository → Output: API reference docs with examples and descriptions
Input: 100-page legal documents → Output: Structured data extraction with clause identification
Input: Application codebase → Output: Vulnerability report with findings
Input: Internal databases + requirements → Output: Structured reports
Browse available agents at agentsystems.github.io/agent-index
Get AgentSystems running with a single command
# One-command install (macOS/Linux)
curl -fsSL https://github.com/agentsystems/agentsystems/releases/latest/download/install.sh | sh

# Initialize and start
agentsystems init agentsystems-platform && cd agentsystems-platform
agentsystems up

# Open web UI
open http://localhost:3001
AgentSystems is a runtime for agents built with frameworks like LangChain and LangGraph, not a replacement for them. Your agent code uses LangChain; AgentSystems deploys and runs it.
They operate at different layers. AI gateways route API calls; AgentSystems runs complete agent applications with workflows and file I/O.
You can build this yourself. AgentSystems is the pre-built, standardized version with discovery, egress control, audit logs, and artifact management.
Some customers won't use your agent if it means sending data to your servers. AgentSystems gives you a distribution channel for those customers.
Use standard frameworks like LangChain and LangGraph. Wrap in a FastAPI app with /invoke, /health, and /metadata endpoints.
Git-based publishing via GitHub forks. No approval process—automated validation only.
Your agent runs on their infrastructure under their control. You don't host or store their data. You focus on agent capabilities, they handle deployment.
Core components work together as a complete system
Build, share, and discover AI agents with a growing community of developers.