Specialized AI agents designed to run locally without sending your data to third parties. Self-hosted and open source. Discover agents in a visual catalog → Install with one click → Runs on your infrastructure with your credentials.
# Install on macOS or Linux - follow the setup prompts
curl -fsSL https://github.com/agentsystems/agentsystems/releases/latest/download/install.sh | sh
Open ecosystem where community developers publish agents through GitHub forks. No gatekeepers.
Each agent runs in isolation on your machine with container-level separation.
Your API keys and secrets are never shared with agent builders and remain under your control.
Egress proxy with configurable allowlists (default: empty, so all outbound traffic is denied).
Agents inherit your provider configuration (Ollama, AWS Bedrock, Anthropic API, OpenAI API).
SHA-256 chain in PostgreSQL for transparency and accountability.
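The default-deny egress policy above can be illustrated with a short sketch. This is not the actual AgentSystems proxy implementation; the function name and check are hypothetical, shown only to make the allowlist semantics concrete:

```python
from urllib.parse import urlparse

# Illustrative policy check (not the real AgentSystems proxy): an outbound
# request passes only if its destination host is on the allowlist, and the
# allowlist starts empty, so the default is to deny everything.
def egress_allowed(url: str, allowlist: set) -> bool:
    host = urlparse(url).hostname or ""
    return host in allowlist

assert not egress_allowed("https://example.com/upload", set())        # default: deny
assert egress_allowed("https://api.example.com/v1", {"api.example.com"})
```

An operator opts hosts in one at a time, so an agent can reach its configured model provider and nothing else.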
Watch the complete workflow from install to running your first agent
There's another path.
Frontier models keep improving, so agents get easier to build with less expertise. Consumer hardware keeps advancing, so models run locally without cloud providers. NVIDIA researchers make the case for why this works: small language models are the future of agentic AI. Most agent tasks are specialized and repetitive; they don't need massive generalist models.
The infrastructure we build now determines whether we're locked into centralized platforms or not.
AgentSystems is that infrastructure: discover agents, run them locally.
Apache-2.0. No gatekeepers.
Teams need specialized AI agents — codebase migration, research synthesis, visual content analysis, structured data extraction — but face three bad options:
Require sending data to third parties
Takes weeks of development per agent (most teams lack ML expertise)
Requires configuring networks, volumes, proxies, and API keys for each agent
Single-command deployment for networking, separation, and audit logging
Run agents on-premises or air-gapped with configurable egress controls
Browse community agents instead of building from scratch
Discover, evaluate, and deploy agents with container isolation and audit logging
Git-based agent index where community developers publish via GitHub forks. Anyone can run their own index alongside the community index. No central authority controls listing or distribution.
Each agent runs in its own Docker container with configurable egress filtering, thread-scoped artifact storage, hash-chained audit logs, and lazy startup.
Agents specify which model they need, then inherit your provider configuration. Switch from OpenAI to Anthropic to Ollama through configuration. Reduce vendor lock-in at the agent level.
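The provider-inheritance idea can be sketched as a small routing table. The table, environment variable, and model names below are illustrative assumptions, not the actual AgentSystems configuration schema:

```python
import os

# Hypothetical sketch of model-agnostic provider routing. Keys, URLs, and
# model names are illustrative only -- not the real AgentSystems config.
PROVIDERS = {
    "ollama":    {"base_url": "http://localhost:11434/v1", "model": "llama3.1"},
    "anthropic": {"base_url": "https://api.anthropic.com", "model": "claude-3-5-sonnet"},
    "openai":    {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
}

def resolve_provider(name: str) -> dict:
    """Return connection details for the operator-configured provider."""
    if name not in PROVIDERS:
        raise ValueError(f"unconfigured provider: {name}")
    return PROVIDERS[name]

# The agent never hardcodes a vendor; switching providers is pure config.
cfg = resolve_provider(os.environ.get("MODEL_PROVIDER", "ollama"))
```

The point of the design: agent code asks for a capability, and the operator's configuration decides which vendor fulfills it.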
Laptop, server, Kubernetes — works where Docker runs.
Langfuse integration for tracing LLM calls and agent execution.
SHA-256 hash-chained logs for operation tracking.
Each request gets a unique thread ID with separate artifacts.
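The hash-chained audit log works like a minimal blockchain-style ledger. Here is an illustrative sketch of the chaining scheme (the real implementation stores its chain in PostgreSQL; field names here are assumptions for demonstration):

```python
import hashlib
import json

# Each entry's hash covers its payload plus the previous entry's hash, so
# altering any earlier record invalidates every later hash in the chain.
def append_entry(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"op": "invoke", "agent": "doc-extractor"})
append_entry(chain, {"op": "egress", "host": "api.example.com"})
assert verify_chain(chain)                      # intact chain verifies
chain[0]["record"]["agent"] = "tampered"
assert not verify_chain(chain)                  # any edit breaks verification
```

Because each hash depends on everything before it, an auditor can detect after-the-fact edits by re-verifying the chain from the first entry.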
Input: 20 PDFs
Output: Structured literature review with citations and theme analysis
Input: Legacy Python 2.7 codebase
Output: Python 3.12 implementation with migration notes
Input: Product pages from 10 websites
Output: Structured JSON with pricing, specs, availability
Input: CSV exports + template
Output: Formatted PDF with charts, tables, executive summary
Input: Bank statements from 5 institutions
Output: Standardized transaction records in unified format
Input: Codebase repository
Output: API reference docs with examples and descriptions
Input: 100-page legal documents
Output: Structured data extraction with clause identification
Input: Application codebase
Output: Vulnerability report with findings
Input: Internal databases + requirements
Output: Structured reports
Browse available agents at agentsystems.github.io/agent-index
Get AgentSystems running with a single command
# One-command install (macOS/Linux)
curl -fsSL https://github.com/agentsystems/agentsystems/releases/latest/download/install.sh | sh

# Initialize and start
agentsystems init agentsystems-platform && cd agentsystems-platform
agentsystems up

# Open web UI
open http://localhost:3001
AgentSystems is a runtime for agents built with these frameworks. Your agent code uses LangChain; AgentSystems deploys and runs it.
Different layers. AI gateways route API calls; AgentSystems runs complete applications with workflows and file I/O.
You can build this yourself. AgentSystems is the pre-built, standardized version with discovery, egress control, audit logs, and artifact management.
Some customers won't use your agent if it means sending data to your servers. AgentSystems provides a distribution channel for those customers.
Use standard frameworks like LangChain and LangGraph. Wrap in a FastAPI app with /invoke, /health, and /metadata endpoints.
Git-based publishing via GitHub forks. No approval process — automated validation only.
Your agent runs on their infrastructure under their control. You don't host or store their data. You focus on agent capabilities, they handle deployment.
Core components work together as a complete system
Build, share, and discover AI agents with a growing community of developers.