Why Multi-Agent?
Single-agent systems hit a ceiling. A general-purpose assistant can help with many things but excels at none. Multi-agent systems solve this by letting specialized agents collaborate — a coding agent writes code, a review agent checks it, a planning agent orchestrates the workflow.
Agent Composer is a local-first platform for designing these multi-agent systems. It combines the Agno framework for agent orchestration with the AG-UI protocol for real-time streaming interactions.
The Stack
The platform is split into a Python backend and a TypeScript frontend:
Backend (Python 3.12+ / FastAPI)
- Agno framework with AgentOS for agent lifecycle management
- OpenRouter for LLM access (supports any model, including free tiers)
- SQLite for session persistence (auto-managed by Agno)
- MCP integration for extensible tool capabilities
Frontend (Next.js / React 18)
- Agno Agent UI for the chat interface
- Real-time streaming via AG-UI protocol
- Bun as the JavaScript runtime
Agent Configuration
Agents are defined as JSON configurations, making them easy to create, share, and version:
```json
{
  "name": "Coding Assistant",
  "model_id": "mistralai/devstral-2512:free",
  "instructions": "You are a senior software engineer..."
}
```

Each agent gets access to a Python interpreter with sandboxed capabilities: web search, HTTP requests, shell commands, and file operations. This means agents can actually *do things* — fetch documentation, run tests, write files — not just generate text.
Teams: Coordinated Multi-Agent Collaboration
The real power comes from teams. A team definition specifies:
- Members — which agents participate and their roles
- Coordination — how agents hand off work to each other
- Session — shared conversation history across agents
When you send a message to a team, the lead agent decides which specialist to invoke. The coding agent might write a function, then the review agent checks it for bugs, then the lead summarizes the result. All of this streams in real-time through AG-UI.
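A team definition follows the same JSON-config approach as agents. The sketch below is illustrative: the exact field names (`members`, `instructions`) are assumptions modeled on the agent config format, not a documented schema.

```json
{
  "name": "Dev Team",
  "members": ["Coding Assistant", "Review Agent"],
  "instructions": "Write code first, have it reviewed, then summarize the result."
}
```

As with agents, a team config can be version-controlled and shared, so a working collaboration pattern is reproducible on another machine.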
MCP Integration
Agents can connect to MCP (Model Context Protocol) servers for external capabilities:
```json
{
  "servers": [
    {
      "name": "filesystem",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "enabled": true
    }
  ]
}
```

This means agents can access databases, APIs, file systems, or any other service that has an MCP server — without modifying the agent code.
Local-First Philosophy
Everything runs on your machine. No cloud dependencies beyond the LLM API (and even that can be local via llama.cpp). Your conversations, agent configurations, and session history stay in a local SQLite database.
The platform supports free-tier models from OpenRouter, so you can experiment with multi-agent systems without any API costs. The default model (xiaomi/mimo-v2-flash:free) is surprisingly capable for agent tasks.
Design Decisions
Why Agno? It handles the hard parts of agent orchestration — session management, tool registration, streaming, and multi-agent coordination — so I could focus on the UX and configuration layer.
Why AG-UI? It provides a standard protocol for streaming agent interactions to a frontend. The Agno Agent UI implements this protocol out of the box, giving us a polished chat interface without building one from scratch.
Why JSON configs? Agents should be data, not code. JSON configurations can be version-controlled, shared between team members, and modified without restarting the server. The config API (POST /config/agents) lets you create agents programmatically.
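Since agents are just JSON, creating one programmatically is a plain HTTP call. Here is a minimal sketch using only the Python standard library; the `POST /config/agents` endpoint and port 7777 come from this post, while the "Review Agent" payload values are illustrative:

```python
import json
import urllib.request

# Agent payload using the same fields as the config example above.
# The name, model id, and instructions here are placeholders.
agent = {
    "name": "Review Agent",
    "model_id": "mistralai/devstral-2512:free",
    "instructions": "You are a meticulous code reviewer...",
}

# Build the request against the local backend (started with `make dev`).
req = urllib.request.Request(
    "http://localhost:7777/config/agents",
    data=json.dumps(agent).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending it requires the backend to be running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```

Because the config is data, the same payload could just as easily be committed to a repo and loaded at startup instead of posted at runtime.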
Getting Started
The entire platform starts with one command:
```
make dev
```

This launches the backend on port 7777 and the frontend on port 3000. Create your first agent through the UI or the API, and start exploring multi-agent collaboration locally.