1. VibeKanban: Deep Dive
| Company | Bloop AI (YC S21), London-based |
|---|---|
| Product | VibeKanban — Kanban board + AI coding agent orchestrator |
| GitHub | BloopAI/vibe-kanban — 22.3K stars, 2.2K forks, 1,891 commits, 224 releases |
| Team | ~10 people |
| License | Apache 2.0 |
| Tech Stack | Rust (49.4%) + TypeScript (46.6%) + PostgreSQL |
| Install | npx vibe-kanban |
| Pricing | Free (open source) + Cloud tier (launched recently) |
| Traction | 100K+ PRs created, 30K+ active users. Notable users at ElevenLabs, Google AI, T3.chat |
| Latest Release | v0.1.22 (March 2, 2026) |
How It Works
VibeKanban implements a three-stage workflow designed for the “new bottleneck” of AI-assisted development:
- 1. Plan
- Create and organize issues on a visual Kanban board. Break features into sub-issues. Set priorities and tags. Issues support parent-child hierarchy for tracking complex features.
- 2. Prompt
- Open a workspace for an issue. VibeKanban automatically creates an isolated git worktree, launches your configured coding agent (Claude Code, Codex, Gemini CLI, Cursor, Amp, Aider, Copilot, Windsurf, OpenCode — 10+ agents supported), and connects them to the right branch. Multiple workspaces per issue enable parallel agent execution.
- 3. Review
- Review diffs with syntax highlighting. Add inline comments. Send feedback back to the agent for iteration. Preview website changes in a built-in browser with DevTools. Merge via GitHub PR or local merge.
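The worktree-per-workspace mechanism in step 2 can be sketched as the git commands an orchestrator would issue for each workspace. This is a minimal illustration, not VibeKanban's actual implementation — the branch-naming scheme and `.worktrees/` layout are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Workspace:
    """One isolated agent workspace: its own branch plus git worktree."""
    issue_id: int
    index: int  # multiple workspaces per issue run agents in parallel

    @property
    def branch(self) -> str:
        return f"issue-{self.issue_id}/ws-{self.index}"

    @property
    def path(self) -> str:
        return f".worktrees/{self.branch}"

    def setup_commands(self) -> list[str]:
        # Each workspace gets a fresh branch and an isolated worktree,
        # so parallel agents never touch the same checkout.
        return [f"git worktree add -b {self.branch} {self.path} main"]

    def teardown_commands(self) -> list[str]:
        return [
            f"git worktree remove {self.path}",
            f"git branch -D {self.branch}",
        ]

# Two parallel workspaces for issue #42 get disjoint branches and paths.
a, b = Workspace(42, 1), Workspace(42, 2)
print(a.setup_commands()[0])
# → git worktree add -b issue-42/ws-1 .worktrees/issue-42/ws-1 main
```

Because branches and checkout directories never overlap, two agents can edit the same file in their own worktrees and the conflict only surfaces at merge time.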
Key Features
- Agent-agnostic: Works with Claude Code, Codex, Gemini CLI, GitHub Copilot, Cursor, Amp, Aider, Windsurf, OpenCode, Factory Droid, Qwen Code. No vendor lock-in.
- Parallel agents: Run multiple agents simultaneously, each in its own git worktree. The core insight: while one agent codes, you plan the next task and review the previous one.
- Git worktree isolation: Each workspace gets its own branch and worktree. No conflicts between parallel agents.
- Built-in browser: Preview, test, and QA web apps directly. Click-to-component navigation jumps to source code.
- Setup/cleanup scripts: Automate dependency installation, builds, and teardown per workspace.
- Agent profiles: Reusable configurations with custom settings for planning mode, model selection, and permissions.
- MCP servers: Extend agent capabilities via Model Context Protocol.
- Tags system: Reusable text snippets inserted into prompts via @mentions.
- Team collaboration: Real-time cloud workspace sharing, org/project management, GitHub and Azure Repos integration.
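The tags system is essentially snippet expansion before the prompt reaches the agent. A minimal sketch — the tag names, snippet contents, and `@word` syntax here are illustrative assumptions, not VibeKanban's actual format:

```python
import re

# Hypothetical reusable snippets a team might register as tags.
TAGS = {
    "style": "Follow the repo's ESLint config; no default exports.",
    "tests": "Add unit tests for every new function.",
}

def expand_tags(prompt: str, tags: dict[str, str]) -> str:
    """Replace each @mention with its stored snippet; leave unknown
    mentions untouched so typos are visible rather than silently dropped."""
    return re.sub(
        r"@(\w+)",
        lambda m: tags.get(m.group(1), m.group(0)),
        prompt,
    )

print(expand_tags("Add rate limiting. @style @tests", TAGS))
```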
What Makes It Work (and What Doesn’t)
Strengths:
- The “Plan → Prompt → Review” loop is genuinely the right workflow for AI-assisted development.
- Agent-agnostic design means it doesn’t bet on one LLM vendor.
- Git worktree isolation is elegant — parallel agents can’t step on each other.
- Rust backend suggests performance focus.
- YC backing and 22K stars give distribution advantage.
Weaknesses / Open Questions:
- Cloud pricing not yet public. Monetization strategy unclear.
- 296 open issues suggest the project is growing faster than the team can maintain it.
- Requires GitHub auth for full features — doesn’t work offline or for non-GitHub users.
- No mobile app. Web-only UI may lose to terminal tools for some developers.
- Competes with features that Cursor and GitHub are building natively into their IDEs.
Key testimonial: “Vibe kanban is the biggest increase I’ve had in productivity since Cursor” — Luke Harries, ElevenLabs growth lead.
2. Competitive Landscape: 17+ Alternatives
The “AI agent orchestration for coding” space exploded in 2025–2026. At least 17 tools now compete in overlapping niches, from open-source Kanban clones to $200/mo enterprise IDEs. The market segments into four categories:
- Direct Kanban + Agent clones (7 tools) — open-source, visual board + agent execution
- Multi-agent terminal managers (3 tools) — TUI/CLI, no visual board, focused on running parallel agents
- AI-native IDEs with task management (4 tools) — full IDEs absorbing agent orchestration
- PM tools adding agent integration (3 tools) — Linear, GitHub, etc. delegating issues to agents
3. Direct Kanban + Agent Clones
| Tool | Stars | What It Does | Differentiation |
|---|---|---|---|
| Claude Task Master | 24,900 | AI-powered task management via structured files + MCP protocol. Detects 13 IDEs (Cursor, Claude Code, Windsurf, Kiro, Zed, Codex, Gemini, etc.). Agent reads/updates tasks directly from editor. | Biggest competitor by stars. File-based, no visual Kanban UI. MCP-native so it works inside existing editors rather than being a separate app. Complementary but competes for the same “how do I manage AI coding tasks?” use case. |
| Claude Task Viewer | Small | Real-time web Kanban board that reads Claude Code’s native task files. Live updates via SSE. Gantt-style timing. Export to Linear/GitHub/Jira. | Read-only observer. Lightweight. Doesn’t orchestrate agents, just visualizes what Claude Code is doing. |
| Auto-Claude | Small | Autonomous multi-session framework. Visual command center with Kanban. Tasks auto-generated, prioritized, evaluated for complexity. Multiple parallel agents. | Closest ideological match to VibeKanban. Same concept (Kanban + parallel Claude Code) built independently. Includes “Insights” panel. |
| Flux | Small | CLI-first Kanban with first-class MCP integration. Claude Code can read tasks and update status directly. Git-native sync. JSON or SQLite storage. Optional web UI. | Minimal philosophy. No SaaS lock-in. MCP-native. For developers who want the Kanban concept without the heavyweight UI. |
| Claude-Code-Board | Small | Web UI for managing multiple Claude Code CLI sessions simultaneously. Workflow automation, custom prompt templates, real-time notifications. | Claude Code only (not multi-agent). Focused on session management rather than project management. |
| Claude-ws | Small | Visual workspace with Kanban, code editor, Git integration, local-first SQLite. Drag-and-drop, full conversation history, real-time streaming. | Local-first approach. Claude Code only. |
| Automaker | Small | Autonomous AI development studio. Drag-and-drop Kanban where agents pick up tasks, read specs, write code, run tests, fix bugs. Powered by Claude Agent SDK. Real-time “Agent Decoding” thought stream. | More opinionated about full SDLC automation. Less human-in-the-loop than VibeKanban. Early stage. |
Pattern: At least 6–8 open-source projects independently cloned VibeKanban’s concept within months. This signals high demand and a clear product-market fit for the “Kanban for AI agents” idea. But it also signals fragmentation risk — VibeKanban’s YC backing, 22K stars, and cloud product give it the best chance of consolidating.
4. Multi-Agent Terminal Managers
| Tool | Stars | Pricing | What It Does | Key Differentiator |
|---|---|---|---|---|
| Claude Squad | 5,600 | Free (open source) | Go-based TUI managing multiple Claude Code, Aider, Codex, OpenCode, and Amp agents. Each agent gets an isolated tmux session + git worktree. Split-pane view. | Terminal-first. No web UI. Lightweight. Popular with developers who live in the terminal. Same parallel agent concept as VibeKanban but different interface philosophy. |
| Superset IDE | 3,285 | Free + $20/seat/mo Pro | Terminal/IDE running 10+ parallel agents on one machine with git worktree isolation. Built-in diff viewer and notifications. Works with Claude Code, OpenCode, Codex, Cursor Agent, Gemini CLI, Copilot. | Fastest-growing competitor. Launched March 1, 2026. Trending #8 on GitHub at launch. Product Hunt listing. Already monetizing. The most direct threat to VibeKanban Cloud. |
| Composio Agent Orchestrator | Small | Free (MIT). Composio platform has commercial tiers. | Creates isolated worktrees, spins up tmux/Docker, launches agents, handles CI fixes, merge conflicts, and code reviews autonomously. 3,288 test cases. | More infrastructure/automation-oriented. Less visual UI, more CI/CD pipeline integration. Part of well-funded Composio platform. |
5. AI-Native IDEs with Task Management
The biggest threat to standalone orchestration tools: full IDEs absorbing the functionality natively. When Cursor, GitHub Copilot, and Kiro all manage parallel agents inside the editor, does VibeKanban still have a reason to exist?
| Tool | Pricing | Agent Features | Threat Level to VibeKanban |
|---|---|---|---|
| Cursor | Free / $20/mo Pro / $40/user/mo Business | Background Agents: spawn remote async agents on cloud VMs. Linear integration: launch agents from Linear issues. Slack/GitHub integrations. To-do planning in agent responses. ~18% market share, ~$9B+ valuation. | High. Background Agents + Linear integration partially replaces a separate Kanban board for teams already using Linear. But Cursor is an IDE, not a standalone board — users may use both. |
| GitHub Copilot + Agent HQ | $10/mo Pro / $39/mo Pro+ / Enterprise custom | Copilot Workspace: issue → plan → multi-file code → PR. Agent HQ (Oct 2025): orchestrates any agent from GitHub + VS Code. Integrations with Slack, Linear, Jira, Teams, Azure Boards, Raycast. 20M+ developers, ~42% market share. | Very high for enterprise teams. Issue-to-PR workflow integrated into existing GitHub/Jira/Linear stack. But more complex and less focused than VibeKanban’s clean board. |
| Amazon Kiro | Free (preview) | AWS’s AI-native IDE (VS Code fork). Spec-Driven Development: specs (requirements + system design + task breakdown) created first, then agents execute. Built-in task checklist. Agent Hooks for automation. Powered by Claude Sonnet. | Medium. Kiro’s spec + task breakdown overlaps with VibeKanban, but it’s a full IDE locked to AWS ecosystem. |
| OpenAI Codex App | $20/mo Plus / $200/mo Pro (bundled with ChatGPT) | macOS desktop app. “Command center for agents.” Parallel agents across projects. Built-in worktree support. Background tasks. Review changes. | Medium. Competes directly for “manage parallel agents from one interface” use case, but locked to OpenAI models and macOS only. VibeKanban is cross-platform and model-agnostic. |
6. PM Tools Adding Agent Integration
| Tool | Pricing | Agent Integration | Impact |
|---|---|---|---|
| Linear | Free (limited) / $8/user/mo / $14/user/mo | Issues delegated to Codex, Cursor cloud agents, GitHub Copilot. Tracks cycle time by agent type. “Designed for workflows shared by humans and agents.” | For teams already using Linear, the need for VibeKanban is reduced. But Linear doesn’t run or orchestrate agents locally. |
| GitHub Projects | Free (bundled with GitHub) | Copilot Workspace converts issues to plans + PRs. Agent HQ orchestrates from within GitHub UI. | The default for teams living in GitHub. Less specialized than VibeKanban but zero extra tool to install. |
| Jira + Atlassian Intelligence | Free / $8.15/user/mo / $16/user/mo | AI-powered issue summarization, planning suggestions, and (increasingly) agent delegation via third-party integrations. | Low for now. Jira is slow to add agent features, but its enterprise distribution is unmatched. |
7. Head-to-Head Comparison Matrix
| Feature | VibeKanban | Claude Task Master | Claude Squad | Superset IDE | Cursor (BG Agents) | GitHub Agent HQ |
|---|---|---|---|---|---|---|
| Visual Kanban board | Yes | No | No (TUI) | No | No | Partial |
| Multi-agent parallel | Yes | Via MCP | Yes | Yes | Yes (cloud) | Yes |
| Model-agnostic | Yes (10+) | Yes (13 IDEs) | Yes | Yes | Partial | No |
| Git worktree isolation | Yes | No | Yes | Yes | Cloud VMs | Yes |
| Code review UI | Yes (inline) | No | No | Yes (diff) | Yes | Yes (PR) |
| Built-in browser preview | Yes | No | No | No | No | No |
| Team collaboration | Yes (cloud) | No | No | Yes | Yes | Yes |
| Open source | Yes (Apache 2.0) | Yes | Yes | Yes | No | No |
| Pricing | Free + Cloud | Free | Free | Free + $20/seat | $20–40/mo | $10–39/mo |
| GitHub stars | 22.3K | 24.9K | 5.6K | 3.3K | N/A | N/A |
VibeKanban’s moat: The combination of visual Kanban + multi-agent orchestration + model-agnostic + open-source + built-in browser preview is unique. No single competitor matches all five. Claude Task Master is bigger by stars but is file-based with no visual UI. Claude Squad is terminal-only. Superset IDE is the closest emerging threat. Cursor Background Agents are powerful but locked to one IDE.
8. 12 Productivity Patterns for AI Coding
Controlled experiments show 30–55% speed improvements for scoped tasks. But the best practitioners treat AI less as magic and more as a very capable junior engineer that needs good specs, focused context, and human review. These are the patterns that actually work:
- 1. Plan Before Code (Always)
- The single most universally recommended practice: never generate code for a complex task without a written plan. Claude Code has Plan Mode (read-only exploration); Cursor has a similar feature. Many practitioners prefer a plain `.md` file with numbered subtasks — it persists across sessions, is version-controllable, and stays visible regardless of context resets. The gold standard: PRD → Plan → Todo → Execute one item at a time. The recommended loop: Explore → Plan → Implement → Commit.
- 2. Spec-Driven Development (SDD)
- Emerged as one of the most important practices of 2025. The workflow: Constitution → Specify → Clarify → Plan → Tasks → Implement. Teams write detailed specs before prompting, treating the spec as the source of truth. Quality of AI output correlates directly with spec quality. Tools: GitHub Spec Kit (open source), AWS Kiro, JetBrains Junie. Teams report the “safe delegation window” expanding from 10–20 minute tasks to multi-hour features.
- 3. One Change Per Prompt
- Combining multiple related changes in one prompt invites context exhaustion and partial results. One function, one bug, one file per prompt — the most reliable constraint across all tools. Stop asking the AI to build the whole feature; atomic units of work are the most dependable pattern.
- 4. Context Hygiene
- Context management has replaced prompt engineering as the primary skill. Key practices: clear aggressively (`/clear` or equivalent for each new task), keep sessions short, and stay under 40% context utilization — performance degrades as context fills. Use planning phases as context boundaries: a research subagent produces a written artifact, then implementation starts in a fresh context using that artifact.
- 5. Show, Don't Tell
- Provide examples from the existing codebase rather than abstract style rules. “Here’s how we implemented auth middleware. Use the same pattern for rate limiting.” In-line examples consistently produce better results than verbose instructions.
- 6. Deterministic Enforcement > Probabilistic Prompting
- CLAUDE.md instructions are suggestions the AI follows probabilistically; hooks are enforced deterministically. For security and quality gates, use hooks/CI, not instructions. Claude Code hooks fire at PreToolUse (block writes to `.env`), PostToolUse (auto-run the linter after every edit), and Stop (desktop notification).
- 7. AI-Assisted TDD
- TDD has had a renaissance because it pairs exceptionally well with AI agents. Tests written before code prevent the agent from “cheating” by writing tests that confirm broken behavior. Workflow: describe behavior in inputs/outputs → AI writes tests → AI writes code to pass tests → AI iterates until all tests pass. Self-healing loop.
- 8. Multi-Agent Domain Routing
- Spawn specialized agents for different parts of the codebase: frontend agent (React/UI), backend agent (API/business logic), database agent (schema/migrations) working simultaneously. Also: competing hypotheses (multiple agents propose different solutions to a hard bug; human picks the best). Start with 3–5 subagents; beyond that, coordination overhead grows.
- 9. Persistent Memory Systems
- CLAUDE.md: keep it under 150 lines; include common commands, code style guidelines, and key architectural patterns. Don't over-document — the codebase itself is the best style reference. Memory Bank (Kilo Code): structured markdown files storing architecture decisions, conventions, and progress. `.cursorrules`: per-project behavioral constraints for Cursor. GitHub Copilot Memory: opt-in system retaining preferences across conversations.
- 10. Architect Mode (Two-Pass)
- Aider pioneered this: high-level planner model (Gemini 2.5 Pro for long context) creates the plan; cheaper executor model (Claude Sonnet) implements it. Significantly reduces errors on multi-file refactors. The same pattern works manually: use Opus/GPT-4 for planning, Haiku/GPT-4o-mini for execution.
- 11. MCP for Real-Time Context
- Model Context Protocol (Anthropic, Nov 2024) is now adopted by OpenAI, Google, and most major vendors. AI assistants get real-time access to GitHub, Linear, Jira, Figma, databases, documentation — without copy-pasting. Cursor, Claude Code, Copilot, Windsurf all support MCP. The productivity gain: less time copy-pasting context, more time on the actual problem.
- 12. Custom Slash Commands / Prompt Libraries
- Reusable `.md` templates for common tasks: component creation, test generation, migration. In Claude Code: stored in `.claude/commands/` as custom slash commands. In Cursor: `.cursorrules` + project-level prompts. Saves the cognitive overhead of rewriting the same prompt every time.
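Pattern 6's "block writes to `.env`" gate can be sketched as a small hook script. The shape assumed here — tool input arriving as JSON on stdin with a `tool_input.file_path` field, and a non-zero exit blocking the call — follows Claude Code's documented hook contract, but verify the exact field names and exit-code semantics against your version:

```python
import json
import sys

def should_block(payload: dict) -> bool:
    """Return True when the tool call targets a .env file."""
    path = payload.get("tool_input", {}).get("file_path", "")
    return path.endswith(".env") or "/.env." in path

def run_hook(raw_stdin: str) -> int:
    """Compute the hook's exit code: 2 blocks the tool call, 0 allows it.
    In a real hook, raw_stdin would be sys.stdin.read()."""
    payload = json.loads(raw_stdin)
    if should_block(payload):
        print("Blocked: writes to .env are not allowed.", file=sys.stderr)
        return 2
    return 0

# A Write to .env is blocked; a normal source edit passes.
assert run_hook('{"tool_name": "Write", "tool_input": {"file_path": ".env"}}') == 2
assert run_hook('{"tool_name": "Edit", "tool_input": {"file_path": "src/app.ts"}}') == 0
```

Unlike a CLAUDE.md instruction, this check runs on every tool call and cannot be "forgotten" by the model — which is the whole point of the pattern.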
9. Context Engineering: The Critical Skill
MIT Technology Review called 2025 the year of the shift “from vibe coding to context engineering.” Context management is now the primary determinant of AI coding quality, more important than prompt engineering.
The Five Levers of Context Engineering
| Lever | What It Means | Practical Tip |
|---|---|---|
| Selection | Choose what enters the context window | Aider: explicitly add files. Cursor: .cursorignore for monorepos. Claude Code: use focused file references. |
| Compression | Reduce tokens without losing information | Summarize past context. Claude Code auto-compacts at 95% capacity. Manually compact between phases. |
| Ordering | Position critical info where the model attends most | “Lost in the middle” problem: 20–25% accuracy variance by position. Put key instructions at the start and end. |
| Isolation | Expose only relevant state to the LLM at each step | Use subagents with fresh contexts for subtasks. Don’t carry debugging history into implementation. |
| Format Optimization | Structure context for maximum model comprehension | Structured data beats prose. Use tables, JSON schemas, type definitions over natural language descriptions. |
Practical Context Window Rules
- Stay under 40% utilization for the highest-quality outputs. A 100K-token window at 40% full beats one filled to 100%.
- A 100K-token context costs roughly 50x more to process than a 10K-token one due to quadratic attention scaling. Smaller contexts are cheaper AND better.
- New task = new context. Don't carry history that pollutes the next task. `/clear` is your friend.
- Planning phases as context boundaries. A research/planning subagent produces a written artifact; start a fresh context for implementation using that artifact.
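The selection lever plus the 40% rule can be sketched as a simple packer: admit content until the target utilization is reached. The 4-chars-per-token estimate is a rough heuristic, and ranking by size stands in for a real relevance score — both are illustrative assumptions:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    return max(1, len(text) // 4)

def pack_context(files: dict[str, str], window: int, target: float = 0.40) -> list[str]:
    """Admit files until the target utilization of the context window is
    reached. A real selector would rank by relevance to the task; sorting
    smallest-first here is just a deterministic stand-in."""
    budget = int(window * target)
    chosen, used = [], 0
    for name, text in sorted(files.items(), key=lambda kv: estimate_tokens(kv[1])):
        cost = estimate_tokens(text)
        if used + cost > budget:
            break  # stop before exceeding the 40% budget
        chosen.append(name)
        used += cost
    return chosen

files = {
    "auth.py": "x" * 200_000,   # ~50K tokens: too big for the budget
    "utils.py": "x" * 8_000,    # ~2K tokens
    "README.md": "x" * 4_000,   # ~1K tokens
}
# 100K-token window at the 40% target gives a 40K-token budget.
print(pack_context(files, window=100_000))
# → ['README.md', 'utils.py']
```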
10. Multi-Agent Orchestration Workflows
Proven Multi-Agent Patterns
| Pattern | How It Works | When to Use |
|---|---|---|
| Domain routing | Frontend agent, backend agent, DB agent work simultaneously | Full-stack features touching multiple layers |
| Parallel research | Multiple agents gather info independently; orchestrator synthesizes | Deep research, competitive analysis, code exploration |
| Competing hypotheses | Multiple agents propose different solutions; human picks best | Hard bugs, architectural decisions, performance optimization |
| Lead + subagent | Orchestrator decomposes task, dispatches subagents, monitors, synthesizes | Large features with many subtasks |
| AI-on-AI review | One agent writes code, another reviews it as a quality gate | Critical code paths, security-sensitive changes |
Orchestration Tools
- Claude Squad: Spawn multiple Claude Code / Aider / Codex instances in isolated tmux + worktree sessions
- VibeKanban: Visual board orchestrating any agent with built-in review
- Conductor, Verdent Deck: Smaller tools for managing multiple Claude instances
- LangGraph: Stateful graph-based orchestration (code-first)
- CrewAI: Role-based agent collaboration with defined workflows
- AutoGen (Microsoft): Conversational multi-agent framework
- n8n, Flowise: Visual/low-code agent workflow builders
Scaling rule: Start with 3–5 concurrent agents. Beyond that, coordination overhead (reviewing, merging, resolving conflicts) exceeds the speed gains. The human becomes the bottleneck at around 5 parallel agents.
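The lead + subagent pattern with that cap can be sketched with a thread pool. The agents here are stand-in functions, not real agent processes — a real orchestrator would dispatch each task to an agent in its own worktree and synthesize the results for human review:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_AGENTS = 4  # the 3-5 sweet spot: beyond this, human review is the bottleneck

def run_subagent(task: str) -> str:
    """Stand-in for dispatching a real coding agent in an isolated worktree."""
    return f"done: {task}"

def orchestrate(tasks: list[str]) -> list[str]:
    # The lead decomposes work, runs at most MAX_AGENTS in parallel,
    # and collects results in task order for the human to review.
    with ThreadPoolExecutor(max_workers=MAX_AGENTS) as pool:
        return list(pool.map(run_subagent, tasks))

results = orchestrate(["frontend: form UI", "backend: rate limiter", "db: migration"])
print(results)
# → ['done: frontend: form UI', 'done: backend: rate limiter', 'done: db: migration']
```

`ThreadPoolExecutor.map` preserves input order, which keeps the synthesis step simple: result N always corresponds to task N regardless of which agent finished first.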
11. The AI Coding Productivity Stack
| Layer | Tool | Why |
|---|---|---|
| Primary coding agent | Claude Code (terminal) or Cursor (IDE) | Claude Code for terminal lovers. Cursor for visual IDE users. Both support Plan Mode, subagents, MCP. |
| Secondary agent | Aider | Architect mode (planner + executor). Explicit file context. Auto-commits. /undo for easy rollback. |
| Agent orchestration | VibeKanban or Claude Squad | VibeKanban for visual boards + review. Claude Squad for terminal multi-agent. Pick based on your workflow. |
| Task management | Claude Task Master (MCP) or Linear | Claude Task Master for AI-native task files. Linear for team collaboration with agent delegation. |
| Project context | CLAUDE.md + .cursorrules + custom slash commands | Keep under 150 lines. Common commands, code style, architecture patterns. Show examples, not rules. |
| Quality gates | Claude Code hooks + CI/CD | PreToolUse hooks for security. PostToolUse hooks for linting. CI for tests. Deterministic > probabilistic. |
| MCP integrations | GitHub, Linear, Figma, database MCP servers | Real-time context without copy-pasting. Reduces context window waste. |
12. Opportunities & What's Missing
Gaps in the Current Landscape
| # | Gap | Problem | Opportunity |
|---|---|---|---|
| 1 | Agent analytics dashboard | No tool tracks agent productivity metrics: time per task, code acceptance rate, revision count, cost per feature, quality score. | Build the “Datadog for AI coding agents.” Track which agents perform best on which task types, optimize model selection, justify team spending. |
| 2 | Spec → Kanban automation | VibeKanban requires manual issue creation. No tool automatically breaks a PRD/spec into a populated Kanban board. | AI that reads a spec, generates issues with acceptance criteria, estimates complexity, creates sub-tasks, and populates the board. Kiro does this partially but inside its own IDE. |
| 3 | Cross-agent conflict resolution | When 5 agents work in parallel worktrees, merging creates conflicts. No tool handles this intelligently. | AI-powered merge conflict resolution that understands the intent of both changes and proposes a correct merge. Integrates with the orchestration layer. |
| 4 | Agent cost optimization | Teams running 5–10 agents don’t know what they’re spending. No tool optimizes model selection by task type. | “Use Haiku for boilerplate, Sonnet for business logic, Opus for architecture decisions.” Auto-route by task complexity. Save 60–80% on LLM costs. |
| 5 | Non-code agent orchestration | VibeKanban is code-only. But teams also use AI for docs, designs, data analysis, testing. | Orchestration board for all AI agents: coding agents, writing agents, design agents, QA agents. The “mission control” for AI-assisted product development. |
| 6 | Replay and learning | No tool records successful agent sessions for replay. Teams can’t learn from what worked. | Session recording + replay. “Here’s how the team’s best prompt + context setup produced a perfect feature in 20 minutes.” Knowledge base of winning patterns. |
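Gap 4's complexity-based routing can be sketched as a tiered router. The keyword heuristics, thresholds, and model tiers are illustrative assumptions — a real product would score complexity from the spec, the diff size, or historical acceptance rates:

```python
def route_model(task: str) -> str:
    """Route a task description to a model tier by a crude complexity score.
    Keywords and thresholds are hypothetical, chosen only to illustrate
    the cheap-by-default, escalate-on-complexity idea."""
    heavy = ("architecture", "refactor", "design", "migration")
    medium = ("bug", "endpoint", "business logic", "test")
    text = task.lower()
    score = 2 * sum(word in text for word in heavy)
    score += sum(word in text for word in medium)
    if score >= 2:
        return "opus"    # expensive tier: architecture-level work
    if score == 1:
        return "sonnet"  # mid tier: business logic and bug fixes
    return "haiku"       # cheap tier: boilerplate by default

assert route_model("generate CRUD boilerplate") == "haiku"
assert route_model("fix the pagination bug") == "sonnet"
assert route_model("design the migration plan") == "opus"
```

Defaulting to the cheapest tier and escalating only on evidence of complexity is what makes the claimed 60-80% cost saving plausible: most tasks in a backlog are boilerplate-shaped.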
The Meta-Insight
The AI coding productivity market is following the same trajectory as DevOps 10 years ago: fragmented open-source tools → consolidation around a few winners → platform plays absorb features. VibeKanban, Claude Task Master, and Claude Squad are today’s equivalents of early Jenkins, CircleCI, and Travis. The question is whether a standalone orchestration tool survives, or whether Cursor/GitHub/VS Code absorb the functionality and make it free.
The bet for standalone tools: Orchestration is complex enough, and agent-agnosticism valuable enough, that developers will want a dedicated tool rather than being locked into one IDE’s implementation. The analogy: Docker survived despite cloud providers offering their own container services, because portability and developer experience mattered more than platform integration.