MCP Registry Interface

How I used a team of 15 agents with 4 prompts


I keep seeing people ask how to use coding agents for more complex products.

👉 This is exactly what we're getting into now.

(Other topics like release pipelines, security, testing, auth, and compliance are coming soon.)

So here it is: a real product, built from scratch, using only multi-agent prompts: real data fetching, a working DB, backend and frontend logic, and a UI.


🤖 The Multi-Agent Framework, Step by Step

Each agent is aware of the broader process, not just its own task.

Borrowing from how real product teams work together, there's value in giving each agent a distinct role and in making sure those roles hand off to one another cleanly.

The Product Manager doesn't just define a spec; it also adds instructions for the Architect, Designer, and Dev Lead.

The Dev Lead decides how many developers it needs for the task, what their execution order should be, and where their task boundaries lie.
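
To make that concrete, here's a rough sketch of what one of those task definitions could look like if you modeled it as data. The field names are my own shorthand, not something from the actual templates (in the real flow, the tasks are markdown files written by the Dev Lead prompt).

```typescript
// Illustrative only: field names are my own, not the template's.
interface DevTask {
  id: number;               // execution order, decided by the Dev Lead
  title: string;
  dependsOn: number[];      // tasks that must finish first (task boundaries)
  context: string;          // relevant excerpts from the spec, architecture, and design docs
  testInstructions: string; // how the executor should verify its own work
  checklist: string[];      // items the executor ticks off before marking the task done
}

const exampleTask: DevTask = {
  id: 3,
  title: "Implement the reputation score service",
  dependsOn: [1, 2],
  context: "Uses the data model from task 1 and the metadata fetcher from task 2.",
  testInstructions: "Run the unit tests and confirm scores stay within 0-100.",
  checklist: ["Logic implemented", "Tests passing", "Results logged in the task file"],
};
```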

Each agent works with context, not just instructions.

This isn't just prompt chaining; it's about shared knowledge.

The agents are aware of each other, collaborate asynchronously, and build on one another's work like a team that's always in sync.

It's very basic, but works surprisingly well.

The flow can also be extended into branching paths, feedback loops, and fully autonomous systems.
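
If you wanted to wire the chaining up in code rather than running each prompt by hand, the shape of it is roughly this. It's a minimal sketch that assumes a runAgent function you'd supply yourself; apart from 1-product-manager.md and the product/ directory, the file names are made up for illustration.

```typescript
import { readFile, writeFile } from "node:fs/promises";

// "runAgent" stands in for however you send a prompt plus prior outputs to the
// model; in my case that step was manual (paste into Cursor, save the reply).
type RunAgent = (promptTemplate: string, context: string[]) => Promise<string>;

// Each stage reads what earlier agents produced, so every agent works with
// shared context rather than an isolated instruction.
async function runPipeline(runAgent: RunAgent, businessGoal: string) {
  const spec = await runAgent(await readFile("1-product-manager.md", "utf8"), [businessGoal]);
  await writeFile("product/spec.md", spec);

  const architecture = await runAgent(await readFile("2-architect.md", "utf8"), [spec]);
  await writeFile("product/architecture.md", architecture);

  const design = await runAgent(await readFile("3-designer.md", "utf8"), [spec, architecture]);
  await writeFile("product/design.md", design);

  const tasks = await runAgent(await readFile("4-dev-lead.md", "utf8"), [spec, architecture, design]);
  await writeFile("product/tasks.md", tasks);
}
```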


🧠 The Methodology

Here's how the flow works:

  1. Executive → Product Manager
    I wrote a short business goal. That was the only input needed; the rest was already inside the prompt template.
  2. Product Manager → Architect
    The agent read the spec and defined the architecture, tech stack, and main components.
  3. Product Manager & Architect → Designer
    Created a design system and flow description in markdown, no Figma needed (yes, I know product designers do much more than that; I simplified it for this example).
  4. Designer & Architect → Dev Team Lead
    Broke the implementation into tasks, each with full context, test instructions, and a checklist.
  5. Dev Team Lead → Executors
    Each task was executed by Claude 3.7 or 3.5 Sonnet, which read the spec, implemented the logic, tested it, and logged the results.

All of this was done in Cursor. Each output was saved to a markdown file in a dedicated directory.
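
Concretely, the working directory ends up looking something like this (only 1-product-manager.md and product/ are named explicitly in this post; the rest of the layout is illustrative):

```
vibe-coding/
├── 1-product-manager.md   # prompt template (named in the post)
├── ...                    # the other role prompts
└── product/               # agent outputs, one markdown file per stage
    ├── spec.md
    ├── architecture.md
    ├── design.md
    └── tasks/
```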

📂 You can check out the prompt templates here:
👉 github.com/omrilahav/vibe-coding


🛠 The Demo: MCP Registry & Reputation Explorer

To showcase the process (I'm working on a much larger-scale product, built with this same multi-agent framework, that will be released soon), I picked a realistic project, the kind of thing that usually breaks in no-code environments:

The MCP Registry app scans for real MCP servers, pulls metadata, calculates a reputation score, and displays it in a dynamic UI to help teams pick trustworthy sources.
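
For intuition, "calculates a reputation score" boils down to blending a few metadata signals into a single 0-100 number. The sketch below shows the general idea; the signal names and weights are placeholders, not the exact formula the agents produced.

```typescript
// Illustrative scoring sketch: field names and weights are my own, not the
// agents' actual implementation.
interface McpServerMetadata {
  githubStars: number;
  lastCommitDaysAgo: number;
  hasLicense: boolean;
  downloads: number;
}

function reputationScore(m: McpServerMetadata): number {
  const popularity = Math.min(m.githubStars / 1000, 1);          // saturates at 1k stars
  const freshness = Math.max(0, 1 - m.lastCommitDaysAgo / 365);  // decays over a year
  const adoption = Math.min(m.downloads / 10_000, 1);
  const licensed = m.hasLicense ? 1 : 0;

  // Weighted blend, scaled to 0-100.
  const score = 0.35 * popularity + 0.3 * freshness + 0.2 * adoption + 0.15 * licensed;
  return Math.round(score * 100);
}
```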


πŸ“ It Took Just a Short Evening

From empty repo to working product, the whole process took just a few hours.

Some tasks didn't work on the first try. I copied the error message back into the task prompt, ran it again, and it worked. The agents handled most things really well.

To keep it simple, I only used the Glama MCP API as a source. The pagination didn't work properly, so the app only shows part of the results, but for the scope of the demo that was fine.
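
For reference, cursor-style pagination is just a loop that keeps requesting pages until the API stops returning a cursor. The sketch below shows that shape only; the endpoint, query parameter, and response fields are placeholders, not the real Glama API.

```typescript
// Placeholder endpoint and field names: NOT the real Glama API shape.
// The point is only the loop structure: keep requesting pages until the
// API stops returning a next-page cursor.
async function fetchAllServers(baseUrl: string): Promise<unknown[]> {
  const results: unknown[] = [];
  let cursor: string | undefined;

  do {
    const url = cursor ? `${baseUrl}?after=${encodeURIComponent(cursor)}` : baseUrl;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);

    const page = (await res.json()) as { servers: unknown[]; nextCursor?: string };
    results.push(...page.servers);
    cursor = page.nextCursor;
  } while (cursor);

  return results;
}
```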

Future versions could add more, but the goal here was to show the process clearly.


🧭 Want to Try It Yourself?

  1. Clone the framework repo:
    👉 github.com/omrilahav/vibe-coding
  2. Pick a project or feature idea you want to try.
  3. Go to 1-product-manager.md and write your business goal in the placeholder.
    • You can see the example input I used here
  4. Run the prompt in Claude 3.7 Sonnet (Cursor is what I used).
  5. Save the output in product/, then move on to the next prompt (this part already happens automatically for you).
  6. Repeat the process: each agent reads the previous output and generates the next.
  7. When the Dev Lead creates tasks, give each one to a new agent and run them in order.

You'll end up with a full set of outputs (specs, architecture, design, code, and tests), all created by AI agents and connected by context.


πŸ“ What's in the Demo Repo

You can explore all of this in the demo repo.
Each folder contains actual agent output.

The product works, even with limited data sources. More importantly, it shows how the agents collaborated to build it.


🚀 What's Next

This is just one possible direction.
The same methodology can support many others.

The long-term goal is to create clear, open flows that allow agents to collaborate like systems, with shared knowledge, scoped decisions, and well-defined responsibilities.