Agentic Engineering vs Vibe Coding: A Product Manager's Guide to Knowing the Difference

Agentic engineering is about to give Product Managers superpowers we never had before. Vibe coding has its place too, but only if we are honest about what it is and what it is not.

Steve James

The distinction that matters

Scroll through any PM community right now and you will find two very different conversations happening under the same banner of “AI-assisted development.” In one, experienced practitioners are describing how AI agents are amplifying their existing expertise, letting them move faster, think bigger, and validate ideas in hours rather than weeks. In the other, people are asking whether Product Managers now need to ship production code because, after all, an LLM can write it for you.

These are not the same conversation. And conflating them is where people get into trouble.

Simon Willison’s Agentic Engineering Patterns guide draws a sharp line between two modes of working with AI. Agentic engineering is what happens when domain experts use AI agents to amplify skills they already possess. Vibe coding is what happens when someone without deep understanding of the output delegates entirely to the model and hopes for the best.

For Product Managers, this distinction is everything. Agentic engineering is, by far, the area we should be embracing. It is going to 10X our productivity and unlock superpowers we never had before. Vibe coding has a place in our toolkit too, but we need to be very clear-eyed about what that place is.


Why agentic engineering is our superpower

The core pattern of agentic engineering, as Willison describes it, is delegation and supervision. Instead of treating an LLM as a glorified autocomplete, you define goals, constraints, quality criteria, and workflows for the AI agent. You review and refine its output. You bring your expertise to bear on what the agent produces.

This should sound familiar. It is, fundamentally, what Product Managers do. We have spent our entire careers defining problems clearly, setting constraints, evaluating outcomes, and making trade-off decisions. We orchestrate. We prioritise. We apply judgement to ambiguous situations. Agentic engineering takes these existing strengths and amplifies them to a degree that was simply not possible before.

Lazar Jovanovic, who works as a professional vibe coder at Lovable and was featured on Lenny’s podcast, captures the underlying principle perfectly:

AI is an amplifier regardless of your background. If you do not know what you are doing, you are just going to produce garbage faster.

The corollary is equally true. If you do know what you are doing, you produce excellence faster. For PMs with strong product sense, clear thinking, and good judgement, agentic AI is a force multiplier unlike anything we have had before.


What 10X actually looks like

Let me be concrete about what this means in practice, because “10X productivity” can sound like empty hype if you do not ground it.

  • Research and synthesis at speed. Competitive analysis that used to take a week of desk research can be synthesised in an afternoon. User research themes can be clustered, cross-referenced against quantitative data, and tested for patterns in hours. One PM described an AI agent that saves hours of manual checking each week by surfacing competitive moves automatically. This is not replacing the PM’s judgement about what matters. It is eliminating the drudgery that sits between a question and an informed answer.

  • Faster feedback loops. Elena Verna described on Lenny’s podcast how her team at Amplitude compressed the traditional multi-month cycle of user research, design sprints, and engineering roadmap prioritisation down to prototyping in a couple of weeks. The compression is not about cutting corners. It is about removing the wait time between having an idea and being able to test it against reality.

  • Richer stakeholder conversations. When you can spin up an interactive prototype to stress-test a hypothesis before you have even written the first user story, the quality of your conversations with engineering, design, and leadership changes fundamentally. You are no longer describing an abstract concept in a PRD. You are showing people a working example of what those requirements describe.

  • Judgement as the bottleneck, not bandwidth. Aparna Chennapragada, formerly a senior PM leader at Google, made the point on Lenny’s podcast that the taste-making and the editing function becomes really important in an AI-augmented world. If your role was mostly process management and report generation, you should be concerned. But if your value lies in judgement, prioritisation, and the ability to frame problems clearly, you are about to become significantly more powerful. Agentic engineering shifts the constraint from “I do not have enough time to do the analysis” to “I need to make better decisions with the analysis I now have.”


The expertise amplifier effect

There is a pattern here that I think is worth naming explicitly. Every capability that agentic engineering unlocks for PMs depends on the expertise you bring to the table. The AI agent does not know which competitive signal matters. You do. The agent does not know which user research theme connects to your strategic priorities. You do. The agent does not know whether the prototype it just built actually solves the customer’s problem. You do.

This is what Willison means when he talks about “hoarding things you know how to do.” The more patterns, frameworks, and hard-won experience you have accumulated over your career, the more effectively you can direct AI agents.

Your expertise is not threatened by agentic engineering. It is the prerequisite for it.

Jovanovic made the same point using the Aladdin and the Genie analogy. You rub the lamp, the genie comes out, and your first wish is to be taller. The genie makes you 13 feet tall because you were not specific enough. The quality of what you get from an AI agent is directly proportional to the clarity and precision of your instructions. And describing with clarity and precision what needs to be built, for whom, and why is quite literally the core PM skill.

David Mytton from Arcjet articulated the boundary well: AI coding only works when there are clear guardrails in place, meaning good documentation and comprehensive tests the agent can run. That, he says, is what differentiates vibe coding from agentic engineering. The guardrails require expertise to define. PMs who have spent years learning to write clear requirements, define acceptance criteria, and think through edge cases are already building the muscle that agentic engineering demands.


The vibe coding caveat

All of which brings me to vibe coding, and why it deserves a more measured take than it usually gets.

I am bullish on Product Managers using vibe coding, but specifically for what it is: a way to supercharge the things we are already doing. When you vibe code a prototype, you are not becoming a software engineer. You are doing what PMs have always done, articulating requirements and demonstrating a potential solution, except that now the output is a working interactive example rather than a static wireframe or a written specification. It is analogous to writing a PRD, except you are showing those requirements in a living, testable form.

I have experienced this firsthand. I built a story mapping tool using Claude Code because every commercially available option was either wildly over-engineered and expensive, or a Miro or Lucid template that ended up being more work than it was worth. I could not be happier with the result for my own tasks. It does exactly what I need.

But the moment you start thinking about releasing something like that beyond your own use, you need to be honest about what you do not know. When you vibe code something, you have no real understanding of how it works under the surface. You can see the inputs and the outputs. You can test the happy path and a few edge cases. But you have no idea what shortcuts the model took, what security considerations were ignored, what performance problems are lurking, or what happens when the thing encounters a scenario that neither you nor the model anticipated.

There are sensible mitigations. Get a different LLM to conduct a code review. Ask the model to explain each part of the codebase so you have a basic understanding of how it is put together. Jake Knapp and John Zeratsky, updating their Design Sprint methodology for the AI era, noted on Lenny’s podcast that teams who jumped to vibe coding prototypes too quickly produced output that was “super generic” and did not really describe what the product was. Their advice: you will move faster if you slow down a little at the beginning. That same discipline applies here.

But the fundamental limitation remains. There will always be something you did not know to ask about. Product Managers have been working with engineers for decades, so we know the kinds of things to look out for at a high level. We know about technical debt, scalability, security reviews, testing. But unless you have actually written production code, maintained it, debugged it at 2am, and dealt with the consequences of architectural decisions made years ago, you are not going to catch everything. And in software, what you miss can range from mildly annoying to catastrophically expensive.

The rule of thumb is simple. Vibe code prototypes enthusiastically. Use them to stress-test ideas, to have better conversations with your engineering teams, to show stakeholders what a solution could look and feel like. And when it is time for that prototype to become production-ready, bring in the engineers with decades of experience to make it work properly. The prototype proves the concept. The engineering team makes it real.


What the full picture could look like

I heard someone describe a workflow recently that I think paints a compelling picture of where this is all heading. Their PM team still conducts user research the way they always have. They send out surveys, capture usage metrics, and go out and speak to customers. All of that is captured, transcribed, and ingested into a data store. Agents then go over this data continuously, looking for trending themes, recurring problems, and emerging opportunities. These are automatically added to a dynamically constructed opportunity solution tree.

The PMs monitor this tree, and when something looks to have enough weight behind it, they build a prototype of the potential solution using a tool like Claude Code. That prototype is then dogfooded by the organisation for a few weeks to stress-test both the idea and the proposed solution. If it looks like it has legs, the prototype is handed to the engineering teams, who pull it apart and build a production-ready version using the correct guardrails and technologies to ensure it is performant, secure, and scalable.

What strikes me about this example is that it is not science fiction. Every piece of this workflow exists today. Agentic engineering handles the research synthesis and opportunity identification, amplifying the PM’s expertise rather than replacing it. Vibe coding handles the prototype, giving the team something real to react to rather than a static specification. And professional engineering handles production, because that is where the decades of hard-won expertise in building reliable, secure software actually matters. Each mode of working with AI is used for exactly what it is good at, and nothing more.
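To make the first step of that workflow less abstract, here is a minimal sketch of the continuous theme-scanning idea in Python. Everything in it is illustrative: the theme keywords, the mention threshold, and the dictionary-shaped opportunity tree are my own placeholder assumptions, and a real agent would use an LLM or embedding clustering rather than literal keyword matching.

```python
from collections import Counter

# Hypothetical keyword buckets; a production agent would classify
# snippets with an LLM or embeddings instead of substring matching.
THEMES = {
    "onboarding": ["sign up", "first run", "setup"],
    "performance": ["slow", "lag", "timeout"],
    "pricing": ["expensive", "cost", "plan"],
}
THRESHOLD = 3  # mentions needed before a theme earns a spot on the tree

def scan_snippets(snippets):
    """Count how many research snippets mention each theme."""
    counts = Counter()
    for text in snippets:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

def update_opportunity_tree(tree, counts):
    """Promote themes that cross the threshold onto the opportunity tree."""
    for theme, mentions in counts.items():
        if mentions >= THRESHOLD:
            tree["opportunities"][theme] = {"mentions": mentions, "solutions": []}
    return tree

# Transcribed research snippets, as the workflow above assumes.
snippets = [
    "Setup took me an hour and I nearly gave up",
    "The dashboard is slow every Monday morning",
    "Sign up flow kept rejecting my email",
    "First run experience was confusing",
]
tree = {"opportunities": {}}
counts = scan_snippets(snippets)
update_opportunity_tree(tree, counts)
print(tree["opportunities"])
# → {'onboarding': {'mentions': 3, 'solutions': []}}
```

The point of the sketch is the shape of the loop, not the matching logic: raw research flows in continuously, recurring signals accumulate, and only themes with enough weight behind them surface for a PM to judge.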

That, to me, is the future that PM teams should be aspiring towards.


Where this leaves us

The distinction between agentic engineering and vibe coding is not just semantic. It is the difference between a power tool in the hands of a craftsperson and a power tool in the hands of someone who watched a YouTube tutorial. Both can produce impressive results. Only one can be trusted when it matters.

Product Managers have spent decades building the exact skills that agentic engineering rewards: clarity of thought, structured problem framing, judgement under ambiguity, and the ability to orchestrate people and processes towards an outcome. AI does not diminish any of that. It amplifies all of it. The PMs who recognise this, and who learn to wield these tools with the same discipline they bring to everything else, are going to be extraordinarily effective.

Embrace agentic engineering fully. Use vibe coding enthusiastically for prototypes and personal tools. And when it is time to make something real, bring in the engineers. Not because AI has failed, but because knowing the limits of your expertise has always been the most product-management thing you can do.