The Great PM Skills Debate: What AI Won't Replace
Everyone agrees AI is transforming product management. Nobody agrees on which skills survive. The debate over strategy, taste, and the “editing function” reveals a profession in the middle of an identity crisis.
There’s a debate happening in product management right now that I find genuinely fascinating, not because it’s new (people have been asking “will AI replace PMs?” for two years), but because it’s finally getting specific. The question has shifted from “will it happen?” to “which bits, exactly?”
The conversation is playing out across podcasts, research, Medium think pieces, and every PM WhatsApp channel I’m in. Zoë Yang’s analysis of 284 Lenny’s Podcast episodes captures the trajectory well: PMs are shifting from feature delivery to systems thinking, building evals, instrumenting failures, and operating in roles with increasingly blurred boundaries. What’s emerged isn’t a clean answer. It’s a set of contradictions that tell us something important about where the profession is headed.
The one thing everyone agrees on
Let’s start with the common ground, because there is some.
The administrative layer of product management is being compressed. Meeting notes, backlog grooming, ticket management, roadmap formatting, basic prioritisation frameworks: these are already being automated or dramatically accelerated. Nobody seriously disputes this. Most commentary frames AI as enhancing PM capabilities rather than replacing them, but even these optimistic takes acknowledge that the coordination and documentation layer is being hollowed out.
Marty Cagan put it bluntly on Lenny’s Podcast: if your job is fundamentally “backlog administrator”, that work is already being done by AI, and the tools are only going to get better. The question is what’s left once you strip that layer away.
Aparna Chennapragada framed it well: if you’re mostly a process person, tracking things, sending emails, managing the machinery, you’ve got a real question to answer about your value add. But on the flip side, she argues, “the taste-making and the editing function becomes really, really important.”
That word “editing” keeps coming up. And it’s worth digging into.
The Editor-in-Chief thesis
There’s a compelling argument gaining traction that the PM role is shifting from “builder” to “editor”. Eric M. De Castro captured it directly: the bots can manage the backlog, the agents can optimise the velocity, and the role of the Senior PM has fundamentally shifted from “Builder” to “Editor-in-Chief”.
The logic goes like this. When AI can generate strategies, write PRDs, draft user stories, and even prototype features, the PM’s job is no longer to produce these artefacts. It’s to curate, refine, and judge them. You’re not writing the first draft any more. You’re deciding which of five AI-generated drafts is actually good, and why.
A similar point surfaces in a piece on taste in the AI age: we’re entering an era where anyone can make a lot, so the differentiator isn’t how much you can produce, it’s how much you can discard. Building is becoming abundant. Editing is becoming the craft.
This shouldn’t feel entirely alien to experienced PMs. The best Product Managers were always defined less by the ideas they greenlit and more by the ones they killed. Saying no to the majority of what comes down the funnel, the feature requests, the stakeholder pet projects, the shiny distractions, has always been the job. What’s changed is the volume. When AI can generate ten plausible strategies before lunch, the filtering muscle doesn’t become less important. It becomes the whole game. Same skill, but on steroids.
The question of whether machines can even develop taste is explored well by Web Designer Depot, who argue that while AI can learn aesthetic patterns, true taste involves cultural context, intentional rule-breaking, and emotional resonance that remain distinctly human.
I find this persuasive up to a point. But it raises an uncomfortable question that nobody seems to have a good answer to yet: how do you develop editorial judgment if you never do the building? Taste requires reps. If AI does the work, where do the reps come from?
Hilde Dybdahl Johannessen pushed back on the “taste as moat” narrative directly, arguing that taste itself can become a form of gatekeeping and that AI may eventually learn to simulate it through preference learning. It’s a useful corrective. The taste argument is appealing, but it’s not as airtight as its proponents suggest.
The strategy paradox
Here’s where it gets really interesting, and where the PM community is genuinely split.
On one side, you have people arguing that strategy is the skill most vulnerable to AI replacement. The reasoning: AI can process vastly more market data, competitive intelligence, and customer feedback than any human. Strategic frameworks are well-documented and replicable. Pattern recognition from thousands of successful strategies is exactly what AI excels at.
On the other side, you have people arguing that strategy is the skill most protected from AI. The reasoning: strategy requires contrarian thinking, and AI is trained on consensus data. It demands human judgment about what to ignore. It involves organisational politics and relationship dynamics that AI cannot navigate. As Hilde Dybdahl Johannessen points out, without deliberate steering, AI will give your company roughly the same advice as your competitors. The market doesn’t reward copycats. It rewards contrarians who are right.
Asha Sharma and Lenny discussed this tension directly on the podcast. You’d think that an AI with all the information about where the market is going, your metrics, and your product today would be excellent at developing strategy. And yet many people believe it’s the one thing AI won’t be good at for a long time, because that’s where human judgment is most irreplaceable.
I don’t think either side has won this argument. But I think the framing is slightly wrong. Strategy isn’t one thing. The analytical component of strategy, market sizing, competitive mapping, trend identification, is clearly vulnerable. The judgment component, what to bet on, what to ignore, when to zig while everyone zags, is clearly protected. The question for any individual PM is: which of those two things do you actually spend your time doing?
The jagged frontier
Ethan Mollick’s concept of the “jagged frontier” is the best mental model I’ve found for thinking about all of this. The idea is that AI’s capabilities aren’t a clean line. They’re uneven. AI excels at some surprisingly complex tasks while failing at some surprisingly simple ones. And the frontier keeps moving.
The practical implication is that you can’t make blanket statements about what AI can and can’t do. You have to test it, task by task, and develop an instinct for where the frontier currently sits. Mollick’s four principles are useful here:
- always invite AI to the table,
- be the human in the loop,
- treat AI like a person (but remember it isn’t one), and
- assume this is the worst AI you’ll ever use.
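The “test it, task by task” idea can be made concrete. A minimal sketch, with entirely hypothetical names: keep a small set of representative tasks, score the AI’s output on each against your own bar for “good”, and watch where the frontier currently sits. Here `run_model` is a stand-in for whatever tool you actually use, and the canned responses exist only so the example runs on its own.

```python
# A minimal, illustrative harness for mapping the "jagged frontier":
# a handful of representative tasks, each with its own pass/fail check.
# `run_model` is a placeholder; swap in a real model call in practice.

def run_model(prompt: str) -> str:
    """Stand-in for a real AI call; returns canned output for the demo."""
    canned = {
        "summarise": "Users churn because onboarding takes too long.",
        "strategy": "Copy the market leader's roadmap.",
    }
    return canned.get(prompt.split(":")[0], "")

def evaluate(tasks):
    """Run every task through the model and record pass/fail per task."""
    results = {}
    for name, prompt, passes in tasks:
        output = run_model(prompt)
        results[name] = passes(output)
    return results

tasks = [
    # (task name, prompt, checker encoding *your* judgment of "good")
    ("summarise", "summarise: interview notes ...", lambda o: "churn" in o),
    ("strategy", "strategy: plan for next year ...", lambda o: "contrarian" in o),
]

frontier = evaluate(tasks)
# A jagged result is the point: strong on synthesis, weak on contrarian
# strategy — and the shape of this map changes with every model release.
print(frontier)  # e.g. {'summarise': True, 'strategy': False}
```

The checkers are the interesting part: they are where your editorial judgment lives, and re-running the same suite against each new model release is how you track the moving frontier rather than guessing at it.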
That last one matters. Whatever AI can’t do today, it will probably do tomorrow. So building your career strategy around AI’s current limitations is a losing game. The question isn’t “what can’t AI do?” It’s “what will remain uniquely human even as AI gets dramatically better?”
The buy-in problem nobody talks about
Multiple podcast guests raised a point that I think is under-discussed: AI can’t do stakeholder management.
Claire Vo put it well when she noted that she doesn’t know how an AI bot achieves buy-in and alignment, “unless everybody’s got their own little bot and they’re all talking to each other.”
This sounds like a throwaway observation, but I think it’s profound. A huge amount of product management is persuasion. Convincing an engineering lead to prioritise your feature. Getting a sceptical executive to fund an experiment. Navigating the politics of a cross-functional team where everyone has different incentives. Building trust with a design team that’s been burned by PMs before.
None of this is analytical. None of it can be synthesised from data. And it’s not going away any time soon, because it’s fundamentally about human relationships and organisational dynamics. Michele Galli made a related point: many PMs have built their competence around communication, organisation, and alignment. Those skills are useful, but they’re also relatively safe. AI compresses the value of coordination, but not the value of genuine influence.
The blurring of roles
One of the most consistent themes across product-related discussions is that the boundaries between PM, engineer, and designer are dissolving.
Tamar Yehoshua predicted that in five to ten years, these lines will blur significantly because AI will enable PMs to build prototypes and designers to code. Meta PMs are already using AI coding tools to become builders themselves, with one PM describing it as being handed “superpowers”, operating less like a conductor moving work between functions and more like a product owner who can execute directly.
Casey Winters offered the sharpest version of this: if you thought the PM job was just filling in frameworks and collecting promotions, then yes, AI will replace you. The real PM job, the one requiring genuine subject matter expertise, is the least likely to be replaced.
Zevi Arnovitz pushed back strongly against the concern that AI weakens PM skills, arguing instead that it’s a collaborative learning opportunity. He sees AI-assisted building as a way for PMs to deepen their craft, not atrophy it. I think he’s partly right. But the risk of atrophy is real for people who skip the fundamentals entirely and go straight to AI-generated outputs without understanding what good looks like.
Where I land
I’ve been thinking about this a lot, partly because I’ve lived through enough technology shifts to know that the conventional wisdom is usually wrong in at least one important way.
Here’s my take: experienced Product Managers who keep up with the technology are going to become more in demand, not less. The “human” skills that make a great PM, judgment, taste, persuasion, the ability to synthesise conflicting signals into a coherent direction, don’t get replaced by AI. They get amplified by it.
The reason is straightforward. If AI handles the execution burden, the research synthesis, the first drafts, the data crunching, the documentation, then experienced PMs can finally step back from the production line and focus on what they should have been doing all along: orchestration. Setting direction. Making judgment calls. Editing rather than writing. That’s a 10x opportunity for people who have the foundational skills to take advantage of it.
But, and this is the critical caveat, only if they actually engage with the tools. The PMs who will struggle are the ones who either refuse to use AI (and get outpaced) or rely on it blindly (and lose their edge). The sweet spot is what Mollick describes: being the human in the loop, with genuine expertise about when to trust the output and when to override it.
The bifurcation is real. Administrative PMs are in trouble. Strategic, taste-driven, judgment-heavy PMs are entering their golden age. As Saeed Khan argues, AI won’t magically fix role definitions, bad objectives, or lack of strategy; those are human problems that require human solutions. The uncomfortable middle ground is that most PMs are a bit of both, and the transition isn’t going to be comfortable.
The junior PM problem
There’s a question that almost nobody in this debate is addressing honestly, and it’s the one that worries me most: what happens to the people who haven’t gotten good yet?
Every optimistic take on AI and product management, including mine, rests on the same assumption: that experienced PMs with strong judgment and taste will thrive. Fine. But where do experienced PMs come from? They come from junior PMs who were, at one point, not very good at the job.
The basic premise of career development in almost every profession has always been the same. When you start, you’re bad at it. The company understands you’re bad at it. They pay you to do enough of the basic work, the ticket grooming, the meeting notes, the stakeholder chasing, the first-draft PRDs, with the expectation that over time you’ll learn from it, develop judgment, and become genuinely valuable. The grunt work isn’t just work. It’s training.
Now companies are looking at that same grunt work and seeing that AI can do it faster, cheaper, and without needing a desk. The signal from hiring managers is becoming difficult to ignore: they’re no longer prepared to pay someone to be bad at the job long enough for them to get good. Junior PM roles are being cut or simply not backfilled. The entry-level pipeline is narrowing.
This is, of course, extremely short-sighted. The senior PMs everyone is so keen to retain won’t be around forever. They’ll move on, burn out, retire, or get poached. And if there’s nobody coming up behind them, nobody who spent two years in the trenches learning what a good user story looks like by writing five hundred bad ones, then you’ve got a succession crisis dressed up as a cost saving.
It also circles back to the taste question raised earlier. If taste requires reps, and the reps are being automated away, how does the next generation develop the judgment that everyone agrees is irreplaceable? You can’t edit what you’ve never written. You can’t curate if you’ve never built. The “Editor-in-Chief” thesis is compelling for people who already have twenty years of context. It’s a dead end for someone in their first role.
I don’t have a clean answer to this. But I think it’s the most important structural question the profession faces. The debate about whether AI will replace experienced PMs is interesting. The question of whether we’re quietly dismantling the path to becoming one is urgent.
The bottom line
The PM skills debate isn’t really about AI at all. It’s about something the profession has been avoiding for years: what is the actual, irreducible value of a Product Manager?
AI is forcing that question into the open. The coordination gets automated. The strategy gets challenged. What remains is judgment, taste, and the ability to make things happen through people. For experienced PMs who engage with the tools, that’s a genuinely exciting shift. The work gets harder, but it also gets closer to the work that matters.
The problem is that this optimism only holds if you zoom in on the people who are already good. Zoom out and the picture is more troubling. If the profession celebrates the rise of the “Editor-in-Chief” PM while quietly eliminating the junior roles where people learn to write in the first place, we’re not evolving the discipline. We’re hollowing it out.
The companies that get this right will be the ones who recognise that AI doesn’t remove the need to develop people. It changes how you do it. The grunt work might look different, the apprenticeship might be shorter, the tools might be better. But the principle remains: you have to let people be bad at something long enough for them to get good. Any organisation that forgets that is saving money today and buying a talent crisis tomorrow.
So yes, experienced PMs who embrace AI are entering a golden age. But the real test of the profession isn’t whether the current generation thrives. It’s whether we’re building the conditions for the next one to exist at all.