You shipped the feature. It works. The demo is smooth. The engineering team is proud.
Nobody uses it.
This is the most common outcome in product development, and nobody talks about it honestly. Not "it failed": failure implies you tried something risky. This is worse. You built something safe, predictable, well-scoped, and irrelevant.
The feature factory ships twelve things a quarter and moves no metrics. Everyone stays busy. Nobody asks why. Velocity becomes a substitute for impact, and the team mistakes output for outcomes until someone finally looks at the numbers and realizes three quarters of shipped features have single-digit adoption.
How you end up here
The request pipeline
A customer asks for something. A sales rep logs it. A PM adds it to the backlog. Six months later, someone pulls it off the list because it's "been requested multiple times."
Nobody checks whether the original requester still cares. Nobody checks whether the problem still exists. Nobody checks whether the three people who asked represent the 3% who email or the 80% who are affected. The request became a proxy for demand, and the proxy was never validated.
The roadmap pressure
You need to show something for next quarter. The backlog has items. Ship the items, show the slide.
This is where prioritization frameworks become theater. RICE scores (reach × impact × confidence, divided by effort) get massaged to justify whatever's already been decided. Impact scores are gut-feel numbers dressed in spreadsheet formatting. The honest version: "we picked the things that seemed reasonable and fit in the sprint."
If this feels familiar, the evidence-based prioritization guide covers how to replace gut-feel scoring with actual customer evidence.
The empathy gap
PMs use their own product differently than customers do. You know every feature. You understand the mental model. You think "obviously people will find this in settings" because you know it's in settings. Your customers are trying to do one job as fast as possible and ignoring everything else.
The features you think are obvious are invisible to most of your users.
The validation check
Before any feature gets engineering time, run it through three questions:
1. Can you name five customers who have this problem?
Not "customers would probably want this." Actual names. Actual conversations. If you can't point to five, you're guessing. For how to build this evidence base systematically, see the guide on synthesizing customer interviews.
2. How are they solving it today?
If customers have the problem but aren't working around it, it might not be painful enough to solve. Active workarounds — spreadsheets, manual processes, switching tools — are the strongest signal that a solution has pull.
3. What metric moves if you nail this?
"Engagement" is not a metric. "Retention" without a segment is not a metric. Specific: "Day-30 retention for users who complete onboarding." If you can't connect the feature to a measurable outcome, you can't evaluate whether it worked. And if you can't evaluate, you can't learn.
What to do instead
Talk to the humans
Five conversations will teach you more than five weeks of backlog grooming. Not user testing; that comes later. Discovery conversations. "Walk me through how you did X last week." "What was frustrating about that?" "What did you try before giving up?"
Most PMs know they should do this. Most PMs don't do it enough. The ones who do ship fewer features and move more metrics.
Kill the backlog
Your backlog is not an asset. It's a graveyard of decontextualized requests slowly decaying. Items older than 90 days are stale. Items without evidence attribution are opinions. Archive everything older than a quarter and see what gets re-raised organically.
The things that actually matter will come back. The things that don't were never going to move the needle.
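If your tracker can export to CSV, the quarterly sweep is a few lines. A minimal sketch, assuming a hypothetical export with `title`, `created_at`, and `evidence` columns; real trackers name these fields differently.

```python
import pandas as pd

STALE_AFTER_DAYS = 90  # "older than a quarter" from above

# Hypothetical CSV export; adjust field names to your tracker.
backlog = pd.read_csv("backlog.csv", parse_dates=["created_at"])

age = pd.Timestamp.now() - backlog["created_at"]
stale = age.dt.days > STALE_AFTER_DAYS

# Archive the stale items, keep the rest.
backlog[stale].to_csv("backlog_archive.csv", index=False)
kept = backlog[~stale]
kept.to_csv("backlog.csv", index=False)

# Items with no evidence attribution are opinions; flag them for review.
no_evidence = kept["evidence"].isna()
print(f"Archived {stale.sum()} items; "
      f"{no_evidence.sum()} kept items lack evidence.")
```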
Measure adoption, not delivery
Stop celebrating launches. Start celebrating adoption curves. A feature that 40% of your target segment uses within two weeks is a win. A feature that 3% uses after a month is a failure — regardless of how well it was built.
Make adoption the first agenda item in your post-launch review. Not "did we ship on time." Not "did we hit the spec." Did actual humans change their behavior because of what we built?
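The adoption number itself deserves the same rigor as the retention metric above. A minimal sketch, again with hypothetical inputs: an event log of feature usage, the set of user ids in the target segment, and the launch date.

```python
import pandas as pd

def two_week_adoption(usage: pd.DataFrame, segment: set,
                      launch: pd.Timestamp) -> float:
    """Share of the target segment that used the feature within two
    weeks of launch. `usage` is a hypothetical event log with
    `user_id` and `ts` columns; `segment` holds the user ids the
    feature was built for."""
    window = usage[(usage["ts"] >= launch) &
                   (usage["ts"] < launch + pd.Timedelta(weeks=2))]
    adopters = set(window["user_id"]) & segment
    return len(adopters) / len(segment)
```

Run it at two weeks and again at a month. The denominator matters: adoption across your whole user base flatters a feature built for a narrow segment.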
The uncomfortable truth
The best product teams ship less. Not because they're slower — because they're more selective. Every feature has ongoing cost: maintenance, documentation, cognitive load for users, surface area for bugs, complexity for onboarding.
A product with ten features that all get used is better than a product with fifty features where users can't find anything.
The goal isn't to ship. The goal is to matter.
