The theatre of prioritization
Every quarter, the same ritual: leadership wants "data-driven prioritization." Someone dusts off the RICE spreadsheet. Reach? A guess. Impact? Pure speculation. Confidence? Based on vibes. Effort? Engineering shrugs and says "three weeks" for everything.
You sum it up, sort by score, and present the roadmap. Everyone nods. It feels rigorous.
Then six months later, you're in a retention review explaining why that "high-impact" feature has a 12% adoption rate.
The problem wasn't the framework. The problem was you scored a fantasy.
What fake conviction looks like
A real example I saw last year: B2B SaaS company building a mobile app. The CEO was convinced it was critical. "Enterprise customers keep asking for it," he said. Sales agreed. The PM scored it high on RICE, prioritized it, and spent nine months building it.
Usage after launch? 4% of paid customers. Turns out "asking for it" meant "mentioned it once in a closing call as a nice-to-have."
They didn't validate demand. They validated that the CEO could remember hearing about it.
This is feature factory syndrome in its purest form. Not building too many features — building features you feel confident about but have no actual evidence for.
The inputs to your prioritization framework were:
- Anecdotes from sales (selectively remembered)
- One customer interview (with a prospect who didn't buy)
- Usage data from a different product (in a different context)
- Executive intuition (based on competitors who aren't actually winning)
You fed garbage into RICE and got garbage out. The spreadsheet just made it look scientific.
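To see why the spreadsheet can't save you, it helps to remember that RICE is just arithmetic: (Reach × Impact × Confidence) / Effort. Here's a minimal sketch (the numbers are illustrative, not from the case above) showing how easily inflated inputs swing the score:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach * Impact * Confidence) / Effort.

    reach:      people affected per period
    impact:     typically 0.25 (minimal) to 3 (massive)
    confidence: 0.0 to 1.0
    effort:     person-months
    """
    return (reach * impact * confidence) / effort

# The same hypothetical feature, scored twice.
honest = rice_score(reach=40, impact=0.5, confidence=0.5, effort=9)    # ~1.1
inflated = rice_score(reach=500, impact=2.0, confidence=0.8, effort=3) # ~266.7
```

Two hundred-fold difference, and every input was "defensible" in a meeting. The formula is fine; it just amplifies whatever conviction you feed it.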
Why scattered data kills conviction
Most teams aren't ignoring customer feedback — they're drowning in it. Feedback lives in:
- Slack threads with sales
- Support tickets nobody has time to synthesize
- Interview notes from three PMs ago
- Dashboards that only show what happened, not why
- That one spreadsheet someone maintains manually
When prioritization time comes, you can't actually know what matters. So you approximate. You remember the loud stuff. You trust your gut. You build what feels important.
And that's how feature factories run: on manufactured conviction.
The alternative isn't more meetings or better templates. It's making the actual evidence accessible when you're making decisions. This is what tools like Mimir are trying to fix — connecting feedback, usage patterns, and customer context so you're not prioritizing from memory.
What real conviction looks like
I worked with a PM who postponed a feature three board members wanted. Not because she was brave — because she had receipts.
She pulled up:
- 23 customer interviews where the feature came up (only 4 unprompted)
- Usage data showing the workaround had 67% weekly engagement
- Support ticket volume (8 tickets in 6 months, all from the same account)
- Win/loss data showing it came up in 0 lost deals
She didn't kill the feature forever. She just said "not yet" with evidence.
Board members couldn't argue with actual data. They still felt it was important, but feelings don't beat facts when you've got both in front of you.
That's the difference. Not having better instincts. Having better inputs to your instincts.
The real fix isn't another framework
Stop trying to fix prioritization with better scoring systems. Your RICE template is fine. Your ICE spreadsheet is fine. Value vs. Effort matrices work.
The problem is you're scoring made-up numbers.
Before you rank anything:
- Get your evidence in one place. If finding relevant customer feedback takes more than 30 seconds, you'll just skip it and guess.
- Timestamp everything. That feature request from 2022? Different context. Don't treat it like it happened yesterday.
- Count, don't remember. "Lots of customers asked" is not data. "47 customers mentioned it, 12 unprompted, 8 in churn calls" is data.
- Separate signal from noise. The squeaky wheel isn't always representative. Look at patterns, not volume from one loud account.
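"Count, don't remember" doesn't require special tooling. A sketch of what a tally might look like, with a hypothetical feedback schema (the field names and sources here are assumptions, not a real product's data model):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    # Hypothetical schema: one recorded piece of customer feedback.
    feature: str      # feature the feedback is about
    account: str      # which customer said it
    source: str       # e.g. "interview", "support", "churn_call"
    unprompted: bool  # did they raise it themselves?

def summarize(items: list[FeedbackItem], feature: str) -> dict:
    """Turn scattered feedback into the counts you'd bring to prioritization."""
    hits = [i for i in items if i.feature == feature]
    return {
        "mentions": len(hits),
        "unique_accounts": len({i.account for i in hits}),
        "unprompted": sum(i.unprompted for i in hits),
        "by_source": dict(Counter(i.source for i in hits)),
    }
```

Note the `unique_accounts` field: it's what separates "47 customers mentioned it" from "one loud account filed 47 tickets."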
You don't need a new framework. You need to stop pretending you know things you don't actually know.
The takeaway
Feature factories don't happen because teams lack discipline. They happen because teams lack ground truth.
You can't prioritize effectively when your customer intelligence is scattered across tools, memories, and anecdotes. The framework doesn't matter if the inputs are fiction.
Stop optimizing the spreadsheet. Start fixing what goes into it.
