Mimir analyzed 15 public sources — app reviews, Reddit threads, forum posts — and surfaced 18 patterns with 8 actionable recommendations.
AI-generated, ranked by impact and evidence strength
Rationale
Empty playlists appearing in the trending section (theme 4) directly undermine the discovery experience that drives active engagement: 40+ trending playlists and community groups (themes 0, 1). When users browse trending content expecting curated collections and find non-functional placeholders, it damages platform credibility and wastes discovery real estate that could surface better content.
The trending section is your primary discovery entry point (themes 1, 7), with users creating diverse, creatively named playlists as a core retention behavior. Empty playlists break this loop by interrupting content exploration and suggesting the platform lacks quality control. This is especially problematic given your global user base (theme 2), where first impressions matter for market penetration.
Without addressing this, you risk users questioning whether trending represents genuine community activity or system noise. The fix prevents degraded trust in your discovery algorithm at the moment users are most open to exploring community content.
7 additional recommendations generated from the same analysis
Users are actively iterating on songs with multiple named variations (theme 0), creating series like 'Soul Sisters 1, 2, 3' and 'Painted pitch black ex1, ex2'. Multiple model versions (v2.2, v3-preview) are in simultaneous use (theme 6), but there's no visible way to track iteration history or understand how changes between models or prompts affected output.
Playlist curation is a primary engagement driver (theme 1) with 40+ trending playlists, creative naming conventions, and thematic organization at scale. Users are organizing into groups and building community-driven collections (theme 7), but there's no evidence of collaborative editing or shared ownership of playlists. Songs receive 46+ likes and 9 comments (theme 0), indicating social engagement exists, but it's passive (consumption) rather than active (co-creation).
Users are generating 10 songs within 3 days, creating entire albums, and producing multiple tagged variations of concepts (theme 0). The credit pricing model offers bulk discounts (150 credits for 2 songs vs 200 separately, theme 3), but there's no evidence of a UI that supports batch creation workflows. Users creating 40+ playlists (theme 0) and organizing into albums suggest they think in collections, not single tracks.
Songs show high engagement (165 likes, 9 comments, theme 0) and users are organizing into playlists with creative naming (theme 1), but there's no evidence of visual identity or artwork associated with tracks. Social platforms where music is shared rely heavily on visual thumbnails for discoverability, and playlist covers are often the first impression in trending sections (theme 7).
Pricing includes multiple tiers (160,000 monthly credits on Pro, theme 3) and bulk discounts (150 credits for 2 songs vs 200 separately), but users generating 10 songs in 3 days (theme 0) have no way to monitor spend or optimize credit efficiency. V3 variable-length output (2-4 min vs v2's fixed 1:35, theme 6) changes the cost calculation per song, but this shift isn't surfaced in a way that helps users budget.
V3 removes support for seed, balance_strength, bpm, and num_songs parameters that v2 offered (theme 6), but multiple versions are in active use simultaneously with no guidance on when to use which. Users are experimenting across versions (v2.2, v3-preview visible on same page, theme 6), but there's no evidence the platform explains trade-offs or recommends versions based on goals.
New users face a blank canvas with Simple and Advanced model options (theme 9), but no evidence of scaffolding that demonstrates output quality or helps them produce their first usable result quickly. Users creating 40+ playlists and 10 songs in 3 days (theme 0) suggest momentum builds after initial success, but there's likely a cold-start problem for new users unfamiliar with prompt engineering.
Mimir doesn't just analyze — it's a complete product management workflow from feedback to shipped feature.
Ranked by severity and frequency, with the original quotes inline so you can judge for yourself.
Ask questions, get answers grounded in what your users actually said.
What's the top churn signal?
Onboarding confusion appears in 12 of 16 sources. Users describe “not knowing where to start” [Interview #3, NPS].
Ranked by impact and effort, with the reasoning you can actually defend in a roadmap review.
Generate documents that reference your actual research, not generic templates.
Transcripts, CSVs, PDFs, screenshots, Slack, URLs.
This analysis used public data only. Imagine what Mimir finds with your customer interviews and product analytics.
Try with your data