Mimir analyzed 15 public sources — app reviews, Reddit threads, forum posts — and surfaced 15 patterns with 7 actionable recommendations.
AI-generated, ranked by impact and evidence strength
Rationale
Officers already spend 45 minutes per report manually drafting narratives, and Abel reduces this to 10 minutes through automation. But the gap between draft generation and submission is where engagement happens. Nine sources describe real-time chat-based editing, policy checkers that flag missing information, and quick prompts for grammar checks or length adjustments. These aren't nice-to-haves — they're the difference between officers trusting the AI output and abandoning it for manual rewrites.
The evidence shows officers need step-by-step procedural guidance during active patrol, not just at the desk. When an officer realizes mid-report they forgot a witness statement or missed a required field, the product should prompt them immediately rather than forcing them back to the BWC footage or their notes. This is a retention driver: if officers feel like Abel catches their mistakes before supervisors do, they'll use it on every call.
Without this, you risk creating a draft generator that officers still have to heavily edit manually, eroding the 4-5x time savings claim. The policy checker and quick-prompt system are what transform Abel from a transcription tool into a copilot officers rely on for every report.
6 additional recommendations generated from the same analysis
Three sources describe investigators searching all BWC footage by person names, vehicle descriptions, clothing, or phrases, essentially turning every call into a field interview database. This is a fundamentally different use case from report generation: investigators aren't writing reports; they're hunting for leads across thousands of hours of footage. Voice fingerprinting that distinguishes officer voices from subject voices would improve transcript accuracy and surface investigative patterns across that footage.
Ten sources describe records staff manually extracting crime elements from officer narratives and transferring data between systems. This friction is invisible to patrol officers but undermines the product's value proposition downstream: if records staff still spend hours parsing Abel-generated narratives to populate structured RMS fields, the time saved for officers just shifts the bottleneck rather than eliminating it.
Four sources describe integration with CAD, DEMS, RMS, and evidence lockers, emphasizing zero manual data entry and seamless workflow handoffs. But the integration complexity records staff report implies that current workflows still require manual data transfer between systems. If Abel generates a great report but officers or records staff still have to copy fields into their RMS or CAD system by hand, the product hasn't actually eliminated friction; it has just moved it.
Fourteen sources describe Abel Citizen as an online structured reporting portal with multi-language support and geo-location auto-detection. The positioning emphasizes timely records delivery and reducing manual intake burden on records staff. But the evidence describes a web portal, not a mobile app — and citizens filing police reports are likely doing so from their phones, often in situations where they lack reliable connectivity (accidents, thefts, incidents in parking lots or remote areas).
Eleven sources describe quick agency onboarding, per-seat pricing, and free pilot programs designed to reduce adoption friction. But the evidence doesn't describe what happens after initial deployment: how do agency leaders track which officers are using Abel, which report types generate the most edits (indicating template problems), or whether officers are completing the required certification review before submitting reports?
Nine sources describe CJIS compliance, GovCloud hosting, and restricted data sharing, emphasizing that Abel doesn't sell personal information and shares data only with law enforcement agencies or as required by law. But the evidence also notes that data is processed on servers in the United States or in other jurisdictions where service providers operate. For agencies in states with strict data residency requirements (California, Illinois, Texas), the inability to guarantee in-state data storage could be a dealbreaker.
Mimir doesn't just analyze — it's a complete product management workflow from feedback to shipped feature.
Ranked by severity and frequency, with the original quotes inline so you can judge for yourself.
Ask questions, get answers grounded in what your users actually said.
What's the top churn signal?
Onboarding confusion appears in 12 of 16 sources. Users describe “not knowing where to start.” [Interview #3, NPS]
Ranked by impact and effort, with the reasoning you can actually defend in a roadmap review.
Generate documents that reference your actual research, not generic templates.
Transcripts, CSVs, PDFs, screenshots, Slack, URLs.
This analysis used public data only. Imagine what Mimir finds with your customer interviews and product analytics.
Try with your data