Insurance · Computer Vision · Mobile

Mobile-First Insurance Claims

Mobile-first auto claims app for a top-30 auto insurer processing 400K claims/year, combining on-device YOLOv8 damage detection, guided photo capture, and AI-assisted triage. Complete submission rate rose from 34% to 94%, and settlement time was cut 54%.

94%
Complete Submissions
54%
Faster Settlement
$960K
Annual Savings

The Problem

A top-30 auto insurer processed 400K claims annually. Mobile claims submissions had a 34% completion rate—policyholders abandoned the process because they didn't know which photos to take, the forms were confusing, and there was no feedback on whether their submission was sufficient. Incomplete claims required 3–5 follow-up contacts, adding $120 per claim in processing costs. Average settlement time was 18 days.

The Dataset

180K historical claims with 2.4M damage photos, adjuster assessments, repair estimates, and settlement outcomes. 45K photos annotated with damage bounding boxes across 18 damage categories (dent, scratch, crack, shatter, deformation, etc.). Vehicle make/model data for damage-specific severity scoring.

Model & Approach

  • YOLOv8 Damage Detection: On-device object detection model identifying damage areas in real-time through the camera viewfinder. Draws bounding boxes on detected damage, guides user to capture additional angles.
  • Guided Photo Capture: AR-overlay system prompting users to capture specific views: front, rear, left, right, close-up of each damage area, VIN plate, odometer, and license plate. Validates photo quality (blur, lighting, angle) before accepting.
  • Severity Classifier: CNN model classifying damage severity (cosmetic, minor structural, major structural, total loss) from aggregated damage photos. Feeds into adjuster triage queue prioritization.
  • Estimate Predictor: Regression model predicting repair cost range based on damage type, severity, vehicle make/model, and regional labor rates. Provides instant preliminary estimate to policyholder.
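The hand-off from per-photo detections to adjuster triage can be sketched as below. This is a minimal pure-Python illustration, not the production CNN: the damage weights and bucket thresholds are assumptions chosen to show the aggregation step, while the real severity classifier learns from the 45K annotated photos.

```python
from dataclasses import dataclass

# Illustrative per-class severity weights (assumed for this sketch;
# the production model learns severity from annotated claims).
DAMAGE_WEIGHTS = {
    "scratch": 1.0,
    "dent": 2.0,
    "crack": 3.0,
    "shatter": 4.0,
    "deformation": 5.0,
}

@dataclass
class Detection:
    label: str        # one of the 18 damage categories
    confidence: float  # detector confidence, 0..1
    area_frac: float   # bounding-box area as a fraction of the frame

def severity_bucket(detections: list[Detection]) -> str:
    """Aggregate per-photo detections into a coarse triage bucket."""
    score = sum(
        DAMAGE_WEIGHTS.get(d.label, 1.0) * d.confidence * d.area_frac
        for d in detections
    )
    if score < 0.05:
        return "cosmetic"
    if score < 0.25:
        return "minor structural"
    if score < 0.8:
        return "major structural"
    return "total loss"
```

The bucket, rather than the raw score, is what drives queue prioritization: adjusters pull "major structural" claims ahead of cosmetic ones.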

Architecture

Native mobile app (Swift/Kotlin) with on-device YOLOv8 (Core ML / TF Lite) → photo upload with metadata (GPS, timestamp, device orientation) → cloud severity classification → estimate prediction → Guidewire ClaimCenter integration → adjuster dashboard. Offline-capable: full claim initiation and photo capture without connectivity.
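The photo-upload leg of this pipeline can be sketched as a simple metadata envelope. The field names below are illustrative assumptions (the source specifies only that GPS, timestamp, and device orientation travel with each photo); the point is that offline captures queue locally and serialize identically once connectivity returns.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PhotoMeta:
    claim_id: str
    view: str             # e.g. "front", "vin_plate", "damage_closeup"
    lat: float            # GPS captured at the scene
    lon: float
    captured_at: str      # ISO-8601 timestamp
    orientation_deg: int  # device orientation at capture time

def upload_payload(photos: list[PhotoMeta]) -> str:
    """Serialize queued photos for the cloud pipeline. Photos captured
    offline are held locally and flushed with the same payload shape
    when connectivity returns."""
    return json.dumps({"photos": [asdict(p) for p in photos]})
```

Keeping the envelope identical for online and offline captures is what makes the offline-capable claim initiation transparent to the cloud side.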

Deployment

A/B test with 10,000 policyholders: AI-guided app vs. existing form-based app. 8-week test period with completion rate, settlement time, and customer satisfaction as primary metrics. Post-validation, phased rollout to all 1.2M policyholders over 3 months. Adjuster training on AI-triaged claims dashboard with confidence scores and damage annotations.

Results

Submission Completion Rate: 34% → 94%
Average Settlement Time: 18 days → 8.3 days
Follow-Up Contacts per Claim: 3–5 → 0.4

ROI

$960K annual savings. $540K from reduced follow-up contact costs (86% fewer touches), $260K from faster settlement (reduced reserves carrying costs), $160K from adjuster productivity gains (AI triage reduces assessment time by 40%). Customer NPS for claims experience: 22 → 61.

Why It Was Hard

On-device ML with real-time camera overlay was the core challenge. YOLOv8 needed to run at 30fps on mid-range phones while drawing bounding boxes without lag. Model quantization (INT8) and Core ML / TFLite optimization got us to 25fps on 3-year-old devices.
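The arithmetic behind INT8 quantization is simple to show in isolation. The sketch below is a NumPy illustration of symmetric per-tensor quantization, the scheme the Core ML / TFLite converters apply internally, not a call into either toolchain:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization: map float weights onto
    [-127, 127] with a single scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Max round-trip error is half a quantization step, small relative to
# the weight range -- which is why INT8 cost little accuracy while
# quartering model size and speeding up mobile inference.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize(q, scale) - w).max())
```

The 4x size reduction (int8 vs. float32) is also what makes the model small enough to ship inside the app bundle.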

Photo quality validation was surprisingly complex. Users take photos in rain, at night, with glare, from bad angles. The quality checker needed to reject unusable photos without frustrating users—clear feedback ("Move closer to the damage area") was essential.

What We Learned

The guided capture UX was the breakthrough, not the AI itself. Users didn't fail because they were lazy—they failed because they didn't know what the insurer needed. The AR overlay showing "take a photo from this angle" increased completion from 34% to 94%.

On-device inference eliminates the "upload and wait" friction. Users see damage detection in real-time through their camera, which builds trust in the system and encourages thorough documentation.

FAQ

How accurate is the AI damage assessment?

91% damage-detection accuracy and 87% severity-classification accuracy. All assessments are reviewed by a human adjuster — the AI accelerates triage; it doesn't replace it.

Does the app work at the accident scene?

Yes. Guided photo capture works offline. Photos stored locally and uploaded when connected. GPS and timestamp metadata captured at the scene.

Can this integrate with our claims system?

REST API for Guidewire, Duck Creek, and Majesco. ACORD messaging standards. Bi-directional sync for status, assignments, and payments.

Have a Similar Challenge?

Tell us about your insurance claims or mobile app project.

Discuss Your Project