
Case Study (01) · Healthcare

Live Insights
Reporting Tool

A unified real-time analytics station for the contact center (950 operators), enrollment teams, and claims operations.

Client: National BPO provider · Healthcare · Enterprise
My role: Lead UX Designer · End-to-end ownership
Timeframe: 14 months · Multi-release delivery
Scope: Platform design · Research · IA · UI

Context

One national provider, four disconnected systems

A leading BPO provider serving Medicare Advantage, Part D, and Managed Medicaid across 50 states needed to consolidate the fragmented operations of its contact center, enrollment, and claims teams into a single real-time platform. I led UX from discovery through handoff.

Problem & Solution

Problem
  • High attrition and rising call abandonment across the contact center.
  • Manual spreadsheet reporting left managers blind to agent occupancy, queue performance, and workload distribution.
  • Fragmented data made proactive decision-making nearly impossible.
Solution
  • A unified real-time platform consolidating four data sources into role-based views.
  • Agents get task-level detail, managers get team metrics, executives get cross-plan trends.
  • Teams can spot issues, trigger alerts, and act before they escalate.

Constraints & My Role

What couldn’t change
  • No API rewrite of the legacy ACD or claims systems — the new platform had to layer on top.
  • Strict healthcare compliance restrictions on what data could surface to whom.
  • Aggressive 14-month delivery across multiple staged releases.
My role
  • Owned: UX strategy, research, IA, interaction design, stakeholder alignment across product, engineering, and ops.
  • Did not own: visual branding (used client’s existing system), backend architecture, data pipeline implementation.
  • Lead UX Designer end-to-end across discovery, multiple releases, and handoff.

Impact

38% Reduction in report generation time
58% Productivity gain
64% Adoption within 3 months
4 Systems unified into one platform

Rolled out to users managing 32 health plans serving 4.4M members nationwide — replacing manual spreadsheet reporting with a single real-time platform for managers, executives, and contact center agents.

Design Approach

Why double-diamond, not something faster

With three audiences, four data systems, and 32 health plans pulling in different directions, the cost of getting the problem wrong was far higher than the cost of moving fast. I chose a structured double-diamond process to separate hearing what each audience needed from deciding what to build.

[Double-diamond process diagram — Problem space: briefing → exploring problems → defining problems → prioritizing problems. Solution space: ideating solutions → user stories → estimating value & effort → defining solutions → prototyping, logic & usability testing.]

Research

Grounding decisions in operator behavior

I ran moderated online interviews directly with representatives across three audiences — agents, managers, and executives — and wrote scripts for BAs to execute shadowing sessions on the operations floor. The combination gave us both the reported experience and the observed workflow.

What users told me

Three audiences, three different frustrations

Managers

  • End-of-day reports were too late to act on.
  • Spent hours compiling spreadsheets instead of managing teams.
  • No consolidated view of occupancy, abandonment, or workload distribution.
  • Needed to identify underperforming agents early enough to coach them.

Executives

  • No unified view across 50 states and 32 health plans.
  • Weekly reporting cycles too slow for agile decisions.
  • Needed predictive insight on losses, saves, and forecasts.

Agents

  • Needed visibility into floor operations.
  • Wanted one dashboard with queue, call types, and targets in one view.

Synthesis

Four opportunities worth designing for

Triangulating across the three audiences, the same four themes kept surfacing. These became the north stars the product had to address.

01

Real-time insight at every level

Give agents, managers, and executives access to live performance data — not delayed reports.

02

Consolidated operational view

Replace four disconnected systems with one platform and reduce cognitive load.

03

Claim stage visibility

Show where claims are in the pipeline so managers can identify bottlenecks and prioritize work.

04

Predictive insight on losses and saves

Surface financial exposure signals so leadership can act before issues become costly.

Information Architecture

Four modules, one navigation

I used an impact-vs-frequency matrix to prioritize features, then grouped functionality into four modules aligned to how each audience worked — not how the underlying systems were organized.

Live Insights navigation panel showing four unified modules — Real-Time Monitoring, Diagnostic, Claims, and Workforce — replacing four disconnected legacy systems

Arriving at the Solution

What I shipped, by module

Real-Time Monitoring dashboard
01 / Contact center operations view

Real-Time Monitoring

The floor view for managers. Live agent queue, call volume trends, service level vs. target, and productive vs. non-productive time — refreshed continuously.

02 / Retrospective

Diagnostic

The retrospective view. CSAT trends, forecasted vs. actual handling time, and period-over-period patterns — evidence for coaching and capacity planning.

Call Center Diagnostic dashboard
Enrollment Dashboard
03 / Channel split

Enrollment

Channel-split view. Total enrollments by source, forecast deltas, and line-of-business (LOB) plan breakdowns tied to stage-level turnaround time (TAT).

04 / Utilization

Occupancy

Utilization view. Day-over-day, week-over-week, and month-over-month occupancy with handle time, login time, and trend analysis for aux, idle, and miscellaneous time.

Occupancy Dashboard
Claims Pends Dashboard
05 / Risk Exposure

Claims

Financial exposure view. Pending volumes, adjustments, interest paid, and risk breakdowns with potential penalty for out-of-compliance claims and SLA breaches.

06 / Floor visibility

Agent Floor View

A wall-mounted dashboard for the call center floor itself. Designed to be read across a room: agent status at a glance, live queue and call activity, customer satisfaction, and team timing metrics. The platform’s second audience — the agents on the floor — got their own view of how their center was running, in real time, while they were running it.

Agent Floor View dashboard

Reflection

What this project taught me

Challenges

Hardships I had to overcome

01

Aligning stakeholders across 50 states

Different regions had different workflows and success metrics. I navigated this with structured workshops, a shared taxonomy, and an iterative release strategy where regional feedback informed future versions.

02

Driving adoption away from spreadsheets

Both agents and managers were initially skeptical of automated reporting. I involved key users in co-design sessions, demonstrated quick wins early, and made the productivity gain visible from day one.

03

Balancing three very different user needs

Agents needed task-level detail; executives needed high-level trends. A role-based experience with distinct views solved this — without fragmenting into three separate products.

Trade-offs

Deliberate design decisions

01

Real-time data vs. system performance

Pulling live data from four systems simultaneously created bottlenecks. I prioritized real-time refresh for operationally critical metrics (queue status, abandonment) and scheduled cycles for less time-sensitive data — keeping the platform fast without sacrificing what mattered most.

02

What got cut: actionable items on the dashboard

I pushed for actionable items inside the dashboard — letting managers reassign work, escalate, or trigger workflows directly from the metrics they were watching. The business agreed it would be valuable, but integrating across the underlying platforms required pulling in multiple technical teams and a budget no one wanted to commit to. We shipped read-only. I still think it was the right product decision and the wrong one for the user — managers ended up jumping back to legacy tools to act on what the dashboard told them.

03

What got pipelined: sound and visual alerts on the floor dashboard

Alongside the dashboard, I proposed sound cues for queue spikes and visual flags when individual or team metrics crossed thresholds. A good-to-have on paper, but a meaningful one for the actual users: the reps work overnight from contact centers in the Philippines, and a passive dashboard demands more attention from a tired night-shift worker than the design should. The feature was agreed on but parked for a later release. We shipped a quiet dashboard. The agents who needed the help most got the version that helps least.

Takeaway

What I carry forward

01

Data trust comes before data-driven decisions

Users skeptical of the platform's accuracy wouldn't act on it. Transparency around sourcing and calculation was as important as the design itself.

02

Role-based, not one-size-fits-all

Building one platform for three audiences works — but only when every decision is anchored in a clear user mental model for each.

03

Adoption is a design problem

Getting 950 operators to change how they work isn't a training challenge. The tool's value has to be visible from the first login.