How to measure user experience
Metrics, methods and tools to turn user perceptions into actionable data
Saying user experience is "good" or "bad" is not very useful if you cannot back the claim with data. Measuring UX lets you pinpoint specific problems, prioritise improvements with evidence and demonstrate the impact of design decisions on business metrics.
This guide covers the most widely used quantitative and qualitative methods, the tools that support them and the frameworks that help structure a continuous measurement programme.
Why measure user experience?
Without measurement, design decisions rest on opinions. One stakeholder says "this form is confusing", another says "it looks fine to me". Measuring UX introduces evidence: the form abandonment rate is 68%, average completion time is double the industry benchmark, and three out of five users click the wrong field first.
Beyond settling internal debates, UX metrics let you establish baselines, set concrete targets ("reduce checkout time by 30%") and track progress sprint by sprint. Companies like Google, Spotify and Airbnb embed UX metrics in their quarterly OKRs, treating them with the same rigour as financial metrics.
Key quantitative metrics
Quantitative metrics answer "how much" and "how often". They are collected at scale, support statistical comparisons and are ideal for measuring the impact of changes between versions.
The System Usability Scale (SUS) is a 10-question survey that yields a score from 0 to 100. A score above 68 is considered acceptable; above 80, good. The Net Promoter Score (NPS) asks how likely users are to recommend the product on a 0-to-10 scale, classifying respondents as detractors (0-6), passives (7-8) or promoters (9-10); the score is the percentage of promoters minus the percentage of detractors.
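As a concrete sketch, the standard SUS scoring rules and the NPS calculation fit in a few lines of Python (the function names are illustrative, not part of any library):

```python
def sus_score(responses):
    """System Usability Scale score from ten answers on a 1-5 scale.

    Odd-numbered items are positively worded (contribution = answer - 1);
    even-numbered items are negatively worded (contribution = 5 - answer).
    The summed contributions (0-40) are scaled to 0-100 by multiplying by 2.5.
    """
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... are the odd items
        for i, r in enumerate(responses)
    )
    return total * 2.5


def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)
```

A respondent answering 3 ("neutral") to all ten SUS items scores exactly 50, which is why the scale's midpoint is not the acceptability threshold of 68.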
Beyond surveys, behavioural metrics capture what users actually do. Task success rate measures the percentage of users who complete a flow without assistance. Time on task quantifies how long they take. Error rate records how many mistakes are made per task. And abandonment rate reveals the exact point at which users stop trying.
- SUS: standardised questionnaire, easy to administer, score from 0 to 100
- NPS: measures loyalty and willingness to recommend, useful for competitive benchmarking
- Task success rate: percentage of users who correctly complete a flow
- Time on task: average duration to finish a task — lower is generally better
- Error rate: number of errors per task, indicates friction points
- Customer Effort Score (CES): measures perceived effort to resolve an issue
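The behavioural metrics above can all be derived from the same task-attempt log. A minimal sketch, assuming each attempt is recorded as a dict with `completed`, `seconds` and `errors` fields (an illustrative schema, not a standard format):

```python
from statistics import mean


def task_metrics(attempts):
    """Summarise behavioural UX metrics from a list of task attempts.

    Each attempt is a dict: {"completed": bool, "seconds": float, "errors": int}.
    """
    n = len(attempts)
    completed = [a for a in attempts if a["completed"]]
    return {
        "success_rate": len(completed) / n,
        "abandonment_rate": 1 - len(completed) / n,
        # Time on task is usually reported over successful attempts only.
        "avg_time_on_task": mean(a["seconds"] for a in completed) if completed else None,
        "errors_per_task": mean(a["errors"] for a in attempts),
    }
```

Computing time on task over successful attempts only is a common convention; abandoned attempts would otherwise skew the average with arbitrary give-up times.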
Qualitative methods
Quantitative metrics tell you what is happening; qualitative methods tell you why. A think-aloud usability test asks real users to complete tasks while verbalising their thoughts. With five participants you can identify roughly 85% of usability problems, according to Nielsen’s model.
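The 85% figure comes from Nielsen's problem-discovery model, 1 − (1 − p)^n, where p is the probability that a single participant encounters a given problem (Nielsen's commonly cited average is p ≈ 0.31) and n is the number of participants:

```python
def problems_found(n_participants, p=0.31):
    """Expected share of usability problems uncovered by n participants.

    Nielsen's discovery model: 1 - (1 - p)^n, with p the per-participant
    probability of hitting a given problem (p ~= 0.31 is the commonly
    cited average; your product's p may differ).
    """
    return 1 - (1 - p) ** n_participants
```

The curve also explains the diminishing returns of larger samples: doubling from five to ten participants lifts coverage only from about 84% to about 98%, which is why frequent small tests beat rare large ones.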
In-depth interviews explore motivations, frustrations and mental models. Open-ended post-task surveys ("What was the hardest part of this process?") capture immediate perceptions. And diary studies record the experience over days or weeks, revealing patterns that a one-off test cannot detect.
- Usability tests: 5 participants catch ~85% of problems
- In-depth interviews: explore the "why" behind user behaviour
- Post-task surveys: capture immediate perception after completing a flow
- Diary studies: reveal longitudinal usage patterns over time
- Card sorting and tree testing: evaluate information architecture
Tools for measuring UX
The UX measurement tooling landscape has matured significantly. For behavioural analytics, Hotjar and Microsoft Clarity offer heatmaps, session recordings and conversion funnels. For remote usability testing, Maze and UserTesting let you recruit participants, set tasks and collect success and timing metrics automatically.
For standardised surveys like SUS or NPS, tools such as Typeform or SurveyMonkey handle distribution, collection and analysis at scale. Mixpanel and Amplitude cover product analytics with cohorts, funnels and retention analysis. And for qualitative research synthesis, Dovetail and Condens help organise, tag and synthesise findings from interviews and tests.
- Behavioural analytics: Hotjar, Microsoft Clarity, FullStory
- Remote usability testing: Maze, UserTesting, Lookback
- Surveys and NPS: Typeform, SurveyMonkey, Qualtrics
- Product analytics: Mixpanel, Amplitude, PostHog
- Research synthesis: Dovetail, Condens, EnjoyHQ
Measurement frameworks: HEART and UX Scorecard
The HEART framework, developed by Google’s research team, organises UX metrics into five dimensions: Happiness (subjective satisfaction), Engagement (depth of use), Adoption (new users), Retention (returning users) and Task success (completion of key flows). For each dimension you define goals, observable signals and concrete metrics.
The UX Scorecard is a lighter variant that assigns scores to each major product flow across dimensions like efficiency, satisfaction and error rate. It is updated after each research round and lets you visualise how the experience evolves over time. Both frameworks share one principle: do not measure everything — measure what matters for your business objectives.
- HEART: five dimensions (Happiness, Engagement, Adoption, Retention, Task success)
- Each dimension is defined with Goals → Signals → Metrics
- UX Scorecard: per-flow scoring, updated after each research round
- Pick the framework that aligns with your organisation’s OKRs
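The Goals → Signals → Metrics structure is easy to express as plain data. The following is a hypothetical HEART plan for a checkout flow; the dimension names come from the framework, but every goal, signal and metric below is illustrative, not Google's:

```python
# Hypothetical HEART plan for a checkout flow (illustrative content).
heart_plan = {
    "Happiness": {
        "goal": "Checkout feels easy and trustworthy",
        "signals": ["post-purchase survey answers"],
        "metrics": ["CES after checkout"],
    },
    "Engagement": {
        "goal": "Shoppers build baskets actively",
        "signals": ["items added per session"],
        "metrics": ["median items added per weekly active user"],
    },
    "Adoption": {
        "goal": "New users reach a first purchase",
        "signals": ["first-purchase events"],
        "metrics": ["% of new signups purchasing within 7 days"],
    },
    "Retention": {
        "goal": "Buyers come back",
        "signals": ["repeat-purchase events"],
        "metrics": ["90-day repeat purchase rate"],
    },
    "Task success": {
        "goal": "Checkout completes without friction",
        "signals": ["funnel step completions", "error events"],
        "metrics": ["checkout completion rate", "payment error rate"],
    },
}
```

Writing the plan down as data rather than prose makes it trivial to lint (every dimension must have a goal, at least one signal and at least one metric) and to wire into a dashboard later.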
How to set up a continuous measurement programme
Measuring UX once is useful; measuring it continuously is transformative. A sustainable programme combines quantitative metrics collected automatically (analytics, periodic NPS) with scheduled qualitative research (one usability test per sprint or per month).
The first step is to define which flows are critical to the business and which metrics represent their health. The second is to automate data collection wherever possible. The third is to set a review cadence: a monthly dashboard the product team reviews alongside business metrics. And the fourth is to close the loop — every finding should produce an action, whether a design change, an A/B experiment or a deeper investigation.
- Define critical flows and their health metrics (success, time, abandonment)
- Automate collection: analytics events, periodic surveys, degradation alerts
- Set a cadence: monthly metric reviews with the product team
- Close the loop: every data point should lead to a concrete action
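The "degradation alerts" step above can be sketched as a simple baseline comparison. This is a minimal illustration with a fixed 10% threshold and a hard-coded direction-of-improvement table; a real programme would use statistical tests and configurable metric definitions:

```python
def degradation_alerts(baseline, current, tolerance=0.10):
    """Flag metrics that degraded more than `tolerance` vs. their baseline.

    `baseline` and `current` map metric names to values. Whether a higher
    value is better depends on the metric, so direction is declared per
    metric (illustrative table, not exhaustive).
    """
    higher_is_better = {
        "success_rate": True,
        "time_on_task": False,
        "abandonment_rate": False,
    }
    alerts = []
    for name, base in baseline.items():
        change = (current[name] - base) / base  # relative change
        degraded = (
            change < -tolerance if higher_is_better[name] else change > tolerance
        )
        if degraded:
            alerts.append(name)
    return alerts
```

Run after each automated collection cycle, a function like this turns the monthly dashboard review from "scan every chart" into "investigate the flagged metrics".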
Key takeaways
- Measuring UX lets you make decisions based on evidence, not opinions
- Quantitative metrics (SUS, NPS, task success rate) measure the "how much"
- Qualitative methods (usability tests, interviews) explain the "why"
- Google’s HEART framework is a benchmark for organising UX metrics
- A continuous measurement programme combines automatic data with scheduled research
Want to measure and improve your product’s experience?
We implement UX measurement programmes tailored to your product and team. Clear metrics, actionable research and improvements with demonstrable impact.