Usability testing with prototypes

Catch user experience issues before they reach production

Usability testing is the practice of watching real users interact with a prototype to identify barriers, confusion and improvement opportunities. It is the most direct way to know whether a design works.

Unlike team opinions or internal design reviews, usability tests generate data based on real behaviour. It does not matter what users say they would do — what matters is what they actually do when facing the interface.

Moderated vs unmoderated tests

In a moderated test, a facilitator guides the session in real time, presents tasks, observes behaviour and can ask follow-up questions. It is ideal for deep explorations where you want to understand the why behind each action.

In an unmoderated test, participants receive written instructions and complete tasks at their own pace, typically recording their screen and audio. It is more scalable: you can launch 50 tests in parallel and review results whenever you want.

  • Moderated: best for discovery, qualitative insights and complex flows
  • Unmoderated: best for quantitative validation, large samples and efficiency metrics
  • Recommended combination: start with moderated tests for discovery, then validate with unmoderated tests at scale

Remote testing: advantages and considerations

Remote testing has become the standard since 2020. Tools like Maze, UserTesting, Lookback and Hotjar allow you to run tests with users anywhere in the world, cutting logistics costs and reducing the bias that an artificial lab setting introduces.

The main advantage of remote testing is access to geographic and demographic diversity. You can test with users from different countries, time zones and usage contexts without leaving the office. The downside is that you lose some body language cues and environmental context.

How many users do you need?

Jakob Nielsen’s research showed that 5 users catch approximately 85% of usability problems in a qualitative test. This makes usability testing one of the research techniques with the best cost-to-value ratio.
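Nielsen's figure comes from a simple probability model: if each user independently uncovers a given problem with probability p (Nielsen and Landauer estimated p ≈ 0.31 on average, though your product may differ), the share of problems found by n users is 1 − (1 − p)^n. A quick sketch:

```python
def problems_found(n, p=0.31):
    """Expected share of usability problems uncovered by n users,
    assuming each user independently detects a given problem with
    probability p (0.31 is Nielsen and Landauer's average estimate)."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
```

With p = 0.31, five users land at roughly 84%, matching the classic ~85% figure, while the curve flattens quickly after that, which is the argument for short iterative rounds rather than one big study.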

For quantitative validations where you need statistical significance (conversion rates, task times, A/B comparisons), you will need larger samples: between 20 and 40 participants depending on expected variability.

  • 5 users: sufficient to detect major usability problems in qualitative tests
  • 8-12 users: recommended when you have distinct audience segments
  • 20-40 users: necessary for quantitative metrics with statistical significance
  • Rule of thumb: 3 rounds of 5 users with iteration beats 1 round of 15 without changes
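For the quantitative end of the scale above, the driver is the margin of error of a measured proportion: a 95% confidence interval for a task success rate is roughly p ± 1.96·√(p(1−p)/n). A sketch using the normal-approximation interval (the numbers are illustrative; very small samples may warrant a Wilson interval instead):

```python
import math

def success_rate_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a task success rate."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Illustrative example: 24 of 30 participants completed the task
low, high = success_rate_ci(24, 30)
print(f"success rate 80%, 95% CI: {low:.0%} - {high:.0%}")
```

Even with 30 participants the interval spans nearly 30 percentage points, which is why quantitative claims need far larger samples than qualitative discovery does.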

How to prepare a usability test

The quality of a usability test depends more on preparation than execution. A poorly planned test with a perfect prototype generates useless data. A well-designed test with a basic wireframe can reveal transformative insights.

  • Define 3-5 realistic tasks that reflect the product’s most critical flows
  • Write a facilitation script with neutral instructions that do not hint at the answer
  • Recruit participants who represent your actual audience, not your team
  • Prepare the prototype ensuring task flows are fully functional
  • Run a pilot test with a team member to catch issues in the script or prototype

Result analysis: from observations to decisions

Collecting recordings and notes is only the first step. The real value lies in synthesising findings into actionable insights and prioritising them by impact. Not every detected problem deserves the same attention.

An effective framework is to classify issues by severity: critical (prevents task completion), major (causes significant frustration), minor (causes momentary confusion) and cosmetic (does not affect the flow). This allows you to prioritise redesign efforts objectively.

  • Group problems by patterns, not by individual user
  • Classify by severity: critical > major > minor > cosmetic
  • Document what works well alongside problems (not just negatives)
  • Generate actionable recommendations, not abstract observations
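The triage steps above can be sketched as a small script: findings (names and data here are purely illustrative) are grouped by pattern rather than by user, then sorted so critical, widely-hit issues surface first.

```python
# Severity scale from the framework: critical > major > minor > cosmetic
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "cosmetic": 3}

findings = [  # (pattern, severity, users affected) - illustrative data
    ("checkout button not found", "critical", 4),
    ("filter labels ambiguous", "major", 3),
    ("icon tooltip missing", "minor", 2),
    ("off-brand hover colour", "cosmetic", 1),
    ("checkout button not found", "critical", 1),
]

# Group by pattern, keeping the worst severity and the total users affected.
grouped = {}
for pattern, severity, users in findings:
    prev = grouped.get(pattern)
    if prev is None or SEVERITY_ORDER[severity] < SEVERITY_ORDER[prev[0]]:
        worst = severity
    else:
        worst = prev[0]
    total = users + (prev[1] if prev else 0)
    grouped[pattern] = (worst, total)

# Prioritise: severity first, then how many users hit the problem.
prioritised = sorted(grouped.items(),
                     key=lambda kv: (SEVERITY_ORDER[kv[1][0]], -kv[1][1]))
for pattern, (severity, users) in prioritised:
    print(f"[{severity}] {pattern} ({users} users)")
```

Grouping before sorting matters: a problem three users hit independently is one pattern with weight three, not three separate observations.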

Usability testing tools

The testing tool ecosystem has matured enormously. There are options for every test type, budget and sophistication level, from enterprise solutions to free tools that work perfectly well for small teams.

  • Maze: unmoderated tests integrated with Figma, automatic flow metrics and heatmaps
  • UserTesting: on-demand participant panel with advanced demographic targeting
  • Lookback: moderated remote tests with two-way video and timestamping
  • Hotjar: session recordings and heatmaps on prototypes deployed as web pages
  • Optimal Workshop: card sorting, tree testing and first-click testing
  • Google Meet + Figma: the most accessible combo for moderated remote tests

Key Takeaways

  • Usability tests reveal problems that internal reviews never catch
  • 5 users are enough for a qualitative test that uncovers the main issues
  • Combine moderated (discovery) with unmoderated (validation at scale) tests
  • Classify issues by severity to prioritise redesign effort
  • Remote testing is the current standard: accessible, scalable and with mature tooling

Want to test your product’s usability?

We design and run usability tests with real users so you can make design decisions based on data, not assumptions.