How we do UX reviews at Nexer

Elizabeth Buie

Senior UX Consultant


Elizabeth Buie explores how we conduct three types of UX reviews at Nexer: general UX reviews, journey-based UX reviews, and heuristic evaluations.

A user experience review is a usability inspection method that involves examining an interactive product to determine how well it follows usability principles and best practices. At Nexer we conduct three types of UX reviews:

  • General UX review: We explore the product as widely as we can, noting usability issues and guidelines violations we find.
  • Journey-based UX review: We walk through the selected journeys and note usability issues and guidelines violations we find. We do not follow any links or paths that are not part of the agreed user journeys.
  • Heuristic evaluation: Using an agreed set of heuristics, multiple reviewers conduct reviews independently and integrate their findings afterwards.

These processes all come under the heading of usability inspection methods. They are somewhat different from one another, though, and it is important to keep them straight. This article includes guidance on choosing which one to use.

General UX review

Clients sometimes ask us simply to review all or part of a product, without giving us much detail about what they want. We should of course seek to find out whether they have any specifics in mind, but often they do not, or they want as much as we can give them in the time and budget they have allocated to the review. In these cases, we conduct a general review, exploring the product as widely as we can and noting usability issues and guidelines violations we find.

Conducting a general UX review

Team activities:

  • Choose a small number of team members (often just one) to conduct the review.
  • Reviewers may work separately or together. Having them work together is a good way for them to learn from each other.

Reviewer activities (if there is more than one reviewer, these can be done together or separately):

  • Cover the product broadly. Do not focus on some parts and neglect others, unless the client has asked to have specific parts reviewed. Take screenshots of everything you cover, even if you don’t see any problems with it at first. Capture interactions as well: menus, dropdowns, error messages, and so on. You never know what you might need for the report.
  • Follow paths that look interesting or promising. At each step, inspect the interface for anything that could make the process confusing, error-prone, longer than necessary, or even just unpleasant or weird: essentially, design features that could make the product less effective, efficient, or satisfying. Include things that violate good design practice or the client’s brand, even if you are unsure how they might affect the experience directly.
  • For each problem, describe it and indicate how it affects the user’s experience. Take more screenshots if you need them.
  • Record your findings using the tool of your choice (PowerPoint, Miro, Word, etc). Include screenshots with comments/arrows/callouts indicating where and what the problem is.
  • For each issue, give your best estimate of its impact on users’ experiences, considering effectiveness, efficiency, and satisfaction. If the product has more than one user audience, indicate which audience(s) the issue affects. Identify issues that need further research, such as usability testing, and suggest steps the team or client might take to address them. If you have design skills as well as research skills, note possible design changes that might help solve the problem.

Team activities:

  • Produce a final report that represents your (or the team’s) findings and recommendations. Deliver it in the format (PowerPoint, Miro, Word, etc) that the client prefers.

Journey-based UX review

Description of a journey-based UX review

Clients sometimes ask us to use a specific set of user journeys to guide our review. In these cases, we walk through the selected journeys and note usability issues and guidelines violations we find. We do not follow any links or paths that are not part of the specified user journeys.

Conducting a journey-based UX review

Team activities:

  • Choose a small number of team members (often just one) to conduct the review.
  • Reviewers may work separately or together. Having them work together is a good way for them to learn from each other.

Reviewer activities (if there is more than one reviewer, these can be done together or separately):

  • Walk through each step of each specified user journey in turn, exercising the interactions needed to accomplish the journey. Take screenshots of everything you cover, even if you don’t see any problems with it at first. Capture interactions as well: menus, dropdowns, error messages, and so on. You never know what you might need for the report.
  • At each step, inspect the interface for anything that could make the journey confusing, error-prone, longer than necessary, or even just unpleasant or weird: essentially, design features that could make the product less effective, efficient, or satisfying. Include things that violate good design practice or the client’s brand, even if you are unsure how they might affect the journey directly.
  • For each problem, describe it and indicate how it affects the user journey. Take more screenshots if you need them.
  • Record your findings using the tool of your choice (PowerPoint, Miro, Word, etc). Include screenshots with comments and callouts.
  • For each issue, give your best estimate of its impact on users’ experiences, considering effectiveness, efficiency, and satisfaction. If the product has more than one user audience, indicate which audience(s) the issue affects. Identify issues that need further research, such as usability testing, and suggest steps the team or client might take to address them. If you have design skills as well as research skills, note possible design changes that might help solve the problem.

Team activities:

  • Produce a final report that represents your (or the team’s) findings and recommendations. Deliver it in the format (PowerPoint, Miro, Word, etc) that the client prefers.

Heuristic evaluation

Description of a heuristic evaluation

“Heuristic evaluation” (HE) is not just a general name for a UX review. HE is a specific type of UX review: the method that Rolf Molich and Jakob Nielsen published (PDF, 1MB) in 1990. Heuristic evaluation involves multiple reviewers (ideally three to five, although you can get value from as few as two). The evaluators use an agreed set of heuristics (high-level principles, usually Nielsen’s 10) to conduct reviews independently of one another. Afterwards, they compare their findings and decide how to characterise the severity of the issues and what to recommend for addressing them.

Conducting a heuristic evaluation

Team activities:

  • Choose at least two team members to participate in the HE as evaluators. Larger projects might aim for three or more. Evaluators can be researchers or designers, but they should know something about the domain of the product and be familiar with principles of usability and good UX design.
  • Agree a set of heuristics for the evaluators to use. Make sure all evaluators understand them before anyone starts evaluating the product.
  • Decide which part(s) of the product/website you will evaluate.

Evaluators’ individual activities:

  • Walk through the parts of the interface being evaluated, checking each page and (ideally) each interaction against the agreed heuristics. Take screenshots of everything: not only whole pages but also detailed interactions (menus, dropdowns, error messages, and so on). You never know what you might need for documentation.
  • Ignore any usability issues that you are unable to map to a heuristic.
  • For each violation of a heuristic, describe it and indicate which heuristic it violates. Take as many screenshots as necessary to convey the problem.
  • For each violation, give your best estimate of its impact on users’ experiences. Consider all three components of usability: effectiveness, efficiency, and satisfaction. If the product has more than one user audience, indicate which audience(s) the violation affects.
  • Record your findings using the tool of your choice (PowerPoint, Miro, Word, etc). Include screenshots with comments and callouts. Keep your own record private, to avoid biasing the other evaluations before everyone has finished.

Team activities:

  • Discuss everyone’s findings, identify areas of agreement, and resolve areas of disagreement or difference.
  • Estimate the impact of issues on users’ experiences. Explore research, such as usability testing, and what steps the team might take. If your team has design skills as well as research skills, note possible design changes that might help solve the problem.
  • Integrate the evaluators’ findings into one report. Consider using a Miro board for this, to foster collaboration.
  • Produce a final report that represents the team’s consensus on findings and recommendations. Deliver it in the format (PowerPoint, Miro, Word, etc) that the client prefers.

Some time ago, Rolf Molich, one of the creators of this method, gave an interview to Jeff Sauro of Measuring Usability. Rolf said:

“To conduct a Heuristic Evaluation, you need to put on blinders and view the world through the…heuristics.”

This means that even if you find a glaring usability issue, unless you can clearly map it to one of the agreed heuristics you have to leave it out of the heuristic evaluation report. Many usability specialists find this to be the biggest challenge in doing HE.

Kate Moran and Kelley Gordon, of the Nielsen Norman Group, have written a very helpful article on how to conduct a heuristic evaluation, on which some of the above is based. Their article includes a free PDF workbook (106KB) to assist with the activity.

Reporting findings and recommendations

Including accessibility issues

If you see accessibility issues during a general or journey-based review, feel free to report them. Keep in mind, however, that a UX review is not an accessibility audit. In addition, UX reviewers differ in how much they know about accessibility and how well they can identify accessibility issues. Therefore, unless the client has explicitly asked for accessibility findings, you may omit them or mention them briefly as a bonus. Do not give the impression that you have conducted a comprehensive accessibility audit or that you have reported all of the accessibility issues that the product contains.

A heuristic evaluation does not consider or report accessibility issues unless they fall under the set of heuristics chosen for the evaluation. Most HEs use Nielsen’s 10 usability heuristics, which do not include any accessibility principles. If you notice serious accessibility issues during the review, alert the client and suggest an accessibility audit. And if you suspect in advance that the product might contain important accessibility issues, find a way to include accessibility in your heuristics; just keep the overall set of heuristics small.

Producing the report

At Nexer we often use PowerPoint for UX review reports. PowerPoint makes it easier than Word to illustrate specific issues with screenshots and callouts, and UX review reports don’t need to be text-heavy. We sometimes use Miro for recording and reporting findings, although we watch out for its tendency to shrink the fonts in stickies, and we avoid having too many different font sizes across stickies.

Choosing which method to use

Sometimes the client’s request makes it clear which type of review they want, but often clients don’t know. Sometimes a client will ask for a “heuristic evaluation” without knowing exactly what that involves; some people think the term refers to any type of UX review. So it’s best to get some clarity before starting the work. Figure 1 shows the process of collecting this information and making the decision.

First, determine whether the client wants a heuristic evaluation. If they don’t specify HE, all is well. If they do ask for an HE, make sure they understand what it involves, especially how many reviewers are required and how long the process is likely to take. They may say no, that’s not what they had in mind; or they may recognise that an HE will take more time and effort than their budget allows. If they still want to proceed with an HE, then everyone is on the same page and you can go ahead.

If you learn that the client doesn’t want an HE, either because they didn’t ask for one in the first place or because they weren’t aware of the resources needed to do one, find out whether you can draw on user journeys to guide the review. Maybe they have journeys in mind already, or maybe you can help them define some. Either way, if you have journeys you can use, conduct a journey-based review.

Finally, if you’re not doing a heuristic evaluation and you’re not doing a journey-based review, then voilà: you’re doing a general review.

Flow diagram of the process described in this article, for deciding which version of UX review to conduct.

Figure 1. Flow chart showing decision process for choosing a UX review method