Heuristic Evaluation – no users required

Check that your product follows simple rules of interface design. It's fast, and it finds potential UI issues before your users do.

There’s no point running a usability test when you already know things are wrong with your design. A fast check-up against a set of best practices will give you plenty to fix.

If you don’t have time or budget for user testing, this is a partial alternative. If you are going to run user tests, this maximizes the value you get from the sessions because you will have fixed the basic problems before users find them for you.  

The ten 'commandments'

Jakob Nielsen’s 10 heuristics for interfaces are:

  1. Visibility of system status
  2. Match between system and the real world
  3. User control and freedom
  4. Consistency and standards
  5. Error prevention
  6. Recognition rather than recall
  7. Flexibility and efficiency of use
  8. Aesthetic and minimalist design
  9. Help users recognize, diagnose, and recover from errors
  10. Help and documentation

Learn more about what these all mean at Jakob’s site.
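
If you want to give each reviewer a scoring sheet, here is a minimal sketch (in Python) that turns the list above into a blank checklist. The CSV layout, the column names, the 0–4 severity column, and the `heuristic_checklist.csv` filename are illustrative choices, not part of Nielsen's method.

```python
import csv

# Nielsen's ten heuristics, as listed above.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Write a blank checklist that each reviewer can fill in per screen or flow.
# The columns are one reasonable layout, not a standard format.
with open("heuristic_checklist.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Heuristic", "Screen / flow", "Issue", "Severity (0-4)"])
    for heuristic in HEURISTICS:
        writer.writerow([heuristic, "", "", ""])
```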

Running an evaluation session

A heuristic evaluation is best done by several people. Any one individual will find only around 35% of the issues; five people will find around 75%, which is plenty for the team to start working on. Jakob suggests that each reviewer work on their own and then all the reviewers share their notes at the end. In practice, especially with team members who aren't experts in user experience, it's faster and more productive to run the session as a group.
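
If you want a rough feel for the diminishing returns, the back-of-envelope sketch below assumes, purely for illustration, that each reviewer independently finds about 35% of the issues. Real reviewers overlap heavily on the obvious problems, so observed coverage (like the roughly 75% for five people quoted above) grows more slowly than this naive model suggests.

```python
def estimated_coverage(reviewers: int, hit_rate: float = 0.35) -> float:
    """Fraction of issues found by at least one reviewer, assuming each
    reviewer independently finds `hit_rate` of them (illustrative only)."""
    return 1 - (1 - hit_rate) ** reviewers

for n in range(1, 6):
    print(f"{n} reviewer(s): ~{estimated_coverage(n):.0%} of issues found")
```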

  • You’ll all need to be looking at the same UI, so it’s best to project it on a wall (or share it in a web meeting if you aren’t co-located). For a large product, select just a single flow or section to work on.
  • To get everyone up to speed, describe the typical persona who would be using this interface, and the user scenario or task they are trying to perform.
  • One team member “drives” the UI and everyone applies the heuristics to each part of the UI.
    • If the UI doesn’t agree with one of the ten rules, note which part of the UI has the issue and why it’s an issue, then move on.
    • Don’t try to fix the issues during the session – capture all the issues (and all the locations where they occur) before you work out which ones to prioritize and how to fix them. A minimal issue-log sketch follows this list.
  • Step through the UI twice – once for context, then again for detail.
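
It helps to have one person (not the driver) keep a shared log of what gets captured. Below is a minimal sketch of such a log as a plain Python structure; the field names and the example entry are hypothetical, just one reasonable layout.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """One problem captured during the session - note it and move on."""
    location: str      # which screen, page, or step in the flow
    heuristic: str     # which of the ten rules it breaks
    description: str   # why it's an issue for the persona/task
    severity: int = 0  # optionally rated 0-4 after the session, not during

issues: list[Issue] = []

# Example of capturing an issue without stopping to fix it:
issues.append(Issue(
    location="Checkout - payment step",
    heuristic="Visibility of system status",
    description="No progress indicator after the user submits card details.",
))
```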

You won’t find all the issues. You will probably find slightly different issues than users would in a usability study. However, you will find the big issues. It’s funny how you can stare at the interface all day while you’re building it, but it’s only when you step through it in a formal session that the issues start jumping out at you.

This technique can be used at any stage in the development cycle, from paper prototypes through early builds to finished code. 

Heuristic Evaluations and Cognitive Walkthroughs

Compare and contrast this method with cognitive walkthroughs. Both are inspection methods (they don’t rely on having users available). Heuristic evaluation applies a set of rules to the interface itself, whereas cognitive walkthroughs attempt to find problems that the user would have when they try to complete their task.

The two techniques complement each other well. Heuristic evaluations really benefit from a focus on users and tasks, because that helps to ensure that the UI issues you raise are ones that users would care about. Decide up front whether you care more about interface consistency/correctness or task completion, then choose a heuristic evaluation or a cognitive walkthrough respectively.