Categories
methods, planning, user studies, user testing

Build the big picture from many small, fast studies

Incremental research aggregates data from frequent, small, fast studies to check you’re on track. Running large numbers of participants in any one study is a waste of time and money.

If ever there was a blog post that deserved to be accompanied by clip art of jigsaw puzzle pieces, this is it. Each piece of research you run answers one piece of the puzzle. Plugging the pieces together gives you insight into the big-picture issues.

Five users is sufficient for a usability study

I’m often asked by people who are used to seeing results from market research why it’s OK to run only 5 participants per study. In typical market research studies, surveys are distributed to several hundred respondents, and qualitative work involves at least 2 focus groups of 8 participants in each of 3 different cities – nearly ten times the number of participants I’d suggest. There are two big differences between typical market research work and typical usability testing work.

  • What is being measured? Market research tends to ask questions about “liking” – most often asking respondents to project into the future. Usability testing instead observes “acting” – participants completing representative tasks. Watching real behavior reduces the noise in the answers you get.
  • What questions are you asking? In early usability work we’re most often trying to uncover big problems. This is formative work. If a task causes a problem for more than a few representative study participants, it’s likely to be a problem for many users. In contrast, market research is most frequently summative: it tries to find answers that will predict how a population will behave with some degree of confidence.

So for formative studies where you are observing real user behavior, you can get away with fewer participants.

If you want more detail on the magic number of five participants, see Jakob Nielsen’s analysis. Obviously, this number varies for different types of studies (card sorts need around 15 participants, for instance) or if you have distinct user groups with different characteristics (B2B and retail purchasers, say).
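If you want a feel for where the number five comes from, here is a rough sketch of the problem-discovery model behind Nielsen’s analysis. The 0.31 figure is Nielsen and Landauer’s cross-project average for how likely a single participant is to hit a given problem; it is not a universal constant, so treat the output as illustrative.

```python
# A rough sketch of the problem-discovery model behind Nielsen's analysis.
# lam is the chance a single participant reveals a given usability problem;
# 0.31 is Nielsen and Landauer's cross-project average, not a universal constant.

def share_of_problems_found(n_participants: int, lam: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants."""
    return 1 - (1 - lam) ** n_participants

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants -> {share_of_problems_found(n):.0%} of problems")

# With lam = 0.31, five participants surface roughly 84% of the problems,
# and the curve flattens quickly after that - which is why extra participants
# in a single formative study buy you little.
```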

Vary your methods

One beautiful side effect of running smaller studies is that you can run more of them, and a wider variety of research types.

Some questions will be best answered by Web metrics. Some by interacting with users. Some need the intersection of metrics (for “what” information) and users (for “why” information).

This last case is one of the most interesting. Your metrics or instrumentation data gives you summative information about where a problem lies, but it often can’t point you to a good solution. Scheduling just five participants to perform tasks in the problem area will pinpoint the exact reasons for the issue and suggest several potential solutions, which can in turn be user-tested for verification.

Get the team on board

To make your research plan, start with the list of questions that need answering.   

  • Get the team together to list, on cards or sticky notes, all the questions they have about what you’re building.
  • As a team, prioritize each question by the urgency and importance of finding its answer.
  • Brainstorm ways to get answers – what study types will work best to get you the insight you need?
  • Create a “bucket” for each study type and group the questions that can be answered within each study (still in priority order), to maximize the benefit from each set of participants you bring in. 
    • It’s OK to duplicate question cards so that they live in each of the necessary buckets. 
    • You might even be able to make the question more specific for each different bucket it lives in.
  • Once you’ve run a study, determine which questions were answered and move them to an “answered” column. Write the answer on the question card. This keeps the ongoing research visible to all team members.
  • Maintain the research plan backlog as new questions get added and priorities change. 

It helps to set up a schedule of revolving door studies. This way, you know you’ll have a constant stream of participants coming in. Your research backlog dictates what research questions each group of participants is asked to help answer. If a key piece of functionality required for the highest priority study isn’t available, it’s easy to look to the next most important questions and form a study around those instead.
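If it helps to see the moving parts, here is a minimal sketch of that backlog as data: question cards, priorities, and study-type buckets, with the “pick the next study” step from the revolving-door schedule above. The field names and example questions are hypothetical placeholders; a spreadsheet or a wall of sticky notes serves exactly the same purpose.

```python
# A minimal, illustrative sketch of the research backlog described above.
# Field names and study types are hypothetical, not a prescribed tool.
from dataclasses import dataclass

@dataclass
class ResearchQuestion:
    question: str
    priority: int              # 1 = most urgent and important
    bucket: str                # study type, e.g. "usability test", "card sort"
    answer: str | None = None  # filled in once a study answers it

backlog = [
    ResearchQuestion("Can purchasers find the reorder flow?", 1, "usability test"),
    ResearchQuestion("Do our category labels match users' mental models?", 2, "card sort"),
    ResearchQuestion("Which checkout step loses the most visitors?", 2, "web analytics"),
]

def next_study(backlog, available_buckets):
    """Pick the highest-priority unanswered questions we can actually run right now."""
    open_questions = [q for q in backlog if q.answer is None and q.bucket in available_buckets]
    return sorted(open_questions, key=lambda q: q.priority)

# If the functionality needed for the top-priority study isn't ready, leave that
# bucket out and the next most important questions come up instead.
for q in next_study(backlog, available_buckets={"card sort", "web analytics"}):
    print(q.priority, q.bucket, "-", q.question)
```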

Benefits of multiple smaller studies

Each piece of research answers specific questions that are preventing the team from moving on. In aggregate, the observations back each other up and provide the reliability you need. Many studies with smaller numbers of highly representative users give you as much confidence that the results will generalize as one large study does, plus you get answers to many more questions along the way.