So you want to “run a study” … the cheat sheet

Your team decides they need to “run a study.” They don’t know what that means, and they are relying on you to set it up. That’s a good problem to have. Use this cheat sheet to help you out.

Jump right to the cheat sheet, or read on to learn more about how to narrow down the types of research you should do.

Three types of studies

Choose the right method for your question: different usability methods are better at answering different types of questions. Let’s narrow down the list of methods you could use by first working out what type of question you have.

  • Comparison (get users’ preference between designs, or measure their comparative performance)
  • Attitude (feelings about an existing product or new features; preferences)
  • Behavior measuring (how do users work today? are they successful with our new interface?)

Of the three types of questions, behavioral studies normally require the least interpretation, so they are the easiest type for a team new to user testing to run and understand. You will see issues as they happen, and it’s normally pretty easy to figure out why an issue is occurring.

Comparison questions can be relatively simple to ask, but the big problem is working out what to do with the results. Normally, responses to this type of question will not tell you why users preferred something or were more efficient with a certain interface, so you have to do further research to learn how to replicate the results in the future. 

Once you start asking attitudinal questions, you have to put checks and balances in place to make sure you aren’t coloring the results with your preconceptions. That’s often very hard for a team who haven’t had much experience working with users.

How developed is your product?

You will use different techniques for early exploratory work than for studies of an existing system.

  • Formative studies try to find out what it is that makes users tick. You use the data you gather to make a design. 
  • Summative studies measure whether your design was effective. You use the design you built to gather data.

Early in the development cycle, before you have a design, your formative studies will tell you what to build. Once you have something built (paper prototype or code), you run summative studies to see how well it worked. Obviously there is some crossover. You can still learn new things that inform the design process during a summative-style study.

What are you going to do with the answers?

Decide what you will use the data for.

  • Quantitative studies give you “what” answers: hard numbers. These are useful if you are comparing the effectiveness of two alternatives, or if you want to make a cost justification, but they don’t teach you much about how to do good design.
  • Qualitative studies give you “why” answers: more abstract data like user quotes, goals, or issues. Analysis of these results is more open to interpretation, but is easy enough if you add a bit of structure. Qualitative information is good at giving you design rules for your product.

Formative studies tend to give you qualitative results. Summative studies tend to give you quantitative results. 

How much time/money do you have?

Some usability methods are fast and cheap, others not so much. Often the trade-off is how confident you can be in the results. Sometimes it’s OK to get just a rough understanding of users’ behaviors (an early usability study). Sometimes you need to quantify everything in great detail (a benchmark test that will be used to track behavior over time).
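
For a sense of why precision costs money: with the usual normal-approximation formula for a proportion, halving the margin of error roughly quadruples the number of participants you need. Here is a minimal sketch; the 70% completion rate and the margins are made-up numbers.

```python
# A minimal sketch: participants needed to estimate a task completion rate
# within a given margin of error at 95% confidence (normal approximation).
import math

def sample_size(expected_rate: float, margin: float, z: float = 1.96) -> int:
    """Smallest n whose 95% confidence interval half-width is at most `margin`."""
    return math.ceil(z ** 2 * expected_rate * (1 - expected_rate) / margin ** 2)

print(sample_size(0.7, 0.10))  # ~81 users for +/-10 points around 70% completion
print(sample_size(0.7, 0.05))  # ~323 users to halve the margin: 4x the cost
```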

How much experience do you have? 

Some techniques are suitable for anyone to use. Others take training, practice, special equipment, or even specific qualifications before they can be used successfully. That means you may be able to run some studies yourself, but need help with other methods, or even have to hire a vendor to run them for you.

For agile teams on a minimal budget…

If you are trying to work out what to build, or how to make your current product better, start with formative, qualitative, behavioral techniques. These are the easiest to run, will give you a bunch of general-purpose data, and will expose the whole team to user research in a positive way.


Behavior measuring methods

For each question below: the methods that can answer it, and the dangers to watch for.

Task requirements: How do users work today? Who are the users for this product?
  • Methods: Field studies (watch users’ daily lives); site visits (watch product use); diary studies (record experiences over time)
  • Dangers: Interviews away from the place of use turn into say, not do. Careful analysis is required. Diary study analysis is time consuming.

Likelihood of success: Are users successful with this product?
  • Methods: Usability study using a paper prototype, code, a live site, or a competitor product; lab based or online/remote
  • Dangers: Several opportunities for bias.

Likelihood of success (fast version): Will users be able to succeed with this product?
  • Methods: Inspection techniques: heuristic evaluation, cognitive walkthrough
  • Dangers: You have to keep personas in mind; the team are not real users. Heuristics can be hard to interpret.

Where people look: Which areas of the screen draw attention? Do users notice our promotion?
  • Methods: Eye tracking
  • Dangers: Requires a lot of training to do well (use a vendor!). “Seeing” doesn’t imply “understanding”.

Navigation structure: How do users group the information on the site?
  • Methods: Card sorting; reverse sorting (see the analysis sketch after this list)
  • Dangers: Results can be hard to interpret. Card sort output isn’t directly equivalent to a navigation structure (it still needs design input).

Usage: How do users work with the current product?
  • Methods: Instrumentation, analytics, data mining
  • Dangers: Often hard to tie data points to behavior (does a click in a certain location indicate user understanding?). Tells you what, but not why.
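
On the card sorting danger above: a common way to start interpreting an open card sort is to cluster cards by how often participants piled them together. The sketch below is a starting point, not a finished tool; the cards, piles, cluster count, and distance measure are all assumptions you would revisit with real data, and it needs scipy installed.

```python
# A minimal sketch of analysing an open card sort with hierarchical clustering.
from itertools import combinations
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical data: each participant's sort is a list of piles (sets of cards).
sorts = [
    [{"login", "reset password"}, {"pricing", "plans", "billing"}],
    [{"login", "reset password", "billing"}, {"pricing", "plans"}],
    [{"login"}, {"reset password"}, {"pricing", "plans", "billing"}],
]
cards = sorted({card for sort in sorts for pile in sort for card in pile})
index = {card: i for i, card in enumerate(cards)}

# Distance between two cards = share of participants who did NOT pile them together.
n = len(cards)
dist = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
for a, b in combinations(cards, 2):
    together = sum(any({a, b} <= pile for pile in sort) for sort in sorts)
    dist[index[a]][index[b]] = dist[index[b]][index[a]] = 1 - together / len(sorts)

# Average-linkage clustering, cut into two candidate groups (an arbitrary choice).
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
for card, label in sorted(zip(cards, labels), key=lambda pair: pair[1]):
    print(label, card)
```

The two-cluster cut is arbitrary; with real data you would look at the whole dendrogram (and at which cards move between piles) before proposing a navigation structure.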

Comparison methods

Preference: Which way do you prefer to work?
  • Methods: Survey questionnaire (in person, by phone, or online)
  • Dangers: What people say isn’t what they do. Good surveys are hard to design. Often needs binary answers to messy questions.

Preference: Which graphical treatment resonates?
  • Methods: Focus group
  • Dangers: Asks users to predict future behaviors. They are not using the product. Very hard to moderate well. What people say isn’t what they do.

Performance: Which design gives us more task completions?
  • Methods: A/B testing; site logs for simple comparisons (see the significance sketch after this list)
  • Dangers: Tells you what, but not why (pair it with a direct observation technique). Measures success but not satisfaction.
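
On the A/B testing row: the “what” is only worth acting on if the difference between designs is bigger than chance. Below is a minimal sketch of that check using the standard two-proportion z-test; the completion counts are made up.

```python
# A minimal sketch: two-sided two-proportion z-test for A/B task completions.
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: both designs complete at the same rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: design A completed 430/1000 tasks, design B 480/1000.
z, p = two_proportion_z(430, 1000, 480, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # here p is about 0.02: unlikely to be chance
```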

Attitude methods

Reaction to an existing product: What do you like best/least about X?
  • Methods: Desirability studies
  • Dangers: Interpreting the results. Understanding issue severity.

Desires/emotions for a new product: What features should be in the next release?
  • Methods: Participatory design; context mapping
  • Dangers: Issues turning the output into a design. Surfacing “tacit” knowledge.

Desires for new features: Which feature is most important to you?
  • Methods: Conjoint analysis (trade-off studies)
  • Dangers: Hard to run well (use an expert). Often used to compare too many options.

Reaction to proposals: How much does this new design resonate?
  • Methods: Interview; focus group
  • Dangers: Asks users to predict future behaviors. They are not using the product. Very hard to moderate well. What people say isn’t what they do.

Reaction to an existing product: What are the biggest pain points?
  • Methods: Read through message boards, feedback e-mail, and support calls; count the frequency of certain topics; build an affinity diagram of issues (see the counting sketch after this list)
  • Dangers: Interpreting postings (sarcasm, etc.). Understanding issue severity. Inability to ask direct questions. Skewed sample (angry customers).
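
On the last row: the simplest structure you can add is a frequency count of known topics. Here is a minimal sketch; the topics, keywords, and messages are all hypothetical placeholders for a real feedback export.

```python
# A minimal sketch: counting how often known topics show up in raw feedback.
from collections import Counter

# Hypothetical topics with hand-picked keywords for each.
TOPIC_KEYWORDS = {
    "checkout": ("checkout", "payment", "declined"),
    "search": ("search", "can't find", "no results"),
    "performance": ("slow", "loading", "timeout"),
}

# Hypothetical messages; in practice, read these from your export.
messages = [
    "Checkout keeps failing, my payment was declined twice.",
    "Search is useless, no results for anything.",
    "Pages load so slow that the app feels broken.",
]

counts = Counter()
for message in messages:
    text = message.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            counts[topic] += 1

# Frequency is a proxy for how loud an issue is, not for how severe it is.
for topic, total in counts.most_common():
    print(f"{topic}: {total}")
```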