Categories: design validation, guerrilla technique, planning user studies

Cheap, fast, reliable: you can have all three

Cost-effective, quick research techniques don’t always inspire confidence in your data. Perform many small incremental studies to build reliability over time. 

My wife races bicycles. As with most sporting goods, bike components come with any two of the following three properties: strong, light, and cheap. If you’re a serious racer, you normally compromise by spending money.

With lean user experience work, the three variables are cheap, fast, and reliable. You want the research to be cost-effective for your startup and you need the answers as soon as possible, but you also need confidence that the research gives accurate results.

Your compromise won’t be spending money, and the research has to be timely, so the question becomes: how can you get feedback to the product team quickly and cheaply and still feel confident in its reliability?

Incremental research

The answer is to build up the big picture with your research piece by piece, very much like you build up the product story by story.

  • Each piece of research is cheap and fast
  • Each piece answers specific questions that are preventing the team from moving on
  • In aggregate, the observations back each other up and provide the reliability you need

Build a research backlog

As with your stories, you need to create a backlog of research questions. The easiest way to do this is to turn it into a data exploration project for the whole team. What questions do they have? How would they propose answering them? (If you prefer to track this digitally rather than on cards, there’s a rough sketch of what each card holds after the list below.)

  • Get each team member to write down the questions they have about the stories they’re working on, one question per card
  • Add these questions to a “user research backlog”
  • Just like with story cards, write down the “test” for each question on the back of the card
    • One question might have several potential tests 
    • Some questions may not have any tests (yet)
    • Some questions may be answered only by a combination of tests
  • Team members may need help figuring out a good test. Here, it helps if you have at least some background in user research, but you can always use this cheat sheet to help you out. 
  • At any point, team members can add new questions, and the team can reprioritize the backlog
    • Work for stories happening in the next sprint gets higher priority
    • Work that answers lots of questions at one time gets higher priority
    • Work that answers big questions and show-stoppers gets higher priority
  • Plan out what usability work you can run each sprint to get answers to the team’s questions.
    • This is where the story metaphor breaks down somewhat: one piece of research might answer several research questions, so there isn’t a fixed number of points per question. Instead, you plan research to answer the top-priority questions while staying aware of which other questions that research will also answer. 
  • As you get answers, feed them back to the team – obviously one-on-one to the person working on that story, but also in summary during stand-up so the whole team knows where they stand on user issues. 
    • Your question cards are still tokens, so move them to an “answered” stack and write the answer you got from user research on the card. 
    • As you run more studies and gather more corroborating or contradictory evidence, add it to the already answered questions. This lets you build a level of confidence in each answer: many pieces of corroborating evidence give strong confidence, while several pieces of seemingly contradictory evidence show that more digging is needed. 
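
If the team does track this backlog digitally, each card boils down to a tiny data structure: the question, its candidate tests, the answer, and the evidence behind it. Here is a minimal Python sketch of that idea; the class and field names (QuestionCard, Evidence, blocks_story, and so on) are made up for illustration and aren’t tied to any particular tool.

    from __future__ import annotations

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        """One observation from a study, noted as backing up or contradicting the answer."""
        study: str          # e.g. "usability study, sprint 4" (hypothetical label)
        corroborates: bool  # True if this observation supports the current answer

    @dataclass
    class QuestionCard:
        """A research question card: question on the front, tests and answer on the back."""
        question: str
        tests: list[str] = field(default_factory=list)  # candidate ways to answer it
        blocks_story: bool = False                       # tied to a story in the next sprint?
        is_showstopper: bool = False
        answer: str | None = None
        evidence: list[Evidence] = field(default_factory=list)

        def confidence(self) -> str:
            """Rough confidence from corroborating vs. contradictory evidence."""
            if not self.evidence:
                return "unanswered"
            pro = sum(e.corroborates for e in self.evidence)
            con = len(self.evidence) - pro
            if con and con >= pro:
                return "contradictory - dig deeper"
            return "strong" if pro >= 3 else "tentative"

    def prioritize(backlog: list[QuestionCard]) -> list[QuestionCard]:
        """Show-stoppers first, then questions blocking next-sprint stories, then the rest."""
        return sorted(backlog, key=lambda card: (not card.is_showstopper, not card.blocks_story))

The prioritization rule is deliberately crude; the point is simply that the question, its tests, and its accumulated evidence travel together, just as they do on a physical card.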

You can get the team involved in observing and interpreting the results. This makes everyone more user-aware. It also helps them see that some questions don’t get answered in one go: instead you chip away at the question piece by piece, and there’s a cost-benefit trade-off to each piece of research that you do.

The research backlog dictates the type of work you do

If many of the high-priority questions in your backlog are asking for behavioral data (how users work, what types of information they use, etc.) then it’s probably time for some site visits or a survey.

If instead the high-priority questions are about individual feature-level items, it’s probably time for some prototype or code-based usability studies.

Again, this cheat sheet should help you work out what types of study to run to answer the most questions at one time. Often, questions can – and should – be answered in more than one way. A usability study and a site visit will give two different perspectives on the same problem.

Aggregate the data for reliability

You can gain confidence in the results of your studies by pulling together data from different study types. Hopefully the different data points corroborate each other. If they don’t, it’s time to tease apart the problem. By finding out what differed between the pieces of research (different prototype, different persona, different study type, etc.), you can begin to understand why users responded in contradictory ways.
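
To make that teasing-apart step concrete: group the observations for a single question by whatever differed between the studies and look at where the disagreement lines up. A toy Python sketch, with made-up observation records rather than real study data:

    from collections import defaultdict

    def support_by_attribute(observations: list[dict], attribute: str) -> dict:
        """Share of observations supporting the current answer, grouped by one study
        attribute (study type, persona, prototype, ...). A large gap between groups
        suggests that attribute explains the contradictory results."""
        groups = defaultdict(list)
        for obs in observations:
            groups[obs.get(attribute, "unknown")].append(obs["supports_answer"])
        return {value: sum(flags) / len(flags) for value, flags in groups.items()}

    # Hypothetical observations answering the same question from two study types
    observations = [
        {"study_type": "usability study", "persona": "racer",    "supports_answer": True},
        {"study_type": "usability study", "persona": "commuter", "supports_answer": True},
        {"study_type": "site visit",      "persona": "racer",    "supports_answer": False},
    ]
    print(support_by_attribute(observations, "study_type"))
    # {'usability study': 1.0, 'site visit': 0.0} -> the study type explains the disagreement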

Rather than being frustrated by contradictory results, you should be pleased. A lack of agreement in the data indicates that there’s a hairier problem behind one of your questions than you expected. Digging into that area is likely to uncover an issue that would have caused users real pain if you’d released the product as-is.