Design validation is not a phase; it’s a continuous part of the process. Testing your designs tests your assumptions and lets you make quick course corrections.
In traditional user testing, you wait until you have accumulated a set of features before recruiting participants. As a result, user testing tends to lag behind development by a month or more.
Worse still, some waterfall processes wait until the entire product is “finished” before user testing it. At that point, what chance is there of making any changes at all, let alone ones that test your assumptions about the market-readiness of your designs?
Enter the revolving door
Revolving door studies turn this notion on its head. Here, you create a pool of users and then schedule around five people from that pool to test your product towards the end of every sprint, or sometimes every second sprint if you’re on a very short cycle. This narrows the gap between gathering usability data and acting on it from months to days. The team will still be working on the items that were tested, so they can quickly change course if the user studies reveal issues.
At the time you schedule your participants, you probably don’t know what features they’ll be working with when they come in. That’s fine. Work always expands to fill the available time, and user testing tasks always expand to fill the available session slots.
What’s in a revolving door study?
By the time the session date rolls around, you will most likely have several items for participants to interact with, and you can mix and match media types within a single session. These might include:
- A flaky alpha build that just about stands up long enough to get through the task, provided you select exactly the right options.
- Production-ready code to provide final validation of the entire experience.
- Paper sketches of early interface ideas (paper prototypes) that will be built in the next sprint.
- Another company’s existing product, so that you can see how well a particular interface style or interaction method works for your intended audience.
It can be a bit jarring for participants to move from one medium to another, so try to keep the number of swaps between different media types to a minimum for each user.
Suggestions for keeping team members involved
Keeping the studies on the same schedule (“Second Thursday in the month”, or “Every other Wednesday”) gives the team some consistency and ensures they free up their schedules to attend.
Maintaining a prioritized backlog of research questions (and how you plan to answer them) means that team members know when the things they care deeply about will be placed in front of users.
Team members get really invested in the studies because you can give them near-immediate feedback on the stories they’re currently working on. You can foster that investment by introducing a little friendly competition (whose stories get the best user reception?). If you are running the studies in your own offices, pizza or cookies in the room where team members watch sessions also works well as an incentive.
With a steady flow of participants, there is always someone available to help answer the team’s questions – and those participants are far better qualified to answer them than the heavily invested team members themselves.
As the Lean Startup community knows, products are only hypotheses until validated by the market. Bringing early versions of your product to members of that market on a regular basis before you release lets you test the hypothesis earlier and iterate as necessary to ensure you deliver something that has value to your users and your business.
Cover image: flickr/thomashawk – worth following for the story