Whoa! Did You See That? Collaborative Data Collection and Analysis

Observing a usability test is like witnessing an accident: everyone sees it go down differently. One of the most important steps in usability testing, yet one that many firms skip, is conducting a thorough debrief with the observers after testing is complete. Research should not be performed in a vacuum. The data and feedback collected from end users are not complete without understanding the observers’ perspectives, which often differ from the moderator’s. Running an observer debrief involves getting all the observers (and even the people who did not observe, but should have) into a meeting where we review the notes for each test and document the observations and their perceived implications. In this session, you will learn several methods for running a debrief and when to use each, and you will walk away better prepared to get the most out of your research!

After attending this practical session, attendees will be able to take what they have learned and execute a thorough debrief the next day. I will discuss various methods for running a debrief, spending more time on one method in particular: the Wish for / How to / What if method. For those who are not familiar, this method asks observers to frame an implication for each observation as Wish for… (W4), How to… (H2), or What if… (Wif). The benefit is that stakeholders do not fixate on actual solutions at this stage, but instead open the discussion up to more out-of-the-box thinking.

What attendees will learn:

– The importance of running a debrief

– How to run a debrief (various options)

– How to keep observers engaged during testing

– The difference between an observation, an inference, an opinion, and a recommendation

– Pitfalls to avoid

Who will benefit from the presentation and why:

Anyone who moderates usability tests or research studies will benefit greatly from this session. They will learn how to distill the key findings from their observers and keep those observers engaged throughout the testing process.
