UX Research - Ask Me Anything Series

06 Jul 2017 10:24 AM | La Tosca Goodwin (Administrator)

On January 27, 2017, our special guest mentor on the mentorship Slack channel was Amanda Stockwell, owner of Stockwell Strategy and longtime researcher and strategist. Amanda answered questions pertaining to UX research. The following is a summary of that Slack conversation.

Research: Formative vs. Summative User Testing

by Amanda Stockwell:

Topics discussed:

  • What’s the difference between Formative and Summative User Testing?
  • Recruiting the right users
  • Formulating questions well, especially when you need to provide specific pieces of domain knowledge
  • Preventing bias and moderating tips

The following captures the key questions and takeaways from the session.

Q: Can you start by explaining the difference between Formative and Summative User Testing?

A: Formative research is done at the onset of a project and is used to explore existing problems and general needs, and to gather insights that help guide the way.

Summative testing is done after a solution is complete (or at least partially complete), when you're looking to verify how successful you were at hitting the goals you set.

Sometimes the methods are the same, but the goal of the research is different.

Q:  I have heard arguments for not needing to recruit from a target audience so long as you properly set up the initial scenario. How would you respond to that argument?

A: I wholeheartedly believe that you get better research results when you use real, representative users. Theoretically, if you're working on something online and someone is familiar with the internet, they should be able to figure most things out, and you'll be able to identify the absolute worst, most glaring issues.

However, the most glaring issues for the general public may not be a big deal to your user set, and you might completely miss things that are big issues for your users. It’s especially important to recruit your specific target users when figuring out an overall workflow, content, labeling, organization, and navigation.

Q: Can you talk about how to best write usability test tasks?

A: Usability testing is all about exploring how easy (or difficult) something is to use, and it is specifically designed to have participants interact with something (a website, mobile app, paper prototype, etc.) and perform tasks. The researcher observes the participants’ interactions and may give them specific tasks to complete.

The first tip is to make sure you set up a scenario that makes sense to the real users and is written from their perspective. Give the participant some context so they can connect your website/application/whatever to their real goals and get in the mindset of truly performing the task.

You then need to create tasks that would be realistic for your test participants’ real-life goals. For instance, if you’re testing a sporting-goods website with budget-conscious parents, a reasonable task might be something like, “Find your child a pair of soccer cleats for under $35.”

This task is specific, but it’s not biased and won’t lead the participants in any way. Be careful to avoid including terms that a participant could simply look for on the page, and avoid moving users down a path you’d prefer them to take.

Q: Do you have any suggestions for writing usability test questions when you need to provide the user with some contextual information (such as a credit card number or account ID) without leading them?

A: If there is general domain knowledge that you need to assess in your participants, you can start with a short set of interview questions designed to glean what they already know, then have a few versions of tasks written and tailor the way you ask the questions to the level of knowledge they already have.

If there is specific information they need along the way, you can just give them the general tasks and tell them to let you know when they need specific data.

For instance, I recently did a study on the tax forms people need to fill out to be able to serve liquor. I told participants to go about the tasks as they thought they would and, any time they thought they needed a piece of information (like a tax code or account information), to let me know, and I would provide it at that point. I had index cards printed with that information so I could hand them over one at a time, but if participants didn’t know to ask for something, I didn’t provide it.

Q: What is the best way to prevent bias on the part of the moderator/UX designer/researcher?

A: The first thing is to be aware that everyone is naturally inclined to lead people one way or another, so you need to practice writing questions in a neutral way. Small, subtle differences in wording make a big difference. For instance, "Tell me about x" is WAY more open than something like, "How did you like..."

Regardless of how well-written your script is, it can be tempting to stray from the plan or blurt things out. To help with this, practice staying quiet as much as possible and find ways to keep your hands busy, such as holding the script, taking notes (even pretend ones!), or gripping your hands behind your back.

It’s also helpful to get feedback from colleagues and perform a pilot session, especially with usability tests. Another person can help you identify wording issues or places that might be confusing. Although not always possible, it can also help to record your research sessions and watch yourself later.

Really, moderating is a difficult skill, but the best way to get better is to practice, practice, practice!

