10 Mistakes to Avoid When Conducting a Usability Study

Facilitating a usability study is a lot like juggling.

Not only do you need to pay attention to what’s happening on the screen in front of you, but you also need to:

  • Uncover everything you need to learn
  • Put the participant at ease
  • Keep one eye on the clock
  • Troubleshoot (inevitable) technology issues
  • Remember to hit “record”
  • Ask thoughtful follow-up questions
  • Remember to ask questions from observers
  • Circle back to something the participant said earlier

Becoming a confident, highly skilled usability study facilitator takes lots of practice.

Here are my 10 best tips:

1) Don’t ask leading questions.

Leading questions—questions that subtly or overtly tell the participant the “right” answer—prevent deeper understanding.

Avoiding leading questions is one of the most difficult aspects of testing. It’s important to pause and think about how to carefully word unscripted follow-up questions.

To avoid accidentally asking a leading question, I use a standard set of follow-up questions after each task, such as:

  • Where did you look first / what did you notice first?
  • Is this what you expected to see / experience? If not, what were you expecting instead?
  • What is the purpose of this screen?
  • Which information is most important to you on this screen?
  • What information is missing from this screen that would help you better understand this topic or help you make a decision?
  • What questions do you have about what you’re seeing here?

These types of questions help me understand where the site or app might not be aligned with the user’s expectations. (Is the most important content at the bottom of the page? Is key information missing? Are they seeing what the business team wants them to notice?)

Example leading questions:

  • Did you notice the search button?
  • Is it clear how to watch that video?

By calling an element a “search” button, a “video,” or even just a “button,” you miss the chance to identify whether the user understood the element’s purpose, functionality, or meaning.

If you point to the magnifying glass icon and ask, “Is it clear that this is a search button?” you miss that opportunity. It’s better to ask:

  • What is the purpose of this area?
  • What do you think [name of company] is trying to communicate to you here?
  • How would you describe this to a friend?

If I don’t yet feel confident about an area I’m testing and haven’t learned everything I need to, I will next try:

  • Moving on, then circling back to the area in question later
  • Giving the participant another task to complete to see if they use or notice the element in question
  • Asking the participant to show me where they expect the answer / solution to be
  • Asking the participant to explain (in their own words) the problem they’re trying to solve or the task they’re trying to complete
  • Asking the participant to describe a similar experience or website, and then asking what they notice is different about this experience

2) Don’t create leading tasks.

In my opinion, a usability study that uses leading tasks is a waste of time and money. A usability study is already somewhat artificial—having someone use a website while a room full of people watch is pretty unnatural. Telling people what to do pushes it over the line.

For real learning, tasks should always be written in a way that forces the participant to THINK about the problem they’re trying to solve.

For example, instead of telling participants which exact links they should click, say, “If you wanted to look for more information about [topic], how would you go about doing that?”

Example of a leading task:

“Go to the furniture section of the site, then show me how you would place a sofa in your shopping cart.”

The participant selects the first sofa they see, then adds it to their cart.

What did you learn? Not much.

Here’s what you might miss out on:

  • Navigation labeling and wayfinding challenges (e.g., category labels)
  • Product filtering preferences and challenges (e.g., which characteristics are important)
  • Product comparison challenges (e.g., how do users compare, which information is most important to compare)
  • Product information needed to make a decision (e.g., dimensions, color options, shipping costs, warranty information, return policy)
  • Call to action challenges (e.g., button placement)
  • Merchandising issues (e.g., selection, value / cost)
  • Influencers (e.g., budget considerations, competitor comparison shopping)

Example of a non-leading task:

“You would like to replace your existing sofa. Show me how you would find the perfect one.”

This type of task is more realistic and will yield many more useful insights.

It forces the participant to imagine a solution that will realistically work for their taste, budget, needs, and space limitations. It reveals the barriers and other unforeseen influences in their decision-making process.

3) Don’t use website keywords.

In your tasks and follow-up questions, don’t use the labels that are on the site or app.

Rather than “Show me how you would return to the home page” when there is a HOME link staring them in the face, try “How would you return to the front page of the site?”

Sometimes, for clarification, it’s necessary to call something what it is called on the site. When that is the case, I arrange the tasks in an order that finishes studying the usability of that element before I start calling it by its name. Is it the right label? Is it noticeable and understood? We cannot know for sure unless we test it.

To get around calling something what it’s labeled:

  • Use synonyms
  • Describe it by color or location
  • Point to it
  • Use the word(s) the participant uses

Also avoid calling elements an icon, button, or link. Don’t point to the envelope icon and then ask whether they understood the “email” feature. And don’t inadvertently call something a link; the participant may not have known that text was clickable.

Also, if your participant keeps mispronouncing something on the site, or refers to an element as something else—don’t correct them. Just go with the flow and use it, too, if possible. It doesn’t help anyone to make a participant feel dumb while everyone is watching.

For example, I once had a phone interview participant refer to LinkedIn as “Linkadink” several times. I just went with it and called it that, too. (And it was actually fun to say!)

4) Answer questions with more questions.

When I ask participants whether they have any questions about what they’re seeing, it’s a trick question. I’m not actually going to answer their questions. Rather, I note their questions as content gaps or use them as an opportunity to give a follow-up task.

Common participant questions, and my answers:

They ask: Am I doing this right? Should I click this?

I answer: Do whatever you normally would do if I wasn’t here watching you.

They ask: What does this do? What is this for?

I answer: What do you think it does?

They ask: How does …?

I answer: How do you think it should work? What would work best for you?

When in doubt about whether to “help” the participant, answer their question with a question.

If the participant tells me they would call customer service, I pretend to be customer service and have them talk through their situation or question.

5) Don’t feel pressured to fill silence.

Get comfortable with silence and long pauses. Give participants time to think and finish (or expand) on their thoughts.

If the silence goes on too long (a minute, maybe), then I will prompt:

  • What are you thinking about?
  • What are you looking at?
  • Tell me about that.

6) Don’t pull a Kanye West. Let participants finish.

Related to long pauses… Don’t interrupt. Let the participant finish their thought, then count to two in your head just to be sure they’re done talking. Often, it’s the next couple of sentences that reveal a goldmine of insight.

As the facilitator, my goal is always to be as quiet an observer as possible so that I don’t interrupt their train of thought or flow.

If the user is clearly stuck, has gotten off task, or hasn’t found the right destination after the third attempt, then I will prompt with something like:

  • What are you looking for right now? (Then redirect to the task at hand, if needed.)
  • Where do you expect to find the answer? Where should it be?
  • What word or phrase are you looking for?
  • Is this something you’ve looked for in the past? How have you completed this task before?

Having said all that… I DO interrupt and redirect the conversation if a participant is rambling, repeating themselves, or has gotten completely off topic.

7) Don’t treat the usability study like an A/B test.

I never ask which version of a design or content a participant prefers. A usability study is not a concept test, a focus group, or an A/B test (unless you’re conducting the study with literally hundreds of participants). With just 8-10 participants, the sample is simply too small for even reliable directional data.

You CAN test the usability of different versions, though ideally not all versions should be tested by the same participant. But I’m not a fan of that approach either. Rather than testing different versions, I recommend conducting an iterative study: test with 3-4 people, identify the problems and solutions, update the prototype, then test again with 3-4 people. Repeat this process for as much runway or budget as your project allows.

8) Don’t ask for the participant’s opinion.

I avoid questions that start with the words “What do you think…” or “What are your thoughts about…”

Those types of questions are valid, just not in a usability study. Thoughts and feelings-type questions are more appropriate for focus groups, one-on-one interviews, or even a follow-up survey or conversation.

The job of the usability study is to identify whether the system, site, or app is doing its job, not how it makes participants feel.

I DO think it’s appropriate to ask questions like:

  • What do you think that photo is trying to communicate?
  • What do you think is the purpose of this page?

Those types of questions are very different from:

  • What do you think about the placement of this button?
  • What do you think of this screen?

The first set of questions is still trying to uncover whether the correct information or intent is coming through clearly.

The second set of questions is trying to understand whether the participant likes something or not.

9) Don’t ask participants to design alternative solutions.

The vast majority of people are not designers. It’s difficult (and sometimes uncomfortable) for most usability participants to be put on the spot that way.

10) Don’t talk so much.

Whether I’m conducting a customer interview or a job interview, I aim for an 80/20 listening-to-talking ratio. That is, I listen 80% of the time and talk 20%.

You’re paying a lot of money to have the privilege of your customer’s undivided attention for an hour. Stop talking and listen to what they have to say!

I find it’s usually not even helpful to the conversation to chime in with “me too” statements. Participants don’t care about your experiences. Rather, I make sure they feel heard and valued by using active listening cues:

  • Smiling
  • Nodding
  • Mm-hm
  • That’s interesting, say more, wow (and so on)

When talking with participants in person or via video chat, I make eye contact, point my knees and feet in their direction, and take minimal notes so they’re not looking at the top of my head the whole time.

In Conclusion

There is a lot to think about during a usability test session. A well-designed discussion guide is critical to ensuring the highest return on investment and maximum learning.

Author: Kristine Remer

Kristine Remer is a CX insights leader, UX researcher, and strategist in Minneapolis. She helps organizations drive significant business outcomes by finding and solving customer problems. She never misses the Minnesota State Fair and loves dark chocolate mochas, kayaking, escape rooms, and planning elaborate treasure hunts for her children.