Research: Usability Testing

Prasad Kantamneni
15 min read · Sep 23, 2022

Understand how customers interact with your product and identify usability issues.

Usability testing evaluates a product by testing it with customers. Typically, during a test, participants try to use your product while the researchers identify potential usability problems.

Usability testing is done to assess the design assumptions and usability of a product with real users. It typically captures insights into findability, understandability, and the value of a particular function or workflow. It involves planning, testing, and synthesizing the results.


Test Plan

The test plan covers all the variables you need to define before going into the study. Discuss it with the product leads to finalize it before moving to execution. A test plan typically comprises:

  • Defining the research goal
  • Identifying research questions
  • Defining the methodology
  • Research protocol
  • Recruitment
  • Defining metrics for evaluation
  • Stimuli planning
  • Discussion guide (the script)

Goal: Collaborate with the product team to understand the goal of this usability test. If the designs are ready, go through them to get the essence of the product and its UX.
Example of a research goal — Identify the usability issues a new user will encounter.

Scope: Mention the product and specify the eXperiences and other parts your test will cover, e.g., the purchasing eXperience or the website architecture.

Research Questions: Identify the key questions/assumptions that require answers to achieve the research goal.
For the research goal example above, the research questions can be:

  • “Are users able to navigate to the primary information easily on the homepage?”

Each research question can have several sub-questions that contribute to its answer. For the question above, these could be:

  • “Are users able to identify the navigation points?”
  • “Do users understand the naming and grouping in the navigation?”
  • “What is the users’ mental model while navigating through the interface?”

There can also be specific questions that can be documented. For example, “How findable is the search?” You need to identify which scenarios will help you to answer these questions.

Scenarios: After finalizing the research goal and questions, start documenting the scenarios that need to be tested for answering the questions. Also document any other background data required to understand the persona or scenario in a better way.
For example, describe the persona’s daily life: “John, a salesman, travels 3–4 hours a day to sell products. He uses this product while traveling.”

A usability session typically runs 45–60 minutes, so try not to have more than five (5) scenarios for a test.

Methodology:
There are two methods of conducting a usability test:

  • Moderated Usability Test.
  • Unmoderated Usability Test.

Moderated Usability Test: This type of test is conducted in real time by a facilitator with a participant. Participants are assigned tasks to execute, and the facilitator can clarify or dive into issues with contextual, spontaneous questions while or after tasks are completed. The role of the observer is to monitor the participants’ behavior and report the outcome of the test.

Unmoderated Usability Test: Participants use the product/designs in the absence of observers or facilitators. The participant’s voice, screen activity, and facial expressions may be recorded using automated software, and observers analyze this information to derive findings. Some tools allow you to ask pre-defined follow-up questions by displaying them after each task or session; you can also send questions by email after the tasks are completed. In both cases, the questions are the same for every user since they are pre-defined, so you will not have the flexibility of asking contextual questions that give the detailed “why”.

Determine which method to use based on the questions to be answered and the investment needed.

Plan the equipment required and the roles of your team according to the methodology chosen. If it is a moderated study, plan who the facilitator is and who the observers are.

Study details: Based on the methodology, plan the schedule, location, and session time for the study.

Schedule and Location: State when and where the test will be conducted. Specify the plan, including how many sessions per day and at what times. We recommend no more than three (3) moderated sessions a day.

Sessions: The length of the sessions typically should be around 30–60 minutes. Plan this according to the scenarios being tested. When scheduling participants, remember to leave at least 30 minutes between sessions to reset the environment and to debrief after each session with observer(s) to process key takeaways while they are still fresh.

Research Protocol

Recruit Test Participants.
To recruit participants, think about two things: the required customer base, and how to reach them.

If you have an app that targets gamers, for example, you could post your request on a Facebook page for gamers. If your website targets university lecturers, you could send out a request for participants in educational newsletters or websites. If you can invest more, approach a recruiting firm to find participants for you (don’t forget to provide screener questions to find the appropriate people).

Screeners help you connect with the participants who are most relevant to your customer base. To recruit participants, prepare screening questions that reveal each candidate’s background and characteristics.

For example, if you are conducting a test for a website about online games, you can have demographic factors like:
1. Age: Only 20–40.
2. Country: Only the US (because these are story-based games set in the US).
3. Device: Mobile and web.
4. Experience: Not a factor.

If it is for a domain-specific product, start with open-ended questions like “In less than 5 sentences, briefly describe one challenge you have with [field of study]”. Do not have more than two (2) open-ended questions, and do not overload participants by asking more than 6–8 questions overall.

You want to avoid having potential participants “guess” which answers you are looking for and answer accordingly to increase their chances of being accepted.

  • The questions must extract specific information about their work to understand whether they are the best fit.
  • The questions should also avoid revealing complete information about the study.
  • The questions should have options that will make the user think before choosing.
  • Don’t include too many open-ended questions, as it will be hard to filter responses by reading through them.

Following these guidelines, to test an online-games application, instead of directly asking “Do you play online games?”, start with “What are your hobbies?” and ask participants to select from a list of options (which includes playing games).

For every question, define the criteria for selecting or rejecting the participant for the study. You can also ask a series of follow-up questions to better understand and filter participants.

“What are your hobbies?” (You can define a condition to proceed further only if the participant selects playing online games as one of the answers.)

  • Reading books
  • Playing online games (proceed further)
  • Playing soccer
  • Painting
  • Blogging

If the participant selects “Playing online games”, you can ask a follow-up question like:

“How frequently do you play online games?” and define conditions for each answer:

  • Daily (Proceed further)
  • Once or twice a week (Proceed further)
  • Once or twice a month (Rejected)
  • Very rarely (Rejected)

Based on the product you are testing and its research goal, document the questions that are required.
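To make the branching concrete, here is a minimal sketch of how the screener logic above could be encoded. The question options and accept/reject rules are the illustrative ones from this example, and all names are hypothetical:

```python
# Hypothetical screener rules for the online-games example above.
FREQUENCY_RULES = {
    "Daily": True,                    # proceed further
    "Once or twice a week": True,     # proceed further
    "Once or twice a month": False,   # rejected
    "Very rarely": False,             # rejected
}

def screen_participant(hobbies, play_frequency):
    """Return True if the participant qualifies for the study."""
    # Condition 1: "Playing online games" must be among their hobbies.
    if "Playing online games" not in hobbies:
        return False
    # Condition 2: follow-up on frequency; only regular players proceed.
    return FREQUENCY_RULES.get(play_frequency, False)

# A daily player who also paints qualifies; a rare player does not.
print(screen_participant(["Painting", "Playing online games"], "Daily"))  # True
print(screen_participant(["Playing online games"], "Very rarely"))        # False
```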

You also need to plan how many participants to test. Testing five (5) people will catch about 85% of the issues in a design, and you can catch the remaining 15% by testing 15 users.
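These figures follow the problem-discovery model popularized by Nielsen and Landauer, where the proportion of problems found by n users is 1 − (1 − L)^n and L is the average probability (about 0.31 in their data) that a single user uncovers a given problem. A quick sketch of the curve:

```python
# Problem-discovery model popularized by Nielsen and Landauer:
# proportion found by n users = 1 - (1 - L)^n, where L is the average
# probability (~0.31 in their data) that one user uncovers a given problem.
L = 0.31

def proportion_found(n_users, problem_visibility=L):
    return 1 - (1 - problem_visibility) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {proportion_found(n):.0%} of problems found")
# 5 users -> ~84%, the source of the "test with five users" rule of thumb.
```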

Recruit at least eight (8) people: two (2) may not show up, and one (1) may not be a good fit.

You may also need to offer an incentive, since participants are giving you their time.

Metrics: You need to define the metrics by which you can evaluate the usability of this specific product. There are both qualitative and quantitative metrics.

Qualitative metrics:

Qualitative metrics come from detailed observations of why something happened. Examples of qualitative metrics are:

  • Importance
  • Intuitiveness
  • Satisfaction

For example, satisfaction can be derived from Likes, Dislikes, and Recommendations. Participants will provide information when asked about what they liked or disliked in the product or a specific task. When asked, they can also provide recommendations they think would help the product.

Quantitative metrics:
These metrics are numerical data captured during the test or derived afterward during analysis.

Expert Tip: Do not over-represent quantitative data. With a small sample size, you risk overgeneralizing.

Here are a few examples of quantitative metrics.

  • Successful Task Completion: The task is considered successful when the participant has achieved the goal of the task. Include the questions and answers which help the observers understand whether a task is successful or not. (E.g. Did the participant complete the task? Did the participant go with the planned flow?)
  • Critical Errors: Critical errors are hurdles that prevent the user from completing a task, for example, not understanding how to navigate to the next step.
  • Non-Critical Errors: Non-critical errors are those that lead to inefficiencies (workarounds) but still can be recovered and do not affect the completion of the task. For example, choosing a wrong menu option and then switching back.
  • Error-Free Rate: The percentage of test participants who completed the task without any errors.
  • Time On Task: The amount of time it took a participant to complete the task.
  • Number of Clicks: The number of clicks a participant made to complete the task.

Example of important metrics:
Efficiency: What is the time to completion?
Effectiveness: Are participants able to complete the tasks?
Intuitiveness: Do the participants stop? Do the participants hesitate?
Satisfaction: Overall expression + feedback.
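As a minimal sketch, the quantitative metrics above can be computed from simple per-task session records; the record fields and numbers below are hypothetical, not output from any particular tool:

```python
# Hypothetical per-participant records for one task; fields are illustrative.
sessions = [
    {"participant": "P1", "completed": True,  "errors": 0, "seconds": 95,  "clicks": 7},
    {"participant": "P2", "completed": True,  "errors": 2, "seconds": 140, "clicks": 12},
    {"participant": "P3", "completed": False, "errors": 1, "seconds": 210, "clicks": 18},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n
error_free_rate = sum(s["errors"] == 0 for s in sessions) / n

# Time on task and clicks are usually averaged over successful attempts only.
successful = [s for s in sessions if s["completed"]]
avg_time_on_task = sum(s["seconds"] for s in successful) / len(successful)
avg_clicks = sum(s["clicks"] for s in successful) / len(successful)

print(f"Task completion:  {completion_rate:.0%}")    # 67%
print(f"Error-free rate:  {error_free_rate:.0%}")    # 33%
print(f"Avg time on task: {avg_time_on_task:.0f}s")  # 118s
print(f"Avg clicks:       {avg_clicks:.1f}")         # 9.5
```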

Stimuli: Define the environment the participant needs to be in, and the fidelity of the designs they will look at, so that your research questions can be answered.

Discussion guide: Write a protocol on how things should work in the study. This should act as a script for you while conducting usability tests.

  • The script is a guide. A session may sometimes veer off-script. This is fine and likely to happen; go with the natural flow of conversation with the participant and come back to key questions as much as you can to get your research questions (RQs) answered.
  • The script is typically broken down into three main sections: Introduction, Scenarios to Cover and Wrap Up.
  • Wrap Up includes thanking the participants.

Start with a welcoming introduction:
Introduce yourself and give context about the product and the goal of the meeting. Make participants comfortable by asking about their well-being. State that there are no right or wrong answers (everything is valuable). If the session will be recorded, request permission upfront, and emphasize that the recording is for internal research purposes only (not shared in the public domain). Start the test after asking “Do you have any questions before we start?” to clear up any doubts.

Script to test the scenarios:
Document the background for your persona to give more context, and start with your first scenario. Describe the scenario clearly and let the participant operate. If you need answers on specific aspects, like the findability of a call to action or the understanding of content, ask them contextually. If it is about the value of a concept, ask once the task is done. It largely depends on your research questions, e.g., “Can the user complete this flow?” may not warrant any questions in between, but there are likely RQs within it, such as “Are they able to find x on their first try?”

Questions should be specific and understandable but should not judge or reveal the intent.

Another example: suppose you are running the scenario of a user purchasing a product on Amazon, and one of your research questions is whether the homepage with new add-ons like “Offers” and “Product types” is valuable to the user. Instead of asking whether these add-ons are valuable, ask participants to explain their thoughts on the screen, and listen for whether they mention the new add-ons.

Have backup questions in case they do not mention the intended things; in this case, contextually ask, “What do you think of the offers provided?”

Plan for the alternatives when participants do not click the intended elements. For example, they need to click on the (+) icon to create a campaign. If users are not able to find this, you can point them to it and ask: “What do you expect the (+) icon will do?” You can tell them to go ahead and click it and see if their expectations were met.

Here are a few example questions:

When you want to test whether a participant is clear on what they are doing, ask these example questions below to get clarity:

Question: “I noticed you tried to take action on the menu. Can you please describe what exactly happened?”

Follow-up question 1: “What did you do exactly in this process?”
Follow-up question 2: “Why did you take this approach?”

When a participant is going in an unexpected direction, ask questions similar to the ones below to understand more:

Question: “I noticed you tried to interact with this element a couple of times, but nothing happened. Can you tell me what you were expecting to happen at that moment?”

When you need to understand the value and intuitiveness, ask this type of question:

Question 1: “How was the experience of using the designs to complete this task?”

Follow-up question 1: “How did you feel about the language used?”

Follow-up question 2: “How easy/hard was the navigation (or search functionality)?”

Follow-up question 3: “What do you think about the layout of the content?”

Follow-up question 4: “What do you think about the need to scroll to reach the content?”

Examples of ways NOT to ask questions.

Non-ideal way: “May I know the reason why you did not click this icon?”
This question sounds judgmental.
Ideal way: “Can you describe your thoughts while observing these icons?”
Follow-up question: “What do you expect to happen when you interact with this icon? And why?”

Non-ideal way: “Would you agree that approach A is better than approach B?”
This is a leading question that favours approach A.
Ideal way: “Which of these two approaches do you prefer? And why?”

Non-ideal way: “Did you notice you could get to that page using this element?”
This could reveal a way of navigating that might be necessary for another task.
Ideal way: If you want to ask about things that participants did not seem to notice, rephrase and ask once all tasks have been tested (post-test): “What do you expect from this [menu, icon, link, search]?”

Other questions to ask during usability test tasks concern the user’s satisfaction in performing a certain task.
Importance/Value: On a scale of 1 to 5 (where 1 is not at all valuable and 5 is very valuable), how valuable is this task? Why?
Satisfaction rating: On a scale of 1 to 5 (where 1 is not at all easy and 5 is very easy), how easy was it to complete the task? Why?

While framing these questions, make sure you indirectly or directly get the answers to your research questions.

Follow the same for each scenario and complete the test script.

Get the prototype ready for testing:
For the respective scenarios, get the prototype ready. Based on the stage of the product and the questions you want answers for, use the demo build or appropriate Rapid Prototyping techniques to build the prototype. Make sure that the interactions and the content are according to the scenario.

Conduct the usability test.
Preparing for the study:
It’s time to get ready to conduct your test. Think about setting up your space and equipment, and run a pilot test, a rehearsal, before testing with actual participants.

During the test:

  • Follow the protocol you documented for the study.
  • Make participants comfortable in any given situation.
  • Maintain a soft tone while asking the questions.
  • Observe the participant’s reactions and ask questions which will give you more clarity.
  • If you notice that the participant is happy, shocked, or surprised, ask why they felt that way.
  • Be aware that the participant might not match your target demographic exactly, so be ready to guide them through a few things.

Wrap Up:
Ask these questions once the test is completed:

  • What went well?
  • What did not go well?
  • What would you change about it?
  • Any final words?

Note: The wrap up should include thanking them for their time; any information on when they can expect their incentive (if applicable); if they would be willing to be contacted for a future study; if they know of anyone else who might be interested in participating (if the study is actively recruiting) etc.

Synthesis.

  1. List the tasks and the questions you want answers for in each task.
  2. Document the answers and issues, if any, by going through the usability recordings or notes.
  3. After completing this for all the users, collate the data, and understand how each question has different answers from different participants.
  4. Do affinity mapping to collate and analyze the data.
    Affinity mapping is a method often used to organize data and ideas. Affinity diagrams organize information into groups of similar items, based on similar issues or topics, so that qualitative data and observations can be analyzed.

5. Analyze this grouped data and extract insights. Insights typically answer the “so what?” of a finding.

You can derive findings from these groupings and analyses. For example, suppose that after grouping and analyzing similar issues across participants, you have two findings: some participants took longer to create a campaign, and some skipped onboarding. Connect the dots between them to draw insights: check whether the participants who took more time to create a campaign are the same ones who skipped onboarding, and review the pain points and reasons they mentioned while performing the task.
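Here is a minimal sketch of this kind of cross-check, assuming you have tagged each observation with a participant and an affinity-mapped theme (all names and data below are hypothetical):

```python
from collections import defaultdict

# Hypothetical observations tagged during affinity mapping.
observations = [
    {"participant": "P1", "theme": "slow campaign creation"},
    {"participant": "P1", "theme": "skipped onboarding"},
    {"participant": "P2", "theme": "slow campaign creation"},
    {"participant": "P3", "theme": "skipped onboarding"},
    {"participant": "P3", "theme": "slow campaign creation"},
]

# Group participants by theme (the affinity step).
by_theme = defaultdict(set)
for obs in observations:
    by_theme[obs["theme"]].add(obs["participant"])

# Cross-check two findings: is slow campaign creation linked to
# skipping onboarding for the same participants?
slow = by_theme["slow campaign creation"]
skipped = by_theme["skipped onboarding"]
print(f"Slow creators:      {sorted(slow)}")      # ['P1', 'P2', 'P3']
print(f"Skipped onboarding: {sorted(skipped)}")   # ['P1', 'P3']
print(f"Overlap:            {sorted(slow & skipped)}")  # candidate insight
```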

While reviewing the data, consider how global and how severe each problem is within the product. For example, you may find that participants could not find what they needed on a page because it had too much information. Check whether this is a problem on other pages too, to understand whether there is a product-level issue.

Some insights are more severe than others. To help differentiate, you should note the severity of the problems.

For example:
Critical: If we do not fix this, the user cannot perform the task in any way. (Fix these as the priority).
Serious: Users might feel it is inefficient and might get annoyed and give up on tasks. (After critical issues, fix these next before launching the product).
Minor: Users might feel uncomfortable, but will still be able to complete the task. (You can fix these later once you are done with the categories above).

Ideation and recommendations.
For each of these insights, ideate with your team to come up with efficient solutions. Generate several solutions for each issue and choose the one that best suits the needs. State these as research recommendations for each issue.

Final report.
After this, create a usability test report containing the background, what was tested, findings, insights, and recommendations in a presentable and digestible manner.

Here are a few examples of usability test reports:
Usability test report — Traqr fitness
Usability test report — Pen America

Tips / Best Practices

  1. Discuss the overall goals of the product with the product leads.
  2. Finalize the research goal with the product leads before starting the test plan.
  3. Conduct usability testing at the wireframe stage to iterate more quickly.
  4. Frame the research questions before deciding on the scenarios to test.
  5. Decide on the methodology to use based on the questions to be answered and the investment needed.
  6. Include screeners for selecting the participants to receive relevant and effective inputs.
  7. Consider testing with at least five (5) users.
  8. Do not plan more than eight (8) scenarios in a usability testing session, as participants might not be able to concentrate for more than an hour.
  9. Do a pilot study to check that everything is defined appropriately.
  10. Make participants comfortable sharing their opinions.
  11. Don’t reveal your intentions or give hints through your tone or expressions while participants are in a test.
  12. Use affinity mapping to synthesize data and identify patterns.
  13. Document the raw data and findings digitally so that you can refer to them easily.
  14. Give enough time (at least a day or two) to draw insights from findings (post-synthesis).

Checklist

  1. Define the goal of usability testing.
  2. Finalize the goal with product leads.
  3. Create screeners for participants.
  4. Finalize the incentives and participant list with product leads.
  5. Document the discussion guide.
  6. Discuss the test plan with product leads.
  7. Follow the discussion guide during the test.
  8. Check if any questions are judgmental or will reveal the intent.
  9. Do a pilot study before testing with real participants.
  10. Check that the equipment and the note-taking team are ready before the test.
  11. Request the participant’s permission before recording the session.
  12. Have video/audio recordings or images that act as proof for your findings and insights.

Related Topics

  1. Research Fundamentals

Quick Question

Arrange the following items in the appropriate sequence for a usability study:

  1. Goal
  2. Analysis
  3. Research questions
  4. Conducting test
  5. Scenarios

Leave your answer in the comments section below!



Prasad Kantamneni

I am a Designer, Problem Solver, Co-Founder of an Inc 5000 Studio, and an Educator by Passion. My goal is to Demystify Design & teach Pragmatic strategies.