Here, the traditional method of regular recommendations will not suffice. So, with your test results in hand, how do you effectively create a usability report? A few years ago I did website usability analysis for a body-building e-commerce shop. How standard are the technologies involved? May 10, 2017 - UX and usability testing analysis is a critical skill. Consider how global the problem is throughout the site, and how severe it is; acknowledge the implications of specific problems when extended sitewide (e.g., if one page is full of typos, you should probably get the rest of the site proofread as well). Try to broaden the insight if it isn't exactly identical to another but is still strongly related. Don't worry, it's not too much, and at the end of this article, you'll find a spreadsheet that automates the whole process. Planning for various eventualities can be a roadmap for the conversation. Conducting a usability test is one thing; analyzing it is something completely different. It's best to do this digitally, with a tool like Excel or Airtable, as you want to be able to move the data around, apply tags, and sort it by category. Read on for our best practices. How clear are the business/user requirements? Use a framework to document data. The risk is that there may be a disconnect between the issues found and the solutions identified. Create the issue matrix by placing the sticky notes in the proper quadrant according to impact and frequency. In some cases, you may have the power to just make the changes yourself. The values may come from a simple linear sequence (e.g., 1, 2, 3, 4, etc.). 
Let's go back to our spreadsheet, which now looks like this: According to this example, we should prioritize the development of the solutions in the following order (from highest to lowest ROI): solution 1, then solutions 3 and 2. Hope that helps! Divided into four distinct phases (discover, define, develop, and deliver), the double diamond is a simple visual map of the design process. For example, you can start a usability evaluation at the prototyping stage and repeat it with each update to ensure the updates don't introduce any new issues. Consider your next steps before you plan how to conduct and present your analysis. Compare feedback and statistics on success rates to evaluate the changes and confirm that they fixed the problem. Which is the better approach? One of the most powerful frameworks is the double diamond from the British Design Council, which in turn uses divergent-convergent thinking. In other situations, you may need to make your case to higher-ups at your company, and when that happens, you'll likely need to draft a report that explains the problems you discovered and your proposed solutions. Usability tests help you: 1. In other words, removing obstacles (issues) would be our priority number one. Remind yourself of the problem areas of your website, or pain points, that you wanted to evaluate. Identify the users to test: you probably have some idea about who the users or customers are that come to your website. The spreadsheet now holds:
- The issues (i1 to i3) with their severities (4.95, 6.7, and 10.05)
- An indicator of 1 every time a solution matches (addresses) an issue
- The effectiveness of each solution (4.95, 4.95, and 16.75)
- The complexity of each solution (1, 3, and 5), estimated by the team
- The ROI of each solution (4.95, 1.65, 3.35)
It needs to be set up for easy idea generation and insights later in the process; the key is to clearly structure and organize the data to avoid clutter. 
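The spreadsheet arithmetic described above can be sketched in a few lines of Python. The severity, indicator, and complexity values are taken directly from the example; the variable names and list layout are my own:

```python
# Severities of the three issues (i1..i3), taken from the example.
severities = [4.95, 6.7, 10.05]

# Indicator matrix: addresses[s][i] is 1 when solution s+1 addresses issue i+1.
addresses = [
    [1, 0, 0],  # solution 1 addresses issue 1
    [1, 0, 0],  # solution 2 also addresses issue 1
    [0, 1, 1],  # solution 3 addresses issues 2 and 3
]

# Complexity of each solution, as estimated by the team.
complexities = [1, 3, 5]

# Effectiveness: sum of the severities of all issues the solution addresses.
effectiveness = [
    sum(sev * flag for sev, flag in zip(severities, row)) for row in addresses
]

# ROI: the cost-benefit relationship, effectiveness divided by complexity.
roi = [e / c for e, c in zip(effectiveness, complexities)]

print([round(e, 2) for e in effectiveness])  # [4.95, 4.95, 16.75]
print([round(r, 2) for r in roi])            # [4.95, 1.65, 3.35]

# Development priority: solution numbers sorted by ROI, highest first.
priority = sorted(range(1, 4), key=lambda s: roi[s - 1], reverse=True)
print(priority)                              # [1, 3, 2]
```

The resulting order (solution 1, then 3, then 2) matches the prioritization given in the example.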
Loved the double diamond approach (which I've seen around in the Design Thinking methodology, I guess) and I can't wait to test it soon! Let's start by borrowing some ideas from the creative process. The first is a critical issue that should be corrected immediately, while the second is a minor issue that can be put on the back burner for some time in the future. Quantitative data metrics for user testing include: Qualitative data is just as, if not more, important than quantitative analysis because it helps to illustrate why certain problems are happening and how they can be fixed. At the end of the exercise, the solutions in quadrant 1 are the ones with the best ROI (more effective and less complex), signifying top priority. Once you've finished running your usability testing sessions, it's time to evaluate the findings. If you want to follow this methodology, here's a template (Google Sheet): https://goo.gl/RR4hEd. Laborious as it may be, proper analysis is a worthwhile step that can maximise the value of usability tests. However, usability reports are a strong method for communicating your results to your team and your wider organization in a clear, professional way, in order to build support for your research efforts. In order to check the user experience, you need to have a product that you're going to test. 
Using these methodologies brought up the following observations from teams who used them in various projects: especially when dealing with bigger studies, the issue prioritization keeps the team focused on what really matters, saving time and resources by reducing unwanted cognitive challenges like information overload, analysis paralysis, and decision fatigue; the connected end-to-end workflow keeps solutions more aligned with usability test outputs (because issues and solutions are paired), reducing the risk of implementing less-than-optimal solutions; and we can easily apply this method collaboratively (in part or as a whole) using online tools. Meaning an "observation" approach (positive and negative feedback)? To summarize the steps: we started by collecting data, then we prioritized issues according to specific parameters. These can be caused by a number of different factors, including decision fatigue and many types of cognitive biases. And it's hard to put into words … That said, it's unnecessarily complicated to learn how to analyze usability test results if you haven't done any testing yet. Borrowing from this logic, we have the following steps: calculate the effectiveness of each solution. What are the resources required to develop this solution? For instance, during the prioritization phase, the positive attitudes and behaviors of the users observed in testing are not included. One way to do it is to create a matrix for issues (impact x frequency) and place it next to another for solutions (effectiveness x complexity). Carlos is a user experience strategist and researcher who creates useful, simple, and pleasurable digital products and services. A regular usability test with five to ten participants can easily generate more than sixty issues. 
Identifying your main areas of interest will help you stay focused on the most relevant feedback. Place these solutions in the solution matrix, starting at quadrant 1 (top left). If possible, please share your thoughts after applying it :-). The steps are: for each issue, generate multiple solution ideas. What are possible ways to address the issue? Good solutions are versatile! The method above involves some (basic) calculations repeated many times, so it's best to use a spreadsheet. Users with more experience tend to perform better on tasks, have higher perceptions of usability, and have higher Net Promoter Scores. You're right, this method is suitable for collaboration while keeping solutions tied to problems. In the example above, a fictional usability test made with three participants yielded two issues. As resources are limited, it is necessary to prioritize usability issues in a way that will optimize analysis. Each matrix is divided into four quadrants, indicating prioritization. If you previously created user personas or testing groups, record that here as well. The more severe the issue addressed, the better the solution. For each issue a user discovered, or unexpected action they took, make a separate note. Thanks for the awesome feedback, Priscila! Anyway, collecting feedback is always good, so in the spreadsheet mentioned in the article there is a subsection (tab "ISSUES", line 55) to collect general feedback, which can be categorized as "positive", "negative", or "neutral". The process of turning a mass of qualitative data, transcripts, and observations into an actionable report on usability issues can seem overwhelming at first, but it's simply a matter of organizing your findings and looking for patterns and recurring issues in the data. 
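A digital stand-in for the sticky-note matrices can classify each item into one of the four quadrants. The midpoint thresholds, the ordering chosen for quadrants 2 and 3, and the function name below are my assumptions; the article itself only fixes quadrant 1 as the high-benefit, low-cost corner:

```python
def quadrant(benefit, cost, benefit_mid, cost_mid):
    """Place a solution on the 2x2 matrix (effectiveness x complexity)."""
    high_benefit = benefit >= benefit_mid
    low_cost = cost <= cost_mid
    if high_benefit and low_cost:
        return 1  # top priority: effective and cheap
    if high_benefit:
        return 2  # effective but expensive
    if low_cost:
        return 3  # cheap but less effective
    return 4      # lowest priority

# Solutions from the example as (effectiveness, complexity) pairs,
# with assumed midpoints of 10 and 3 for the two scales.
print(quadrant(16.75, 5, 10, 3))  # 2
print(quadrant(4.95, 1, 10, 3))   # 3
```

Developing the solutions quadrant by quadrant then follows the order the article describes (quadrant 1 first, then 2, 3, and 4).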
- Three usability issues experienced by three participants (p1, p2, and p3);
- The task 'create a post' appearing twice and assigned a…;
- Each issue assigned a value given its…
How do you go about collecting data and analyzing results? This is the cost-benefit relationship, calculated by dividing the effectiveness of the solution by its complexity. The difference is that it only serves to register the information; there is no calculation between them and the issues. A clearly written plan will also help you explain the goals of the test to your team, and help you achieve buy-in, if you need that. It can feel redundant to create a report after a well-executed usability test, especially if you've involved the rest of your team from the start. Thanks for the feedback :-) When analyzing the data you've collected, read through the notes carefully, looking for patterns, and be sure to add a description of each of the problems. The short answer: the one that best fits your situation and is best aligned with your goals. Step 2: Have your prototype or product ready to test. This could be roughly compared to the business value in agile methods. Review your testing sessions one by one. Repeat the steps above for the remaining issues (quadrants 2, 3, and 4, in this order). Based on your objectives, decide the format of your test result compiling. Usability testing, also known as user experience (UX) testing, is a testing method for measuring how easy and user-friendly a software application is. When you're done, your data might look similar to this: Assess your data with both qualitative and quantitative measures. In most usability studies, your focus and the bulk of your findings will be qualitative, but calculating some key numbers can give your findings credibility and provide baseline metrics for evaluating future iterations of the website. 
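The article's spreadsheet encapsulates the severity formula rather than spelling it out in the text. One reading that reproduces the example's severity figures (4.95, 6.7, 10.05) is task criticality times issue impact times the share of participants affected; treat the function and the individual ratings below as illustrative assumptions on my part:

```python
def severity(criticality, impact, affected, participants):
    """Hypothetical severity score: task criticality x issue impact
    x the (rounded) share of participants who hit the issue."""
    frequency = round(affected / participants, 2)
    return round(criticality * impact * frequency, 2)

# Ratings chosen so the outputs match the severities in the example;
# the criticality and impact values themselves are guesses.
print(severity(5, 3, 1, 3))  # 4.95
print(severity(5, 2, 2, 3))  # 6.7
print(severity(5, 3, 2, 3))  # 10.05
```

Whatever exact formula you adopt, the point is the same: severity should grow with how critical the task is, how badly the issue hurts, and how many participants ran into it.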
For example, instead of just "Avoid using a hamburger menu," it's better to state a specific solution, such as "Use a horizontal navigation and vertical tree menu." This will give you an idea of how many users experienced problems with a certain step (e.g., check out) and the overlap of these problems. It's important to identify the data you want before you begin testing. It can feel like "drinking from the firehose" while waiting for the feared analysis paralysis to rear its ugly head. The more severe the issue, the more effective its solution. Sometimes the solution is quite obvious, like correcting the placement of a UI component. Note: we will need to use some basic math. It can be a website that has limited functionality, a demo app, or an interactive wireframe. In an attempt to discover usability problems, UX researchers and designers often have to cope with a deluge of incomplete, inaccurate, and confusing data. I miss the "CHARTS" tab in the spreadsheet. Critical: impossible for users to complete tasks. Minor: annoying, but not going to drive users away. After the recommended changes have been decided on and implemented, continue to test their effectiveness, either through another round of usability testing or using A/B testing. Spot on! Prototype development is the first part of the process, regardless of the field that you specialize in. Add up the severities of all issues addressed by the solution. Starting with your research questions, the first step is to collect the data generated by the usability test. Our updated table would look like this: In the example above, we have the following scenario: That's it for now. 
An inconvenient fact: usability testing will always uncover far more problems than you and your team are able to fix. A considerable risk when trying to solve usability problems is going down the wrong track, trying to come up with solutions that don't truly address the issues at hand. Such anecdotes and insights will help you come up with solutions to increase usability. He is a focused problem-solver and reliable team player, bringing together strategic thinking, creativity, and a user-centered philosophy in his daily work. The goal of usability testing is to detect any usability problems by collecting qualitative and quantitative data to determine the satisfaction of representative users with the product. Record:
- Issues the user encountered while performing tasks
- Comments (both positive and negative) they made
Be specific. Bad example: the user clicked on the wrong link. Good example: the user clicked on the link for Discount Codes instead of the one for Payment Info. Quantitative analysis will give you statistics that can be used to identify the presence and severity of issues. Exactly what I was working on, and this should help us a lot in making our own template better. However, when determining a goal for task completion rate, context matters. To reduce the risk of making bad design decisions, we need: a) several solution alternatives to choose from, and b) an effective selection process. Here's a way to stay sane during the process. The goal for UX research and usability testing analysis is to obtain qualitative WHY data for the already observed WHAT data coming from the behavioral UX data analysis. It is a design process with clearly defined and integrated problem and solution phases. (Tree testing, hallway usability tests, benchmark testing.) Visit our page on reporting templates for more guidance on how to structure your findings. This means that we do not have the final product to test. 
Each project will require a different approach. I did two things: a) went down to the gym and left a few flyers there (with the permission of the gym owner, of course), and b) posted to a body-building Facebook group. Mark additional issues that the solution may address; in practice, a single good solution can address multiple issues. Usability testing can save you a ton of time and money by revealing issues while they are still easy to fix. Finding users: you can recruit users right off your website using a pop-up invite, email users from an existing customer list, or use a panel agency that finds users who meet your requirements. Was your process more issue-oriented because you were assuming that positive feedback wouldn't actually help to make your product better? Now that you have a list of problems, rank them based on their impact, if solved. Copyright © 2014 - 2020 Hotjar Ltd. All rights reserved. It's important to understand the limitations of this approach. For example: being unable to complete payments is a more urgent issue than disliking the site's color scheme. Afterwards, we generated solution ideas for those issues and, finally, prioritized them. Record the task the user was attempting to complete and the exact problem they encountered, and add specific categories and tags (for example, location tags such as check out or landing page, or experience-related ones such as broken element or hesitation) so you can later sort and filter. Benchmarking is a great way to measure results against a goal. Look for patterns and repetitions in the data to help identify recurring issues. Finally, apart from usability testing, this approach can also be extended to other UX research techniques. Sort the data in your spreadsheet so that issues involving the same tasks are grouped together. 
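Once the observations live in a tagged, spreadsheet-like structure, grouping issues that involve the same task is mechanical; a minimal sketch, with field names and sample rows invented for illustration:

```python
from collections import defaultdict

# One dict per observation captured during the sessions (sample data).
observations = [
    {"participant": "p1", "task": "check out",
     "issue": "clicked Discount Codes instead of Payment Info"},
    {"participant": "p2", "task": "search",
     "issue": "could not find the search field"},
    {"participant": "p3", "task": "check out",
     "issue": "hesitated at the payment details form"},
]

# Group by task, mirroring a sort-by-task in Excel or Airtable.
by_task = defaultdict(list)
for row in observations:
    by_task[row["task"]].append(row)

# How many participants hit a problem on each task.
for task, rows in sorted(by_task.items()):
    print(task, len(rows))
# check out 2
# search 1
```

Seeing two of three participants stumble on the same task is exactly the kind of overlap that flags a recurring issue worth prioritizing.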
These values are used exactly as in agile estimation methods like planning poker. Analysing usability testing results is a crucial part of the process to produce meaningful recommendations and actions. If there are no previous results to compare, aim for a … Well, that probably deserves an entire blog post, but let's try to scratch the surface. Thanks, Carlos! Before you start analyzing the results, review your original goals for testing. Hi Fabien, again, be specific, so that it's easier to evaluate ideas. If you are an agile or design thinking practitioner, you know what I mean. I have noticed that there is a wealth of literature online on how to conduct a usability study, but I have yet to find anything that demonstrates how to analyze the data collected from a study. Task analysis is the process of learning about ordinary users by observing them in action to understand in detail how they perform their tasks and achieve their intended goals. It's downloadable, and you can freely customize it to your needs. Making sense of usability test results. Hi Carlos, thanks for this article. The purpose is to test the concept and build upon the initial framework (or scrap the idea altogether). How can we apply visual tools like sticky notes to work with the approach shown in this article? In order to simplify this approach, we had to leave one parameter out. I'm keen on using this for a research study of mine, so am looking for a specific reference to use. In the free Guide to Usability Testing, we divide the tests into four categories based on Christian Rohrer's fantastic article. Scripted: these tests analyze the user's interaction with the product based on set instructions, targeting more specific goals and individual elements. Not every moderator is great at thinking on the fly, or knows the product well enough to be able to ad-lib. 
That's a very good question. Simply put, define how critical the task is for the business or user by setting a numeric value to it. Similarly to issue prioritization, we need to prioritize solutions according to some parameters. How to analyze your data: they may be things like logging in, searching for an item, or going through the payment process, etc. There are a number of points your plan should include, from the scope of the test to the number of testers, and we outline them all right now. We found our most important usability issues in this order: 3, 2, and 1. In other words, the more effort and uncertainty, the more complex the solution. In most cases, it's sufficient to: A common approach for organizing usability issues, used by Lewis and Sauro in the book Quantifying the User Experience, is to plot the data as shown in the table below, with issues in the rows and participants in the last few columns. However, it's not a walk in the park. It's not enough to simply present the raw data to decision-makers and hope it inspires change. For example, a user who could not find a support phone number to call and another who couldn't find an email address should be grouped together, with the overall conclusion that contact details for the company were difficult to find. The biggest variable we find is the users' prior experience with a website. If they all encountered the same problem, then conclude that there is an issue that needs to be resolved. Website analysis is the practice of testing and analyzing a website's performance in relation to SEO, speed, competition, and traffic. 
Nearly everyone I know (including me, of course) loves working with sticky notes and whiteboards, not only because it's usually faster and fun, but also because it facilitates collaboration. Any site can benefit from some form of website analysis if the results are then used to improve it, for example by reducing page size to increase overall speed or optimizing a landing page with lots of traffic for more conversions. The double diamond is exactly what we need to build a framework that will handle the usability issues and find ways to solve them. The situation becomes trickier for those issues with non-obvious or many possible solutions. Reorganize the solutions, keeping them specific: as required, merge or split the solutions to avoid redundancies and too much abstraction. Most likely, each category will correspond to one of the tasks that you asked users to complete during testing. Figures like rankings and statistics will help you determine where the most common issues are on your website and their severity. Or the values may come from something more elaborate like the Fibonacci sequence (1, 2, 3, 5, 8, etc.). This method can measure improvement and determine the success of the results. Which solution is better? Let's see how it works in a spreadsheet (of course we want to automate this, right?). For example, you may find that several users had issues with entering their payment details on the checkout page.