User testing is the evaluation of a website's interface and functionality by real users in real conditions. The marketer sets specific tasks for interacting with the site, collects the results from page visitors, and assesses how convenient the online store is. It is important to give participants maximum freedom to interact with the site: this shows how intuitive and user-friendly the platform really is.
A website usability test is a process in which a group of users evaluates a website's design for ease of use. This testing is run several times between early development and the final release of the site. This is how experts find design flaws they previously overlooked.
Top Tools for User Testing and Usability Testing from Plerdy
Heatmap for User Testing and Usability Testing
- Free live website tracking.
- Find the first click on the page and follow the sequence of subsequent transitions.
- Segment clicks by device type and traffic source.
- Distribute users into groups and audit each of them.
- Identify elements of the website that are ineffective for usability.
- Analyze clicks on dynamic elements.
PopUP Forms for User Testing and Usability Testing
The comprehensive Plerdy solution lets you customize pop-up forms on site pages quickly and independently. It can increase site CTR by up to 30%, helps grow sales, reduces the bounce rate, and provides extensive analysis of project performance. Tasks that PopUP Forms help solve:
- Remind users of items added to the cart but not yet purchased.
- Offer discounts to potential customers.
- Run polls on the site.
- Create forms for invitations and registrations to events.
- Collect data about user interactions with Google Ads remarketing forms.
SEO Checker for User Testing and Usability Testing
Each site and its content is crawled and evaluated by search engines, and the quality of the optimization drives the score. Plerdy's SEO analysis helps plan and improve the search engine optimization of sites, bringing a web project to the top of search results. It automatically analyzes pages after receiving SEO data and checks for factors that search engine algorithms take into account. Additional functions:
- Daily reviews for keywords on website pages, title, description, H1, Noindex, and other significant elements.
- Allows you to integrate the Google Search Console API and analyze added and missing keywords.
- Automatically detects missing keywords in title, description, and H1 and shows the history of changes.
- Segments pages with the same errors.
It takes Mobile-First indexing into account, performs SEO analysis of over 1 million pages, and reduces the risk of traffic loss. Analytics can be exported to Google Sheets. The main functionality is available without a subscription or payment; paid packages provide advanced features.
Session Replay for User Testing and Usability Testing
The session replay tool collects information by recording users' mouse movements, scrolling, clicks, and touches as video replays. The owner sees the site through the eyes of potential customers.
Records can be segmented by the following attributes:
- Geolocation of site users.
- Custom events.
- Activity type: clicks, scrolls, cursor movement, data entry.
- Traffic channel.
- Device type: mobile or desktop versions.
Event Tracking for User Testing and Usability Testing
What tasks does it solve:
- Tracks user actions related to a specific goal: interactions with forms, images, videos, buttons, scrollbars, Ajax content, and other page elements.
- Provides the ability to select one or more events using a category or item identifier.
- Can add events from site pages automatically.
- Lets you remove unnecessary or irrelevant events.
- Automatically transfers events to Google Analytics.
Sales Performance for User Testing and Usability Testing
- Analyzes key e-commerce metrics for a selected period.
- Segments the impact of site elements by traffic channel and visitor device type.
- Divides users into groups and analyzes which elements each interacted with before purchasing.
It collects more detailed data than Google Analytics and gathers unique information for UX analysis, compatible with all types of sites.
Conversion Funnel for User Testing and Usability Testing
The Plerdy Conversion Funnel records the interactions of leads with website elements and pages. It helps to understand user behavior and the stages a customer goes through to purchase. The software makes it possible to:
- Analyze the number of unique page views at each funnel step.
- Find out at what stage of the conversion funnel most customers leave.
- Explore funnels by device type and traffic channel.
- Adjust the appearance of the funnel (horizontal/vertical).
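To make the funnel analysis above concrete, here is a minimal sketch that computes per-step and overall conversion rates from unique-view counts. The step names and numbers are invented for illustration, not Plerdy output:

```python
# Hypothetical funnel data: (step name, unique page views).
funnel = [
    ("Landing page", 10000),
    ("Product page", 4200),
    ("Cart", 1300),
    ("Checkout", 700),
    ("Purchase", 450),
]

def funnel_report(steps):
    """Return (step, views, conversion from previous step, conversion from first step)."""
    first = steps[0][1]
    rows = []
    prev = first
    for name, views in steps:
        rows.append((name, views, views / prev, views / first))
        prev = views
    return rows

for name, views, step_rate, overall in funnel_report(funnel):
    print(f"{name:12s} {views:6d}  step conv: {step_rate:6.1%}  overall: {overall:6.1%}")
```

The step with the lowest step-to-step conversion is where most visitors leave, which is exactly the question the funnel view answers.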
Differences between user testing and usability testing
User testing is often confused with usability testing. These processes are radically different and are carried out at different stages of the site’s life. The first clarifies whether the target audience needs a specific solution or service. Usability testing, in turn, helps determine whether the target audience can effectively use a product or service: which elements are unproductive, which ones visitors overlook, and what needs to be made clearer. Let’s look at the key differences between the methods.
- Basis: who, what, where, when, and how you test.
- Purpose, key objectives, and tactics. The analysis should show whether the project works according to the chosen strategy: does it achieve its goals, and where does it fall short of the audience’s expectations? The product’s goal shapes the strategy, and the course of its implementation shapes the tactics.
- Parameters for measuring success in user testing.
- Possible scenarios and questions to ask.
Before starting usability testing, you need to settle the following points:
- Determine the scope of work: the entire site/application/product or part of it.
- Project problems or questions that in-house marketers cannot answer, for example: can visitors get the important information from the home page?
- Timing, duration, frequency of sessions, and the time required for the participant to solve the problem.
- What questions to ask participants before and after the testing session.
- How many users you need, their types, how to recruit participants, and the required equipment: perhaps monitor size and resolution, operating system, browser, etc.
- What qualitative and quantitative data to collect, such as the completion rate, error rate, and task time.
- Methods: moderated or unmoderated interviews, A/B tests, usage tracking, etc.
- Critical and non-fatal errors: deviations from scenarios that affect the authenticity of the results.
Moderated and unmoderated interviews work well for user testing: the audience itself tells you whether it needs a specific concept and whether the idea is useful.
It is more convenient to determine user preferences and attitudes toward specific elements using the A/B test method.
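For readers curious what evaluating an A/B test can look like in practice, here is a minimal sketch of a two-proportion z-test. The conversion counts are hypothetical, and real testing tools run this kind of significance check for you:

```python
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical data: 120/2400 conversions on variant A vs 165/2400 on B.
z, p = ab_test(120, 2400, 165, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference
```

A p-value below the conventional 0.05 threshold indicates the observed difference is unlikely to be random noise.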
The first testers can be found on crowdfunding sites. Subsequently, they will provide the site owner with regular feedback as the product or service develops. On the same sites, potential customers can be shown an introductory video explaining who the promoted product is for and what purposes it serves, and then be offered a chance to place a preliminary order. Such testing is recommended at the project-launch stage: it lets you save part of the budget by finding out in advance whether the planned startup will interest the target audience.
Before launching a new service to the general public, it can be tested on a group of regular customers. You will be working with people who already trust the store and are familiar with the brand, so you won’t waste resources on an introduction.
These methods can be adapted for usability testing. The most productive ones include:
- Oculography (eye-tracking) and clickstream testing – Help identify screen areas that have attracted the most attention. The method is effective when combined with unmoderated or moderated interviewing.
- Research Diary – Testers describe the process of using the product over a long period.
- A/B tests work when you need to improve an existing email list, project design, call-to-action form, or other elements.
- Unmoderated or moderated interviews reveal what influences a visitor’s decision, whether a call to action or a specific site page. They also help evaluate the first impression of a site/application and analyze the usability of similar competitors’ products.
After completing usability testing, developers receive data on:
- The platform features that work well.
- Problems that need to be fixed first.
- A brief for a further action plan.
A usability test report consists of the results of all tests, their analysis, and conclusions:
- User completion percentage.
- Time spent by each user during a test.
- The bounce rate for individual tasks.
- User feedback on the degree of satisfaction.
- Testers’ recommendations – what needs to be changed.
Data on usability issues should be systematized and prioritized according to their importance and impact on the project’s success; this will help you decide what to fix first. In practice, problems are segmented into five severity groups, from critical to those with no impact on the product.
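A minimal sketch of such prioritization is shown below. The findings, severity labels, and affected-user counts are hypothetical; issues are ranked first by severity group and then by how many test participants hit the problem:

```python
# Severity groups, from critical down to cosmetic (illustrative labels).
SEVERITY = {"critical": 5, "major": 4, "moderate": 3, "minor": 2, "cosmetic": 1}

# Hypothetical usability findings from a test session.
issues = [
    {"issue": "Checkout button hidden on mobile", "severity": "critical", "users_affected": 18},
    {"issue": "Ambiguous filter labels",          "severity": "moderate", "users_affected": 9},
    {"issue": "Low-contrast footer links",        "severity": "cosmetic", "users_affected": 4},
    {"issue": "Search returns no suggestions",    "severity": "major",    "users_affected": 12},
]

# Rank by severity first, then by how many participants were affected.
ranked = sorted(issues,
                key=lambda i: (SEVERITY[i["severity"]], i["users_affected"]),
                reverse=True)
for pos, i in enumerate(ranked, 1):
    print(f'{pos}. [{i["severity"]}] {i["issue"]} ({i["users_affected"]} users)')
```

Whatever scoring scheme you choose, the point is the same: critical, widespread problems rise to the top of the action plan.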
In user testing, the team collects results not at the end but throughout the process. Successful completion depends on constructive customer feedback, so the marketer can get both the expected test result and completely new, unexpected information about the product.
An essential element is the segmentation of responses. Marketers often break one extended test into several short surveys. This way, the specialist receives intermediate results and can change the direction of the test based on problems identified before the process is complete.
Information for the final report needs to be structured and segmented by priority of problems; this can include atypical comments and feedback from usability testers that may affect the improvement of the service.
To achieve the best results, you need to find a group of people close to your ideal target customer or well represented by your customer base. Many online platforms have access to an international audience for remote testing. There you can select your participants based on a variety of factors. Such solutions are great for user tests.
If you rely on guerrilla tests and other personal methods and cannot create a homogeneous test group, you should at least collect relevant data (age, training, profession, computer/mobile use). With this background knowledge, you can better evaluate any differences in the results.
For high-quality data, you have to ask your test participants the right questions before and after the test. To find the right mix, you should take time to brainstorm with your team and potential early beta testers.
What to test
The success or completion rate measures the share of participants who complete a task. It is the most common indicator for measuring usability. In moderated tests, the moderators record the success rate; in unmoderated tests, the participants report it themselves, or you can analyze the session recordings afterward.
However, to record the actual user experience, the success rate alone is insufficient. Therefore, you should also include other metrics, for example:
- The number of errors: how many errors or unnecessary actions did a user make when completing a test? (0.7 errors per user is considered a benchmark.)
- Task duration: the time indicates how long it takes a user to complete a test. There are no general guidelines because the requirements differ too much.
- Task difficulty: users can assess the difficulty of the task on a scale from 1 (very difficult) to 7 (very easy). The average value is 4.8.
- Single Usability Metric (SUM): this metric combines success rate, task duration, and task difficulty and thus displays the general usability. The average value is 65% across industries.
- Task satisfaction: with the help of a website survey, you can record user mood after completing the task.
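As an illustration, the metrics above can be computed from per-participant session records like this. The data is hypothetical, and the composite score at the end is a deliberately simplified stand-in, not the formal SUM calculation:

```python
from statistics import mean

# Hypothetical results for one task across five participants.
# difficulty: 1 = very difficult, 7 = very easy.
sessions = [
    {"completed": True,  "errors": 0, "seconds": 48,  "difficulty": 6},
    {"completed": True,  "errors": 1, "seconds": 75,  "difficulty": 5},
    {"completed": False, "errors": 3, "seconds": 120, "difficulty": 2},
    {"completed": True,  "errors": 0, "seconds": 52,  "difficulty": 7},
    {"completed": True,  "errors": 1, "seconds": 66,  "difficulty": 5},
]

success_rate = mean(s["completed"] for s in sessions)     # share of completions
errors_per_user = mean(s["errors"] for s in sessions)     # benchmark ~0.7
avg_time = mean(s["seconds"] for s in sessions)           # task duration
avg_difficulty = mean(s["difficulty"] for s in sessions)  # benchmark ~4.8

# Simplified composite (NOT the formal SUM methodology): average of the
# success rate and the difficulty score normalized to the 0..1 range.
composite = mean([success_rate, (avg_difficulty - 1) / 6])
print(f"success {success_rate:.0%}, errors/user {errors_per_user:.1f}, "
      f"avg time {avg_time:.0f}s, composite {composite:.0%}")
```

Keeping the raw per-session records, rather than only the averages, lets you recompute any of these metrics per segment later.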
The more metrics you track, the more detailed and comprehensive your picture of the test participants’ experience becomes.
All in all, the first thing you’ll need to do is set a clear objective. What do you want to learn? What answers would you like to find? With a clear goal, your chances of success grow. For instance, if you’re designing a sales application, your goal may be to see whether it is easy for users to buy goods.
The use phase test
The use phase is quite intuitive. A user test is needed to understand whether people need what you offer and whether it is easy for them to use a given solution. User tests mark the beginning of a product’s life cycle: as soon as you have an idea for a product, you should consider a user test. The usability test is carried out after a prototype or design is made.
So, when should you run a user test? Almost always, because observing the target group during a user test reveals important insights and potential for improvement. Ideally, a user test should accompany the entire course of the project. In the planning and development phase it can save costs, and during and after the market launch it ensures that the product meets the planned requirements and user needs. In addition, user testing not only supports the development of new products but also checks the added value of innovations during a relaunch. It thus accompanies agile product development at all times and prevents major usability problems before they arise.
User testing and usability testing are not mutually exclusive. Both are part of a successful marketing strategy that maintains brand loyalty and increases product appeal. Now you know all the differences between user and usability testing.