Previously, I explained interviews, surveys, and card sorting as techniques that help UX researchers understand how users think and feel, what they need and want, and why. In this post, I review UX research methods best suited to understanding user behavior and its causes.
As mentioned before, many UX research methods exist, but not all of them need to be employed on every project. The selection of techniques depends on a project's specific needs, budget, and timeline.
Usability testing
By usability testing, we mean an evaluative, behavioral research method that consists of observing users (directly or indirectly) while they complete specific tasks on a website or within an application. At Caktus, we conduct qualitative usability testing, during which we observe a user's interactions with a website or an application.
It’s worth noting that usability testing can be undertaken with different goals in mind:
- As a formative study to evaluate the current state of usability of a website, ahead of a redesign.
- As a summative study to evaluate the final state of a feature or a website at the end of a project (or a development cycle).
- As a formative assessment of a competitor's website or application to understand what usability problems exist and should be avoided.
Moderated usability testing
Moderated usability testing is a study moderated by a Caktus UX designer. It can be done in person on-site, or remotely via a third-party platform that lets us connect with the user over the internet, have them share their screen, observe as they complete the tasks they're presented with, and record the entire session. The platform also allows other observers to join remotely, which is a great way for client stakeholders to gain direct insight into their product.
Unmoderated usability testing
Unmoderated usability testing is conducted with the help of a third-party platform that allows us to create tasks, deliver them to the user along with a link to the website or application under evaluation, and record the session during which the user is completing the tasks. We can then evaluate the recording and analyze the findings in order to issue recommendations.
On-site observation
On-site observation is a qualitative study that can yield behavioral or attitudinal insights. When done as generative research, it consists of observing users during their daily work routines in order to better understand how they work and what their needs and pain points are. When conducted as evaluative research, it means observing users completing tasks within an application in order to identify usability problems. The latter may seem similar to usability testing, but there is an important difference between the two approaches.
In usability testing, the participants are novice application users (users who have not used the application before) and the researcher provides them with tasks that imitate real-world scenarios. In on-site observation, the researcher observes people who use the application in their work. Users walk the researcher through their workflows in the application, pointing out what is and is not working. The researcher gains insights that are not only behavioral (representing what users do while interacting with the application), but also attitudinal (representing what people think and say, and what their opinions are).
Treejack testing
Treejack testing is a qualitative or quantitative (depending on the participant sample size), evaluative method that allows us to assess how well an information architecture and/or a navigation design pattern aligns with users' mental models. It consists of asking users to find labels representing content items within a tree-like model of the information architecture or the navigation. At Caktus, we conduct treejack testing with the help of a third-party service. It allows us to measure not only the success and failure rates, but also to see the path a user takes to locate each content item.
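Treejack platforms report success, failure, and directness figures for us, but the underlying tally is simple. The following sketch illustrates it with made-up data; the tree labels, task, and recorded paths are all hypothetical:

```python
# Hypothetical treejack results for one task: the path each participant
# took through the IA tree when asked to find "Invoices".
correct_path = ("Home", "Account", "Billing", "Invoices")
recorded_paths = [
    ("Home", "Account", "Billing", "Invoices"),                      # direct success
    ("Home", "Support", "Home", "Account", "Billing", "Invoices"),   # indirect success (backtracked)
    ("Home", "Support", "FAQ"),                                      # failure
]

# A success is any path that ends at the correct content item;
# a direct success matches the correct path exactly, with no backtracking.
successes = [p for p in recorded_paths if p[-1] == correct_path[-1]]
direct = [p for p in successes if p == correct_path]

print(f"Success rate: {len(successes)}/{len(recorded_paths)}, "
      f"direct: {len(direct)}/{len(recorded_paths)}")
```

Inspecting the indirect paths (where participants backtracked before succeeding) is often where the interesting findings are, since they show which labels initially misled users.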
First-click testing
First-click testing is typically a quantitative, evaluative, behavioral method in which users are presented with static images of an interface (either screenshots or high-fidelity mockups) and asked to complete tasks by clicking on what they interpret as interactive elements of the interface, e.g., links or buttons. The premise of this approach is founded in a 2009 study (3), which showed that a user's first click is a good indicator of successful task completion. In other words, if the user's first click is correct, they're more likely to find what they're looking for than if their first click is incorrect. When done with a large sample of participants, the results of first-click testing are a good predictor of the usability of the UI elements being tested.
At Caktus, we have used first-click testing as a qualitative method in an iterative series of tests that includes card sorting, treejack testing, and first-click testing. In this approach we employ first-click testing in a way similar to treejack testing, as a method to assess the efficacy of a design that resulted from card sorting. We leverage a third-party platform to perform first-click testing.
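The headline metric of a first-click test, the success rate, is straightforward to compute. Here is a minimal sketch with hypothetical data; the element names and click records are invented for illustration:

```python
# Hypothetical first-click results for one task: the interface element
# each participant clicked first.
first_clicks = ["Pricing", "Pricing", "Support", "Pricing", "Docs", "Pricing"]
correct_target = "Pricing"  # the element that leads to task success

# First-click success rate: share of participants whose first click
# landed on the correct element.
successes = sum(1 for click in first_clicks if click == correct_target)
success_rate = successes / len(first_clicks)

print(f"First-click success rate: {success_rate:.0%}")
```

With a large enough sample, a low success rate on a task flags the corresponding UI element or label as a likely usability problem worth investigating qualitatively.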
Analytics review
Analytics review is a quantitative, behavioral, evaluative research method. We use it to supplement the qualitative research we do. While a source of valuable data, analytics on its own does not necessarily answer questions about the quality of the user experience or about usability. In combination with qualitative methods, however, it can enhance the process of diagnosing existing problems and improving the user experience.
Analytics review consists of reviewing a set of metrics that an application's or website's analytics tool captures, for example:
- paths users take to reach certain content, sources of incoming traffic;
- keywords used to find the content of interest;
- events (or user interactions) on a page e.g., clicks, downloads, etc.;
- conversion rates;
- time spent on a page;
and more. In addition, reviewing a website’s search logs can be an insightful source of information about content users frequently look for or are not finding by means of the website’s main navigation.
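Analytics tools compute these metrics for us, but it helps to be clear about what each one means. The sketch below derives two of the metrics listed above, conversion rate and average time on page, from hypothetical per-page totals (the paths and numbers are invented):

```python
# Hypothetical per-page analytics totals:
# path -> (sessions, conversions, total seconds spent on the page)
pages = {
    "/pricing": (1200, 96, 54000),
    "/docs": (800, 8, 96000),
}

metrics = {}
for path, (sessions, conversions, seconds) in pages.items():
    metrics[path] = {
        # Conversion rate: share of sessions that completed the goal action.
        "conversion_rate": conversions / sessions,
        # Average time on page, in seconds per session.
        "avg_seconds": seconds / sessions,
    }

for path, m in metrics.items():
    print(f"{path}: {m['conversion_rate']:.1%} conversion, "
          f"{m['avg_seconds']:.0f}s avg on page")
```

Numbers like these indicate where a problem might be (for example, a high-traffic page with a low conversion rate), but qualitative methods such as usability testing are still needed to explain why it occurs.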
Selecting UX Research Methods for a Project
The research methods we employ to analyze and understand user behavior can be helpful at any stage of a project.
We may begin a redesign project with:
- Analytics review to gain insights about user behaviors on the current website or in an application
- Usability testing of the current website to uncover existing usability problems
- Competitive usability testing to reveal which digital experiences work well and which do not
- On-site observations of users with or without the technology the project is concerned with
We may test initial designs for the project by conducting:
- Treejack testing
- First-click testing
- Usability testing
And once the project is implemented, we monitor its usability by conducting moderated or unmoderated usability testing.
For further reading, I suggest the following:
- UX Research Cheat Sheet, Susan Farrell, Nielsen Norman Group
- When to Use Which User-Experience Research Methods, Christian Rohrer, Nielsen Norman Group
- Bailey R.W., Wolfson C.A., Nall J., Koyani S. (2009) Performance-Based Usability Testing: Metrics That Have the Greatest Impact for Improving a System’s Usability. In: Kurosu M. (eds) Human Centered Design. HCD 2009. Lecture Notes in Computer Science, vol 5619. Springer, Berlin, Heidelberg