Here, I want to document all the stuff I’ve been reading, and some of the key grabs for me.
At the moment, I’m wrapped up in the variety of methodologies, but will supplement that with more focused thoughts and writings on the various arms, legs and other flailing parts of this discipline.
Complete Beginner’s Guide to UX Research
Quotes from the page:
UX practitioners have borrowed many techniques from academics, scientists, market researchers, and others. However, there are still types of research that are fairly unique to the UX world.
UX research has two parts: gathering data, and synthesizing that data in order to improve usability.
We can also divide UX research methods into two camps: quantitative and qualitative.
Quantitative research is any research that can be measured numerically. It answers questions such as “how many people clicked here” or “what percentage of users are able to find the call to action?” It’s valuable in understanding statistical likelihoods and what is happening on a site or in an app.
Qualitative research is sometimes called “soft” research. It answers questions like “why didn’t people see the call to action” and “what else did people notice on the page?” and often takes the form of interviews or conversations. Qualitative research helps us understand why people do the things they do.
Common Methodologies
The various types of UX research range from in-person interviews to unmoderated A/B tests (and everything in between), though they are consistent in that they all stem from the same key methodologies: observation, understanding, and analysis.
Observation
The first step to conducting research is learning to observe the world around us. Much like beginning photographers, beginning researchers need to learn how to see. They need to notice nervous tics that may signal that their interviewees are stressed or uncertain, and pick up on seemingly minor references that may reflect long-held beliefs or thoughts that should be further probed.
Observation may seem like a simple skill, but it can be clouded by unconscious biases—which everyone has. Design researchers train themselves to observe and take notes so that they can later find patterns across seemingly diverse groups of people.
Understanding
Much like observation, understanding is something we do all the time in our daily lives. We strive to understand our coworkers, our families, and our friends, often trying to grasp a point of contention or an unfamiliar concept. But for UX researchers, understanding has less to do with disagreements and more to do with mental models.
A mental model is the image that someone has in their mind when they think of a particular phrase or situation. For example, if someone owns an SUV, their mental model of “car” will likely differ from the mental model of a smart car owner. The mental model informs the decisions we make; in the case of the car owners, when asked “how long does it take to drive to Winnipeg,” their answers will vary based on the gas mileage their vehicles get, among other things.
Design researchers need to understand the mental models of the people they interview or test, for two reasons. First, we all speak in shorthand at times. Researchers must recognize that shorthand based on the mental model of the speaker. Second, if the researcher can accurately identify the user’s mental model, he or she can share this information with the design team, and design to accommodate the model.
Analysis
Research on its own can be valuable, but in order to use the insights to inform design, it needs to be analyzed and ultimately presented to a larger team. Analysis is the process by which the researcher identifies patterns in the research, proposes possible rationale or solutions, and makes recommendations.
Some analysis techniques include creating personas or scenarios, describing mental models, or providing charts and graphs that represent statistics and user behaviors. Although the techniques described here are focused predominantly on conducting research, it’s important to remember that research is only valuable if it is shared. It does no one any good when it’s locked away in a cabinet, or forgotten in the excitement of design.
Daily Tasks and Deliverables
Every UX project is different, and the tasks that one researcher takes on will differ from those appropriate in another setting. Some of the most popular forms of research are interviews, surveys and questionnaires, card sorts, usability tests, tree tests, and A/B tests.
Interviews
One-on-one interviews are a tried and true method of communication between a researcher and a user or stakeholder. There are three main types of interviews, each of which is used in a different context and with different goals.
Directed interviews are the most common sort. These are typical question-and-answer interviews, where a researcher asks specific questions. This can be useful when conducting interviews with a large number of users, or when looking to compare and contrast answers from various users.
Non-directed interviews are the best way to learn about touchier subjects, where users or stakeholders may be put off by direct questions. With a non-directed interview, the interviewer sets up some rough guidelines and opens a conversation with the interviewee. The interviewer will mostly listen during this “conversation,” speaking only to prompt the user or stakeholder to provide additional detail or explain concepts.
Ethnographic interviews involve observing what people do as they go about their days in their “natural habitats.” In this sort of interview, the user shows the interviewer how they accomplish certain tasks, essentially immersing the interviewer in their work or home culture. This can help researchers understand the gaps between what people actually do and what they say they do. It can also shed light on things that users do when they are feeling most comfortable.
Surveys and Questionnaires
Questionnaires and surveys are an easy way to gather a large amount of information about a group, while spending minimal time. These are a great research choice for projects that have a large and diverse group of users, or a group that is concerned with anonymity. A researcher can create a survey using tools like Wufoo or Google Docs, email it out, and receive hundreds of responses in just minutes.
There are downsides to surveys and questionnaires, though. The researcher can’t interact directly with the respondents, and therefore can’t help them interpret questions or rephrase them if the wording isn’t quite right; and researchers typically have a limited ability to follow up. Surveys see a far higher response rate when they do not require a login or contact information, and this anonymity makes it impossible to ask for clarification or further details.
Card Sorts
Card sorts are sometimes done as part of either an interview or a usability test. In a card sort, a user is provided with a set of terms, and asked to categorize them. In a closed card sort, the user is also given the category names; in an open card sort the user creates whatever categories he or she feels are most appropriate.
The goal of a card sort is to explore relationships between content, and better understand the hierarchies that a user perceives. Many content strategists and information architects rely on card sorts to test out hierarchy theories, or kickstart work on a site map.
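To make the analysis step concrete, here is a minimal sketch (in Python, with made-up cards and groupings) of one common way to analyze open card sort results: counting how often each pair of cards lands in the same group across participants. Pairs that co-occur for most participants are good candidates for the same site section.

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards is placed in the same group,
    across all participants' open card sorts."""
    pairs = Counter()
    for groups in sorts:  # one participant's sort: {category: [cards]}
        for cards in groups.values():
            for a, b in combinations(sorted(cards), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical results from three participants sorting five cards
sorts = [
    {"Account": ["Login", "Profile"], "Help": ["FAQ", "Contact", "Returns"]},
    {"You": ["Login", "Profile", "Contact"], "Support": ["FAQ", "Returns"]},
    {"Settings": ["Login", "Profile"], "Questions": ["FAQ", "Contact", "Returns"]},
]

for pair, count in co_occurrence(sorts).most_common(3):
    print(pair, count)
```

Dedicated tools automate this (and much more), but the underlying idea is just pairwise agreement across participants.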
Usability Tests
Usability testing involves asking potential or current users of a product or service to complete a set of tasks and then observing their behavior to determine the usability of the product or service. This can be done using a live version of a site or app, a prototype or work-in-progress, or even using clickable wireframes or paper and pencil.
While there are many variations and styles of usability tests, there are three that are commonly used: moderated, unmoderated, and guerrilla.
Moderated usability tests are the most traditional type of test. They can happen in person, or via screenshare and video. Whole usability labs are set up, complete with one-way mirrors for stakeholders to observe, for the purpose of conducting moderated usability tests. In a moderated test an unbiased facilitator talks with the user, reading aloud the tasks and prompting the user to think aloud as he or she accomplishes the tasks. The facilitator’s role is to act as a conduit between stakeholders and the user, phrasing questions to evaluate the effectiveness of a design and testing assumptions while helping the user feel comfortable with the process.
Unmoderated usability tests, sometimes also known as asynchronous research, are conducted online, at the user’s convenience. The tasks and instructions are delivered via video or recorded audio, and the user clicks a button to begin the test and record his or her screen and audio. Just like in the moderated test, users are encouraged to speak their thoughts aloud, though there is no facilitator to ask follow-up questions. Unmoderated tests are available through numerous online sites and can be significantly cheaper than moderated tests.
Guerrilla testing is a modern, lightweight take on traditional tests. Instead of renting a lab, guerrilla research is typically done out in the community; users are found at coffee shops or subway stations and asked to complete basic tasks with a website or service, in exchange for a few dollars, a coffee, or just out of the goodness of their hearts. While guerrilla testing is a great option, particularly on a budget, it is best used only for products or services with a large user base. More niche products will struggle to find reliable information from the random selection acquired in guerrilla testing.
Tree Tests
Just as card sorts are a great way to gather information before a website’s architecture has been created, tree tests are helpful in validating that architecture. In a tree test, users are given a task and shown the top level of a site map. Then, much like in a usability test, they are asked to talk through where they would go to accomplish the task. However, unlike in a usability test, the user doesn’t see a screen when they choose a site section. Instead, they will see the next level of the architecture. The goal is to identify whether information is categorized correctly and how appropriately the nomenclature reflects the sections of the site.
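Tree-test results are usually summarized as task success and first-click accuracy. The sketch below uses an entirely made-up task and made-up participant paths to show both measures:

```python
def success_rate(paths, correct):
    """Share of participants whose chosen path matches the correct route."""
    return sum(path == correct for path in paths) / len(paths)

def first_click_rate(paths, correct):
    """Share of participants whose first choice below the root was right."""
    return sum(path[:2] == correct[:2] for path in paths) / len(paths)

# Hypothetical task: "Find out how to return an item."
correct = ["Home", "Support", "Returns"]
paths = [
    ["Home", "Support", "Returns"],
    ["Home", "Shop", "Sale"],
    ["Home", "Support", "Contact"],
    ["Home", "Support", "Returns"],
]
print(success_rate(paths, correct), first_click_rate(paths, correct))
```

A high first-click rate with a low success rate often means the top-level labels are fine but a deeper level of the hierarchy is confusing.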
A/B Tests
A/B testing is another way of learning what actions users take. An A/B test is typically chosen as the appropriate research form when designers are struggling to choose between two competing elements. Whether the options are two styles of content, a button vs. a link, or two approaches to a home page design, an A/B test requires randomly showing each version to an equal number of users, and then reviewing analytics on which version better accomplished a specific goal. A/B testing is particularly valuable when comparing a revised screen to an older version, or when collecting data to prove an assumption.
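The “reviewing analytics” step typically means a statistical comparison of the two versions’ conversion rates. Here is a minimal sketch, with made-up conversion counts, using a standard two-proportion z-test:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: version A (link) vs. version B (button), 1,000 users each
z = two_proportion_z(conv_a=110, n_a=1000, conv_b=150, n_b=1000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 95% level
```

Most A/B testing tools report an equivalent significance figure automatically; the point is that the winner is determined by the data, not by eyeballing two raw counts.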
Tools of the trade
User Research has the potential to be a sizable undertaking, sometimes to the point that budgetary and scheduling concerns scare people away. Fortunately, today we often see a more casual, habitual approach. The tools we have available are responsible for much of that shift.
Ethnio
Ethnio was the first moderated remote research software when it launched, and it’s still going strong. Ethnio finds users who are currently using a site or app, and (with their permission) allows interviewers to ask them questions about their experience as they go. It automates many elements of the typical in-person test, including real-time notifications and paying participants with Amazon gift cards. Ethnio has a fourteen-day free trial, and four pricing options, to accommodate businesses of every size.
Optimal Workshop
Optimal Workshop has everything! The full Workshop is a bundle of four research tools, all of which are also available sold separately (and very affordably). Treejack is great for remotely testing information architecture, either to test the nomenclature or the hierarchies themselves. Optimal Sort provides online card sorting, to see how users choose to organize content. Chalkmark offers heat maps of click patterns across a site, and Reframer is a tool for taking notes and identifying themes easily. All come highly recommended.
Learn more about Optimal Workshop
SurveyMonkey
Surveys and questionnaires are great ways to gather information, but they’re most useful when hundreds of responses can be seen at once. Enter SurveyMonkey, an online survey-creation and reporting tool, which allows people to customize and brand their own surveys and then send them out via social media, embed them into websites, or integrate with mass mailings. SurveyMonkey also allows for easy analysis and reporting when the results come in. It’s available as a free Basic version, or for a monthly fee with additional features.
UsabilityHub
For quick design iterations, gut checks, and clear user feedback, UsabilityHub is an incredible resource for low budget teams. With a super-simple interface (yay!) and fast turn-around times, iterative testing and research is a few clicks away through UsabilityHub.
UserTesting.com
When it’s not possible to schedule a real-time test with users, UserTesting.com is a great way to see how people use a site. Researchers can create a series of tasks, and then receive videos from participants—either pre-chosen, or randomly selected. Researchers are able to see a video of the participant using the site, and speaking aloud to explain what they’re doing. UserTesting.com offers Basic and Pro options, and prices accordingly.
Learn more about UserTesting.com
UserZoom
The good news is, whatever you need, UserZoom has it. Usability testing, both moderated and unmoderated, remote testing for mobile and desktop, benchmarking, card sorting, tree testing, surveys, and rankings: they’ve got it! The bad news, as can be expected with any product this robust, is that it can be overwhelming to learn, and it is expensive. Still, for organizations with the budget to handle it, UserZoom is a solid, effective choice.
Learn more about UserZoom
When to Use Which User-Experience Research Methods: 20 UX Methods in Brief
Here’s a short description of 20 common user research methods:
Usability-Lab Studies: participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.
Ethnographic Field Studies: researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question.
Participatory Design: participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.
Focus Groups: groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.
Interviews: a researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.
Eyetracking: an eyetracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.
Usability Benchmarking: tightly scripted usability studies are performed with several participants, using precise and predetermined measures of performance.
Moderated Remote Usability Studies: usability studies conducted remotely with the use of tools such as screen-sharing software and remote control capabilities.
Unmoderated Remote Panel Studies: a panel of trained participants who have video recording and data collection software installed on their own personal devices uses a website or product while thinking aloud, having their experience recorded for immediate playback and analysis by the researcher or company.
Concept Testing: a researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants, and either in person or online.
Diary/Camera Studies: participants are given a mechanism (diary or camera) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.
Customer Feedback: open-ended and/or close-ended information provided by a self-selected sample of users, often through a feedback link, button, form, or email.
Desirability Studies: participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.
Card Sorting: a quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.
Clickstream Analysis: analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.
A/B Testing (also known as “multivariate testing,” “live testing,” or “bucket testing”): a method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.
Unmoderated UX Studies: a quantitative or qualitative and automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.
True-Intent Studies: a method that asks random site visitors what their goal or intention is upon entering the site, measures their subsequent behavior, and asks whether they were successful in achieving their goal upon exiting the site.
Intercept Surveys: a survey that is triggered during the use of a site or application.
Email Surveys: a survey in which participants are recruited from an email message.
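Of the methods above, clickstream analysis is the most mechanical: at its simplest it boils down to counting page-to-page transitions in session logs. A small sketch with invented sessions:

```python
from collections import Counter

def page_transitions(sessions):
    """Tally page-to-page transitions from raw clickstream sessions."""
    moves = Counter()
    for pages in sessions:
        for a, b in zip(pages, pages[1:]):
            moves[(a, b)] += 1
    return moves

# Hypothetical sessions: each list is the ordered pages one visitor saw
sessions = [
    ["home", "pricing", "signup"],
    ["home", "blog"],
    ["home", "pricing", "home"],
]
print(page_transitions(sessions).most_common(1))
```

Real analytics tools aggregate exactly this kind of data into funnels and flow diagrams.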
I guess which tools you use depends on where you are in a project, or the point at which you come into one.
There is a lot said about the difference between Agile, Sprint, Lean UX, etc. At the moment, they feel like constraints, or workflows that take the place of common sense, or at least my understanding of common sense. I’m reading Lean UX, and a lot of aspects of it appeal to me so far, like the quick iteration and ongoing research. It’s something I’ll have to constantly balance, and once I have a group of fantastically resolved tools based on these research methodologies, I’ll be in a more confident place.
Some Methodological Definitions:
Agile:
“Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan”
https://agilemanifesto.org/
Stober, T., & Hansmann, U. (2010). Overview of Agile Software Development. In T. Stober & U. Hansmann (Eds.), Agile Software Development: Best Practices for Large Software Development Projects (pp. 15–33). https://doi.org/10.1007/978-3-540-70832-2_2
http://www.agilenutshell.com
Lean UX:
“The Lean principles underlying Lean Startup apply to Lean UX in three ways.
First, they help us remove waste from our UX design process. We move away from heavily documented handoffs to a process that creates only the design artifacts we need to move the team’s learning forward.
Second, they drive us to harmonize our “system” of designers, developers, product managers, quality assurance engineers, marketers, and others in a transparent, cross-functional collaboration that brings non-designers into our design process.
Last, and perhaps most important, is the mindset shift we gain from adopting a model based on experimentation. Instead of relying on a hero designer to divine the best solution from a single point of view, we use rapid experimentation and measurement to learn quickly how well (or not) our ideas meet our goals. In all of this, the designer’s role begins to evolve toward design facilitation, and with that we take on a new set of responsibilities.”
Gothelf, J., & Seiden, J. (2016). Lean UX: Designing Great Products with Agile Teams. O’Reilly Media, Inc.
https://www.interaction-design.org/literature/article/a-simple-introduction-to-lean-ux
Waterfall
The waterfall model is a traditional and linear approach to software development. Each phase is completely finished before the next one starts. Every project starts with the requirement phase, where all the requirements are collected, documented, and discussed with all the stakeholders.
Stober, T., & Hansmann, U. (2010). Traditional Software Development. In T. Stober & U. Hansmann (Eds.), Agile Software Development: Best Practices for Large Software Development Projects (pp. 15–33). https://doi.org/10.1007/978-3-540-70832-2_2
Sprint
“The sprint is a five-day process for answering critical business questions through design, prototyping, and testing ideas with customers. Developed at GV, it’s a “greatest hits” of business strategy, innovation, behavior science, design thinking, and more—packaged into a battle-tested process that any team can use.”
https://www.gv.com/sprint/
Scrum
Self-organizing and cross-functional teams who work in sprints (usually 2 weeks). Scrum teams practice rituals such as sprint planning, stand-up meetings/daily scrum, sprint review, and retrospectives to address complex and adaptive problems while delivering products effectively.
https://www.scrum.org/resources/what-is-scrum
Kanban
Workflow framework to help teams visualise and communicate their work, set work-in-progress limits, and achieve incremental change. It’s a lean method inspired by the success of the Toyota manufacturing system, where production is based on customer demand.
https://www.atlassian.com/agile/kanban
Hammarberg, M., & Sunden, J. (2014). Kanban in Action. Manning Publications Co.
Human-Centred Design
“When you understand the people you’re trying to reach—and then design from their perspective—not only will you arrive at unexpected answers, but you’ll come up with ideas that they’ll embrace.”
https://www.designkit.org/resources/1
Kaizen
Japanese term for continuous improvement.
“Do little things gradually better every day.
Set and achieve ever higher standards.
Treat everyone as a customer.
Continually improve in all areas and on all levels.”
Masaaki Imai