The complete guide to user testing websites, apps, and prototypes




Contents

Intro
Define your objective
Identify what and who you're studying
  Determine your product and devices
  Select your test participants
  Other considerations
Build your test plan
  Create a series of tasks
  Write clear questions
Launch a dry run
Beware of errors
Analyze your results
Conclusion



Remote user research is fast, reliable, and scalable.

Intro

User feedback is the key to making any business successful, whether you're launching an app, redesigning a website, refining product features, or making ongoing improvements to your customer experience. So how do you go about getting that feedback?

Remote user research is a fast, reliable, and scalable way to get the insights you need to improve your customer experience. Unlike traditional in-lab usability testing or focus groups, you can recruit participants and get results in a matter of hours, not days or weeks. You're not limited by your location, schedule, or facilities. You'll get candid feedback from real people in your target demographic in their natural environment: at home, in a store, or wherever they'd ordinarily interact with your product.

Performing remote user research on a regular basis helps you:

• Validate your product ideas before committing resources
• Influence decision-makers in your company to make improvements
• Align your team on priorities
• Identify opportunities to differentiate your company from your competition
• Transform your company's approach to customer experience

In this eBook, we’ll cover how to plan, conduct, and analyze your remote user research. If you’re new to research, you’ll learn the ropes of setting up a study and getting straight to the insights. And if you’re an experienced researcher, you’ll learn best practices for applying your skills in a remote study setting.



Define your objective

The first step toward gathering helpful feedback is setting a clear and focused objective. If you don't know exactly what sort of information you need to obtain by running your study, it'll be difficult to stay on track throughout the project. Ask yourself: What am I trying to learn?

You don't need to uncover every usability problem or every user behavior in one exhaustive study. It's much easier and more productive to run a series of smaller studies with one specific objective each. That way, you'll get focused feedback in manageable chunks. Be sure to keep your objective clear and concise so that you know exactly what to focus on.

Example of a complex objective: "Can users easily find our products and make an informed purchase decision?" This objective actually contains three very different components: 1. finding a product, 2. getting informed, and 3. making a purchase.

Example of a better objective: "Can users find the information they need?"

As you set your objective, think about the outcomes your stakeholders will care about. You may be interested in identifying how users interact with the onboarding flow, for example, but that won't be helpful if your team needs to identify ways to improve the monetization process. Keeping your objective front and center will help you structure your studies to gain insights on the right set of activities. It'll also guide you in whom you recruit to participate in your study, the tasks they'll perform, and what questions you should ask them.



Identify what and who you're studying

Once you know your research objective, it's time to determine a few specifics. First, think about what type of product you're studying and which devices will be used to interact with it. Are you looking for feedback on a prototype? A released or unreleased mobile app? A website? If you have multiple products you want to learn about, then you'll want to set up multiple studies; it's best to test one product at a time so you get the most focused feedback. Keep the URL or app file handy when you set up your study. This will be where your participants will start their study.

PRODUCT TYPE: STARTING PLACE
• Prototype (in a tool like InVision): Shareable URL
• Static image: Upload to Box or Dropbox; copy shareable URL
• Website: URL
• Released app: Name of the app in the App Store/Play Store
• Unreleased iOS app: .IPA file
• Unreleased Android app: .APK file

You can user test digital products at any stage of development, from wireframe to prototype to live product.

Consider which devices and/or browsers you’ll want to include in your study. If your product is available on multiple devices, we recommend testing your mobile experience with at least as many users as your desktop experience. Next, consider who you will need to recruit to participate in your study. With remote user testing, you don’t need to search far and wide to find people to give you feedback. You simply select your demographics or other requirements when you set up your study.



In many cases, you can use basic demographic criteria (such as age, gender, and income level) to recruit your study participants. If you have a broad target market or if you're simply looking to identify basic usability issues in your product, then it's fine to use broad demographics. The key is to get fresh eyes on your product and find out whether people understand how to use it.

In other cases, you'll want to get a closer match for your target audience, such as people who work in a certain industry, make the health insurance decisions for their household, own a particular kind of pet, etc. This is most appropriate if you're interested in uncovering trends and opinions from your target market rather than pure usability.

To recruit the right people for your study, you'll set up a screener question. Screener questions are qualifying questions that allow only the people who select the "right" answer to participate in your study.

So how many test participants should you include? It's been demonstrated that five participants will uncover 85% of the usability problems on a website, and additional users will produce diminishing returns. Resist the temptation to double or triple the number of users to uncover 100% of your usability problems. It's more efficient (and easier) to run a study with five participants, make changes, run another study with another five users, and so on.

Finally, if you are looking for trends and insights beyond basic usability issues, you may want to include a larger sample size. We recommend five to eight participants per audience segment, so if you have three distinct buyer personas, you will want to include a total of 15-24 participants in your study.

USERTESTING TIP: The broader your demographic requirements are, the faster you'll get your results because more people will qualify for your study.

In rare instances, you may need to use your own existing customers for your study, such as to test out an advanced feature of your product that a non-customer wouldn't understand.

Other considerations

Sometimes, you may be interested in a more elaborate study. These methodologies are also possible with remote user research:

• Longitudinal studies: Watching users interact with a product over a period of time.
• Competitive benchmarking studies: Comparing the user experience of several similar companies over time.
• Beyond-the-device studies: Observing non-digital experiences, such as unboxing a physical item or making a purchase decision in a store.
• Moderated studies: Leading users through your study one-on-one in real time.
• Benchmarking studies: Tracking changes in your user experience on a monthly or quarterly basis.

To learn more about how and when to use each of these techniques, check out our UX Research Methodologies Guidebook.
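As an aside on sample size: the widely cited five-user figure comes from a simple discovery model. If each participant independently encounters a given problem with some probability (commonly estimated at 31%), the share of problems found by n participants is 1 - (1 - p)^n. A quick sketch of that arithmetic:

```python
def problems_found(n_users, problem_visibility=0.31):
    """Expected share of usability problems uncovered by n users,
    assuming each user independently hits a given problem with the
    stated probability (the commonly cited 31% estimate)."""
    return 1 - (1 - problem_visibility) ** n_users

# Five users uncover roughly 85% of problems; each additional
# user adds less and less (diminishing returns).
for n in (1, 5, 15):
    print(n, round(problems_found(n), 2))
```

This is why repeated small rounds beat one large study: the sixth through fifteenth users mostly re-find problems the first five already surfaced.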



Build your test plan

Your test plan is the series of tasks and questions your participants will follow and respond to during the study. The key to a successful study is a well-designed plan. Ideally, your test plan will result in qualitative and quantitative feedback. Ask users to perform specific tasks and then answer questions that will give you the insights you need in a measurable way.

A task should be an action or activity that you want a user to accomplish at that time. Example of a task: Go through the checkout process as far as you can without actually making a purchase.

Use a question when you want to elicit some form of feedback from a user in their own words. Example of a question: Was anything difficult or frustrating about this process?

Using questions to get quantitative measurements: Rating scale, multiple choice, and written response questions can be helpful when you're running a large number of studies and you're looking to uncover trends. You'll be able to quickly glance at the resulting data rather than having to watch every video. From there, you can zero in on the outliers and the most surprising responses.

We recommend taking a few moments to think back to your objective and consider the best way to convey results to your team. When the results come back, how do you want that feedback to look? Will you want quantitative data so you can create graphs? Written responses that you can use to create word clouds? Verbal responses so you can create a video clip and share it with your team? Establishing the type of deliverable you need from the outset will help you determine the right way to collect the information you need.



This chart highlights various question types and shows the kind of results that can be expected when those questions are put to use.

Verbal response
  Example: Describe and demonstrate what, if anything, was most frustrating about this app.
  Benefits: Produces qualitative feedback and makes great clips for a highlight reel.

Multiple choice
  Example: Do you trust this company? • Yes • No
  Benefits: Great for collecting categorical responses. (These can be nominal, dichotomous, or even ordinal.)

Rating scale
  Example: How likely are you to return to this site again? "Not at all likely" to "Very likely"
  Benefits: Good for collecting ordinal variables (low, medium, high).

Written response
  Example: What do you think is missing from this page, if anything?
  Benefits: Good for running post-study analysis and collecting quotes for user stories.

Create a series of tasks

The key to collecting actionable feedback is getting your participants to take action by performing specific tasks. Tasks are the steps that guide the user from the start to finish of your study. Along the way, you'll ask them to perform a series of activities on your app, site, or prototype. As they work, they'll be sharing their thoughts out loud.

When you're creating tasks for your study, focus on the insights you want to gather about the specific objective you determined. If you have a lot of areas you want to test, we recommend breaking these up into different studies. In most cases, it's best to keep your study around 15 minutes in length, so keep this in mind as you plan your tasks.

Consider using both broad and specific tasks in your tests

Broad, open-ended tasks help you learn how your users think. These types of tasks can be useful when considering branding, content, layouts, or any of the "intangibles" of the user experience. They give you the chance to assess usability, content, aesthetics, and users' sentimental responses. Broad tasks are also good for observing natural user behavior. If you give participants the space to explore, they will!

Here's an example of a broad task: Find a hotel that you'd like to stay in for a vacation to Chicago next month. Share your thoughts out loud as you go.

USERTESTING TIP: When you're not sure where to focus your test, try this: Run a very open-ended test using broad tasks like "Explore this app as you naturally would for about 10 minutes, speaking your thoughts aloud." You're sure to find areas of interest to study in a more targeted follow-up test.

Specific tasks help pinpoint where users get confused or frustrated trying to do something specific. They’re useful when you’re focused on a particular feature or area of your product and you need to ensure that the participants interact with it. Here’s an example of a specific task: Use the search bar to find hotels in Chicago, specifying that you want a non-smoking double room from July 10-15. Then sort the results by the highest ratings.



Plan tasks using logical flow

The structure of your study is important. We recommend starting with broad tasks (exploring the home page, using search, adding an item to a basket) and moving in a logical flow toward specific tasks. The more natural the flow is, the more realistic the study will be, and the better your results will be.

Example of poor logical flow: Create an account > find an item > check out > search the site > evaluate the global navigation options.

Example of good logical flow: Evaluate the global navigation options > search the site > find an item > create an account > check out.

Ask yourself whether you're interested in discovering the users' natural journey or whether you need them to arrive at a particular destination. If it's about the journey, give the participants the freedom to use the product in their own way. But if you're more focused on the destination, guide them to the right location through your sequence of tasks.

If you're interested in both the journey and the destination, give the users the freedom to find the right place on their own. Then, in subsequent tasks, tell them where they should be. You can even include the correct URL or provide instructions on how to navigate to that location.

Also, if you think a specific task will require the user to do something complicated or has a high risk of failure, consider putting that task near the end of the study. This will help prevent the test participants from getting stuck or off track right in the beginning, throwing off the results of your entire test.

Make task instructions concise

People are notorious for skimming through written content, whether they're interacting with a digital product or reading instructions for a user test. One way to ensure that your participants read your whole task is to make the task short and your language concise.

Example of poor task wording: "Add the item to your cart. Now shop for another item and add it to your shopping cart. Then change the quantity of the first item from 1 to 2. Now go through the whole checkout process using the following information..."

Example of good, concise task wording: This single task should be split into at least four separate tasks:
1. Add the item to your cart.
2. Shop for another item and add it to your cart.
3. On the shopping cart, please update the quantity of the first item you added from 1 to 2.
4. Now proceed through the entire checkout process.



Write clear questions

Once you've mapped out a sequence of tasks for users to attempt, it's time to start drafting your questions. It's important to structure questions accurately and strategically to get reliable answers and gain the insights that you really want.

Tips for gathering factual responses

Don't use industry jargon. Terms like "sub-navigation" and "affordances" probably won't resonate with the average user, so don't include them in your questions unless you're certain your actual target customer uses those words on a daily basis. Define new terms or concepts in the questions themselves (unless the goal of your study is to see if they understand these terms/concepts).

If you're asking about some sort of frequency, such as how often a user visits a particular site, make sure you define the timeline clearly. Always put the timeline at the beginning of the sentence.

BAD: How often do you visit Amazon.com?
BETTER: How often did you visit Amazon.com in the past six months?
BEST: In the past six months, how often did you visit Amazon.com?

After you've written the question, consider the possible answers. If the respondent could give you the answer "It depends," then you should make the question more specific.

It's best to ask about first-hand experiences. People are notoriously unreliable in predicting their own future behavior, so ask about what people have actually done, not what they would do. It's not always possible, but try your best to avoid hypotheticals and hearsay.

Example of asking about what someone will do or would do: How often do you think you'll visit this site in the next six months?
Example of asking about what someone has done: In the past three months, how often have you visited this site?
Example of hearsay: How often do your parents log into Facebook? (Better: skip this question and ask the parents directly!)



Tips for gathering subjective data Opinions are tricky. To accurately gather user opinions in a remote study, the name of the game is making sure that all participants are answering the same question. The stimulus needs to be standardized. To make sure your participants are all responding to the same stimulus, give them a reminder of which page or screen they should be looking at when they respond to the question. For example, “Now that you’re in the ‘My Contacts’ screen, what three words would you use to describe this section?”

Subjective states are relative. "Happy" in one context can mean something very different from "happy" in another context. For instance:

Happy | Not happy
(Happy = the opposite of not happy)

Happy | Neutral | Unhappy
(Happy = the best!)

Very happy | Happy | Neutral | Unhappy | Very unhappy
(Happy = better than neutral, but not the best)

Plus, emotional states are very personal and mean different things to different people. Being "very confident" to a sheepish person may mean something very different from what it means to an experienced executive.

You're not judging the intelligence of your respondents when analyzing their results, so make sure that your questions don't make them feel that way. Place the fault on the product, not the test participant.

Bad example: "I was very lost and confused." (agree/disagree)
Good example: "The site caused me to feel lost and confused." (agree/disagree)

Be fair, realistic, and consistent with the two ends of a rating spectrum.

Bad example: "After going through the checkout process, to what extent do you trust or distrust this company?" I distrust it slightly ←→ I trust it with my life
Good example: "After going through the checkout process, to what extent do you trust or distrust this company?" I strongly distrust this company ←→ I strongly trust this company



Avoid asking vague or conceptual questions

Break concepts up when you're asking the questions and put them back together when you're analyzing the results. For example, imagine that you want to measure parents' satisfaction with a children's online learning portal. Satisfaction is a vague and complex concept. Is it the thoroughness of the information? Is it the quality of interaction between the teachers and students online? Is it the visual design? Is it the quality or difficulty of the assignments posted there? Instead of asking about overall satisfaction, ask about all the criteria independently. When you're analyzing the results, you can create a composite "satisfaction" rating based on the results from the smaller pieces.

Avoid leading questions

With leading questions, you influence the participants' responses by including small hints in the phrasing of the questions themselves. More often than not, you'll subconsciously influence the outcome of the responses in the direction that you personally prefer. Leading questions will result in biased, inaccurate results, and you won't actually learn anything helpful. In fact, it might lead you to make flawed decisions. While it may lead to the answer you "want" to hear, it doesn't help your team make improvements, so don't be tempted to use leading questions!

BAD: "How much better is the new version than the original home page?"
GOOD: "Compare the new version of the home page to the original. Which do you prefer?"
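Rebuilding a composite "satisfaction" rating at analysis time can be as simple as averaging the per-criterion scores. A minimal sketch, assuming 1-5 ratings and purely illustrative criterion names (a real study might weight the criteria by stakeholder priority instead):

```python
from statistics import mean

# Hypothetical 1-5 ratings from one participant, one per criterion
# asked independently; the criterion names are illustrative only.
criteria_ratings = {
    "thoroughness of information": 4,
    "teacher-student interaction": 3,
    "visual design": 5,
    "assignment quality": 4,
}

# Unweighted mean as the composite "satisfaction" score.
satisfaction = mean(criteria_ratings.values())
print(satisfaction)  # 4
```

The per-criterion numbers stay available too, so you can report which specific piece dragged the composite down.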

USERTESTING TIP: If you’re asking about task success, remember to define what a success is. If the previous task instructs a user to find a tablet on Amazon and add it to the cart and you ask “Were you successful?” be sure to clarify whether you are asking about finding a tablet or adding it to the cart.



Best practices for different question types

Rating scale questions allow you to measure participants' reactions on a spectrum. They're a great way to benchmark common tasks and compare the results with a similar test run on your competitor's product.

• Use relative extremes: Make the negative feeling have the lowest numerical value and the positive answer have the highest numerical value. In other words, make difficult = 1 and easy = 5, not the other way around.
• Stay consistent throughout the test! Use the same end labels and the same wording when you're repeating a question.
• Consider asking "why?" after a multiple choice or rating scale question. Then, when you get your results back, you can go back and hear the participants discuss their answers. Asking "why?" also prompts people to think critically about their answer.

Multiple choice questions are great for collecting yes/no responses or answers that can't be applied to a scale.

• Multiple choice responses should be exhaustive, meaning that every possible response should be included in your response options. At the same time, you want a manageable number of responses. We recommend two to six response options per question. If you suspect that there are just too many options, do your best to guess which options will be mentioned most, and then include an "Other" option.
• Ask only one question at a time. Don't do this: "Did you find the tablet you were looking for, and was it where you expected to find it? Yes/No" Instead, break it up into two separate questions.
• Choose mutually exclusive responses, since users will only be able to select one answer. If it's possible for more than one answer to be true, include a "More than one of these" option.

Written response questions result in short answers that can be used to collect impressions and opinions.

• Ask questions that can be answered in a couple of words or sentences at most. Typing long responses can become frustrating for participants, especially on mobile devices. Good example: What three words would you use to describe this app?
• Use these questions sparingly. The greatest value in remote user research usually comes from hearing participants speak their thoughts aloud naturally. Written response questions are good for getting a snapshot of the users' impressions, but if you overuse them, the quality of the responses will often degrade after several questions.
• Create a word cloud from all your users' responses to quickly see which words they're using to describe their experience.
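The word-cloud idea boils down to counting word frequencies across written responses. A minimal sketch with made-up responses (any real word-cloud tool does the same counting under the hood):

```python
import re
from collections import Counter

# Hypothetical answers to "What three words would you use to
# describe this app?" In practice, these come from your study export.
responses = [
    "clean, fast, intuitive",
    "fast simple modern",
    "confusing but fast",
]

# Lowercase, tokenize, and drop filler words before counting.
stopwords = {"but", "and", "the", "a"}
words = [
    word
    for response in responses
    for word in re.findall(r"[a-z]+", response.lower())
    if word not in stopwords
]
counts = Counter(words)
print(counts.most_common(3))  # "fast" appears in every response
```

Even without rendering a cloud, the sorted counts tell you at a glance which impressions dominate.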



Launch a dry run

Before you launch your study to all of your participants, we recommend conducting a dry run (sometimes called a pilot study) with one or two participants. This will help you determine whether there are any flaws or confusing instructions within your test. The primary goal is to make sure that people who read your tasks and questions interpret them in the way they were meant to be understood. Here's a good structure for doing this:

1. Release your study to just one or two participants.
2. Listen to them as they process each task and question, and take note of any trouble they encounter while trying to complete their first pass.
3. Once you've made adjustments and you feel pretty good about your test, have one more person run through it to ensure you've cleared up any confusing or leading questions.

If you set your study up in UserTesting, you can easily run one test, review it, and then add more participants to the same study. Or, if you need to make changes, you can create a similar study with the click of a button and then edit the tasks or questions as needed.

At the end of every study, it's a good practice to review your notes to identify any weak portions of your test and rewrite those tasks or questions for next time. This will save you time in the long run, especially if you're testing out prototypes first and then running more people through a similar test once you've pushed out the new changes to your website or app.

Another reason to spend time getting your questions just right is if you plan to run a benchmark study. When you run an ongoing review of your website or app over time, you shouldn't change your questions along the way, or you'll risk skewing the results and ruining any chances of an accurate comparison.



Beware of errors There are several common errors that can skew your data. The good news is that if you’re aware of them, you can avoid them. Below, we’ve outlined the most common error types along with some suggestions on reducing your chances of running into these problems in your studies.

SAMPLING ERROR

Sampling error occurs when you recruit the wrong participants to participate in your study. When this happens, you may end up with a bunch of opinions from people outside your target market, and therefore they aren't very helpful. For example, perhaps your target market includes SaaS sales executives, so you tried to recruit people who work in software sales, but the actual participants ended up being electronics store retail associates.

SOLUTION: Ask clear and precise screener questions to help qualify potential study participants. If you're uncertain whether your screener questions are accurately capturing the right users, do a dry run with a small handful of participants. As the first step of your study, have them describe aloud what they do for a living (or how familiar they are with your industry, or whatever criteria will help you determine whether they're actually your target market).

RESEARCHER ERROR

With this type of error, participants misunderstand a task or question because of the way it was worded. Study participants will often take instructions literally and search for the exact terminology that you include in your tasks.

SOLUTION 1: Try out your study with several people and monitor their reaction to your questions. You'll quickly learn whether or not your questions are accurately communicating what you want them to.

SOLUTION 2: Be aware of your target audience and ask questions in a manner that naturally resonates with them. Use plain language. Slang, jargon, regionalisms, and turns of phrase can easily confuse participants during a study.



RESPONDENT ERROR

In this case, the participants are giving you inaccurate or false information. There are several reasons that this may occur:

• They don't trust you with their personal information.
• They're uncomfortable sharing details of their personal lives.
• They've become fatigued and have resorted to bogus responses to get through the test quickly.
• They don't understand whether you're looking for their opinion or the "right" answer.

SOLUTION 1: Reassure participants that their responses won't be shared publicly.

SOLUTION 2: At the very beginning of your study, be sure to explain that if they have to fill out any personal information, their responses will be blurred out to protect their identity.

SOLUTION 3: Keep your test short (around 15 minutes, in most cases) so you don't fatigue your participants.

SOCIAL DESIRABILITY ERROR

With this error, participants feel pressured to give a response that they think is most popularly accepted in society, even if it's not true. For example, if you ask people about their tech-savviness, people may over-report their abilities because they think it's "better" than not being tech-savvy.

SOLUTION 1: When you're looking for test participants, be sure to explain that you value the skillsets or demographic characteristics you're requesting. Emphasize that you hope to learn how your product will be useful or beneficial to people like them.

SOLUTION 2: Reassure your participants that they'll remain anonymous.

FAULTY PARTICIPANT RECALL

These errors occur when a participant is unable to correctly remember the event you're asking about. This happens when your question asks them to recall something too far in the past or in too much detail.

SOLUTION: Do a gut check. Can you remember the specifics of something similar? If not, revise your question.

ACQUIESCENCE BIAS

When acquiescence occurs, the participant will tell you what they think you want to hear out of fear of offending you. For example, they may dislike your app but don't want to make you feel bad about your work. This is more common in moderated tests than unmoderated tests.

SOLUTION 1: If you're too close to the product (for example, if you're the designer), you may want to use an impartial moderator to moderate your tests for you. Skilled researchers can help ensure impartiality, reducing barriers to sharing the truth.

SOLUTION 2: Reassure participants that you value their truth and honesty and that none of their feedback will be taken personally.



Analyze your results Once you’ve launched your study and gotten your results back, it’s time to get to work on analysis. Before you do anything, start by thinking back to your objective. The objective is important and easy to lose sight of. Be sure not to get distracted by irrelevant findings, and stay focused. (However, you may choose to keep a side record of interesting findings that aren’t related to the objective but may prove useful later.) With your objective in mind, you can jump into the data and videos.

WHAT TO LOOK FOR

When we're reviewing full recordings of user tests, we often make annotations along the way. These annotations are captured alongside the video, so you can easily jump back to interesting moments later on.

As you review the data from your questions, keep an eye out for any responses that are outside the norm. If nine participants thought a task was fairly easy but one thought it was extremely difficult, investigate further by watching the video that goes along with the unusual response. What about that participant's experience was different?

Explore correlations When reviewing rating scale and multiple-choice responses, look for correlations between the length of time spent on a task and negative responses from participants. This is a starting point for identifying areas of the user experience that may be particularly troublesome. It’s important to remember, however, that correlation doesn’t tell the whole story. Don’t jump to conclusions based on correlations alone; be sure to watch the relevant moments of the videos to hear what participants are thinking.
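One quick way to check for such a correlation, sketched here with made-up numbers and a hand-rolled Pearson coefficient (a spreadsheet's CORREL function computes the same thing):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical data: seconds spent on a task vs. the participant's
# ease rating (1 = very difficult, 5 = very easy), five participants.
time_on_task = [45, 60, 120, 240, 300]
ease_rating = [5, 5, 4, 2, 1]

# A strongly negative r suggests the slow tasks are the frustrating
# ones; confirm by watching those participants' videos.
print(round(pearson(time_on_task, ease_rating), 2))
```

As the surrounding text stresses, treat the number as a pointer to which videos to watch, not as a finding in itself.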

Take note of user frustrations as well as items that users find particularly helpful or exciting. These become discussion points for design teams and can often help to uncover opportunities for improvements to future releases. It’s important to identify the things that people love, too, so that you don’t inadvertently try to “fix” something that’s not broken when trying to improve the user experience of your product.



HARNESS THE POWER OF THE SPREADSHEET

It's easy to gather, analyze, and share your findings right within the UserTesting dashboard. You'll be able to see time on task, responses to metrics questions, and more at a glance. However, if you find that you want to perform more detailed analysis, you can download the data from your study into an Excel spreadsheet. This can be especially helpful for compiling findings from multiple studies side by side, breaking down patterns in studies with lots of participants, and comparing responses from different demographic groups.
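For instance, comparing responses across demographic groups amounts to grouping the exported rows by segment and averaging. A sketch with an in-memory stand-in for the downloaded file; the column names here are assumptions, not the actual export schema:

```python
import csv
import io
from statistics import mean

# In-memory stand-in for a downloaded results file; the columns
# are hypothetical, not the real export schema.
export = """participant,segment,ease_rating
P1,new_customer,5
P2,new_customer,4
P3,returning,2
P4,returning,3
"""

# Group ratings by segment, then compare the averages.
by_segment = {}
for row in csv.DictReader(io.StringIO(export)):
    by_segment.setdefault(row["segment"], []).append(int(row["ease_rating"]))

for segment in sorted(by_segment):
    ratings = by_segment[segment]
    print(f"{segment}: mean ease {mean(ratings):.1f} (n={len(ratings)})")
```

The same grouping logic works in a pivot table; code is simply easier to rerun when you repeat the study.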

Sharing your research findings will help your company stay focused on the user experience.

SHARE YOUR FINDINGS

Research is meant to be shared! Once you've established your findings, it's time to present them to your stakeholders. In most cases, this means hosting a meeting with the involved departments and sharing the relevant findings in a presentation. Here are a few ideas for successfully relaying your research findings:

• Create a highlight reel of video clips.
• Use charts to represent any interesting metrics data from your questions.
• Back up your claims with user quotes from the studies.
• Use a word cloud to display the most common words used throughout your study.

Be careful not to place blame on any of your teammates. If you have a lot of negative findings, choose your words carefully. "Users found this feature frustrating" is much easier to hear than "This feature is terrible." Encourage team members to ask questions about the findings, but remind them not to make excuses. They're there to learn about the customer experience, not to defend their design decisions.

Sharing research findings with stakeholders and colleagues in multiple departments can be a great way to promote a user-centered culture in your company.



Conclusion We encourage you to make remote user research a standard part of your development process. User feedback is crucial for designing and developing a successful product, and remote research makes it fast and easy to get that feedback. With a clear objective, the right tasks, and carefully planned and worded questions, you’ll gather useful, actionable insights. And whether you plan on conducting your research on your own or enlisting the help of UserTesting’s expert research team, you’re well on your way to improving your customer experience.



Create great experiences

UserTesting is the fastest and most advanced user experience research platform on the market. We give marketers, product managers, and UX teams on-demand access to people in their target audience who deliver audio, video, and written feedback on websites, mobile apps, prototypes, and even physical products and locations.

2672 Bayshore Parkway, Mountain View, CA 94043
www.usertesting.com | 1.800.903.9493 | support@usertesting.com


