Why did well-maintained American B-17 aircraft, commanded by experienced pilots in WWII, keep collapsing onto the tarmac? Psychologist Alphonse Chapanis was tasked with solving the intriguing riddle.
His findings? Two identical toggle switches sat side by side in the planes’ cockpits: one for the landing gear, the other for the wing flaps. When landing, a pilot reaching for the flap switch to reduce speed could flip the landing-gear switch instead, retracting the planes’ wheels. To solve this, Chapanis changed the shape of the two toggle switches so they could be told apart by touch. The result? Zero accidents from then on.
This story on the importance of good design reminds us that research is about humbly acknowledging that we don’t know what we need to know. Thus, we have to find out. When we forget this, we design things based on our assumptions, often with costly consequences.
To build or improve products, you need to know how your users think, feel, and behave. One of the many ways you can do this is through surveys. The problem is that designing surveys can be a challenging task. The second issue is that you might not even need one. Are you sure that a survey is an adequate method to answer your research questions?
To address this, we will first talk about the goals and purpose of surveys. If your answer remains a “yes, surveys will answer my research questions,” I will then provide you with an easy step-by-step guide to great survey design.
But first, goals.
What Are the Goals of Surveys?
Surveys can be used by people from different departments with various goals, for instance:
- User Experience: UXers may need to know which features users use the most (or the least) or how easy or difficult a given experience was.
- Marketing: Marketers may need information on their visitor profile or to identify the most critical use cases for the target audiences.
- Customer Experience: CXers may need to identify to what extent the product meets customers’ expectations or measure their attitudes towards your product.
Do I Really Need a Survey?
Like in any research study, the first thing you need to do is align with your team and stakeholders on what you need to learn and why. Once you know the “what” and the “why,” you need to decide which research method(s) are most suitable for answering your research questions.
So ask yourself: is a survey the best way to answer my research questions, or would other methods be better options? Consider that surveys are especially suited for “How many & How much” questions about what people say (e.g., their attitudes, beliefs, perceptions, knowledge, and goals). If a survey is an appropriate method to address your research questions, the next thing to consider is: how can you design a survey that asks the right questions in the right way? I’ll share some tips to help you with this.
Who Do You Want to Get Answers From?
Before you start writing the questions, you need to clarify who your target respondents are. You need to know why you need certain types of people to answer your survey, which circles back to your research goals.
For example, imagine you need to know the most common difficulties users have when they start to use the product versus when they already have some experience with it. In this case, you would need to recruit at least two different groups of respondents: newcomers and more experienced users (e.g., with more than six months of experience using your product).
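A screener like the one above can be expressed as a simple rule. Here is a minimal sketch, assuming the six-month threshold from the example; the function name and the idea of deriving tenure from a first-use date are illustrative, not part of any particular survey tool:

```python
from datetime import date

def segment(first_use: date, today: date) -> str:
    """Classify a respondent as 'newcomer' or 'experienced'.

    Uses the (hypothetical) six-month cutoff from the example:
    more than six months of product experience counts as experienced.
    """
    months = (today.year - first_use.year) * 12 + (today.month - first_use.month)
    return "experienced" if months > 6 else "newcomer"

# A respondent who started two months ago falls into the newcomer group.
print(segment(date(2023, 1, 10), date(2023, 3, 1)))  # newcomer
```

In practice you would collect the tenure via a screening question and route respondents to the right survey branch.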
Time to Write Down the Questions
Next, it is time to write your survey questions. But before you do that, find out if there is any standardized survey that fits your needs — you do not have to reinvent the wheel. For example, if you want to measure the perceived usability of your product, you can use a standardized survey like the System Usability Scale (SUS).
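One advantage of a standardized instrument like the SUS is that its scoring is standardized too: each of the ten items is rated 1–5, odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch of that calculation:

```python
def sus_score(ratings):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items (index 0, 2, ...) are positively worded and
    contribute (rating - 1); even-numbered items are negatively worded
    and contribute (5 - rating). The sum is scaled to a 0-100 range.
    """
    if len(ratings) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = 0
    for i, rating in enumerate(ratings):
        total += (rating - 1) if i % 2 == 0 else (5 - rating)
    return total * 2.5

# All-neutral ratings (3s) yield the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```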
Before you write the questions, keep in mind what you will do with the data and how you will analyze it. For example, imagine you need to know the essential criteria users consider when choosing which assets to install. This points to the need for ranked data, so you could ask respondents to rank their three most important criteria. Then, you could calculate the average ranking for each criterion to find out which ones were most preferred.
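The average-ranking analysis described above can be sketched in a few lines of Python. The responses and criterion names below are hypothetical; each respondent assigns rank 1 to their most important criterion:

```python
from statistics import mean

# Hypothetical responses: each respondent ranks their top three
# criteria (1 = most important, 3 = third most important).
responses = [
    {"popularity": 1, "price": 2, "reviews": 3},
    {"price": 1, "popularity": 2, "docs": 3},
    {"popularity": 1, "reviews": 2, "price": 3},
]

# Collect every rank each criterion received.
ranks = {}
for response in responses:
    for criterion, rank in response.items():
        ranks.setdefault(criterion, []).append(rank)

# Average rank per criterion; a lower average means higher preference.
# Note: each average only covers respondents who ranked that criterion.
average_rank = {c: mean(r) for c, r in ranks.items()}

for criterion, avg in sorted(average_rank.items(), key=lambda kv: kv[1]):
    print(f"{criterion}: {avg:.2f}")
```

Sorting by the average puts the most preferred criteria first, which answers the "top three criteria" question directly.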
In any case, writing good survey questions means writing questions that generate valid and reliable results. That is not as easy as it may seem. Fortunately, there is a set of best practices you can follow. The following ones are by no means an exhaustive list of those practices, but they will provide you with a good starting point.
Start with a definition of your survey concepts and constructs
For example, if your survey is about user delight, define what “delight” means. If the main concept has several dimensions (e.g., user delight can be composed of joy and positive surprise), describe them too. This will help you translate those definitions into concrete questions.
Use simple and clear language
Avoid technical jargon and use simple, familiar, and short words whenever possible (unless your audience is a specialized group of people). This relates to Nielsen’s Heuristic #2 — speak the users’ language. But, don’t trust your writing skills too much: ensure your respondents understand your questions by running pretests (more on that in a bit).
To illustrate this point: in one study, researchers compared the ratings collected with the original UMUX-Lite (a popular questionnaire used by UXers) item wording, “The system’s capabilities meet my requirements,” against a simplified wording, “The system’s features meet my needs.” They found no significant difference between the two versions’ means or the distributions of their response options.
Avoid using ambiguous terms
Steer away from them in both your questions and response options. Words like “often” and “regularly” can mean different things to different people. Again, pretests are a powerful way to catch this and replace potentially ambiguous terms with (ideally) unequivocal ones.
Even the pronoun “you” can be ambiguous. For example, in “Do you use cooking books to get ideas for your meals?”, “you” can refer only to the respondent or to their household.
Don’t let your assumptions leak into the questions
Your questions need to make people feel open to sharing any response. They shouldn’t feel constrained to agree with you or, in the worst-case scenario, offended by any assumption you may have leaked into your questions.
Suppose you ask respondents to say how much they agree or disagree with “It is hard to use this product because its concepts are unfamiliar to me.” You are assuming the concepts might be unfamiliar to the respondents in the first place. While the product might indeed be complex for respondents to use, it may be for other reasons rather than unfamiliar concepts.
Ask only one thing at a time
If you ask a double-barreled question, you won’t know what the responses refer to. Imagine you ask me, “How easy or difficult was it to fetch and update your entities?” Suppose I answer “very easy.” Do you know what I am referring to? Also, I had to exert unnecessary cognitive effort since I had to answer two questions at once! So, make sure respondents are not answering two different things in a single question.
Provide response options like “N/A” or “Others”
That will help you get more accurate responses, since you won’t be forcing people to answer something that may not apply to them. In some circumstances you might want to make respondents “choose a side,” but doing so can force participants to pick an option that doesn’t represent what they think. Respondents might even feel frustrated at not seeing a choice they identify with and become skeptical about the quality of the survey. So, unless you are confident that the response options cover the full range of possibilities, include an option like “Other.”
Ask about likely memorable events or behaviors
Memory is fallible, and researchers should take recall answers with a grain of salt. If you want to ask about distant-past or non-salient events (e.g., likely automatic behaviors), consider using visual cues to help people recall them.
For example, imagine you are interviewing users and want to know how they fetch data. They might have done it so many times that the behavior has become automatic. Help them answer by asking them to show you how they fetch data.
Ensure your close-ended questions include all likely response options
There are ways to do this:
- Review previous data on the question topic. What are the likely answers?
- If previous data is unclear, insufficient, or non-existent, ask open-ended questions about your survey topics in the early stages of question drafting. Learn from what people say.
Ask the most important questions at the beginning
This way, you are more likely to get the data you need the most if people quit partway.
Check if each question is helpful
I say this from experience: it is easy to get excited and ask questions that, although interesting, are outside the scope of your survey. Be relentless here: if a question doesn’t help you achieve your research goals, remove it. Also, this will help you keep the survey short: a not-so-small thing!
Not So Fast: Pretest!
Now, you can look at all your questions and feel you have an amazingly clear batch of them. But even if you have followed the best practices above and dozens of others, you don’t know how good your questions are until you get feedback from people in your target audience. That’s why you should pretest your survey: it will help you avoid being misunderstood.
With pretests, you assess how people (ideally with the same profile as your target audience) interpret your questions, along with other aspects such as the introduction, length, and flow of the survey, as well as technical glitches. Any interpretation issues or other problems you find at this stage give you the chance to improve your survey before launch.
When you pretest your survey, you can ask respondents to:
- Think aloud as they answer the questions.
- Say in their own words what they think each question means.
- Explain how they chose a particular answer over others.
After the pretests, apply the feedback you got to improve the questions, response options, and any other aspects of the survey. If you feel uncertain about the changes you made, consider running a second round of pretesting. At the end of this process, you should be confident that your survey is ready to be distributed and likely yield valid and reliable results.
Further Reading
- 6 Survey Pretests to Consider: Pilots, Focus Groups & More
- Survey Question Examples - How to Write a Great Survey
- Writing Survey Questions
- Sampling Methods & Best Practices
- Sampling Errors & Non-Sampling Errors
- How to Determine Sample Size in Research
- Seven Ways to Make Survey Questions Clearer – MeasuringU