Surveys can be so useful, but they can also be wildly misused. When I was teaching at a school, I once wrote a survey trying to figure out why students did (or did not) choose to study my year 11 Computer Science elective. I sent it to the whole year 11 cohort, and made the subject line of the email “Chocolate!”, which got their attention quite effectively. I got a lot of responses and a wonderful mountain of data… except I hadn’t really thought about what I was going to do with that data, and it was an absolute mongrel to analyse. So how do you actually write a great, useful survey? Here are seven tips to get you started.
- Figure out what questions you really want the answers to, and make sure your questions really target those answers. You need to plan your analysis in advance, so that you don’t find out too late that you didn’t ask the question you really needed. For example, if you want to know what people eat on a typical day, figure out in advance how you’re going to categorise different foods, so that your questions fit those categories. You might have fruits, vegetables, processed foods, meat, and dairy products, or you might go for home cooked vs pre-cooked from the supermarket vs take away food. If you don’t think about this in advance, the answers people give you might not allow you to answer your question – pizza, for example, might be home cooked, pre-cooked from the supermarket, or take away. Don’t fall into the trap of asking for the answer to life, the universe, and everything, tempting though it is. You might just get 42 and not know what to do with it!
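Planning your categories in advance can be as simple as writing them down as a mapping before the survey goes out. Here’s a minimal Python sketch using the food example above (the category names and example answers are just illustrations, not real survey data) – it shows exactly why an answer like “pizza” is a problem:

```python
# Illustrative categorisation plan, written down *before* the survey goes out.
# Category names and example answers are hypothetical.
CATEGORIES = {
    "home cooked": {"roast dinner", "stir fry", "pizza"},
    "pre-cooked from the supermarket": {"frozen lasagne", "pizza"},
    "take away": {"fish and chips", "pizza"},
}

def categorise(answer):
    """Return every planned category the answer could fall into."""
    answer = answer.strip().lower()
    return [name for name, examples in CATEGORIES.items() if answer in examples]

print(categorise("roast dinner"))  # one clear category
print(categorise("pizza"))         # all three categories: the answer is ambiguous
```

If your planned categories let a common answer land in more than one bucket, that’s a sign to rewrite the question (for example, ask “Was the pizza home made, from the supermarket, or take away?”) rather than fix it in analysis later.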
- Make sure your questions don’t assume you know everything. For example, if you want to find out what wakes people during the night, you can ask questions about pets, small children, and outside noises, but if you assume that you have listed all possibilities, you might be missing other significant reasons why people wake up. You always need a free text “other” box to capture things you might have missed. The answers you are looking for are in the “other” box more often than you expect!
- Make sure your questions are neutral. Leading questions are some of the most common problems I see with surveys. Research has shown time and time again that the phrasing of the question can influence the answer. People mostly want to help, and they will subconsciously give you the answer they think you want. You will likely get very different responses from “How awesome was the workshop???” vs “What did you think of the workshop?”
- Make sure your scales are even. Likert scale questions are great (e.g. “on a scale of 1-5, how tired are you?”) because they give you numbers that are easy to answer, but how you set them up can really skew the results. Always use an odd number of points (1-3, 1-5, etc), and make sure the middle value is neutral. For example: 1 – full of energy, 3 – neither tired nor energetic, 5 – exhausted. Or 1 – terrible, 3 – neutral, 5 – fantastic. Otherwise you wind up with an uneven number of responses on one side of your results, which is like having your hand on the scales when you weigh something. It will tip your results in one direction. For example: 1 – bad, 2 – neutral, 3 – great, 4 – fabulous. This gives your respondents twice as many ways to say it was good, which might make your results look better, but is much less valid.
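You can check a scale for balance mechanically: count the options on each side of the neutral point. A quick Python sketch, using the two example scales above (labels are just the ones from the text):

```python
# Compare a balanced 5-point scale with the unbalanced 4-point example.
balanced = {1: "terrible", 2: "poor", 3: "neutral", 4: "good", 5: "fantastic"}
unbalanced = {1: "bad", 2: "neutral", 3: "great", 4: "fabulous"}

def sides(scale, neutral_label="neutral"):
    """Count the options below and above the neutral point."""
    neutral = next(k for k, v in scale.items() if v == neutral_label)
    below = sum(1 for k in scale if k < neutral)
    above = sum(1 for k in scale if k > neutral)
    return below, above

print(sides(balanced))    # (2, 2) -- even on both sides
print(sides(unbalanced))  # (1, 2) -- twice as many positive options
```

If the two counts differ, the scale has a thumb on it before anyone has even answered.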
- Don’t forget qualitative results. Always have a free text question in your survey that gives your participants room to explain their answers, or tell you things you might not have thought of. If you only have Likert scales and numeric (quantitative) data, it’s nice and easy to analyse, but it might well mean you miss something really significant. Try questions like “Is there anything else you want to tell us?”, “Did anything else affect you?”, or “Did you notice anything else?” For my workshops I always collect numeric data that I can analyse and compare over time, like “How likely are you to recommend the workshop to others? 1 – not likely at all, 5 – extremely likely” and “How has your confidence in working with data science changed? 1 – not changed at all, 5 – much more confident.” But I also include open questions like “What was the best thing about the workshop?” and “What was the worst thing about the workshop?” or “What would you like less of in the workshop?” and “What would you like more of in the workshop?” These are often the most useful part of the survey.
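Collecting both kinds of data also means analysing both together. A minimal Python sketch of that idea, with entirely made-up workshop responses (the scores and free-text answers here are invented for illustration):

```python
from statistics import mean
from collections import Counter

# Hypothetical responses: a 1-5 recommendation score plus a free-text answer.
responses = [
    {"recommend": 5, "best_thing": "the hands-on exercises"},
    {"recommend": 4, "best_thing": "pair programming"},
    {"recommend": 5, "best_thing": "the hands-on exercises"},
    {"recommend": 3, "best_thing": "the snacks"},
]

# Quantitative: easy to summarise and compare over time.
scores = [r["recommend"] for r in responses]
print(f"mean recommendation: {mean(scores):.2f}")
print("score distribution:", dict(Counter(scores)))

# Qualitative: keep the free text alongside the numbers --
# it often explains *why* a score is what it is.
for r in responses:
    print(f'  {r["recommend"]}: {r["best_thing"]}')
```

The mean tells you *how* the workshop went; the free text tells you *why*, and what to change next time.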
- Remember that your results aren’t perfect. Like all data, survey data is never perfect. People can make mistakes, or even lie on surveys. Sometimes this is because they don’t feel safe giving you personal information, sometimes it’s because people are actually really terrible at remembering things, and sometimes they simply click the wrong button, or actively want to mess with your results. Some answers would be different on another day, especially things like “How happy would you say you are overall?” or “How bad are your allergies, on average?” Don’t take your survey results as hard evidence or incontrovertible truths.
- Make sure your analysis matches your questions. I often see people claim that their surveys found the answer to something that they did not, in fact, even ask. For example, don’t take the question “Would you prefer Labor or Liberal to win the election?” and label the resulting graph “Labor votes vs Liberal votes”. People might vote Green, Independent, or a minor party, but that wasn’t asked. Similarly, “Do you prefer hot weather or cold?” might get reported as “X% of people hate the heat” – but that’s not what was asked. Make sure that your resulting write up and graphs are accurately labelled and reflect the data you actually have, not the data you wish you had.
These seven tips are the foundation of good surveys, but they’re not the whole story. What traps have you seen surveys fall into? What mistakes have you made on your own surveys?