Sheila B Robinson

Reflections of an everyday educator/program evaluator/professional developer…LEARNER

Hindsight is 20/20, even with surveys (Cross post with actionable data blog)


Yep, it’s another great co-post with the splendid Kim Firth Leonard, of the actionable data blog.

Almost everyone (probably everyone, actually) who has written a survey has discovered something they wish they had done differently after the survey had already launched, or closed, with data already in hand. This is one of the many ways in which surveys are just like any written work: the moment you’ve submitted it, you inevitably spot a typo, a missing word, or some other mistake, no matter how many editing rounds you undertook. Often it’s a small but important error: forgetting a bit of the instructions or an important but not obvious answer option. Sometimes it’s something you know you should have anticipated (e.g. jargon you could have easily avoided using), and sometimes it’s not (e.g. an interpretation issue that wasn’t caught in piloting – you DID pilot the survey, didn’t you?).

Sometimes, it’s a big mistake – forgetting to ask a question whose answer you needed in order to address an evaluation question or inform a key programmatic decision; or asking a question in such a way that when you analyzed the data, you scratched your head and thought, “So what? These responses aren’t useful after all!”

Image of young girl doing "face palm."

Image credit: Will Hastings via Flickr

Here are a few survey design areas that hold some of our hard-learned lessons:

  • That question that needed the instruction “check all that apply” (or perhaps “choose only one” – regardless, it should have been clearer whether respondents should choose one or more than one answer).
  • That typo that hid in a question despite many editing rounds and reared its ugly head, making the question thoroughly confusing for anyone who couldn’t read our minds.
  • That answer option that we swear we’d included in an earlier draft but somehow got left off when we entered our questions into an online survey tool.
  • That instruction we should have included about the timeframe for feedback — for example, we asked questions on Day 4 of a 4-day workshop referring to the whole event as “the session.” Respondents interpreted “session” as just one day and gave us feedback for only that day.

Sometimes we don’t even realize we’ve made a mistake until our trusty respondents let us know. They write in the margins to indicate we didn’t have an appropriate response option for them, or they tell us in a comment box that they couldn’t answer a question accurately because the available responses didn’t apply.

Surveys are inherently imperfect tools and can quite easily be fraught with error. No survey instrument or survey question is immune from this, and survey researchers have long understood that even seemingly minor changes in question wording can result in large differences in responses1. Of course, we have all sorts of ways of working to prevent this. Our most successful surveys are those built carefully, with lots of planning and intention, multiple rounds of review, and careful testing. We build surveys after studying existing successful measures and in consultation with those we wish to respond to our survey. We aim to use familiar language, to make sure questions measure what we meant them to measure (validity), and to make sure they are understandable and answerable (with sufficient answer options).

These and other great tips will go into the checklist we’re drafting as part of our learning journey in survey design. Our multi-pronged approach to world survey design domination (:-)) includes:

  • Continuing to co-author blog posts (click here for our previous posts on survey design) as new insights emerge from our work and study  
  • Crowdsourcing tips for our checklist on survey design
  • Facilitating professional development workshops for our survey research peers and colleagues (see below for details of our upcoming session)
  • Writing a book on survey design with specific emphasis on a rigorous question design process (target publication date 2017).

Interested in learning more? If you’re attending Evaluation 2015, the American Evaluation Association’s annual conference in Chicago, IL, please join us on Friday morning, November 13, at 8:00am for our session, Crafting Quality Questions: The Art & Science of Survey Design. Not traveling to the conference? That’s OK! Stay tuned for more blog posts on survey design and of course, the book!

1If you absolutely must know more about how survey question design can influence responses right away and you can’t possibly wait for our next blog post or our book, see:

Fowler, F. (1995). Improving survey questions: Design and evaluation. Sage Publications.

Lenzner, T. (2012). Effects of survey question comprehensibility on response quality. Field Methods, 1–20. http://fmx.sagepub.com/content/early/2012/08/31/1525822X12448166

Schuman, H., & Presser, S. (1977). Question wording as an independent variable in survey analysis. Sociological Methods & Research, 6(2).

Schwarz, N., & Oyserman, D. (2001). Asking questions about behavior: Cognition, communication, and questionnaire construction. American Journal of Evaluation, 22(2), 127–160.

Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco: Jossey-Bass.

Tanur, J. (Ed.). (1992). Questions about questions: Inquiries into the cognitive bases of surveys. New York: Russell Sage Foundation.

Tourangeau, R. (2003). Cognitive aspects of survey measurement and mismeasurement. International Journal of Public Opinion Research, 15(1), 3–7.

What survey design lessons have YOU learned the hard way? What mistake or typo will you now never repeat?

Author: Sheila B Robinson, Ed.D.

Custom Professional Learning, LLC sheilabrobinson.com
