Some things we should already know

Earlier in the week Jeffrey Henning (IMHO the best of the MR bloggers) served up a much-praised and frequently retweeted post on why respondents abandon Web surveys. His post does what most of the respondent engagement debate does not: it gets down to some basic facts about what it is in surveys that turns off people who have already volunteered to do them. Mostly that debate has started with the Flash imperative and never looked back.

I have taken the liberty of posting Jeffrey's chart below. It shows that the biggest single cause of abandonment is uninteresting subject matter. Media downloads are a bit of a red herring because the majority of surveys don't rely on multimedia, so let's put survey length as the second biggest factor. [Chart: Causes of survey incompletion] Now we can generalize that the main causes of abandonment are subject matter and length. Those grids that are so roundly condemned account for a mere 15% (sorry, Andrew). I would extend Jeffrey's argument just a bit and suggest that outright abandonment is an extreme behavior, and that many more respondents complete boring and long surveys in a half-hearted way just to get the incentive.

What strikes me most about all of this is that it seems to be news to way too many people. The influence of topic and burden on survey participation has been talked about for decades. For one especially relevant discussion, have a look at Nonresponse in Household Surveys by Groves and Couper. Now I know it's mostly about in-person surveys so it can't possibly teach us anything about Web surveys, but if you were to read the chapter about how survey design affects participation you might appreciate that, regardless of mode, topic salience and perception of the burden of participating (a.k.a. length) are key. (Other important elements of survey design, such as incentives, are bridges that panel respondents already have crossed.)

The question in my mind is why don't we already know this stuff? Why do we have to keep relearning the basics? And why do we let ourselves get distracted by all of this talk about engaging people with cute gadgets and eye candy? What respondents really want is shorter surveys on topics they find interesting. The Web may change a lot of things but this isn't one of them.


Comments

5 responses to “Some things we should already know”

  1. From the findings of the Lightspeed survey that Jeffrey cites, it looks like the factors that cause the most people to abandon surveys may also be the toughest ones to address. How do you compromise on subject matter?
    But I do think that research sometimes kids itself that it is giving people “a chance to voice their opinion” when a lot of the time it is asking about stuff that no normal person could ever care about, or about memory/perception of fact rather than opinion (e.g. how many toothpastes can you name, which of these brands does the most to protect the environment etc.)

  2. Thanks for the kind words, Reg. To set the record straight, that post was a recap of James Sallows’ great presentation at Online Research Methods. As such, it is his argument, not mine. James deserves the credit for the interest.
    My own thoughts on improving completion rates – much less retweeted! – are here:
    http://blog.vovici.com/blog/bid/27039/Maximizing-Survey-Completion-Rates
    Your title for this post reminds me of Ray Poynter’s series, “Things All Researchers Should Know”. I think most firms don’t do enough to train their staff on established methods and research on research. The lure of the new leads too many of us to reinvent the old.

  3. I do think that most of us already know this stuff but we choose to ignore it because of laziness and complacency. Sure, that’s a harsh statement, but let’s be honest.

  4. Even if everyone in MR had read Groves & Couper, it would not change the fact that clients have questions that aren’t intrinsically interesting to their consumers, as Robert says. This is partly an issue of industry maturation. Compared to a few decades ago, MR is so integrated into day-to-day product management that we are more often invited to research mundane topics than big, meaty ones. Within the confines of the command-and-control, “asking” model, the levers available to us are making the asking process enjoyable and easy even if the subject matter isn’t, filtering out respondents who may be “qualified” but are not interested, or making the incentive sufficient to overcome lack of interest. None of these is perfect, and all are subject to a variety of abuses. Which makes it easier to understand why the “listening” model is so enticing.

  5. I was at the MRS Conference, and James Sallows’ piece also touched on questionnaire wording – badly phrased, written from the client rather than the user perspective – which doesn’t appear here. Overall, the talk was indeed excellent and made me personally sit up and think hard about data quality in the context of quant online surveys.