I've been working along with some colleagues on the lit review section of a paper for the ESOMAR Congress. The topic is "gamification" as the next experiment designed to increase respondent engagement in online surveys. As anyone who has done their homework knows, the issue of survey respondent engagement did not arise with the growth of online panels and online surveys. Over 40 years ago two of the legends, Charlie Cannell and Robert Kahn, were arguing that there is an optimal length for a survey and that once that length is exceeded respondents become less motivated to respond and put forth less cognitive effort, causing survey data quality to suffer. In 1981 Regula Herzog and Jerald Bachman identified the tendency for respondents to "straight-line" through large numbers of consecutive items that shared the same scale, especially as they progressed through the questionnaire. Ten years later Jon Krosnick introduced the term "satisficing" to describe the tendency for survey respondents to lose interest and become distracted or impatient as they progress through a survey, putting less and less effort into answering questions.
What I find especially striking about this "early" work is its tone. It's not accusatory. No fingers are pointed and there is no implication that these are "bad respondents." Reflecting its mostly psychological roots, the argument is that when we create certain kinds of conditions with surveys, this is how people will react. No one suggests that people who exhibit these kinds of behaviors don't deserve to be interviewed, that we need to get them out of our datasets, or that they don't deserve to be heard. This stands in stark contrast to how the same problem has been discussed over roughly the last five years in the context of online surveys. Name calling has been popular—inattentives, mental cheaters, speedsters, or simply, bad respondents. (Here I admit that I am as guilty as anyone of some of this.) And in most circles, current rhetoric to the contrary, the emphasis still is almost totally on getting rid of people who exhibit these behaviors rather than seriously attacking what pretty much everyone who has studied these problems over the last half century agrees is the root cause: long surveys on not very interesting topics. And now the online paradigm adds to that double whammy by offering almost no limits on how often you might be interviewed.
Another legend and former boss, Norman Bradburn, proposed a simple solution way back in 1977: convince respondents that the data are important. Unfortunately, we seem to have taken a different path.
Comments
3 responses to “Getting to the bottom of the respondent engagement problem”
The difference in tone is built into the respective academic/policy research and commercial research cultures. The former culture understands that the existence and quality of its data depend on people doing something they are not naturally inclined to do (answer questions, often deeply personal ones, at length and posed by a stranger). The MR culture started out very similar to the academic culture, but has been heavily colored by the impersonal nature of online panels, which tempt us all to talk about respondents in commodity terms. Hence, I think, the well-meaning but rather crude backlash against the term 'respondents' in some MR forums.
Very much agree with Theo that the impersonal nature of online panels plays a significant part. The 'distance' between the researcher designing the survey and the respondent has become too great – enter alienation and its consequences as described. This is especially true of third-party access panel providers who, somewhat understandably from a business perspective, treat their panels as assembly lines to maximize profit, with little concern for (and understanding of) data quality.
Another aspect that plays a part is that the challenge of engaging respondents isn't the same today as it was two, three, four, or five decades ago. Even the concept of "gamification" has changed rapidly over the past decades, involving increasing levels of interactivity, realism, cognitive processing and collaboration (ref. "Gamer demographics", Kapp, 2007). We do need new tools to be able to compete for people's time and attention – and cutting surveys shorter isn't going to do the trick alone. As for "boring topics", we might actually learn one thing from gaming, where the conclusion has been that "fun and theme are not related" (Gabe Zichermann). The core challenge with gamification, of course (as with e.g. "flash-ing" up our questionnaires), is how it'll affect our data…
I agree with most of what’s said above, so no need to repeat.
Couple of questions to add though:
What's better: a neutral (read: dull) survey with uninterested respondents, or a well-written (read: potentially biasing) survey with engaged respondents?
I’d take the latter in a second.
Secondly, has anyone ever tested what happens when you remove all incentives from a survey?
I'd like to think that, as researchers, someone's done some research on this.
Finally, why does MR refuse to get back to respondents with the results of the research? Give them some information back for their time rather than a couple of generic "points". You'd be surprised how motivating a bit of feedback can be (real feedback, I mean: "Company X will now do this", rather than "x% of you are male").
Scott