Special Issue on Nonresponse

At the end of 2006 Public Opinion Quarterly published a special issue devoted to nonresponse in surveys.  There is no denying that declining respondent cooperation is the most serious problem we face as an industry.  Key government face-to-face surveys like the Current Population Survey and the National Health Interview Survey are still getting response rates north of 85 percent, but the latter is losing almost a point of response a year.  A 2005 POQ article on nonresponse in a key academic telephone survey, the University of Michigan’s Survey of Consumer Attitudes, reported that it was losing a point and a half of response a year.  A survey that once routinely got 70 percent plus is now struggling to get 40 percent.  In market research the situation is even more dire, with response rates hovering around 10 percent or worse.

There are six articles in the issue and I’m not going to try to summarize them all here.  I instead want to focus on two related themes.  The first is that response rate may not be as good an indicator of survey quality as we once thought.  For example, Scott Keeter and his colleagues at Pew conducted two identical RDD surveys.  On one they worked hard and got a 50 percent response rate; on the other they worked less hard and got a 25 percent response rate.  When they compared 84 measures of attitudes and behavior across the two surveys, they found only a handful of significant differences.
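
To make concrete what a comparison like Keeter’s involves, here is a minimal sketch of testing whether a single estimate differs between the high-effort and low-effort surveys.  The sample sizes and percentages are made up for illustration and are not the actual Pew figures.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-sample z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical estimates of the same attitude item from the two surveys
z = two_proportion_z(p1=0.46, n1=1000,   # high-effort survey (50% response rate)
                     p2=0.44, n2=1000)   # low-effort survey (25% response rate)
print(f"z = {z:.2f}")  # |z| < 1.96, so no significant difference at the 5% level
```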

The second is the increasing focus on nonresponse bias.  Simply put, this tries to get at how the survey estimates might differ if we were able to interview everyone.  Put another way, it asks whether those who did not respond are different from those who did in some important way that might change our results.  Conceptually, this is a very appealing idea, but measuring nonresponse bias is difficult because we don’t know much of anything about the people who did not respond.
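
Here is a back-of-the-envelope illustration of how that bias works, using the standard decomposition in which the error in the respondent-only estimate is the nonresponse rate times the difference between respondents and nonrespondents.  All of the numbers are hypothetical, since the nonrespondent mean is exactly the thing we cannot observe in practice.

```python
# Toy illustration of the usual deterministic nonresponse bias decomposition:
#   bias(respondent mean) = (1 - response_rate) * (respondent_mean - nonrespondent_mean)
# All numbers are made up for illustration.

response_rate = 0.40
respondent_mean = 0.62      # e.g., share of respondents who approve of something
nonrespondent_mean = 0.50   # unknown in practice -- that's the whole problem

full_sample_mean = (response_rate * respondent_mean
                    + (1 - response_rate) * nonrespondent_mean)
bias = respondent_mean - full_sample_mean
print(f"bias = {bias:.3f}")  # 0.072: the survey overstates approval by about 7 points
```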

Post-stratification adjustment (a.k.a. weighting) attempts to get at this by bringing the demographics of our completed interviews in line with those of the population.  It assumes that people’s attitudes or behaviors measured in the survey are strongly related to demographics.  But is that always the case?  Probably not.  To do this effectively we need to know a lot more about nonrespondents and about those characteristics that are most closely associated with whatever we are measuring in our survey.  No easy task.
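
For readers who have not seen it spelled out, here is a minimal sketch of the simplest version of this, cell weighting on a single demographic.  The age categories and counts are invented for illustration; real adjustments typically balance on several variables at once (e.g., through raking).

```python
# Minimal sketch of post-stratification (cell) weighting, assuming we know
# the population distribution of one demographic (age group).

sample = {"18-34": 150, "35-54": 350, "55+": 500}        # completed interviews per cell
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = sum(sample.values())
weights = {cell: population_share[cell] / (count / n)
           for cell, count in sample.items()}
# Each respondent in a cell gets that cell's weight; the weighted demographics
# now match the population, but that only reduces bias if the survey variable
# is actually related to age.
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}
```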

And one more piece of bad news.  All of those things we have learned in methods studies that help us get higher response rates (e.g., offering different modes or creating topic salience) may be counterproductive precisely because they can create nonresponse bias.  For example, I might want to disclose the topic of my survey as a way to create interest and therefore higher participation, but this may mean that people who feel positively about the topic participate while those who find it uninteresting decide against responding.  When that happens, I have nonresponse bias.

Let me say the obvious: these are really tough issues.  But at least there is work going on out there that is trying to help us work through what really is a major challenge, if not an outright crisis, for the survey profession.  It is going to be very interesting to watch and learn.


Comments


  1. Reg…wouldn’t this area of nonresponse investigation be something that online panels (or other panels, I suppose) could help with? We tend to know a lot about panel nonrespondents from their initial sign-up information.

  2. Yes, that’s true. But, and it’s a big one, all that would help us to understand is how representative the survey is of the overall panel membership. We are mostly interested in representing a population, and the panel does a poor job of that.