It is a well-established principle in survey research that the same question asked in different survey modes (e.g., telephone vs. Web, face-to-face vs. telephone, interviewer-administered vs. self-administered) will sometimes elicit a different pattern of responses. This issue of mode effects is a complicated one, and it has emerged as a major point of focus within the industry as we look to convert studies to the Web from other modes or take the halfway step of mixed mode.
This is a multi-faceted issue, and in this post I want to speak to just the narrow topic of missing data, that is, the frequency of non-substantive responses such as Don’t Know or Refused. I also want to focus on what is most important to us here at MSI: the differences we are likely to see between telephone and Web. Those differences often seem to come down to how you present your question on the Web.
On the telephone we require that every question have an answer, even if it’s just the interviewer recording that the respondent refused to answer or didn’t know the answer. But even though we will accept Don’t Know or Refused as an answer, those codes are almost never read to the R, so Rs don’t necessarily think of them as answers they can select; only interviewers can record them. When we were developing our original Web Questionnaire Standards we carried over the principle that every question must be answered, but to be fair to the R (and consistent with the phone) we gave them the option of a Don’t Know or a Refused on the screen, although we urged the use of only one such code to reduce visual clutter. This is consistent with one methodological school of thought, which maintains that Rs sometimes may genuinely not have an answer, and if you force them to give one they will just make something up or pick a response at random. That is certainly not what we want.
There is another school of thought that argues that Rs who choose a non-substantive response are satisficing, that is, not taking the survey task seriously, and that therefore the non-substantive response options should not be offered. Proponents of this view punt on the question of whether you should require a response at all.
Both the literature and some work we have done internally show pretty clearly that if you put non-substantive responses on the screen in a Web survey, Rs will select them more often than they do on the telephone, where those options are not read. In some mixed-mode tests we’ve done on an Energy study, the Web produced twice the rate of non-substantive responses as the telephone. You can reduce this effect by not displaying the non-substantive responses on the Web. Of course, you then need to decide whether or not to require a response. A sensible compromise might be to require a response to an attitude or opinion question, but to neither require a response nor provide a non-substantive option on questions of fact or behavior, where an R may legitimately not know the answer or refuse to give one. We have been revising our Web Questionnaire Standards in this direction.
Open ends are even more problematic. For example, on the above-mentioned Energy study:
- 46 percent of Web Rs provided no mentions to a set of open-end questions, as opposed to 0 percent of phone Rs.
- 16 percent of Web Rs provided more than one mention to the same question versus 34 percent of phone Rs.
Unlike the closed ends discussed above, open ends offer no good options that come to mind. When we have required a response, we get lots of Rs giving us answers like "nothing."
We are using what we learn from the latest research and our own experience to evolve Web Questionnaire Standards that we believe will get us the best possible data. But the science is still evolving, and it’s not always clear which course is best.
In subsequent posts I will take up the related issues of social desirability and visual vs. aural presentation.