I am finally getting around to wading through the mother lode of academic research noted in an earlier post way back at the beginning of March. The special POQ issue has two articles, one looking at Web versus face-to-face and the other comparing CATI, Web and IVR. The results are not particularly surprising, but it's nice to see one's suspicions confirmed with well-designed and executed research.
Dirk Heerwegh and Geert Loosveldt report on results from a survey in Belgium designed to assess attitudes toward immigrants and asylum seekers. They put considerable effort into designing both the Web and face-to-face surveys based on Dillman's unimode construction principles. In other words, they worked hard at making the two surveys as comparable as possible rather than optimizing each to its own mode. Their results are pretty convincing. The Web survey produced a higher rate of "don't know" responses, more missing data, and less differentiation on scales.
Frauke Kreuter, Stanley Presser, and Roger Tourangeau looked at social desirability bias across three methods: one with an interviewer (CATI), one without an interviewer (Web), and one with sort of an interviewer (IVR). They drew a sample of University of Maryland alumni and asked a variety of questions about academic performance and post-graduation giving. They were able to verify the respondents' answers against university records. In essence, they were able to tell who was telling the truth and who was not. As with Heerwegh and Loosveldt, the results are pretty much what we would expect: Web reporting was the most accurate and CATI the least, with IVR generally somewhere in the middle.
So there you have it. We used to like to say that "the Internet changes everything." Well, it does not appear to have changed some basic principles of survey research.