This is one of the most pressing problems of the day, and the answer seems more and more elusive. Way back in 2005 a team at Stanford led by Doug Rivers (co-founder of Knowledge Networks) and Jon Krosnick (father of satisficing) compared seven different US panels and an RDD telephone sample, looking for straightforward mode effects and for comparability across basic socio-demographics. Without going into all the detail, the basic findings were very reassuring. While there were differences, they were not so great as to set off any alarm bells. The results seemed to say that there are few differences to worry about among the major national US panels.
Unfortunately, things have looked different in practice. In the few studies in which we have compared Web panel results with RDD telephone, we have been disappointed, both on behavioral questions related to health care and on general attitudes about current issues.
So it was with interest that I heard a paper by Ted Vonk, Robert van Ossenbruggen, and Pieter Willems at the Barcelona conference in which they described the results of a study of 19 online panels in The Netherlands. But before I describe the study I must note that The Netherlands is a bit different from the US. Internet penetration is at 80% (vs. less than 70% in the US) and broadband penetration is the highest in the world (63% of households). Just the fact that they have 19 panels in a country that small says something! The study drew a sample of 1,000 from each panel and fielded a roughly 12-minute survey simultaneously across all 19 panels. The questionnaire asked about basic socio-demographics, some political attitudes, brand and advertising awareness, and Internet behaviors. The key findings:
- Response rates varied from 18% to 77%, with an average of 50%! Amazing by US standards. The panels with the lowest response rates were those that do not drop non-responding panel members on any regular basis.
- Newer panel members respond better than older members.
- Panel members recruited by traditional methods (from other surveys, direct solicitation, etc.) respond better than those who self-select via banner ads, links, etc.
- Lottery incentives seem to generate lower response.
- Differences in response rate did not translate into differences in the survey data.
- Panelists were generally representative on socio-demographics but not representative in terms of political attitudes as measured by recent election outcomes.
- Almost two-thirds of respondents belonged to more than one panel, and multi-panel membership was highest on panels that recruit by self-selection.
So what’s the takeaway here? Well, I think two things. First, while panel recruitment and maintenance practices make a difference in response rates and in the likelihood of encountering professional respondents who do lots of surveys and belong to multiple panels, this does not appear to create samples with alarmingly different socio-demographic profiles. Second, there do seem to be attitudinal differences, and that should be very concerning to us. Understanding those differences and figuring out how best to deal with them is a huge challenge, and much work still needs to be done.
And, of course, remember what I said at the outset: this is The Netherlands and things may be quite different there.