I had significant misgivings about going to this conference given its location (Las Vegas) and the fact that I was already committed to speaking in London earlier in the week. But I was glad I went. The WARC Conference was interesting, but the focus there was mostly on the so-called NewMR. That's pretty much always the case at European conferences these days, where the dominance of online panels is nowhere near what it is here in the US. So it was a nice change of pace. CASRO presenters raised, but did not completely answer, three key questions.
First, has the industry overreacted to the "panel data quality crisis?" The research presented by the folks from DMS/uSamp demonstrated pretty clearly that we pay a significant price with respondent validation, regardless of the vendors we use. All the solutions they looked at excluded significant numbers of younger people, non-whites and Hispanics, and less well educated respondents. Are these really imposters or are they just people who don't show up in "the system" because they don't have credit cards or mortgages or bank accounts? MarketTools showed their work for rooting out "bad respondents," defined as chronic satisficers. This method, too, shows signs of a demographic bias, although on average these respondents comprise only about 2-5% of any given survey. At the end we have to ask ourselves whether our drive to provide squeaky clean online sample is not adding more bias on top of an already biased sample. What standards, if any, should we agree on across the industry when it comes to "cleaning" online panel sample?
Second, what new dangers do we encounter as we move beyond the traditional online panel model, trying to increase capacity and diversity through more and more multisourcing? Comparisons of traditional panel sample with intercept samples at this conference seemed to have mostly positive outcomes, that is, few important differences. But much as with panels in general, every panel company has its own wrinkles in how it designs its routers and the sample sources it uses. And so it's likely that the problem of variability among panels will continue.
Finally, what if anything can we do to solve the problem of lack of representativeness in all of our online methods, but especially panels? Is there a solution at all? We didn't get an answer at this conference but if, as they say, recognizing that you have a problem is the first step toward solving it, then we are making progress. There were signs at this conference that the industry is beginning to treat this less as a weighting problem and more as a sample selection problem. I think that's a start. But I also think that the path to a solution, if there is one, lies in the commercial and academic sectors working together to find it. Sticking to the empirical argument ("it just works") is what got us into this to start with. We need some theory and we need to be able to substantiate it with a good deal more empirical research than we have so far.
All in all I think CASRO continues to do a good job with this conference. Two thumbs up to my buddy Frank Kelly and his committee for putting it together.
Comments
One response to “CASRO Online Conference: Final Thoughts”
Sigh. It sounds like people are still focused on responders as the culprits as opposed to surveys as the culprits. Let’s put the blame where it belongs, on the researchers!