I am at the AAPOR annual conference in Boston. My first observation: it is huge. At 8:00 this morning alone there were no fewer than eight separate sessions, each with five to six presenters. There is no way to come close to covering the whole thing. So I have tentatively chosen to focus on two consecutive sessions about online sampling, mostly methods that do not rely on online panels.
I'm having two reactions to this. First, I feel like I'm watching the wheel being reinvented. OK, maybe a reboot of online sampling is a better description. But there is a certain naiveté in sampling from Facebook, Google, or an email blast to a list of unclear origin and expecting any chance of getting a sample that matches a high-quality probability sample like the one used by the GSS. Second, and probably of greater importance, the level of transparency and analysis is refreshing, especially given the lack of transparency we have seen over the years from online sample companies.
For years I have been frustrated by this industry sector's out-of-hand rejection of online methods and by a research agenda that seems directed only at demonstrating that online does not work. But the people doing this kind of work have a lot to contribute to the debate about the quality of online samples and how to improve it. To paraphrase a statement Doug Rivers made at an AAPOR conference two years ago, it's time to move beyond keeping score. It's nice to see that finally happening.