Day 2 of The Road to the Client Congress brought a bit more insight. We were joined by Bob Groves, Director of the Survey Research Center of UM and arguably one of the smartest and most influential survey methodologists on the planet. His name tag was the only one with a title on it ("Prof"); the rest of us were just Reg, Laura, Walter, Janice, Bill, etc.
The day started where the previous day left off, with lots of hand-wringing about "bad" panelists. One popular solution is scoring individual panelists so that they develop a kind of electronic rap sheet we can use to ID the especially evil respondents. Bob was quick to remind us that the history of surveys has been more about the quality of the estimates produced than about monitoring the quality of respondents. Are these bad boys and girls so numerous that they pervert our estimates and cause us to make bad business decisions?
It caused me to wonder whether the current focus on "panel data quality" is something of a red herring that keeps us from looking at the really key issue here: how do we generate reliable estimates from non-probability samples?
I also was reminded of some research I saw presented earlier this year by some people from IPSOS. They looked at the various bad panelist behaviors and tried to see which of them had a significant impact on estimates. While I can't say that I found their results or conclusions especially convincing, I think it's the kind of study we need to see more of.