This is something of an unplanned follow-up to my last post. While catching up on my reading I came across an interesting article in the winter issue of POQ by LinChiat Chang and Jon Krosnick reporting on a neat little study that compares responses from a telephone RDD sample, an online panel recruited by RDD (Knowledge Networks), and an online volunteer panel (Harris Interactive). One key finding is in sync with something I have heard talked about in connection with the ARF study: online panel respondents do a good job of completing questionnaires and the more they do the better they get.
As a broad summary, in the Chang and Krosnick study the online panel results showed less satisficing, less social desirability bias, better self-reports, and greater internal consistency than the results from the RDD telephone sample. The online panel folks were simply better at doing surveys. Some of that was attributed to more practice, but some of it also was due to a stronger tendency among volunteer panel respondents to select only those studies where they had a strong interest in the survey topic. The telephone results, on the other hand, were more representative both demographically and in terms of electoral participation. These differences persisted even after weighting.
A cynic might say, "Pick your poison." An optimist might say, "Good input to fit-for-purpose methodology choices." I might wonder whether we've used up a lot of time, money, and stomach lining worrying about the wrong problem.
Comments
2 responses to “Practice makes perfect”
There are a ton of parallel studies like this. In the end, my philosophy is that if the conclusions you draw from your studies consistently predict the marketplace, then it doesn’t matter how imperfect your method was. Accurate prediction is the only thing we are trying to achieve.
My bulleted summary of their review of other research on the “practice effect”:
* Regularly answering surveys may improve accuracy of responses
* Panel members may become more introspective and self-aware, improving their reporting
* Respondents’ answers to attitudinal questions improve with practice
* “Stimulus hypothesis”: asking about a future activity prompts that activity
* Past survey-taking makes panelists less like the general population
* Panelist attrition nonrandomly affects panel representativeness
Good stuff! Thanks for giving this study some needed publicity.