Sometimes close counts in more than horseshoes

Ah, when bloggers argue. It all started with a cry of anguish from lovestats about representative samples being “100% unattainable.” At some point I gather that Ray Poynter told us that “online quant is busted,” although that’s just hearsay on my part. Now Jeffrey Henning has weighed in and reasoned down to the conclusion that “Representative samples are 95% attainable, with a confidence interval of plus or minus two pundits.” Last, but certainly not least, Dan Kvistbo, who as far as I can tell does not blog, has tweeted closest to the real issue: “the degree of compromise varies enormously – and thus the quality of the approximation…”

The issue is not whether one method gets to the truth while another does not, but how close or how far away we are. No method is 100% accurate every time. All of our methods produce estimates with some amount of error in them, and it’s our job as researchers to figure out how much error there is and then decide how that error limits the interpretation we pass to our clients. That’s why the concept of total survey error is so useful. It provides a framework for conceptualizing the weaknesses in a method and therefore the kind of bias/error we might expect in our results. It can also point us to additional investigations we might do to understand just how big an impact an obvious source of error (e.g., high nonresponse) might have on our results.
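As a concrete reminder of why that framework matters: the margin of error we typically report captures *sampling* error only, while total survey error also covers coverage, nonresponse, and measurement error. A minimal sketch (the function name is my own, not from any of the posts being discussed):

```python
import math

def sampling_margin_of_error(p, n, z=1.96):
    """Classic 95% margin of error for a proportion.

    This is sampling error ONLY -- it says nothing about coverage,
    nonresponse, or measurement error, which total survey error
    also asks us to account for.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A survey of 1,000 respondents with a 50% observed proportion:
moe = sampling_margin_of_error(0.5, 1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3.1%
```

The point of the sketch is that a survey can report a tidy ±3.1% while still being badly off, because the other components of total survey error never appear in that number.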

Unfortunately, most of us in MR don’t know as much as we should about TSE or the significant body of research exploring its various dimensions. We really should. Not only would it make us better researchers capable of doing better by our clients, it also might put an end to inane press releases like this one.


Comments

3 responses to “Sometimes close counts in more than horseshoes”

  1. Sorry, Reg, I should have made it clear: Ray presented a Revelation Great Thinking webinar yesterday, where he made the comment. I will link to it once it is uploaded.

  2. Jeremy Hemingray

    However lacking in ‘error’ your survey is, the information you get is only as good as the questions you ask. Given how poor we (humans) seem to be at bearing witness to our own thoughts and behaviours (Herd et al), isn’t an equally big problem for MR the assumption that by asking a set of highly structured questions we can derive data that yields genuine insight into the real motives for human behaviour? Isn’t this partly what Lovestats was driving at?

  3. Jeremy Hemingray

    I mean Earls et al