It’s the Process, Stupid!

I am in Berlin at the International Conference on Survey Methods in Multinational, Multiregional, and Multicultural Contexts. Quite a mouthful, and so it’s known simply as 3MC. Yesterday I had lunch with some European colleagues involved in both ESOMAR and ISO. One of them posed the question of which is better: a survey from a well-designed probability sample that is poorly executed, or a survey from a non-probability sample that is well executed. This, as it turns out, is a theme that keeps popping up from session to session.

The first session was all about guidelines for conducting multi-cultural or multi-country research. It featured a set of guidelines developed via CSDI. I will have more to say about them in a week or so, once they are released and I have the link. For now the real point is that a very distinguished set of survey researchers, including the likes of Tom Smith from NORC and Lars Lyberg from Statistics Sweden, seemed to agree that a good survey is, in the words of Bill Blyth from TNS, 10 percent design and 90 percent process. To paraphrase Lars, there are so many opportunities to introduce error in questionnaire construction, translation, data collection, coding, data processing, analysis, and so on that the problems of sampling error begin to seem less important. At Statistics Sweden, for example, he discovered that coding errors could cause estimates of labor force participation to be off by as much as 40 percent.

This theme reappeared in a subsequent session that featured presentations about some of the major public policy surveys now going on around the world. In many of these countries it’s hard to even think about having a high-quality sampling frame from which to draw a representative sample. So researchers do what they can but mostly focus on consistency of execution. Good advice, I think.