I’ve just come back from Toronto where I gave a talk at NetGain 3.0, a one-day conference put on by MRIA. As the title suggests, the focus was online research and the presentations covered all of the usual ground that conferences like this cover. Now I don’t mean that as a knock. I think it’s good news that the issues are being widely discussed in all sorts of venues. There may not be a whole lot of new solutions being proposed but at least people are increasingly aware of the problems the industry and clients are wrestling with.

The conference was opened by Pete Cape from the UK and SSI. Pete has been a major voice in the ongoing debate. He took the group through an exercise that quickly exposed that we are an industry of amateurs with little background or formal training in market research. Most people seem to have just stumbled into the business, and that’s not just true in Canada. It goes a long way toward explaining why we struggle with many of these methodological issues. Bottom line: as an industry we too often don’t really understand what we are selling or the validity of the claims we make for it.

Next up was John Wright, a political pollster from Ipsos-Reid. His talk was equal parts bragging about how accurate their telephone polling has been, presenting lots of data “proving” that online can be just as good as telephone polling if it’s done right, and railing at organizations like MRIA and AAPOR for their intransigence around the reporting of margin of error statistics for online studies. The truth is that political polling is one arena where online has been shown to work pretty well, although the art of political polling is arcane enough that we should probably not infer much about other kinds of research. The railing against MRIA and AAPOR was Exhibit A in Pete Cape’s argument that research training is desperately needed in our industry. I happened to be sitting next to the Standards Chair for MRIA, and we agreed that John’s quarrel was not with MRIA or AAPOR but with the guy who invented the margin of error calculation, with its problematic assumption that you have a probability sample.

Next up was a paper by Anne Crassweller that she had also presented in Dublin at the ESOMAR Panels Conference. It’s one of those studies chronicling an attempt to move a long-term study online that failed because the topic—newspaper readership—is to some degree correlated with online behavior. This would seem to be a classic example of where online does not fit the purpose of the research.

Then came what I thought was the best presentation of the conference, by Barry Watson from Environics. These guys build population segmentation models based on attitudes and values. Barry presented some data comparing three online panels to the general US population. A key segment way overrepresented in the panels is what they call “liberal/progressives.” The underrepresented segments included groups they call “disenfranchised” and “modern middle America.” To really understand the implications one would have to dig deeper into the segment composition, but this approach of trying to understand the attitudinal and behavioral differences of online panelists versus the general population strikes me as very important, and generally missing when people make claims of “representativeness.” Mostly the industry has expressed these things in demographic terms, which really are somewhat meaningless in this context.

Barry also gave us the best quote of the conference: “Bias is only a problem when you don’t know what it is.”

The afternoon was less interesting, even with me kicking it off. My main message: let’s stop talking about representativeness and instead focus on understanding bias and how it relates to the business problem we are studying.

Next we had the obligatory argument for “eye candy” to increase respondent engagement and lots of data to show just how widespread social desirability bias can be. And there was a pitch from the RFL people about their “pillars of quality.”

When it was all said and done I found it not a bad way to spend a day. I got some fresh perspective and a chance to rant a bit which is always welcome.


Comments

4 responses to “Toronto in January?”

  1. Ouch!! Reg, sorry you misunderstood what I said and took the facts of accuracy as “bragging”…on the substance of it all, I’d say your review is disappointing where someone as smart as you can actually help explain why the online and other methods are coming out the same as the RDD.
    And the basic question: if the media who report these things now view them as de facto equivalent because they are coming out the same and accurate on election results using differing methodologies, how do we square the circle when we are essentially ordered by the industry statistics police to call up the reporters and tell them to take off the margin of error because it is not representative?
    That was the essence of what I said and you may have missed the point. Respected methodologists such as yourselves can dance on the heads of pins or be “exhibit A” insulting, but it doesn’t help answer some basic questions that are out there…
    Please explain…
    Thanks and regards.
    John Wright
    Senior vice president
    Ipsos Reid

  2. the Geek

    Controversy at last! Blogs are supposed to be overly opinionated rants meant to stir controversy and get people interacting around issues, but no one ever seems to react to what I write. Until now.
    First, let me repeat what I said from the podium at the conference. “We are an industry of amateurs. I have a Ph.D. in history. I am not a survey methodologist: I just play one at conferences.” So I am at least Exhibit B in Pete’s case and I don’t feel too badly about it. Nor should anyone else.
    Second, there is a clear record that the one area in which online has been shown in multiple trials to produce results that match RDD is political polling. But there are many other areas of research where that is not the case, some of which also were reported yesterday. I don’t know why it works for political polling, but I know it doesn’t work in many other cases because the sample frame is not the target population, it’s this tiny subset of people put together in a rather haphazard way that cannot possibly represent in precise proportions all of the attitudes, beliefs, and behaviors of the total population. There are all of these intervening variables that might explain why some people are in the frame and the vast majority of others are not, and so far we have no way of measuring those variables. In short, the frame is biased in a multitude of ways that we can’t measure and so samples drawn from that frame also will be biased.
    Third, a margin of error calculation essentially says that if I repeat my survey with successive samples from the same frame, 95% of my estimates will fall within the specified range. That’s probably just as true with online except, of course, the frame is not the population. It’s a subset of the population constructed in a very unsystematic and unpredictable way. So there simply is no way, using the language of sample statistics, to make statements about the general population based on survey results that use opt-in panels.
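    For readers who want the arithmetic behind that third point, here is a minimal sketch of the standard 95% margin-of-error calculation for a proportion. The function name and numbers are illustrative, not from the conference; the key is the comment in the code: the formula is only meaningful under the probability-sample assumption the discussion above is about.

    ```python
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for an estimated proportion p from a
        sample of size n. Valid only under the assumption of a simple
        random (probability) sample -- exactly the assumption that
        breaks down for opt-in online panels."""
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) for a sample of 1,000 respondents:
    moe = margin_of_error(0.5, 1000)
    print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
    ```

    The calculation itself will happily produce “+/- 3.1 points” for any sample of 1,000, panel or not; nothing in the math flags that the frame is a haphazard subset of the population, which is the whole dispute.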
    And my wife will be the first to tell you, I’m a lousy dancer.

  3. Ok…so does it seem strange to you that the three guys most engaged or mentioned in this discussion all have history degrees?
    My questions still stand: how does this work in one part of the research field with verifiable proof in the public domain and yet people say it doesn’t work elsewhere in the field?
    It’s not a fluke. It is demonstrable fact. How can the same tools work in one place but not in another? You can’t infer that political polling is not real research. Re-read my paper: there was nothing radical there. Just an open-ended challenge to the rest of the community, because in the public and news media mind it now is equivalent while the statistics police and margin of error matrons want to disavow it. What I basically said, without saying it outright, was “do you folks realize that if you keep at this, the media and every other stakeholder out there will be confused over the legitimacy of everything done online? Is this what the industry wants to do and be known for? Is this the battle they want to fight? Isn’t there something else more important to deal with?”
    John Wright
    Senior Vice President
    Ipsos Reid

  4. the Geek

    Having watched political pollsters up close for a number of years, I have concluded that it is as much art as science. Political pollsters have advantages that most other researchers don’t have. They measure on a repeated basis, they have the work of others for comparison, and they get to validate their results in very clear terms. Over time they hone their questions and develop elaborate weighting schemes that give them a distinct advantage over researchers studying other kinds of problems.