Plus ça change

In the current issue of Research Business Report Bob Lederer muses on one of his favorite topics, online panel quality, and, at the risk of oversimplifying, seems to say that after lots of industry-wide soul searching it's now time for some action. He concludes by saying, "I suspect that 2010 will be all about tests and adoption of solutions that breathe reliability, replicability, and added value in to an infant (decade-old) research mode that, to those paying attention, had deeply serious shortcomings."

Had? If we have learned nothing else over the last five years, it ought to be this: the online panel model is deeply flawed in both theory and practice. Like my allergies, its flaws can be controlled, but they can never be cured.

Let's talk theory first. Well, there is no theory. The arguments are all empirical: it works. The methodology has been legitimized largely by anecdote and the endless repetition of "It works." No underlying scientific principles have been enunciated or testable theories proposed. Without theory we can never be sure when it will work and when it won't. A tiny handful of people recognize the flaws and are trying to apply a broader set of techniques for working with nonprobability samples, techniques developed in disciplines outside of survey research. I hope they come up with something. But the vast majority of practitioners in MR treat panel sample as if it were a probability sample drawn from a high-coverage frame. It's not. It's a tiny slice of the population that we just don't understand very well at all. The notion that panels are representative of the broader population is just plain silly.

And how about practice? For at least five years we have been talking about four main problems:

  • People sometimes create false identities when they join panels and misrepresent themselves in order to maximize survey opportunities.
  • People sometimes rush through surveys and don't make an honest effort to answer thoughtfully.
  • People will sometimes take the same survey more than once, or worse yet, develop bots that simulate a respondent and take the same survey many times over.
  • The experience of being on a panel and taking lots of surveys over time can change how people respond.

We are told that there are solutions for all of these, but too often the solutions themselves just introduce more problems. For example, it is rapidly becoming a standard for a panel company to "validate" a panelist's identity by bumping his or her particulars up against one of the big marketing databases like Acxiom or Experian. But these databases fall well short of universal coverage of the population and tend to miss people who don't have credit cards or bank accounts. And so real people are rejected and more bias is introduced into the panel. Worse yet, the solutions that are proposed are then ignored. For at least the last two years we have known that simply collecting a respondent's IP address and easily retrieved information from his or her browser can help us identify duplicates. Yet in the past week I've seen two studies with significant duplication in samples from well-established panel companies that claim they use digital fingerprinting to guard against just this sort of problem.
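To make the point concrete, here is a minimal sketch of the kind of duplicate check described above. The field names and the choice of signals are hypothetical illustrations; commercial digital-fingerprinting tools combine many more browser attributes than these.

```python
import hashlib

def fingerprint(ip, user_agent, screen_res, timezone):
    """Hash a few easily collected signals into one identifier.

    These four fields are illustrative only; real fingerprinting
    products draw on many more browser and device attributes.
    """
    raw = "|".join([ip, user_agent, screen_res, timezone])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def find_duplicates(respondents):
    """Return (duplicate_id, original_id) pairs for respondents whose
    fingerprint matches an earlier submission."""
    seen = {}   # fingerprint -> id of first respondent seen with it
    dupes = []
    for r in respondents:
        fp = fingerprint(r["ip"], r["user_agent"], r["screen"], r["tz"])
        if fp in seen:
            dupes.append((r["id"], seen[fp]))
        else:
            seen[fp] = r["id"]
    return dupes

# Respondents 1 and 3 present identical signals, so 3 is flagged.
sample = [
    {"id": 1, "ip": "10.0.0.1", "user_agent": "UA-A", "screen": "1920x1080", "tz": "-5"},
    {"id": 2, "ip": "10.0.0.2", "user_agent": "UA-B", "screen": "1280x800", "tz": "0"},
    {"id": 3, "ip": "10.0.0.1", "user_agent": "UA-A", "screen": "1920x1080", "tz": "-5"},
]
```

Even a crude check like this catches the obvious cases. A sensible refinement is to treat a fingerprint match as a flag for review rather than grounds for automatic rejection, since distinct respondents behind one household or office router can legitimately share an IP address.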

So I think the notion that the online panel paradigm can be "fixed" is fanciful. The essential problem is that the goal our clients have set for us (faster and cheaper) is fundamentally at odds with what we pretend to deliver (accuracy and validity). And it didn't start with online. MR has been cutting corners for decades to make it faster and cheaper. Exhibit A: Quota sampling. We work in a competitive environment where the lower price wins more often than not and the buyer doesn't always understand what he's buying. Another lesson of the last five years: that dynamic is not going to change any time soon.

So regardless of their problems, online panels are not going away. Bob hopes that in 2010 we will see more tests and the actual adoption of some of the solutions now on the table. So do I. But there is a bigger challenge that I see no sign of the industry stepping up to. Here I quote Lyndon Johnson who once said, "Boys, I don't know much, but I know the difference between chicken shit and chicken salad." We might take that to heart. What's been missing so far in all of the discussion of panel quality is a frank admission of what online is not. The panel quality solutions are fine, but they don't replace the need for us to do a much better job of conditioning the conclusions we draw and the advice we give our clients on the quality of the evidence at hand.


Comments

3 responses to “Plus ça change”

  1. Spot on Reg!
    You highlight many critical issues that so desperately need airing and thinking about.

  2. Michael Conklin

    Hi Reg:
    I’d like to take this opportunity to correct some misinformation regarding current practices in online panels that you have put forth in this post. Specifically, I’d like to highlight your comments on validation of a panelist’s identity.
    You state “it is rapidly becoming a standard for a panel company to “validate” a panelist’s identity by bumping his or her particulars up against one of the big marketing databases like Acxiom or Experian. But these databases fall well short of universal coverage of the population and tend to miss people who don’t have credit cards or bank accounts.”
    Acxiom claims to cover 95%–98% of adult US households in its database, which, by the way, does not include credit data (that would be illegal). So I am not sure what you mean by “well short of universal coverage”; it is certainly not as far short of universal as, for example, landline phones. That being said, it certainly is not 100%, so there does exist the possibility that a small number of “real” people will be excluded from the panel. Does this, as you state, result in “more bias being introduced”? Bias is introduced only if these “real” people who are rejected answer differently than the “real” people who remain in the panel. We know (based on research we conducted in April of 2008) that people who are “rejected” answer differently from those who remain. Since the vast majority of those who are rejected are actual “fakes” (see the coverage rates above and the 25–30% rejection rates in any given panel), it seems that the overall amount of bias in the resulting panel is reduced by taking these steps rather than increased.
    I completely agree with your assertion that these issues did not arise with online panels. Our MR clients have come to the conclusion that the imperfect information they get from marketing research (online or otherwise) is better than no information because the perfect solution cannot be attained.

  3. Theo Downes-Le Guin

    I would quibble with Michael’s comment in that I don’t think Reg is holding out for a perfect solution; none of us believes that perfection is out there, or even a worthy goal for MR. But holding out for solutions that are grounded in scientific theory (hardly the same as perfection) is not a bad idea. The periods when MR and polling have group-thought themselves into a frenzy of change and pseudo-innovation have not always ended well for us (viz. Dewey defeats Truman, Beecham v. Yankelovich). As an industry we are saddled with a basic lack of professional education and standards that makes true innovation slow and error rates high (engineers can’t just call themselves engineers, and even they make a lot of mistakes). So we should be all the more wary when we decide to reject or ignore theory and focus solely on practical results.
    That being said, Reg, never raise a problem without a potential solution. Let alone twice.