A couple of weeks back Research asked a number of industry wise men and women to pick one word to sum up MR in 2010. Jeffrey Henning chose "probability." He went on to say, "What 'subprime lending' was to the financial industry, 'access panel' was to the market research industry." He then referenced the Yeager and Krosnick study released in 2009 showing that probability samples still trump convenience samples when it comes to accurate measurement. Just above Jeffrey's commentary was Ray Poynter's, in which Ray wrote, "the industry has largely abandoned random probability sampling." So who is right?
When the online panel chickens first came home to roost three or four years ago and the great panel data quality crisis ensued, I, much like Jeffrey, predicted "a flight to quality." It never happened. The industry somehow managed to pivot on the issues and redefine quality into what MarketTools has neatly summed up with their TrueSample tagline: "Real, Unique, and Engaged." Tired old concepts such as representivity and accuracy were defined out of the equation; survey results need only be consistent and directional. The most important characteristic of a survey respondent became the willingness to faithfully complete our long, tedious and boring surveys, rather than the natural ability to accurately represent everyone else out there sharing the same attitudes, beliefs, behaviors and views of the world.
There are lots of reasons why this has happened and why it's probably never going back. Chief among them is the fact that our customers no longer place any real value on accuracy. It's just too boring. Clients want "insight." Intuition and gut feelings are valued above plodding, systematic analysis. Pop science has driven out real science. Anecdotal evidence is sufficient.
I want to write a paper titled "The End of Representivity: Market Research in the Age of Blink."

So, yes, I think Ray has it right. But he went on to say that researchers will "need to get their heads around concepts such as triangulation and confirmatory, disconfirming, and maximum variation samples." Right again. But I wouldn't count on it.
Comments
5 responses to “Just a little whining”
You can do probability-sampled Internet panels. Why aren’t more people doing it? It’s not *that* expensive, given that you do not need to supply computers and Internets to as great a percentage of households as you once did.
I’m not ready to throw in the towel yet!
Thank you for a(nother) interesting read Reg.
You once referred to the online panel crisis as "the perfect storm": heavy survey taking, panel conditioning, declining cooperation, increasing demand, persistent mode differences, and long, complex questionnaires had eventually led to survey results that not only could not be projected to the general population, but that didn't appear to be replicable from one study to the next either. Ever since then, in 2006, I have been anticipating that sooner or later we'd have our own perfect storm in Europe. When, at last year's WARC Online Research Conference, you suggested that the BRIC countries had learned from US mistakes, my first thought was that I'm not at all convinced that even Europe has taken any real lessons from the US experience. However, although we've seen a few moderate to strong breezes, we haven't come anywhere near the high winds that blew over the US market research industry in particular, once Dedeker and other clients realized that they were "trading data quality for cost savings". Seeing the direction our industry has taken since then, I'm beginning to question whether we'll ever face a similar reckoning.
Just this week, I heard conclusions coming out of the Net Gain 4 conference on social media and online research in Toronto that "there are no bad panels, just different panels". Not unlike what the ARF FoQ study appears to have concluded (I'll admit to being somewhat disappointed by the outcomes of this 'million-dollar' study, at least from what I've learned to date). Although these statements make no sense at all to me, you may well be right that clients, and, I would add, a large part of our industry, simply no longer care about accuracy. Indeed I fear that Diane Hessan of Communispace was right when she said last Wednesday in New York that "top executives would rather have fast than perfect".
With this in mind, though, I would also say that I do think there's still a demand and a place for Jeffrey Henning's "probability". If not in the average commercial brand-related study, then at least within medical research and many areas of social research?
Finally, I know that I am not alone in hoping that you will keep “whining”!
PS. Apologies for the length of this comment!
Thanks for the comment, Dan. I was especially intrigued by your quote: Diane Hessan of Communispace was right when she said last Wednesday in New York that "top executives would rather have fast than perfect". We all know this to be the case, but how often does someone come right out and say it? Is there a paper or something I might have a look at?
Coming very late to this particular party, but I get the feeling that some executives would rather get stupid results very fast than great results that take only slightly longer than the marketing department's attention span. I mean, my new sofa's taking 8 weeks. Should I demand it in 2?
Unfortunately it *is* that expensive, at least relative to the prices that clients have come to expect from access panels. Even if you don't have to supply computers and connectivity, you still have to draw a probability sample and persuade people to become empaneled using offline means. Many MR firms no longer even have sampling statisticians who know how to construct a probability sample. And the cost structure for anything that involves human persuasion is so high, at least relative to letting people sign up on a website, that the end result is unappealing to many clients. Until we come up with a complete and relatively stable database of identifiers (unique email or IP addresses) for people or machines, and make that database publicly available the way the Bell system made phone number data available, I'm afraid I don't see a happy merger of probability techniques with the internet.