Here we go again

That was my first reaction when I read a press release from Gongos Research that starts off by saying, "a new study proves that smartphone-based survey data is statistically comparable to online survey data." (My emphasis added.) I would have been a lot more comfortable with this if instead of "proves" they had said "shows" and instead of "is statistically comparable" they had just said "can be statistically comparable." But they are certain. This study proves it. Like gravity, it's not just a good idea, it's the law.

Ray Poynter has already pointed out that one study does not prove anything. Einstein agrees with Ray, having once said that "no amount of experimentation can ever prove me right, but a single experiment can prove me wrong." (To be fair, I don't think he was necessarily commenting on the Gongos release.) A colleague of mine pointed out that, given the comparison is to online, you could dismiss this as damning with faint praise. My worry here is that we are entering a new era that is a sorry replay of the early years of online, when its evangelists made all sorts of claims based on a handful of poorly understood studies, only to discover a few billion dollars of research later that there were some problems we had overlooked and that online, at least as it was being practiced, was not all it was cracked up to be. Some of these problems we have yet to solve. But we're working on them.

I think this happens because MR is first and foremost a business, and therefore making money is the first priority; doing good research comes in second. Creating competitive advantage is key, and one sure way to do that is to feature a cool new methodology with a dose of empirical research to demonstrate its validity. The upside is that this encourages innovation and creative thinking. The downside is the confusion, disappointment, and skepticism it creates among clients. I don't mean to single out Gongos; they are neighbors and nice people, some of whom I know personally. This is an industry issue.

There is a better and more reasoned way to go about this sort of thing, and it relies on building a theoretical framework that specifies under what conditions a methodology works well and when it works poorly. I recommend to you an interesting paper by Carlile and Christensen on the process of building theory in management research. Briefly, their version of the scientific method starts by collecting lots of observations, then categorizing them based on outcomes and on the properties of those observations that might explain those outcomes. We don't do one study and scream "Eureka!" It's an ongoing cycle of replication and new experimental designs. Applied to the case at hand, we might look at a whole range of mobile studies, their target respondents, the sample, the study topic, the details of execution, the validity tests used, and so on. We might build a body of research and from that develop something we might call "a theory" about when mobile is the right choice and how to use it effectively. Doing so would enhance the quality of the research done in MR.

Unfortunately, we've not really done that with online, and so it probably won't happen with mobile either. After all, we have businesses to run. Caveat emptor.