CASRO Online Conference: Day 1

I’m at the CASRO Online Conference: Issues in Panel Management and Online Data Collection. The Twitter hashtag is #casro if you’d like to follow the blow-by-blow.
The conference opened with a keynote by Jacqueline Anderson from Forrester. She said all of the things you would want a keynoter to say to open a conference like this one. Her main thrust was the four big trends we face as an industry:

1. We face a whole new set of methods (social media, neuroscience, mobile, etc.) that are gradually replacing our traditional reliance on surveys and asking questions.

2. We need to come to terms with an expanding set of data sources.

3. DIY.

4. The need for more impactful presentation of research results.

Lest we all get too depressed by these challenges, she encouraged us by noting that we are uniquely positioned to flourish in this new world by applying the training, knowledge, and experience we have as researchers. She also argued that we are well-positioned to pick the winners.

Next up were Melanie Courtright and Chuck Miller from DMS/uSamp. They showed us lots of data from an experiment they ran looking at the impacts of various techniques to validate online respondents, that is, determining whether these individuals are who they say they are rather than people creating false identities to get into surveys. Their data showed pretty clearly that the people you lose with this sort of validation are younger, non-white, and lower SES. My personal view is that you lose these people not because they’re frauds, but because they don’t have credit cards, bank accounts, or mortgages and therefore don’t show up in the databases we use for validation. They presented a ton of data to show how getting rid of these folks impacts survey results, but it all went by too fast to make sense of. I look forward to seeing the presentation once it’s posted to the CASRO site.

They were followed by Inna Burdein from NPD, who brought us back to the ongoing concern about the impact of panelist experience on survey results. She noted the division of opinion within the industry about professional respondents. The “saints” view is that they are good, reliable respondents who know how to fill out questionnaires. The “sinners” view is that they are cynics who give half-hearted efforts and skew our data. She had beaucoup charts showing how people who have done lots and lots of surveys respond differently from those who have only done one or two. It was pretty clear that the more experienced panelists underreport the number of brands they patronize and the categories they purchase in. The Q&A brought out a question that her data could not answer: are these results due to some change in panelist behavior over time, or is it an attrition effect? That is, do people who give lots of information just stop participating after a couple of surveys, so that the panelists who hang around are the ones who consistently underreport from the first survey on?

After the soft drinks and cookies placed to lure us into the clutches of the exhibitors, we heard from the reliable Pete Cape from SSI about quotas. Pete asked the (rhetorical) question: is our use of quotas science or just sciency? He showed us how the application of different demographic quotas can produce significantly different results. Were I to fault Pete’s presentation, it would be on the grounds that he did a good job of showing us the dangers of what we’ve been doing but gave us little to take away about what we should be doing. The Q&A that followed laid bare the fundamental lack of basic knowledge about sampling throughout the industry (or at least in the room), but one key point someone did make was a version of what Kish called “judgment sampling.” The argument here is that people who work in a given category all the time have at least informal models in their heads about the characteristics (demographic and behavioral) that correlate with the survey topic, and they have guidelines for selecting samples that balance those characteristics. Still, I think “sciency” is probably the right answer.

Next we heard from Nallan Suresh from MarketTools, who told us about how to find and exclude “bad respondents” from panels. “Bad” in this case was defined as the chronically unengaged, people who satisfice over and over again, survey after survey. He had lots of data to demonstrate the problem and to document that in any given survey these people constitute about 2%–5% of the total completes. He looked a little at their demos but unfortunately could only tell us that they tend to be young and male. In the Q&A a number of people made the same point in different ways: these may be people who really don’t have opinions, and the way they answer surveys reflects how they interact with the world in general. They are not “bad,” just not especially cognitively active.

The day ended with a sort of mega session on mobile organized by Bob Fawson from Opinionology. There were presentations by Sean Conry from Techneos and Patricia Graham from Knowledge Networks, and then a panel with AJ Johnson of Ipsos, independent consultant Peter Milla, and Nathan Eagle from txteagle. IMHO there were two highlights.

The first was Pat Graham’s review of an experiment designed to see if they could make mobile work effectively. In an especially impassioned presentation she described the strengths of mobile as (1) the joy of being able to tell clients something they didn’t already know and (2) the way in which mobile can be a “recall buster” by virtue of getting data in the moment rather than post hoc. I’ve seen a lot of mobile presentations and they all follow a standard formula: charts showing stupendous growth, followed by a list of worries about the limitations of the device, followed by a standard laundry list of applications. But Pat’s presentation gave us another perspective. Maybe you had to be there.

The other standout prez in this session was by Nathan Eagle. He described the use of mobile in emerging markets, which is where he thinks there is the greatest potential for research. He made two really important points. The first is that we should not be thinking about smartphone applications: the vast majority of people in these countries are using what we would consider obsolete devices, and we need to design our research applications accordingly. The second is that mobile service is extremely expensive relative to living standards; in some developing countries a person’s mobile bill may be as much as 10% of his or her annual income. So incentives in the form of free airtime are especially compelling.