Day 2 launched with Jeff Miller from Burke (and conference co-chair) giving us a nice summary of the previous day. Then we moved into two presentations looking at differences in sample composition from traditional online panels versus those constructed through online intercepts. This is a really important issue given the current trend among panel companies to augment their own panels with all sorts of other sources (social networking sites, classic "river," other panels, etc.).
David Bakken from KJT Group was first up and described an experiment comparing the standard panel methodology, invitation of panel members to a routing site, and intercept. His points of comparison included the American Community Survey, a high-quality probability-based survey conducted by the US Census Bureau. There are lots of detailed findings in this study, but the bottom line seemed to be that (1) the intercept added to the overall diversity of the sample in a positive way and (2) all three online sources synced up pretty well on most measures.
He was followed by Gina Pingitore from J.D. Power. She described a similar comparison of intercept samples (referred to in her case as "real time" surveys) to traditional panel. She looked at two ongoing surveys—one of satisfaction with energy utility companies and the other with credit cards. Again, the sheer volume of data made it difficult to absorb everything, but the general takeaway was that things match up pretty well both at the individual survey level and with previous waves. The weakness here, however, was the lack of an external validity check of the sort included in the previous presentation.
The next two presenters looked at ways to correct bias in online panels. Mitch Eggers from GMI described their Pinnacle product. They have been looking at ways to select samples that are balanced on attitudes as well as demographics. Their benchmark for attitudes is the General Social Survey (another high-quality probability-based survey) and its various clones throughout Europe. The underlying premise is that the GSS is an accurate accounting of the attitudes and beliefs of Americans across roughly 60 questionnaire items. In Mitch's view, if your sample syncs up with the GSS then it's nationally representative. So they start by administering a 60-item battery to every sample source they use, and the results drive a profile of each panel that can be used in sample selection. God, of course, is in the details, which were difficult to get through in the 25 minutes or so of the presentation. But it probably is worth a closer look.
He was followed by Steve Gittelman of Marketing, Inc. and Adam Portner of Research Now. Their focus was on adding demographic balance to standard panel through the use of social network samples. The question Steve was trying to answer: how much is too much? Over roughly the last three years Steve has profiled literally hundreds of panels from all over the world and has used those data to build a segmentation scheme that he argues allows him to draw sample that is consistent over time and representative, at least of the online population if not the population as a whole. These segments are then used to monitor incoming sample from social networking sites to ensure that the amount added does not throw the segments out of whack.
I am quick to admit that I have not done justice to any of these four presentations in the two or three sentences I've allotted to them. All four presented lots and lots of data. A thorough appraisal of each should begin with a close examination of those data. Contacting these folks directly may be worth your time if you are as interested in the issue here as I am.
At this point I had to leave for the airport. But before leaving I recruited Bob Fawson of Opinionology to summarize the last sessions for me. Here is Bob's report.
The afternoon launched with Betty Adamou from Nebu speaking on the intersection of research and gaming. Betty is passionate about gaming and made the case that respondents would like to participate in research games. The average attention span of students in the UK is 10 minutes, but the average gamer will spend over 30 minutes at each sitting. How games would collect data salient to research questions is less clear, however. Issues of cognitive focus and representativeness need to be addressed more clearly. Gamification of surveys is a hip topic at the moment, and we no doubt will be hearing more about it in the future.
Jamie Baker-Prewitt from Burke followed with an evaluation of social media buzz as a new source of market information. It was a thoughtful and methodical treatment of the topic, where scraped social media data were compared against traditional survey data for a range of brands across three waves. There were a few key findings: (1) volume of brand buzz (both positive and negative) positively correlates with customer loyalty; (2) brands perceived as 'good value' have negative correlation with volume of brand buzz; and (3) there is a negative correlation between brand trust and the percentage of brand buzz that is negative. Jamie also noted that for brand equity metrics, correlations are strongest when social media is measured after the survey (i.e. used as a lagging indicator). Lest we get too excited about the application of statistical tests with social media data, Jamie reminded us that it is still not representative or projectable.
The conference closed with a panel discussion on digital media tracking and evaluation. Frank Kelly from Lightspeed moderated, with Jim Forrest from Ipsos, John Bremer of Compete, and Duane Berlin, CASRO's legal counsel, participating. There is a strong, and perhaps healthy, tension between the need to track behavior for accurate measurement and privacy concerns. John noted that the work they do is "a bit Orwellian." And, while respondents express a desire for greater disclosure and control over the experience, they rarely read the disclosures or use the tools they currently have. The panelists expressed a good deal of optimism about the future of this work, even in the face of impending regulatory oversight. That is, until Duane showed up with a bucket of cold water. Most practitioners, according to Duane, are currently in violation of Federal law. All in all, a very good panel discussion. There may be significant implications for the usefulness of panels and, more specifically, our ability to recruit new respondents in the future. I'm sure CASRO will keep a close eye on these regulatory issues as they continue to unfold.