Tardy report on last November’s “Research Industry Summit”

Shame on me. Way back in November Colleen Carlin attended a conference in Chicago and dutifully wrote up a report for me to share. Then somehow it got lost in my inbox. Imagine that! In the vein of better late than never, here is her report. (As a footnote, this is the same conference referred to in this earlier post.)

The Research Industry Summit: Solutions that Deliver Respondent Quality

The theme of the summit was data quality, and most sessions covered the quality issue as it relates to Web panels. The technical issues (cheaters and duplicates) that have been at the forefront of the quality movement (as it relates to panel data) have faded into the background as respondent motivation took center stage. The general idea seems to be that respondent engagement is synonymous with data quality. Several panel providers put forth suggestions for how to engage respondents. Greenfield Online suggests that Flash programming produces more interesting surveys, which in turn will lead to higher levels of respondent engagement. They presented results from one (yes, just one) experiment in which people were randomly assigned to either a 'traditional' survey or a survey built with Flash. The Flash survey was deemed more engaging based on the following outcomes: on average, it took a minute less to complete than the traditional survey, and it had a higher response rate and a lower dropout rate. What was not examined was the issue of bias. Does the use of Flash introduce respondent bias into the data set? Not all computers or data connections can handle Flash; are the respondents who are excluded somehow different on the attributes of interest? More research is needed to determine the effect of advanced visual (i.e. Flash) techniques on data quality.

Another idea for engaging respondents models itself on popular social networking sites like Facebook. Toluna is the first panel company to embrace social networks as a way to engage panelists: its 2.4 million panel members can now interact with one another via an interface that looks very similar to Facebook, polling and debating each other. Mike Cooke pointed out a potential problem with this model: participatory bias and conditioning effects. How will interaction between panel members affect their responses to surveys? An employee of e-Rewards told me that everyone in the panel industry is closely watching the success or failure of this concept.

Ali Moiz of Peanut Labs sees the synergy between social networking sites and panels a bit differently. Through tools like Facebook Connect, he thinks panel providers and clients can begin to verify the identity of respondents in addition to mining their profiles for additional data. With permission from respondents, a company could gain access to Facebook profiles and use that information to validate that someone is who they say they are (e.g. a technology decision maker). The problem I see here is that it is probably about as easy to set up a false Facebook profile as it is to misrepresent oneself on a Web panel.

A presentation from a client in the financial services industry highlighted the need for caution when switching the mode of data collection for existing research from phone to Web. Research done over the phone was compared with results from three different Web panels. Product demand ratings from the phone data and from two of the panels were statistically indistinguishable. The third panel twice yielded results more than 20 percentage points higher than those reported on the phone or by the other two panels. A closer examination revealed that the third panel was newer and thus had less seasoned members. All past Web studies were then aggregated and metadata about respondents was added (e.g. number of surveys completed, tenure on the panel). The findings indicate that more seasoned panelists, and those who take more surveys, give lower product demand ratings. Linking the survey data to actual purchase behavior showed that these lower demand ratings were much closer to what people actually bought than the other results were. These findings argue for treating panelist behavior as a demographic variable that can be included in analysis.
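To make that idea concrete, here is a minimal sketch (in Python with pandas, using made-up column names and data rather than anything from the actual study) of how panelist-behavior metadata might be joined to survey responses and used to compare demand ratings between newer and more seasoned panelists.

```python
import pandas as pd

# Hypothetical example data; column names and values are illustrative only.
responses = pd.DataFrame({
    "panelist_id": [1, 2, 3, 4, 5, 6],
    "demand_rating": [8, 3, 7, 2, 9, 4],   # stated product demand, 0-10 scale
})
panelists = pd.DataFrame({
    "panelist_id": [1, 2, 3, 4, 5, 6],
    "surveys_completed": [2, 40, 5, 55, 1, 30],
    "months_on_panel": [1, 24, 3, 36, 2, 18],
})

# Join the behavioral metadata onto the responses, then treat it like any
# other demographic: bucket panelists by experience and compare mean ratings.
df = responses.merge(panelists, on="panelist_id")
df["experience"] = pd.cut(df["surveys_completed"],
                          bins=[0, 10, 1000],
                          labels=["newer", "seasoned"])

print(df.groupby("experience", observed=True)["demand_rating"].mean())
```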

My conclusion is that we need to focus on ways to engage panelists, but not in a 'one size fits all' manner. Panelists, like all people, are motivated in a multitude of ways, and we need to find a mix of techniques that will appeal to nearly everyone. We also need to be cautious: carefully test any technique proposed to increase engagement and determine how it affects the data. Primum non nocere.