CASRO Panels Conference – Day 2

Back at CASRO Panels for another day. First speaker was to be Joel Rubinson from ARF but he has sent his number two, Ray Pettit. He began by reviewing the major findings from the Foundations of Quality project:

  • There is a lot of variation in results among panels. In other words, results on a study will change if you change panels. A further consequence is that blending sample from multiple panels can be dangerous.
  • Purchase intent measures are impacted by a panelist's tenure on the panel.
  • Panelists taking lots of surveys is not necessarily a bad thing.

He also put up a graphic showing all of the things that impact "Panel Data Quality." When I saw it, my thought was that they are rebuilding the Total Survey Error model. That existing framework, with its rich literature, would have been a better starting point than slowly reinventing it piece by piece in a somewhat unsystematic way. Drawing on and participating in that literature might have been the better approach.

The rest of the presentation was about the QeP process that involves a formalized set of forms and procedures to document things at the panel stage, the individual survey stage, and the research agency editing stage. It's been tested with some big suppliers and met with enthusiasm. They are going to run training programs for it as they roll it out more broadly.

In the Q&A one of the program chairs (Jeff Miller) pressed him on when we will see detailed results. So far we've only seen high-level material, and there has been considerable disappointment around the industry with what has been released. The summary apparently was published in the December issue of the Journal of Advertising Research. He promised the detailed results in March.

Also in the Q&A someone pointed out that just as this is rolling out for panels the whole panel landscape is changing. How quickly they can evolve to deal with that probably is a critical success factor for the initiative.

Next we heard from Nallan Suresh and Michael Conklin from MarketTools. They have been doing some interesting work building regression models to understand what drives respondent engagement. The essence of their model is a combination of observed behavior and outcomes on debrief questions. Key findings:

  • Shorter surveys are better than longer surveys and the max seems to be around 17-20 minutes.
  • Matrix questions are inherently problematic, although they do shorten surveys so one has to seek a balance.
  • Easily understood questions with intuitive answering devices are better than complex, difficult questions with unfamiliar answering conventions that cause respondents to struggle on some pages of the survey.
  • Key indicators of bad survey design are high rates of abandonment and satisficing.

To their credit, they do not advocate dealing with the problem with color and Flash gadgets, as many others have done.
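The MarketTools model itself was not presented in detail, but the general shape of this kind of driver analysis — regressing an engagement measure on survey design features — can be sketched in a few lines of plain Python. Everything below (the feature names, the toy data, and the resulting coefficients) is invented for illustration and is not their actual model.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (pure Python).

    Solves (X'X) b = X'y with Gauss-Jordan elimination; fine for a
    tiny illustrative design matrix like the one below.
    """
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # Gauss-Jordan elimination
        pivot = xtx[col][col]
        for j in range(k):
            xtx[col][j] /= pivot
        xty[col] /= pivot
        for row in range(k):
            if row != col:
                f = xtx[row][col]
                for j in range(k):
                    xtx[row][j] -= f * xtx[col][j]
                xty[row] -= f * xty[col]
    return xty

# Hypothetical rows: [intercept, survey_minutes, n_matrix_questions]
X = [[1, 10, 1], [1, 15, 0], [1, 20, 3], [1, 25, 2], [1, 30, 5]]
# Invented engagement scores that decline with length and matrix use
y = [7.5, 7.0, 4.5, 4.0, 1.5]

coefs = ols(X, y)  # [intercept, effect of minutes, effect of matrices]
```

With the made-up data above, both design-feature coefficients come out negative, mirroring the presenters' findings that longer surveys and heavy matrix use depress engagement. A real analysis would of course use far more respondents, more features, and a proper statistics package.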

This is good common-sense stuff, and it's nice to see it backed up with data. As an example, they brought a client to testify to his ability to move a fairly complex survey task from a CLT setting to online, simplifying it along the way. Lots of data to show that, mostly, it worked. A nice story, but it seems like a bit of a non sequitur.

It's not their point, but the cynic in me wonders if maybe the kinds of people who show up at CLT testing are the same kind of people who sign up for panels.

The last presenter in this segment was Adam Porter from e-Rewards/Research Now. He reported on some research designed to get a handle on what Rs view as a good survey versus a bad one.

  • They found a positive relationship between survey satisfaction and incentive size.
  • They found a negative relationship between survey satisfaction and length.

Not much of a surprise there but the better findings focused on the characteristics of a bad survey:

  • Unclear questions
  • Repetitive questions
  • Not relevant (to the R) questions
  • No way to express an opinion (no DK, not able to skip, etc.)
  • Too detailed
  • Too many clicks

The positives were essentially the opposite of the negatives. But one key point: the single most problematic feature was restricted answer options: no DK, no way to refuse, no way to skip, no open end to express an opinion.

Comments

2 responses to “CASRO Panels Conference – Day 2”

  1. Hi Reg–
    I want to make sure your readers don’t get the wrong idea about your comment:
    Back at CASRO Panels for another day. First speaker was to be Joel Rubinson from ARF but he has sent his number two, Ray Pettit.
    DOCTOR Ray Pettit is very close to this ARF initiative and totally able to represent ARF progress. While I had every intention of going, I was unaware that it conflicted with an ARF board meeting. I hope that clarifies things.

  2. Indeed. And for anyone who has not been paying attention to what has been coming out of the FOP initiative, Ray has been coauthor on virtually everything.