CASRO Online – Part 4

Day 2 of CASRO Online. John Bremer, Conference Co-Chair, has promised "a riveting day." I'm ready for that. He's reviewing yesterday's session and makes it sound better than I remember it. Maybe it's me. To my eye, it had a really strong start and then drifted downhill a bit. But still better than some other conferences I've attended over the last couple of years.

First topic this morning is long surveys. The underlying premise is the sad realization that clients are just not going to accept shorter surveys. So what to do, especially for mobile? My buddy Frank Kelly is leading off and he's got data showing that people's tolerance for long surveys declines as you go from a PC to a tablet to a smartphone. So this presentation is going to be about "chunking" surveys into respondent-friendly pieces, and then putting them back together ("fusion"). Two kinds of chunks: within the same respondent (presumably over time) or across respondents (maybe at the same time). This is serious business and it takes a lot of thoughtful planning. It also requires some modeling to understand the structure of the overall questionnaire and the "hooks" that enable you to chunk it out and then fuse it back together. To be honest, I've not followed the discussion other than to note that there seems to be some Bayesian modeling involved. Maybe I'll get it when I read the actual paper. But I think this is important stuff for us to be paying attention to. I'm not sure that the within-respondent approach has legs because in the end, the same respondent still has to do it all. But it might be really interesting if it can be done across respondents.
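To make the across-respondents idea concrete, here's a toy sketch of my own (not from Frank's paper): everyone answers a short core module, which serves as the "hooks" for fusion later, and the remaining questions are split into chunks that get rotated across respondents. All the question names are made up for illustration.

```python
import random

# Hypothetical questionnaire: a short "core" everyone answers (the hooks
# used later for fusion), plus the remaining questions split into chunks.
CORE = ["age", "gender", "brand_attitude"]   # illustrative hook questions
REST = [f"q{i}" for i in range(1, 25)]       # 24 non-core questions

def make_chunks(questions, n_chunks):
    """Split the non-core questions into n roughly equal modules."""
    size = -(-len(questions) // n_chunks)    # ceiling division
    return [questions[i:i + size] for i in range(0, len(questions), size)]

def assign(chunks, rng):
    """Each respondent gets the core plus one randomly assigned chunk."""
    return CORE + rng.choice(chunks)

rng = random.Random(42)
chunks = make_chunks(REST, 3)
for rid in range(3):
    survey = assign(chunks, rng)
    print(rid, len(survey), "questions")     # 3 core + 8 chunk = 11 each
```

Each respondent sees 11 questions instead of 27; the fusion step then has to stitch the chunks back into full records, which is where the serious modeling comes in.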

Now Frank is wandering into potential heresy by wondering whether all questions need to be answered by all respondents, routing aside. Do we need the same number of completes for every question? It's another, much simpler way to shorten surveys, but my guess is that it would be a tough sell to clients.

A new presentation with a somewhat similar theme from folks at Gongos and SSI. Their focus is chunking across respondents and dealing with the missing data that you inevitably accumulate in the process. What I already like about this presentation is that the hooks they want to use to put Humpty Dumpty together are attitudinal and/or behavioral questions rather than demographics. Good choice! So this is really about respondent matching. Cool. They also tried hot deck imputation and it seemed to work as well. Bad news: matching by demos seems to work better than matching by attitudes/behavior. Strange.
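For the imputation piece, here's a minimal hot deck sketch (again my own illustration, not the Gongos/SSI method): within each matching class, missing answers are filled by drawing from "donor" respondents in the same class who did answer the question. The `segment` column stands in for whatever hook variables define the match.

```python
import numpy as np
import pandas as pd

# Toy data: respondents answered different chunks, so each has gaps (NaN).
# "segment" stands in for the matching hooks (attitudinal/behavioral class).
df = pd.DataFrame({
    "segment": ["a", "a", "b", "b", "a"],
    "q1": [5, np.nan, 3, np.nan, 4],
    "q2": [np.nan, 2, np.nan, 1, np.nan],
})

rng = np.random.default_rng(7)

def hot_deck(df, match_col, target_cols):
    """Within each matching class, replace NaNs with a randomly drawn
    observed value from the same class (a simple random hot deck)."""
    out = df.copy()
    for col in target_cols:
        for _, idx in out.groupby(match_col).groups.items():
            vals = out.loc[idx, col]
            donors = vals.dropna().to_numpy()
            if donors.size and vals.isna().any():
                fill = rng.choice(donors, size=int(vals.isna().sum()))
                out.loc[vals.index[vals.isna()], col] = fill
    return out

fused = hot_deck(df, "segment", ["q1", "q2"])
print(fused.isna().sum().sum())  # 0: every class had at least one donor
```

Matching by demos instead of attitudes just means swapping the `segment` classes for demographic cells; the mechanics are identical, which is part of why the "demos win" result is so puzzling.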

Overall, it seems to me that there are two other ways to deal with the problem of downsizing questionnaires for mobile. The simplest and most obvious is to just design shorter questionnaires. We ask a lot of questions that just don't need to be asked, but convincing clients of that has generally gone nowhere. The second option is using surveys to ask a few questions to supplement respondent profiles already built from big data. This is the "getting to why" approach. Things may eventually evolve in that direction, but not soon enough. Modularization, or chunking, looks to me like the right next step. There is some literature on this sort of thing over in the scientific side of the industry that people who are working on this problem should delve into.