It’s been a big week for Twitter. First came the announcement of the Nielsen deal, and then this item that seems to say Coke is embracing Twitter surveys in a big way. Expect a stampede to follow.

This was inevitable, but the timing is curious. Pew tells us that as of May, 15% of US adult Internet users say they use Twitter, and just about half of those do so every day. And those folks are disproportionately African-American, young, and urban.

You could look at this as one more reason for MR to work the worry beads, or you could see it as an opportunity. After all, we’re supposed to be good at sampling, right? And the basic principles of sampling are really useful for looking at a dataset, understanding its biases, and explaining who the data represent, what they mean for the client’s target market, and therefore what actions the client should take.

But alas, the last 15 years of online research have demonstrated pretty clearly that we don’t understand sampling much at all. If we did, we would have recognized panels for what they are and either labeled the work appropriately or developed the techniques to overcome their shortcomings. The latter issue has finally moved to the top of the agenda for some, but sadly not for all.

So here we go again, ready or not. Twitter, Facebook, Google, Mobile, Big Data – we are going to have to deal with all of it. Will we dig into all of it in a systematic way to figure out what’s really there and what it can tell us, or will we just accept it all at face value? I’d like to think that one way for us to morph as these new data streams go mainstream is by leveraging our experience in research design, and especially in the basic principles of sampling – not so we become samplers (God forbid) but so that we can evaluate the validity of the data in front of us.

It’s hard to feel encouraged. And if one more person tells me that it’s going to be OK because of the law of large numbers, I surely will scream.
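The trouble with the law-of-large-numbers defense can be shown in a few lines. The sketch below uses made-up numbers (a hypothetical 30% approval in the full population versus 60% among platform users): no matter how large the sample gets, an estimate drawn only from platform users converges to the platform rate, not the population rate.

```python
import random

random.seed(0)

# Hypothetical rates (assumptions for illustration, not real data):
# 30% of all consumers like a product, but 60% of the subgroup
# active on a given platform do.
POP_RATE = 0.30
PLATFORM_RATE = 0.60

def biased_sample_mean(n):
    # Sampling only platform users: each respondent likes the
    # product with probability PLATFORM_RATE, not POP_RATE.
    return sum(random.random() < PLATFORM_RATE for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    est = biased_sample_mean(n)
    # The estimate stabilizes as n grows -- the law of large numbers
    # guarantees convergence, but to the platform rate (0.60),
    # not the population rate (0.30).
    print(n, round(est, 3))
```

Bigger samples buy you precision, not accuracy: the bias from who is in the frame never washes out.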


Comments

6 responses to “#Twittersurvey: You knew this was coming”

  1. @Reg, my feeling is that the breakthrough we need is more insight into when reactions to a message or question are broadly homogeneous and when they are heterogeneous.
    When most people think the same thing, the sample structure tends not to matter very much. This is where things like Twitter and Google Consumer Surveys can help. If we have four ads for Pepsi, the rank ordering from one type of Pepsi user-group compared with another tends to be the same – unless the ads are well outside the box.
    However, when views, attitudes, and beliefs differ, we need to balance the sample, which means knowing something about the population. This is where Twitter and even online access panels create dangers.

  2. Exactly! Kish calls them “disturbing variables,” defined as “uncontrolled extraneous variables which may be confounded with the explanatory (dependent and independent) variables.” We need to identify the confounders that may be important given the particular survey topic(s) and balance them properly (just as we do with demos). It’s a big job, but some of the political guys seem to have figured it out in their domain. And some of the panel companies (GMI, Toluna) are on the scent.
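    The balancing described above can be sketched as simple cell weighting on one confounder. All the shares and group means below are invented for illustration: a sample skewed toward young respondents is reweighted so each age group counts in proportion to its assumed population share.

    ```python
    # Hypothetical figures only: population shares, a skewed sample,
    # and observed approval rates by age group.
    population_share = {"18-29": 0.25, "30-49": 0.35, "50+": 0.40}  # assumed shares
    sample_counts    = {"18-29": 600,  "30-49": 300,  "50+": 100}   # skewed sample
    group_means      = {"18-29": 0.70, "30-49": 0.50, "50+": 0.30}  # observed approval

    n = sum(sample_counts.values())

    # Cell weight = population share / sample share for each group.
    weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

    # Unweighted mean over-represents the young, heavily sampled group.
    unweighted = sum(sample_counts[g] * group_means[g] for g in sample_counts) / n

    # Weighted mean rebalances each group to its population share.
    weighted = sum(sample_counts[g] * weights[g] * group_means[g]
                   for g in sample_counts) / n

    print(round(unweighted, 3), round(weighted, 3))  # prints: 0.6 0.47
    ```

    The catch, as the comment notes, is that this only works when you know (or can defensibly estimate) the population shares of the confounders that matter for the topic at hand.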

  3. I’m in your boat. Yay. One more survey tool. One more way to get biased results. One more way to get non-generalizable data. We know very well that individual websites attract very different groups of people, and Twitter is a perfect example of that. Every single social media research project I run shows that Twitter results are completely different from Facebook, from YouTube, from Flickr, from XYZ results. So, sure, Twitter surveys will be fun and interesting, but predictive? Nope.

  4. Nice article, thanks for the information.

  5. Great questions and responses. The challenges of innovation and pushing beyond the normal/traditional always bring new issues and problems to solve. Those who solve them stay ahead. This seems like an opportunity.

  6. P.S. I like being a “sampler”.