Monday, September 22, 2014

Might Twitter Have Helped the Polling for the Scottish Independence Vote?

So Scotland is remaining part of the U.K.  At least for now.  A momentous event in history almost happened.  Yet didn't.

In retrospect, lots of questions deserve to be asked about all those public opinion polls that seemed to indicate the vote was going to be razor-close, perhaps even a "Yes" for independence.  Where did they go wrong, and, for that matter, was social media a better predictor of the outcome?

Justin Wolfers over at the University of Michigan noted how the polling got it wrong while the betting markets got it right.  In other words, asking people how they intended to vote turned out to be a pretty bad predictor, but asking people which side they thought would win was actually far better.  As a result, all of the pollsters calling the election close were basically "looking at the wrong data to make that conclusion".
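For anyone curious what "asking which side will win" looks like in practice, a betting market's prices can be read as implied probabilities.  Here is a rough Python sketch - the decimal odds below are made up for illustration, not the actual prices bookmakers offered in 2014:

    # Rough sketch: convert hypothetical decimal odds into implied win
    # probabilities, normalizing away the bookmaker's built-in margin
    # (the "overround"). The odds are illustrative, not real 2014 prices.
    def implied_probabilities(decimal_odds):
        raw = {outcome: 1.0 / odds for outcome, odds in decimal_odds.items()}
        total = sum(raw.values())
        return {outcome: p / total for outcome, p in raw.items()}

    odds = {"No": 1.25, "Yes": 4.00}       # hypothetical bookmaker odds
    print(implied_probabilities(odds))     # ~{'No': 0.76, 'Yes': 0.24}

In this made-up example the market is effectively saying "No" has about a three-in-four chance of winning, which is a very different message than a poll showing a 52-48 split.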

The Monkey Cage is right to point out, however, that if you only look at the polling in the final few days, the "No" movement actually came out ahead each time, albeit often within the margin of error.  Thus - since the closer you get to the day of an election, the better polls are at predicting the outcome - the polls actually didn't "get it wrong" at all.  They correctly predicted the ultimate outcome of the referendum, even though their numbers turned out to be off by a few percentage points.
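Since so much here turns on the phrase "within the margin of error," it is worth spelling out what that means.  For a simple random sample, the standard 95 percent margin of error for a proportion p with sample size n is roughly 1.96 * sqrt(p(1-p)/n).  A quick sketch with illustrative numbers (not the actual sample sizes of the referendum polls):

    # Approximate 95% margin of error for a proportion from a simple random
    # sample. The 52% / n=1000 figures are illustrative only.
    from math import sqrt

    def margin_of_error_95(p, n):
        return 1.96 * sqrt(p * (1 - p) / n)

    p_no, n = 0.52, 1000
    print(f"52% +/- {margin_of_error_95(p_no, n):.1%}")   # about +/- 3.1 points

So a poll putting "No" at 52 percent among roughly a thousand respondents is consistent with anything from a narrow "Yes" win to the 55-45 result we actually got.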

Let the political scientists sort this out.  In the meantime, the Monkey Cage raises a more intriguing question: Should online social media activity inform such polling, and if so, how?  Consider:

What strikes me as potentially useful about the Twitter data is if we view it in combination with the polling data. Suppose someone had told you before the election that the final polls (“No” at 52 percent) were likely to be off by 3 percent, but they didn’t know in which direction. At that point, figuring out that direction would be crucially important, and could at least in part hinge on knowing which survey response (i.e., “Yes” or “No”) could be most likely to trigger a “Bradley Effect,” that is, an overestimating of support for one side because people didn’t want to admit they were voting the other way because they thought others (including here the pollster) might think badly of them. From this perspective, the Twitter data might prove useful, as it could show us which side had the popular enthusiasm, thus making it harder for people to admit to pollsters that they might not vote in that way, which in this case would be the “Yes” vote.

Using Twitter to measure "popular enthusiasm" might be a worthy supplement, at least for gauging the youth vote.  But Twitter skews toward younger and more politically vocal users, and that selection bias might negate the benefit in the first place.  Besides, after watching Trendwatch display the frequency of "Yes" and "No" tweets in real time on the day of the referendum - a stream that heavily favored the "Yes" movement for most of the day - one has to remain skeptical about Twitter's trustworthiness in predicting voting outcomes.
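To make that skepticism concrete, here is a minimal sketch of the kind of raw tweet counting a real-time tracker does.  The hashtags and sample tweets below are hypothetical; the point is that raw counts measure who tweets, and how loudly, not who votes:

    # Minimal sketch of raw "Yes" vs "No" tweet counting. Hashtags and the
    # sample tweets are hypothetical; the counts reflect enthusiasm among
    # people who tweet, not the voting population.
    from collections import Counter

    YES_TAGS = {"#voteyes", "#yesscotland"}      # illustrative hashtags
    NO_TAGS = {"#voteno", "#bettertogether"}     # illustrative hashtags

    def tally(tweets):
        counts = Counter()
        for text in tweets:
            words = set(text.lower().split())
            if words & YES_TAGS:
                counts["Yes"] += 1
            if words & NO_TAGS:
                counts["No"] += 1
        return counts

    sample = [
        "Polling day! #VoteYes",
        "Quietly hoping for a #VoteNo result",
        "One more for #YesScotland #VoteYes",
    ]
    print(tally(sample))     # Counter({'Yes': 2, 'No': 1})

A feed like that tells you which side's supporters are noisier online - a useful input for the "Bradley Effect" question above - but nothing about turnout or about the quieter voters who ultimately carried the "No" side.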


  
