social science did ok with the 2016 election but not great


From Seth Masket at Pacific Standard.

People have been having meltdowns over polls, but I’m a bit more optimistic. When you look at what social science has to say about elections, it did ok last week. I am going to avoid poll aggregators like Nate Silver because they don’t fully disclose what they do and they appear to insert ad hoc adjustments. Not horrible, but I’ll focus on what I can see:

  1. Nominations: The Party Decides model is the standard. Basically, the idea is that party elites choose the nominee, who is then confirmed by the voters. It got the Democratic nomination right but completely flubbed the GOP nomination. Grade: C+.
  2. The “fundamentals” of the two-party vote: This old and trusty model regresses two-party vote share on recent economic conditions. Most versions of this model predicted a slim victory for the incumbent party. The figure above is from Seth Masket, who showed that Clinton 2 got almost exactly what the model predicted. Grade: A
  3. Polling: The final poll averages before the election showed Clinton 2 leading Trump by about 3.3 points. She is probably getting about 0.6% more than Trump, so the polls were off by about 2.7 points. That’s within the margin of error for most polls. I’d say that’s a win. The polls did, though, inflate the Johnson vote. Grade: B+.
  4. Campaigns don’t matter theory: Clinton 2 outspent, out-organized, and out-advertised Trump (except in the upper Midwest) and got roughly the result a “fundamentals” model would predict. This supports the view that campaigning has only a marginal effect in high-information races. Grade: A.
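The “within the margin of error” claim in item 3 can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a typical national poll of n ≈ 1,000 respondents and the standard 95% confidence level (the sample size and confidence level are my assumptions, not figures from the post):

```python
import math

def margin_of_error(p=0.5, n=1000, z=1.96):
    """95% margin of error for a simple random sample proportion.

    Assumes p = 0.5 (the worst case) and n = 1000, a typical
    national poll; z = 1.96 is the 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error()            # about 0.031, i.e. roughly 3.1 points
poll_lead, actual_lead = 3.3, 0.6  # figures quoted in the post, in points
error = poll_lead - actual_lead    # 2.7 points, just inside that margin
print(round(moe * 100, 1), round(error, 1))
```

Under those assumptions, a single poll’s 95% margin of error is about ±3.1 points, so a 2.7-point miss is indeed inside it for any one poll, which is the sense in which the post grades the polls.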

But what about the Electoral College? Contrary to what some folks may think, this is a lot harder to predict because state-level polls produce worse results in general. This is why poll aggregators have to tweak their models a lot to get Electoral College forecasts, and why those forecasts are often off. Also, the Electoral College is designed to magnify small shifts in opinion. A tiny shift in, say, Florida could move your Electoral College total by about 5% (Florida’s 29 electoral votes are roughly 5% of the 538 total). Very unstable. That’s why a lot of academics steer clear of predicting state-level results. All I’ll say is that you should take these forecasts with a grain of salt.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

November 15, 2016 at 12:01 am

3 Responses


  1. Seriously? “So the polls were off by about 2.7%. That’s within the margin of error for most polls.” That’s what aggregation checks. If they’re all off in the same direction, the margin of error for each poll is not the point.

    The likely-voter model error – under-expecting Trump voters to turn out – represents a failure of understanding the political dynamic. Maybe they just applied the likely-voter model from previous elections, or maybe they flubbed some kind of adjustment. But it seems likely that’s where they need to look.


    Philip N. Cohen

    November 15, 2016 at 4:46 pm

  2. @Phil: Aggregation cancels sampling errors, not errors in measurement. Considering that likely voter models are very hard to do well AND we had a small, but non-trivial, third-party candidate, a 2.5% error is reasonable. As of Nov. 14, the Cook Report shows a 47.8/47.0 Dem/GOP split. If you expect polls to perfectly match electoral outcomes, then you have a very high bar, more than most people who work with electoral data.
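    The distinction drawn here — averaging cancels independent sampling noise but not an error shared by every poll — can be illustrated with a small simulation. A sketch, assuming a true margin of 0.6 points and a shared 2.7-point likely-voter bias (numbers from the post); the 50-poll count and per-poll noise level are illustrative assumptions:

    ```python
    import random
    import statistics

    random.seed(0)

    TRUE_MARGIN = 0.6   # actual Dem lead in points (from the post)
    SHARED_BIAS = 2.7   # systematic error shared by every poll (assumed)
    N_POLLS = 50        # hypothetical number of polls averaged
    SAMPLING_SD = 1.6   # per-poll sampling noise, ~MOE/1.96 for n of 1000

    # Each poll = truth + the same systematic bias + its own sampling noise.
    polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, SAMPLING_SD)
             for _ in range(N_POLLS)]

    avg = statistics.mean(polls)
    # Averaging shrinks the independent noise (by about 1/sqrt(N_POLLS)),
    # but the shared bias survives: avg stays near 0.6 + 2.7 = 3.3,
    # not near the true 0.6.
    print(round(avg, 2))
    ```

    The average lands near 3.3 points no matter how many polls you add, which is exactly why aggregation cannot rescue a shared likely-voter-model error.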



    November 15, 2016 at 7:59 pm

  3. I wasn’t commenting on the overall error; I was commenting on the “within the margin of error” excerpt I quoted, for just the reason you say. That is a meaningless defense of the error.

    (It’s not really useful to argue over whether the science was reasonably bad or not; if you had to choose right or wrong, the polling consensus was obviously wrong. The question is why, and what to do about it.)


    Philip N. Cohen

    November 16, 2016 at 5:30 pm

Comments are closed.
