orgtheory.net

is economics partisan? is all of social science just wrong?

Last week, fivethirtyeight.com put up a piece titled, “Economists Aren’t As Nonpartisan As We Think.” Beyond the slightly odd title (do we think they’re nonpartisan? should we expect them to be?), it’s an interesting write-up of a new working paper by Zubin Jelveh, Bruce Kogut, and Suresh Naidu. It went up a week ago, but since it gives me a chance to write about three of my favorite things, I thought it was still worth a post.

Favorite thing 1: Economists

The research started by identifying the political positions of economists, using campaign contributions and signatures on political petitions. This suggested economists lean left, by about 60-40. Not surprising so far. Then Jelveh et al. used machine learning techniques to identify phrases in journal articles most closely associated with left or right positions. Some of these are not unexpected (“Keynesian economics” versus “laissez faire”), while others are less obvious (“supra note” is left, and “central bank” is right).
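
(If it helps to picture how the phrase part works, here is a minimal sketch in Python. This is emphatically not the authors' actual pipeline, and the texts, labels, and phrases below are made up; it just shows the general recipe: fit a classifier on phrases from articles, using each author's observed political side as the label, then read off which phrases carry the most left- or right-leaning weight.)

```python
# Toy sketch of phrase-based ideology prediction -- NOT the Jelveh et al. pipeline.
# Fit a classifier on bigrams, with each author's observed political side as the
# label, then rank phrases by the sign and size of their weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [  # fake stand-ins for article text
    "keynesian economics minimum wage labor supply public goods",
    "keynesian economics fiscal stimulus income inequality minimum wage",
    "laissez faire central bank independence tax incentives",
    "laissez faire monetary policy central bank credibility",
]
labels = [0, 0, 1, 1]  # 0 = left-leaning author, 1 = right-leaning author

vec = CountVectorizer(ngram_range=(2, 2))
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Most negative weights lean "left," most positive lean "right."
ranked = sorted(zip(clf.coef_[0], vec.get_feature_names_out()), key=lambda t: t[0])
print("most left-leaning phrases: ", [p for _, p in ranked[:3]])
print("most right-leaning phrases:", [p for _, p in ranked[-3:]])
```

With real data you would want thousands of articles, held-out validation, and so on; the point is just that the "partisan phrases" fall out of classifier weights rather than anyone's hand-coding.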

It turns out that political position is associated with subfield and institutional location:

For example, macroeconomists and financial economists are more right-leaning on average while labor economists tend to be left-leaning. Economists at business schools, no matter their specialty, lean conservative.

Again, not incredibly surprising, though interesting to see data documenting it.

The final major point is that political position (this time estimated from article phrases, not from actual political activities) is associated with research results, here measured by estimates of various elasticities and of the fiscal multiplier, which in turn imply particular political positions. So, for example, if the labor supply elasticity is high, then raising income taxes will cause people to work less. Economists who used right-leaning phrases were more likely to produce estimates that implied right-leaning political positions, and vice versa.

So. That was all interesting to read about, but none of it was (despite the headline) incredibly shocking. But the article then leads into my next favorite topic.

Favorite thing 2: How social science is translated into policy recommendations.

Thus far, a perfectly reasonable piece of social science. The authors are careful, even in the 538 version, to avoid implying that folks are deliberately messing with results, arguing instead that economists probably self-select into research areas and methodological strategies that align with their political preferences. Fair enough. But, this being 538, they need a policy takeaway, and here’s where things get weird. What shall we do with the finding that social science is not, in fact, devoid of politics?

Policymakers may need to “re-center” economists’ findings by adjusting for ideology. Take the area of tax rates for high earners. The average optimal tax rate reported by economists in our data is 41 percent. Using our model, we can also estimate that these economists as a group are slightly left of center. We can then figure out what optimal top tax rate a hypothetical centrist economist would report: 33 percent.
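
Mechanically, I take it the adjustment amounts to something like this (my reconstruction, not their actual model, and with made-up numbers): regress each economist's reported estimate on an ideology score and read off the fitted value at zero, the hypothetical centrist.

```python
# Toy version of the "re-center by ideology" idea -- my guess at the mechanics,
# with invented numbers. Regress reported optimal top tax rates on an ideology
# score (negative = left, positive = right) and predict the rate at ideology = 0.
import numpy as np

ideology = np.array([-1.2, -0.8, -0.5, -0.1, 0.3, 0.9])    # fake ideology scores
tax_rate = np.array([55.0, 48.0, 45.0, 40.0, 33.0, 25.0])  # fake reported rates (%)

slope, intercept = np.polyfit(ideology, tax_rate, 1)

print("raw average of reported rates:", tax_rate.mean())
print("'re-centered' rate at ideology = 0:", round(intercept, 1))
```

Because this fake group leans left on average, the re-centered number comes out below the raw average; the analogous move is what takes their 41 percent down to 33 percent.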

Huh. So rather than keep trying to figure out what the economically optimal tax rate for high earners is — setting aside, for a moment, all the other stuff besides economics one might be concerned with re high earners and tax rates — one should just estimate the centrist position and go with that? Because “political middle” = “most likely to be right”? That strikes me as a little bizarre.

Favorite thing 3: How much of social science is probably full of mistakes.

Finally, when I first looked at the post and the chart highlighted in it, my reaction was, hmm, that doesn’t seem like much of a relationship at all. When I returned to it yesterday, the main chart had been updated. The original one was incorrect, “missing about 40 percent of its points and ma[king] the relationship it showed look weaker than it actually is.” Mother Jones, responding to the original chart, had a reaction similar to mine, titling its piece “Economists Are Almost Inhumanly Impartial.” You can compare the two charts below:

(Wrong chart, still up at Mother Jones)

(Correct, updated chart, from 538)

Now, if anything, we should feel good that this error was caught and corrected. Anyone can make a mistake, particularly in a blog post, and if mistakes get fixed, things are working as they should.

But it reminded me of another post I read this week, one that’s a little more discouraging. The Quarterly Journal of Political Science has been requiring for a decade that published papers include a “replication package” of data and code. It does a basic internal review of the replication package to make sure it produces the results in the paper. Here’s how that’s gone:

Of the 24 empirical papers subject to in-house replication review since September 2012, [1] only 4 packages required no modifications…Most troubling…13 (54 percent) had results in the paper that differed from those generated by the author’s own code. Some of these issues were relatively small — likely arising from rounding errors during transcription — but in other cases they involved incorrectly signed or mis-labeled regression coefficients, large errors in observation counts, and incorrect summary statistics. Frequently, these discrepancies required changes to full columns or tables of results.

Wow. That is really not good, though kudos to QJPS for trying. Still, coming back to the original article, it raises a different point: maybe the political leanings of social scientists are among the least of their (our) problems.

Written by epopp

December 15, 2014 at 2:11 pm

6 Responses


  1. Reblogged this on SocioTech'nowledge.


    Pedro Calado

    December 15, 2014 at 4:49 pm

  2. On the QJPS thing, a lot of that is just people not listing dependencies (e.g., “ssc install gllamm”), immaterial rounding error, and even more, ahem, detail-oriented things like PRNGs that work differently on different platforms. Basically, it’s really saying that getting code to replicate across machines/platforms is hard, but you’re conflating that with the very different thing of saying it’s wrong. Eubank never says exactly how many of the mistakes materially affect the results, only that there were 13/24 that had different results and that some but not all of these were material. I’m gonna hold off judgement on how much to care until I find out how many of these mistakes would materially affect how to interpret a paper.


    gabrielrossman

    December 15, 2014 at 8:58 pm

  3. Well, withholding judgment is fair enough, but the wording in the original (“some of these issues were relatively small…but in other cases…”) implies to me significant proportions of each (rounding errors etc. and more substantive issues). If it’s 5 cases of 24 in which the errors mean coefficients in the original paper have to be reversed or whole tables have to be changed, that’s a big problem. Maybe that’s not actually the case, but that’s what I read the post as implying.

    The other point here is that what QJPS is doing is still just looking at whether the paper’s results can be replicated from what’s submitted — there is still another whole layer of questions around whether they’re actually right. Given how many high-profile problems there have been in the last year around various results, from Reinhart & Rogoff to the whole p-hacking conversation, this is only one of many possible modes of failure.


    epopp

    December 15, 2014 at 9:24 pm

  4. Agreed that this is what the paper is implying, but given how nitpicky the post itself is about specifying operating system versions, etc., it’s pretty ironic that what you and I both see as the key issue is implied rather than clearly stated. I posted a comment there and will link to any reply in this thread.

    On p-hacking and that sort of thing, having well-documented working code won’t necessarily detect it, but it will make it easier for others to detect it ex post. So that’s actually a plus to this kind of thing compared to doing nothing.

    btw, to return to one of your earlier posts about Sociological Science being tacitly (but, I assure you, inadvertently) quant, this kind of thing also has biases in what kind of work it promotes. In particular, it will make it much easier to publish work based on experiments (which require simple code to interpret but are ridiculously vulnerable to various problems not detectable through code) but harder to publish work based on anything that is hard to document precisely (ctrl-F for “ArcGIS” or “simulation” to see what I mean) or that is based on proprietary and/or sensitive data (“it has been necessary for QJPS staff to be written in IRB authorizations”).


    gabrielrossman

    December 15, 2014 at 10:05 pm

  5. Eubank replied to my comment. http://thepoliticalmethodologist.com/2014/12/09/a-decade-of-replications-lessons-from-the-quarterly-journal-of-political-science/comment-page-1/#comment-3091
    Key points: “We do have many errors that are not third decimal place rounding errors, but whether they would influence a referee’s view is another question. I also want to emphasize that the argument being made here is not that most unreviewed work is wrong. Rather, the argument is primarily that simply requiring replication packages be released with published papers may not accomplish the aims of journal editors if those packages are not reviewed.”


    gabrielrossman

    December 16, 2014 at 3:32 am

  6. Nice, thanks. I like your point about what kind of work these practices incentivize/discourage. It’s worth thinking through more generally, especially if there is any real movement toward evidence-based social science (for lack of a better term).


    epopp

    December 16, 2014 at 5:11 am

