Archive for the ‘economics’ Category
A few weeks ago, economics columnist Noah Smith wrote a blog post about how economics should raid sociology. This raises interesting questions about how academic disciplines influence each other. In this case, why has sociology not been a good receptor for economics?
I start with an observation, which Smith also alludes to: Sociology has already been “raided” by economics with only moderate success. In contrast, economists have done very well raiding another discipline, political science. They have done fairly well in establishing pockets of influence in public policy programs and the law schools. By “success,” I do not mean publishing on sociological topics in economics journals. Rather, “success” means institutional success: economists should be routinely hired by sociology programs, economic theory should become a major feature of research in graduate programs, and sociological journals should mimic economics journals. All of these have happened in political science but not sociology.
Here’s my explanation – Sociology does not conform to the stereotype that economists and other outsiders have of the field. According to the stereotype, sociology is a primarily qualitative field that has no sense of how causal inference works. In some accounts, sociologists are a bunch of drooling Foucault worshipers who babble endlessly in post-modern jargon. Therefore, a more mathematical and statistical discipline should easily establish its imprint, much as economics is now strongly imprinted on political science.
The truth is that sociology is a mixed quantitative/qualitative field that prefers verbal theory so that it can easily discuss an absurdly wide range of phenomena. Just open up a few issues of the American Sociological Review, the American Journal of Sociology or Social Forces. The modal article is an analysis of some big N data set. You also see historical case studies and ethnographic field work.
It is also a field that has important traditions of causal identification, but does not obsess over them. For example, in my department alone, there are three faculty who do experiments in their research and one who published a paper on propensity scores. Some departments, like Cornell, specialize in social psychology, which is heavily experimental. There are sociologists who work with data from natural experiments (like Oxford’s Dave Kirk), propensity scores (like IU’s Weihua An), and IVs (I actually published one a while ago). The difference between economics and sociology is that we don’t reward people for clever identification strategies or dismiss observational data out of hand. We encourage identification when it makes sense. But if an argument can be made without it, that’s ok too.
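To make the contrast concrete, here is a minimal sketch (entirely made-up data, not from any study mentioned here) of the kind of adjustment that stratification and propensity-score methods formalize: a naive treated-vs-untreated comparison is confounded, while stratifying on the confounder recovers the true effect.

```python
# Toy illustration of why identification matters: a naive comparison is
# confounded, while stratifying on the confounder -- the simplest
# propensity-score-style adjustment -- recovers the true effect.
import random

random.seed(0)
true_effect = 2.0
rows = []
for _ in range(100_000):
    u = random.random() < 0.5          # confounder, ignored by the naive analyst
    p_treat = 0.8 if u else 0.2        # treatment assignment depends on u
    d = random.random() < p_treat
    y = true_effect * d + 3.0 * u + random.gauss(0, 1)
    rows.append((u, d, y))

def mean_y(rows, **cond):
    sel = [y for (u, d, y) in rows
           if all({'u': u, 'd': d}[k] == v for k, v in cond.items())]
    return sum(sel) / len(sel)

naive = mean_y(rows, d=True) - mean_y(rows, d=False)
# Stratify on u, then average the within-stratum differences (u is 50/50).
adjusted = 0.5 * sum(mean_y(rows, u=u, d=True) - mean_y(rows, u=u, d=False)
                     for u in (True, False))
print(f"naive: {naive:.2f}, stratified: {adjusted:.2f}")
```

The naive estimate lands near 3.8 rather than 2.0 because treated units are disproportionately drawn from the high-outcome stratum; the stratified estimate removes that bias.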
So when economists think about sociology as a non-quantitative field, they simply haven’t taken the time to immerse themselves in the field and understand how it’s put together. Thus, a lot of the arguments for “economic imperialism” fall flat. You have regression analysis? So does sociology. You have big N surveys? We run the General Social Survey. You have identification? We’ve been running experiments for decades. One time an economist friend said that sociology does not have journals about statistical methods. And I said, have you heard of Sociological Methodology or Sociological Methods & Research? He was making claims about a field that could easily be falsified with a brief Google search.
In my view, economics actually has one massive advantage over sociology, but economists have completely failed to sell it. They are very good at translating verbal models into mathematical models which then guide research. They fail to sell this to sociology for a few reasons.
First, economists seem to believe that the only model worth formalizing is the rational actor model. For better or worse, sociologists don’t like that model, and many assume that “formal models = rational actor models.” They fail to understand that math can be used to formalize and study any model, not just rational choice models.
Second, rather than focus on basic insights derived from simple models, economists fetishize the most sophisticated models.* They love to get into very hard stuff with limited applied value, and that turns people off.
Third, a lot of sociologists have math anxiety because they aren’t good at math or had bad teachers. So when economists look down at them and dismiss sociology as a whole, or qualitative methods in particular, you lose a lot of people. Instead of dismissing people, economists should think more about how field work, interviews, and historical case studies can be integrated with economic methods.
I am a big believer in the idea that we are all searching for the truth. I am also a big believer in the idea that the social sciences should be a conversation not a contest of ego. That means that sociologists should take basic economic insights seriously, but that also means that economists should turn down the rhetoric and be willing to explore other fields with a charitable and open mind.
* For example, I was once required to read papers about how to do equilibrium models in infinite-dimensional Banach spaces. Cool math? Sure. Connection to reality? Not so sure.
I just discovered an Economist article from last year showing that, once again, which college you go to is a lot less important than what you do at college. Using NCES data, PayScale estimated return on investment for students from selective and non-selective colleges. Then, they separated STEM majors from arts/humanities. Each dot represents a college major and its estimated rate of return:
Some obvious points:
- In the world of STEM, it really doesn’t matter where you go to school.
- High prestige arts majors do worse, probably because they go into low paying careers, like being a professional painter (e.g., a Yale drama grad will actually try Broadway, while others may not get that far). [Ed. Fabio read the graph backwards when he wrote this.]
- A fair number of arts/humanities majors have *negative* rates of return.
- None of the STEM majors have a negative rate of return.
- The big message – college matters less than major.
There is also a big message for people who care about schools and inequality. If you want minorities and women to have equal pay, one of the very first things to do is to get more of them into STEM fields. All other policies are small in comparison.
Marko Grdesic wrote an interesting post on why modern economists don’t read Polanyi. He surveyed economists at top programs and discovered that only 3% had read Polanyi. I am not shocked. This post explains why.
For a while, I taught an undergrad survey course in sociology with an economic sociology focus. The goal was to teach sociology in a way that would interest undergraduate business and policy students. I often taught a module that might be called “capitalism’s defenders and critics.” On defense, we had Smith and Hayek. On offense, we had Marx and Polanyi.
And, my gawd, it was painful. Polanyi is a poor writer, even compared to windbags like Hayek and Marx. The basic point of the whole text is hard to discern other than, maybe, “capitalism didn’t develop the way you think” or “people change.” It was easily the text that people understood the least and none of the students got the point. Nick Rowe wrote the following comment:
35 years ago (while an economics PhD student) I tried to read Great Transformation. I’m pretty sure I didn’t finish it. I remember it being long and waffly and unclear. If you asked me what it was about, I would say: “In the olden days, people did things for traditional reasons (whatever that means). Then capitalism and markets came along, and people changed to become rational utility maximisers. Something like that.”
Yup. Something like that. Later, I decided that the Great Transformation is a classic case of “the wiki is better than the book.” We should not expect readers to genuflect in front of fat, baggy books. We are no longer in the world of the 19th century master scholars. If you can’t get your point across, then we can move on.
Over at Econ Talk, Russ Roberts interviews James Heckman about censored data and other statistical issues. At one point, Roberts asks Heckman what he thinks of the current identification fad in economics (my phrasing). Heckman has a few insightful responses. One is that a lot of the “new methods” – experiments, instrumental variables, etc. – are not new at all. Also, experiments need to be done with care and the results need to be properly contextualized. A lot of economists and identification-obsessed folks think that “the facts speak for themselves.” Not true. Supposedly clean experiments can be understood in the wrong way.
For me, the most interesting section of the interview is when Heckman makes a distinction between statistics and econometrics. Here’s his example:
- Identification – statistics, not economics. The point of identification is to ensure that your correlation is not attributable to an unobserved variable. This is either a mathematical point (IV) or a feature of research design (RCT). There is nothing distinctively economic about identification, in the sense that you do not need to understand human decision making in order to carry it out.
In contrast, he thought that “real” econometrics was about using economics to guide statistical modelling, or using statistical modelling to plausibly tell us how economic principles play out in real world situations. This, I think, is the spirit of structural econometrics, which demands that the researcher define the economic relation between variables and use that as a constraint in statistical estimation. Heckman and Roberts discuss minimum wage studies, where the statistical point is clear (raising the minimum wage does not always reduce employment) but the economic point still needs to be teased out (moderate wage increases can be offset by firms in other ways) using theory and knowledge of labor markets.
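The distinction Heckman draws can be seen in a toy simulation (hypothetical numbers, not from any real study): the instrumental-variables calculation is purely statistical machinery, with no economics in it. With an unobserved confounder, OLS is biased, while the IV estimate recovers the true coefficient.

```python
# Toy IV simulation: OLS is biased by an unobserved confounder u,
# while the instrument z (which shifts x but not y directly) identifies
# the true coefficient.
import random

random.seed(1)
beta = 2.0
z, x, y = [], [], []
for _ in range(100_000):
    zi = random.gauss(0, 1)            # instrument
    ui = random.gauss(0, 1)            # unobserved confounder
    xi = zi + ui + random.gauss(0, 1)
    yi = beta * xi + ui + random.gauss(0, 1)
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)       # biased: picks up cov(x, u)
beta_iv = cov(z, y) / cov(z, x)        # consistent under the exclusion restriction
print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}")
```

The economics only enters when you argue *why* the instrument satisfies the exclusion restriction, which is exactly the kind of substantive reasoning Heckman says identification alone cannot supply.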
The deeper point I took away from the exchange is that long term progress in knowledge is not generated by a single method, but rather through careful data collection and knowledge of social context. The academic profession may reward clever identification strategies and they are useful, but that can lead to bizarre papers when the authors shift from economic thinking to an obsession with unobserved variables.
On Twitter, Michigan higher ed prof Julie Posselt compares the quantitative GRE scores for various social science disciplines. Take home message: the social sciences recruit comparable students, but economics recruits have stronger math skills. Take home point #2: there is still a lot of overlap. The bottom third of econ overlaps with the other social sciences. This probably reflects the fact that the intensely mathematical approach to econ is a phenomenon of the strongest programs, which attract recruits with physical science degrees and have pushed the more traditional economics student to the bottom of the distribution.
Remember acid rain? For me, it’s one of those vague menaces of childhood, slightly scarier than the gypsy moths that were eating their way across western Pennsylvania but not as bad as the nuclear bombs I expected to fall from the sky at any moment. The 1980s were a great time to be a kid.
The gypsy moths are under control now, and I don’t think my own kids have ever given two thoughts to the possibility of imminent nuclear holocaust. And you don’t hear much about acid rain these days, either.
In the case of acid rain, that’s because we actually fixed it. That’s right: we got together and came up with a way to solve a complex and challenging environmental problem. And the Acid Rain Program, passed as part of the Clean Air Act Amendments of 1990, has long been the shining example of how to use emissions trading to reduce pollution successfully and efficiently, and has served as an international model for how such programs might be structured.
The idea behind emissions trading is that some regulatory body decides the total emissions level that is acceptable, finds a way to allocate to polluters the rights to emit some fraction of that total acceptable level, and then allows them to trade those rights with one another. Polluters for whom it is costly to reduce emissions will buy permits from those who can reduce emissions more cheaply. This meets the required emissions level more efficiently than if everyone were simply required to cut emissions to some specified level.
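The efficiency argument is easy to see in a toy sketch (made-up firms and costs): trading shifts abatement to the firms that can cut most cheaply, meeting the same cap at lower total cost than a uniform mandate.

```python
# Toy cap-and-trade vs. uniform-mandate comparison.
# Firms: (baseline emissions, constant marginal abatement cost per ton).
firms = [(100, 5.0), (100, 20.0), (100, 50.0)]
cap = 180                                    # total allowed emissions
required = sum(e for e, _ in firms) - cap    # 120 tons must be abated

# Uniform mandate: every firm cuts the same share of its baseline.
share = required / sum(e for e, _ in firms)
uniform_cost = sum(e * share * c for e, c in firms)

# Trading outcome: abatement migrates to the cheapest firms first,
# which is what permit trading achieves in equilibrium.
trade_cost, left = 0.0, required
for e, c in sorted(firms, key=lambda f: f[1]):
    cut = min(e, left)
    trade_cost += cut * c
    left -= cut

print(f"uniform: ${uniform_cost:.0f}, trading: ${trade_cost:.0f}")
```

Both schemes remove the same 120 tons, but trading does it for $900 instead of $3000 because the $5/ton firm does most of the cutting.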
While there have clearly been highly successful examples of such cap-and-trade systems, they have also had their critics. Some of these focus on political viability. The European Emissions Trading System, meant to limit CO2 emissions, issued too many permits—always politically tempting—which has made the system fairly worthless for forcing reductions in emissions.
Others emphasize distributional effects. The whole point of trading is to reduce emissions in places where it is cheap to do so rather than in those where it’s more expensive. But given similar technological costs, a firm may prefer to clean up pollutants in a well-off area with significant political voice rather than a poor, disenfranchised minority neighborhood. Geography has the potential to make the efficient solution particularly inequitable.
These distributional critiques frequently come from outside economics, particularly (though not only) from the environmental justice movement. But in the case of the Acid Rain Program, until now no one has shown strong distributional effects. This study found that SO2 was not being concentrated in poor or minority neighborhoods, and this one (h/t Neal Caren) actually found lower emissions in Black and Hispanic neighborhoods, though higher emissions in poorly educated ones.
A recent NBER paper, however, challenges the distributional neutrality of the Acid Rain Program (h/t Dan Hirschman)—but here, it is residents of the Northeast who bear the brunt, rather than poor or minority neighborhoods. It is cheaper, it turns out, to reduce SO2 emissions in the sparsely populated western United States than in the densely populated East. So, as intended, more reductions were made in the West, and fewer in the East.
The problem is that the population is a lot denser in the Northeastern U.S. So while national emissions decreased, more people were exposed to relatively high levels of SO2 and therefore more people died prematurely than would have been the case with the inefficient solution of just mandating an equivalent across-the-board reduction in SO2 levels.
To state it more sharply, while the trading built into the Acid Rain Program saved money, it also killed people, because improvements were mostly made in low-population areas.
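The trade-off can be made concrete with hypothetical numbers: the same national cut can raise population-weighted exposure if the reductions land where few people live.

```python
# Hypothetical two-region illustration: identical national emissions cuts,
# very different population-weighted exposure.
# Regions: population (millions) and baseline SO2 emissions (index units).
west = {"pop": 5, "so2": 100}
east = {"pop": 50, "so2": 100}

def exposure(west_so2, east_so2):
    # Crude person-weighted exposure index: population * local emissions.
    return west["pop"] * west_so2 + east["pop"] * east_so2

# Across-the-board mandate: each region ends up at 50 units.
uniform = exposure(50, 50)
# Cost-efficient trading outcome: the cheap, sparsely populated West cuts
# to 10 units while the expensive, dense East only cuts to 90. The national
# total (100 units) is identical in both scenarios.
traded = exposure(10, 90)
print(uniform, traded)
```

In this sketch the efficient allocation leaves the same national total but substantially more person-exposure, which is the mechanism the NBER paper identifies.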
This is fairly disappointing news. It also points to what I see as the biggest issue in the cap-and-trade vs. pollution tax debate—that so much depends on precisely how such markets are structured, and if you don’t get the details exactly right (and really, when are the details ever exactly right?), you may either fail to solve the problem you intended to, or create a new one worse than the one you fixed.
Of course pollution taxes are not exempt from political difficulties or unintended consequences either. And as Carl Gershenson pointed out on Twitter, a global, not local, pollutant like CO2 wouldn’t have quite the same set of issues as SO2. And the need to reduce carbon emissions is so serious that honestly I’d get behind any politically viable effort to cut them. But this does seem like one more thumb on the “carbon tax, not cap-and-trade” side of the scale.
Ever since the publication of Piketty’s Capital in the 21st Century, there’s been a lot of debate about the theory and empirical work. One strand of the discussion focuses on how Piketty handles the data. A number of critics have argued that the main results are sensitive to choices made in the data analysis (e.g., see this working paper). The trends in inequality reported by Piketty are amplified by how he handles the data.
Perhaps the strongest criticism in this vein is made by UC Riverside’s Richard Sutch, who has a working paper claiming that some of Piketty’s major empirical points are simply unreliable. The abstract:
Here I examine only Piketty’s U.S. data for the period 1810 to 2010 for the top ten percent and the top one percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top ten percent for the period 1870-1970 are unreliable. The values he reported are manufactured from the observations for the top one percent inflated by a constant 36 percentage points. Piketty’s data for the top one percent of the distribution for the nineteenth century (1810-1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only very tenuous information about trends in inequality during the Gilded Age. The values Piketty reported for the twentieth-century (1910-2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. The reversal of the decline in inequality during the 1960s and 1970s and subsequent sharp rise in the 1980s is hidden by a fifteen-year straight-line interpolation. This neglect of the shorter-run changes is unfortunate because it makes it difficult to discern the impact of policy changes (income and estate tax rates) and shifts in the structure and performance of the economy (depression, inflation, executive compensation) on changes in wealth inequality.
From inside the working paper, an attempt to replicate Piketty’s estimate of intergenerational wealth transfer among the wealthy:
The first available data point based on an SCF survey is for 1962. As reported by Wolff the top one percent of the wealth distribution held 33.4 percent of total wealth that year [Wolff 1994: Table 4, 153; and Wolff 2014: Table 2, 50]. Without explanation Piketty adjusted this downward to 31.4 by subtracting 2 percentage points. Piketty’s adjusted number is represented by the cross plotted for 1962 in Figure 1. Chris Giles, a reporter for the Financial Times, described this procedure as “seemingly arbitrary” [Giles 2014].9 In a follow-up response to Giles, Piketty failed to explain this adjustment [Piketty 2014c “Addendum”].
There is a bit of a mystery as to where the 1.2 and 1.25 multipliers used to adjust the Kopczuk-Saez estimates upward came from. The spreadsheet that generated the data (TS10.1DetailsUS) suggests that Piketty was influenced in this choice by the inflation factor that would be required to bring the solid line up to reach his adjusted SCF estimate for 1962. Piketty did not explain why the adjustment multiplier jumps from 1.2 to 1.25 in 1930.
This comes up quite a bit, according to Sutch. There is reasonable data and then Piketty makes adjustments that are odd or simply unexplained. It is also important to note that Sutch is not trying to make inequality in the data go away. He notes that Piketty is likely under-reporting early 20th century inequality while over-reporting the more recent increase in inequality.
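As Sutch describes them, the adjustments in question reduce to very simple arithmetic. Here is a hypothetical reconstruction of the two he questions, with illustrative numbers only (not Piketty's actual series), and with the 1930 cutoff being one possible reading of "jumps from 1.2 to 1.25 in 1930":

```python
# Hypothetical reconstruction, following Sutch's description, of the two
# adjustments he flags. Numbers are illustrative, not Piketty's data.

def top10_from_top1(top1_share):
    # Sutch: the 1870-1970 top-10% series is the top-1% series
    # inflated by a constant 36 percentage points.
    return top1_share + 36.0

def adjust_kopczuk_saez(ks_share, year):
    # Sutch: Kopczuk-Saez estimates are multiplied by 1.2, jumping to
    # 1.25 in 1930, with the jump itself unexplained.
    return ks_share * (1.25 if year >= 1930 else 1.2)

print(top10_from_top1(40.0))            # a 40% top-1% share becomes 76%
print(adjust_kopczuk_saez(30.0, 1920))
print(adjust_kopczuk_saez(30.0, 1940))
```

Seeing the adjustments this starkly is what drives Sutch's complaint: a constant additive or multiplicative correction builds the reported trend partly out of the correction itself.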
A lot of Piketty’s argument comes from international comparisons and longitudinal studies with historical data. I have a lot of sympathy for Piketty. Data is imperfect, collected irregularly, and prone to error. So I am slow to criticize. Still, given that Piketty’s theory is now one of the major contenders in the study of global inequality, we want the answer to be robust.