Archive for the ‘economics’ Category
I’m working on a paper about the regulatory reform movement of the 1970s. If you’ve read anything at all about regulation, even the newspaper, you’ve probably heard the term “command and control”.
“Command and control” is a common description for government regulation that prescribes what some actor should do. So, for example, the CAFE standards say that the cars produced by car manufacturers must, on average, have a certain level of fuel efficiency. Or the EPA’s air quality standards say that ozone levels cannot exceed a certain number of parts per billion. Or such regulations may simply forbid some things, like the use of asbestos in many types of products.
This is typically contrasted with incentive-based regulation, or market-based regulation, which doesn’t set an absolute standard but imposes a cost on an undesirable behavior, like carbon taxes, or provides some kind of reward for good (usually meaning efficient) performance, as utility regulators often do.
The phrase “command and control” is commonly used in the academic literature, where it is not explicitly pejorative. Yet it’s kind of a loaded term. Who wants to be “commanded” and “controlled”?
So as I started working on this paper, I became more and more curious about the phrase, which only seemed to date back to the late 1970s, as the deregulatory movement really got rolling. Before that, it was a military term.
To the extent that I had thought about it at all, I assumed it was a clever framing coined by some group like the American Enterprise Institute that wanted to draw attention to regulation as a form of government overreach.
So I asked Susana Muñiz Moreno, a terrific graduate student working on policy expertise in Mexico, to look into it. She found newspaper references starting in 1977, when the New York Times reported CEA chair Charles Schultze’s argument that “the current ‘command-and-control’ approach to social goals, which establishes specific standards to be met and polices compliance with each standard, is not only inefficient ‘but productive of far more intrusive government than is necessary.’”
And sure enough, Schultze’s influential book of that year, The Public Use of Private Interest, uses the phrase a number of times. Which makes sense, as Schultze was instrumental in advancing regulatory reform and plays a key role in my story. But he’s clearly not the AEI type I would have imagined coining such a phrase—before becoming Carter’s CEA chair Schultze was at Brookings, and before that he was LBJ’s budget director.
Nevertheless, given Schultze’s influence and the lack of earlier media use of the term, I figured he probably came up with it and it took off from there.
But I started poking around Google Scholar, mostly because I wondered if some more small-government-oriented reformer of regulation had been using it prior to Schultze. I thought James C. Miller III might be a possibility.
I didn’t find any early uses of the term from Miller, but you know what I did find? An obscure book chapter called “Command and Control” written by Thomas Schelling in 1974.
Sociologists probably best know Schelling from his 1978 book, Micromotives and Macrobehavior, and its tipping point model, which shows how the decisions of agents who prefer that even a relatively small proportion of their neighbors be like them (read: of the same race) can quickly lead to a highly segregated space. Its insights are regularly referenced in the literature on neighborhoods and segregation.
(If you haven’t seen it, you should totally check out this brilliant visualization of the model, “Parable of the Polygons”.)
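For readers who want to see the mechanics, the model is simple enough to sketch in a few dozen lines of Python. Everything below is my own toy setup, not Schelling’s original parameters: a 20×20 grid, two colors, 10% empty cells, and agents who move to a random empty cell whenever fewer than 30% of their occupied neighbors share their color.

```python
import random

def neighbors(grid, n, r, c):
    """Return the occupants of the (up to 8) surrounding cells."""
    found = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] is not None:
                found.append(grid[rr][cc])
    return found

def unhappy(grid, n, r, c, threshold):
    """An agent is unhappy if the share of like neighbors falls below threshold."""
    nbrs = neighbors(grid, n, r, c)
    if not nbrs:
        return False
    like = sum(1 for x in nbrs if x == grid[r][c])
    return like / len(nbrs) < threshold

def step(grid, n, threshold, rng):
    """Move every unhappy agent to a random empty cell; return how many moved."""
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    movers = [(r, c) for r in range(n) for c in range(n)
              if grid[r][c] is not None and unhappy(grid, n, r, c, threshold)]
    moved = 0
    for r, c in movers:
        if not empties:
            break
        i = rng.randrange(len(empties))
        er, ec = empties[i]
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties[i] = (r, c)  # the vacated cell becomes available
        moved += 1
    return moved

def mean_like_share(grid, n):
    """Average share of like-colored neighbors across all agents: a crude segregation index."""
    shares = []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is None:
                continue
            nbrs = neighbors(grid, n, r, c)
            if nbrs:
                shares.append(sum(1 for x in nbrs if x == grid[r][c]) / len(nbrs))
    return sum(shares) / len(shares)

def simulate(n=20, empty_frac=0.1, threshold=0.3, steps=50, seed=0):
    """Run the toy model and return (initial, final) mean like-neighbor share."""
    rng = random.Random(seed)
    cells = [None if rng.random() < empty_frac else rng.choice(["red", "blue"])
             for _ in range(n * n)]
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    before = mean_like_share(grid, n)
    for _ in range(steps):
        if step(grid, n, threshold, rng) == 0:  # everyone is content; converged
            break
    return before, mean_like_share(grid, n)
```

The punchline of the model shows up even in this sketch: agents who would happily live in a 70%-different neighborhood still, through cascades of individually mild moves, end up far more clustered than any of them demanded.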
Economists know him for a broader range of game theoretic work on decision-making and strategy—work that was recognized in 2005 with a Nobel Prize.
Anyway, I just checked out the chapter—and I’m pretty sure this is the original source. Like most of Schelling’s work, it’s written in crystal-clear prose. The chapter itself is only secondarily about government regulation; it’s in an edited book about the social responsibility of the corporation. It hasn’t been cited often—29 times on Google Scholar, often in the context of business ethics.
Schelling muses on the difficulty of enforcing some behavioral change—like making taxi passengers fasten their seat belts—even for the head of a firm, and considers how organizations try to accomplish such goals: for example, by supporting government requirements that might be more effective than their own policing efforts.
It’s a wandering but fascinating reflection, with a Carnegie-School feel to it. And the “command and control” of the title doesn’t refer to government regulation, but to the difficulties faced by organizational leaders who are trying to command and control.
In fact, if I didn’t know the context I’d think this was a completely coincidental use of the phrase. But the volume, Social Responsibility and the Business Predicament, is part of the same Brookings series, “Studies in the Regulation of Economic Activity,” that published Schultze’s lectures in 1977, and which catalyzed a network of economists studying regulation in the early 1970s.
So while Schultze adapts the phrase for his own needs, and it’s possible that he could have borrowed the military phrase directly, my strong hunch is that he is lifting it from Schelling. Which actually fits my larger story—which highlights how the deregulatory movement built on the work of McNamara’s whiz kids from RAND, a community Schelling was an integral part of—quite well.
I can’t resist ending with one other contribution Schelling made to the use of economics in policy beyond his strategy work: the 1968 essay, “The Life You Save May Be Your Own.” (He was good with titles—this one was borrowed from a Flannery O’Connor story.) This introduced the willingness-to-pay concept as a way to value life—the idea that one could calculate how much people valued their own lives based on how much they had to be paid in order to accept very small risks of death. Controversial at the time, the proposal eventually became the main method policymakers used to place a monetary value on life.
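The willingness-to-pay arithmetic is straightforward. With invented, illustrative numbers (not figures from Schelling’s essay): if workers demand an extra $500 a year to accept an added 1-in-10,000 annual risk of death, the implied value of a statistical life is the payment divided by the risk.

```python
# Invented, illustrative numbers — not from Schelling's essay.
wage_premium = 500        # extra dollars per year workers demand
added_risk = 1 / 10_000   # added annual probability of death

# Value of a statistical life: payment per unit of risk accepted.
value_of_statistical_life = wage_premium / added_risk
print(f"${value_of_statistical_life:,.0f}")  # roughly $5,000,000
```

This is exactly the kind of figure agencies like the EPA and DOT now use in cost-benefit analysis, which is why the essay matters to the policy story.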
Thomas Schelling. He really got around.
I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”
But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.
The knowledge I have in mind is the student-unit record. See? I told you it was boring.
The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.
There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.
There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.
But there’s currently no comprehensive federal student-unit record system. In fact, creating one is illegal at the moment: it was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.
Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.
In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.
But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.
So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.
First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.
But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.
Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”
Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.
Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.
Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.
Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them find work they find meaningful, in workplaces where they are treated with respect. And yes, it may be selection effects — or maybe just that they’re richer — but college graduates are happier and healthier than nongraduates.
The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.
What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.
Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.
But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.
Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. It might provide some insights for us to think about. But using it to conclude that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (at the top of Brookings’ list) provide the most value — rather than that they select students who are distinctive in ways that aren’t captured by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.
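A toy simulation, on entirely synthetic data, shows why I’m skeptical: when the control variable (here a made-up SAT score) is only a noisy proxy for the unobserved ability that drives both selection and earnings, a value-added-style adjustment shrinks the bias but still overstates the college’s true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Purely synthetic data: unobserved ability drives both selective-college
# attendance and earnings, so the raw college "effect" is contaminated by selection.
ability = rng.normal(size=n)
selective = (ability + rng.normal(size=n) > 0.5).astype(float)
sat = 1000 + 100 * ability + rng.normal(scale=50, size=n)  # noisy observed proxy
true_effect = 2.0
earnings = 50 + true_effect * selective + 8 * ability + rng.normal(size=n)

def ols_coef(y, regressors):
    """OLS via least squares; returns the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

raw = ols_coef(earnings, [selective])            # no controls: badly inflated
adjusted = ols_coef(earnings, [selective, sat])  # "value-added" style control

# The adjustment helps (adjusted < raw), but because SAT measures ability with
# error, residual selection bias remains and adjusted still exceeds true_effect.
```

The ordering raw > adjusted > true_effect is the whole point: adjusting on observables looks reassuring while quietly leaving the selection problem in place.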
So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.
If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.
A few weeks ago, economics columnist Noah Smith wrote a blog post about how economics should raid sociology. This raises interesting questions about how academic disciplines influence each other. In this case, why has sociology not been a good receptor for economics?
I start with an observation, which Smith also alludes to: sociology has already been “raided” by economics, with only moderate success. In contrast, economists have done very well raiding another discipline, political science. They have done fairly well in establishing pockets of influence in public policy programs and law schools. By “success,” I do not mean publishing on sociological topics in economics journals. Rather, “success” means institutional success: economists would be routinely hired by sociology programs, economic theory would become a major feature of research in graduate programs, and sociological journals would mimic economics journals. All of these have happened in political science but not in sociology.
Here’s my explanation – Sociology does not conform to the stereotype that economists and other outsiders have of the field. According to the stereotype, sociology is a primarily qualitative field that has no sense of how causal inference works. In some accounts, sociologists are a bunch of drooling Foucault worshipers who babble endlessly in post-modern jargon. Therefore, a more mathematical and statistical discipline should easily establish its imprint, much as economics is now strongly imprinted on political science.
The truth is that sociology is a mixed quantitative/qualitative field that prefers verbal theory so that it can easily discuss an absurdly wide range of phenomena. Just open up a few issues of the American Sociological Review, the American Journal of Sociology or Social Forces. The modal article is an analysis of some big N data set. You also see historical case studies and ethnographic field work.
It is also a field that has important traditions of causal identification, but does not obsess over them. For example, in my department alone, there are three faculty who do experiments in their research and one who published a paper on propensity scores. Some departments, like Cornell’s, specialize in social psychology, which is heavily experimental. There are sociologists who work with data from natural experiments (like Oxford’s Dave Kirk), propensity scores (like IU’s Weihua An), and IVs (I actually published one a while ago). The difference between economics and sociology is that we don’t reward people for clever identification strategies or dismiss observational data out of hand. We encourage identification when it makes sense. But if an argument can be made without it, that’s ok too.
So when economists think of sociology as a non-quantitative field, they simply haven’t taken the time to immerse themselves in it and understand how it’s put together. Thus, a lot of the arguments for “economic imperialism” fall flat. You have regression analysis? So does sociology. You have big N surveys? We run the General Social Survey. You have identification? We’ve been running experiments for decades. One time an economist friend said that sociology does not have journals about statistical methods. And I said, have you heard of Sociological Methodology or Sociological Methods & Research? He was making claims about a field that could easily be falsified with a brief Google search.
In my view, economics actually has one massive advantage over sociology, but economists have completely failed to sell it. They are very good at translating verbal models into mathematical models which then guide research. They have failed to sell this to sociology for a few reasons.
First, economists seem to believe that the only model worth formalizing is the rational actor model. For better or worse, sociologists don’t like it, and many have come to think “formal models = rational actor model.” What gets lost is that math can be used to formalize and study any model, not just rational choice models.
Second, rather than focus on basic insights derived from simple models, economists fetishize the most sophisticated models.* So economists love to get into some very hard stuff with limited applied value. That turns people off.
Third, a lot of sociologists have math anxiety because they aren’t good at math or had bad teachers. So when economists look down at them and dismiss sociology as a whole, or qualitative methods in particular, they lose a lot of people. Instead of dismissing people, economists should think more about how field work, interviews, and historical case studies can be integrated with economic methods.
I am a big believer in the idea that we are all searching for the truth. I am also a big believer in the idea that the social sciences should be a conversation, not a contest of egos. That means that sociologists should take basic economic insights seriously, but it also means that economists should turn down the rhetoric and be willing to explore other fields with a charitable and open mind.
* For example, I was once required to read papers about how to do equilibrium models in infinite dimensional Banach spaces. Cool math? Sure. Connection to reality? Not so sure.
I just discovered an Economist article from last year showing that, once again, which college you go to is a lot less important than what you do at college. Using NCES data, PayScale estimated return on investment for students from selective and non-selective colleges. Then, they separated STEM majors from arts/humanities. Each dot represents a college major and its estimated rate of return:
Some obvious points:
- In the world of STEM, it really doesn’t matter where you go to school.
- High prestige arts majors do worse, probably because they go into low paying careers, like being a professional painter (e.g., a Yale drama grad will actually try Broadway, while others may not get that far). [Ed. Fabio read the graph backwards when he wrote this.]
- A fair number of arts/humanities majors have *negative* rates of return.
- None of the STEM majors have a negative rate of return.
- The big message – college matters less than major.
There is also a big message for people who care about schools and inequality. If you want minorities and women to have equal pay, one of the very first things to do is to get more of them into STEM fields. All other policies are small in comparison.
Marko Grdesic wrote an interesting post on why modern economists don’t read Polanyi. He surveyed economists at top programs and discovered that only 3% had read Polanyi. I am not shocked. This post explains why.
For a while, I taught an undergrad survey course in sociology with an economic sociology focus. The goal was to teach sociology in a way that would interest undergraduate business and policy students. I often taught a module that might be called “capitalism’s defenders and critics.” On defense, we had Smith and Hayek. On offense, we had Marx and Polanyi.
And, my gawd, it was painful. Polanyi is a poor writer, even compared to windbags like Hayek and Marx. The basic point of the whole text is hard to discern other than, maybe, “capitalism didn’t develop the way you think” or “people change.” It was easily the text that people understood the least and none of the students got the point. Nick Rowe wrote the following comment:
35 years ago (while an economics PhD student) I tried to read Great Transformation. I’m pretty sure I didn’t finish it. I remember it being long and waffly and unclear. If you asked me what it was about, I would say: “In the olden days, people did things for traditional reasons (whatever that means). Then capitalism and markets came along, and people changed to become rational utility maximisers. Something like that.”
Yup. Something like that. Later, I decided that the Great Transformation is a classic case of “the wiki is better than the book.” We should not expect readers to genuflect in front of fat, baggy books. We are no longer in the world of the 19th century master scholars. If you can’t get your point across, then we can move on.
Over at Econ Talk, Russ Roberts interviews James Heckman about censored data and other statistical issues. At one point, Roberts asks Heckman what he thinks of the current identification fad in economics (my phrasing). Heckman has a few insightful responses. One is that a lot of the “new methods” – experiments, instrumental variables, etc. – are not new at all. Also, experiments need to be done with care and the results need to be properly contextualized. A lot of economists and identification-obsessed folks think that “the facts speak for themselves.” Not true. Supposedly clean experiments can be understood in the wrong way.
For me, the most interesting section of the interview is when Heckman makes a distinction between statistics and econometrics. Here’s his example:
- Identification – statistics, not economics. The point of identification is to ensure that your correlation is not attributable to an unobserved variable. This is either a mathematical point (IV) or a feature of research design (RCT). There is nothing economic about identification: you do not need to understand human decision making in order to carry it out.
In contrast, he thought that “real” econometrics was about using economics to guide statistical modelling, or using statistical modelling to plausibly tell us how economic principles play out in real world situations. This, I think, is the spirit of structural econometrics, which demands that the researcher define the economic relation between variables and use it as a constraint in statistical estimation. Heckman and Roberts discuss minimum wage studies, where the statistical point is clear (raising the minimum wage does not always reduce employment) but the economic point still needs to be teased out (moderate wage increases can be offset by firms in other ways) using theory and knowledge of labor markets.
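To make the identification-as-statistics point concrete, here is a toy sketch of my own (not an example from the interview): an instrument mechanically strips out omitted-variable bias, and nothing in the calculation requires economic content beyond the assumption that the instrument z affects wages only through schooling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic data: schooling is endogenous because unobserved ability raises
# both schooling and wages; z shifts schooling but enters wages only through it.
ability = rng.normal(size=n)
z = rng.normal(size=n)                            # the instrument
schooling = 12 + z + ability + rng.normal(size=n)
true_return = 0.5
wages = 10 + true_return * schooling + 2 * ability + rng.normal(size=n)

# OLS slope of wages on schooling: biased upward by omitted ability.
cov_sw = np.cov(schooling, wages)
ols = cov_sw[0, 1] / cov_sw[0, 0]

# IV (Wald estimator with one instrument): cov(z, wages) / cov(z, schooling).
iv = np.cov(z, wages)[0, 1] / np.cov(z, schooling)[0, 1]

# iv recovers roughly 0.5 while ols overshoots — pure statistics, no economics.
```

Heckman’s complaint, as I read it, is that this arithmetic is where many papers stop, when the economically interesting work only begins once you ask why the instrument moves schooling and for whom.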
The deeper point I took away from the exchange is that long term progress in knowledge is not generated by a single method, but rather through careful data collection and knowledge of social context. The academic profession may reward clever identification strategies and they are useful, but that can lead to bizarre papers when the authors shift from economic thinking to an obsession with unobserved variables.
On Twitter, Michigan higher ed prof Julie Posselt compares the quantitative GRE scores for various social science disciplines. Take home message: the social sciences recruit comparable students, but economics recruits stronger math skills. Take home point #2: there is still a lot of overlap; the bottom third of econ overlaps with the other social sciences. This probably reflects the fact that the extraordinarily mathematical approach to econ is a phenomenon of the strong programs, which attract people with degrees in the physical sciences and push the more traditional economics student toward the bottom of the distribution.