Archive for the ‘economics’ Category
Yesterday the New Republic wrote about how little attention has been paid to policy in the current election. In 2008, the network news programs devoted 220 minutes to policy; this year, it’s been a mere 32 minutes.
The piece goes on to bemoan the decline of the public-interest obligation broadcasters once held (and which still remains, in vestigial form) in exchange for their use of the airwaves, and to connect the dots between the gradual removal of those restrictions and the toxic media environment we find ourselves in today. While — I think appropriately — the article doesn’t overemphasize the causal effects, it does highlight a broader shift that was going on in the 1970s and is still echoing today.
The 1970s saw a wide, bipartisan embrace of the deregulatory spirit in many areas. The transportation industries — air, rail, trucking — were one chief target. Banking was in there. So was energy. More controversial, and less bipartisan, was the push for the removal of new social regulations—rules meant to improve the environment, health, and safety. But even when it came to social regulation, both sides believed in regulatory reform. (I’ve recently written about some of this history.)
Economists were one group that made a strong case for economic deregulation — the removal of price and entry barriers in industries like transportation, energy, and finance. (For the definitive account, see Martha Derthick and Paul Quirk’s 1985 book.) Their role in airline deregulation, led by the colorful Alfred “To me, they’re all just marginal costs with wings” Kahn, is probably best known. But economists also had something to say about the Federal Communications Commission.
Perhaps the most famous — certainly one of the earliest — critics of the FCC was Ronald Coase. Coase argued in 1959 that there was no good reason, technical or economic, for the government to own the airwaves, and made the case for auctioning off the radio spectrum. He was not at all impressed with the argument that licenses should be distributed according to the “public interest”, and emphasized not only the legal ambiguity of that standard, but the fact that the FCC’s decisions reflected “a degree of inconsistency which defies generalization.”
At the time, the idea of the airwaves as a public trust was so universally accepted that Coase’s views seemed quite radical, even to other economists. When, in 1962, he extended his argument into a 200-page RAND report, coauthored with Bill Meckling and Jora Minasian, RAND quashed it for being too incendiary. Later, recalling these events, Coase quoted an internal review of the paper: “I know of no country on the face of the globe—except for a few corrupt Latin American dictatorships—where the ‘sale’ of the spectrum could even be seriously proposed.”
By the early 1970s, though, a new consensus had emerged in economics around questions of regulation, and this consensus saw FCC demands that broadcasters behave in unprofitable ways not as acting in the “public interest,” but as a source of efficiency losses that should, at a minimum, be regarded skeptically. This aligned with increasingly loud arguments from outside of economics (as well as within) about regulatory capture, which implied that the “public interest” pursued by executive agencies would never be more than a sham, anyhow.
Eventually, this shift in mood led to a change in how the FCC regulated broadcasters. The public interest standard was loosened, and in 1981 the agency began to shift from using hearings to allocate spectrum licenses — in theory to the applicants that best served the public interest — to lotteries. In 1994, it moved another step closer to Coase’s prescription, beginning to auction off the licenses — a move that stimulated a great deal of research in auction theory as well as generating substantial revenue.
The “public interest” goal, which had initially been baked into the allocation process (however poorly it was pursued in practice) became increasingly marginalized. Or perhaps it was subsumed within the assumed public interest in encouraging efficient use of the spectrum. The process echoes the one that took place in antitrust policy, in which historically significant goals other than allocative efficiency — goals that often conflicted with efficiency and even with each other — were gradually defined as being simply beyond the scope of what could be considered. (Indeed, Coase’s criticism of the inconsistency of the FCC’s behavior sounds quite similar to Justice Stewart’s scathing critique of merger law, written around the same time: “the sole consistency I can find is that under Section 7 [the merger section of the Clayton Act], the Government always wins.”)
I don’t know enough about the history of the FCC to have an informed opinion on whether the public interest standard as it stood circa 1970 was redeemable or if the agency was irreparably captured. And I definitely don’t think the decline of that standard is the main explanation for the current media environment, which goes far beyond television.
But I do think that the demise of the idea that we should expect media to have obligations beyond profit — which is bound up with the ideal, if not the practice, of the public interest standard — is a big contributor. Individual journalists — that increasingly rare breed — may remain professionally committed to an ethical code and a sense of mission that isn’t primarily about sales. But at the corporate level, any such qualms were abandoned long ago, and the journalistic wall between “church and state” — editorial and advertising — continues to crumble.
What this means is that we get political news that is just horse race coverage, and endless examination of the ugliest aspects of politics — which, unsurprisingly, encourages more of the same. Actually expecting media to pursue the “public interest”, whether through regulatory means or professional commitment, may be unrealistically idealistic. But giving up on the concept entirely seems certain to take us further down the path in which objective lies merit just as much attention as truth.
Roger E. Farmer has a blog post on why economists should not use complexity theory. At first, I thought he was going to argue that complexity models have been disproven or that they rest on unreasonable assumptions. Instead, he simply says we don’t have enough data:
The obvious question that Buzz asked was: are economic systems like this? The answer is: we have no way of knowing given current data limitations. Physicists can generate potentially infinite amounts of data by experiment. Macroeconomists have a few hundred data points at most. In finance we have daily data and potentially very large data sets, but the evidence there is disappointing. It’s been a while since I looked at that literature, but as I recall, there is no evidence of low dimensional chaos in financial data.
Where does that leave non-linear theory and chaos theory in economics? Is the economic world chaotic? Perhaps. But there is currently not enough data to tell a low dimensional chaotic system apart from a linear model hit by random shocks. Until we have better data, Occam’s razor argues for the linear stochastic model.
If someone can write down a three equation model that describes economic data as well as the Lorenz equations describe physical systems: I’m all on board. But in the absence of experimental data, lots and lots of experimental data, how would we know if the theory was correct?
On one level, this is a fair point. Macroeconomics is notorious for having sparse data. We can’t re-run the US economy under different conditions a million times. We have quarterly unemployment rates and that’s it. On another level, this is an extremely lame criticism. One thing that we’ve learned is that we have access to all kinds of data. For example, could we have m-turkers participate in an online market a million times? Or could we mine eBay sales data? In other words, Farmer’s post doesn’t undermine the case for complexity. Rather, it suggests that we might search harder and build bigger tools. And, in the end, isn’t that how science progresses?
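For readers who haven’t met the Lorenz system Farmer mentions, here is a minimal sketch of why it matters for his argument. This is my illustration, not anything from his post; the step size, horizon, and perturbation are arbitrary choices. Three deterministic equations, two starting points differing by one part in a million:

```python
# The Lorenz system: three coupled ODEs with the classic parameters
# (sigma=10, rho=28, beta=8/3). Fully deterministic, yet trajectories
# started a hair apart end up in completely different places.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the system one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(start, steps):
    states = [start]
    for _ in range(steps):
        states.append(lorenz_step(states[-1]))
    return states

a = trajectory((1.0, 1.0, 1.0), 4000)
b = trajectory((1.000001, 1.0, 1.0), 4000)   # perturbed by one part in a million

# With only a short, noisy sample of one coordinate over time, this
# determinism would be very hard to tell apart from a linear model
# hit by random shocks -- which is exactly Farmer's data complaint.
gap = abs(a[-1][0] - b[-1][0])
print(f"x-coordinate gap after {4000 * 0.005:.0f} time units: {gap:.3f}")
```

The gap is on the order of the attractor itself, which is why distinguishing low-dimensional chaos from a stochastic linear model takes far more data than macroeconomics has.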
Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.
First, it may be the case that Professor Pager directly went to Sociological Science without trying another peer reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don’t need the lengthy and cumbersome developmental review system to get work out there.
Second, it may be the case that Professor Pager tried another journal, probably the ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question – what specifically was “wrong” with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can’t fault the paper for lacking a “framing,” or for its research design, which is simple and clean. One can’t critique the statistical technique, because it’s a simple comparison of means. One can’t critique the importance of the finding – the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.
Perhaps the only criticism I can come up with is a sort of “identification fundamentalism.” Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can’t infer anything from the correlation. That is bizarre, because it would render Becker’s thesis untestable. What experimental design would allow you to get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian – you collect high quality observational data and revise your beliefs accordingly. This criticism, if it was made, doesn’t hold up on reflection. I wonder what the grounds for rejection could possibly have been, aside from knee-jerk anti-rational choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.
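To make the Bayesian point concrete, here is a toy sketch. The counts are invented for illustration (they are not Pager’s numbers): put flat priors on the closure rates of discriminating and non-discriminating firms, and see how far hypothetical audit data shifts your beliefs.

```python
# Beta-binomial updating on two closure rates, starting from flat
# Beta(1, 1) priors. All counts below are hypothetical.
import random

random.seed(0)

# Hypothetical audit results: (firms observed, firms closed within 6 years)
discriminators = (40, 22)
non_discriminators = (60, 20)

def posterior_samples(n, k, draws=20000):
    """Draw from the Beta(1 + k, 1 + n - k) posterior for a closure rate."""
    return [random.betavariate(1 + k, 1 + n - k) for _ in range(draws)]

d = posterior_samples(*discriminators)
nd = posterior_samples(*non_discriminators)

# Posterior probability that discriminators close at a higher rate.
# No random assignment required: the data still move our beliefs.
p_higher = sum(x > y for x, y in zip(d, nd)) / len(d)
print(f"P(discriminators close more often | data) = {p_higher:.2f}")
```

With data like these, the posterior probability lands well above 0.9, which is the sense in which high quality observational data can do real inferential work even without an experiment.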
Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper “out there” or just got tired of the review process. Either way, three cheers for Pager and the Soc Sci Crew.
One of the most striking arguments of Gary Becker’s theory of discrimination is that there is a cost to racial discrimination. If you hire people based on personal taste rather than job skills, your competitors can hire these better workers and you operate at a disadvantage. I think the strong version of the argument isn’t right. Markets do not instantly weed out discriminators. But the weak version has a lot of merit. If you truly avoid workers based on race or gender, you are giving away a huge advantage to the competition.
Well, turns out that Becker was right, at least in one data set. Devah Pager has a new paper in Sociological Science showing that discrimination is indeed associated with lower firm performance:
Economic theory has long maintained that employers pay a price for engaging in racial discrimination. According to Gary Becker’s seminal work on this topic and the rich literature that followed, racial preferences unrelated to productivity are costly and, in a competitive market, should drive discriminatory employers out of business. Though a dominant theoretical proposition in the field of economics, this argument has never before been subjected to direct empirical scrutiny. This research pairs an experimental audit study of racial discrimination in employment with an employer database capturing information on establishment survival, examining the relationship between observed discrimination and firm longevity. Results suggest that employers who engage in hiring discrimination are less likely to remain in business six years later.
Commentary: I have always found it ironic that sociologists and non-economists have resisted the implications of taste based discrimination theory. If discrimination in markets is truly not based on performance or productivity, there must be *some* consequence. However, a lot of sociologists have a strong distrust of markets that draws their attention away from this rather simple implication of price theory. I don’t know the entire literature on taste based discrimination, but it’s good to see this appear.
Among higher ed policy folks, there’s a counter-conventional wisdom that there is no student loan crisis. For the most part (the story goes), student loans are a good investment that will increase future wages, and students could borrow quite a bit more before the value of the debt might be called into question. Indeed, some have argued that many students are too reluctant to borrow, and should take on more debt.
Just this month, two new pieces came out that reiterate this counter-narrative: a book by Urban Institute economist Sandy Baum, and a report by the Council of Economic Advisers. Yes, everyone agrees the system’s not perfect, and tweaks need to be made. (Susan Dynarski, for example, argues that repayment periods need to be longer.) Fundamentally, though, the system is sound. Or so goes the story.
What can we make of this disconnect between the conventional wisdom—that we are in the throes of a student loan crisis—and this counter-conventional story?
To understand it, it’s worth thinking about three different student loan crises. Or “crises”, depending on your sympathies.
First, there’s the student who has accrued six figures of debt for an undergraduate degree. Ideally, for media purposes, this is a degree in women’s studies, art history, or some other easily-dismissible field. The New York Times specialized in these for a while.
Since student loan debt is not bankruptable, these people really are kind of screwed, although income-based-repayment options have improved their situation somewhat. And they make for a dramatic story—as well as lots of moralizing in the comments.
Second, there’s the student who took on debt but didn’t finish a degree. These people often struggle, because their income doesn’t go up much, if at all. In fact, the highest default rates are among those who left school with the smallest debts (< $5000), presumably because they didn’t graduate.
These folks disproportionately attend for-profit institutions, whose degrees have less payoff anyhow, but even more importantly, have abysmal graduation rates. (Community colleges have low graduation rates too, but they’re a lot cheaper.) The debt-but-no-degree people are also kind of screwed, although again, income-based repayment plans can help them a lot, as would a bankruptcy option.
So we’ve got the crisis of people who borrow too much for a four-year degree, and the crisis of people who borrow a little, but don’t complete the degree, often because they’re attending a school whose entire business model is to sign up new students for the purpose of taking their loan money.
These are both problems—even “crises”—but they are solvable. For the first, cap federal loans (including PLUS access) for undergraduate degrees, and make all loans bankruptable, so private lenders are leerier of loaning large amounts to students.
For the second—well, I’d probably be comfortable eliminating aid to for-profits, but let’s say that’s beyond the political pale. Certainly we could place a lot more limitations on which institutions are eligible for federal aid, whether that’s tied to graduation rates, default rates, or some other measure. And, again, making student loans bankruptable would help people who really needed to get a fresh start.
Wait, so what’s the third crisis?
The thing is, these two “crises” may be devastating to individuals, but in societal terms aren’t that big. The six-figure-debt one really drives policy wonks crazy, because every student debt story in the last ten years has led with this person, but the percentage of students who finish four-year degrees with this many loans is very small. Like maybe a couple of percent of borrowers small.*
Proponents of the Counter-Conventional Wisdom (C-CW) take the second group—those who borrow but don’t finish a degree—more seriously. This group is often really hurting, despite having smaller loan balances. But they only make up perhaps 20% of borrowers, and since their balances are relatively low, an even smaller fraction of that $1.4 trillion student loan figure we hear so much about.**
The real question—the one that determines whether you think there’s a third crisis—is how you react to the other 75-80% of borrowers. The C-CW crowd looks at them and says, eh, no crisis. These folks come out with four-year degrees, $20,000 or $30,000 of government-issued student loan debt, will pay $300 a month or so for ten years, and then move on with their lives. We could argue about how much of a burden this is for them, but it’s clearly not a crisis in the same way it is for the NYU grad with $150k in loans, or for the Capella University dropout trying to pay back $7500 on $10 an hour.
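The back-of-the-envelope behind that “$300 a month” figure is just the standard amortization formula. The 4% interest rate below is my illustrative assumption, not a number from the post:

```python
# Amortized monthly payment on a ten-year student loan.

def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

pay = monthly_payment(30_000, 0.04, 10)
print(f"monthly payment: ${pay:.2f}")
```

At $30,000 and an assumed 4%, the payment comes out right around the $300-a-month figure in the text.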
This C-CW is based on the premise that 1) college is a human capital investment that is worth taking on debt for up to the expected economic payoff, 2) individual borrowing is a reasonable and appropriate way to finance this investment—indeed, more sensible than paying for the costs collectively—and 3) as long as debt is kept to a “manageable” level (as indicated by students not going into default and having access to forbearance when their income is low), then there’s no crisis.
Why this understates the problem
I take issue with this position, though, on at least three fronts.
1. “Typical” student debt is increasing.
Individual borrowing levels are still rising rapidly, and there’s no reason to think we’ve neared a max. A recent Washington Post editorial cited the CEA report as saying that “[t]he average undergraduate loan burden in 2015 was $17,900.” But that’s not what the average graduate holds. That’s what the average loan-holder holds, including those who have already been paying for a number of years. Estimates for the average 2016 graduate, by contrast, are considerably higher—in the $29,000 to $37,000 range—and growing. The fraction of all students who borrow also continues to increase.
College costs keep rising. State budgets are still under pressure. The penalties for not completing college keep increasing. We can only expect loan sizes to continue to go up. At what point does “reasonable borrowing” become “unreasonable burden”? And tweaks like expanding income-based repayment or extending the standard repayment period won’t bend the curve (to borrow from another debate)—if anything, they will enable the further expansion of lending.
2. We are all Capella now.
These debates often overlook the effects of federal aid policy on colleges as organizations, something I’ve written about elsewhere. (The exception is the attention given to the Bennett hypothesis, which suggests that colleges will simply turn federal aid into higher tuition prices.)
But that doesn’t mean organizational effects don’t exist. Continuing to shift the cost burden to individual students is going to accelerate the already intense pressure on public colleges in particular to recruit and retain students, because with students come tuition dollars.
The drive to attract students is already undermining a lot of traditional values in higher education. It encourages schools to spend money on marketing and branding, rather than education. It promotes a consumerist mindset among students who quite reasonably feel that they have become customers. It encourages schools to develop low-value degree programs simply to generate revenue, and recruit students into them regardless of whether the students will benefit.
The values that limit colleges from doing this kind of thing are what separate nonprofits from for-profits in the first place. If they go away, we all become Capella. And allowing “reasonable” lending to keep expanding moves us straight in that direction.
3. It gives up on actual public education.
Ultimately, though, the biggest problem I have with this position is that it gives up entirely on the possibility, and even the value, of real public higher education. It doesn’t matter whether that’s because the C-CW sees it as a pipe dream, or because it sees it as an irrational use of public funds, since individuals benefit personally from their education in the long run.
This post is already too long, so I won’t go into a detailed defense of public higher ed here. But I do want to point out that if you accept the premise of the C-CW—that student loan debt will only become a crisis if it increases individual costs beyond the returns to a college degree—you’ve already given up on public higher education. I’m not ready to do that.
And it looks like I’m not the only one.
* This number is actually surprisingly difficult to find. In 2008, it was only 0.2% of undergrad completers, but average debt for new graduates has increased about 40% since then, so the six-figure camp has undoubtedly grown.
** Again, exact numbers hard to pin down. 15% of beginning students who borrowed from the government in 2003-04 had not completed a degree six years later, nor were they still enrolled. This figure has doubtless increased as nontraditional borrowers—who are less likely to finish—have become a bigger fraction of the total pool of borrowers, hence my 20% guesstimate.
I’m working on a paper about the regulatory reform movement of the 1970s. If you’ve read anything at all about regulation, even the newspaper, you’ve probably heard the term “command and control”.
“Command and control” is a common description for government regulation that prescribes what some actor should do. So, for example, the CAFE standards say that the cars produced by car manufacturers must, on average, have a certain level of fuel efficiency. Or the EPA’s air quality standards say that ozone levels cannot exceed a certain number of parts per billion. Or such regulations may simply forbid some things, like the use of asbestos in many types of products.
This is typically contrasted with incentive-based regulation, or market-based regulation, which doesn’t set an absolute standard but imposes a cost on an undesirable behavior, like carbon taxes, or provides some kind of reward for good (usually meaning efficient) performance, as utility regulators often do.
The phrase “command and control” is commonly used in the academic literature, where it is not explicitly pejorative. Yet it’s kind of a loaded term. Who wants to be “commanded” and “controlled”?
So as I started working on this paper, I became more and more curious about the phrase, which only seemed to date back to the late 1970s, as the deregulatory movement really got rolling. Before that, it was a military term.
To the extent that I had thought about it at all, I assumed it was a clever framing coined by some group like the American Enterprise Institute that wanted to draw attention to regulation as a form of government overreach.
So I asked Susana Muñiz Moreno, a terrific graduate student working on policy expertise in Mexico, to look into it. She found newspaper references starting in 1977, when the New York Times references CEA chair Charles Schultze’s argument that “the current ‘command-and-control’ approach to social goals, which establishes specific standards to be met and polices compliance with each standard, is not only inefficient ‘but productive of far more intrusive government than is necessary.’”
And sure enough, Schultze’s influential book of that year, The Public Use of Private Interest, uses the phrase a number of times. Which makes sense, as Schultze was instrumental in advancing regulatory reform and plays a key role in my story. But he’s clearly not the AEI type I would have imagined coining such a phrase—before becoming Carter’s CEA chair Schultze was at Brookings, and before that he was LBJ’s budget director.
Nevertheless, given Schultze’s influence and the lack of earlier media use of the term, I figured he probably came up with it and it took off from there.
But I started poking around Google Scholar, mostly because I wondered if some more small-government-oriented reformer of regulation had been using it prior to Schultze. I thought James C. Miller III might be a possibility.
I didn’t find any early uses of the term from Miller, but you know what I did find? An obscure book chapter called “Command and Control” written by Thomas Schelling in 1974.
Sociologists probably best know Schelling from his 1978 book, Micromotives and Macrobehavior, and its tipping point model, which shows how the decisions of agents who prefer that even a relatively small proportion of their neighbors be like them (read: of the same race) can quickly lead to a highly segregated space. Its insights are regularly referenced in the literature on neighborhoods and segregation.
(If you haven’t seen it, you should totally check out this brilliant visualization of the model, “Parable of the Polygons”.)
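To see how mild preferences can produce stark sorting, here is a minimal one-dimensional sketch of the tipping model. Everything here (ring size, neighborhood, threshold, move rule) is an illustrative simplification of my own, not Schelling’s original checkerboard setup:

```python
# Agents of two types live on a ring. Each wants at least a third of
# its four nearest neighbors to be its own type -- a mild preference.
# Unhappy agents swap places with a randomly chosen agent.
import random

random.seed(42)

SIZE = 100
THRESHOLD = 1 / 3   # minimum acceptable fraction of same-type neighbors

def unhappy(grid, i):
    """Check the agent's 4 nearest neighbors on the ring."""
    neighbors = [grid[(i + d) % len(grid)] for d in (-2, -1, 1, 2)]
    same = sum(n == grid[i] for n in neighbors)
    return same / len(neighbors) < THRESHOLD

def same_type_adjacency(grid):
    """Fraction of adjacent pairs that match (1.0 = fully sorted)."""
    n = len(grid)
    return sum(grid[i] == grid[(i + 1) % n] for i in range(n)) / n

grid = [random.choice("AB") for _ in range(SIZE)]
counts_before = sorted((grid.count("A"), grid.count("B")))
before = same_type_adjacency(grid)

# Only mildly intolerant agents ever move, yet their moves keep
# disturbing mixed neighborhoods, so clustering tends to ratchet up.
for _ in range(20000):
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(grid, i):
        grid[i], grid[j] = grid[j], grid[i]

after = same_type_adjacency(grid)
print(f"same-type adjacency: {before:.2f} -> {after:.2f}")
```

The punchline of the full model is that the final configuration is typically far more segregated than any individual agent actually wanted.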
Economists know him for a broader range of game theoretic work on decision-making and strategy—work that was recognized in 2005 with a Nobel Prize.
Anyway, I just checked out the chapter—and I’m pretty sure this is the original source. Like most of Schelling’s work, it’s written in crystal-clear prose. The chapter itself is only secondarily about government regulation; it’s in an edited book about the social responsibility of the corporation. It hasn’t been cited often—29 times on Google Scholar, often in the context of business ethics.
Schelling muses on the difficulty of enforcing some behavioral change—like making taxi passengers fasten their seat belts—even for the head of a firm, and considers how organizations try to accomplish such goals: for example, by supporting government requirements that might be more effective than their own policing efforts.
It’s a wandering but fascinating reflection, with a Carnegie-School feel to it. And the “command and control” of the title doesn’t refer to government regulation, but to the difficulties faced by organizational leaders who are trying to command and control.
In fact, if I didn’t know the context I’d think this was a completely coincidental use of the phrase. But the volume, Social Responsibility and the Business Predicament, is part of the same Brookings series, “Studies in the Regulation of Economic Activity,” that published Schultze’s lectures in 1977, and which catalyzed a network of economists studying regulation in the early 1970s.
So while Schultze adapts the phrase for his own needs, and it’s possible that he could have borrowed the military phrase directly, my strong hunch is that he is lifting it from Schelling. Which actually fits my larger story—which highlights how the deregulatory movement built on the work of McNamara’s whiz kids from RAND, a community Schelling was an integral part of—quite well.
I can’t resist ending with one other contribution Schelling made to the use of economics in policy beyond his strategy work: the 1968 essay, “The Life You Save May Be Your Own.” (He was good with titles—this one was borrowed from a Flannery O’Connor story.) This introduced the willingness-to-pay concept as a way to value life—the idea that one could calculate how much people valued their own lives based on how much they had to be paid in order to accept very small risks of death. Controversial at the time, the proposal eventually became the main method policymakers used to place a monetary value on life.
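The willingness-to-pay calculation amounts to one line of arithmetic. The wage premium and risk level below are invented for illustration; the essay’s contribution is the method, not any particular number:

```python
# If workers demand extra pay to accept a small risk of death, the
# ratio implies how much they value their own lives. Both numbers
# here are hypothetical.

risk_of_death = 1 / 10_000   # small extra annual risk a worker accepts
wage_premium = 500           # hypothetical extra pay demanded for that risk

# $500 demanded for a 1-in-10,000 risk implies $500 / 0.0001 = $5 million.
value_of_statistical_life = wage_premium / risk_of_death
print(f"implied value of a statistical life: ${value_of_statistical_life:,.0f}")
```

This is the logic behind what regulators now call the “value of a statistical life.”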
Thomas Schelling. He really got around.
I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”
But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.
The knowledge I have in mind is the student-unit record. See? I told you it was boring.
The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.
There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.
There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.
But there’s currently no comprehensive federal student-unit record system. In fact, at the moment, creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.
Having student-unit records available would open up all kind of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.
In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.
But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.
So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.
First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.
But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.
Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”
Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.
Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.
Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.
Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, maybe it’s selection effects, or maybe it’s just because they’re richer, but college graduates are happier and healthier than nongraduates.
The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.
What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.
Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.
But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection (you were a better student going in, or from a better-off family) or signaling (you’re the kind of person who can make it through college; what you did there is really secondary).
Proposals to use income data to understand the effects of college assume that we can adjust for selection effects, at least, through something like a value-added model. And sure, that might provide some insights for us to think about. But using it as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (all near the top of Brookings’ list) provide the most value, rather than that they have select students who are distinctive in ways that aren’t reflected by adjusting for race, gender, age, financial aid status, and SAT scores, is a little ridiculous.
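To see why the value-added adjustment can mislead, here is a minimal sketch with made-up data (the schools, numbers, and the single-covariate regression are all hypothetical, chosen only to illustrate the logic). Earnings are regressed on an observed trait, and each school’s mean residual is read as its “value added”. In the simulation, neither school actually adds anything; selective School A just admits students with more of an unobserved trait that drives earnings.

```python
# Hypothetical illustration: a value-added model credits selection it
# cannot observe. All data here is simulated; no school adds any value.
import random

random.seed(0)

# Simulate students with an observed trait (SAT) and an unobserved one
# ("drive"). Selective School A admits high-drive students; earnings
# depend only on drive, never on which school a student attended.
students = []
for _ in range(5000):
    drive = random.gauss(0, 1)                        # unobserved by the researcher
    sat = 1000 + 100 * drive + random.gauss(0, 100)   # partially reflects drive
    school = "A" if drive > 0.5 else "B"
    earnings = 50_000 + 10_000 * drive + random.gauss(0, 5_000)
    students.append((school, sat, earnings))

# The "adjustment" step: ordinary least squares of earnings on SAT,
# fit by hand from the usual covariance/variance formulas.
n = len(students)
mean_sat = sum(s[1] for s in students) / n
mean_earn = sum(s[2] for s in students) / n
cov = sum((s[1] - mean_sat) * (s[2] - mean_earn) for s in students) / n
var = sum((s[1] - mean_sat) ** 2 for s in students) / n
slope = cov / var
intercept = mean_earn - slope * mean_sat

def value_added(school):
    """Mean residual earnings for a school: the 'value added' estimate."""
    resid = [s[2] - (intercept + slope * s[1])
             for s in students if s[0] == school]
    return sum(resid) / len(resid)

# School A's residual comes out large and positive even though its true
# effect is zero: SAT only partially captures drive, so the leftover
# selection is attributed to the school.
print(f"School A 'value added': {value_added('A'):+.0f}")
print(f"School B 'value added': {value_added('B'):+.0f}")
```

The point of the sketch is the gap between the covariate and the trait that actually matters: as long as the adjustment variables only partially capture who enrolls, the residual ranking rewards selectivity, not teaching.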
So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.
If there weren’t such clear potential negative consequences, I’d say sure, try it; it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.