Archive for the ‘policy’ Category

book spotlight: beyond technonationalism by kathryn ibata-arens

At SASE 2019 at the New School in NYC, I served as a critic in an author-meets-critics session for Vincent de Paul Professor of Political Science Kathryn Ibata-Arens’s latest book, Beyond Technonationalism: Biomedical Innovation and Entrepreneurship in Asia.


Here, I’ll share my critic’s comments in the hopes that you will consider reading or assigning this book and perhaps bringing the author, an organizations researcher and Asia studies specialist at DePaul, in for an invigorating talk!

“Ibata-Arens’s book demonstrates impressive mastery in its coverage of how four countries address a pressing policy question that concerns all nation-states, especially those with shifting markets and labor pools. With its four cases (Japan, China, India, and Singapore), Beyond Technonationalism: Biomedical Innovation and Entrepreneurship in Asia covers impressive scope in explicating the organizational dimensions and national governmental policies that promote – or inhibit – innovation and entrepreneurship in markets.

The book deftly compares cases with rich contextual details about nation-states’ policies and examples of ventures that have thrived under these policies. Throughout, the book offers cautionary stories detailing how innovation policies may be undercut by concurrent forces. Corruption, in particular, can suppress innovation. Espionage also makes an appearance, with China’s copying of Japan’s JR rail-line specs, which, according to an anonymous Japanese official, is considered in ill taste to mention openly in polite company. Openness to immigration and migration also impacts national capacity to build the tacit knowledge needed for entrepreneurial ventures. Finally, as many of us in the academy know intimately, demonstrating bureaucratic accountability can consume time and resources otherwise spent on productive research activities.

As always with projects of this breadth, choices must be made about what to amplify and highlight in the analysis. Perhaps because I am a sociologist, what could be developed more – perhaps in a related future project – are the consequences of nation-states and organizations permitting or feeding relational inequality mechanisms at the interpersonal, intra-organizational, interorganizational, and transnational levels. When we allow companies and other organizations to, for example, amplify gender inequalities through practices that favor advantaged groups over others, what’s diminished, even for the advantaged groups?

Such points appear throughout the book as bon mots of surprise. Inequality is described most explicitly in India’s efforts to rectify its stratifying caste system with quotas and Singapore’s efforts to promote meritocracy based on talent. The book also alludes to inequality more subtly with references to Japan’s insularity, particularly regarding immigration and migration. To a less obvious degree, inequality mechanisms are apparent in China’s reliance upon guanxi networks, which favor the well-connected. Here, we can see the impact of not channeling talent, whether talent is lost to outright exploitation of labor or to social closure efforts that advantage some at the expense of others.

But ultimately individuals, organizations, and nations may not particularly care about how they waste individual and collective human potential.  At best, they may signal muted attention to these issues via symbolic statements; at worst, in the pursuit of multiple, competing interests such as consolidating power and resources for a few, they may enshrine and even celebrate practices that deny basic dignities to whole swathes of our communities.

Another area that warrants more highlighting is nations’ transnational interdependence with various organizations. These include higher education organizations in the US and Europe that train students and encourage research, entrepreneurial start-ups, and partnerships. Nations are also dependent upon receiving countries’ policies on immigration. This is especially apparent now with the election of officials who promote divisions based on national origin and other categorical distinctions, dampening the types and numbers of migrants who can train in the US and elsewhere.

Finally, I wonder what else could be discerned by looking into the state, at a more granular level, as a field of departments and policies that are mostly decoupled and at odds. Particularly in China, we can see regional vs. centralized government struggles.”

During the author-meets-critics session, Ibata-Arens described how nation-states are increasingly concerned about the implications of elected officials for immigration policy and, by extension, the transnational relationships necessary to innovation, which could be severed if immigration policies become more restrictive.

Several other experts have weighed in on the book’s merits:

“Kathryn Ibata-Arens, who has excelled in her work on the development of technology in Japan, has here extended her research to consider the development of techno-nationalism in other Asian countries as well: China, Singapore, Japan, and India. She finds that these countries now pursue techno-nationalism by linking up with international developments to keep up with the latest technology in the United States and elsewhere. The book is a creative and original analysis of the changing nature of techno-nationalism.”
—Ezra F. Vogel, Harvard University
“Ibata-Arens examines how tacit knowledge enables technology development and how business, academic, and kinship networks foster knowledge creation and transfer. The empirically rich cases treat “networked technonationalist” biotech strategies with Japanese, Chinese, Indian, and Singaporean characteristics. Essential reading for industry analysts of global bio-pharma and political economists seeking an alternative to tropes of economic liberalism and statist mercantilism.”
—Kenneth A. Oye, Professor of Political Science and Data, Systems, and Society, Massachusetts Institute of Technology
“In Beyond Technonationalism, Ibata-Arens encourages us to look beyond the Asian developmental state model, noting how the model is increasingly unsuited for first-order innovation in the biomedical sector. She situates state policies and strategies in the technonationalist framework and argues that while all economies are technonationalist to some degree, in China, India, Singapore and Japan, the processes by which the innovation-driven state has emerged differ in important ways. Beyond Technonationalism is comparative analysis at its best. That it examines some of the world’s most important economies makes it a timely and important read.”
—Joseph Wong, Ralph and Roz Halbert Professor of Innovation, Munk School of Global Affairs, University of Toronto
“Kathryn Ibata-Arens masterfully weaves a comparative story of how ambitious states in Asia are promoting their bio-tech industry by cleverly linking domestic efforts with global forces. Empirically rich and analytically insightful, she reveals that by creatively eschewing liberalism and selectively using nationalism, states are both promoting entrepreneurship and innovation in their bio-medical industry and meeting social, health, and economic challenges as well.”
—Anthony P. D’Costa, Eminent Scholar in Global Studies and Professor of Economics, University of Alabama, Huntsville
For book excerpts, download a PDF here. Follow the author’s Twitter feed here.

james buchanan and the stealth plan for insurance copays

I’ve been thinking about James Buchanan again in light of Jennifer Burns’ new critical review of Nancy MacLean’s Democracy in Chains. (Steve Teles and Henry Farrell both defend their own positions — and their independence from Charles Koch — one last time as well.)

I’m done talking about Democracy in Chains, but Buchanan was on my mind today. I don’t know how much direct influence he had on public policy. He hasn’t come up that much in my work, although obviously public choice arguments bolstered the case for deregulation.

Recently, though, I’ve been trying to wrap my head around health policy a little bit, in part to test whether arguments I’ve worked out looking at other social policy domains apply there as well. And here Buchanan plays an interesting — though quite indirect — role.

Health economics as a field only emerged in the 1960s. After federal health spending shot up with the 1965 passage of Medicare and Medicaid, government became increasingly interested in supporting such research.

One of the early papers that shaped that field — and indeed, the whole policy debate over universal health insurance — was Mark Pauly’s 1968 American Economic Review paper, “The Economics of Moral Hazard.”

The paper points out that individuals who are insured against all health costs are likely to seek out more care, at least of some types, than those who are not insured. Thus insurance that includes no deductible or cost-sharing is likely to result in overuse of care. The argument seems obvious now, but at the time — while familiar to insurers — it was novel in economics.

(Interestingly, in Kenneth Arrow’s comment on the paper, his counterargument to Pauly is basically, “this is why we have norms” — to prevent people from consuming more than they need: “Nonmarket controls, whether internalized as moral principles or externally imposed, are to some extent essential for efficiency.”)
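Pauly’s basic logic can be sketched with a toy model (my own illustration, not the paper’s formal setup): with downward-sloping demand for care, a patient facing a zero copay keeps consuming care past the point where it is worth what it costs to provide.

```python
# Toy moral-hazard illustration: linear demand for care, where the marginal
# value of the q-th unit of care is (intercept - slope * q).

def quantity_demanded(price, intercept=10.0, slope=1.0):
    """Units of care demanded at a given out-of-pocket price."""
    return max(0.0, (intercept - price) / slope)

cost_per_unit = 6.0
q_uninsured = quantity_demanded(cost_per_unit)  # pays full cost
q_full_coverage = quantity_demanded(0.0)        # fully insured, zero copay

# The extra units consumed under full coverage are worth less to the patient
# than they cost to provide -- the "overuse" that cost-sharing discourages.
overuse = q_full_coverage - q_uninsured
print(q_uninsured, q_full_coverage, overuse)  # 4.0 10.0 6.0
```

Deductibles and copays, in this framing, push the out-of-pocket price back toward cost, trimming exactly those low-value units.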

Anyway, Pauly was a student of James Buchanan, and credits Buchanan with turning his attention to health policy. Pauly thought he’d do a thesis on “designing the economic framework for a government-funded voucher system for public education” (ahh, now we’re getting into MacLean territory).

But the passage of Medicare had created new pools of money for health research, and Buchanan suggested Pauly might look at health care instead.

Focused on his studies as well as his new wife, 25-year-old student Pauly was only vaguely aware and not much interested in these outside happenings until his mentor, James M. Buchanan, PhD, explained that the law creating Medicare also provided funds for academic health economics studies. He suggested that Pauly switch his thesis focus from education to health care economics and apply for a federal grant.

“Broadly speaking, I was interested in government and public policy,” Pauly remembers. “But the thing that drew me to health care economics was the money. I wish I could be more noble, but that was the reason. I got the grant and the rest is history.”

Three years later, the moral hazard paper was published. It significantly eroded the economic case for universal health insurance without meaningful cost-sharing—just the sort of plan that Ted Kennedy was then advocating—although economists like Rashi Fein would spend the next decade trying to build support for just such a plan.


The moral hazard argument also led the Office of Economic Opportunity to initiate the RAND Health Insurance Experiment, which was intended to estimate the effects of different pricing structures on healthcare consumption and outcomes.

After a decade of study and nearly $100 million in expenditures, the Health Insurance Experiment found that cost-sharing reduced the use of care without harming outcomes. (There was, of course, much debate over the results.) Employers took note: “The fraction of major companies with cost-sharing insurance plans rose from 30% to 63% in the years immediately following the publication of the experimental results.”

The next couple of decades would see repeated attempts to reform healthcare, but the principle of social insurance — of some kind of broad-based, universal coverage like Medicare — stayed on the margins of health policy conversations, replaced by a focus on cost-sharing, means-testing, and the promotion of competition.

So James Buchanan never got the education vouchers he would have liked, the ones MacLean discusses in the context of Virginia’s multiyear desegregation battle. And he hardly would have been a fan of Obamacare, which gave government a sizable new role to play in healthcare. And really, whatever credit — or blame — there is should go to Pauly, not Buchanan. Buchanan was just there with advice at a critical moment.

But maybe, just a little, we can point to James Buchanan for helping to give us the healthcare system—with plenty of copays and high deductibles, and still no universal coverage—that we have today.

[With credit to Zach Griffen, who knows much more than I do about both health economics and health policy, for pointing me in the right direction.]

Written by epopp

September 18, 2018 at 8:47 pm

Posted in economics, health, policy

the PROSPER Act, the price of college, and eroding public goodwill

The current Congress is decidedly cool toward colleges and the students attending them. The House version of the tax bill that just passed eliminates the deduction on student loan interest and taxes graduate student tuition waivers as income. Both House and Senate bills tax the largest college endowments.

Now we have the PROSPER Act, introduced on Friday. The 500-plus page bill does many things. It kills the Department of Education’s ability to keep aid from going to for-profit schools with very high debt-to-income ratios, or to forgive the loans of defrauded student borrowers. It loosens the rules that keep colleges from steering students into questionable loans in exchange for parties, perks, and other kickbacks.

And it changes the student loan program dramatically, ending subsidized direct loans and replacing them with a program (Federal ONE) that looks more like current unsubsidized loans. Borrowing limits go up for undergrads and down for some grads. The terms for income-based repayment get tougher, with higher monthly payments and no forgiveness after 20 years. Public Service Loan Forgiveness, particularly important to law schools, comes to an end. (See Robert Kelchen’s blog for some highlights and his tweetstorm for a blow-by-blow read of the bill.)

To be honest, this could be worse. Although I dislike many of the provisions, given the Republican higher ed agenda there’s nothing shocking or unexpectedly punitive, like the grad tuition tax was.

Still, between the tax bill and this one, Congress has taken some sharp jabs at nonprofit higher ed. This goes along with a dramatic downward turn in Republican opinion of colleges over the last two years.

Obviously, some of this is a culture war. Noah Smith highlights student protests and the politicization of the humanities and social sciences as the reason opinion has deteriorated. I think there are aspects of this that are problems, but the flames have mostly been fanned by those with a preexisting agenda. There just aren’t that many Reed Colleges out there.

I suspect colleges are also losing support, though, for another reason—one that is much less partisan. That is the cost of college.

I think colleges have ignored just how much goodwill has been burned up by the rise in college costs. For the last couple of weeks, I’ve been buried in data about tuition rates, net prices, and student loans. Although intellectually I knew how much prices had risen, it was still shocking to realize how different the world of higher ed was in 1980.

The entire cost of college was $7,000 a year. For everything. At a four-year school. At a time when the value of the maximum Pell Grant was over $5,000, and the median household income was not far off from today’s. Seriously, I can’t begin to imagine.

The change has been long and gradual—the metaphorical boiling of the frog. The big rise in private tuitions took place in the 90s, but it wasn’t until after 2000 that costs at publics (both sticker price and net price—the price paid after scholarships and grant aid) increased dramatically. Unsurprisingly, student borrowing increased dramatically along with it. The Obama administration reforms, which expanded Pell Grants and improved loan repayment terms, haven’t meant lower costs for students and their families.

What I’m positing is that the rising cost of college and the accompanying reliance on student loans have eroded goodwill toward colleges in difficult-to-measure ways. On the one hand, the big drop in public opinion clearly happened in the last two years, and is clearly partisan. Democrats’ assessment of college has even ticked up slightly over the same period.

But I suspect that even support among Democrats may be weaker than it appears, particularly when it comes to bread-and-butter issues, rather than culture-war issues. Only a small minority (22%) of people think college is affordable, and only 40% think it provides good value for the money. And this is the case despite the growing wage gap between college grads and high school grads. Sympathy for proposals that hit colleges financially—whether that means taxing endowments, taxing tuition waivers, or anything else that looks like it will force colleges to tighten their belts—is likely to be relatively high, even among those friendly to college as an institution.

This is likely worsened by the common pricing strategy that deemphasizes the importance of sticker price and focuses on net price. But the perception, as well as the reality, of affordability matters. Today, even community college tuition ($3,500 a year, on average) feels like a burden.

The point isn’t whether college is “worth it” in terms of the long-run income payoff. In a purely economic sense there’s no question it is and will continue to be. But pushing the burden of cost onto individuals and families, rather than distributing it more broadly, makes it feel unbearable, and makes people think colleges are just in it for the money. (Which sometimes they are.) I’m always surprised that my SUNY students think the mission of the university is to make money off of them.

This perception means that students and their families and the larger public will be reluctant to support higher education, whether in the form of direct funding, more financial aid, or the preservation of weird but mission-critical perks, like not taxing tuition waivers.

The PROSPER Act, should it come to fruition, will provide another test for public institutions. Federal borrowing limits for undergraduates will rise by $2,000 a year, to $7,500 for freshmen, $8,500 for sophomores, and $9,500 for juniors and seniors. If public institutions immediately default to expecting students to take out the new maximum in federal loans each year, they will continue to erode goodwill even among those not invested in the culture wars.

The sad thing is, this is a self-reinforcing cycle. Colleges, especially public institutions, may feel like they have no choice but to allow tuition to climb, then try to make up the difference for the lowest-income students. But by adopting this strategy, they undermine their very claim to public support. Letting borrowing continue to climb may solve budget problems in the short run. In the long run, it’s shooting yourself in the foot.



Written by epopp

December 4, 2017 at 3:55 pm

what nonacademics should understand about taxing graduate school

There are many bad provisions in the proposed tax legislation. This isn’t even the worst of them. But it’s the one that most directly affects my corner of the world. And, unlike the tax deduction for private jets, it’s one that can be hard for people outside of that world to understand.

That proposal is to tax tuition waivers for graduate students working as teaching or research assistants. Unlike graduate students in law or medical or business schools, graduate students in PhD programs generally do not pay tuition. Instead, a small number of PhD students are admitted each year. In exchange for working half-time as a TA or RA, they receive a tuition waiver and are also paid a stipend—a modest salary to cover their living expenses.

Right now, graduate students are taxed on the money they actually see—the $20,000 or so they get to live on. The proposal is to also tax them on the tuition the university is not charging them. At most private schools, or at out-of-state rates at most big public schools, this is in the range of $30,000 to $50,000.

I think a lot of people look at this and say hey, that’s a huge benefit. Why shouldn’t they be taxed on it? They’re getting paid to go to school, for goodness sakes! And a lot of news articles are saying they get paid $30,000 a year, which is already more than many people make. So, pretty sweet deal, right?

Here’s another way to think about it.

Imagine you are part of a pretty typical family in the United States, with a household income of $60,000. You have a kid who is smart, and works really hard, and applies to a bunch of colleges. Kid gets into Dream College. But wait! Dream College is expensive. Dream College costs $45,000 a year in tuition, plus another $20,000 for room and board. There is no way your family can pay for a college that costs more than your annual income.

But you are in luck. Dream College has looked at your smart, hardworking kid and said, We will give you a scholarship. We are going to cover $45,000 of the cost. If you can come up with the $20,000 for room and board, you can attend.

This is great, right? All those weekends of extracurriculars and SAT prep have paid off. Your kid has an amazing opportunity. And you scrimp and save and take out some loans and your family comes up with $20,000 a year so your kid can attend Dream College.

But wait. Now the government steps in. Oh, it says. Look. Dream College is giving you something worth $45,000 a year. That’s income. It should be taxed like income. You say your family makes $60,000 a year, and pays $8,000 in federal taxes? Now you make $105,000. Here’s a bill for the extra $12,000.

Geez, you say. That can’t be right. We still only make $60,000 a year. We need to somehow come up with $20,000 so our kid can live at Dream College. And now we have to pay $20,000 a year in federal taxes? Plus the $7,000 in state and payroll taxes we were already paying? That only leaves us with $33,000 to live on. That’s a 45% tax rate! Plus we have to come up with another $20,000 to send to Dream College! And we’ve still got a mortgage. No Dream College for you.
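For the arithmetic-minded, the hypothetical above can be tallied in a few lines. The dollar figures are the scenario’s illustrative numbers, not values computed from real tax tables:

```python
# All figures come from the hypothetical family above; the federal tax
# amounts are the scenario's stipulated values, not real bracket math.
cash_income = 60_000         # what the family actually earns
scholarship = 45_000         # Dream College's tuition scholarship, now taxed
federal_tax_before = 8_000   # federal bill on $60,000 of real income
federal_tax_after = 20_000   # stipulated bill once "income" is $105,000
state_payroll_tax = 7_000

taxable_income = cash_income + scholarship          # income on paper
total_tax = federal_tax_after + state_payroll_tax   # all taxes owed
effective_rate = total_tax / cash_income            # taxes vs. real cash income
left_to_live_on = cash_income - total_tax           # before $20k room & board

print(taxable_income, total_tax, effective_rate, left_to_live_on)
# 105000 27000 0.45 33000
```

The point the sketch makes concrete: the tax is computed against income the family never sees, but it has to be paid out of the income it does see.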

This is the right analogy for thinking about how graduate tuition remission works. The large majority of students who are admitted into PhD programs receive full scholarships for tuition. The programs are very selective, and students admitted are independent young adults, who generally can’t pay $45,000 a year. Unlike students entering medical, law, or business school, many are on a path to five-figure careers, so they’re not in a position to borrow heavily. Most of them already have undergraduate loans, anyway.

The university needs them to do the work of teaching and research—the institution couldn’t run without them—so it pays them a modest amount to work half-time while they study. $30,000 is unusually high; only students in the most selective fields and wealthiest universities receive that. At the SUNY campus where I work, TAs make about $20,000 if they are in STEM and $16-18,000 if they are not. At many schools, they make even less. (Here are some examples of TA/RA salaries.)

Right now, those students are taxed on the money they actually see—the $12,000 to $32,000 they’re paid by the university. Accordingly, their tax bills are pretty low—say, $1,000 to $6,000, including state and payroll taxes, if they file as individuals.

What this change would mean is that those students’ incomes would go up dramatically, even though they wouldn’t be seeing any more money. So their tax bills would go up too—to something like $5,000 to $18,000, depending on their university. Some students would literally see their modest incomes cut in half. The worst-case scenario is that you go to a school with high tuition ($45,000) and a moderate stipend ($20,000), in which case your tax bill as an individual would go up about $13,000. Your take-home pay has just dropped from $17,500 a year to $4,500.
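Using the worst-case numbers above, and treating the post’s rough tax estimates as given rather than recomputing them from actual brackets, the squeeze looks like this:

```python
# Worst-case grad student scenario from the post (rough estimates, not
# a real tax calculation): high tuition, moderate stipend.
stipend = 20_000         # cash the student actually receives
tuition_waiver = 45_000  # tuition the university stops charging, newly taxable
tax_now = 2_500          # implied by the $17,500 current take-home above
tax_increase = 13_000    # the post's rough estimate for this scenario

take_home_now = stipend - tax_now                   # current take-home
take_home_after = stipend - tax_now - tax_increase  # take-home under the bill
paper_income = stipend + tuition_waiver             # income "earned" on paper

print(take_home_now, take_home_after, paper_income)
# 17500 4500 65000
```

The student’s cash income never changes; only the paper income, and therefore the tax bill, does.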

What would the effects of such a change be? The very richest universities might be able to make up the difference. If it wanted to, Harvard could increase stipends by $15,000. But most schools can’t do that. Some schools might try to reclassify tuition waivers to avoid the tax hit. But there’s no straightforward way to do that.

Some students would take on more loans, and simply add another $60,000 of graduate school debt to their $40,000 of undergraduate debt before starting their modest-paying careers. But many students would make other choices. They would go into other careers, or pursue jobs that don’t require as much education. International students would be more likely to go to the UK or Europe, where similar penalties would not exist. We would lose many of the world’s brightest students, and we would disproportionately lose students of modest means, who simply couldn’t justify the additional debt to take a relatively high-risk path. The change really would be ugly.

All this would be to extract a modest amount of money—only about 150,000 graduate students receive such waivers each year—as part of a tax bill that is theoretically, though clearly not in reality, aimed at helping the middle class.

It is important for the U.S. to educate PhD students. Historically, we have had the best university system in the world. Very smart people come from all over the globe to train in U.S. graduate programs. Most of them stay, and continue to contribute to this country long after their time in graduate school.

PhD programs are the source of most fundamental scientific breakthroughs, and they educate future researchers, scholars, and teachers. And the majority of PhD students are in STEM fields. There may be specific fields producing too many PhDs, but they are not the norm, and charging all PhD students another $6,000-$11,000 (my estimate of the typical increase) would be an extremely blunt instrument for changing that.

Academia is a strange and relatively small world, and the effects of an arcane tax change are not obvious if you’re not part of it. But I hope that if you don’t think we should charge families tens of thousands of dollars in taxes if their kids are fortunate enough to get a scholarship to college, you don’t think we should charge graduate students tens of thousands of dollars to get what is basically the same thing. Doing so would basically be shooting ourselves, as a country, in the foot.

[Edited to adjust rough estimates of tax increases based on the House version of the bill, which would increase standard deductions. I am assuming payroll taxes would apply to the full amount of the tuition waiver, which is how other taxable tuition waivers are currently treated. Numbers are based on California residence and assume states would continue not to tax tuition waivers. If anyone more tax-wonky than me would like to improve these estimates, feel free.]

Written by epopp

November 18, 2017 at 5:29 pm

what problems do people think antitrust is going to solve?

Last week, I asked why antitrust is having a moment (it’s continued, on Planet Money and elsewhere), and why Democrats are using radical language to make fairly modest proposals. In this post, I’m going to ask what problems people think antitrust is going to solve, anyway.

Certainly a lot of the current concern about antitrust comes from a broad sense that corporations are too economically and politically powerful, that our economy has been restructured in ways that make ordinary people worse off, and that massive tech companies are able to use our data in ways that we have little control over. That’s political antitrust. And those are totally real issues.

But I want to explore some new questions being raised that are not exactly within the current scope of economic antitrust, but that are still kind of speaking its language—that are pushing to change the antitrust technocracy, not up-end it. To recap, as it has been construed for the last thirty-plus years, the purpose of antitrust is to promote consumer welfare, generally by trying to keep firms from being able to raise and keep prices above a competitive level. The focus is consumers, and prices.

Increasingly, though, people at least adjacent to the space of antitrust expertise are making claims about economic problems they think are being caused by lax antitrust enforcement, or that antitrust should be addressing. And those proposals are worth keeping an eye on, because as hard as it might be to change the expert consensus, it’s still more likely than a new anti-monopoly movement. (Though the two could certainly reinforce each other.) I see these new arguments as falling into basically three categories.

Market power has effects we didn’t realize

Market power is the ability to keep prices above a competitive level (i.e. above marginal cost). Once upon a time, people thought there was a fairly close relationship between how concentrated a market is—that is, how many companies control what share of the market—and how much market power firms have. Since the 1970s, there has been much less of a presumption that concentration, on its own, indicates market power. That means there’s been less concern about four airlines controlling 70% of the U.S. market, or four carriers controlling 99% of the U.S. wireless market.
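For readers who want the standard measures behind this paragraph: market power is conventionally summarized by the Lerner index, (P − MC)/P, and concentration by the Herfindahl–Hirschman Index (HHI), the sum of squared market shares. A quick sketch, where the individual carrier shares are hypothetical and chosen only to sum to the 99% figure above:

```python
def lerner(price, marginal_cost):
    """Lerner index: markup as a share of price; 0 under perfect competition."""
    return (price - marginal_cost) / price

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical shares for four carriers holding 99% of a market, plus a fringe.
wireless_shares = [33, 30, 20, 16, 1]
print(hhi(wireless_shares))  # 2646 -- above the 2500 line that the 2010
                             # DOJ/FTC merger guidelines call "highly concentrated"
print(lerner(100, 80))       # 0.2: a 20% margin of price over marginal cost
```

The post-1970s shift described above is precisely the loosening of the assumed link between a high HHI and a high Lerner index.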

Increasingly, though, people are raising flags about other problems that might result from market power. One of these is labor monopsony—the idea that firms have market power, but as purchasers of labor, not sellers of products, and that this is driving wages down. The Council of Economic Advisers put out a report last fall suggesting this might be happening, and Democrats’ mention of “bargaining power for workers” implies this is part of what they’re trying to address. There are related arguments about market power in supply chains and the emergence of “winner take most” industries that also suggest links between concentration or market power and wages.

In theory, monopsony can be handled within the current legal framework, though it is rarely addressed in practice. So developing arguments about the effects of market power on workers, and a legal framework for addressing that within antitrust, is one conceivable new direction for antitrust.

Others are arguing that market power can lead firms to attach undesirable conditions to products that make them lower quality, even as price remains the same. In particular, some scholars, including Nobel Laureate Joe Stiglitz, have framed privacy as an antitrust issue: the product may be free, but consumers have no choice about how their data is used (and in the case of platforms like Facebook, no equivalent competitors). Privacy is hard to address within a framework focused purely on price. But in Europe, competition policy is increasingly tackling privacy issues, and Germany is currently investigating whether Facebook’s dominant position is forcing consumers to give up their privacy without having an alternative choice.

Market power has causes we didn’t realize

The Atlantic just featured a story with the dramatic title, “Are Index Funds Evil?” The article discusses the rise of large institutional investors—index funds, though not only index funds—and what it means that, increasingly, big chunks of competitors in a specific market are actually owned by the same few corporations. It goes on to discuss work by José Azar, Martin Schmalz, and Isabel Tecu that finds that this common ownership enhances market power, and that airline ticket prices are 3-7% higher than they would be under separate ownership.

In this story, index funds were the hook, but it just as easily could have been framed around antitrust. In a way, common ownership was the original antitrust question: the big trusts of the late 19th century were not single-firm monopolies, but competitors that had turned over ownership to a group of trustees that made unified governance decisions. And while research in this area is still new and findings tentative, legal scholars are already making the case that antitrust law can cover the anticompetitive effects of these horizontal shareholdings. If this work continues to hold up, this seems potentially transformative.

Technological change is creating new threats to competition

Finally, a fair bit of the recent chatter is basically arguing, “it’s the technology, stupid.” The dynamics of competition change as more of the economy shifts to online platforms. Because of network effects, companies like Facebook, Google, Apple, and Amazon are hard to compete with—much of their value comes from their existing user base. And because they aren’t just selling products to consumers, but connecting consumers with producers, they aren’t acquiring market power in the traditional sense. Facebook and Google are free products, after all.

But the power of network effects means that they have a tendency towards monopoly. And the fact that the four largest companies by market capitalization are platforms suggests how central platforms have become to our economy.

So we have these new companies that have become very large, and that appear monopolistic, though they also create great value for consumers. From an antitrust perspective, they don’t really appear to be a problem, because they aren’t raising prices. And the history of rapid technological change over the past 25 years, including the rise and fall of a number of once-dominant platforms, raises the question of whether even platforms behaving in anticompetitive ways pose much of a long-term threat.

Recent scholarship, though, argues that monopolistic platforms are in fact anticompetitive, that this is a problem, and that current law is poorly equipped to handle it. Lina Khan’s much-circulated note in the Yale Law Journal, for example, argues that 1) platforms encourage predatory pricing—generally seen as irrational (and thus not an issue) within antitrust law—because network effects encourage pursuit of growth over profit, and 2) platforms collect data on rivals that gives them an unfair competitive advantage. These sorts of issues clearly fit within the broad scope of “protecting competition,” but don’t fit easily with a consumer welfare, market power conception of antitrust.

Changing that would be a significant project, but if we have an economy that is dominated by firms whose potentially anticompetitive activity is essentially beyond the scope of antitrust, there’s not much left to antitrust. And again, the massive fine the E.U. just levied on Google—for favoring its own shopping service, consisting of companies that pay Google to be on it, over competitors in search results—suggests what this could look like. So far, the U.S. has not demonstrated much enthusiasm about expanding antitrust in this direction. But it’s not inconceivable that it could happen, and it could be done within a framework that was focused solely on competition, if not only on consumer welfare.

Again, all these challenges to the current antitrust framework are at least in the ballpark of its conversation, even if they would require pushing the law in new directions or advancing the acceptance of new economic theories. And they are not the only arguments in play here. For example, the question of whether inequality is facilitated by concentration or market power, or whether it has become such a central economic problem that antitrust should try to address it, has prompted enough discussion that two leading antitrust scholars have felt the need to argue that antitrust should leave inequality alone.

Unlike political antitrust, which would probably require a social movement to move it forward, these antitrust arguments have the potential to gain traction without necessarily requiring legislation or a revolution against the current antitrust regime. The 1970s shift toward Chicago-style antitrust happened, to a considerable extent, because the old economic framework seemed increasingly inadequate for explaining the world people found themselves in. As the current framework comes to seem similarly dated, this could be another moment when such change is possible.

Written by epopp

August 10, 2017 at 1:33 pm

the democrats can’t decide how radical they want to be on antitrust

The other day I wrote about the current moment in the spotlight for antitrust. (Here’s the latest along these lines from Noah Smith.) Today I’ll say something about the new Democratic proposals on antitrust and how to think about them in terms of the larger policy space.

The Democrats are basically proposing three things. First, they want to limit large mergers. Second, they want active post-merger review. Third, they want a new agency to recommend investigations into anticompetitive behavior. None of these—as long as you don’t go too far with the first—is totally out of keeping with the current antitrust regime. And by that I mean however politically unlikely these proposals may be, they don’t challenge the expert and legal consensus about the purpose of antitrust.

But the language they use certainly does. The proposal’s subhead is “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power”. The first paragraph says that concentration “hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.” The next one states that “concentrated market power leads to concentrated political power.” This is political language, and it goes strongly against the grain of actual antitrust policy.

Economic antitrust versus political antitrust

Antitrust has always had multiple, competing purposes. The original Progressive-Era antitrust movement was partly about the power of trusts like Standard Oil to keep prices high. But it was also about more diffuse forms of power—the power of demanding favorable treatment by banks, or the power to influence Congress. That’s why the cartoons of the day show the trusts as octopuses, or as about to throw Uncle Sam overboard.

The Sherman Act (1890) and the Clayton Act (1914), the two major pieces of antitrust legislation, are pretty vague on what antitrust is trying to accomplish. The former outlaws combinations and conspiracies in restraint of trade, as well as monopolization and attempts to monopolize. The latter outlaws various behaviors if their effect is “substantially to lessen competition, or to tend to create a monopoly.” The courts have always played the major role in deciding what that means.

Throughout the last century, the courts have mostly tried to address the ability of firms to raise prices above competitive levels—the economic side of antitrust. For the last forty years, they have focused specifically on maximizing consumer welfare, often (though not always) defined as allocative efficiency. Since the late 1970s, this has been pretty locked in, both through court decisions, and through strong professional consensus that makes antitrust officials very unlikely to challenge it.

Before the 1970s, though, two things were different. For one thing, the focus was more on protecting competition, and less on consumer welfare per se (the latter was assumed to follow from the former, and was thought of a little more broadly). For another, the courts sometimes took concerns into account other than keeping prices low.

The most common such concern was the fate of small business. Concern for small business motivated the Robinson-Patman Act of 1936, which prohibited anticompetitive price discrimination. It was clear in the Celler-Kefauver Act of 1950, which restricted mergers out of fear that chain stores would eliminate local competition. And the courts acknowledged it in cases like Brown Shoe (1962), which prevented a merger that would have controlled 7% of the shoe market by pointing to Congress’s concern with preserving an “economic way of life” and protecting “local control of industry” and “small business.”

Today, Brown Shoe is seen as part of the bad old days of antitrust, when it was used to protect inefficient small businesses and to pursue confused social goals. This is a strong consensus position among antitrust experts across the political spectrum. While no one thinks that low prices for consumers are the only thing worth pursuing in life, they are the appropriate goal for antitrust because they make it coherent and administrable. Since those experts’ views dominate the antitrust agencies, and have been codified into law through court decisions, they are very resistant to change.

The Democrats’ proposal: radical language, incremental proposals

So when the Democrats start talking about “the abuse of economic and political power,” the effects of concentration on small business, and limiting mergers that “reduce wages, cut jobs, [or] lower product quality,” they are doing two things. First, they are hearkening back to the original antitrust movement, with its complex mix of concerns and its fear of unadulterated corporate power.

Second, they are very much talking about political antitrust, and political antitrust is deeply challenging to the status quo. But their actual proposals are considerably tamer than the fiery language at the beginning, and are structured in a way that doesn’t push very hard on the current consensus. New merger guidelines could make some difference around the margins. Post-merger review would definitely be good, since there’s currently no enforcement of pre-merger conditions that firms agree to, and no good way to figure out which merger approvals had negative effects. I have a hard time seeing a new review agency having much effect, though, since it’s just supposed to make recommendations to other agencies. Even I don’t like bureaucracy that much.

So my read on this is that the Democrats feel like they need a new issue, and it needs to look like it helps the little guy, and they want to sound like populist firebrands. But when you get down to the nitty gritty, they aren’t really so interested in challenging the status quo. That is, basically, they’re Democrats. Still, that the language is in there at all is remarkable, and reflects a changing set of political possibilities.

Next time I’ll look at some of the problems people are suggesting antitrust can solve. Because there are a lot of them, and they’re a diverse group. Tying them together under the umbrella of “antitrust” gives an eclectic political project some nominal coherence. But is it politically practicable? And could it actually work?

Final note: If you are interested in the grand historical sweep of antitrust in capitalism, I recommend Brett Christophers’ The Great Leveler. Among other things, he totally called the emerging wave of interest before it actually happened. Sometimes the very long lens is the right one to use.

Written by epopp

August 3, 2017 at 3:04 pm

why antitrust now?

Antitrust is having a moment. A couple of years ago, with the possible exception of complaining about never-ending airline mergers, no one paid attention to antitrust debates. Today, it’s all over the place. A few months ago, it was the Economist proclaiming “America Needs a Giant Dose of Competition.” Last month it was Amazon and Whole Foods. And now antitrust has become a key plank of the new Democratic platform.

I’ve been thinking about this for a while, but this antitrust explainer written by Matt Yglesias yesterday (which is generally quite good) motivated me to put fingers to keyboard. So I’m going to break this reflection up into three parts: Why antitrust now? What does the new antitrust debate mean? And what would it take for it to succeed? Today, I’ll tackle the first.

At one level, the rise of antitrust interest is just a perfect convergence of streams, in the Kingdon sense. A problem (or loose collection of problems) rises to public attention, people are already out there advocating a solution, even if so far unsuccessfully, and—the moment we’re in now—politicians have the motivation to grab that solution and turn it into policy, or at least a platform. It’s just about timing, and it’s not predictable.

At the same time, I think we can unpack a couple of different factors that help us think about “why now”. Some of this is covered in the Yglesias piece. But there are a few things I’d add, and some different angles I’d highlight. So without further ado, here are four reasons antitrust is suddenly getting attention.

1. It’s a reaction to a change in objective conditions.

There is a degree of consensus that market concentration is increasing across the economy. Even if you don’t think concentration is a problem, it wouldn’t be surprising that an increase would lead some people to challenge it, and make media more open to hearing that claim. This is probably a contributing factor. But market concentration has been increasing for a long time, and the link between concentration and exercise of power, whether market power or political power, is at best complicated. I don’t think the rise in concentration explains much of the antitrust attention.

Other phenomena are emerging that are objectively new, and raise new questions about how to govern them. Amazon now controls 43% of internet retail sales in the U.S. That’s astonishing, and at least a little alarming. But we’ve now seen several generations of various platforms (operating systems, browsers, social networks) rise to dominance and sometimes fall, mostly without a lot of antitrust attention—Microsoft, at the turn of the millennium, being the significant exception. These objective changes are necessary, but definitely not sufficient, for public attention to rise.

2. New actors are organizing around this issue.

A lot of the noise around antitrust is coming from a relative handful of people. Until the Democrats came on board, it was Elizabeth Warren on the political side, and before that Zephyr Teachout, the Fordham law professor who gave Andrew Cuomo a run for his money in 2014.

On the think tank side, as Yglesias notes, it’s the Open Markets Program at New America. Fellow Lina Khan, once of the Teachout campaign, landed an NYT op-ed on Amazon and Whole Foods. Fellow Matt Stoller’s Atlantic article on antitrust, “How Democrats Killed Their Populist Soul,” got a lot of attention when it came out last fall. Barry Lynn, who runs the program, has been working on this issue for a decade.

The Roosevelt Institute is the other significant player in this space. (Here’s a good, if now difficult to read, piece from last summer explaining the history of Roosevelt.) Marshall Steinbaum and others have made the case for a range of antitrust issues on a variety of grounds, and the influence of both these organizations on the new Democratic congressional platform is clearly visible.

There’s no question that this kind of policy advocacy—talking to policymakers, writing articles and op-eds—is making a difference. But its impact has been facilitated by two other things.

3. The space of expertise is changing in unexpected ways.

Antitrust policy is a space heavily dominated by experts. Congress rarely touches antitrust issues. The public rarely pays attention. Presidents generally talk a good antitrust game, and may care more or less about appointing antitrust officials who will pursue a particular policy line. But for the most part, antitrust is dominated by the lawyers and economists who serve in the Antitrust Division and FTC, consult on antitrust cases, write academic articles, and a handful of whom become judges.

And there is bipartisan consensus among these experts that concentration isn’t generally a problem. Markets are contestable. Predatory pricing is irrational, because firms know that if they drive out competitors then jack up prices, they’ll just attract some new entrant into the market. There’s really no point. Yes, there may be a little more antitrust enforcement among Democrats than Republicans. But it’s a game played “between the 45 yard lines.” As Richard Posner said recently, “Antitrust is dead, isn’t it?”

But this space is changing in interesting ways. The change doesn’t seem to be coming from the antitrust community itself, exactly. But it’s coming from people with the academic clout to be taken seriously.

From one direction, you have people like Jason Furman and Joseph Stiglitz making arguments about labor market monopsony contributing to lower wages and arguing that economic changes require new kinds of antitrust solutions. From another, you have Luigi Zingales overseeing an effort (at the University of Chicago’s Stigler Center, no less) to advocate for stronger antitrust, calling his position “pro-market” rather than “pro-business”. Zingales’s efforts are also notable for bringing in historians, political scientists, and other experts not usually part of the antitrust policy conversation.

None of these people work primarily on antitrust issues or even industrial organization, but they have the status to be taken seriously even if they are not among the usual suspects of antitrust. Their novel arguments have the capacity to shift the expert consensus about antitrust—either mildly, as in Furman’s arguments about the importance of labor monopsony (which don’t require a radical rethinking of the current approach), or more radically, as in Zingales’s advocacy of an antitrust that takes political power seriously.

I’ll discuss these changes more in the next couple of posts, but in terms of explaining “why antitrust now,” the point is that these insider/outsider dissenters are amplifying new voices and new issues, and thus contributing to the current wave of attention.

4. The cultural moment is right for other reasons.

If there’s one belief that seems to unite Americans across the political spectrum these days, it’s that the game is rigged against the ordinary person. For the many Americans who think big business is doing at least some of the rigging, this produces a new openness to arguments about concentration and corporate control. As much as anything else, I think this explains the current interest in antitrust. People are receptive to arguments that purport to explain why they’re being screwed.

Antitrust is a protean issue. It can channel many different types of fears and at least theoretically respond to many different kinds of problems. Whether it can do so effectively, and whether antitrust is the right tool for the job, is a different question. In my next post I’ll try to unpack some of those different problems, why they’re now being linked together under the umbrella of “antitrust,” and draw on some antitrust history to think about what current efforts mean.

Written by epopp

August 1, 2017 at 1:51 pm

is ethnography the most policy-relevant sociology?

The New York Times – the Upshot, no less – is feeling the love for sociology today. Which is great. Neil Irwin suggests that sociologists have a lot to say about the current state of affairs in the U.S., and perhaps might merit a little more attention relative to you-know-who.

Irwin emphasizes sociologists’ understanding “how tied up work is with a sense of purpose and identity,” quotes Michèle Lamont and Herb Gans, and mentions the work of Ofer Sharone, Jennifer Silva, and Matt Desmond.

Which all reinforces something I’ve been thinking about for a while: that ethnography—that often-maligned, inadequately scientific method—is the sociology most likely to break through to policymakers and the larger public. Besides Matt Desmond of Evicted, what other sociologists have made it into the consciousness of policy types in the last couple of years? Of the four who immediately pop to mind—Kathy Edin, Alice Goffman, Arlie Hochschild, and Sara Goldrick-Rab—three are ethnographers.

I think there are a couple of reasons for this. One is that as applied microeconomics has moved more and more into the traditional territory of quantitative sociology, it has created a knowledge base that is weirdly parallel to sociology, but not in very direct communication with it, because economists tend to discount work that isn’t produced by economists.

And that knowledge base is much more tapped into policy conversations because of the status of economics and a long history of preexisting links between economics and government. So if anything I think the Raj Chettys of the world—who, to be clear, are doing work that is incredibly interesting—probably make it harder for quantitative sociology to get attention.

But it’s not just quantitative sociology’s inability to be heard that comes into play. It’s also the positive attraction of ethnography. Ethnography gives us stories—often causal stories, about the effects of landlord-tenant law or the fraying safety net or welfare reform or unemployment policy—and puts human flesh on statistics. And those stories about how social circumstances or policy changes lead people to behave in particular, understandable ways, can change people’s thinking.

Indeed, Robert Shiller’s presidential address at the AEA this year argued for “narrative economics”—that narratives about the world have huge economic effects. Of course, his recommendation was that economists use epidemiological models to study the spread of narratives, which to my mind kind of misses the point, but still.

The risk, I suppose, is that readers will overgeneralize from ethnography, when that’s not what it’s meant for. They read Evicted, find it compelling, and come up with solutions to the problems of low-income Milwaukeeans that don’t work, because they’re based on evidence from a couple of communities in a single city.

But I’m honestly not too worried about that. The more likely impact, I think, is that people realize “hey, eviction is a really important piece of the poverty problem” and give it attention as an issue. And lots of quantitative folks, including both sociologists and economists, will take that insight and run with it and collect and analyze new data on housing—advancing the larger conversation.

At least that’s what I hope. In the current moment all of this may be moot, as evidence-based social policy seems to be mostly a bludgeoning device. But that’s a topic for another post.


Written by epopp

March 17, 2017 at 2:04 pm

let’s panic thoughtfully

Since we’re both here, my social media bubble probably looks a lot like your social media bubble. And in my social media bubble, people are freaking out about the Trump presidency. There are false voter fraud claims, ugly attacks on the media, chilling of speech at government agencies, and a whole host of policy actions many find disastrous. I am also disturbed and fear that the U.S. is making an irreversible turn toward authoritarianism.

At the same time, I’m disheartened by how quickly academics and others who should know better unreflectively buy into the latest outrage on social media. This has negative consequences independent of Trump’s actions. Catastrophizing the bits that aren’t catastrophic undermines our authority to speak up about the things that actually are. And further politicization of the media and, now, the federal bureaucracy will continue to erode the very things that protect us from Trump’s worst.

I do not mean to create a false equivalence here. What Trump has the power to do vastly outweighs the chattering of academics or journalists on Twitter or Facebook. But I have no direct influence over Trump’s administration. I can, however, exhort my academic colleagues to do better.

In that spirit, here are two things to consider before you decide to share the latest outrage.

1) Is this an important bill? Or just another bill?

In the 114th Congress, more than 12,000 bills were introduced. You know how many became law? 329 (less than 3%). 86% never even made it out of committee. There are a bunch of extremists in Congress. Some of them introduce the same bills over and over that are never going to see the light of day. This has been going on for decades.

A few days ago, an Alabama Republican introduced a bill that would pull the U.S. out of the United Nations. Twitter went nuts, quoting the bill with captions like “WHAT. THE. ACTUAL. HELL.” It spread like crazy.

Problem is, this is nothing new. This representative has been introducing this bill into each Congress for the last two decades. It has nothing to do with Trump, nor are there any indications it was treated differently this time. There are lots of things to get worked up about. This bill is not one.

2) Is this politics as usual? Or something truly new and dangerous?

There has also been a lot of freaking out in the last couple of days about the silencing of federal agencies. EPA, NIH, and USDA have all had reports about communications restrictions, including cancellation of a planned climate change conference and a halt on all “public-facing documents” at USDA.

A lot of Trump’s political agenda will play out—or not—through the executive agencies. It is very likely that his appointees will attempt to undercut what many see as their basic missions. By all means, oppose this with great intensity.

But when administrations change, they are going to point agencies in new political directions. I don’t have firsthand experience working in federal agencies. But I have spent a lot of time reading documents from just these types of agencies in the 1960s, 70s, and 80s. Putting a pause on public communication during a transition doesn’t seem that radical to me.

I keep looking for a quote from an actual agency employee that says, “This is wildly different from what happened when George W. Bush took office.” The closest I can find is ProPublica saying an EPA employee “had never seen anything like it in nearly a decade with the agency.” But that only covers the Obama transition, which aligned with the mission of the EPA. It’s not clear that this is not politics-as-usual. Could it transition into something new and dangerous? Absolutely. But that ship has not yet sailed.

Why commitment to critical thinking matters in the face of a Trump administration

I can already hear people yelling that I’m not taking Trump seriously enough. “This isn’t ordinary times! This is an emergency. Real lives are at stake!”

But it’s precisely because I don’t think this is ordinary times—because I think we’re in a uniquely dangerous moment—that it is especially important that we retain the ability to think clearly, for two big reasons.

First, treating every single action of the administration as dangerous and disastrous, without any larger context, further politicizes our fragile institutions. It may be too late for the media. But it is not good for democracy if our bureaucracies go rogue.

People are delighted that the Badlands National Park gave the administration a big old middle finger yesterday with its climate change tweets. But to the extent that federal government functions at all, it functions because of all the unelected, unappointed people who do their jobs, regardless of administration. If ordinary government employees become seen as actively in the bag for the left, we are one step closer to having our government stop functioning entirely.

Is there a time to say “no”, and openly rebel or quit? Absolutely. And if you haven’t already, you should probably write down your own personal lines in the sand, before our sense of “normal” further erodes. If you’re at the EPA, maybe it’s active suppression of climate change evidence. If you’re at NSF, maybe it’s meddling with individual grants. Maybe your lines have already been crossed.

But if they haven’t, as a civil servant you serve democracy better by doing your job—even if that’s carrying out decisions made by someone you hate—than by throwing shade from a government Twitter account.

Second, assuming everything is catastrophic limits our ability to focus on the real catastrophes. The single most dangerous thing Trump has done in the last few days (and I know, it’s been a busy few days) is double down on his claims about massive voter fraud. Because if people don’t believe that our elections are basically honest and agree to respect the results of those elections, our democracy is truly toast.

The good news is that, according to the Washington Post, “Trump has virtually no elected allies in this assault on the election system.” Not even Sean Spicer will say Trump’s claims are actually true.

If we cry wolf about every change that is not in fact a catastrophe—if we suddenly scream “fascism” about changes that are part of the normal workings of democracy—we undermine our ability to fight the things that matter most.

And if we don’t have a democratic government, all this other stuff we care so much about—healthcare, immigration policy, racial justice, science, foreign policy, whatever your personal biggest concerns are—will be irrelevant. A fully authoritarian government can do whatever it wants, and we’ll have no say. Defending democracy has to be priority #1. And defending democracy means commitment to reason.

Written by epopp

January 25, 2017 at 5:10 pm

the antitrust equilibrium and three pathways to policy change

Antitrust is one of the classic topics in economic sociology. Fligstein’s The Transformation of Corporate Control and Dobbin’s Forging Industrial Policy both dealt with how the rules that govern economic life are created. But with some exceptions, it hasn’t received a lot of attention in the last decade in econ soc.

In fact, antitrust hasn’t been on the public radar that much at all. After the Microsoft case was settled in 2001, antitrust policy just hasn’t thrown up a lot of issues that have gotten wide public attention, beyond maybe griping about airline mergers.

But in the last year or so, it seems like popular interest in antitrust is starting to bubble up again.

Just in the last few months, there have been several widely circulated pieces on antitrust policy. Washington Monthly, the Atlantic, ProPublica (twice), the American Prospect—all these have criticized existing antitrust policy and argued for strengthening it.

This is timely for me, because I’ve also been studying antitrust. As a policy domain that is both heavily technocratic and heavily influenced by economists, it’s a great place to think about the role of economics in public policy.

Yesterday I put a draft paper up on SocArXiv on the changing role of economics in antitrust policy. The 1970s saw a big reversal in antitrust, when we went from a regime that was highly skeptical of mergers and all sorts of restraints on trade to one that saw them as generally efficiency-promoting and beneficial for consumers. At the same time, the influence of economics in antitrust policy increased dramatically.

But while these two developments are definitely related—there was a close affinity between the Chicago School and the relaxed antitrust policy of the Reagan administration, for example—there’s no simple relationship here: economists’ influence began to increase at a time when they were more favorable to antitrust intervention, and after the 1980s most economists rejected the strongest Chicago arguments.

I might write about the sociology part of the paper later, but in this post I just want to touch on the question of what this history implies about the present moment and the possibility of change in antitrust policy.


Written by epopp

January 9, 2017 at 6:51 pm

free college: not dead yet


I’m not dead yet.

While higher ed has certainly been under attack since the election, Donald Trump hasn’t said too much about his agenda for higher education, and with Betsy DeVos, charter school aficionado, at the helm of the Department of Education, it seems like K-12 issues may be at the forefront of the new administration.

What’s pretty clear, though, is that “free college”, a la Bernie or, more reluctantly, Hillary, is not on that agenda. But free college, it turns out, has not disappeared: New York State Governor Andrew Cuomo has announced a free college proposal of his own, to apply to SUNY and CUNY schools.

Cuomo’s proposal would make SUNY/CUNY tuition-free for families with incomes of up to $125,000. It would require full-time attendance, and be “last-dollar” aid—i.e., the benefit would kick in only after federal Pell grants, NY state Tuition Assistance Program (TAP) grants, and any scholarships were used up.
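Since this is last-dollar aid, the award amounts to a simple rule: the state covers whatever tuition is left after all other grant aid is applied. Here’s a minimal sketch of that rule (the function name and dollar figures are my own illustrative assumptions, not the actual program formula):

```python
def last_dollar_award(tuition, pell=0, tap=0, scholarships=0):
    """Hypothetical last-dollar tuition benefit: the state pays only
    the tuition remaining after all other grant aid, never less than zero."""
    remaining = tuition - (pell + tap + scholarships)
    return max(0, remaining)

# Low-income student: Pell + TAP already cover tuition,
# so the new benefit adds nothing.
low_income = last_dollar_award(tuition=6470, pell=4000, tap=3000)

# Higher-income student: little or no other grant aid,
# so the benefit covers the full tuition bill.
higher_income = last_dollar_award(tuition=6470)

print(low_income, higher_income)  # 0 6470
```

This is why, as Chingos points out, families between $80,000 and $125,000 benefit most: they receive little or no Pell or TAP, so the last-dollar benefit picks up nearly the entire tuition bill.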

New York is not the first state to set forth some kind of “free college” proposal—see Tennessee and Oregon. However, it is the first to take it beyond the community college level. And the sheer size of the NYS system—enrolling a million students—makes it impossible to ignore.

So, some caveats. “Free tuition” probably doesn’t cover fees, which at my SUNY, at least, are nearly $3000 a year. And it definitely doesn’t cover living expenses. New York also has low tuition, compared to most states—it is still only $6470 at four-year SUNYs. And it has a decent—though not as generous as California’s—state grant aid program in TAP. If your income is low enough—I’d guess below $50k, though that’s just a ballpark—between Pell and TAP you’re not paying any tuition anyway. As Matthew Chingos accurately points out in the Washington Post, families with incomes between $80,000 and $125,000 will benefit most.

And the fact that living expenses are still, at SUNY/CUNY, larger than tuition costs means that it’s also not going to make much of a dent in student loans, which are lower than average (about $20k for four-year degrees) for SUNY graduates anyway. Cuomo’s headline about “alleviating crushing burden of student loans” is hyperbole.

So what this is, is a significant, and expensive, expansion of grant aid for the middle class, and a reframing of what college costs (nothing! I know, I know) that may encourage lower-income students to go to college. And tying the benefit to full-time attendance may encourage more full-time enrollment, which evidence suggests (though there are a lot of selection effects here) facilitates completion.

And, of course, this is just a proposal. It’s not yet legislation, and there are a lot of steps between here and there. Nevertheless, despite its limitations, if this became a reality I think the implications for higher ed would be huge—for the symbolic value of committing to the idea that students should not pay for tuition, if nothing else.

Several commentators have explored the policy and student effects of Cuomo’s proposal. But what would the organizational impacts look like? Here, there are a couple of things to think through.

One is the question of whether this would be resource-neutral for SUNY and CUNY. There’s no indication it’s not intended to be, but a lot will depend on the details. For SUNY, at least, funding has only been loosely linked to tuition levels. Sometimes New York State has raised tuition to plug its revenue gaps, without SUNY ever seeing the money.

A second is how it intersects with the push for larger enrollments, which has been a pounding drumbeat over the last three or four years at SUNY (not sure about CUNY). Right now, additional students—even in-state ones—bring marginal benefits, but would that still be the case if many of them weren’t paying tuition? I don’t think the enrollment push has been particularly good for the institution, but it’s also been sold as the path to financial solvency. If free tuition means no benefits to larger enrollments, SUNY will have to find a new strategy for achieving long-term fiscal stability.

This could also affect who gets to enroll. Free tuition might make selective schools more competitive—which is probably good for them as institutions. But it also might encourage an even heavier focus on out-of-state and international students who can bring more revenue. That, in turn, could lead to battles over who gets the seats—New York residents or non-New-Yorkers paying full freight—which have been brutal in California, but largely absent in New York.

Finally, this clearly affects the complex organizational ecosystem of higher ed. It’s bad for private institutions in New York, especially small struggling colleges like Albany’s Saint Rose, which cut two dozen tenure lines last year in a desperate attempt to stay afloat. It’s probably also bad for for-profit colleges—largely because of the symbolic value of making college “free” rather than real changes in relative cost, since for-profit students are disproportionately in the lower-income group that wouldn’t benefit anyway.

But I’d hold that the biggest impact of such a plan would be the symbolic one. Is it ideal that it’s basically a middle-class tax benefit that does nothing material for lower-income families? No. But the institutional details of the New York State system—its relatively low tuition and preexisting state grant aid—make it possible to create “tuition-free college” here for less money than it would cost in many places. Showing that it can be done will make free college more than a pipe dream. SUNY/CUNY is the 500-pound gorilla of public higher ed. Where New York leads, others will follow.

Written by epopp

January 5, 2017 at 6:05 pm

Posted in academia, education, policy

echoes of espeland: competing rationalities in the dakota access pipeline


Yesterday, the Army Corps of Engineers announced a temporary halt to the Dakota Access Pipeline (DAPL) project. It stated that it would explore alternative routes for the pipeline that would, presumably, avoid the areas of deep concern to the Standing Rock Sioux Tribe.

The DAPL story has been in the news on and off since September, when journalist Amy Goodman captured a clash in which guards used dogs and pepper spray to drive back protesters. I had only been paying superficial attention to it, but started thinking more yesterday with the Corps’ decision to hit pause on the project.

Specifically, I was thinking about the echoes between this battle and the one chronicled by Wendy Espeland in her classic book, Struggle for Water: Politics, Rationality, and Identity in the American Southwest.

Read the rest of this entry »

Written by epopp

December 5, 2016 at 11:11 pm

how we abandoned the idea that media should serve the public interest

Yesterday the New Republic wrote about how little attention has been paid to policy in the current election. In 2008, the network news programs devoted 220 minutes to policy; this year, it’s been a mere 32 minutes.

The piece goes on to bemoan the decline of the public-interest obligation once held by broadcasters (and which still remains, in vestigial form) in exchange for their use of the airwaves, and to connect the dots between the gradual removal of those restrictions and the toxic media environment we find ourselves in today. While — I think appropriately — the article doesn’t overemphasize the causal effects, it does highlight a broader shift that was going on in the 1970s and is still echoing today.

The 1970s saw a wide, bipartisan embrace of the deregulatory spirit in many areas. The transportation industries — air, rail, trucking — were one chief target. Banking was in there. So was energy. More controversial, and less bipartisan, was the push for the removal of new social regulations—rules meant to improve the environment, health, and safety. But even when it came to social regulation, both sides believed in regulatory reform. (I’ve recently written about some of this history.)

Economists were one group that made a strong case for economic deregulation — the removal of price and entry barriers in industries like transportation, energy, and finance. (For the definitive account, see Martha Derthick and Paul Quirk’s 1985 book.) Their role in airline deregulation, led by the colorful Alfred “To me, they’re all just marginal costs with wings” Kahn, is probably best known. But economists also had something to say about the Federal Communications Commission.

Perhaps the most famous — certainly one of the earliest — critics of the FCC was Ronald Coase. Coase argued in 1959 that there was no good reason, technical or economic, for the government to own the airwaves, and made the case for auctioning off the radio spectrum. He was not at all impressed with the argument that licenses should be distributed according to the “public interest”, and emphasized not only the legal ambiguity of that standard, but the fact that the FCC’s decisions reflected “a degree of inconsistency which defies generalization.”

At the time, the idea of the airwaves as a public trust was so universally accepted that Coase’s views seemed quite radical, even to other economists. When, in 1962, he extended his argument into a 200-page RAND report, coauthored with Bill Meckling and Jora Minasian, RAND quashed it for being too incendiary. Later, recalling these events, Coase quoted an internal review of the paper: “I know of no country on the face of the globe—except for a few corrupt Latin American dictatorships—where the ‘sale’ of the spectrum could even be seriously proposed.”

By the early 1970s, though, a new consensus had emerged in economics around questions of regulation, and this consensus saw FCC demands that broadcasters behave in unprofitable ways not as acting in the “public interest,” but as a source of efficiency losses that should, at a minimum, be regarded skeptically. This aligned with increasingly loud arguments from outside of economics (as well as within) about regulatory capture, which implied that the “public interest” pursued by executive agencies would never be more than a sham, anyhow.

Eventually, this shift in mood led to a change in how the FCC regulated broadcasters. The public interest standard was loosened, and in 1981 the agency began to shift from using hearings to allocate spectrum licenses — in theory to the applicants that best served the public interest — to a lottery. In 1994, it moved another step closer to Coase’s prescription, beginning to auction off the licenses — a move that stimulated a great deal of research in auction theory as well as generating substantial revenue.

The “public interest” goal, which had initially been baked into the allocation process (however poorly it was pursued in practice) became increasingly marginalized. Or perhaps it was subsumed within the assumed public interest in encouraging efficient use of the spectrum. The process echoes the one that took place in antitrust policy, in which historically significant goals other than allocative efficiency — goals that often conflicted with efficiency and even with each other — were gradually defined as being simply beyond the scope of what could be considered. (Indeed, Coase’s criticism of the inconsistency of the FCC’s behavior sounds quite similar to Justice Stewart’s scathing critique of merger law, written around the same time: “the sole consistency I can find is that under Section 7 [the merger section of the Clayton Act], the Government always wins.”)

I don’t know enough about the history of the FCC to have an informed opinion on whether the public interest standard as it stood circa 1970 was redeemable or if the agency was irreparably captured. And I definitely don’t think the decline of that standard is the main explanation for the current media environment, which goes far beyond television.

But I do think that the demise of the idea that we should expect media to have obligations beyond profit — which is bound up with the ideal, if not the practice, of the public interest standard — is a big contributor. Individual journalists — that increasingly rare breed — may remain professionally committed to an ethical code and a sense of mission that isn’t primarily about sales. But at the corporate level, any such qualms were abandoned long ago, and the journalistic wall between “church and state” — editorial and advertising — continues to crumble.

What this means is that we get political news that is just horse race coverage, and endless examination of the ugliest aspects of politics — which, unsurprisingly, encourages more of the same. Actually expecting media to pursue the “public interest”, whether through regulatory means or professional commitment, may be unrealistically idealistic. But giving up on the concept entirely seems certain to take us further down the path in which objective lies merit just as much attention as truth.

Written by epopp

November 3, 2016 at 11:39 am

Posted in economics, policy

don’t be fooled: trump gave a remarkably effective speech 

I woke up this morning and started reading the post-mortems on Trump’s speech.  Andrew Sullivan pronounced it boring and lacking substance. Michael Barbaro in the New York Times called it a missed opportunity.  People are getting comfortable that Hillary’s point-spread will hold and we will ride Trump out.

Those people are wrong. First, I’ll say this up front and as clearly as I can: I do not support Trump for President of the United States. His temperament, his instincts, his tactics and his values are antithetical to mine and I cannot support him. But having said that, I will also say that he gave a remarkably effective speech. And I think it will get him elected. Let me be specific: Read the rest of this entry »

Written by seansafford

July 22, 2016 at 4:58 pm

a history of “command and control”, or, thomas schelling is behind every door

I’m working on a paper about the regulatory reform movement of the 1970s. If you’ve read anything at all about regulation, even the newspaper, you’ve probably heard the term “command and control”.

“Command and control” is a common description for government regulation that prescribes what some actor should do. So, for example, the CAFE standards say that the cars produced by car manufacturers must, on average, have a certain level of fuel efficiency. Or the EPA’s air quality standards say that ozone levels cannot exceed a certain number of parts per billion. Or such regulations may simply forbid some things, like the use of asbestos in many types of products.

This is typically contrasted with incentive-based regulation, or market-based regulation, which doesn’t set an absolute standard but imposes a cost on an undesirable behavior, like carbon taxes, or provides some kind of reward for good (usually meaning efficient) performance, as utility regulators often do.

The phrase “command and control” is commonly used in the academic literature, where it is not explicitly pejorative. Yet it’s kind of a loaded term. Who wants to be “commanded” and “controlled”?

So as I started working on this paper, I became more and more curious about the phrase, which only seemed to date back to the late 1970s, as the deregulatory movement really got rolling. Before that, it was a military term.

To the extent that I had thought about it at all, I assumed it was a clever framing coined by some group like the American Enterprise Institute that wanted to draw attention to regulation as a form of government overreach.

So I asked Susana Muñiz Moreno, a terrific graduate student working on policy expertise in Mexico, to look into it. She found newspaper references starting in 1977, when the New York Times references CEA chair Charles Schultze’s argument that “the current ‘command-and-control’ approach to social goals, which establishes specific standards to be met and polices compliance with each standard, is not only inefficient ‘but productive of far more intrusive government than is necessary.’”


From the Aug. 21, 1977 edition of the New York Times

And sure enough, Schultze’s influential book of that year, The Public Use of Private Interest, uses the phrase a number of times. Which makes sense, as Schultze was instrumental in advancing regulatory reform and plays a key role in my story. But he’s clearly not the AEI type I would have imagined coining such a phrase—before becoming Carter’s CEA chair Schultze was at Brookings, and before that he was LBJ’s budget director.

Nevertheless, given Schultze’s influence and the lack of earlier media use of the term, I figured he probably came up with it and it took off from there.

But I started poking around Google Scholar, mostly because I wondered if some more small-government-oriented reformer of regulation had been using it prior to Schultze. I thought James C. Miller III might be a possibility.

I didn’t find any early uses of the term from Miller, but you know what I did find? An obscure book chapter called “Command and Control” written by Thomas Schelling in 1974.

Sociologists probably best know Schelling from his 1978 book, Micromotives and Macrobehavior, and its tipping point model, which shows how the decisions of agents who prefer that even a relatively small proportion of their neighbors be like them (read: of the same race) can quickly lead to a highly segregated space. Its insights are regularly referenced in the literature on neighborhoods and segregation.

(If you haven’t seen it, you should totally check out this brilliant visualization of the model, “Parable of the Polygons”.)
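The model’s core logic fits in a few lines of code. Here is a minimal sketch — the grid size, the 30% threshold, and the move-to-a-random-empty-cell rule are simplifying assumptions of mine, not Schelling’s exact setup:

```python
import random

def neighbors(grid, r, c, n):
    """Return the occupied neighbors of cell (r, c) on an n x n grid."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] is not None:
                out.append(grid[rr][cc])
    return out

def mean_similarity(grid, n):
    """Average share of like-type neighbors, over all occupied cells."""
    shares = []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is None:
                continue
            nbrs = neighbors(grid, r, c, n)
            if nbrs:
                shares.append(sum(x == grid[r][c] for x in nbrs) / len(nbrs))
    return sum(shares) / len(shares)

def schelling(n=20, empty_frac=0.1, threshold=0.3, steps=200, seed=1):
    """Agents of two types move to a random empty cell whenever fewer
    than `threshold` of their neighbors match them. Even this mild
    preference typically produces heavily segregated neighborhoods."""
    rng = random.Random(seed)
    per_type = int(n * n * (1 - empty_frac) / 2)
    cells = [0, 1] * per_type + [None] * (n * n - 2 * per_type)
    rng.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    for _ in range(steps):
        moved = False
        for r in range(n):
            for c in range(n):
                agent = grid[r][c]
                if agent is None:
                    continue
                nbrs = neighbors(grid, r, c, n)
                if nbrs and sum(x == agent for x in nbrs) / len(nbrs) < threshold:
                    empties = [(i, j) for i in range(n) for j in range(n)
                               if grid[i][j] is None]
                    i, j = rng.choice(empties)
                    grid[i][j], grid[r][c] = agent, None
                    moved = True
        if not moved:  # everyone is content; equilibrium reached
            break
    return grid
```

Running `schelling()` and checking `mean_similarity` before and after typically shows the like-neighbor share climbing well above the 30% the agents actually demand — which is the whole unsettling point.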


Economists know him for a broader range of game theoretic work on decision-making and strategy—work that was recognized in 2005 with a Nobel Prize.

Anyway, I just checked out the chapter—and I’m pretty sure this is the original source. Like most of Schelling’s work, it’s written in crystal-clear prose. The chapter itself is only secondarily about government regulation; it’s in an edited book about the social responsibility of the corporation. It hasn’t been cited often—29 times on Google Scholar, often in the context of business ethics.

Schelling muses on the difficulty of enforcing some behavioral change—like making taxi passengers fasten their seat belts—even for the head of a firm, and considers how organizations try to accomplish such goals: for example, by supporting government requirements that might be more effective than their own policing efforts.

It’s a wandering but fascinating reflection, with a Carnegie-School feel to it. And the “command and control” of the title doesn’t refer to government regulation, but to the difficulties faced by organizational leaders who are trying to command and control.

In fact, if I didn’t know the context I’d think this was a completely coincidental use of the phrase. But the volume, Social Responsibility and the Business Predicament, is part of the same Brookings series, “Studies in the Regulation of Economic Activity,” that published Schultze’s lectures in 1977, and which catalyzed a network of economists studying regulation in the early 1970s.

So while Schultze adapts the phrase for his own needs, and it’s possible that he could have borrowed the military phrase directly, my strong hunch is that he is lifting it from Schelling. Which actually fits my larger story—which highlights how the deregulatory movement built on the work of McNamara’s whiz kids from RAND, a community Schelling was an integral part of—quite well.

I can’t resist ending with one other contribution Schelling made to the use of economics in policy beyond his strategy work: the 1968 essay, “The Life You Save May Be Your Own.” (He was good with titles—this one was borrowed from a Flannery O’Connor story.) This introduced the willingness-to-pay concept as a way to value life—the idea that one could calculate how much people valued their own lives based on how much they had to be paid in order to accept very small risks of death. Controversial at the time, the proposal eventually became the main method policymakers used to place a monetary value on life.
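The willingness-to-pay logic is back-of-envelope arithmetic: scale the pay premium people demand by the size of the risk they accept. A sketch, with hypothetical numbers of my own choosing:

```python
def value_of_statistical_life(risk_premium, added_risk):
    """If workers demand `risk_premium` dollars to accept an
    `added_risk` probability of death, the implied value of a
    statistical life is the premium scaled up by the risk."""
    return risk_premium / added_risk

# Hypothetical: workers accept $700 extra pay per year for a job
# carrying an additional 1-in-10,000 annual risk of death.
print(round(value_of_statistical_life(700, 1 / 10_000)))  # -> 7000000
```

In other words, a population of 10,000 such workers collectively accepts $7 million in exchange for one expected death — hence a $7 million “statistical life.”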

Thomas Schelling. He really got around.

Written by epopp

June 8, 2016 at 3:05 pm

tying our own noose with data? higher ed edition

I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”

But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.


Maybe this wasn’t such a good idea.

The knowledge I have in mind is the student-unit record. See? I told you it was boring.

The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.

There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.

There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.

But there’s currently no comprehensive federal student-unit record system. In fact, at the moment, creating one is illegal: it was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.

Having student-unit records available would open up all kind of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.

In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.

But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.

So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.

First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.

But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.

Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”

Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.

Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.

Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.

Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, there are selection effects — or maybe it’s just because they’re richer — but college graduates are happier and healthier than nongraduates.

The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.

What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.

Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.

But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.

Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (all among the top five on Brookings’ list) provide the most value — versus that they have select students who are distinctive in ways that aren’t reflected by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.

So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.

If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.

Written by epopp

June 3, 2016 at 2:06 pm

how the acid rain program killed northeasterners

Remember acid rain? For me, it’s one of those vague menaces of childhood, slightly scarier than the gypsy moths that were eating their way across western Pennsylvania but not as bad as the nuclear bombs I expected to fall from the sky at any moment. The 1980s were a great time to be a kid.

The gypsy moths are under control now, and I don’t think my own kids have ever given two thoughts to the possibility of imminent nuclear holocaust. And you don’t hear much about acid rain these days, either.

In the case of acid rain, that’s because we actually fixed it. That’s right: a complex and challenging environmental problem that we got together and solved. And the Acid Rain Program, passed as part of the Clean Air Act Amendments of 1990, has long been the shining example of how to use emissions trading to successfully and efficiently reduce pollution, and has served as an international model for how such programs might be structured.

The idea behind emissions trading is that some regulatory body decides the total emissions level that is acceptable, finds a way to allocate to polluters the rights to emit some fraction of that total acceptable level, and then allows them to trade those rights with one another. Polluters for whom it is costly to reduce emissions will buy permits from those who can reduce emissions more cheaply. This meets the required emissions level more efficiently than if everyone were simply required to cut emissions to some specified level.
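The efficiency argument can be made concrete with a toy two-firm example — all costs and capacities below are made-up numbers, and I assume constant marginal abatement costs for simplicity:

```python
def uniform_mandate_cost(firms, cut_each):
    """Every firm must cut `cut_each` tons, regardless of its costs.
    firms: list of (cost_per_ton, max_cut) pairs."""
    return sum(cost * cut_each for cost, _ in firms)

def cap_and_trade_cost(firms, total_cut):
    """Under trading, cuts migrate to the cheapest abaters: firms
    with low costs cut first (and sell permits) until the cap is met."""
    remaining = total_cut
    total = 0.0
    for cost, capacity in sorted(firms):
        cut = min(capacity, remaining)
        total += cost * cut
        remaining -= cut
        if remaining == 0:
            break
    return total

# Illustrative: Firm A abates SO2 at $100/ton (up to 10 tons),
# Firm B at $300/ton (up to 10 tons); the cap requires 10 tons cut.
firms = [(100, 10), (300, 10)]
print(uniform_mandate_cost(firms, 5))   # each cuts 5 tons -> 2000
print(cap_and_trade_cost(firms, 10))    # A cuts all 10 -> 1000.0
```

Same total abatement, half the cost — but notice the sketch says nothing about *where* firms A and B are located, which is exactly the distributional blind spot discussed below.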

While there have clearly been highly successful examples of such cap-and-trade systems, they have also had their critics. Some of these focus on political viability. The European Emissions Trading System, meant to limit CO2 emissions, issued too many permits—always politically tempting—which has made the system fairly worthless for forcing reductions in emissions.

Others emphasize distributional effects. The whole point of trading is to reduce emissions in places where it is cheap to do so rather than in those where it’s more expensive. But given similar technological costs, a firm may prefer to clean up pollutants in a well-off area with significant political voice rather than a poor, disenfranchised minority neighborhood. Geography has the potential to make the efficient solution particularly inequitable.

These distributional critiques frequently come from outside economics, particularly (though not only) from the environmental justice movement. But in the case of the Acid Rain program, until now no one has shown strong distributional effects. This study found that SO2 was not being concentrated in poor or minority neighborhoods, and this one (h/t Neal Caren) actually found lower emissions in Black and Hispanic neighborhoods, though more in poorly educated ones.

A recent NBER paper, however, challenges the distributional neutrality of the Acid Rain Program (h/t Dan Hirschman)—but here, it is residents of the Northeast who bear the brunt, rather than poor or minority neighborhoods. It is cheaper, it turns out, to reduce SO2 emissions in the sparsely populated western United States than in the densely populated East. So, as intended, more reductions were made in the West, and fewer in the East.


The problem is that the population is a lot denser in the northeastern U.S. So while national emissions decreased, more people were exposed to relatively high levels of SO2, and therefore more people died prematurely than would have been the case with the inefficient solution of just mandating an equivalent across-the-board reduction in SO2 levels.

To state it more sharply, while the trading built into the Acid Rain Program saved money, it also killed people, because improvements were mostly made in low-population areas.

This is fairly disappointing news. It also points to what I see as the biggest issue in the cap-and-trade vs. pollution tax debate—that so much depends on precisely how such markets are structured, and if you don’t get the details exactly right (and really, when are the details ever exactly right?), you may either fail to solve the problem you intended to, or create a new one worse than the one you fixed.

Of course pollution taxes are not exempt from political difficulties or unintended consequences either. And as Carl Gershenson pointed out on Twitter, a global, not local, pollutant like CO2 wouldn’t have quite the same set of issues as SO2. And the need to reduce carbon emissions is so serious that honestly I’d get behind any politically viable effort to cut them. But this does seem like one more thumb on the “carbon tax, not cap-and-trade” side of the scale.


Written by epopp

February 15, 2016 at 1:17 pm

social security in the case-deaton era

There is something about the Thanksgiving season that drives me to post things that are a bit random and more or less out of my wheelhouse. Maybe it’s anticipation of the turkey coma to come.

Anyhow, in that spirit, there’s something about the Case-Deaton paper on how middle-aged white people are dying at increasing rates that has been niggling at the back of my mind all month.

The paper, of course, got a lot of attention in the media and blogosphere (including a nice catch by Philip Cohen on how much of the finding is accounted for by the changing age composition of 45–54-year-olds). But it’s really the less educated whose mortality is increasing, not the whole white population.* And the general finding that inequality in life expectancy between rich and poor is increasing in the U.S. is not particularly new, although the finding of actual declines in life span for some groups is relatively recent.

Obviously, the fact that people in the top income quintile are now expected to get a dozen or so more years of life than those in the bottom — a gap that was a third of that three decades ago — has all sorts of policy implications. But it made me think about Social Security in particular.

Social Security is, on the one hand, a political success because it’s a (near-) universal entitlement program. On the other, there have long been complaints that Social Security is itself regressive, since no Social Security tax is paid on income over $118,500. Of course, Social Security also replaces a larger portion of pre-retirement income for lower-income Americans than it does for higher-income Americans. It’s actually surprisingly difficult to figure out whether, on balance, it’s progressive or not.

What is clear, though, is that if low-income folks are losing years of life while high-income folks are gaining them, the system is losing progressivity (or gaining regressivity). I thought I’d play around with some basic numbers to try to examine this question. But it turns out I don’t need to. The National Academies of Science came out with a big study looking at this only a couple of months ago — a study which, so far as I can tell, got nothing like the media coverage that the much simpler Case-Deaton study received.

So what’s the scoop?

Well, as usual with social policy, a lot depends on the assumptions you make. But making some fairly reasonable assumptions, the NAS report finds that yes, the growing mortality gap is also increasing the gap in Social Security benefits received between low- and high-income groups. Lifetime benefits to the lowest income quintile remain about the same for men born in 1960 as for those born in 1930, and for women they decrease nearly 20%. For the top quintile, though, they increase: about 13% for women, and nearly 30% for men, a huge jump.


FIGURE 4-5 Average lifetime Social Security benefits for males (in thousands of dollars).


FIGURE 4-6 Average lifetime Social Security benefits for females (in thousands of dollars).
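The mechanism is easy to see in a toy calculation (every number below is hypothetical, not taken from the NAS report): lifetime benefits are roughly annual benefits times years spent collecting them, so a widening life-expectancy gap widens the lifetime-benefit gap even when the annual formula itself stays progressive.

```python
# Toy sketch of lifetime Social Security benefits. All dollar amounts and
# life expectancies are hypothetical, chosen only to illustrate the mechanism.

def lifetime_benefits(annual_benefit, life_expectancy, claiming_age=65):
    """Benefits collected from claiming age until death (ignoring discounting)."""
    return annual_benefit * max(0, life_expectancy - claiming_age)

low_annual, high_annual = 15_000, 30_000  # progressive annual formula

# Hypothetical life expectancies by income group and birth cohort.
cohorts = {"1930": {"low": 78, "high": 81},
           "1960": {"low": 77, "high": 87}}

for name, le in cohorts.items():
    low = lifetime_benefits(low_annual, le["low"])
    high = lifetime_benefits(high_annual, le["high"])
    print(name, low, high, round(high / low, 2))  # high/low lifetime ratio
```

In this toy example the high/low lifetime-benefit ratio rises across cohorts purely because the rich gained years of life while the poor lost them; nothing about the benefit formula changed.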

So what does this mean? Well, if you see this decrease in progressivity as a problem, it suggests you pay attention to the distributional consequences of various proposed Social Security reforms — which are often not taken into account in discussions of their effects. And it’s not always obvious which reforms will have which effects on progressivity. Raising the early retirement age from 62 to 64 makes the system less progressive, which makes sense. But raising the normal retirement age to 70, though it reduces benefits overall, actually (and unexpectedly) makes things more progressive. And reducing Social Security payouts to those with higher incomes accomplishes this even more directly.

More generally, though, this is a reminder that the growing impact of inequality — an impact that results not only in differential material well-being, but in large gaps in actual years to live — has implications far beyond the obvious ones. The growing gap between rich and poor has the potential to undermine the intent of public policies in a whole variety of ways. We ignore this at our peril.

* Caveat: Just as the population of 45–54-year-olds is not the same in 2013 as 1999, neither is the population of adults with a high school degree or less, the population Case and Deaton identify as having the big mortality increase; this group has become smaller over time and relatively more disadvantaged compared to the population as a whole.

Written by epopp

November 25, 2015 at 1:00 pm

the gap between students, professors, and policy wonks

This was going to be a post about How College Works, a recent book by Dan Chambliss and Chris Takacs. Every couple of years I teach a senior seminar on higher education, and this time around we started with Chambliss & Takacs.

I’d still like to write that post. I liked the book quite a lot, and it was a big hit with the students. But right now I want to emphasize something teaching this class often reminds me of, and which was even more apparent as we made our way through How College Works: the gap is huge between why students attend college and what they think they get out of it, on the one hand, and how academics and policy wonks think about the purpose of college and how to improve it, on the other.

The higher ed policy world has been buzzing lately. First there was a big new paper that used tax data to provide some of the best evidence to date on who is defaulting on student loans. (Short answer: students who attend for-profits, and, secondarily, community college students, who traditionally did not borrow but have started to in the last decade.)

Right after that came the new federal College Scorecard, which similarly uses tax data to provide, for the first time, some information about student incomes after college relative to net price and money borrowed at specific schools.

All this generated lots of chatter among the media, policy types, and academics obsessed with such things. I would have contributed myself, had the start of the semester not whacked me upside the head (and, briefly, off the internet).

But as all this was coming out, I was just coming off an intense conversation with my class of seniors about what they had gotten out of their four years of college. For context, these are sociology majors, almost all from NY state, a large majority residential and of traditional college age, about 40% first generation, half Black and Latino, at a school of middling selectivity. So perhaps not the most career-obsessed (they *are* sociology majors), but also not collectively so privileged as to be able to ignore the financial realities of life after graduation.

What they talked about was personal development. They learned who they are. To manage themselves. To prioritize and juggle competing obligations. To evaluate the character of others. To be confident in themselves and their ability to handle new situations. To get along with others who are different from them. They made what they expect to be lifelong friends. Academics barely came up. Neither did future income. They are very aware that “life out there” is drawing near as they head toward graduation, and they do wish college had done a better job of helping them think about how to transition to the world of work. But the reason they go to college, and what they think they got out of it, is primarily personal and social.

This conversation, which took place before we read How College Works, anticipated many of the themes in the book. Chambliss and Takacs’s book is, first and foremost, student-centered, and it emphasizes how college works for students. That means that even though academics are a significant piece of the puzzle, much of the benefit as students see it comes elsewhere—in their typology, not just in skills they gain, but in confidence developed and in relationships made. I think this is part of why the book resonated so much with students, who wished they had read it in high school, or at least as freshmen.

How distant this seems from the policy conversation about higher ed, which is increasingly focused on post-college income—the thing that can be measured, and thus the only thing that matters. Surely no one wants to argue that it is fine for students to graduate with a mound of debt and a job that pays less than a living wage. And the “college experience” that most of my students have had to some degree—at least partly residential, surrounded by others of one’s age cohort—and which is central to what they feel they’ve gained, is not in fact the typical college experience. And, of course, they’re young. They’ll probably pay more attention to the economic value of their degree as they finish school and start looking for full-time jobs, and maybe they’ll think differently about the cost of college when they’re paying more taxes.

But I can’t help but think that a national conversation that focuses so heavily on college as a gateway to a high-paying job, and ignores what traditional college students think they get out of college, is really wrong-headed. Maybe it’s ridiculously expensive to give everyone a four-year residential college experience. Maybe it’s dumb that students are willing to go into debt so they can have that experience. Perhaps it’s a consumption good that they should be paying for themselves, and we shouldn’t be collectively subsidizing it. But for my students, and the Hamilton College students of How College Works—different in so many ways from my own—none of this matters. They are getting something valuable out of college. It’s just not what policy makers think.

Written by epopp

September 25, 2015 at 3:06 pm

The Journalist and the Ethnographer

Update: I responded to some of this post’s comments in another post.

Okay, I’m just a month behind in starting my blogging for orgtheory—sorry, I’m a horrible procrastinator. Thanks so much to Katherine for the kind introduction and to the editors for the chance to blog! So about my new book: it’s called Cut Loose: Jobless and Hopeless in an Unfair Economy, and it’s an ethnographic study of long-term unemployment and economic inequality. I can bore you with details later, but first I thought I’d mention a topic that’s the subject of a high-profile symposium in New York going on right now: the relationship between ethnography and journalism.

The symposium, “Ethnography Meets Journalism—Evidence, Ethics & Trust,” has an all-star lineup of ethnographers and journalists who will talk about the different ways they gather data and tell stories, as well as the misunderstandings and pitfalls that bedevil both approaches. (The event is from 2 to 6 p.m. today in Manhattan, and more details are here; if you can’t attend, you can listen to the livestream, which will be posted online afterward.)

I am not involved with this event, but I thought I’d give my two cents since I have a background in both professions. I’m a sociologist now at Virginia Commonwealth University, but I used to be a newspaper reporter (at New York Newsday), and as labor of love I still edit a magazine called In The Fray, a publication devoted to personal stories on global issues. (We like publishing commentary by academics, by the way, and are looking for a new blogger.)

When I had aspirations to be the next Bob Woodward back in college, I remember stumbling upon The Journalist and the Murderer, a book by New Yorker writer Janet Malcolm (who first published the work in 1989 as a two-part series in the New Yorker). The book begins with an incendiary paragraph:

Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people’s vanity, ignorance or loneliness, gaining their trust and betraying them without remorse. Like the credulous widow who wakes up one day to find the charming young man and all her savings gone, so the consenting subject of a piece of nonfiction learns—when the article or book appears—his hard lesson. Journalists justify their treachery in various ways according to their temperaments. The more pompous talk about freedom of speech and “the public’s right to know”; the least talented talk about Art; the seemliest murmur about earning a living.

The Journalist and the Murderer is an account of the relationship between bestselling journalist Joe McGinniss and the subject of one of his true-crime books, Dr. Jeffrey R. MacDonald. During the course of McGinniss’s research for the book, MacDonald was tried and convicted of the murders of his wife and two children. The Journalist and the Murderer excoriated McGinniss for allegedly “conning” his subject—first by befriending him, and then betraying that confidence. (More details about the book can be found here.)

The unethical behaviors that Malcolm describes in her book are extreme, but they speak to an aspect of journalism that many people find troubling: the way that it uses and manipulates its subjects and then casts them aside, all in pursuit of a sensationalistic headline. This sort of behavior may account in part for why journalists rank abysmally low in Gallup polling on honesty and ethics across various professions. It’s part of the reason I decided to go to grad school myself: I love journalism and believe it plays a vital role in our democracy, but I got tired of the ambulance chasing and other less-than-savory things you sometimes have to do.

Institutional review boards and the profession’s code of ethics help sociologists avoid these sorts of problems by setting up protections for the people we interview and observe. This often includes the promise of confidentiality, which can shield our respondents from the public humiliation or retribution at times endured by the subjects of news articles after publication.

Before we pat ourselves on the back, however, sociologists do still run into problems at times in terms of how we present our research to respondents and how they ultimately respond to our work. As someone who teaches research methods, I particularly like Jonathan Rieder’s Canarsie and Annette Lareau’s Unequal Childhoods as examples of how sociologists have dealt with this difficult ethical terrain—particularly the appendix to Lareau’s book where she describes candidly and thoughtfully the hostile reactions some of her respondents had to their portrayal in her book, in spite of the fact that she hid their identities.

Like journalists, can we also be confidence men and women—gaining trust and betraying it? Furthermore, do we have to do that—in order to gain access to begin with, and in order to be truthful to the reality we describe? That’s the age-old question in research ethics, of course.

Interestingly, journalists would say we are guilty of the exact opposite professional sin: being “overprotective” of our respondents. The fact that their identities—and sometimes those of the cities, companies, etc., we research, too—are hidden leads to a number of complications. First, it’s hard to prove to people—particularly skeptical journalists—that what we’ve written is true. What’s to stop us from fabricating our data out of whole cloth? One obvious safeguard would be the peer review process—and yet it’s not hard to imagine how a determined fabulist could get around even that hurdle.

Fact-checking helps journalists to avoid this problem. I’ve worked as a fact-checker before: what typically happens is the reporter gives you the contact info for their sources, and you call them up and verify each quote and fact. It’s harder to make up stuff when someone is looking over your shoulder in this way. (That said, disgraced journalists like Jayson Blair and Stephen Glass remind us that journalism has failed to catch many acts of dishonesty—and with today’s news budgets so strapped, publications no longer have as many resources to verify the information in each article.)

Even when there’s no outright fabrication involved, however, we as sociologists can alienate readers with our methods. We care about protecting our subjects to the point that in our published work we change (hopefully inconsequential) details, create composite characters, and otherwise alter the reality that we actually observed. For some readers, this is a no-no. Consider, for instance, the outcry over the revelation that James Frey changed or fabricated details in his memoir A Million Little Pieces (and this was a memoir—a genre of literature that has long had a tradition of embellishing the past).

As someone who has experience interacting with journalists, I know they look with great skepticism at “anonymous” sources. As they see it, stories based on information collected in this way are by their very nature untrustworthy. Journalistic norms (and sometimes a publication’s own policies) emphasize that there has to be a powerfully compelling reason to grant someone anonymity in an article. Sociologists would say interviewees are more willing to be candid about their personal lives and personally held beliefs if they have the protection of a pseudonym, but journalists would stress the fact that hiding their identities can also encourage them to lie: no one can come after them for making up a damaging story about someone else, for instance.

To the extent that sociology wants to be part of public debates on important issues, skepticism about our data is another reason for lay readers to dismiss our work. Partly, this is because readers just don’t understand the reasons that we believe practices like confidentiality are so important—they’re used to how journalists do their job. But I can imagine they’d have problems even if they understood our reasoning. Why should they trust us? Especially on controversial topics, how do they know we’re not lying, or at least fudging the facts?

It’s not just the question of honesty; it’s also a question of style. Using pseudonyms comes across as a bit hokey—especially for place names, which I imagine sound like the egghead equivalent of “Gotham” or “Metropolis” to non-sociologists.

I’m not sure how to deal with these problems, and I’d be curious what people think. I do think it’d be helpful if sociologists read more journalism (and journalists more sociology) and learned from some of the best practices of the other approach. For sociologists, reading classic works of journalism—from Let Us Now Praise Famous Men to Friday Night Lights—can be incredibly illuminating. It can allow us to draw from the literary beauty, perceptiveness, and heft of these writers in ways that serve our ideas. It can inspire us to write without jargon, make our theories more intelligible to lay readers, and not be afraid to reveal to readers the emotional power of our narratives. Those are the best ways, I think, that we can ensure sociology gets read by the people who could best benefit from its messages.

Click here for my responses to the comments.

Written by Victor Tan Chen

September 21, 2015 at 6:45 pm

Posted in ethics, ethnography, policy

the most overlooked trend in U.S. higher education

State defunding of public higher education has received a lot of attention in recent years. And budget cuts like the $250 million one Scott Walker made this year to the University of Wisconsin mean this trend continues to get media play.

Less visible in the media, but still well known, is that as public funding has eroded, colleges have become more dependent on tuition dollars for revenue. For public institutions, this has meant both tuition increases for in-state students and, where possible, a greater percentage of out-of-state and international students. While the net price of college hasn’t increased nearly as much as the sticker price, it has still outpaced inflation year after year.

Both of these narratives are completely true. Yet this story of a shift from public to private funding overlooks one critical factor: the expansion of federal student aid.

During the past two decades, as state appropriations per postsecondary student flattened then declined, federally supported financial aid made massive gains. In 2002 its volume passed that of state appropriations, and by 2010 it was twice as large.


Stunning, right? This suggests a very different story than the one about the privatization of public universities we hear so much about. Instead, it looks like there’s been a shift from state funding of higher ed to federal funding. So what’s going on here?

Well, a couple of things. First, the federal aid figures include both grants and loans. Data sources like the College Board and the Delta Cost Project include loans as part of net tuition, not as federal funding. That makes sense, if you’re interested in the financial burden of college on students and their families. And the loans don’t cost the government anything like their face value.

But counting this way downplays the fact that those loans ultimately exist because the federal government makes them possible. Colleges are doubly dependent in this scenario: on students’ choices about where to attend, but also on the feds to make them available in the first place. And if you’re coming at this from an organizational perspective, we should expect resource dependence — whether on students, on the feds, or both — to have effects.

Second, this chart collapses public, private non-profit, and for-profit institutions together. The state appropriations are only going to publics (which also enroll about three-quarters of the students). But as of 2010, more than a quarter of student aid was going to the 10% of students enrolled in for-profit institutions. Moreover, because private colleges are so much more expensive than public colleges, they also receive a disproportionate fraction of federal loans. I haven’t pulled these numbers apart by institution type. But if we just compared state appropriations and federal aid to students at public institutions, the chart would surely be less dramatic.

It would be misrepresenting reality to say that public institutions have experienced a substantial shift from state to federal dependence (at least without substantially more number crunching). And it would be similarly wrong to argue that schools haven’t become more tuition dependent (since loans do come to schools via individual students).

But you can absolutely make the case that at the field level, higher education has increased its dependence on the federal government relative to state governments. And this makes colleges susceptible to a whole wave of federal demands that simply weren’t there before. The college ratings system Obama proposed and then abandoned is one example of this. Education Secretary Arne Duncan’s drumbeating for accountability is another.

Colleges have a lot of political clout and are well-organized. They ground the ratings proposal into a shadow of its former self. And it will take a lot of doing before we see No College Student Left Behind.

Nevertheless, if organization theory tells us anything, it’s that resource dependence matters. When, five years down the road, we get a Race to the Top rewarding colleges that meet completion and job placement goals at a given tuition cost, I know where I’ll be looking: at that point in 2002 where higher ed waved goodbye to the states and hello to the feds.

[Data from the College Board’s Trends in Student Aid 2014 and Grapevine reports, various years, deflated with BLS CPI.]
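For anyone replicating that adjustment, deflating a nominal series with the CPI is just a ratio of index values; the aid figures and CPI values below are hypothetical placeholders, not the actual College Board or Grapevine numbers.

```python
# Sketch of CPI deflation: convert nominal dollars to constant (base-year)
# dollars. CPI index values and dollar amounts here are hypothetical.

def to_real(nominal, cpi, base_cpi):
    """Nominal dollars -> constant dollars of the base-CPI year."""
    return nominal * base_cpi / cpi

# year: (nominal aid in billions, CPI index that year) -- made-up figures
series = {2002: (100, 180.0), 2010: (170, 218.0)}
base_cpi = 218.0  # express everything in 2010 dollars

real = {year: round(to_real(nom, cpi, base_cpi), 1)
        for year, (nom, cpi) in series.items()}
print(real)
```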

Written by epopp

August 31, 2015 at 12:34 pm

when anarchism is a decent option: the case of somalia

Africanists like to toss around the words “failed state.” But they falsely assume that there is only one option – building a stronger state. What would happen if the state just withered and people let it go? Are people better off just ditching a weak state? A 2006 article in the Journal of Economic Behavior and Organization by Benjamin Powell, Ryan Ford, and Alex Nowrasteh asks exactly this question: what happened in Somalia after its state collapsed in 1991?

Somalia is a nation that was hammered by war, famine, dictators, and an out-of-control socialist state. In 1991, the state collapsed and people reverted to tribal forms of governance based on Islamic courts and kinship (the Xeer system). In 2005, Powell et al. collected basic data on longevity, health, roads, money, and law. Then they asked: how does Somalia compare with other African states?

The answer is surprising. On many measures, post-1991 Somalia actually does well compared to 42 other sub-Saharan states. On at least five measures (including life expectancy and mortality), Somalia is in the top half (p. 662). On a few important measures (such as water access and immunization), it is near the bottom. Even then, it often improved in absolute terms, though not in relative terms. When you compare Somalia with neighbors that had been at war, Powell et al. report improvements on most measures, while other warring states saw declines. Somalia has also seen an expansion of its pastoral economy, a functional currency, and the best mobile phone system in the region. The major setback for Somalia is a depressing performance with regard to infant mortality, which probably relates to poor immunization rates. Still, statelessness did not lead to chaos. Rather, Somalia continued to resemble other African societies on most measures.

This is not an argument for selling off the White House, but it does make an extremely important comparative institutional point. High-quality Western systems of governance are simply not on the table. There is no way these impoverished societies can create the level of wealth needed for Western-style states in the short term. And the actual options are horrible – dictatorships or Marxist states. If those are your choices, it might be reasonable to evolve into a decentralized legal system instead.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

July 1, 2015 at 12:01 am

picking the right metric: from college ratings to the cold war

Two years ago, President Obama announced a plan to create government ratings for colleges—in his words, “on who’s offering the best value so students and taxpayers get a bigger bang for their buck.”

The Department of Education was charged with developing such ratings, but they were quickly mired in controversy. What outcomes should be measured? Initial reports suggested that completion rates and graduates’ earnings would be key. But critics pointed to a variety of problems—ranging from the different missions of different types of colleges, to the difficulties of measuring incomes along a variety of career paths (how do you count the person pursuing a PhD five years after graduation?), to the reductionism of valuing college only by graduates’ incomes.

Well, as of yesterday, it looks like the ratings plan is being dropped. Or rather, it’s become a “college-rating system minus the ratings”, as the Chronicle put it. The new plan is to produce a “consumer-facing tool” where students can compare colleges on a variety of criteria, which will likely include data on net price, completion rates, earning outcomes, and percent Pell Grant recipients, among other metrics. In other words, it will look more like U-Multirank, a big European initiative that was similarly a response to the political difficulty of producing a single official ranking of universities.

A lot of political forces aligned to kill this plan, including Republicans (on grounds of federal mission creep), the for-profit college lobby, and most colleges and universities, which don’t want to see more centralized control.

But I’d like to point to another difficulty it struggled with—one that has been around for a really long time, and that shows up in a lot of different contexts: the criterion problem.

Read the rest of this entry »

Written by epopp

June 26, 2015 at 1:48 pm

art museums should sell more of their collections

Michael O’Hare of UC Berkeley talks about policy reform in art museums in this Econtalk podcast. He made two points that I like a lot:

  1. Art museums hold tons of materials that are never shown, looked at, or studied. Why not sell the bottom 1 or 2% of holdings to make attendance free?
  2. Museum ticket prices should be zero. Why? The marginal cost of an additional visitor is zero: in most museums the galleries are empty most of the time, viewing art rarely excludes others, and most studies show that raising prices decreases attendance.

Listen to the whole talk here.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

June 26, 2015 at 12:01 am

Posted in art, fabio, policy