orgtheory.net

Archive for the ‘economics’ Category

james buchanan and the stealth plan for insurance copays

I’ve been thinking about James Buchanan again in light of Jennifer Burns’ new critical review of Nancy MacLean’s Democracy in Chains. (Steve Teles and Henry Farrell both defend their own positions — and their independence from Charles Koch — one last time as well.)

I’m done talking about Democracy in Chains, but Buchanan was on my mind today. I don’t know how much direct influence he had on public policy. He hasn’t come up that much in my work, although obviously public choice arguments bolstered the case for deregulation.

Recently, though, I’ve been trying to wrap my head around health policy a little bit, in part to test whether arguments I’ve worked out looking at other social policy domains apply there as well. And here Buchanan plays an interesting — though quite indirect — role.

Health economics as a field only emerged in the 1960s. After federal health spending shot up with the 1965 passage of Medicare and Medicaid, government became increasingly interested in supporting such research.

One of the early papers that shaped that field — and indeed, the whole policy debate over universal health insurance — was Mark Pauly’s 1968 American Economic Review paper, “The Economics of Moral Hazard.”

The paper points out that individuals who are insured against all health costs are likely to seek out more care, at least of some types, than those who are not insured. Thus insurance that includes no deductible or cost-sharing is likely to result in overuse of care. The argument seems obvious now, but at the time — while familiar to insurers — it was novel in economics.

(Interestingly, in Kenneth Arrow’s comment on the paper, his counterargument to Pauly is basically, “this is why we have norms” — to prevent people from consuming more than they need: “Nonmarket controls, whether internalized as moral principles or externally imposed, are to some extent essential for efficiency.”)
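Pauly's point can be sketched with a toy linear demand curve for care. All of the numbers below are invented for illustration; nothing here comes from the paper itself.

```python
# Toy illustration (hypothetical numbers) of Pauly's moral hazard point:
# with a linear demand curve for care, full insurance (price to patient = 0)
# induces consumption beyond the point where care is worth its cost.

def quantity_demanded(price, max_q=10, choke_price=100):
    """Linear demand: q = max_q * (1 - price / choke_price)."""
    return max(0.0, max_q * (1 - price / choke_price))

cost_per_unit = 60.0                     # resource cost of one unit of care
q_uninsured = quantity_demanded(60)      # patient pays full cost -> 4.0 units
q_with_copay = quantity_demanded(30)     # 50% cost-sharing       -> 7.0 units
q_full_insurance = quantity_demanded(0)  # patient pays nothing   -> 10.0 units

print(q_uninsured, q_with_copay, q_full_insurance)
```

The extra units consumed under full insurance are worth less to the patient than they cost to produce, which is the overuse the paper is pointing at, and which deductibles and copays are meant to discourage.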

Anyway, Pauly was a student of James Buchanan, and credits Buchanan with turning his attention to health policy. Pauly thought he’d do a thesis on “designing the economic framework for a government-funded voucher system for public education” (ahh, now we’re getting into MacLean territory).

But the passage of Medicare had created new pools of money for health research, and Buchanan suggested Pauly might look at health care instead.

Focused on his studies as well as his new wife, 25-year-old student Pauly was only vaguely aware and not much interested in these outside happenings until his mentor, James M. Buchanan, PhD, explained that the law creating Medicare also provided funds for academic health economics studies. He suggested that Pauly switch his thesis focus from education to health care economics and apply for a federal grant.

“Broadly speaking, I was interested in government and public policy,” Pauly remembers. “But the thing that drew me to health care economics was the money. I wish I could be more noble, but that was the reason. I got the grant and the rest is history.”

Three years later, the moral hazard paper was published. It significantly eroded the economic case for universal health insurance without meaningful cost-sharing—just the sort of plan that Ted Kennedy was then advocating—although economists like Rashi Fein would spend the next decade trying to build support for just such a plan.

Capture

The moral hazard argument also led the Office of Economic Opportunity to initiate the RAND Health Insurance Experiment, which was intended to estimate the effects of different pricing structures on healthcare consumption and outcomes.

After a decade of study and nearly $100 million in expenditures, the Health Insurance Experiment found that cost-sharing reduced the use of care without harming outcomes. (There was, of course, much debate over the results.) Employers took note: “The fraction of major companies with cost-sharing insurance plans rose from 30% to 63% in the years immediately following the publication of the experimental results.”

The next couple of decades would see repeated attempts to reform healthcare, but the principle of social insurance — of some kind of broad-based, universal coverage like Medicare — stayed on the margins of health policy conversations, replaced by a focus on cost-sharing, means-testing, and the promotion of competition.

So James Buchanan never got the education vouchers he would have liked, the vouchers MacLean focuses on in the context of Virginia's multiyear desegregation battle. And he hardly would have been a fan of Obamacare, which gave government a sizable new role to play in healthcare. And really, whatever credit — or blame — there is should go to Pauly, not Buchanan. Buchanan was just there with advice at a critical moment.

But maybe, just a little, we can point to James Buchanan for helping to give us the healthcare system—with plenty of copays and high deductibles, and still no universal coverage—that we have today.

[With credit to Zach Griffen, who knows much more than I do about both health economics and health policy, for pointing me in the right direction.]

Written by epopp

September 18, 2018 at 8:47 pm

Posted in economics, health, policy

winter book forum 2018, part 2: what do people actually get out of college?

This winter, we are discussing Bryan Caplan's The Case Against Education. The main issue: we invest a ton in education, and it seems to do good. But is that because schooling acts as a filter, or because schooling gives you concrete skills or better ways of thinking? If education is mostly a filter (the signalling model), we should probably cut back on education a lot.

In this post, I'll discuss the types of evidence that Caplan reviews. His book is empirical in that the strength of the argument relies on what other researchers have found. A short blog post does not do justice to this work. For example, he asks: how much do people learn in college? How much do people use specific skills (like algebra) in the workplace? Is there any evidence that learning is transferable – that people acquire "critical thinking"? Each of these topics commands one's full attention, but we can only skim the highlights here.

As you might expect from the title of the book, the direct benefits of education turn out to be pretty sparse. Probably the most damning evidence is the set of studies showing that people don't learn that much in college to start with. Another important fact is that few people ever use the skills – the few they may remember – in their work. Thus, it is very hard to sustain the simple human capital argument – education makes you better because you learn valuable things. That can't be right if people don't learn or retain much in college.

Two related points. First, in response to those who argue that education imparts critical thinking, Caplan points to evidence that learning is actually domain specific: learning one area doesn't seem to help in most others. Psychologists call this "transfer of learning," and the hope for broad transfer has been rejected for a long, long time. Second, a fascinating point about incomes: if education improves you via human capital development, we'd expect income to increase with every year of education you get. Instead, Caplan reports that studies show a disproportionate jump in income when you complete the fourth year of college – a classic sign of signalling, which economists call the "sheepskin effect."
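The sheepskin pattern is easy to see with made-up numbers. Under a pure human capital story, each year of college adds roughly the same wage premium; under signalling, the premium is concentrated in the degree year. The figures below are hypothetical, not drawn from any study Caplan cites:

```python
# Hypothetical log-wage premiums (relative to no college) by years of
# college completed, under two stylized stories. All numbers invented.
human_capital = {0: 0.00, 1: 0.08, 2: 0.16, 3: 0.24, 4: 0.32}  # steady gains
sheepskin     = {0: 0.00, 1: 0.02, 2: 0.04, 3: 0.06, 4: 0.30}  # jump at degree

def yearly_gains(premiums):
    """Marginal wage gain from each additional year of college."""
    years = sorted(premiums)
    return [round(premiums[y] - premiums[y - 1], 2) for y in years[1:]]

print(yearly_gains(human_capital))  # uniform gains: [0.08, 0.08, 0.08, 0.08]
print(yearly_gains(sheepskin))      # spike in the degree year: [0.02, 0.02, 0.02, 0.24]
```

If the data look like the second row, the degree itself (the "sheepskin") is doing the work, which is what the signalling model predicts.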

Of course, no single study seals the deal, and it may be that Caplan has misread some, or even a lot, of the studies. But it is unlikely he misread them all, and the overall picture is consistent with the everyday view that formal education is not a particularly good way to impart skills. Thus, we should be very skeptical of claims that education is a great way to train people for the labor market. Next week: so what?

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($4.44 – cheap!!!!)/Theory for the Working Sociologist (discount code: ROJAS – 30% off!!)/From Black Power/Party in the Street / Read Contexts Magazine– It’s Awesome!

Written by fabiorojas

February 19, 2018 at 5:01 am

winter 2018 book forum: bryan caplan’s the case against education

This month, I will write a series of blog posts about Bryan Caplan’s The Case Against Education: Why the Education System is a Waste of Time and Money. Normally, I will summarize a book, then praise it and then offer some criticism. In this case, I will deviate slightly. A lot of people will criticize the book, so I will focus on describing the core argument and explain why sociologists should care about it. If Caplan’s main point is even partially correct, it has big implications that any educational researcher should care about. In this first installment, I’ll provide a little background and then lay out the main argument. Later this month, I’ll describe the nuts and bolts of the argument in more depth.

I've known Bryan for many, many years and I've grown a deep appreciation for his style of thought. The way he approaches an academic topic is to first boil down the main claim. Then, he will massively research the claim to find out how much of it is true. When I say "massive," I mean massive. He'll read across disciplines. He'll read flagship journals and obscure edited volumes. He'll even email the authors of papers to make sure that he got their main point correct. Once he is done with this obsessive review, he'll summarize the main points and then re-assess and redevelop the original claim. He re-estimates models and then draws out the conclusions, which often cut against common opinion.

The Case Against Education proceeds in this same way. Caplan starts with a simple idea that a lot of people believe in: education improves you and that is why it should be subsidized and supported. This basic idea comes in a few flavors. For example, in academia, economists believe in human capital theory – education gives you valuable labor market skills. Other people may believe that education improves you because it makes you a better citizen or it otherwise improves your critical thinking skills. Caplan then contrasts this with another popular theory called “signalling theory” – education doesn’t make you better, but it works as an IQ/conformity test. In other words, people who do well after getting an education aren’t better in any concrete sense. Rather, the college degree is a signal that you are smart to begin with.

Why the emphasis on the human capital/signalling distinction? The theory you believe in has huge policy implications. If you believe that education gives you a lot of skills and benefits, then it may make sense to pay for a lot of education or to subsidize it. In contrast, if you believe it is mostly signalling, that is a sign that you should scale back education.

Then, Caplan delves into hundreds of studies in education, economics, sociology, psychology, and other fields to see whether education actually makes you better, or whether it is merely a hoop you have to jump through. For example, is it true that education makes you a better "critical thinker"? It turns out that there is psychological research on "transfer of learning" (whether learning a skill in field A helps you in field B). The answer? Nope, not much transfer. Is it true that college graduates learn a lot? He reviews work like Richard Arum and Josipa Roksa's Academically Adrift, which shows that people don't learn a lot in college. The list of debunked effects of education goes on and on.

As you can sense from my thumbnail sketch, Caplan (correctly, in my view) arrives at the conclusion that education doesn't really make you better in any direct sense. If that is true, then much of education is a costly and inefficient signalling game, and maybe we should seriously consider cutting back on it. That would entail a massive change in policy.

Next week: What education does and does not do to a person.

Written by fabiorojas

February 9, 2018 at 7:15 am

book spotlight: culture and commerce by mukti khaire

A very, very long time ago, Mukti Khaire was a guest blogger at orgtheory. Since then, she's been a successful management researcher at the Harvard Business School and Cornell Tech. It is thus a great pleasure for me to read her new book Culture and Commerce: The Value of Entrepreneurship in Creative Industries. The book is a contribution to both the study of art markets and the study of entrepreneurship. The book's premise is that art and business exist in a sort of fundamental tension. Khaire's goal is to offer an account of what entrepreneurship means in the world of artistic markets.

The key element of Khaire's theory is that entrepreneurs do not merely introduce artistic goods; they do a lot of work to reshape markets so those markets can accept radically new categories of goods. For example, getting people to accept high-quality, but expensive, produce is the work that Whole Foods did in the grocery market about twenty years ago. Such people, who reshape old markets into new ones, Khaire calls "pioneer entrepreneurs." Similarly, Khaire identifies people who add value through their ability to provide commentary on products that need explanation.

The strong point of Culture and Commerce is that Khaire digs deep into the production chain of artistic goods. There are market actors who specialize in bringing in new products, those who specialize in educating the audience, and those who add quality signals (e.g., giving awards). It's a very rich account of entrepreneurship that many blog readers will enjoy. Recommended!

Written by fabiorojas

November 28, 2017 at 5:08 am

does piketty replicate?

Richard Sutch reports in Social Science History that Piketty does not replicate very well:

This exercise reproduces and assesses the historical time series on the top shares of the wealth distribution for the United States presented by Thomas Piketty in Capital in the Twenty-First Century. Piketty’s best-selling book has gained as much attention for its extensive presentation of detailed historical statistics on inequality as for its bold and provocative predictions about a continuing rise in inequality in the twenty-first century. Here I examine Piketty’s US data for the period 1810 to 2010 for the top 10 percent and the top 1 percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top 10 percent for the period 1870 to 1970 are unreliable. The values he reported are manufactured from the observations for the top 1 percent inflated by a constant 36 percentage points. Piketty’s data for the top 1 percent of the distribution for the nineteenth century (1810–1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only tenuous information about the trend in inequality during the Gilded Age. The values Piketty reported for the twentieth century (1910–2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. This article offers an alternative picture of the trend in inequality based on newly available data and a reanalysis of the 1870 Census of Wealth. This article does not question Piketty’s integrity.

The point isn't that inequality hasn't risen. Like most social scientists, I am of the view that, for various reasons, it has. But it is important to get the magnitudes right, since they can support or undermine other hypotheses about wealth accumulation. Sutch's article shows that Piketty made a good effort, but one that depends on some questionable choices. Let there be more discussion of this issue.
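Sutch's core complaint about the 1870–1970 top-10-percent series is a matter of simple arithmetic: a series built by adding a constant 36 percentage points to the top-1-percent series cannot contain any independent information about the top decile. A sketch with invented numbers (these are not Piketty's or Sutch's actual figures):

```python
# Hypothetical illustration of Sutch's point. The top-1% wealth shares
# below are invented for illustration only.
top1 = {1870: 32.0, 1910: 40.0, 1930: 44.0, 1970: 28.0}  # shares in percent

# A "manufactured" top-10% series: top-1% share + a constant 36 points.
top10_manufactured = {year: share + 36.0 for year, share in top1.items()}

# Every change in the constructed series is, by construction, identical
# to the change in the top-1% series; the top decile adds nothing new.
years = sorted(top1)
changes_top1 = [top1[b] - top1[a] for a, b in zip(years, years[1:])]
changes_top10 = [top10_manufactured[b] - top10_manufactured[a]
                 for a, b in zip(years, years[1:])]
assert changes_top1 == changes_top10

print(top10_manufactured)
```

So any trend one reads off the manufactured top-10% series is really just the top-1% trend restated, which is why Sutch treats it as unreliable for that century of data.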

Written by fabiorojas

November 13, 2017 at 5:44 am

book spotlight: the inner lives of markets by ray fisman and tim sullivan

The Inner Lives of Markets: How People Shape Them and They Shape Us is a “popular economics” book by Ray Fisman and Tim Sullivan. The book is a lively discussion of what one might call the “greatest” theoretical hits of economics. Starting from the early 20th century, Fisman and Sullivan review a number of the major insights from the field of economics. The goal is to give the average person a sense of the interesting insights that economists have come up with as they have worked through various problems such as auction design, thinking about social welfare, behavioral economics, and allocation in a world without prices.

I've taken a bit of economics in my life, and I'm somewhat of a rational choicer, so I am quite familiar with the issues that Fisman and Sullivan talk about. I think the best reader for the book might be a smart undergrad or a non-economist social scientist/policy researcher who wants a fun and easy tour of more advanced economics. They'll get lots of interesting stories, like how baseball teams auction off player contracts and how algorithms are used to manage online dating websites.

What I like a lot about the book is that it doesn't employ the condescending "economic imperialist" approach to economic communication, nor does it offer a Levitt-esque "cute-o-nomics" approach. Rather, Fisman and Sullivan explain the problems that actually occur in real life and then describe how economists have proposed to analyze or solve them. In that way, modern economics comes off in a good light – it's an important toolbox for thinking about the choices that individuals, firms, and policy makers face. Definitely good reading for the orgtheorist. Recommended!

Written by fabiorojas

October 19, 2017 at 4:08 am

when was the last time an economist rocked the social sciences?

Question: When was the last time an economist had a big impact outside economics? It's been a while. Gary Becker might be the best example, but that was in the 1970s – forty years ago!!! Of course, there are individual papers or research findings that attract interest (e.g., Deaton's recent work on mortality), but more recent examples of work that changed areas outside economics are hard to find. For example, Steve Levitt is hugely popular, but he hasn't changed the way people think about areas outside of economics. At best, the big message of early 2000s "cute-o-nomics" is that we can try harder to find clean identification in naturally occurring data. Not a bad message, but not epic, either. And a lot of people were kind of doing that already.

More recently, one might think of Daron Acemoglu, for his massive work on development, or Esther Duflo, for field experiments. Both are clearly high-impact scholars, but I'd guess that they are high impact within specific areas. You don't see conferences on the theoretical implications of Duflo or Acemoglu for other disciplines, or even for areas outside of their expertise. Their work doesn't travel the way Becker's did, or the way game theory or early econometrics did.

Why? It's unclear to me. In terms of quality, the average economist is probably stronger than in the past. On the other hand, most of the training in economics programs is in model building. Culturally, economists have developed a disdain for other areas, so they have little incentive to produce work that speaks to anyone except themselves. Then, there are financial incentives. If your salary is way above those in other disciplines, and you have great job prospects, influencing other fields probably isn't worth your time. The only thing worth your time is really impressing elites within the field. Not a bad thing per se, but it is not the right environment for work that will reverberate across the academy. Maybe the simplest explanation is low-hanging fruit: you have a big impact by bringing a simple idea to an adjacent area. Once that is done, all you are left with are hard problems that only insiders care about.

That’s too bad. I love sociology but I also feel excitement and challenge when a major figure steps up and offers a new way forward. I’d like to see more of it. Not just from sociology, but also from other fields.

Written by fabiorojas

August 29, 2017 at 12:01 am

democracy in chains symposium

Over at Policy Trajectories Josh McCabe has organized a great symposium on Nancy MacLean’s Democracy in Chains, the most dramatic book on public choice theory you are ever likely to read.

Featuring economist Sandy Darity, political scientist Phil Rocco, and yours truly, the essays try to get beyond some of the Sturm und Drang associated with the book's initially glowing critical reception and intense subsequent backlash. Instead, they ask: How is political orthodoxy produced and challenged? What responsibility do individuals bear when their actions reinforce institutionalized racism? And what explains increasing efforts to put limits on democracy itself?

MacLean’s book may itself be highly polarizing. But the conversations she has opened up will be with us for a long time to come. Check it out.

Written by epopp

August 24, 2017 at 12:15 pm

the democrats can’t decide how radical they want to be on antitrust

The other day I wrote about the current moment in the spotlight for antitrust. (Here’s the latest along these lines from Noah Smith.) Today I’ll say something about the new Democratic proposals on antitrust and how to think about them in terms of the larger policy space.

The Democrats are basically proposing three things. First, they want to limit large mergers. Second, they want active post-merger review. Third, they want a new agency to recommend investigations into anticompetitive behavior. None of these—as long as you don’t go too far with the first—is totally out of keeping with the current antitrust regime. And by that I mean however politically unlikely these proposals may be, they don’t challenge the expert and legal consensus about the purpose of antitrust.

But the language they use certainly does. The proposal’s subhead is “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power”. The first paragraph says that concentration “hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.” The next one states that “concentrated market power leads to concentrated political power.” This is political language, and it goes strongly against the grain of actual antitrust policy.

Economic antitrust versus political antitrust

Antitrust has always had multiple, competing purposes. The original Progressive-Era antitrust movement was partly about the power of trusts like Standard Oil to keep prices high. But it was also about more diffuse forms of power—the power of demanding favorable treatment by banks, or the power to influence Congress. That’s why the cartoons of the day show the trusts as octopuses, or as about to throw Uncle Sam overboard.

The Sherman Act (1890) and the Clayton Act (1914), the two major pieces of antitrust legislation, are pretty vague about what antitrust is trying to accomplish. The former outlaws combinations and conspiracies in restraint of trade, as well as monopolization and attempts to monopolize. The latter outlaws various behaviors if their effect is "substantially to lessen competition, or to tend to create a monopoly." The courts have always played the major role in deciding what that means.

Throughout the last century, the courts have mostly tried to address the ability of firms to raise prices above competitive levels—the economic side of antitrust. For the last forty years, they have focused specifically on maximizing consumer welfare, often (though not always) defined as allocative efficiency. Since the late 1970s, this has been pretty locked in, both through court decisions, and through strong professional consensus that makes antitrust officials very unlikely to challenge it.

Before the 1970s, though, two things were different. For one thing, the focus was more on protecting competition, and less on consumer welfare per se (the latter was assumed to follow from the former, and was thought of a little more broadly). For another, the courts sometimes took concerns into account other than keeping prices low.

The most common such concern was the fate of small business. Concern for small business motivated the Robinson-Patman Act of 1936, which prohibited anticompetitive price discrimination. It was clear in the Celler-Kefauver Act of 1950, which restricted mergers out of fear that chain stores would eliminate local competition. And the courts acknowledged it in cases like Brown Shoe (1962), which prevented a merger that would have controlled 7% of the shoe market by pointing to Congress’s concern with preserving an “economic way of life” and protecting “local control of industry” and “small business.”

Today, Brown Shoe is seen as part of the bad old days of antitrust, when it was used to protect inefficient small businesses and to pursue confused social goals. This is a strong consensus position among antitrust experts across the political spectrum. While no one thinks that low prices for consumers are the only thing worth pursuing in life, they are the appropriate goal for antitrust because they make it coherent and administrable. Since those experts’ views dominate the antitrust agencies, and have been codified into law through court decisions, they are very resistant to change.

The Democrats’ proposal: radical language, incremental proposals

So when the Democrats start talking about “the abuse of economic and political power,” the effects of concentration on small business, and limiting mergers that “reduce wages, cut jobs, [or] lower product quality,” they are doing two things. First, they are hearkening back to the original antitrust movement, with its complex mix of concerns and its fear of unadulterated corporate power.

Second, they are very much talking about political antitrust, and political antitrust is deeply challenging to the status quo. But their actual proposals are considerably tamer than the fiery language at the beginning, and are structured in a way that doesn’t push very hard on the current consensus. New merger guidelines could make some difference around the margins. Post-merger review would definitely be good, since there’s currently no enforcement of pre-merger conditions that firms agree to, and no good way to figure out which merger approvals had negative effects. I have a hard time seeing a new review agency having much effect, though, since it’s just supposed to make recommendations to other agencies. Even I don’t like bureaucracy that much.

So my read on this is that the Democrats feel like they need a new issue, and it needs to look like it helps the little guy, and they want to sound like populist firebrands. But when you get down to the nitty gritty, they aren’t really so interested in challenging the status quo. That is, basically, they’re Democrats. Still, that the language is in there at all is remarkable, and reflects a changing set of political possibilities.

Next time I’ll look at some of the problems people are suggesting antitrust can solve. Because there are a lot of them, and they’re a diverse group. Tying them together under the umbrella of “antitrust” gives an eclectic political project some nominal coherence. But is it politically practicable? And could it actually work?

Final note: If you are interested in the grand historical sweep of antitrust in capitalism, I recommend Brett Christophers’ The Great Leveler. Among other things, he totally called the emerging wave of interest before it actually happened. Sometimes the very long lens is the right one to use.

Written by epopp

August 3, 2017 at 3:04 pm

why antitrust now?

Antitrust is having a moment. A couple of years ago, with the possible exception of complaining about never-ending airline mergers, no one paid attention to antitrust debates. Today, it’s all over the place. A few months ago, it was the Economist proclaiming “America Needs a Giant Dose of Competition.” Last month it was Amazon and Whole Foods. And now antitrust has become a key plank of the new Democratic platform.

I’ve been thinking about this for a while, but this antitrust explainer written by Matt Yglesias yesterday (which is generally quite good) motivated me to put fingers to keyboard. So I’m going to break this reflection up into three parts: Why antitrust now? What does the new antitrust debate mean? And what would it take for it to succeed? Today, I’ll tackle the first.

At one level, the rise of antitrust interest is just a perfect convening of streams, in the Kingdon sense. A problem (or loose collection of problems) rises to public attention, people are already out there advocating a solution, even if so far unsuccessfully, and—the moment we’re in now—politicians have the motivation to grab that solution and turn it into policy, or at least a platform. It’s just about timing, and it’s not predictable.

At the same time, I think we can unpack a couple of different factors that help us think about “why now”. Some of this is covered in the Yglesias piece. But there are a few things I’d add, and some different angles I’d highlight. So without further ado, here are four reasons antitrust is suddenly getting attention.

1. It’s a reaction to a change in objective conditions.

There is a degree of consensus that market concentration is increasing across the economy. Even if you don’t think concentration is a problem, it wouldn’t be surprising that an increase would lead some people to challenge it, and make media more open to hearing that claim. This is probably a contributing factor. But market concentration has been increasing for a long time, and the link between concentration and exercise of power, whether market power or political power, is at best complicated. I don’t think the rise in concentration explains much of the antitrust attention.

Other phenomena are emerging that are objectively new and raise new questions about how to govern them. Amazon now controls 43% of internet retail sales in the U.S. That's astonishing, and at least a little alarming. But we've now seen several generations of platforms (operating systems, browsers, social networks) rise to dominance and sometimes fall, mostly without much antitrust attention – Microsoft, at the turn of the millennium, being the significant exception. These objective changes are necessary, but definitely not sufficient, conditions for public attention to rise.
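For readers who want to pin down what "market concentration" means quantitatively, the standard summary statistic in antitrust practice is the Herfindahl-Hirschman Index (HHI): the sum of squared percentage market shares. The market shares below are invented for illustration:

```python
# HHI: sum of squared percentage market shares. 10,000 = pure monopoly.
# Shares below are hypothetical, chosen only to illustrate the scale.

def hhi(shares_pct):
    """Herfindahl-Hirschman Index from a list of percentage shares."""
    return sum(s ** 2 for s in shares_pct)

competitive = [10] * 10           # ten equal firms
concentrated = [43, 22, 20, 15]   # one dominant firm (say, a 43% share)

print(hhi(competitive))   # 1000
print(hhi(concentrated))  # 2958
```

Under the agencies' 2010 Horizontal Merger Guidelines, markets below 1,500 are treated as unconcentrated and markets above 2,500 as highly concentrated, so the second (hypothetical) market would draw scrutiny while the first would not.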

2. New actors are organizing around this issue.

A lot of the noise around antitrust is coming from a relative handful of people. Until the Democrats came on board, it was Elizabeth Warren on the political side, and before that Zephyr Teachout, the Fordham law professor who gave Andrew Cuomo a run for his money in 2014.

On the think tank side, as Yglesias notes, it’s the Open Markets Program at New America. Fellow Lina Khan, once of the Teachout campaign, landed an NYT op-ed on Amazon and Whole Foods. Fellow Matt Stoller’s Atlantic article on antitrust, “How Democrats Killed Their Populist Soul,” got a lot of attention when it came out last fall. Barry Lynn, who runs the program, has been working on this issue for a decade.

The Roosevelt Institute is the other significant player in this space. (Here’s a good, if now difficult to read, piece from last summer explaining the history of Roosevelt.) Marshall Steinbaum and others have made the case for a range of antitrust issues on a variety of grounds, and the influence of both these organizations on the new Democratic congressional platform is clearly visible.

There’s no question that this kind of policy advocacy—talking to policymakers, writing articles and op-eds—is making a difference. But its impact has been facilitated by two other things.

3. The space of expertise is changing in unexpected ways.

Antitrust policy is a space heavily dominated by experts. Congress rarely touches antitrust issues. The public rarely pays attention. Presidents generally talk a good antitrust game, and may care more or less about appointing antitrust officials who will pursue a particular policy line. But for the most part, antitrust is dominated by the lawyers and economists who serve in the Antitrust Division and FTC, consult on antitrust cases, write academic articles, and a handful of whom become judges.

And there is bipartisan consensus among these experts that concentration isn’t generally a problem. Markets are contestable. Predatory pricing is irrational, because firms know that if they drive out competitors and then jack up prices, they’ll just attract some new entrant into the market. There’s really no point. Yes, there may be a little more antitrust enforcement among Democrats than Republicans. But it’s a game played “between the 45 yard lines.” As Richard Posner said recently, “Antitrust is dead, isn’t it?”

But this space is changing in interesting ways. The change doesn’t seem to be coming from the antitrust community itself, exactly. But it’s coming from people with the academic clout to be taken seriously.

From one direction, you have people like Jason Furman and Joseph Stiglitz making arguments about labor market monopsony contributing to lower wages and arguing that economic changes require new kinds of antitrust solutions. From another, you have Luigi Zingales overseeing an effort (at the University of Chicago’s Stigler Center, no less) to advocate for stronger antitrust, calling his position “pro-market” rather than “pro-business”. Zingales’ efforts are also notable for bringing in historians, political scientists, and other experts usually not party to the antitrust policy conversation.

None of these people work primarily on antitrust issues or even industrial organization, but they have the status to be taken seriously even if they are not among the usual suspects of antitrust. Their novel arguments have the capacity to shift the expert consensus about antitrust—either mildly, as in Furman’s arguments about the importance of labor monopsony (which don’t require a radical rethinking of the current approach), or more radically, as in Zingales’s advocacy of an antitrust that takes political power seriously.

I’ll discuss these changes more in the next couple of posts, but in terms of explaining “why antitrust now,” the point is that these insider/outsider dissenters are amplifying new voices and new issues, and thus contributing to the current wave of attention.

4. The cultural moment is right for other reasons.

If there’s one belief that seems to unite Americans across the political spectrum these days, it’s that the game is rigged against the ordinary person. For the many Americans who think big business is doing at least some of the rigging, this produces a new openness to arguments about concentration and corporate control. As much as anything else, I think this explains the current interest in antitrust. People are receptive to arguments that purport to explain why they’re being screwed.

Antitrust is a protean issue. It can channel many different types of fears and at least theoretically respond to many different kinds of problems. Whether it can do so effectively, and whether antitrust is the right tool for the job, is a different question. In my next post I’ll try to unpack some of those different problems, why they’re now being linked together under the umbrella of “antitrust,” and draw on some antitrust history to think about what current efforts mean.

Written by epopp

August 1, 2017 at 1:51 pm

why we aren’t behavioral economists: a guest post by nina bandelj, fred wherry, and viviana zelizer

This month is “Money Month” on the blog. We have three utterly amazing and HUGE guests – UC Irvine’s Nina Bandelj, Yale’s Fred Wherry and Princeton’s Viviana Zelizer. This first guest post investigates the boundary between economic sociology and allied disciplines.

Rather than retreat to disciplinary corners, let us begin by affirming our respect for the generative work undertaken across a variety of disciplines. We’re all talking money, so it is helpful to specify what’s similar and what’s different when we do. That’s what we tried to do in our just-published volume Money Talks: Explaining How Money Really Works, where we brought together scholars from sociology, economics, law, political science, anthropology, history, and philosophy. In this post, we address our closest cousins: behavioral economics and cognitive psychology. (Mind you, the first chapter’s author is Jonathan Morduch, who has co-authored a widely used economics principles textbook with Dean Karlan. Morduch’s essay in our book develops the first sustained comparison between economic and sociological approaches to money.)

In our introduction to Money Talks, we illustrate differences between mental accounting and relational approaches with the following example. Consider the case of a child’s “college fund.” Marketing professors Soman and Ahn recount the dilemma faced by one of their acquaintances, an economist, who had the option of borrowing money at a high rate of interest to pay for a home renovation or using money he had already saved in his three-year-old son’s low-interest-rate education account. As a father, he simply could not go through with the more cost-effective option of “breaking into” his child’s education fund. Soman and Ahn use this story to frame how consequential the emotional content of a particular mental account can be. And by mental account, we mean the “set of cognitive operations used by individuals and households to organize, evaluate, and keep track of financial activities” (Thaler 1999: 183).

How does the sociological approach differ?

Note that when managing these accounts, individuals are really managing their relationships with others. The account is thus relational as well as psychological, as individuals engage in what we call relational work. In the anecdote of the college savings account, for instance, we find the parents reluctant to dip into money earmarked for their children’s education. Why? Because these funds represent and reinforce meaningful family ties: they include but transcend individual mental budgeting; the accounts are therefore as relational as they are mental. Suppose a mother gambles away money from the child’s “college fund.” This is not only a breach of cognitive compartments but a relationally damaging violation. Most notably, the misspending will hurt her relationship with her child. But the mother’s egregious act is likely to also undermine her relationship with her spouse, and even with family members or friends who might harshly sanction her misuse of money. These interpersonal dynamics thereby help explain why a college fund functions so effectively as a salient relational earmark rather than only a cognitive category.

We hope that the volume and our ongoing discussions this month encourage other scholars to ask how we can compare, contrast, but also complement our sociological approaches with those of behavioral economists and cognitive psychologists.

What follows will be some focused discussions of how emotions and morality shape money, and why all this matters from a policy perspective.

Forward! Adelante! Let’s Talk!

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($4.44 – cheap!!!!)/Theory for the Working Sociologist (discount code: ROJAS – 30% off!!)/From Black Power/Party in the Street  

Written by fabiorojas

May 15, 2017 at 12:40 am

independent book stores are back!!! a guest post post by clayton childress

Clayton Childress is an Assistant Professor of Sociology at the University of Toronto. While making the case for examining the relationships between fields and reuniting the sociological studies of production and reception, Under the Cover empirically follows a work of fiction from start to finish: all the way from its creation, through its production, selling, and reading.

Three Reasons Independent Bookstores Are Coming Back

 A couple weeks ago, Fabio had a post about the recent rise in brick-and-mortar independent bookstores, suggesting that perhaps they have successfully repositioned themselves as “artisanal organizations” that thrive through the specialized curation of their stock, and through providing “authentic,” and maybe even somewhat bespoke, book buying experiences for their customers.

There’s some truth to this, but in my forthcoming book, I spend part of a chapter discussing the other factors. Here are several of them.

Why the return:

1)     The Demise of the Borders Group, and Shifting Opportunity Space in Brick-and-Mortar Bookselling.

This graph from Statista in Fabio’s original post starts in 2009, lopping off decades of retrenchment in the number of American Booksellers Association member stores. Despite the recent uptick, independent bookstores have actually declined by about 50% since their peak. More importantly, it’s worth noting that even in the graph we see independent bookstores mostly holding steady from 2009 to 2010, with their rise starting in 2011. Why does this matter? As Dan Hirschman rightly hypothesizes in the comments section of the original post, the bankruptcy and liquidation of the Borders Group began in February of 2011, and is key to any story about the return of independent bookstores. To put some numbers to it, between 2010 and 2011 the Borders Group closed its remaining 686 stores, and between 2010 and 2016 – after spending decades in decline – 651 independent bookstores were opened. It’s a pretty neat story of nearly one-to-one replacement between Borders and independents since 2011.*

Yet, if anything, this isn’t so much a surprising story about the continued prevalence of independent bookstores themselves as a story about the continued prevalence of paper as a medium through which people like to consume the types of books that are mostly sold in independent bookstores. When Borders liquidated, people didn’t predict that independents would take its place, but that’s because they had mostly misattributed the bankruptcy of Borders to the rise of eBook technology and Amazon. That story was never quite right, though. Borders’ last profitable year, 2006, mostly predated these supposed causal factors. Instead, Borders’ rise to prominence came through a competitive advantage in its back-end logistics operations, which it then never really updated, and by the mid-2000s it had turned from a market leader into a market trailer. Borders also invested more floor space in selling CDs right when that market started to decline, and then turned that floor space over to selling DVDs right when that market started to decline – their stores were always too big, and they seemed to have a preternatural ability to keep on filling them with the wrong things. As for the role of Amazon and online book sales in the decline of Borders, they did play a part, but not the one that people think. In perhaps one of the least prescient moves in the history of American bookselling, as online bookselling started to take off, Borders decided not to spend resources investing in that market, and instead contracted its online bookselling out to Amazon, helping Amazon on its way to dominance of the market. Oh, you dummies.

So, while it was mostly back-end distribution problems, stores that were too big, and a series of bad bets that tanked Borders, its demise was never really about a lack of demand for print books, and that allowed independents to fill the market space after Borders disappeared. For independent used bookstores (which have always had as much of a supply problem as a demand problem), advances in back-end supply systems have in fact made them more viable.

2)     Independent Bookstores are the Favored Trading Partners of the Publishing Industry.

Starting during the Great Depression, in order to keep bookstores in business, book publishers began letting them return any (damaged or undamaged) unsold books, meaning that for nothing more than the cost of freight bookstores could pack books up to the ceiling without taking on much financial risk on stocking decisions (if you’ve ever been curious why so many bookstores seem so overstuffed with product, here’s your answer).

It was the beginning of a long history of cooperation between publishers and sellers, and the cooperation has never been more friendly than it is between publishers and independent stores. Publishers and bookstores want the same thing: for people to go into bookstores looking for the books that are actually in stock. With about 300,000 new industry-published books coming out per year, that’s no small feat. For this reason, cooperation between publishers and independents is key, and they rely on an informal system of gift exchange, the details of which I go into in my book.

With the rise of chain bookstores such as Walden, Crown, Barnes & Noble, and Borders, this cooperation became formalized as “co-op,” a system in which publishers nominate their books, and if they are chosen for co-op by the seller, then pay to have their books placed on front tables and endcaps across the country. The basic shorthand is that it costs a publisher about a dollar per copy to get their book on a front table at Barnes & Noble, which is very roughly the same amount that an author gets paid per copy to write the book in her advance (talk to any publisher for long enough and they’ll grind their teeth while noting this).

The chains’ “co-op” developed out of the cooperation system with independents, but a publisher’s relationship with Amazon is closer to coercion. With the chains, publishers can decide to nominate for “co-op” or not, but as soon as a publisher sells a book on Amazon they’ve already entered into an enforced “co-op” agreement, in which usually around 6-8% of all of their revenue from selling on Amazon is withheld, and must be used to advertise on Amazon for future titles. This tends to get talked about less as “coercion” and more as “just the way things are” – it’s what happens when you have a retailer that dominates the space enough to set its own terms.

As a result, while book publishers like independent bookstores because they believe them to be owned and staffed by true book lovers (Jeff Bezos was famously uninterested in books when launching Amazon – books are just fairly durable objects of standard size and shape and therefore ship well, making them a good test market for the early days of ecommerce), they also do everything they can to support independent bookstores because their trading terms with them are most favorable to publishers. In the most extreme cases, we can see publishing professionals collaborate in opening their own independent bookstores, but more generally, they engage in subtler forms of support: getting their big-name authors to smaller places, maybe over-donating a little bit to the true cost of printing flyers, and covering the cost of wine and cheese for when the author gets there. Rather than doing this out of the goodness of their hearts, however, publishers do it because independent bookstores are good for them to have around, as they’re the only booksellers who are too small and diffuse to make publishers do things.

3)    A Further Reorientation to Niche Specialization at Independents

Here we get to artisanal organizations, and the independent bookstores that are sticking around (or even more importantly, opening) have mostly given up aspirations of being generalists. In Toronto, we’ve got an independent bookstore which specializes in aviation, another for medieval history, and a third which has found a niche in discount-priced theology.** They’re like the Cascade sour beers to Barnes & Noble’s pilsners. While it’s definitely a trend, it’s not one I’d trace back just to 2010; the artisanal organization market position is one that independent bookstores have been relying on since at least the 1980s.

In addition to just being niche, while independent hardware stores and grocers were going the way of the dodo, independent bookstores were also able to both capture and foment the formation of the “buy independent” social movements of the 1990s. It’s not many retail outlets that can successfully advocate for their mere existence as a public good. For instance, when was the last time that the New York Times unironically quoted somebody referring to the closing of an independent laundromat halfway across the country as a civic tragedy? As generalist independent bookstores have come to terms with their inability to compete on breadth with Barnes & Noble and Amazon, we see not only a transition to niche sellers, but also more sellers overall, as each one tends to take up a smaller footprint and have lower overhead costs than the independents of the past.

***

Of course, while there has been a rise in the number of independent bookstores in the 2010s, we shouldn’t overstate it, or be certain that it will continue. At the end of the day – and nobody likes to admit this – we’re talking about a segment that makes up less than 10% of industry sales and is still way down from its peak. It took one of the two major brick-and-mortar chains going out of business for this return to happen, but if Barnes & Noble goes under, it will upend any balance left between Amazon and everyone else. Yet unlike the industries for music and journalism, a preference for analog books among a major segment of the market doesn’t seem to be going away. Maybe if Barnes goes under we’ll instead be graphing the rise of brick-and-mortar bookstores by Amazon, and romantically pine for the good old days of Barnes as the industry villain.

*If you’re a cynic, or even just a careful optimist, you’re also going to want to factor in the 80 stores Barnes & Noble has closed since 2010. So, since 2010 that’s a loss of 766 big brick-and-mortar bookstores which were selling a lot of books, and a gain of 651 generally much smaller brick-and-mortar bookstores which are generally selling many fewer books. Yet the number of physical books sold hasn’t really declined, and has actually increased for three years running (for reasons that are the subject of another post). In any case, the difference has been made up by Amazon.

**H/T to Christina Hutchinson and Chanmin Park, two undergraduate students in my Culture, Creativity, and Cities course, for these examples. You can see some of their work on bookstores, as well as other students’ great (and in progress!) work from this semester on Toronto martial arts studios, Korean and Indian restaurants, religious centers, food festivals, and so on here.


 

Written by fabiorojas

March 30, 2017 at 12:21 am

is ethnography the most policy-relevant sociology?

The New York Times – the Upshot, no less – is feeling the love for sociology today. Which is great. Neil Irwin suggests that sociologists have a lot to say about the current state of affairs in the U.S., and perhaps might merit a little more attention relative to you-know-who.

Irwin emphasizes sociologists’ understanding “how tied up work is with a sense of purpose and identity,” quotes Michèle Lamont and Herb Gans, and mentions the work of Ofer Sharone, Jennifer Silva, and Matt Desmond.

Which all reinforces something I’ve been thinking about for a while—that ethnography, that often-maligned, inadequately scientific method, is the sociology most likely to break through to policymakers and the larger public. Besides Matt Desmond’s Evicted, what other sociologists’ work has made it into the consciousness of policy types in the last couple of years? Of the four authors who immediately pop to mind—Kathy Edin, Alice Goffman, Arlie Hochschild and Sara Goldrick-Rab—three are ethnographers.

I think there are a couple reasons for this. One is that as applied microeconomics has moved more and more into the traditional territory of quantitative sociology, it has created a knowledge base that is weirdly parallel to sociology, but not in very direct communication with it, because economists tend to discount work that isn’t produced by economics.

And that knowledge base is much more tapped into policy conversations because of the status of economics and a long history of preexisting links between economics and government. So if anything I think the Raj Chettys of the world—who, to be clear, are doing work that is incredibly interesting—probably make it harder for quantitative sociology to get attention.

But it’s not just quantitative sociology’s inability to be heard that comes into play. It’s also the positive attraction of ethnography. Ethnography gives us stories—often causal stories, about the effects of landlord-tenant law or the fraying safety net or welfare reform or unemployment policy—and puts human flesh on statistics. And those stories about how social circumstances or policy changes lead people to behave in particular, understandable ways, can change people’s thinking.

Indeed, Robert Shiller’s presidential address at the AEA this year argued for “narrative economics”—that narratives about the world have huge economic effects. Of course, his recommendation was that economists use epidemiological models to study the spread of narratives, which to my mind kind of misses the point, but still.

The risk, I suppose, is that readers will overgeneralize from ethnography, when that’s not what it’s meant for. They read Evicted, find it compelling, and come up with solutions to the problems of low-income Milwaukeeans that don’t work, because they’re based on evidence from a couple of communities in a single city.

But I’m honestly not too worried about that. The more likely impact, I think, is that people realize “hey, eviction is a really important piece of the poverty problem” and give it attention as an issue. And lots of quantitative folks, including both sociologists and economists, will take that insight and run with it and collect and analyze new data on housing—advancing the larger conversation.

At least that’s what I hope. In the current moment all of this may be moot, as evidence-based social policy seems to be mostly a bludgeoning device. But that’s a topic for another post.

 

Written by epopp

March 17, 2017 at 2:04 pm

hayek and the edge of libertarian reason

I recently had the opportunity to read a whole boatload of F.A. Hayek: Constitution of Liberty; The Use of Knowledge in Society; Law, Legislation and Liberty; and more. This in-depth rereading of Hayek helped me resolve a certain sociological puzzle concerning the Austrian economist’s reputation. How could he be the patron saint of laissez-faire while saying very nice things about welfare states and attracting positive commentary from a range of liberal and radical thinkers, such as Foucault?

Here is my answer: I think Hayek’s work resides on a boundary between libertarian social theory and modern liberalism. I’m going to argue that Hayek is the least libertarian you can be and still be, sort of, a libertarian. Because he is not a libertarian in the modern sense of grounding things strongly in terms of individual rights, it’s easy for non-libertarians to find a connection.

Exhibit A: Hayek never lays out a theory of freedom based on individual rights the way many libertarians do. For example, in Constitution of Liberty, he doesn’t start with natural rights and he doesn’t start with a utilitarian justification of freedom. Rather, for him, freedom is about autonomy. Given certain choices, does someone have a sphere of independent judgment free from coercion from others? Thus, this version of freedom is compatible with state policies that try to increase this private sphere of judgment. Also, he frequently emphasizes equality under the law and rule of law as prime virtues, even if they don’t enhance freedom in the everyday sense of the word.

Exhibit B: The Road to Serfdom. It’s a text that is more talked about than read. But if you read it, you discover that it is not an argument against every single form of state intervention. Rather, it’s mainly an argument against Soviet-style command economies and Westerners who want to nationalize various industries in the name of equality. Secondarily, he also wants to rein in state regulators who wish to coerce people for their own bureaucratically determined goals.

Exhibit C: In other writings, he endorsed a basic income. And he does argue for the legitimacy of taxation. See Matt Zwolinski’s essay on this topic. Zwolinski argues that these policies were likely justified for Hayek because they increase personal autonomy (see Exhibit A), and I think they were OK in Hayek’s view because they were less about top-down ordering of the economy or administrative tyranny and more about allocating resources to everyone in ways that could help them expand their freedom (Exhibit B).

Exhibit D: Spontaneous order theory. Basically, a whole lot of Hayek’s later social theory is about arguing why social structures can still work and are desirable if they are not top down command structures. That doesn’t lead immediately to libertarianism because you can have spontaneous order that has nothing to do with freedom in either Hayek’s view or the more modern libertarian view. For example, systems of race relations are not top down structures, but they often restrain people in cruel ways.

Taken together, Exhibits A, B, C and D paint an intellectual who has the following traits: (a) Very, very anti-socialist; (b) has a version of freedom that is very agnostic with respect to the wide range of policies that are not socialist; (c) provides grounds for both conservative and liberal policies via a respect for tradition/spontaneous order and freedom/autonomy expansion. It’s a very modest form of libertarianism that gives away a lot of ground to other philosophies.

Does that mean that we’ve all misunderstood Hayek? It depends. If you think that Hayek was this evil economist who advocated the most strict version of libertarianism, then that’s probably mistaken. But if you think of Hayek as representing a very mellow form of libertarianism that overlaps with other political traditions, you’re probably on target.


Written by fabiorojas

January 2, 2017 at 12:59 am

book forum 2017: turco and granovetter

Hi, everyone! As the year winds up, I’d like to announce two book fora:

Please order the books now!**


* Holy smokes, yes, the Granovetter book is coming out. We have heard of this sacred text for years and now… my precious… my precious…

** And yes, editors who read this blog should send me free copies!!

Written by fabiorojas

December 22, 2016 at 12:15 am

honey, we have to talk about sears

A little while back, I asked how Sears was able to survive as a firm. Once a titan of the American economy, Sears was now a shell of its former self. From a write up in Salon (!) magazine:

Sears Holdings, which owns Sears and Kmart, reported on Thursday a loss of $748 million for the three months ending on Oct. 29. This is the company’s 20th consecutive quarterly loss, and worse than the $454 million loss the company posted in the same period last year. Revenue fell nine percent last quarter to $5.21 billion. Same-store sales, a key retail metric, dropped 10 percent at Sears and 4 percent at Kmart. The company lost $1.6 billion in the first ten months of the year, compared to $549 million in the same period last year, according to its regulatory filing.

These grim numbers were announced a week after the departure of two top-level executives: James Balagna, an executive vice president in charge of the company’s home-repair services and technology backbone, and Joelle Maher, the company’s president and chief member officer. Former Goldman Sachs banker Steve Mnuchin also resigned from the Sears board last week after President-elect Donald Trump nominated him to head the Treasury Department.

When we discussed Sears, CKD suggested the issue wasn’t firm profitability. It was the relative benefits of bankruptcy court vs. a massive real estate sell off. If so, then the pattern of executive hires and behaviors makes sense. But that raises a deeper point. Why didn’t Sears keep up with the rest of the retail market?

Jeff Sward, founding partner of retail consultant Merchandising Metrics, doesn’t share Hollar’s optimism.

“What does Sears stand for?” Sward told Salon. “Sears unfortunately stands for so many different things that I don’t think there’s anything that’s a standout. I would go to Sears for appliances and tools, but I’ve certainly never thought of them as a headquarters for apparel.”

Sward says the issue isn’t that Sears doesn’t have good products and competitive prices. Instead, he said, the problem facing Sears is that it isn’t the first choice for buyers of any of its core product categories. If consumers need tools, they go to Home Depot or Lowe’s. If they want outdoor or work apparel, it’s Dick’s Sporting Goods, not Sears. Electronics and home appliances? That’s for Best Buy. And who’s buying apparel and shoes at Sears?

The bottom line is that the department store model of the early 1900s is incredibly hard to sustain in the modern environment. With the discovery of the “big box” model by Home Depot and the online model of Amazon, a lot of department store chains either folded or refocused. Sears, with way too much real estate and a sluggish executive team, couldn’t make the pivot. Not surprisingly, you then attract investors who are more interested in hollowing out the firm, like the Sears/Kmart holding group that also took on Borders before it died.


Written by fabiorojas

December 12, 2016 at 12:36 am

how we abandoned the idea that media should serve the public interest

Yesterday the New Republic wrote about how little attention has been paid to policy in the current election. In 2008, the network news programs devoted 220 minutes to policy; this year, it’s been a mere 32 minutes.

The piece goes on to bemoan the decline of the public-interest obligation once held by broadcasters (and which still remains, in vestigial form) in exchange for their use of the airwaves, and to connect the dots between the gradual removal of those restrictions and the toxic media environment we find ourselves in today. While — I think appropriately — the article doesn’t overemphasize the causal effects, it does highlight a broader shift that was going on in the 1970s and is still echoing today.

The 1970s saw a wide, bipartisan embrace of the deregulatory spirit in many areas. The transportation industries — air, rail, trucking — were one chief target. Banking was in there. So was energy. More controversial, and less bipartisan, was the push for the removal of new social regulations—rules meant to improve the environment, health, and safety. But even when it came to social regulation, both sides believed in regulatory reform. (I’ve recently written about some of this history.)

Economists were one group that made a strong case for economic deregulation — the removal of price and entry barriers in industries like transportation, energy, and finance. (For the definitive account, see Martha Derthick and Paul Quirk’s 1985 book.) Their role in airline deregulation, led by the colorful Alfred “To me, they’re all just marginal costs with wings” Kahn, is probably best known. But economists also had something to say about the Federal Communications Commission.

Perhaps the most famous — certainly one of the earliest — critics of the FCC was Ronald Coase. Coase argued in 1959 that there was no good reason, technical or economic, for the government to own the airwaves, and made the case for auctioning off the radio spectrum. He was not at all impressed with the argument that licenses should be distributed according to the “public interest”, and emphasized not only the legal ambiguity of that standard, but the fact that the FCC’s decisions reflected “a degree of inconsistency which defies generalization.”

At the time, the idea of the airwaves as a public trust was so universally accepted that Coase’s views seemed quite radical, even to other economists. When, in 1962, he extended his argument into a 200-page RAND report, coauthored with Bill Meckling and Jora Minasian, RAND quashed it for being too incendiary. Later, recalling these events, Coase quoted an internal review of the paper: “I know of no country on the face of the globe—except for a few corrupt Latin American dictatorships—where the ‘sale’ of the spectrum could even be seriously proposed.”

By the early 1970s, though, a new consensus had emerged in economics around questions of regulation, and this consensus saw FCC demands that broadcasters behave in unprofitable ways not as acting in the “public interest,” but as a source of efficiency losses that should, at a minimum, be regarded skeptically. This aligned with increasingly loud arguments from outside of economics (as well as within) about regulatory capture, which implied that the “public interest” pursued by executive agencies would never be more than a sham, anyhow.

Eventually, this shift in mood led to a change in how the FCC regulated broadcasters. The public interest standard was loosened, and in 1981 the agency began to shift from using hearings to allocate spectrum licenses — in theory to the applicants that best served the public interest — to lottery. In 1994, it moved another step closer to Coase’s prescription, beginning to auction off the licenses — a move that stimulated a great deal of research in auction theory as well as generating substantial revenue.

The “public interest” goal, which had initially been baked into the allocation process (however poorly it was pursued in practice) became increasingly marginalized. Or perhaps it was subsumed within the assumed public interest in encouraging efficient use of the spectrum. The process echoes the one that took place in antitrust policy, in which historically significant goals other than allocative efficiency — goals that often conflicted with efficiency and even with each other — were gradually defined as being simply beyond the scope of what could be considered. (Indeed, Coase’s criticism of the inconsistency of the FCC’s behavior sounds quite similar to Justice Stewart’s scathing critique of merger law, written around the same time: “the sole consistency I can find is that under Section 7 [the merger section of the Clayton Act], the Government always wins.”)

I don’t know enough about the history of the FCC to have an informed opinion on whether the public interest standard as it stood circa 1970 was redeemable or if the agency was irreparably captured. And I definitely don’t think the decline of that standard is the main explanation for the current media environment, which goes far beyond television.

But I do think that the demise of the idea that we should expect media to have obligations beyond profit — which is bound up with the ideal, if not the practice, of the public interest standard — is a big contributor. Individual journalists — that increasingly rare breed — may remain professionally committed to an ethical code and a sense of mission that isn’t primarily about sales. But at the corporate level, any such qualms were abandoned long ago, and the journalistic wall between “church and state” — editorial and advertising — continues to crumble.

What this means is that we get political news that is just horse race coverage, and endless examination of the ugliest aspects of politics — which, unsurprisingly, encourages more of the same. Actually expecting media to pursue the “public interest”, whether through regulatory means or professional commitment, may be unrealistically idealistic. But giving up on the concept entirely seems certain to take us further down the path in which objective lies merit just as much attention as truth.

Written by epopp

November 3, 2016 at 11:39 am

Posted in economics, policy

no complexity theory in economics

Roger E. Farmer has a blog post on why economists should not use complexity theory. At first, I thought he was going to argue that complexity models have been disproven or that they rest on unreasonable assumptions. Instead, he simply says we don’t have enough data:

The obvious question that Buzz asked was: are economic systems like this? The answer is: we have no way of knowing given current data limitations. Physicists can generate potentially infinite amounts of data by experiment. Macroeconomists have a few hundred data points at most. In finance we have daily data and potentially very large data sets, but the evidence there is disappointing. It’s been a while since I looked at that literature, but as I recall, there is no evidence of low dimensional chaos in financial data.

Where does that leave non-linear theory and chaos theory in economics? Is the economic world chaotic? Perhaps. But there is currently not enough data to tell a low dimensional chaotic system apart from a linear model hit by random shocks. Until we have better data, Occam’s razor argues for the linear stochastic model.

If someone can write down a three equation model that describes economic data as well as the Lorentz equations describe physical systems: I’m all on board. But in the absence of experimental data, lots and lots of experimental data, how would we know if the theory was correct?

On one level, this is a fair point. Macroeconomics is notorious for having sparse data. We can’t re-run the US economy under different conditions a million times. We have quarterly unemployment rates and that’s it. On another level, this is an extremely lame criticism. One thing that we’ve learned is that we have access to all kinds of data. For example, could we have m-turkers participate in an online market a million times? Or could we mine eBay sales data? In other words, Farmer’s post doesn’t undermine the case for complexity. Rather, it suggests that we might search harder and build bigger tools. And, in the end, isn’t that how science progresses?
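For the curious, here is a minimal sketch of the phenomenon Farmer is gesturing at: the Lorenz system’s sensitive dependence on initial conditions. This is my own toy illustration, not anything from Farmer’s post, using crude Euler integration and the standard parameters.

```python
# A toy illustration (mine, not Farmer's) of "low dimensional chaos": crude
# Euler integration of the Lorenz system with the standard parameters.
# Two trajectories that start a hair apart end up in very different places.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, steps):
    path = [state]
    for _ in range(steps):
        state = lorenz_step(state)
        path.append(state)
    return path

a = trajectory((1.0, 1.0, 1.0), 3000)
b = trajectory((1.0, 1.0, 1.0 + 1e-6), 3000)  # perturb z by one part in a million

# Euclidean distance between the two endpoints: the tiny perturbation grows
# to roughly the scale of the attractor itself.
dist = sum((u - v) ** 2 for u, v in zip(a[-1], b[-1])) ** 0.5
print(dist)
```

Farmer’s point is that a few hundred macro data points can’t tell output like this apart from a linear model hit by random shocks, which is exactly why more and bigger data sources matter.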


Written by fabiorojas

October 7, 2016 at 12:39 am

the pager paper, sociological science, and the journal process

Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.

First, it may be the case that Professor Pager directly went to Sociological Science without trying another peer reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don’t need the lengthy and cumbersome developmental review system to get work out there.

Second, it may be the case that Professor Pager tried another journal, probably ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question – what specifically was “wrong” with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can’t fault the paper for lacking a “framing,” and its research design is simple and clean. One can’t critique the statistical technique, because it’s a simple comparison of means. One can’t critique the importance of the finding – the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.

Perhaps the only criticism I can come up with is a sort of “identification fundamentalism.” Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can’t infer anything from the correlation. That is bizarre, because it would render Becker’s thesis untestable. What experimental design would allow you to get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian – you collect high quality observational data and revise your beliefs accordingly. This criticism, if it was made, doesn’t hold up on reflection. I wonder what the grounds for rejection could possibly be, aside from knee-jerk anti-rational-choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.
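For readers who haven’t run an audit study, “a simple comparison of means” here just amounts to comparing rates across two groups. A sketch with made-up numbers (these are illustrative, not Pager’s estimates):

```python
# What "a simple comparison of means" looks like here: a difference in rates
# across two groups with a large-sample standard error. All numbers are made
# up for illustration; they are not Pager's estimates.

import math

def compare_proportions(k1, n1, k2, n2):
    """Difference in proportions and its large-sample standard error."""
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2, se

# Say 36 of 100 discriminating firms survive vs. 64 of 100 non-discriminators.
diff, se = compare_proportions(36, 100, 64, 100)
print(round(diff, 2), round(diff / se, 1))  # a 28-point gap, z of about -4
```

Nothing exotic: the whole statistical apparatus fits in a dozen lines, which is part of why a methodological rejection would be puzzling.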

Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper “out there” or just got tired of the review process. Either way, three cheers for Pager and the Soc Sci Crew.


Written by fabiorojas

September 28, 2016 at 12:10 am

gary becker 1, rational choice haters 0

One of the most striking arguments of Gary Becker’s theory of discrimination is that racial discrimination carries a cost. If you hire people based on personal taste rather than job skills, your competitors can hire those better workers and you operate at a disadvantage. I think the strong version of the argument isn’t right. Markets do not instantly weed out discriminators. But the weak version has a lot of merit. If you truly avoid workers based on race or gender, you are giving away a huge advantage to the competition.
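Becker’s logic is easy to put in simulation form. Here is a toy sketch (the setup and numbers are entirely mine, not Becker’s model): a discriminating firm hires the best applicant from only one group, while a taste-blind rival considers everyone.

```python
# A toy simulation of Becker's point (setup and numbers are mine, not
# Becker's): the discriminating firm hires the best applicant from one group
# only; the taste-blind firm considers the same applicants plus the rest.

import random

rng = random.Random(0)

def best_hire(group_sizes):
    # Applicant productivity is uniform on [0, 1), identical across groups.
    pools = [[rng.random() for _ in range(n)] for n in group_sizes]
    favored_best = max(pools[0])                           # discriminator's hire
    overall_best = max(p for pool in pools for p in pool)  # blind firm's hire
    return favored_best, overall_best

trials = [best_hire([10, 10]) for _ in range(2000)]
discriminator = sum(d for d, _ in trials) / len(trials)
blind = sum(b for _, b in trials) / len(trials)
print(round(discriminator, 3), round(blind, 3))  # blind firm wins on average
```

The gap is the “cost of discrimination”: by construction the blind firm can never do worse, and on average it does strictly better.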

Well, turns out that Becker was right, at least in one data set. Devah Pager has a new paper in Sociological Science showing that discrimination is indeed associated with lower firm performance:

Economic theory has long maintained that employers pay a price for engaging in racial discrimination. According to Gary Becker’s seminal work on this topic and the rich literature that followed, racial preferences unrelated to productivity are costly and, in a competitive market, should drive discriminatory employers out of business. Though a dominant theoretical proposition in the field of economics, this argument has never before been subjected to direct empirical scrutiny. This research pairs an experimental audit study of racial discrimination in employment with an employer database capturing information on establishment survival, examining the relationship between observed discrimination and firm longevity. Results suggest that employers who engage in hiring discrimination are less likely to remain in business six years later.

Commentary: I have always found it ironic that sociologists and non-economists have resisted the implications of taste-based discrimination theory. If discrimination in markets is truly not based on performance or productivity, there must be *some* consequence. However, a lot of sociologists have a strong distrust of markets that draws their attention away from this rather simple implication of price theory. I don’t know the entire literature on taste-based discrimination, but it’s good to see this appear.


Written by fabiorojas

September 22, 2016 at 12:20 am

the three student loan crises

Among higher ed policy folks, there’s a counter-conventional wisdom that there is no student loan crisis. For the most part (the story goes), student loans are a good investment that will increase future wages, and students could borrow quite a bit more before the value of the debt might be called into question. Indeed, some have argued that many students are too reluctant to borrow, and should take on more debt.

Just this month, two new pieces came out that reiterate this counter-narrative: a book by Urban Institute economist Sandy Baum, and a report by the Council of Economic Advisers. Yes, everyone agrees the system’s not perfect, and tweaks need to be made. (Susan Dynarski, for example, argues that repayment periods need to be longer.) Fundamentally, though, the system is sound. Or so goes the story.

What can we make of this disconnect between the conventional wisdom—that we are in the throes of a student loan crisis—and this counter-conventional story?

To understand it, it’s worth thinking about three different student loan crises. Or “crises”, depending on your sympathies.

First, there’s the student who has accrued six figures of debt for an undergraduate degree. Ideally, for media purposes, this is a degree in women’s studies, art history, or some other easily-dismissible field. The New York Times specialized in these for a while.

Since student loan debt is not bankruptable, these people really are kind of screwed, although income-based-repayment options have improved their options somewhat. And they make for a dramatic story—as well as lots of moralizing in the comments.

Second, there’s the student who took on debt but didn’t finish a degree. These people often struggle, because their income doesn’t go up much, if at all. In fact, the highest default rates are among those who left school with the smallest debts (< $5000), presumably because they didn’t graduate.

These folks disproportionately attend for-profit institutions, whose degrees have less payoff anyhow, but even more importantly, have abysmal graduation rates. (Community colleges have low graduation rates too, but they’re a lot cheaper.) The debt-but-no-degree people are also kind of screwed, although again, income-based repayment plans can help them a lot, as would a bankruptcy option.

So we’ve got the crisis of people who borrow too much for a four-year degree, and the crisis of people who borrow a little, but don’t complete the degree, often because they’re attending a school whose entire business model is to sign up new students for the purpose of taking their loan money.

These are both problems—even “crises”—but they are solvable. For the first, cap federal loans (including PLUS access) for undergraduate degrees, and make all loans bankruptable, so private lenders are leerier of loaning large amounts to students.

For the second—well, I’d probably be comfortable eliminating aid to for-profits, but let’s say that’s beyond the political pale. Certainly we could place a lot more limitations on which institutions are eligible for federal aid, whether that’s tied to graduation rates, default rates, or some other measure. And, again, making student loans bankruptable would help people who really needed to get a fresh start.

Wait, so what’s the third crisis?

The thing is, these two “crises” may be devastating to individuals, but in societal terms aren’t that big. The six-figure-debt one really drives policy wonks crazy, because every student debt story in the last ten years has led with this person, but the percentage of students who finish four-year degrees with this many loans is very small. Like maybe a couple of percent of borrowers small.*

Proponents of the Counter-Conventional Wisdom (C-CW) take the second group—those who borrow but don’t finish a degree—more seriously. This group is often really hurting, despite having smaller loan balances. But they only make up perhaps 20% of borrowers, and since their balances are relatively low, an even smaller fraction of that $1.4 trillion student loan figure we hear so much about.**

The real question—the one that determines whether you think there’s a third crisis—is how you react to the other 75-80% of borrowers. The C-CW crowd looks at them and says, eh, no crisis. These folks come out with four-year degrees, $20,000 or $30,000 of government-issued student loan debt, will pay $300 a month or so for ten years, and then move on with their lives. We could argue about how much of a burden this is for them, but it’s clearly not a crisis in the same way it is for the NYU grad with $150k in loans, or for the Capella University dropout trying to pay back $7500 on $10 an hour.
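That “$300 a month” figure is easy to check with the standard amortization formula. A quick sketch (the 4% rate is my assumption, roughly the recent federal undergraduate rate, not a number from the CEA report):

```python
# Sanity-checking the "$300 a month" figure with the standard amortization
# formula. The 4% interest rate is my assumption (roughly the recent federal
# undergraduate rate), not a number from the post.

def monthly_payment(principal, annual_rate, years):
    """P * r / (1 - (1 + r)**-n), with r the monthly rate and n the number of payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(30_000, 0.04, 10)
print(round(payment, 2))  # about 303.74, i.e., "$300 a month or so"
```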

This C-CW is based on the premise that 1) college is a human capital investment that is worth taking on debt for up to the expected economic payoff, 2) individual borrowing is a reasonable and appropriate way to finance this investment—indeed, more sensible than paying for the costs collectively—and 3) as long as debt is kept to a “manageable” level (as indicated by students not going into default and having access to forbearance when their income is low), then there’s no crisis.

Why this understates the problem

I take issue with this position, though, on at least three fronts.

1. “Typical” student debt is increasing.

Individual borrowing levels are still rising rapidly, and there’s no reason to think we’ve neared a max. A recent Washington Post editorial cited the CEA report as saying that “[t]he average undergraduate loan burden in 2015 was $17,900.” But that’s not what the average graduate holds. That’s what the average loan-holder holds, including those who have already been paying for a number of years. Estimates for the average 2016 graduate, by contrast, are considerably higher—in the $29,000 to $37,000 range—and growing. The fraction of all students who borrow also continues to increase.

College costs keep rising. State budgets are still under pressure. The penalties for not completing college keep increasing. We can only expect loan sizes to continue to go up. At what point does “reasonable borrowing” become “unreasonable burden”? And tweaks like expanding income-based repayment or extending the standard repayment period won’t bend the curve (to borrow from another debate)—if anything, they will enable the further expansion of lending.

2. We are all Capella now.

These debates often overlook the effects of federal aid policy on colleges as organizations, something I’ve written about elsewhere. (The exception is the attention given to the Bennett hypothesis, which suggests that colleges will simply turn federal aid into higher tuition prices.)

But that doesn’t mean organizational effects don’t exist. Continuing to shift the cost burden to individual students is going to accelerate the already intense pressure on public colleges in particular to recruit and retain students, because with students come tuition dollars.

The drive to attract students is already undermining a lot of traditional values in higher education. It encourages schools to spend money on marketing and branding, rather than education. It promotes a consumerist mindset among students who quite reasonably feel that they have become customers. It encourages schools to develop low-value degree programs simply to generate revenue, and recruit students into them regardless of whether the students will benefit.

The values that keep colleges from doing this kind of thing are what separate nonprofits from for-profits in the first place. If they go away, we all become Capella. And allowing “reasonable” lending to keep expanding moves us straight in that direction.

3. It gives up on actual public education.

Ultimately, though, the biggest problem I have with this position is that it gives up entirely on the possibility, and even the value, of real public higher education. It doesn’t matter whether that’s because the C-CW sees it as a pipe dream, or because it sees it as an irrational use of public funds, since individuals benefit personally from their education in the long run.

This post is already too long, so I won’t go into a detailed defense of public higher ed here. But I do want to point out that if you accept the premise of the C-CW—that student loan debt will only become a crisis if it increases individual costs beyond the returns to a college degree—you’ve already given up on public higher education. I’m not ready to do that.

And it looks like I’m not the only one.

 

* This number is actually surprisingly difficult to find. In 2008, it was only 0.2% of undergrad completers, but average debt for new graduates has increased about 40% since then, so the six-figure camp has undoubtedly grown.

** Again, exact numbers hard to pin down. 15% of beginning students who borrowed from the government in 2003-04 had not completed a degree six years later, nor were they still enrolled. This figure has doubtless increased as nontraditional borrowers—who are less likely to finish—have become a bigger fraction of the total pool of borrowers, hence my 20% guesstimate.

Written by epopp

August 3, 2016 at 12:15 pm

a history of “command and control”, or, thomas schelling is behind every door

I’m working on a paper about the regulatory reform movement of the 1970s. If you’ve read anything at all about regulation, even the newspaper, you’ve probably heard the term “command and control”.

“Command and control” is a common description for government regulation that prescribes what some actor should do. So, for example, the CAFE standards say that the cars produced by car manufacturers must, on average, have a certain level of fuel efficiency. Or the EPA’s air quality standards say that ozone levels cannot exceed a certain number of parts per billion. Or such regulations may simply forbid some things, like the use of asbestos in many types of products.

This is typically contrasted with incentive-based regulation, or market-based regulation, which doesn’t set an absolute standard but imposes a cost on an undesirable behavior, like carbon taxes, or provides some kind of reward for good (usually meaning efficient) performance, as utility regulators often do.
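The economists’ case for the incentive-based approach shows up clearly in a toy example (all numbers invented by me): when firms differ in abatement costs, a uniform standard is more expensive than a tax that achieves the same total abatement.

```python
# A two-firm toy (all numbers invented) comparing a uniform standard with a
# tax that achieves the same total abatement. With linear marginal abatement
# costs, total cost is quadratic: cost(a) = 0.5 * slope * a**2.

def cost(abatement, mc_slope):
    return 0.5 * mc_slope * abatement ** 2

# Firm A abates cheaply (slope 1); firm B is expensive (slope 4).
# Target: 10 units of total abatement.
standard = cost(5, 1) + cost(5, 4)  # each firm forced to abate 5 units
# Under a tax t, each firm abates until its marginal cost equals t:
# a_A = t / 1 and a_B = t / 4, so t + t/4 = 10 gives t = 8, a_A = 8, a_B = 2.
tax = cost(8, 1) + cost(2, 4)
print(standard, tax)  # 62.5 40.0: same abatement, lower total cost
```

The tax shifts abatement toward the firm that can do it cheaply, which is the whole efficiency argument in miniature.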

The phrase “command and control” is commonly used in the academic literature, where it is not explicitly pejorative. Yet it’s kind of a loaded term. Who wants to be “commanded” and “controlled”?

So as I started working on this paper, I became more and more curious about the phrase, which only seemed to date back to the late 1970s, as the deregulatory movement really got rolling. Before that, it was a military term.

To the extent that I had thought about it at all, I assumed it was a clever framing coined by some group like the American Enterprise Institute that wanted to draw attention to regulation as a form of government overreach.

So I asked Susana Muñiz Moreno, a terrific graduate student working on policy expertise in Mexico, to look into it. She found newspaper references starting in 1977, when the New York Times references CEA chair Charles Schultze’s argument that “the current ‘command-and-control’ approach to social goals, which establishes specific standards to be met and polices compliance with each standard, is not only inefficient ‘but productive of far more intrusive government than is necessary.’”


From the Aug. 21, 1977 edition of the New York Times

And sure enough, Schultze’s influential book of that year, The Public Use of Private Interest, uses the phrase a number of times. Which makes sense, as Schultze was instrumental in advancing regulatory reform and plays a key role in my story. But he’s clearly not the AEI type I would have imagined coining such a phrase—before becoming Carter’s CEA chair Schultze was at Brookings, and before that he was LBJ’s budget director.

Nevertheless, given Schultze’s influence and the lack of earlier media use of the term, I figured he probably came up with it and it took off from there.

But I started poking around Google Scholar, mostly because I wondered if some more small-government-oriented reformer of regulation had been using it prior to Schultze. I thought James C. Miller III might be a possibility.

I didn’t find any early uses of the term from Miller, but you know what I did find? An obscure book chapter called “Command and Control” written by Thomas Schelling in 1974.

Sociologists probably best know Schelling from his 1978 book, Micromotives and Macrobehavior, and its tipping point model, which shows how the decisions of agents who prefer that even a relatively small proportion of their neighbors be like them (read: of the same race) can quickly lead to a highly segregated space. Its insights are regularly referenced in the literature on neighborhoods and segregation.

(If you haven’t seen it, you should totally check out this brilliant visualization of the model, “Parable of the Polygons”.)
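If you would rather see the mechanism in code than in polygons, here is a stripped-down one-dimensional version (my own toy variant, not Schelling’s original 2-D checkerboard): agents want just 30% of their neighbors to match their type, yet strong clustering tends to emerge anyway.

```python
# A stripped-down, one-dimensional Schelling model (my own toy variant, not
# the original 2-D checkerboard). Each agent wants at least 30% of its
# occupied neighbors to share its type; unhappy agents hop to random empty
# cells until everyone is content (or we run out of steps).

import random

def simulate(n=100, want_same=0.3, max_steps=10_000, seed=42):
    rng = random.Random(seed)
    grid = [rng.choice([0, 1, None]) for _ in range(n)]  # None = empty cell

    def unhappy(i):
        if grid[i] is None:
            return False
        nbrs = [grid[j] for j in (i - 1, i + 1)
                if 0 <= j < n and grid[j] is not None]
        return bool(nbrs) and sum(b == grid[i] for b in nbrs) / len(nbrs) < want_same

    for _ in range(max_steps):
        movers = [i for i in range(n) if unhappy(i)]
        empties = [i for i in range(n) if grid[i] is None]
        if not movers or not empties:
            break
        i, j = rng.choice(movers), rng.choice(empties)
        grid[j], grid[i] = grid[i], None

    # Segregation measure: share of adjacent occupied pairs with matching types.
    pairs = [(u, v) for u, v in zip(grid, grid[1:])
             if u is not None and v is not None]
    return sum(u == v for u, v in pairs) / len(pairs)

seg = simulate()
print(seg)  # tends to sit well above the ~0.5 you'd expect from random mixing
```

The punchline is Schelling’s: mild individual preferences aggregate into much starker macro-level sorting than any one agent wanted.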


Economists know him for a broader range of game theoretic work on decision-making and strategy—work that was recognized in 2005 with a Nobel Prize.

Anyway, I just checked out the chapter—and I’m pretty sure this is the original source. Like most of Schelling’s work, it’s written in crystal-clear prose. The chapter itself is only secondarily about government regulation; it’s in an edited book about the social responsibility of the corporation. It hasn’t been cited often—29 times on Google Scholar, often in the context of business ethics.

Schelling muses on the difficulty of enforcing some behavioral change—like making taxi passengers fasten their seat belts—even for the head of a firm, and considers how organizations try to accomplish such goals: for example, by supporting government requirements that might be more effective than their own policing efforts.

It’s a wandering but fascinating reflection, with a Carnegie-School feel to it. And the “command and control” of the title doesn’t refer to government regulation, but to the difficulties faced by organizational leaders who are trying to command and control.

In fact, if I didn’t know the context I’d think this was a completely coincidental use of the phrase. But the volume, Social Responsibility and the Business Predicament, is part of the same Brookings series, “Studies in the Regulation of Economic Activity,” that published Schultze’s lectures in 1977, and which catalyzed a network of economists studying regulation in the early 1970s.

So while Schultze adapts the phrase for his own needs, and it’s possible that he could have borrowed the military phrase directly, my strong hunch is that he is lifting it from Schelling. Which actually fits my larger story—which highlights how the deregulatory movement built on the work of McNamara’s whiz kids from RAND, a community Schelling was an integral part of—quite well.

I can’t resist ending with one other contribution Schelling made to the use of economics in policy beyond his strategy work: the 1968 essay, “The Life You Save May Be Your Own.” (He was good with titles—this one was borrowed from a Flannery O’Connor story.) This introduced the willingness-to-pay concept as a way to value life—the idea that one could calculate how much people valued their own lives based on how much they had to be paid in order to accept very small risks of death. Controversial at the time, the proposal eventually became the main method policymakers used to place a monetary value on life.
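The willingness-to-pay arithmetic fits in a few lines. An illustrative calculation (the numbers are mine, not Schelling’s):

```python
# Schelling's willingness-to-pay logic with illustrative numbers (mine, not
# his): pay demanded for a small extra risk of death, divided by that risk,
# gives the implied value of a statistical life.

compensation = 1_000     # dollars demanded to accept the extra risk
extra_risk = 1 / 10_000  # added annual probability of death

vsl = compensation / extra_risk
print(vsl)  # 10000000.0: a $10 million implied value per statistical life
```

Numbers in roughly this range are what regulatory agencies now use when they monetize mortality risk in cost-benefit analysis.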

Thomas Schelling. He really got around.

Written by epopp

June 8, 2016 at 3:05 pm

tying our own noose with data? higher ed edition

I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”

But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.


Maybe this wasn’t such a good idea.

The knowledge I have in mind is the student-unit record. See? I told you it was boring.

The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.

There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.

There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.

But there’s currently no comprehensive federal student-unit record system. In fact at the moment creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.

Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.

In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.

But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.

So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.

First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.

But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.

Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”

Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the ones they were intended to encourage.

Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.

Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.

Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, maybe it’s selection effects — or maybe it’s just that they’re richer — but college graduates are happier and healthier than nongraduates.

The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that track people’s happiness, or their health, let alone whether higher education has expanded their inner lives.

What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.

Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.

But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection (you were a better student going in, or came from a better-off family) or signaling (you’re the kind of person who can make it through college; what you did there is really secondary).

Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But using it as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (at the top of Brookings’ list) provide the most value — rather than that they have select students who are distinctive in ways that aren’t captured by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.
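To make the worry concrete, here is a toy simulation (every name and number here is invented, not drawn from any real data or any actual value-added model) of how a value-added comparison can reward selection rather than teaching: the “college” adds nothing to earnings, yet a comparison adjusted on the observable still shows a positive gap.

```python
# Toy model: admission selects on observed SAT plus unobserved "grit".
# The college itself adds zero to earnings, but a comparison adjusted
# only for SAT still attributes positive "value-added" to it.
import random

random.seed(0)

def simulate(n=10_000):
    rows = []
    for _ in range(n):
        sat = random.gauss(0, 1)        # observed by the analyst
        grit = random.gauss(0, 1)       # unobserved by the analyst
        selective = (sat + grit) > 1.0  # admission selects on both
        # Earnings reflect the student only; no college effect at all.
        earnings = sat + grit + random.gauss(0, 1)
        rows.append((sat, selective, earnings))
    return rows

rows = simulate()

def adjusted_gap(rows, lo=0.5, hi=1.5):
    """Earnings gap between college types among students with similar SAT
    (a crude binned stand-in for regression adjustment on observables)."""
    sel = [e for s, c, e in rows if c and lo <= s < hi]
    non = [e for s, c, e in rows if not c and lo <= s < hi]
    return sum(sel) / len(sel) - sum(non) / len(non)

gap = adjusted_gap(rows)
print(f"apparent 'value-added' of the selective college: {gap:.2f}")  # > 0
```

The positive gap is entirely selection on the unobserved trait; no amount of adjusting on SAT alone removes it.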

So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.

If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.

Written by epopp

June 3, 2016 at 2:06 pm

economics and sociology, part cdlxvii: comments on a blog post by noah smith

A few weeks ago, economics columnist Noah Smith wrote a blog post about how economics should raid sociology. This raises interesting questions about how academic disciplines influence each other. In this case, why has sociology not been a good receptor for economics?

I start with an observation, which Smith also alludes to: Sociology has already been “raided” by economics, with only moderate success. In contrast, economists have done very well raiding another discipline, political science. They have also done fairly well in establishing pockets of influence in public policy programs and law schools. By “success,” I do not mean publishing on sociological topics in economics journals. Rather, “success” means institutional success: economists should be routinely hired by sociology programs, economic theory should become a major feature of research in graduate programs, and sociological journals should mimic economics journals. All of these have happened in political science but not in sociology.

Here’s my explanation – Sociology does not conform to the stereotype that economists and other outsiders have of the field. According to the stereotype, sociology is a primarily qualitative field that has no sense of how causal inference works. In some accounts, sociologists are a bunch of drooling Foucault worshipers who babble endlessly in post-modern jargon. Therefore, a more mathematical and statistical discipline should easily establish its imprint, much as economics is now strongly imprinted on political science.

The truth is that sociology is a mixed quantitative/qualitative field that prefers verbal theory so that it can easily discuss an absurdly wide range of phenomena. Just open up a few issues of the American Sociological Review, the American Journal of Sociology or Social Forces. The modal article is an analysis of some big N data set. You also see historical case studies and ethnographic field work.

It is also a field that has imported traditions of causal identification, but does not obsess over them. For example, in my department alone, there are three faculty who do experiments in their research and one who published a paper on propensity scores. Some departments, like Cornell’s, specialize in social psychology, which is heavily experimental. There are sociologists who work with data from natural experiments (like Oxford’s Dave Kirk), propensity scores (like IU’s Weihua An), and IVs (I actually published one a while ago). The difference between economics and sociology is that we don’t reward people for clever identification strategies or dismiss observational data out of hand. We encourage identification when it makes sense. But if an argument can be made without it, that’s ok too.

So when economists think about sociology as a non-quantitative field, they simply haven’t taken the time to immerse themselves in the field and understand how it’s put together. Thus, a lot of the arguments for “economic imperialism” fall flat. You have regression analysis? So does sociology. You have big N surveys? We run the General Social Survey. You have identification? We’ve been running experiments for decades. One time an economist friend said that sociology does not have journals about statistical methods. And I said, have you heard of Sociological Methodology or Sociological Methods & Research? He was making claims about a field that could easily be falsified with a brief Google search.

In my view, economics actually has one massive advantage over sociology, but economists have completely failed to sell it. They are very good at translating verbal models into mathematical models which then guide research. They have failed to sell this to sociology for a few reasons.

First, economists seem to believe that the only model worth formalizing is the rational actor model. For better or worse, sociologists don’t like it, and many assume that “formal models = rational actor model.” They fail to understand that math can be used to formalize and study any model, not just rational choice models.

Second, rather than focus on basic insights derived from simple models, economists fetishize the most sophisticated models.* So economists love to get into some very hard stuff with limited applied value. That turns people off.

Third, a lot of sociologists have math anxiety, because they aren’t good at math or had bad teachers. So when economists look down on them and dismiss sociology as a whole, or qualitative methods in particular, you lose a lot of people. Instead of dismissing people, economists should think more about how field work, interviews, and historical case studies can be integrated with economic methods.

I am a big believer in the idea that we are all searching for the truth. I am also a big believer in the idea that the social sciences should be a conversation, not a contest of egos. That means that sociologists should take basic economic insights seriously, but it also means that economists should turn down the rhetoric and be willing to explore other fields with a charitable and open mind.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

* For example, I was once required to read papers about how to do equilibrium models in infinite dimensional Banach spaces. Cool math? Sure. Connection to reality? Not so sure.

Written by fabiorojas

May 18, 2016 at 12:15 am

once again: college effects do not matter, college major effects are huge

I just discovered an Economist article from last year showing that, once again, which college you go to is a lot less important than what you do at college. Using NCES data, PayScale estimated return on investment for students from selective and non-selective colleges. Then, they separated STEM majors from arts/humanities. Each dot represents a college major and its estimated rate of return:

[Figure: college_return. Each dot is a college major and its estimated rate of return, at selective and non-selective colleges.]

Some obvious points:

  • In the world of STEM, it really doesn’t matter where you go to school.
  • High prestige arts majors do worse, probably because they go into low paying careers, like being a professional painter (e.g., a Yale drama grad will actually try Broadway, while others may not get that far). [Ed. Fabio read the graph backwards when he wrote this.]
  • A fair number of arts/humanities majors have *negative* rates of return.
  • None of the STEM majors have a negative rate of return.
  • The big message – college matters less than major.

There is also a big message for people who care about schools and inequality. If you want minorities and women to have equal pay, one of the very first things to do is to get more of them into STEM fields. All other policies are small in comparison.


Written by fabiorojas

May 11, 2016 at 12:01 am

why i don’t teach polanyi

Marko Grdesic wrote an interesting post on why modern economists don’t read Polanyi. He surveyed economists at top programs and discovered that only 3% had read Polanyi. I am not shocked. This post explains why.

For a while, I taught an undergrad survey course in sociology with an economic sociology focus. The goal was to teach sociology in a way that would interest undergraduate business and policy students. I often taught a module that might be called “capitalism’s defenders and critics.” On defense, we had Smith and Hayek. On offense, we had Marx and Polanyi.

And, my gawd, it was painful. Polanyi is a poor writer, even compared to windbags like Hayek and Marx. The basic point of the whole text is hard to discern, other than, maybe, “capitalism didn’t develop the way you think” or “people change.” It was easily the text that students understood the least, and none of them got the point. Nick Rowe wrote the following comment:

35 years ago (while an economics PhD student) I tried to read Great Transformation. I’m pretty sure I didn’t finish it. I remember it being long and waffly and unclear. If you asked me what it was about, I would say: “In the olden days, people did things for traditional reasons (whatever that means). Then capitalism and markets came along, and people changed to become rational utility maximisers. Something like that.”

Yup. Something like that. Later, I decided that the Great Transformation is a classic case of “the wiki is better than the book.” We should not expect readers to genuflect in front of fat, baggy books. We are no longer in the world of the 19th century master scholars. If you can’t get your point across, then we can move on.


Written by fabiorojas

April 6, 2016 at 12:05 am

statistics vs. econometrics – heckman’s approach

Over at EconTalk, Russ Roberts interviews James Heckman about censored data and other statistical issues. At one point, Roberts asks Heckman what he thinks of the current identification fad in economics (my phrasing). Heckman has a few insightful responses. One is that a lot of the “new methods” – experiments, instrumental variables, etc. – are not new at all. Also, experiments need to be done with care and the results need to be properly contextualized. A lot of economists and identification-obsessed folks think that “the facts speak for themselves.” Not true. Supposedly clean experiments can be understood in the wrong way.

For me, the most interesting section of the interview is when Heckman makes a distinction between statistics and econometrics. Here’s his example:

  • Identification – statistics, not economics. The point of identification is to ensure that your correlation is not attributable to an unobserved variable. This is either a mathematical point (IV) or a feature of research design (RCT). There is nothing economic about identification, in the sense that you do not need to understand human decision making in order to carry it out.
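The mechanical character of identification that Heckman points to can be seen in a minimal instrumental-variables simulation (all numbers invented): nothing about the IV calculation requires any economics, just covariances.

```python
# A confounder u biases OLS; an instrument z that moves x but affects y
# only through x recovers the true slope via the ratio cov(z,y)/cov(z,x).
import random

random.seed(42)
n = 100_000
true_beta = 2.0

z = [random.gauss(0, 1) for _ in range(n)]                  # instrument
u = [random.gauss(0, 1) for _ in range(n)]                  # confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [true_beta * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)  # biased upward by the confounder (about 2.3)
iv = cov(z, y) / cov(z, x)   # Wald/IV estimate (close to the true 2.0)

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")
```

The purely statistical machinery does all the work here; deciding whether z is a defensible instrument in a real application is where substantive knowledge enters.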

In contrast, he thought that “real” econometrics was about using economics to guide statistical modelling, or using statistical modelling to show plausibly how economic principles play out in real world situations. This, I think, is the spirit of structural econometrics, which demands that the researcher define the economic relation between variables and use it as a constraint in statistical estimation. Heckman and Roberts discuss minimum wage studies, where the statistical point is clear (raising the minimum wage does not always reduce employment) but the economic point still needs to be teased out (moderate wage increases can be offset by firms in other ways) using theory and knowledge of labor markets.

The deeper point I took away from the exchange is that long term progress in knowledge is not generated by a single method, but rather through careful data collection and knowledge of social context. The academic profession may reward clever identification strategies, and they are useful, but that can lead to bizarre papers when authors shift from economic thinking to an obsession with unobserved variables.


Written by fabiorojas

March 24, 2016 at 12:01 am

recent gre scores for economists vs. other social scientists

[Figure: socsciGRE. Quantitative GRE scores by social science discipline.]

On Twitter, Michigan higher ed prof Julie Posselt compares the quantitative GRE scores for various social science disciplines. Take home message #1: the social sciences recruit comparable students, but economics recruits have stronger math skills. Take home message #2: there is still a lot of overlap; the bottom third of econ overlaps with the other social sciences. This probably reflects the fact that the extraordinarily mathematical approach to econ is a phenomenon of the strongest programs, which attract students with physical science degrees and have pushed the more traditional economics student toward the bottom of the distribution.


Written by fabiorojas

February 16, 2016 at 12:00 am

how the acid rain program killed northeasterners

Remember acid rain? For me, it’s one of those vague menaces of childhood, slightly scarier than the gypsy moths that were eating their way across western Pennsylvania but not as bad as the nuclear bombs I expected to fall from the sky at any moment. The 1980s were a great time to be a kid.

The gypsy moths are under control now, and I don’t think my own kids have ever given two thoughts to the possibility of imminent nuclear holocaust. And you don’t hear much about acid rain these days, either.

In the case of acid rain, that’s because we actually fixed it. That’s right: a complex and challenging environmental problem that we got together and came up with a way to solve. And the Acid Rain Program, passed as part of the Clean Air Act Amendments of 1990, has long been the shining example of how to use emissions trading to reduce pollution successfully and efficiently, and it served as an international model for how such programs might be structured.

The idea behind emissions trading is that some regulatory body decides the total level of emissions that is acceptable, finds a way to allocate rights to emit some fraction of that total among polluters, and then allows them to trade those rights with one another. Polluters for whom it is costly to reduce emissions will buy permits from those who can reduce emissions more cheaply. This meets the required emissions level more efficiently than if everyone were simply required to cut emissions to some specified level.
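The efficiency logic can be sketched in a few lines (a toy illustration with hypothetical numbers, not data from any actual program): with quadratic abatement costs, trading equalizes marginal costs across firms, and the total cost of hitting the cap falls relative to a uniform mandate.

```python
# Two firms must jointly cut Q units. Firm i's abatement cost is
# a_i * q**2 / 2, so its marginal cost is a_i * q. Trading shifts the
# cutting toward the cheap abater until marginal costs are equal.

def cost(a, q):
    return a * q**2 / 2

a1, a2 = 1.0, 4.0   # firm 2's abatement is 4x as costly at the margin
Q = 10.0            # total reduction required by the regulator

# Uniform mandate: each firm cuts Q/2.
uniform = cost(a1, Q / 2) + cost(a2, Q / 2)

# Trading outcome: a1*q1 == a2*q2 and q1 + q2 == Q, so the low-cost
# firm does most of the cutting and sells permits to the other.
q1 = Q * a2 / (a1 + a2)
q2 = Q * a1 / (a1 + a2)
traded = cost(a1, q1) + cost(a2, q2)

print(f"uniform mandate cost: {uniform:.1f}")  # 62.5
print(f"with trading:         {traded:.1f}")   # 40.0
```

Same total reduction, lower total cost; that is the whole case for cap-and-trade, and (as the rest of the post argues) it is silent about where the remaining emissions end up.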

While there have clearly been highly successful examples of such cap-and-trade systems, they have also had their critics. Some of these focus on political viability. The European Emissions Trading System, meant to limit CO2 emissions, issued too many permits—always politically tempting—which has made the system fairly worthless for forcing reductions in emissions.

Others emphasize distributional effects. The whole point of trading is to reduce emissions in places where it is cheap to do so rather than in those where it’s more expensive. But given similar technological costs, a firm may prefer to clean up pollutants in a well-off area with significant political voice rather than in a poor, disenfranchised minority neighborhood. Geography has the potential to make the efficient solution particularly inequitable.

These distributional critiques frequently come from outside economics, particularly (though not only) from the environmental justice movement. But in the case of the Acid Rain Program, until now no one had shown strong distributional effects. This study found that SO2 was not being concentrated in poor or minority neighborhoods, and this one (h/t Neal Caren) actually found lower emissions in Black and Hispanic neighborhoods, though higher in poorly educated ones.

A recent NBER paper, however, challenges the distributional neutrality of the Acid Rain Program (h/t Dan Hirschman) — but here, it is residents of the Northeast who bear the brunt, rather than poor or minority neighborhoods. It is cheaper, it turns out, to reduce SO2 emissions in the sparsely populated western United States than in the densely populated East. So, as intended, more reductions were made in the West, and fewer in the East.

[Figure: acid_revised]

The problem is that the population is a lot denser in the Northeastern U.S. So while national emissions decreased, more people were exposed to relatively high levels of SO2, and therefore more people died prematurely than would have been the case under the inefficient solution of just mandating an equivalent across-the-board reduction in SO2 levels.

To state it more sharply, while the trading built into the Acid Rain Program saved money, it also killed people, because improvements were mostly made in low-population areas.
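A stylized two-region sketch of that mechanism (all numbers invented for illustration, not taken from the paper): the same total cut produces more person-weighted exposure when the reductions land where few people live.

```python
# Exposure is remaining emissions weighted by the people who breathe them.
regions = {
    # name: (baseline emissions, population in millions)
    "west": (100.0, 10.0),
    "east": (100.0, 50.0),
}

def person_exposure(cuts):
    """Population-weighted remaining emissions after region-level cuts."""
    return sum((em - cuts[name]) * pop
               for name, (em, pop) in regions.items())

# Both scenarios cut 80 units nationally.
uniform = person_exposure({"west": 40.0, "east": 40.0})
traded = person_exposure({"west": 70.0, "east": 10.0})  # cheap cuts out West

print(uniform)  # (60*10) + (60*50) = 3600
print(traded)   # (30*10) + (90*50) = 4800: same total cut, more exposure
```

Cost efficiency and exposure minimization are simply different objective functions, and a cap defined on total tonnage optimizes only the first.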

This is fairly disappointing news. It also points to what I see as the biggest issue in the cap-and-trade vs. pollution tax debate—that so much depends on precisely how such markets are structured, and if you don’t get the details exactly right (and really, when are the details ever exactly right?), you may either fail to solve the problem you intended to, or create a new one worse than the one you fixed.

Of course pollution taxes are not exempt from political difficulties or unintended consequences either. And as Carl Gershenson pointed out on Twitter, a global, not local, pollutant like CO2 wouldn’t have quite the same set of issues as SO2. And the need to reduce carbon emissions is so serious that honestly I’d get behind any politically viable effort to cut them. But this does seem like one more thumb on the “carbon tax, not cap-and-trade” side of the scale.

 

Written by epopp

February 15, 2016 at 1:17 pm

does piketty replicate?

Ever since the publication of Piketty’s Capital in the 21st Century, there’s been a lot of debate about the theory and empirical work. One strand of the discussion focuses on how Piketty handles the data. A number of critics have argued that the main results are sensitive to choices made in the data analysis (e.g., see this working paper). The trends in inequality reported by Piketty are amplified by how he handles the data.

Perhaps the strongest criticism in this vein is made by UC Riverside’s Richard Sutch, who has a working paper claiming that some of Piketty’s major empirical points are simply unreliable. The abstract:

Here I examine only Piketty’s U.S. data for the period 1810 to 2010 for the top ten percent and the top one percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top ten percent for the period 1870-1970 are unreliable. The values he reported are manufactured from the observations for the top one percent inflated by a constant 36 percentage points. Piketty’s data for the top one percent of the distribution for the nineteenth century (1810-1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only very tenuous information about trends in inequality during the Gilded Age. The values Piketty reported for the twentieth-century (1910-2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. The reversal of the decline in inequality during the 1960s and 1970s and subsequent sharp rise in the 1980s is hidden by a fifteen-year straight-line interpolation. This neglect of the shorter-run changes is unfortunate because it makes it difficult to discern the impact of policy changes (income and estate tax rates) and shifts in the structure and performance of the economy (depression, inflation, executive compensation) on changes in wealth inequality.

From inside the working paper, an attempt to replicate Piketty’s estimate of intergenerational wealth transfer among the wealthy:

The first available data point based on an SCF survey is for 1962. As reported by Wolff the top one percent of the wealth distribution held 33.4 percent of total wealth that year [Wolff 1994: Table 4, 153; and Wolff 2014: Table 2, 50]. Without explanation Piketty adjusted this downward to 31.4 by subtracting 2 percentage points. Piketty’s adjusted number is represented by the cross plotted for 1962 in Figure 1. Chris Giles, a reporter for the Financial Times, described this procedure as “seemingly arbitrary” [Giles 2014].9 In a follow-up response to Giles, Piketty failed to explain this adjustment [Piketty 2014c “Addendum”].

There is a bit of a mystery as to where the 1.2 and 1.25 multipliers used to adjust the Kopczuk-Saez estimates upward came from. The spreadsheet that generated the data (TS10.1DetailsUS) suggests that Piketty was influenced in this choice by the inflation factor that would be required to bring the solid line up to reach his adjusted SCF estimate for 1962. Piketty did not explain why the adjustment multiplier jumps from 1.2 to 1.25 in 1930.
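The adjustments Sutch describes amount to simple arithmetic on published series. A sketch using only the figures quoted above (the 30.0 starting value for the top 1% share is invented for illustration):

```python
# The constructions Sutch flags in Piketty's U.S. wealth series.

# Top 10% share "manufactured" from the top 1% plus a constant 36 points:
top1_share = 30.0                 # illustrative input, not Piketty's value
top10_share = top1_share + 36.0   # the constant-offset construction

# The 1962 SCF figure, adjusted downward by 2 points without explanation:
scf_1962 = 33.4
piketty_1962 = scf_1962 - 2.0     # = 31.4

# Kopczuk-Saez estimates inflated by a multiplier that jumps in 1930:
def adjust_ks(value, year):
    return value * (1.25 if year >= 1930 else 1.2)

print(top10_share, piketty_1962, adjust_ks(20.0, 1929), adjust_ks(20.0, 1930))
```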

This comes up quite a bit, according to Sutch: there is reasonable data, and then Piketty makes adjustments that are odd or simply unexplained. It is also important to note that Sutch is not trying to make the inequality in the data go away. He argues that Piketty is likely under-reporting early 20th century inequality while over-reporting the more recent increase in inequality.

A lot of Piketty’s argument comes from international comparisons and longitudinal studies with historical data. I have a lot of sympathy for Piketty. Data is imperfect, collected irregularly, and prone to error. So I am slow to criticize. Still, given that Piketty’s theory is now one of the major contenders in the study of global inequality, we want the answer to be robust.


Written by fabiorojas

February 10, 2016 at 12:01 am

credit where credit is due: gender and authorship conventions in economics and sociology

[Ha — I wrote this last night and set it to post for this morning — when I woke up saw that Fabio had beat me to it. Posting anyway for the limited additional thoughts it contains.]

Last week Fabio launched a heated discussion about whether economics is less “racially balanced” than other social sciences. Then on Friday Justin Wolfers (who has been a vocal advocate for women in economics) published an Upshot piece arguing that female economists get less credit when they collaborate with men.

The Wolfers piece covers research by Harvard economics PhD candidate Heather Sarsons, who used data on tenure decisions at top-30 economics programs in the last forty years to estimate the effects of collaboration (with men or women) on whether women get tenure, controlling for publication quantity and quality and so on. (Full paper here.) Only 52% of the women in this population received tenure, compared to 77% of the men.

The takeaway is that women got no marginal benefit (in terms of the tenure decision) from coauthoring with men, while they received some benefit (though less than men did) if they coauthored with at least one other woman. Their tenure chances did, however, benefit as much as men’s from solo-authored papers. Sarsons’ interpretation (after ruling out several alternative possibilities) is that while women are given full credit when there is no question about their role in a study, their contributions are discounted when they coauthor with men.

What is interesting from a sociologist’s perspective is that Sarsons uses a more limited data set from sociology as a comparison. Looking at a sample of 250 sociology faculty at top-20 programs, she finds no difference in tenure rates by gender, and no similar disadvantage from coauthorship.

While it would be nice to interpret this as evidence of the great enlightenment of sociology around gender issues, that is probably premature. Nevertheless, Sarsons points to one key difference between sociology and economics (other than differing assumptions about women’s contributions) that could potentially explain the divergence.

Sociology, as most of you probably know, has a convention of putting the author who made the largest contribution first in the authorship list. Economics uses alphabetical order. Other disciplines have their own conventions — lab sciences, for example, put the senior author last. This means that sociologists can infer a little bit more than economists about who played the biggest role in a paper from authorship order — information Sarsons suggests might contribute to women receiving more credit for their collaborative work.

This sounds plausible to me, although I also wouldn’t be surprised if the two disciplines made different assumptions, ceteris paribus, about women’s contributions. It might be worth looking at sociology articles with the relatively common footnote “Authors contributed equally; names are listed in alphabetical order” (or reverse alphabetical order, or by coin toss, or whatever). Of course such a note still provides information about relative contribution — 50-50, at least in theory — so it’s not an ideal comparison. But I would bet that readers mentally give one author more credit than the other for these papers.

That may just be the first author, due to the disciplinary convention. But one could imagine that a male contributor (or a senior contributor) would reap greater rewards from these kinds of collaborations. If that were not the case, it wouldn’t say much about the hypothesis; but if men received more advantage from papers with explicitly equal coauthors, that would certainly be consistent with the idea that first-author naming conventions help women get credit.

Okay, maybe that’s a stretch. Sarsons closes by noting that she plans to expand the sociology sample and add disciplines with different authorship conventions. It will be challenging to tease out whether authorship conventions really help women get due credit for their work, and I’m skeptical that that’s 100% of the story. But even if it could fix part of the problem, what a simple solution to ensure credit where credit is due.

Written by epopp

January 11, 2016 at 1:47 pm

is economics less racially integrated than other disciplines?

A few days ago, economist Noah Smith posted this tweet:

This raises an interesting question: what is the racial balance of the economics profession and how does that compare with similar fields?

It helps to start with a baseline. In higher education research, the common finding is that Blacks and Latinos are underrepresented among professors when compared to the population. Blacks and Latinos are each about 6% of the professoriate (e.g., see the National Center for Education Statistics summary here). Asians tend to be about 10% of the professoriate, which means they are overrepresented compared to the population. These numbers vary a little by rank, with lower ranks having more racial and ethnic minorities.

Finding the numbers for economics professors is tricky. You have to dig a little to find the data. In 2006, The Journal of Blacks in Higher Education counted 15 Black economists among 935 faculty in top 30 programs – a whopping 1.6%. There seem to be very few surveys of economists, but there is the 1995 Survey of Americans and Economists on the Economy conducted by the Washington Post and the Kaiser Family Foundation. That survey reports that .5% (<1%) of economics professors are Black, according to Bryan Caplan’s analysis of the data in the Journal of Law and Economics (Table 1, p. 398). The same article reports about 5% for Asian economists. This indicates that economics faculty are more likely to be White than the population as a whole and academia in general. If readers have access to more recent surveys of economists and their demographics, please use the comments.

Follow up question #1: Is economics similar to other related social science disciplines like political science or sociology? Answer: Political science has about 5% Black faculty and 3.4% Asian faculty according to this 2011 APSA report (Table 8, p. 40). Sociology has about 7% Black faculty and 5% Asian faculty according to this 2007 ASA report. So economics is more White than allied social science disciplines and about the same in terms of Asian faculty.

Follow up question #2: What about economics’ similarity to math-intensive STEM fields like physics or math? According to a 2014 report from the American Institute of Physics, about 2% of physics faculty are Black and 14% are Asian (see Table 1). According to this 2006 study of American mathematics faculty, 1% are Black and 12% are Asian in PhD-granting programs (Table F5).

To summarize:

  • Economics professors are less likely to be Black (~1%) than professors as a whole (~6%).
  • Economics professors are less likely to be Black (~1%) than political scientists and sociologists (5%-7%).
  • Black professors are equally common in econ, math, and physics (1-2% for each field).
  • Asian professors are about equally common in economics as in the other social sciences (3.4% in political science, ~5% in economics and sociology).
  • Economics professors are less likely to be Asian (5%) than professors in academia as a whole (10%), and even less likely than in physics and mathematics (14% and 12%, respectively).

Bottom line: Economics has fewer Black faculty when compared to the social sciences and fewer Asian faculty compared to the physical sciences. That’s something that makes you go “hmmmm….”
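One way to make these comparisons concrete is a representation ratio: a group’s share of the faculty divided by its share of the population, where ratios below 1 indicate underrepresentation. A minimal sketch of that arithmetic, using the figures cited above plus rough U.S. population shares (~13% Black, ~5% Asian, which are my assumptions, not numbers from the post):

```python
# Representation ratio: group's share of faculty / group's share of population.
# Ratios below 1 indicate underrepresentation, above 1 overrepresentation.
# Population shares are rough assumptions, not figures from the post.
population_share = {"Black": 0.13, "Asian": 0.05}

faculty_share = {
    "professoriate overall": {"Black": 0.06, "Asian": 0.10},
    "economics": {"Black": 0.01, "Asian": 0.05},
}

for field, groups in faculty_share.items():
    for group, share in groups.items():
        ratio = share / population_share[group]
        print(f"{group} faculty in {field}: representation ratio {ratio:.2f}")
```

On these (assumed) population shares, Black faculty come out underrepresented everywhere, but far more severely in economics than in academia overall, while Asian faculty in economics sit at rough parity with the population.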

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street 

Written by fabiorojas

January 6, 2016 at 12:01 am

democracies are tougher than you think, wealthy ones at least

We often hear that democracy is under threat. But is that true? In 2005, Adam Przeworski wrote an article in Public Choice arguing that *wealthy* democracies are stable but poor ones are not. He starts with the following observation:

No democracy ever fell in a country with a per capita income higher than that of Argentina in 1975, $6055. This is a startling fact, given that throughout history about 70 democracies collapsed in poorer countries. In contrast, 35 democracies spent about 1000 years under more developed conditions and not one died.

Developed democracies survived wars, riots, scandals, economic and governmental crises, hell or high water. The probability that democracy survives increases monotonically in per capita income. Between 1951 and 1990, the probability that a democracy would die during any particular year in countries with per capita income under $1000 was 0.1636, which implies that their expected life was about 6 years. Between $1001 and 3000, this probability was 0.0561, for an expected duration of about 18 years. Between $3001 and 6055, the probability was 0.0216, which translates into about 46 years of expected life. And what happens above $6055 we already know: democracy lasts forever.
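The expected-life figures in the quote are just the mean of a geometric distribution: if a democracy faces a constant annual probability p of dying, its expected lifetime is 1/p years. A minimal sketch of that arithmetic (the bracket labels are mine), which reproduces the 6-, 18-, and 46-year figures:

```python
# Expected lifetime under a constant annual death probability p.
# If survival each year is an independent coin flip, the lifetime is
# geometrically distributed with mean 1/p.
annual_death_prob = {
    "under $1000": 0.1636,
    "$1001-$3000": 0.0561,
    "$3001-$6055": 0.0216,
}

for bracket, p in annual_death_prob.items():
    print(f"per capita income {bracket}: expected life = {1 / p:.0f} years")
```

The same logic explains why the top bracket is different in kind: with an observed death probability of zero, the expected lifetime is unbounded, hence “democracy lasts forever.”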

Wow.

How does one explain this pattern? Przeworski describes a model where elites offer income redistribution plans, people vote, and the elites decide to keep or ditch democracy. The model has a simple feature when you write it out: the wealthier the society, the more pro-democracy equilibria you get.

If true, this model has profound implications for political theory and public policy:

  1. Economic growth is the bulwark of democracy. Thus, if we really want democracy, we should encourage economic growth.
  2. Armed conflict probably does not help democracy. Why? Wars tend to destroy economic value and make a country poorer, which strengthens anti-democracy movements (e.g., Syria and Iraq).
  3. A lot of people tell you that we should be afraid of outsiders because they will threaten democracy. Not true, at least for wealthy democracies.

This article should be a classic!


Written by fabiorojas

January 4, 2016 at 3:34 am

free college vs. cost-benefit thinking

Last month, Howard Aldrich made—as he often does—a good point in the comments:

There’s been an interesting subtle shift in the rhetoric regarding whose responsibility it is to pay for an individual’s post-secondary education. My impression is that there was a strong consensus across the nation 50 years ago, and certainly into the late 1960s, that governments had a responsibility to educate their students that extended up through college. However, I perceive that consensus has been under attack from both the left and the right….Liberals argue that much of the public subsidy goes to the wealthier high income students whose parents don’t really deserve the subsidy. Conservatives argue that as students benefit substantially from their college education, they should pay most of the cost.

This month, I’ve been writing about the history of cost-benefit analysis. (Why yes, I do know how to have a good time.) On the surface, it has nothing to do with universities. But there are important links to be made.

One of the arguments I’m playing with is that economic thinking—here just meaning a rational, cost-benefit, systematic-weighing-of-alternative-choices sort of thinking—has been particularly constraining for the political left. On the right, when people’s values disagree with economic reasoning, they ignore the economics and forge ahead. On the left, while some will do the same, the “reasonable” position tends to be much more technocratic. Think Brookings versus Heritage. Over time, one thing that has pulled “the left” to the right has been the influence of a technocratic, cost-benefit strain of thought.

Yes, I know these are sweeping generalizations. But stay with me for a minute.

There are a couple of big economic arguments for asking individuals, not the public, to pay for higher education. Howard’s comment gets at both of them.

One is that while there is some public benefit in educating people, individuals capture most of the returns to higher education. If that is the case, it makes sense that they should pay for it, with the state perhaps making financing available for those who lack the means. Milton Friedman made this argument sixty years ago, and since then, it has become ever more popular.

The other is that providing free higher education is basically regressive. The wealthier you are, the more likely you are to attend college (check out this NYT interactive chart), and relatively few who are poor benefit. Milton Friedman made this argument, too, but it is particularly associated with a 1969 paper by Lee Hansen and Burton Weisbrod, and continues to be made by commentators across the political spectrum.

Both of these arguments have become economic common sense (even though support for the latter is actually pretty weak). Of course it’s fair for individuals to have to pay for the education that they benefit so much from. And of course it doesn’t make sense to pay for the education of the upper-middle class while the working poor who never make it to college get nothing.

Indeed, these arguments have been potent enough that it has become hard to argue for free higher education without sounding extreme and maybe economically illiterate. Really, it kind of amazes me that free college is even being talked about seriously these days by President Obama and Bernie Sanders.

But even the argument for free college now depends heavily on claims about economic payoff. The Obama proposal headlines “Return on Investment,” arguing that “every dollar invested in community college by federal, state and local governments means more than $25 [ed: !] in return.” The Sanders statement starts, “In a highly competitive global economy, we need the best-educated workforce in the world.” The candidate who is a self-described socialist relies on a utilitarian, economic argument to justify free higher education.

So what’s the problem with thinking about college in terms of economic costs and benefits? After all, it’s an expensive enterprise, and getting more so. Surely it doesn’t make sense to just wantonly spend without giving any thought to what you’re getting in return.

The problem is, if the argument you really want to make is that college is a government responsibility—that is, a right—starting with cost-benefit framing leads you down a slippery slope. Benefits are harder to measure than costs, and some benefits can’t be measured at all. All sorts of public spending becomes much harder to justify.

Now, this might be fine if you generally think that small government is good, or that the economic benefits of college are pretty much the ones that matter. But if you think it’s worth promoting college because it might help people become better citizens, or increases their quality of life in some difficult-to-measure way, or you just want to live in a society that provides broad access to education, well, too bad. You’ve already written that out of the equation.

If you really believe there are social benefits to making public higher education freely available, then cost-benefit arguments will always betray you. Rights, on the other hand, aren’t subject to cost-benefit tests. Only a moral argument that defends higher education as a right—as something to value because it improves the social fabric in literally immeasurable ways—can really work to defend real public higher education.

Seem too unrealistic? Think about high school. There’s no real reason that free college should be subject to a cost-benefit test when free high school is not. Individuals reap economic benefits—lots of them—from attending high school, too. And high school is at least as regressive as college: the well-off kids who attend the good public schools reap many more benefits than the low-income kids who attend the crummy ones. It only makes sense, then, that families should pay for high school themselves, right? Perhaps with government loans, if you’re too poor to afford it.

And yet no one is making this argument. Because we all still agree—at least for now—that children have the right to a free primary and secondary education. We may argue about how much to spend on it, or how to make it better, but the basic premise—governments have a responsibility to educate students, in Howard’s words—still holds.

So I support the free college movement. But I’d like to see its champions stop saying it’s because we need to be globally competitive, or because it’s got a huge ROI.

Instead, say it’s because our society will be stronger when more of us are better educated. Say that knowing higher education is an option, and an option you don’t have to mortgage your future for, will improve our quality of life. Say that colleges themselves will be better when they return to seeing students as students, and not as revenue streams.

Say it’s because it’s the right thing to do.

Written by epopp

October 23, 2015 at 12:00 pm

econ nobel prize cliches: collect them all

Every October when the Nobel prize in economics is announced, you hear the same trite and hackneyed things. Already, the Guardian has one of those tedious “economics is not a science” articles just to prepare for tomorrow. To help you save time, I’ve collected the following cliches so you can just copy and paste them into your tweets, Facebook messages, and blog posts:

  • Economics is not a science.
  • Actually, there is no Nobel Prize in economics.
  • The so-called Economics Nobel prize.
  • This prize refutes the policies of [insert politician you hate].
  • This prize supports the policies of [insert politician you love].
  • This prize is long overdue.
  • This prize rewards [my favorite field].
  • This prize rewards free-market fundamentalists.
  • This prize proves that free-market fundamentalists are wrong.
  • This person did not deserve the prize.
  • This person deserved the prize.
  • This is a rather mathematical/statistical prize for a technical point that I can’t summarize here.
  • This prize is for proving the obvious.
  • I predicted this all along.
  • I am completely surprised by this.
  • I can’t believe they gave this to a non-economist.
  • I can’t believe they gave this to a person not from [circle one: Harvard/MIT].
  • Harvard is slipping, straight to the toilet.
  • Steve Levitt does/does not know the work of these prize winners.

Actually, I have a Granovetter post ready to go if he ever wins, since he is the sociologist whose work is most known in economics. Add your own cliches in the comments.


Written by fabiorojas

October 12, 2015 at 1:41 am

Posted in economics, fabio