Archive for the ‘economics’ Category
I recently had the opportunity to read a whole boat load of F.A. Hayek: Constitution of Liberty; The Use of Knowledge in Society; Law, Legislation and Liberty; and more. This in-depth rereading of Hayek helped me resolve a certain sociological puzzle concerning the Austrian economist’s reputation. How could he be the patron saint of laissez-faire while saying very nice things about welfare states and attracting positive commentary from a range of liberal and radical thinkers, such as Foucault?
Here is my answer: I think Hayek’s work resides on a boundary between libertarian social theory and modern liberalism. I’m going to argue that Hayek is the least libertarian you can be and still be, sort of, a libertarian. Because he is not a libertarian in the modern sense of grounding things strongly in terms of individual rights, it’s easy for non-libertarians to find a connection.
Exhibit A: Hayek never lays out a theory of freedom based on individual rights the way many libertarians do. For example, in Constitution of Liberty, he doesn’t start with natural rights and he doesn’t start with a utilitarian justification of freedom. Rather, for him, freedom is about autonomy. Given certain choices, does someone have a sphere of independent judgment free from coercion from others? Thus, this version of freedom is compatible with state policies that try to increase this private sphere of judgment. Also, he frequently emphasizes equality under the law and rule of law as prime virtues, even if they don’t enhance freedom in the everyday sense of the word.
Exhibit B: The Road to Serfdom. It’s a text that is more talked about than read. But if you read it, you discover that it is not an argument against every single form of state intervention. Rather, it’s mainly an argument against Soviet-style command economies and Westerners who want to nationalize various industries in the name of equality. Secondarily, he also wants to rein in state regulators who wish to coerce people for their own bureaucratically determined goals.
Exhibit C: In other writings, he endorsed a basic income. And he does argue for the legitimacy of taxation. See Matt Zwolinski’s essay on this topic. He argues that these policies were likely justified for Hayek because they increase personal autonomy (see Exhibit A) and I think they were ok in Hayek’s view because they were less about top down ordering of the economy or administrative tyranny and more about allocating resources to everyone in ways that could help them expand their freedom (Exhibit B).
Exhibit D: Spontaneous order theory. Basically, a whole lot of Hayek’s later social theory is about arguing why social structures can still work and are desirable if they are not top down command structures. That doesn’t lead immediately to libertarianism because you can have spontaneous order that has nothing to do with freedom in either Hayek’s view or the more modern libertarian view. For example, systems of race relations are not top down structures, but they often restrain people in cruel ways.
Taken together, Exhibits A, B, C and D paint an intellectual who has the following traits: (a) Very, very anti-socialist; (b) has a version of freedom that is very agnostic with respect to the wide range of policies that are not socialist; (c) provides grounds for both conservative and liberal policies via a respect for tradition/spontaneous order and freedom/autonomy expansion. It’s a very modest form of libertarianism that gives away a lot of ground to other philosophies.
Does that mean that we’ve all misunderstood Hayek? It depends. If you think that Hayek was this evil economist who advocated the strictest version of libertarianism, then that’s probably mistaken. But if you think of Hayek as representing a very mellow form of libertarianism that overlaps with other political traditions, you’re probably on target.
Hi, everyone! As the year winds up, I’d like to announce two book fora:
- March 2017: Catherine Turco’s Conversational Firm.
- May 2017: Mark Granovetter’s Society and Economy.*
Please order the books now!**
* Holy smokes, yes, the Granovetter book is coming out. We have heard of this sacred text for years and now… my precious… my precious…
** And yes, editors who read this blog should send me free copies!!
Sears Holdings, which owns Sears and Kmart, reported on Thursday a loss of $748 million for the three months ending on Oct. 29. This is the company’s 20th consecutive quarterly loss, and worse than the $454 million loss the company posted in the same period last year. Revenue fell 9 percent last quarter to $5.21 billion. Same-store sales, a key retail metric, dropped 10 percent at Sears and 4 percent at Kmart. The company lost $1.6 billion in the first ten months of the year, compared to $549 million in the same period last year, according to its regulatory filing.
These grim numbers were announced a week after the departure of two top-level executives: James Balagna, an executive vice president in charge of the company’s home-repair services and technology backbone, and Joelle Maher, the company’s president and chief member officer. Former Goldman Sachs banker Steve Mnuchin also resigned from the Sears board last week after President-elect Donald Trump nominated him to head the Treasury Department.
When we discussed Sears, CKD suggested the issue wasn’t firm profitability. It was the relative benefits of bankruptcy court vs. a massive real estate sell off. If so, then the pattern of executive hires and behaviors makes sense. But that raises a deeper point. Why didn’t Sears keep up with the rest of the retail market?
Jeff Sward, founding partner of retail consultant Merchandising Metrics, doesn’t share Hollar’s optimism.
“What does Sears stand for?” Sward told Salon. “Sears unfortunately stands for so many different things that I don’t think there’s anything that’s a standout. I would go to Sears for appliances and tools, but I’ve certainly never thought of them as a headquarters for apparel.”
Sward says the issue isn’t that Sears doesn’t have good products and competitive prices. Instead, he said, the problem facing Sears is that it isn’t the first choice for buyers of any of its core product categories. If consumers need tools, they go to Home Depot or Lowe’s. If they want outdoor or work apparel, it’s Dick’s Sporting Goods, not Sears. Electronics and home appliances? That’s for Best Buy. And who’s buying apparel and shoes at Sears?
The bottom line is that the department store model of the early 1900s is incredibly hard to sustain in the modern environment. With the arrival of the “big box” model of Home Depot and the online model of Amazon, a lot of department store chains either folded or refocused. Sears, with way too much real estate and a sluggish executive team, couldn’t make the pivot. Not surprisingly, you then attract investors who are more interested in hollowing out the firm, like the Sears/Kmart holding group that also took on Borders before it died.
Yesterday the New Republic wrote about how little attention has been paid to policy in the current election. In 2008, the network news programs devoted 220 minutes to policy; this year, it’s been a mere 32 minutes.
The piece goes on to bemoan the decline of the public-interest obligation once held by broadcasters (and which still remains, in vestigial form) in exchange for their use of the airwaves, and to connect the dots between the gradual removal of those restrictions and the toxic media environment we find ourselves in today. While — I think appropriately — the article doesn’t overemphasize the causal effects, it does highlight a broader shift that was going on in the 1970s and is still echoing today.
The 1970s saw a wide, bipartisan embrace of the deregulatory spirit in many areas. The transportation industries — air, rail, trucking — were one chief target. Banking was in there. So was energy. More controversial, and less bipartisan, was the push for the removal of new social regulations — rules meant to improve the environment, health, and safety. But even when it came to social regulation, both sides believed in regulatory reform. (I’ve recently written about some of this history.)
Economists were one group that made a strong case for economic deregulation — the removal of price and entry barriers in industries like transportation, energy, and finance. (For the definitive account, see Martha Derthick and Paul Quirk’s 1985 book.) Their role in airline deregulation, led by the colorful Alfred “To me, they’re all just marginal costs with wings” Kahn, is probably best known. But economists also had something to say about the Federal Communications Commission.
Perhaps the most famous — certainly one of the earliest — critics of the FCC was Ronald Coase. Coase argued in 1959 that there was no good reason, technical or economic, for the government to own the airwaves, and made the case for auctioning off the radio spectrum. He was not at all impressed with the argument that licenses should be distributed according to the “public interest”, and emphasized not only the legal ambiguity of that standard, but the fact that the FCC’s decisions reflected “a degree of inconsistency which defies generalization.”
At the time, the idea of the airwaves as a public trust was so universally accepted that Coase’s views seemed quite radical, even to other economists. When, in 1962, he extended his argument into a 200-page RAND report, coauthored with Bill Meckling and Jora Minasian, RAND quashed it for being too incendiary. Later, recalling these events, Coase quoted an internal review of the paper: “I know of no country on the face of the globe—except for a few corrupt Latin American dictatorships—where the ‘sale’ of the spectrum could even be seriously proposed.”
By the early 1970s, though, a new consensus had emerged in economics around questions of regulation, and this consensus saw FCC demands that broadcasters behave in unprofitable ways not as acting in the “public interest,” but as a source of efficiency losses that should, at a minimum, be regarded skeptically. This aligned with increasingly loud arguments from outside of economics (as well as within) about regulatory capture, which implied that the “public interest” pursued by executive agencies would never be more than a sham, anyhow.
Eventually, this shift in mood led to a change in how the FCC regulated broadcasters. The public interest standard was loosened, and in 1981 the agency began to shift from using hearings to allocate spectrum licenses — in theory to the applicants that best served the public interest — to lottery. In 1994, it moved another step closer to Coase’s prescription, beginning to auction off the licenses — a move that stimulated a great deal of research in auction theory as well as generating substantial revenue.
The “public interest” goal, which had initially been baked into the allocation process (however poorly it was pursued in practice) became increasingly marginalized. Or perhaps it was subsumed within the assumed public interest in encouraging efficient use of the spectrum. The process echoes the one that took place in antitrust policy, in which historically significant goals other than allocative efficiency — goals that often conflicted with efficiency and even with each other — were gradually defined as being simply beyond the scope of what could be considered. (Indeed, Coase’s criticism of the inconsistency of the FCC’s behavior sounds quite similar to Justice Stewart’s scathing critique of merger law, written around the same time: “the sole consistency I can find is that under Section 7 [the merger section of the Clayton Act], the Government always wins.”)
I don’t know enough about the history of the FCC to have an informed opinion on whether the public interest standard as it stood circa 1970 was redeemable or if the agency was irreparably captured. And I definitely don’t think the decline of that standard is the main explanation for the current media environment, which goes far beyond television.
But I do think that the demise of the idea that we should expect media to have obligations beyond profit — which is bound up with the ideal, if not the practice, of the public interest standard — is a big contributor. Individual journalists — that increasingly rare breed — may remain professionally committed to an ethical code and a sense of mission that isn’t primarily about sales. But at the corporate level, any such qualms were abandoned long ago, and the journalistic wall between “church and state” — editorial and advertising — continues to crumble.
What this means is that we get political news that is just horse race coverage, and endless examination of the ugliest aspects of politics — which, unsurprisingly, encourages more of the same. Actually expecting media to pursue the “public interest”, whether through regulatory means or professional commitment, may be unrealistically idealistic. But giving up on the concept entirely seems certain to take us further down the path in which objective lies merit just as much attention as truth.
Roger E. Farmer has a blog post on why economists should not use complexity theory. At first, I thought he was going to argue that complexity models have been disproven or that they rely on unreasonable assumptions. Instead, he simply says we don’t have enough data:
The obvious question that Buzz asked was: are economic systems like this? The answer is: we have no way of knowing given current data limitations. Physicists can generate potentially infinite amounts of data by experiment. Macroeconomists have a few hundred data points at most. In finance we have daily data and potentially very large data sets, but the evidence there is disappointing. It’s been a while since I looked at that literature, but as I recall, there is no evidence of low dimensional chaos in financial data.
Where does that leave non-linear theory and chaos theory in economics? Is the economic world chaotic? Perhaps. But there is currently not enough data to tell a low dimensional chaotic system apart from a linear model hit by random shocks. Until we have better data, Occam’s razor argues for the linear stochastic model.
If someone can write down a three equation model that describes economic data as well as the Lorenz equations describe physical systems: I’m all on board. But in the absence of experimental data, lots and lots of experimental data, how would we know if the theory was correct?
On one level, this is a fair point. Macroeconomics is notorious for having sparse data. We can’t re-run the US economy under different conditions a million times. We have quarterly unemployment rates and that’s it. On another level, this is an extremely lame criticism. One thing that we’ve learned is that we have access to all kinds of data. For example, could we have MTurk workers participate in an online market a million times? Or could we mine eBay sales data? In other words, Farmer’s post doesn’t undermine the case for complexity. Rather, it suggests that we might search harder and build bigger tools. And, in the end, isn’t that how science progresses?
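Farmer’s point about not being able to tell chaos from noise can be made concrete. Here is a minimal sketch (my illustration, not from Farmer’s post): the logistic map at r = 4 is a fully deterministic, one-variable chaotic system, yet its sample autocorrelation is essentially zero, so a short series of it is statistically hard to distinguish from random shocks.

```python
# Illustration: a deterministic chaotic series can look like white noise.

def logistic_map(x0, n, r=4.0):
    """Iterate x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t + 1] - m) for t in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

series = logistic_map(0.2, 500)

# Looks like noise: the lag-1 autocorrelation is close to zero...
print(round(lag1_autocorr(series), 3))

# ...yet the series is perfectly deterministic: every point follows
# exactly from the previous one.
assert all(abs(series[t + 1] - 4.0 * series[t] * (1 - series[t])) < 1e-9
           for t in range(len(series) - 1))
```

With only a few hundred macro data points, a linear model plus shocks would fit a series like this about as well as the true one-line law, which is exactly Farmer’s Occam’s razor argument.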
Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.
First, it may be the case that Professor Pager directly went to Sociological Science without trying another peer reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don’t need the lengthy and cumbersome developmental review system to get work out there.
Second, it may be the case that Professor Pager tried another journal, probably ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question – what specifically was “wrong” with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can’t critique the paper for lacking a “framing,” and the research design is simple and clean. One can’t critique the statistical technique because it’s a simple comparison of means. One can’t critique the importance of the finding – the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.
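For the non-quantitative reader, a “simple comparison of means” in a design like this boils down to comparing survival proportions across two groups of firms. A minimal sketch, with made-up numbers (NOT Pager’s actual data):

```python
import math

def survival_gap(survived_a, n_a, survived_b, n_b):
    """Difference in survival proportions plus a two-proportion z statistic."""
    p_a, p_b = survived_a / n_a, survived_b / n_b
    pooled = (survived_a + survived_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical figures for illustration only: suppose 66 of 100
# non-discriminating firms are still in business vs 49 of 100
# discriminating firms.
gap, z = survival_gap(66, 100, 49, 100)
print(gap, z)
```

The point is just that the estimator is transparent: a difference in group means and a standard error, with nothing for a methods-minded reviewer to object to.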
Perhaps the only criticism I can come up with is a sort of “identification fundamentalism.” Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can’t infer anything from the correlation. That is bizarre because it would render Becker’s thesis untestable. What experimental design would allow you to get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian – you collect high-quality observational data and revise your beliefs accordingly. This criticism, if it was made, isn’t sound upon reflection. I wonder what the grounds for rejection could possibly be, aside from knee-jerk anti-rational choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.
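The Bayesian approach I have in mind can be sketched with a toy conjugate update (my illustration, with hypothetical counts): start with a prior over the failure rate of discriminating firms, then revise it after observing outcomes, no random assignment required.

```python
# Toy Beta-Binomial update with made-up counts -- not Pager's data.
# Observational data still shifts the posterior belief.

def update_beta(alpha, beta, failures, survivals):
    """Conjugate update: Beta(alpha, beta) prior + Binomial data -> posterior."""
    return alpha + failures, beta + survivals

a, b = 1, 1                       # flat Beta(1, 1) prior on the failure rate
a, b = update_beta(a, b, failures=17, survivals=13)
posterior_mean = a / (a + b)      # (1 + 17) / (2 + 30) = 0.5625
print(posterior_mean)
```

Each new wave of high-quality observational evidence just feeds back through the same update, which is all the “revise your beliefs accordingly” stance asks for.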
Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper “out there” or just got tired of the review process. Either way, three cheers for Pager and the Soc Sci Crew.
One of the most striking arguments of Gary Becker’s theory of discrimination is that there is a cost to racial discrimination. If you hire people based on personal taste rather than job skills, your competitors can hire these better workers and you operate at a disadvantage. I think the strong version of the argument isn’t right. Markets do not instantly weed out discriminators. But the weak version has a lot of merit. If you truly avoid workers based on race or gender, you are giving away a huge advantage to the competition.
Well, turns out that Becker was right, at least in one data set. Devah Pager has a new paper in Sociological Science showing that discrimination is indeed associated with lower firm performance:
Economic theory has long maintained that employers pay a price for engaging in racial discrimination. According to Gary Becker’s seminal work on this topic and the rich literature that followed, racial preferences unrelated to productivity are costly and, in a competitive market, should drive discriminatory employers out of business. Though a dominant theoretical proposition in the field of economics, this argument has never before been subjected to direct empirical scrutiny. This research pairs an experimental audit study of racial discrimination in employment with an employer database capturing information on establishment survival, examining the relationship between observed discrimination and firm longevity. Results suggest that employers who engage in hiring discrimination are less likely to remain in business six years later.
Commentary: I have always found it ironic that sociologists and non-economists have resisted the implications of taste-based discrimination theory. If discrimination in markets is truly not based on performance or productivity, there must be *some* consequence. However, a lot of sociologists have a strong distrust of markets that draws their attention away from this rather simple implication of price theory. I don’t know the entire literature on taste-based discrimination, but it’s good to see this appear.