Author Archive

Beyond Nuance

As many of the contributors to this series will remember, the late Marvin Bressler used to amuse the Princeton grad students with jokes such as saying that all job talk questions were special cases of two general questions: “But, is it really so simple?” or “But, is it really so complicated?” In Kieran’s contribution to this forum he notes that relational work scholarship runs the risk of devolving into an endless series of works that basically ask the first question of a strawman other (be it a garden-variety economist, a behavioral economist, an embeddedness/networks economic sociologist, or whatever). A lot of this work ends up basically saying that when you dig into the details of social life you see how it’s all so much richer and more nuanced than it first appears. Much like thick description or history, this can be fascinating when applied by a talented researcher to an interesting case, but in less felicitous circumstances it quickly degrades into one damn thing after another. Even under the best circumstances, though, it’s hard to see how the “is it really so simple” research question builds up to a distinct theoretical perspective rather than a sort of atheoretical empiricism that is nihilistic towards the idea of theory-building and general mechanisms.

For some people such theoretical nihilism is satisfactory, as the whole point is delivering a philippic against the reductionist other. However, as Kieran argues, this isn’t relational work at its best, and he draws attention to work by Zelizer herself, Almeling, and Quinn that plays up the institutional and organizational context in which relational work is performed. I fully agree that it is important to treat such contexts as structured ones, and not merely as places where tacit understandings are made explicit and documented for the convenience of sociologists who later on dig through case law or other bureaucratic records. Understanding how such contexts shape relational work provides an opportunity for the school to make a positive contribution rather than just critique others.

In addition to the institutional context which many of us already do a good job of taking seriously, I think we need to take seriously the idea that relational work can be categorized and schematized. This is the first step to identifying more or less consistent patterns and contingencies in how relational work is applied. That is, going from a (valuable) sensitizing concept to an articulated theory.

In the last few years Zelizer has taken the lead in this issue with the concept of circuits: who exchanges what with whom for what else. This is an important step, but for the most part it remains a sensitizing concept, encouraging us to identify and document circuits where they occur and identify patterns among them. Fortunately, one of our sister disciplines has a long tradition of work closely parallel to circuits and has developed some sophisticated theories for understanding these issues.

Anthropology has long been seriously into issues that closely parallel relational work, but we don’t cite the anthropologists very much and are the poorer for it. Now, perhaps I am confessing nothing of more general interest than my own ignorance. Still, I have to confess that to the best of my recollection I never encountered this literature in any of my undergraduate or graduate coursework, and until recently I was mostly ignorant of it, so I suspect that my experience is not entirely unique. Likewise I seldom see this work cited in relational work publications (here’s an exception). Fortunately a few things came together for me (a deliberately thin quantitative project provided me with a windfall finding about relational work in payola, a very well-written and much-discussed ambitious and insightful book on the subject was published, and I started attending the relational models lab), and so I got interested in the anthro literature, much to my benefit.

Early versions of economic anthropology were much like relational work in that they were more a sensitizing concept or critique than an articulated positive theory with a typology of theoretical constructs and mechanisms for their interaction. So in his “Essay on the Gift” Mauss talks about all sorts of gift relationships but is mostly interested in sensitizing us to the contingent nature of market exchange. Thus while Mauss describes both peer and clientelist gifts, he doesn’t really emphasize a schema distinguishing between them, as the important thing is that gifts (of whatever variety) are not market exchange. In the 1950s, anthro saw the development of a “spheres of exchange” model in publications like Bohannan’s work on the Tiv people. In this work, Bohannan describes three ordinal categories of objects, with exchange of objects within a category being much more acceptable than exchange across categories. So traditionally a Tiv could trade chickens for beans, slaves for brass rods, and brides for brides, but to trade brass rods for either beans or brides could be accomplished only with great difficulty and what we would call elaborate relational work. In Debt, Graeber surveys a wide range of similar cases and argues that such incommensurable exchanges are never really final, being possible only on an “it’s a purchase, not a rental” kind of basis in which the qualitatively inferior good can work to service debt but the qualitatively superior principal can only be repaid in kind.
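Bohannan’s scheme lends itself to a trivial formalization. Here is a sketch in Python (the goods and sphere assignments follow the description above; the function, its labels, and the sphere names are my own illustrative shorthand, not anthropological usage):

```python
# Bohannan's three Tiv spheres of exchange, as described above. Trades
# within a sphere are routine; trades across spheres require elaborate
# relational work. Sphere labels here are illustrative shorthand.
SPHERE = {
    "chickens": "subsistence", "beans": "subsistence",
    "brass rods": "prestige", "slaves": "prestige",
    "brides": "rights-in-persons",
}

def exchange_friction(good_a, good_b):
    """Return 'low' for a within-sphere trade, 'high' for a
    cross-sphere trade that demands relational work."""
    return "low" if SPHERE[good_a] == SPHERE[good_b] else "high"

print(exchange_friction("chickens", "beans"))     # within-sphere trade
print(exchange_friction("brass rods", "brides"))  # cross-sphere trade
```

The point of writing it out is that the typology becomes something you can reason over systematically, rather than a list of colorful ethnographic exceptions to market exchange.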

The thing I find to have the most potential to move relational work forward is Alan Fiske’s relational models typology of communal sharing, authority ranking, equality matching, and market pricing.* You can and should follow the link to see what each of those terms covers, but for my purposes the really important point is that there is a typology. Moreover, the typology has a richly articulated set of contingencies and covariates, and so it rises to the level of a theory rather than just a sensitizing concept. Of course we all here recognize the “market vs. else” dichotomy, but such a dichotomy accomplishes little more than facilitating a now-tired critique of economics so as to pile up a mass of things beyond econ’s purview to serve as a sort of defensive fortification against that discipline’s occasional imperialist adventurism. To build a positive theory of non-price-theory exchange requires not just treating it as the complement to the market, but disaggregating it into its constituent varieties and identifying the systematic properties of these types. It is in this respect that we can move our own model forward by accepting the theoretical gifts of anthropology and reciprocating with citations.

* Note that in Debt Graeber has a closely parallel typology of “communism,” “hierarchy,” and [gift|market] “exchange.” As best as I can tell, Graeber and Fiske did not directly influence each other but rather they drew similar conclusions from a common research tradition.

Written by gabrielrossman

September 5, 2012 at 4:08 pm

the decline and fall of institutions


As Brayden noted recently, one of the things sociology and org theory tend to study is how institutions spread, in part because our methods are well-suited for this. For instance, Fabio’s book is about how black studies programs became established in American higher education, and my current project is about how both individual pop songs and entire music genres spread across radio. This focus on growth without a parallel inquiry into decline may lead to the impression that society is an institutional packrat, and to a large extent this is true. For instance, in Fabio’s case, while universities add new fields like black studies and genomics they rarely drop old departments like classics, which is one of the reasons it’s hard to talk to a dean or department chair for five minutes without hearing about their space problem. Likewise, the aspect of Polanyi’s Great Transformation that I find most fascinating is the notion that the state’s role in the economy generally does not arise from a coherent ideology of statism, but from the accretion of innumerable pragmatic solutions to specific problems (a famous example not mentioned by Polanyi being NYC rent control as a “temporary” measure to prevent inflation during World War II).


Written by gabrielrossman

October 20, 2007 at 9:43 pm

if at first you don’t succeed …


Yesterday the New York Times reported that FCC Chairman Kevin Martin is planning on changing media ownership regulations. This is an issue I follow pretty closely, since it was the subject of my dissertation and I know a lot of people who care deeply about it. I haven’t seen many details on what exactly he is proposing, but it sounds like he’s resurrecting some form of Michael Powell’s 2003 proposal for a “diversity index” (FCC-DI), which would treat different media within a market as fungible and allow diffuse ownership in one medium to balance out oligopoly in another. The main practical impact is that it would make it easier for a single firm to own both broadcasting and print in the same market. (Currently this requires a waiver.) The Powell FCC passed the diversity index, but it was struck down by the courts, mostly because the FCC-DI didn’t include weights for media outlet size (e.g., some rinky-dink UHF station with no ratings would count as much as a network affiliate). There are a lot of interesting angles for orgheads to see in this policy push: the perspective of org theory itself, performativity, and the role of social movements.

Org theory’s perspective on media ownership

Until the 1970s, the sociology of culture was completely dominated by functionalist and Marxist approaches, both of which mostly amounted to reflection theory. But then organizational scholars like Paul Hirsch, Richard “Pete” Peterson, and Paul DiMaggio began studying popular culture by more or less ignoring meaning and focusing on the processes through which cultural goods are produced. In a seminal 1975 article on how rock and roll replaced Tin Pan Alley, Peterson and Berger found that industrial oligopoly led to creative stagnation and that creativity was only restored when a series of exogenous shocks created opportunities for new market entrants like Chess and Sun to meet unsated demand for more regional and ethnic music. The finding was later qualified by Lopes and again by Dowd: the effect is ameliorated if the oligopoly decentralizes creative control to low-level managers and subcontractors. Thus org-theory-inspired sociology of culture implies that oligopoly will be bad for the culture if the oligopoly centralizes control. Some research by Eric Klinenberg suggests that this may be the case. The FCC’s main stated goal in local media markets is the integrity of local news. Klinenberg has found that increasingly, when a single firm owns multiple news outlets in a single market, it tends to pool journalists across the different operations, which not only centralizes control of the outlets but creates a convergence of journalism styles. Two co-owned TV stations in LA have even made this the basis of an advertising campaign. Parenthetically, it gives me the creeps that the ad looks like a still from Apocalypse Now. Are they trying to imply that they just strafed the NBC station?

(btw, the org colonization of sociology of culture continues to the present. For the last five years leading economic sociologists like Brian Uzzi, Olav Sorenson, and Ezra Zuckerman have been doing some very cool work using data from the culture industries, and like the earlier generation they bracket the issue of meaning, though unlike them they mostly focus on social networks rather than ownership).


The big deal in media economics is the Hotelling-Steiner effect, which holds that a competitive market will lead to excessive concentration of goods aimed at the median consumer. Imagine that in a market there are two taste groups called A (worth 80% of revenues) and B (20%). If you have two firms serving this market, both will try to serve group A and neglect group B, since half of 80% is greater than all of 20%. Ironically, a monopolist with two properties will be better at serving both A and B because by directing one property at A and the other at B it can capture all of both markets. So this leads to the Gekko-esque conclusion that monopoly is good. The FCC takes this theory very seriously in crafting and justifying policy, especially for radio. There has been something of an arms race of studies, with the FCC first giving a grant to Joel Waldfogel to demonstrate the effect empirically, then the Future of Music Coalition’s Pete DiCola criticizing the first study (basically he demonstrated that there is too much similarity between market positions in radio to call them meaningful variation), and finally the National Association of Broadcasters giving a grant to Andrew Sweeting to replicate the Waldfogel study in a way that was sensitive to DiCola’s methodological critique. (I should note that I think it is entirely ethical to take funding from interested parties so long as there’s no embargo clause, and I think all three economists are talented and honorable.) In part this is a scholarly debate over theory, but it wasn’t only that, for the funding was motivated by the impact it might have on policy. Whether the Hotelling-Steiner effect can be demonstrated doesn’t just affect whether an article will get published, but whether some very large corporate mergers can go through.
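The arithmetic of the two-firm example works out in a few lines (the 80/20 revenue split and the firm counts are the hypothetical numbers from the paragraph above, not empirical estimates):

```python
# Revenue shares for two taste groups in a hypothetical media market.
REV_A, REV_B = 0.80, 0.20

# Competition: each of two independent firms picks the group that
# maximizes its own revenue. Splitting group A yields 0.40 per firm,
# while serving B alone yields only 0.20, so both firms pile onto A.
payoff_serve_A = REV_A / 2   # 0.40 per firm if both serve A
payoff_serve_B = REV_B       # 0.20 for the lone firm serving B
both_choose_A = payoff_serve_A > payoff_serve_B

# Audience served under competition: only group A gets programming.
competitive_coverage = REV_A if both_choose_A else REV_A + REV_B

# Monopoly: one owner with two properties internalizes cannibalization,
# points one property at each group, and so serves everyone.
monopoly_coverage = REV_A + REV_B

print(both_choose_A)         # duplication at the median taste
print(competitive_coverage)
print(monopoly_coverage)
```

The toy version makes clear why the result hinges on the relative sizes of the taste groups: if B were worth more than half of A, the second firm would defect to B and competition would cover everyone.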

(FWIW, my own reservation with the Hotelling-Steiner research tradition is that it treats market positions as point masses rather than niches with variable breadth. Therefore even Sweeting’s very sophisticated methods confuse cutting up a field more narrowly with enlarging the scope of the field. In plain English, I think empirically most of what the economists are capturing is that under oligopoly Adult Contemporary stations are splitting into the narrow subformats of Hot AC and Soft AC, which is different from the true increase in diversity implied by the original theory, which would be something like the redundant AC stations switching to some completely novel format. My hunch as to why this is so is that a) truly novel market positions are risky and b) the radio chains are more interested in TSL (time spent listening) than cume (cumulative audience).)

The most concrete way that social science has shaped media policy is the Prometheus v FCC case that struck down the original FCC-DI. In the Telecommunications Act of 1996, Congress delegated to the FCC the authority to periodically review media ownership policy and relax constraints that it finds to be inappropriate. Basically, the 3rd Circuit Court interpreted this to mean that the FCC can’t act arbitrarily but must make decisions that are supported by social science. Indeed, the court explicitly said that it was not forever rejecting the FCC-DI, just requiring that the FCC present more social science evidence to justify it. The FCC had in fact presented a batch of studies to justify the FCC-DI, and the court found some of them convincing on their own terms, but it found that treating media outlets as equivalent regardless of revenues or circulation was not justified (or even addressed) by the social science evidence. I think the lack of weights might be justified by classic liberal political theory, since an option is still an option even if few people avail themselves of it, but the interesting thing is that the court was basically holding that the issue had to be decided by the facts, and the facts had to be decided by economists. This message was heard loud and clear by the media policy community. After Prometheus, the FCC commissioned a new round of studies. On the other side, the Ford Foundation gave a decent-sized grant to the Social Science Research Council’s program on “Necessary Knowledge for a Democratic Public Sphere,” which is an academic program but has a very strong emphasis on policy. For instance, for my own Necessary Knowledge grant I’m not just doing research on payola, but am partnered with the Future of Music Coalition to disseminate the findings and turn them into policy.

Social movements

It’s an understatement to say that concentrated media ownership is unpopular. Both at public meetings and in correspondence, the FCC has gotten a huge volume of complaints that is literally 99% opposed to media conglomeration. The interesting thing is that these complaints mostly come from the left, but a nontrivial fraction are from the social conservative right. My intuition is that for people with very strong views about politics, the media serves as an all-purpose whipping boy to explain their own political failure by recourse to a version of false consciousness theory. This sentiment is captured in the media reform movement’s proverb that whatever your issue is, your other issue is the media. I think this is the only way to explain the coalition of the progressive left and the social conservative right in opposing concentrated media ownership, since it doesn’t seem like they could both be right about the consequences of reform for their chances on other issues.

There’s actually a traveling road show of sorts where you can witness this. After the FCC-DI debacle, the FCC commissioners have traveled around the country several times to have open-mic public meetings. Theoretically these are to learn what the public is concerned about with the media, but as demonstrated by Martin’s resurrection of cross-ownership deregulation there is very obviously no impact on policy whatsoever. Rather, the real function is some kind of medieval-style penitence where the commissioners atone for their sins by traveling from city to city and allowing aggrieved commoners to verbally flagellate them for hours on end. Last year I attended the road show engagement at USC and it was fascinating. It was held in the middle of a workday, but it managed to completely fill a huge auditorium to standing room only, as well as much of a closed-circuit TV overflow room. (And despite being at a school, very few of the people there were students.) In the line to get in, there were media reform activists passing out forms to structure your open-mic testimony that basically read “Hi my name is (X) and I represent (insert ethnic or other identity group here). I’m here to tell you how big media corporations are hurting my community ….” As an elitist technocrat of the sort that Prometheus implied should be setting policy, I’ll be blunt: many of the complaints were totally crazy. A fairly typical bit of testimony: one guy told the FCC a story about how he spent a few weeks traveling across the state trying to raise awareness of some obscure issue, but not a single reporter showed up to any of his events. It is my professional opinion as a media scholar that the reason nobody covered this guy’s road trip is that it wasn’t newsworthy, and this would be the same even if the media were all run by journalist soviets instead of Rupert Murdoch. On the other hand, many complaints voiced against media concentration were extremely rational. For instance, at the same FCC open-mic event there were a couple dozen people from Hollywood demanding that the FCC restore a version of fin-syn so as to reduce vertical integration between studios and networks and allow television producers more bargaining power vis-a-vis the conglomerates.

The bottom line

The devil is in the details, but in principle viewing conglomeration at a market level rather than a market-medium level makes a certain amount of sense — it all depends on how the index is calculated and what thresholds are set as policy triggers. However, viewing it purely as a political matter, there is too much grassroots and Congressional opposition to anything that smells like allowing media concentration for it to work. I mean, a Congress that has flirted with restoring the fairness doctrine (which would effectively censor Rush Limbaugh) is hardly going to be receptive to gutting the remnants of media antitrust policy. I imagine at the very least Congress will hold hearings opposing Martin’s proposal and very possibly outright reverse it through legislation freezing current ownership policy. They wouldn’t have a veto-proof majority though, and lately Bush hasn’t been too concerned about whether his vetoes will be popular. If the FCC does pass it, and if Bush does veto a Congressional reversal, there’s still the issue of the courts. If the new proposal includes weights for ownership size it will probably be allowed by the courts, though it’s just as likely that they would issue a stay for about a year while they ruminate on it. In the meantime we’ll have an election, and the FCC may very well reverse itself, or Congress can pass a re-regulation bill again without it getting vetoed.

Written by gabrielrossman

October 19, 2007 at 5:29 pm

the coming anomie


via Megan McArdle I saw this Nicholas Eberstadt essay on Chinese demography and the one-child policy. The two most remarked-upon consequences are the abortion and infanticide of many millions of girls and the pending catastrophic dependency ratio of a society where retirees will substantially outnumber able-bodied young people. The thing that most interested McArdle is that the policy is rapidly creating a system where a child has parents and grandparents but no siblings (and, for that matter, only a few second and third cousins). In the near future, the only intra-generational tie a Chinese man is likely to have is marriage (and that only if he’s lucky, given the 1.3 sex ratio). But this marriage will not link him to any brothers- or sisters-in-law and is a dead end as far as the intra-generational kin network goes. So in the very near future, within any given generation, familial ties among the Chinese will consist entirely of isolates and dyads. You see a phase transition to much larger network components when the mean degree per node exceeds 1, and with only marriage to tie them to their own generation (and marriage not being universal), the Chinese fall well below this threshold. This implies that a historically family-oriented society will soon have no families.
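The threshold claim is the standard random-graph result that a giant component emerges once mean degree passes 1. A toy simulation makes the jump visible (this is a generic Erdős–Rényi sketch under that assumption, not a model of Chinese kinship networks; the function name is my own):

```python
import random

def largest_component_fraction(n, mean_degree, seed=0):
    """Share of nodes in the largest connected component of a G(n, p)
    random graph, with p chosen so the expected degree is mean_degree."""
    rng = random.Random(seed)
    p = mean_degree / (n - 1)
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # union the two components

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Below the threshold: only small, scattered components.
print(largest_component_fraction(1000, 0.5))
# Above it: a giant component absorbs a large share of everyone.
print(largest_component_fraction(1000, 1.5))
```

With mean degree 0.5 the biggest component stays a tiny fraction of the population; at 1.5 it suddenly spans over half of it, which is the qualitative difference I am pointing at between Chinese and European kin networks.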

While neither McArdle nor Eberstadt mentioned it, low fertility is also an issue for every rich society, ranging from near-replacement for the United States, France, and Scandinavia to deathbed levels for Russia, Japan, and Southern Europe. So does this imply that everything I said for China applies to Spain as well? Kind of. First, many low-fertility countries have equally small numbers of boys and girls (probably because they have public pensions), so they’ll probably avoid the crime wave that the Chinese have coming. Second, the unique thing about China is that it has low mean and low variance for fertility, with a theoretical range of 0 to 1 (though in fact three-baby families are not uncommon in rural China). In contrast, most other low-fertility countries have low mean and high variance, so households with no babies and with two babies are more common than they are in China. Spaniards may have even fewer babies than the Chinese, but paradoxically a Spanish baby is more likely to have a sibling than is a Chinese baby. The upshot is that while intra-generational family ties are going to disappear in China, they will only weaken (a lot) in Europe. More technically, I’m making a confident prediction that in 30 years mean component size for kin networks will be appreciably higher in Spain or Italy than in China.

Well, so what? Who needs brothers, sisters, brothers-in-law, sisters-in-law, nieces, nephews, and cousins? It’s not as if we can’t substitute non-familial friends. There are two problems with this. First, family ties are unique in that they can’t be replaced (you can stop talking to your brother, but you can’t recruit a new brother to replace him), and this makes them very important in low-trust societies. It could be that a lack of relatives will drive people to trust strangers out of necessity and you’ll see a decline in corruption, or it could be that they just won’t trust anyone, transaction costs will go way up, and nothing will get done. Second, in the United States non-kin strong ties are rapidly disappearing, as people are basically discussing serious issues only with their spouses and parents. While I’ve seen no evidence that this change is also occurring in low-fertility countries, if it is, then the “mass society” nightmare scenario of atomized individuals wasn’t wrong, just ahead of its time.

Written by gabrielrossman

October 17, 2007 at 6:11 am

sing goddess, sing of the rage of sylvester (1.1)


[1.1 because a previous version of this post didn’t load the graphics right. the substance is the same]

You may be able to judge a culture by its epic poetry. Where the Greeks had the Odyssey, we have “Trapped in the Closet.” Rather than “wise old Nestor” we have “crusty headed hoes” and our composer is not a blind old bard but a cradle-robbing water sportsman.

Anyway, anyone who’s seen “Trapped in the Closet” knows that the appeal is not in the music itself (which is boring and repetitive) nor even really in the lyrical style (despite the occasional gem) and it is most definitely not in the filler (a good part of the series consists of the characters playing phone tag). Rather, the appeal of the series is in the plot, which consists not so much of new actions as of continuous revelation of an incredibly intricate web of duplicitous relations which were formed in the backstory. As such, I have been having some fun mapping the relations in “Trapped in the Closet.” I define a tie as either observed or hearsay direct interaction between two characters. Presence in the same room or organization doesn’t count, nor does the dream sequence. [Spoilers abound from here on out.]

This Pajek graph shows the relations between the characters. The numbers on the lines represent the episode in which the dyad is revealed so characters with all high numbers like Pimp Luscious were introduced late. In other cases you can have two characters who were both introduced early, such as Chuck and Cathy, but for whom no relationship is revealed until fairly late.

[big version of the same graph]

The graph has a very dense core and two peripheral structures. The structure on the right consists of Pimp Luscious, Bishop Craig, Reverend Mosley James Evan, and the Peace Within Choir. While entertaining in and of itself (why is the pimp in church in the first place? and did you notice the blind prostitute?), the scene involving these characters is as peripheral to the story as it is to the network. The structure at the top is Rosie, Rudolph, and Mirna. When Rudolph overhears that Chuck is in the hospital with “the package” (a venereal disease), he tells Rosie, who in turn tells Mirna (a new character). Although the network structure implies that the story should end here, as Mirna is at the periphery, in chapter 22 it somehow gets back to all the main characters, and this widespread knowledge of Chuck’s “package” will probably form an important plot point in the forthcoming chapters.

Given that Chuck has an STD, the stakes of the graph are pretty high. Anyone who has heard the hip-hopera knows that this is not just Chuck’s problem, but potentially involves several other characters. However when you isolate the graph of sexual ties you see that this really only affects the core characters introduced in the first few episodes. Furthermore, we know James the cop and Gwendolyn used a condom at least once. If we make the further assumption that they used them consistently and properly, then James, Bridget, and Big Man are not at serious risk of contracting “the package.”

In addition to sexual ties, there are basically two other types of network ties in “Trapped in the Closet.” Of course, there are people who merely know each other or talk to each other, which you can consider a sort of default tie. However there are a lot of threats of violence (and to a lesser extent, actual violence) in “Closet.” My guesstimate is that about a third of the conversations occur at gunpoint. You can then define a tie as 1 “talk/know,” 2 “sex,” 3 “violence,” and 4 “sex and violence.” This resulting graph demonstrates that the peripheral characters not only are introduced later and have fewer ties, but they tend to have weaker ties, consisting only of talk. A look at the core structure shows that the main characters tend to all have (diseased) sex with and/or threaten one another so if you have to live in “Closet” you’re probably better off living at the periphery. [again, right-click and “view image” to see the whole thing].

[big version of the same graph]
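The four-value tie coding lends itself to a simple valued edge list. The sketch below uses a handful of illustrative edges rather than a transcription of the actual graph, and `degree_weight` is a helper of my own invention, not a standard network statistic:

```python
# Tie-strength coding from the post: 1 talk/know, 2 sex, 3 violence,
# 4 sex and violence. Edge values here are illustrative stand-ins.
TIE_LABELS = {1: "talk/know", 2: "sex", 3: "violence", 4: "sex and violence"}

edges = {
    ("Rosie", "Rudolph"): 1,    # gossip about "the package"
    ("Rosie", "Mirna"): 1,
    ("James", "Gwendolyn"): 2,
    ("Sylvester", "Chuck"): 3,  # a conversation at gunpoint (illustrative)
}

def degree_weight(edges, who):
    """Sum of tie values incident to a character — a crude measure of
    how deep in the (diseased, armed) core someone sits."""
    return sum(v for (a, b), v in edges.items() if who in (a, b))

# Peripheral characters accumulate only weak "talk" ties...
print(degree_weight(edges, "Mirna"))
# ...while core characters rack up sex and/or violence ties.
print(degree_weight(edges, "Sylvester"))
```

Scored this way, the claim in the paragraph above becomes checkable: the periphery is not just late-arriving and sparsely connected, it is also connected by categorically weaker ties.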

More generally, network analysis seems to be an interesting approach to apply to fiction, perhaps revealing findings similar to those in Steven Johnson’s coding of overlapping plot threads in Everything Bad Is Good for You. My hunch is that very few works of fiction would resemble the kind of small-world structure we see in real-world social networks. (The only small-world novel that leaps to mind is Neal Stephenson’s Baroque Cycle.) Rather, I think that just as Johnson found, works will tend to be either episodic or serial. “Trapped in the Closet” is a good example of a serial work, with very high density among an ensemble of characters; over the course of the work you are as likely to see a new tie formed between existing characters as to see a new character introduced. In contrast, episodic works will have a very large number of characters in a radiating hub structure, with the regular characters at the core and each episode having a distinct structure connected directly to the core but not to the other branches. For instance, The Adventures of Huckleberry Finn would consist of Huck and Jim at the core, with Tom Sawyer on the one hand and the dauphin and duke on the other having slightly lower centrality. Huck, Jim, and Tom would be connected to all the characters at the beginning and end of the book, and Huck, Jim, and the two con men would be connected to all the characters in the middle. However, lesser characters would be strangers to one another; for instance, the Arkansas audience for “Nonesuch” never meet the grieving Wilks family just a few pages away and a few miles down the river. Network analysis could provide an interesting coding scheme for literature (broadly defined), which could provide a way of measuring the complexity of a work and what sorts of works are valued by critics and audiences over time.

Written by gabrielrossman

October 6, 2007 at 10:37 pm

100% compatible with the previous releases


When I’m not inadvertently serving my discipline by providing negative case studies in academic etiquette, I analyze pop music radio data. In my current project, my co-authors and I are looking for patterns in the diffusion trajectories in hundreds of pop songs. Since the raw data isn’t in the appropriate form, it takes a bit of code to clean it.

The last time I ran my cleaning scripts was under Stata 9, and when I ran them yesterday under Stata 10, I got a dataset where several of the key variables had all missing values. Since the cleaning script is 574 lines of code, it took me a couple of hours to identify where the problem was, but I eventually isolated the offending line of code:
gen fpdate=date(firstplayed,"mdy")

The issue was something with how Stata handled dates. I then read the manual entry “[D] dates and time” and got such a thorough treatment of leap seconds and microseconds and atomic decay that I could imagine both Augustine of Hippo and Albert Einstein saying “you know, I think they’re belaboring the point.” Finally, I got to the part on converting strings to dates, and it looked like I was doing the right thing, but I decided to enter their code anyway:
gen fpdate=date(firstplayed,"MDY")

Yes, the problem was that the new syntax requires uppercase codes for month, day, and year, whereas the old syntax used lowercase codes. Just now, I noticed the following in the help file:

Historical note: Stata 10’s date() function is much improved over that of previous versions, and the mask is specified a little differently. In previous versions, the codes for year, month, and date were y, m, and d rather than Y, M, and D.

I actually agree that the new function is improved and if I were starting from scratch it would have been easier to code using the v10 command. (I round off events to the nearest week, which is built into v10, but to do it in v9 you have to divide by 7, force the data to integer format, and multiply by 7 again.) What I don’t get is why they couldn’t have made the improvements without changing the mdy codes to MDY.
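For what it’s worth, both gotchas exist outside Stata too. Here is a rough Python analogue (the example date is just an illustration) showing a case-sensitive date mask and the v9-style divide-truncate-multiply week binning:

```python
from datetime import datetime, date, timedelta

# Like Stata's mask, strptime codes are case-sensitive:
# %m is month, while %M would be minutes.
fp = datetime.strptime("9/29/2007", "%m/%d/%Y").date()

STATA_EPOCH = date(1960, 1, 1)  # Stata stores dates as days since 1 Jan 1960

def week_floor(d):
    """The v9 trick: express the date as days since the epoch,
    integer-divide by 7, and multiply back to snap to a 7-day bin."""
    days = (d - STATA_EPOCH).days
    return STATA_EPOCH + timedelta(days=(days // 7) * 7)
```

A lowercase `%y` here would also parse, but as a two-digit year, which is the same kind of silent mask trap.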

btw, the title to this post is ultimately from the manual but proximately from Jeremy’s post in which he describes a similar experience a few months ago.

Written by gabrielrossman

September 29, 2007 at 3:41 am



For my first couple years in grad school I seriously considered writing a dissertation on Islamic finance, but before it got to the prospectus phase I changed my topic to media ownership, largely because I realized that if learning German gave me trouble, there was no way I could learn Arabic and Urdu, and that doesn’t even begin to address the problems of getting access to the field site. Nonetheless I think it’s a fascinating subject from which people with the right skill set can draw real insights (for instance Timur Kuran’s Islam and Mammon). I think there are two interesting aspects to it: the social construction of religious dogma and the practical consequences of the rules.

(Note: I haven’t read Kuran’s book or anything else on the subject in a few years. If you notice a mistake, please say so in the comments.)

Social construction
There’s more to it than this, but the keystone principle of Islamic finance is that the Koran forbids “riba.” The word means usury, but just as in English it is ambiguous, with some people defining it broadly as all interest and others narrowly as excessive or predatory interest. This distinction is of tremendous practical importance, as on it hinges the question of whether Muslims can participate in Western-style banks or must create complicated alternative institutions. According to Kuran, the ban on riba has historically been interpreted loosely, and banks in Islamic countries often charged interest from the classical period through the recent past. In the twentieth century, certain Muslim scholars (mostly Sunnis) began to worry about interest, in part because they saw it as a way to promote their (essentially secular) ideas about post-colonial development and in part because of the general recent trend in Islam of fundamentalism displacing traditional religious case law.(1) Thus here you have a case where, in part for secular reasons, people try to find an interpretation of their religion that imposes more restrictions on behavior than had traditionally been there.

In this case the new strict interpretation does seem like a plausible reading of the relevant scriptures, but you can easily find other cases where it is not. My favorite example is Prohibition. In the late 19th and early 20th centuries, American Protestants became increasingly opposed to alcohol, mostly because of the social problems it created and in part because they associated it with Catholic immigrants. Not only were they against alcohol as citizens, but they were against it in their capacity as Christians. The trouble is that, as critics pointed out at the time, there are numerous clear positive references to alcohol in the Bible (most obviously the second chapter of John). 19th century Protestant theologians came up with a bunch of elaborate arguments to explain why we should ignore the wedding at Cana (by pretending Jesus turned water into grape juice) and instead focus on stories that put booze in a bad light, like the drunkenness of Noah.

There are other times when people swim upstream to relax dogma. A good portion of the Talmud consists of “arguing out of existence” certain unpleasant passages from Tanakh, for instance by severely restricting capital punishment to a fraction of the profligate applications of the sword and stone demanded by scripture. (In other cases, such as Kosher food laws, rabbinic Judaism makes the religion much stricter than a facial reading of the Bible would demand). The modernist/fundamentalist (aka mainline/evangelical) schism in American Protestantism most directly dates back to Victorian-era disputes about whether to use a nonliteral reading of the first two chapters of Genesis so as to accommodate evidence from Hutton and Darwin about the age of the Earth and the origin of life. More recently the big divide is over whether to ignore the half dozen or so verses that condemn gay sex or follow shifts in public opinion towards greater tolerance. The most visible consequence of this dispute is that the Episcopal Church USA is now on probation with the global Anglican Communion. The Archbishop of Canterbury has been working overtime to patch things up, but if he fails this dispute will rather quickly lead to a very bitter divorce including lots of lawsuits over church property between the newly schismatic American church and conservative congregations who wish to affiliate their churches (and church property) with conservative African bishops in good standing with the Anglican Communion.

So one of the general issues in religion is that getting from scripture to doctrine is not a straightforward, unproblematic process, but one with all sorts of vulnerability to political, social, economic, and cultural influences. This seems like a very fruitful area to apply the sociology of knowledge. The issue of riba is particularly interesting for two reasons. First, it creates a lot of room for Muslim theologians to go through gymnastics figuring out how certain financial schemes that are, at their core, fixed-rate on the principal, aren’t really interest. This is actually the dominant approach in the Muslim world, with only Pakistan taking a hardline “if it looks like a duck” position. Second, the issue closely parallels earlier Christian debates in the Renaissance about how to interpret Deuteronomy 23:20-21. Christians eventually came to the conclusion that a) the verse was superseded by the parable of the talents and b) interest represents both opportunity cost and shared risk. While Muslims have nothing comparable to the parable of the talents, they have made extensive use of the concept of risk-sharing to distinguish riba from legitimate practices.

Practical consequences
Despite the presence of alternatives, fixed-rate lending on the principal with recourse to seize collateral is the core form of finance in the West (and for that matter, much of the Muslim world). In Pakistan this is illegal, and elsewhere (including the United States and Britain) some Muslims and Muslim organizations voluntarily avoid interest. Such taboo-abiding actors do not simply save up in advance to make purchases in full, nor do they hide their savings in the mattress; rather, they turn to alternative financing arrangements.

Some forms of Islamic finance are basically fixed-rate on the principal by another name. For instance, a bank may buy a piece of equipment and lease it to a firm for a year, at the end of which the firm pays the bank the original price of the equipment plus a “usage fee.” This form of financing is allowed in most places but forbidden by really strict theologians.

So what is open to such particularly strict folks? There is a broad family of practices, but all of them involve substantial risk-sharing. If one party is guaranteed a return, then the strict people say it’s riba. So for instance, while keeping your personal assets in a savings account is forbidden (because the bank owes you points even if its investments fail), using a credit union is cool (because you only get points if the investments are profitable). Likewise, venture capital is a uniformly acceptable form of finance because the VC is not guaranteed a return. In general, for a scheme to pass the stricter rules, it has to involve the investor agreeing to share profits from some venture, with the investor getting nothing if the venture fails to turn a profit.

This implies a huge principal-agent problem in that there is an obvious incentive for a firm to conceal its profits from investors, either by skimming profits through fees directed to the management or by simply keeping two sets of books. This means that the monitoring transaction costs are extremely high in Islamic finance, as one must scrutinize a venture for its likely profitability before investing and then monitor it to see if it is accurately reporting profits. In contrast, with interest you know from the outset what you are owed, and whether you actually get it is simple arithmetic. Although I have seen no empirical evidence on this, it leads me to speculate that (controlling for country) embedded transactions and social networks are more important in strict Islamic finance than they are in Western-style finance.
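A toy simulation of the incentive problem (every number here is a made-up illustration, not an estimate): compare a lender’s average take under a fixed-rate loan with a profit-sharing contract where the firm conceals some fraction of profit before reporting.

```python
import random

random.seed(0)

def fixed_interest(principal=100, rate=0.08):
    # With debt, the lender's claim is fixed in advance and
    # verifying payment is simple arithmetic.
    return principal * rate

def profit_share(share=0.5, skim=0.3):
    # With profit-sharing, the investor gets a share of *reported*
    # profit, and the firm can skim a fraction before reporting.
    true_profit = max(random.gauss(20, 10), 0)  # assumed venture outcome
    reported = true_profit * (1 - skim)
    return share * reported

trials = 10_000
avg_share_return = sum(profit_share() for _ in range(trials)) / trials
```

With these toy parameters the profit-sharing investor nets noticeably less than the fixed 8 units of interest, and closing that gap requires exactly the costly screening and monitoring described above.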

Compounding the issue is that many Muslim countries rank extremely high on corruption indices. For instance, 7 of the top 10 countries for NYC parking tickets accrued by UN diplomats are Muslim, though diplomats from Azerbaijan, UAE, Oman, and Turkey scrupulously fed the meter. Particularly worrying is that Pakistan not only has the strictest laws against Western-style finance, but also the tenth-highest number of parking citations per diplomat in the whole UN. If we think that their financial system requires an especially high level of trust, this is a problem.


1. Occasionally you read a newspaper column suggesting that Islam needs a Protestant reformation. Such a statement demonstrates complete ignorance of both Islam and the Protestant reformation. If you want to find a historical Christian parallel for Zawahiri, Cromwell is a much closer fit than Torquemada. What these columnists mean to say is they want Islam to experience an enlightenment (no doubt one that lacks a Muslim Robespierre).

Written by gabrielrossman

September 25, 2007 at 11:08 pm

R.I.P. TimesSelect


Last week the New York Times ended its policy of charging for online access to its op-eds. There has been a lot of commentary on this (which I’m too lazy to link to) about how this was a failed experiment, how (unlike the WSJ) the NYT was giving away its expensive news and charging for its cheap opinion, etc. The interesting thing to me (and I’m not claiming this is original) is that absolutely nobody reads the NYT op-ed page because it has Bob Herbert; people read Bob Herbert because he’s on the NYT op-ed page. The reason is not that the NYT op-ed page has so much institutional credibility that we dutifully attend to it despite the lack of compelling content (like going to church even though your preacher couldn’t sermonize his way out of a paper bag). Rather, the reason is network externalities: we used to read the op-ed page because we thought other people were reading it and we didn’t want to be left out of the conversation. Although it’s hard to remember, a few years ago a good portion of the political blogosphere was a Talmud to the op-ed page’s Torah. National Review Online even had a column called “Krugman Truth Squad” devoted to fisking the columnist, an enterprise that would seem rather pointless today. After TimesSelect, this huge cultural presence of the op-ed page crashed overnight.

This is a good example of how a shock can affect a complex system. One way to think about it is to imagine that everyone else in the chattering classes had TimesSelect: would it be worth it for you to subscribe? For myself, it would probably be worth paying $50/year to converse knowledgeably with my colleagues and the entire blogosphere. However, this was not the actual situation. If a few people drop the service, this decreases the network externalities of the service, and in turn it becomes less valuable to the remaining subscribers, many of whom drop it, until you’re left with only the few dozen people who actually think the op-ed page is an intrinsically riveting read. This is a dynamic threshold model with positive feedback, of a type first described by Schelling, later generalized by Granovetter, and still later popularized by Gladwell. The original model was meant to explain residential segregation. Suppose that whites have a distribution of preferences for the minimum whiteness of their neighborhood with a mean of 70% and a standard deviation of 15%. The naive prediction would be that there would be a lot of neighborhoods that were about 70% white, but in fact there are MSAs like Detroit where almost all neighborhoods are either greater than 90% or less than 10% white (you can practically see 8 Mile from space). In Schelling’s model you start with a neighborhood that is 100% white until a few non-white households move in, moving the white rate to 98%. The vast majority of white households don’t care because they have preferences of between 50% and 98%, but a few hyper-bigoted households can’t stand their new neighbors and move out, only to be replaced by more non-white households. This upsets the whites with preferences between 95% and 98% and they move out, which pushes the rate down to 95%, which in turn upsets the 90-95% preference whites, etc.
Since you’re still in the upper tails of the distribution this all happens fairly slowly, but eventually around +1 sigma you start reaching the fat part of the curve and the neighborhood “tips,” with all the white households leaving overnight. Thus you have segregation much higher than that implied by the mean white preference (and no doubt the mean non-white preference as well). Likewise, you can imagine the NYT op-ed page going into a downward spiral where the more readers who abandon it, the less valuable it becomes to the remaining readers, who in turn also abandon it. Alternately, imagine that you subscribe to TimesSelect, but few other people do. With whom could you discuss it? Many bloggers subscribed to TimesSelect but were reluctant to link to it for fear of making the implicit demand that their readers subscribe to the unpopular service. (As Ezra Zuckerman recently observed in the comments, network externalities can not only create value but impose obligations. I for one am glad to be through with the chore of reading the NYT op-eds).
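The tipping dynamic can be sketched in a few lines. Here each agent (household or reader) draws a threshold from the 70%/15% distribution mentioned above, clipped to [0, 1], stays in only if the current share of stayers meets that threshold, and we iterate to a fixed point:

```python
import random

random.seed(42)

# Thresholds ~ Normal(0.70, 0.15), clipped to [0, 1]: each agent stays
# only if the share of others staying is at least their threshold.
N = 1000
thresholds = [min(max(random.gauss(0.70, 0.15), 0.0), 1.0) for _ in range(N)]

def equilibrium(initial_share):
    """Iterate the Granovetter-style best-response map to a fixed point.
    The map is monotone, so the orbit converges in finitely many steps."""
    share = initial_share
    while True:
        new_share = sum(t <= share for t in thresholds) / N
        if new_share == share:
            return share
        share = new_share
```

Starting near full participation the system settles at a high equilibrium, but once the shock pushes the share below the tipping point the same population cascades down to nearly zero. Note that the mean preference never changes; only the starting share does.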
So one puzzle is whether, having fallen off the paywall, Humpty Dumpty can be put back together again. I am actually inclined to think not. As shown theoretically by Banerjee and empirically by Salganik et al, positive feedback systems are fairly arbitrary in their allocation of success. If I had to guess what will eventually serve as the top dog, my money would be on the Atlantic blogs. I subscribe to several of these blogs by RSS and I would prefer any of them to any part of the NYT op-ed page. Part of it is that three of the six Atlantic bloggers are my age, but the real thing is that the Atlantic lets them write posts with no minimum or maximum length, whereas the NY Times standard of 800 words twice a week can turn even the Wolfe-ian style of David Brooks into a font of boredom. One indication of the Atlantic‘s success is that Megan McArdle is beginning to experience the backhanded compliment of serving as a focus of hatred for some of the left blogosphere in the same way that Paul Krugman used to for the right.

Written by gabrielrossman

September 24, 2007 at 3:46 pm



Yesterday at the Harvard-MIT economic sociology workshop, I saw David Brady from Duke present some work on an approach to poverty that takes both micro (human capital, family structure, etc.) and macro (welfare state, economic growth, etc.) factors seriously. In a nutshell, he finds that in the OECD, left-wing governments imply a large welfare state, which implies not only low poverty but also ameliorates the micro predictors like female-headed households and low education. Furthermore, he found that the main effect of unionization is mediated by the state (i.e., collective bargaining is peanuts and the real action is unions getting lefties in office). Brady cast the research as testing liberal economic theory (growth solves poverty by increasing demand for labor), structural theory (liberal theory is basically right but it doesn’t work for all groups), and power relations theory (poverty is solved when left-wing parties take power and solve it), with the results most supporting power relations theory, moderately supporting structural theory, and not supporting liberal theory at all. He had some very well designed models and good data and basically did great stuff with the error terms, intervening variables, and all that, but there was a serious problem, at least rhetorically, and possibly epistemologically as well.

The problem was the dependent variable. Brady measures poverty as a binary variable defined as disposable income less than half of the context-specific median. This means there are two ways for “poverty” to increase: for the bottom to lose disposable income or for the median to gain it. This leads to conclusions that are a bit hard to swallow on their face, like that Ireland has recently experienced a sharp increase in poverty, whereas the conventional way to describe that island is that in the last twenty years it has experienced explosive economic growth and now has large positive net migration. Despite the counter-intuitiveness of a Celtic Tiger whose poor were better off when the economy was in the toilet, the definition of <.5contextmedian(disposable) is very defensible based on a large and solid literature across the social sciences (which Brady knows infinitely better than I do) on both the bidding up of scarce positional goods and the less tangible, more subjective aspects of relative deprivation. Basically, it is true that, as the saying goes, America is a country where the poor people are fat, but being fat doesn’t mean that, at least in some meaningful sense, they aren’t really poor.
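The Ireland point can be made concrete with a toy simulation (the income distribution, growth rates, and cutoffs are all invented for illustration): give everyone a raise, but a bigger raise at the top, and relative poverty rises while absolute poverty falls.

```python
import random
import statistics

random.seed(1)

# Hypothetical boom: every income grows, the top half grows faster.
before = [random.lognormvariate(10, 0.5) for _ in range(5000)]
med = statistics.median(before)
after = [y * (1.5 if y > med else 1.2) for y in before]

def relative_poverty(incomes):
    """Brady-style measure: share below half the contemporary median."""
    line = 0.5 * statistics.median(incomes)
    return sum(y < line for y in incomes) / len(incomes)

def absolute_poverty(incomes, line=15_000):
    """Fixed-line measure (the line itself is an arbitrary assumption)."""
    return sum(y < line for y in incomes) / len(incomes)
```

Every person in this toy economy is strictly better off after the boom, yet the relative measure reports more poverty while the absolute measure reports less, which is exactly why the choice of measure does so much work in the argument.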

Nonetheless, even if you think this definition of poverty has a lot going for it, there is an inescapable sense that we’re witnessing a bait-and-switch where we’re measuring inequality and calling it poverty. It’s a lot less interesting to demonstrate that left-wing parties reduce inequality than to say they reduce poverty. At least two people in the room (myself and the guy who beat me to asking the question) were thinking this and Brady said he gets that question all the time. His answer was basically that absolute measures of poverty are both philosophically unjustifiable and methodologically problematic in comparative research because of things like which basket of goods (notably health care) the state provides directly. I thought this was a convincing argument for making <.5contextmedian(disposable) his preferred measure but a rather weak and defensive reason for making it the only measure. A big part of the problem is that he cast his project as testing liberal economic theory vs. power relations theory, but he uses a definition of the outcome preferred by the latter. From a liberal perspective, this is a cop-out since they never claimed that they would compress the income distribution, only that they would promote Pareto-efficient growth. You can use happiness research to demonstrate that this goal will not maximize subjective utility of the bottom quartile, but you can’t use it to demonstrate that it won’t succeed on its own terms. To see if liberal economics accomplishes what it wants to accomplish, you need a liberal outcome; whether it is right to want to accomplish that is a different question.

More generally, say that you build an analysis on some assumptions that you really like and have all sorts of good reasons to favor, but that some people are just too stubborn to take seriously. If these people are just marginal cranks, you can ignore them. But if they are not, then you have a problem. You can either convince them to adopt your assumptions (good luck), or you can, for the sake of argument, adopt their flawed assumptions and see if your results are robust. If they are robust, you win, since even on their terms you’re still right. If they aren’t robust, then you have a problem, but at least you’ve narrowed and clarified it. It’s not necessarily worth doing this if it would be extremely cumbersome (radically different model specification, nontrivial data collection), but if it’s something simple (log something, use Huber-White standard errors, break a continuous variable into a dummy set to look for nonlinear effects) then there’s really no excuse. You don’t have to make the main presentation their way, but there’s nothing so convincing as a footnote saying “I tried it the other way and the results are robust.” Basically, when you’re dealing with some obstinate asshole (especially if that asshole is a peer reviewer) it’s a good idea to be the adult and let them be the stubborn one rather than engage in a dialogue of the deaf. Would it really kill you to say, I still think I’m right, but here’s what it looks like if we assume you’re right? In the specific case of Brady, I think his research is solid, but it would be devastating to this line of criticism if he added an appendix where he ran all his models the same way but with an absolute definition of poverty, showing that his findings don’t hinge on the relative-deprivation assumption. (Or do they? There’s only one way to find out.)

Written by gabrielrossman

September 20, 2007 at 4:42 pm

speaking of legitimacy


My last post may give the impression that I think “legitimacy” is over-rated. In fact, I think one need only drive a couple hours down the 405 to my sister campus in Irvine to see how important it is. UCI has recently caught a lot of flak for first hiring Duke (and former USC) law professor Erwin Chemerinsky as its inaugural law school dean and then rescinding the offer because he was too left-wing. (According to the LA Times, UCI is now trying to dig itself out of the hole and rehire him.) The funny thing is that while Chemerinsky is reliably left of center, he’s no Ward Churchill. For instance, the apparent last straw for UCI was that Chemerinsky wrote an op-ed opposing a California proposal to reduce the number of appeals available to death row prisoners — certainly a debatable position but not quite Maoist, and in any case well within his expertise as a con law professor. Likewise, while he’s not a strict constructionist, neither is he a critical legal theorist. So basically, UCI hired an outspoken (but mainstream and respected) liberal, then panicked and fired him.

UCI has caught a lot of flak for this. One indication is that conservatives (who presumably would have been the constituency for dropping Chemerinsky) have uniformly and aggressively condemned the move. From an org theory perspective, we clearly have an organization that has experienced a (self-inflicted) blow to its legitimacy, and the interesting thing will be to see what the consequences are for the embryonic UCI law school. At least one top scholar who had planned to follow Chemerinsky to UCI is now balking, and there has been a lot of speculation that UCI will find it impossible to find a good alternative candidate willing to accept the now tainted dean’s chair. At Volokh Conspiracy, Stuart Benjamin has speculated that this was an attempt by a minority of the regents to sabotage the formation of the school. I think such an explanation is too clever by half, but it demonstrates the extent to which the Chemerinsky matter has completely delegitimized the new law school and, to a lesser extent, the whole of UCI.

It’s further worth thinking about why exactly the legitimacy problem will hurt UCI. It’s hard to believe that there will be a direct impact on funding or that high school seniors will suddenly prefer UC Riverside or San Diego. Rather, it’s pretty clearly the case that the issue here is labor, which tends to be very romantic about academic freedom. When you step back and look at it, this is pretty noteworthy. I find it difficult to believe that, say, MBAs would eschew McKinsey if it refused to hire a division head who liked to write op-eds. On the other hand, I think it’s very clear that journalists would raise hell if their publishers tried to micro-manage the valence of their news. The newspaper union was pretty upset at even the contingent prospect that this might happen if Murdoch bought the Wall Street Journal, and it helped set up structural barriers against this. (Journalists’ professional obsession with “objectivity” is one of the reasons I don’t believe corporate ownership biases the valence of the news to any appreciable extent.) You could probably look around at any field that is dependent on skilled professionals and see how those professionals insist on a certain level of legitimacy. Maybe there’s something to the institutional vs. technical sectoral distinction after all.

Written by gabrielrossman

September 16, 2007 at 7:59 pm

who’s more cultural? pt. 2


Anyone who has taken graduate-level sociological theory remembers plotting different theoretical perspectives in Cartesian space where one axis is culture/structure and another is micro/macro. A few months ago, Omar blogged that population ecology is usually plotted incorrectly on such a graph, since the allegedly hyper-structural theory of population ecology is, in its own wonky way, more social constructionist than neo-institutionalism. I recommend re-reading the post, but to briefly recap, the key hypothesis in ecology is that density (i.e., number of competitors) predicts organizational failure along a u-shaped pattern. The reason that high density predicts failure is straight out of first-semester micro-economics, since as the number of firms approaches infinity the marginal profit approaches zero (and marginal profit approaches the monopoly return as the number of firms approaches one). For this reason, micro-economics would predict not a u-shaped but a monotonic relationship between density and failure. However, pop ecology folks observe a high death rate at low density, which they explain with the essentially cultural logic that such sparse niches lack “legitimacy.” While I’m no expert on population ecology, my impression is that legitimacy is a purported explanation for a finding about density and failure, not a finding itself. That is, it’s not like there are a bunch of studies that do a path analysis where legitimacy is an intervening variable (between density and failure) and legitimacy is directly observed by, for instance, content analysis of the business press (“Widgets, the Industry of the Future?”).
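The u-shape the ecologists have in mind can be sketched as two opposing terms: a legitimation effect linear in density and a competition effect in density squared (the coefficients below are arbitrary illustrations, not estimates from any study):

```python
# Stylized density dependence: failure falls with density at first
# (legitimation) and rises again at high density (competition).
def failure_rate(density, a=1.2, b=0.04, c=0.0004):
    legitimation = -b * density      # taken-for-grantedness reduces failure
    competition = c * density ** 2   # crowding increases failure
    return a + legitimation + competition

rates = [failure_rate(d) for d in range(1, 101)]
trough = min(range(len(rates)), key=rates.__getitem__) + 1  # safest density
```

Failure is highest at the sparse and crowded extremes and bottoms out at an intermediate density (here at b/2c = 50 firms). The curve itself is the robust empirical finding; the dispute is over how to interpret the first term.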

I agree with Omar, but what got me thinking is: could it have been otherwise? Is this little-remarked cultural aspect of population ecology essential or accidental? Since the most conspicuously cultural aspect of the theory is the high rate of failure at low density, another way to put the question is whether Hannan and Freeman could have found another mechanism to explain this rather strong empirical finding.

In fact, I think they could have. Moore’s Crossing the Chasm is a huge bestseller in marketing. Chasm is a prescriptive work about marketing what Christensen calls discontinuous innovations — products that are unlike anything before. A simple way to think about it is that these products create a new industry, not necessarily at the 4-digit SIC level but (if there were such a thing) at the “5-digit” level. I find the book a fascinating read in part for all the usual reasons that business guru books succeed (storytelling, heuristics, etc.). What’s really fun though is that Moore knows very little of the academic org theory literature and what he does know he mangles; for instance, he misidentifies Ryan and Gross’ hybrid corn study as being about potatoes. However he is extremely bright and has had a lot of experience, so he constantly reinvents the wheel, occasionally getting things subtly different from the academic literature. Remember that the book is intended for firms who are creating their own field (which a population ecologist would describe as a niche with a density of one firm) and the central point of the book is that this is indeed a hazardous place to be. So Chasm represents an independent attempt to explain the same problem as Hannan and Freeman.

Moore has a variety of reasons for why low density is so hazardous. Some of them could be described as cultural or legitimacy, such as explaining to your customers what your product is and why it is useful. However, by far the most important thing to him is that new products lack a system of ancillary support products and services. For instance, if you create a database engine, the database is not really useful until templates, consulting services, and translators are built into it and the relevant end-users learn to use it. Third-party vendors will not provide these services unless there is a critical mass of an installed user base. (Most of the book is about breaking this catch-22). Note though that this is fundamentally an argument about network externalities, not culture or legitimacy. So it is possible to provide a coherent explanation for the failure rate at low density that does not involve culture. Although it is in principle an empirical question, until I see the evidence, I’m inclined to believe Moore’s experience with the nitty gritty over Hannan and Freeman’s speculation about how to interpret a beta. (I don’t mean this as a criticism of H+F since the precise mechanism for low density failure is less interesting than the fact itself).

So given that Hannan and Freeman could have come up with a structural explanation like Moore, the question is then why did they come up with legitimacy? My speculation is that it was just the general intellectual zeitgeist of the period. The 1980s was broadly the period of the “cultural turn.” Both resource dependency and institutionalism had reinvigorated org theory and their emphasis on legitimacy provided a handy off-the-shelf explanation. At the time although network externalities were big in economic history, they hadn’t shown up much yet in sociology.

Any alternative ideas?

Written by gabrielrossman

September 15, 2007 at 11:33 pm



I’m beginning to think that I should write a little theme song to accompany my guest-blogging: “a spoonful of sleaze makes the theory go down/ the theory go down/ in the most delightful way!” And for all you UCLA undergrads, yes, my lectures are a lot like my posts. Register for SOC-M176/CS-M147 early, kids, because sociology classes tend to fill up on the first URSA pass. Anyway, continuing my ongoing effort to cheapen this august blog with pretentious exegesis of salacious trash, today’s post is on why we care about celebrities. I was recently asked this question by a documentary filmmaker, and while we never managed to schedule the interview, I did ruminate on it and figured I’d share my notes with y’all. My reasoning won’t be terribly surprising if you’ve followed my previous posts, but I think there are basically three things explaining the pervasiveness of celebrity gossip: network externalities (demand), human nature (demand), and supply. Read the rest of this entry »

Written by gabrielrossman

September 14, 2007 at 4:12 pm

what can bathroom cruising contribute to org theory?


(Note: I wrote this post before Tyler Cowen scooped me on part of the analysis. My inner economist told me to discount sunk costs and delete the post, but my outer sociologist never took micro and doesn’t understand what “sunk cost” means).

The big story a couple weeks ago was that Senator Craig (R-Idaho) was busted soliciting sex from a police officer in the Minneapolis airport men’s room. While there is no end of ways to enjoy this story, like a lot of people the thing I found really fascinating was learning the technical details of cruising for anonymous gay sex in public places, as reenacted in this Slate video. Basically, the initiating party discreetly glances through a gap in the partition to assess the potential partner’s attractiveness, then waits for third parties to vacate the restroom and sits in the neighboring stall. That’s more or less what I’d always imagined, but the interesting part comes next, where the initiator discreetly taps his foot then waits for a response in kind. The next step is for them to rub their fingers along the base of the partition. This is as far as Senator Craig got before being arrested, but apparently the typical escalation is then for genitals to be flashed under the partition and then for sex to begin either under/through the partition, in one of the stalls, or at some new location.
(I never took graduate-level qualitative methods so I can only guess how much of this dates back to that ethically infamous tome, Tearoom Trade.) Read the rest of this entry »

Written by gabrielrossman

September 13, 2007 at 6:47 pm

ethanol and Roman imperialism


In Monday’s post I used information cascades to explain why two major hip hop releases came out on the same day. One of the key implications of cascades is that they enormously magnify the impact of the first movers in a system. In today’s post I’m going to exploit this property to explain two serious topics: American energy policy and the process by which Rome went from a republic to a dictatorship between 133 BC and 31 BC.

From any kind of straight cost-benefit perspective, corn ethanol is a really profoundly bad idea. Cars get bad mileage with it, it’s really expensive, it’s driven up the cost of corn so much that Fidel Castro and Hugo Chavez find it very easy to stir up anti-yanqui sentiment, and between deforestation and processing it’s more or less a wash on CO2. (A much simpler and better policy for reducing foreign oil and CO2 is a carbon tax, coupled with a revenue-neutral reduction in the comparably regressive payroll tax, then let the market figure out the details). So, putting aside the null hypothesis that Congress is irrational, why would it heavily subsidize such a pointless boondoggle? The answer is that while ethanol is bad for the country as a whole, it’s good for corn-producing states like Iowa. But it’s not obvious why Iowa (with its seven electoral votes, two senators, and five house members) should be able to shackle the rest of the country to the moonshine economy; even if it formed a bloc with Nebraska (five electoral votes), it’s still nowhere close. Nonetheless, every four years presidential primary candidates sweep in on the state and pander to Iowa voters with nonsense about how bio-fuels will support the American farmer and achieve energy independence. Any candidate who refused to do this would almost certainly lose the Iowa primary. Read the rest of this entry »

Written by gabrielrossman

September 12, 2007 at 3:50 pm

culture as a fruit fly. e.g., Kanye West vs. 50 Cent


I originally started studying culture because I was interested in questions about hegemony and the news, but in grad school I drifted into studying radio because the data is a lot better. Even if you don’t find pop music intrinsically interesting, you can use it to cleanly identify answers to some very cool theoretical questions. Every week many songs come out, and there are thorough and reliable (albeit proprietary) datasets that detail how many CDs are sold and how many radio stations play these songs. Geneticists study drosophila because a generation lasts a week instead of, say, seventy days for mice or twenty years for people. Likewise, I study radio because it takes a few months for a song to become a hit whereas it can take decades for smoking bans to be adopted by a majority of municipalities or AA/EEO officers to be adopted by a majority of large firms.

You can use examples from pop culture all around us to develop org theory. (So, yes orgheads, you can claim your cable subscription as a business expense, though no, I don’t have the cojones to do this myself). One place you can see this is with the enormous amount of hype about tomorrow’s release of new records by Kanye West and 50 Cent. The latter has threatened to retire if he fails to outsell the former, and on the cover of the current Rolling Stone the two appear in the dueling-profiles pose associated with boxing posters. By this point, the fact that they’re releasing on the same day has created enough hype that it may redound to the benefit of both, but originally it was actually seen as something of a problem for them. Generally speaking, you want to release your products when there is little competition. For instance, in a 2006 ASQ paper, Sorenson and Waguespack found that not only do films do better when they open against weak rival films, but studio bosses allocate such attractively vacant release dates as favors to directors with whom they have embedded ties. So it’s reasonable to assume that Kanye West would prefer to be releasing on a day when 50 Cent was not, and vice versa. This is especially so when you consider that both are hip hop musicians and thus going after roughly the same market. (Neither of them is losing any sleep over Kenny Chesney’s new record). Finally, tomorrow is September 11 and there have been some questions as to whether it is appropriate to release major records on what is arguably a national day of mourning, the memory of which inspired two ongoing wars. So when there are 364 perfectly good other days on the 2007 calendar, why did West and 50 Cent have to choose the same day?

The first thing to note is that by industry convention, records (and books) are released on Tuesdays, just as movies are usually released on Fridays. Breaking this convention would create all sorts of logistical headaches while only providing, at most, three days of distance from competitors. But, taking for granted that records can only be released on Tuesdays, this still leaves open another fifty-one Tuesdays. Since Kanye West and 50 Cent are both easily in the top ten for hip hop and probably the top twenty-five for pop music generally, this still leaves plenty of Tuesdays for them to avoid not only each other, but any other competitors of comparable stature. This implies that there is something that makes September 11, 2007 more desirable than your garden-variety Tuesday such that they would rather compete on that particular date than each dominate the market on separate, but lesser, days.

The thing that makes tomorrow special is that it is exactly fifteen weeks before Christmas day. Record labels routinely release all of their best albums in early to mid fall for this reason. (They release their least promising material in January and February). This is worth picking apart since traditionally the Christmas season doesn’t begin until the day after Thanksgiving, and even in our current orgy of debt-fueled consumerism, one rarely sees Santa Claus before Halloween. The naive view would be that to take advantage of Christmas spending the record labels should release music in late fall, so we need a way to explain why the release is about ten weeks premature.

It’s probably best to start with Christmas day and work our way backwards. In a 1993 AER article with the perfect title “The Deadweight Loss of Christmas,” Joel Waldfogel found that people tend to value their Christmas gifts at less than the givers paid for them (the difference between utility and price is a “deadweight loss” of welfare). Furthermore, he found that social dissimilarity between the giver and recipient strongly predicts the gift being a poor match — basically, you usually don’t like what your grandmother gives you whereas your brother usually gets you pretty good stuff. The only silver lining is that these givers tend to know their limitations and are more likely to give cash (where by definition the price equals the utility). Anyway, the take home from our perspective is that Christmas gifts tend to be based on low information.

This then leads to the question of how people with low information make decisions. In 1992 both Bikhchandani, Hirshleifer, and Welch in JPE and Banerjee in QJE presented the concept of an information cascade (which Banerjee calls “herd behavior”). In this model, actors with low information assume that other people know something they don’t and thus they treat popularity as a heuristic for quality. This of course creates a positive feedback cycle since when these actors choose the most popular offerings they make them even more popular, and this increased popularity is treated as an even more informative heuristic by future actors. There are two interesting things about cascades. First, they are determined as much by random error in the first few iterations as they are by underlying quality. Second, they tend to lead to truly massive inequality characterized by a power law distribution where a few properties have enormous market share, there is a small middle class, and there is a huge pool of losers.

Salganik, Dodds, and Watts published a 2006 Science paper that demonstrated this effect in pop music with an experiment. They created a website with a list of really obscure bands and gave users the option of listening to and then downloading the songs. In some versions of the site the user only saw a list of songs, and in this version the songs all got roughly the same number of downloads. In other versions of the site they made it more or less explicit how many previous downloads a song had received. They found that these versions of the site had more inequality, and the more explicit the download count, the more inequality. Furthermore, they randomly assigned users to a few different “worlds” with independent download counts and they found that in each world with an explicit download count there was a lot of inequality, but it was always a different hierarchy only loosely correlated with its sister worlds. That is, the exact same songs could be extremely successful or extremely unsuccessful depending on whether they happened to get lucky at the beginning of an essentially stochastic process. However, those of you with a romantic conception of culture should not lose all hope. In an unpublished follow-up study, the team intervened to reverse the rank order (that is, they lied to users and said the most popular song was actually the least popular). Under this condition two things happened. First, most users spent very little time on the site after they listened to the top-ranked song and found it to be terrible (no doubt presuming that lower-ranked songs were even worse). Second, over an enormous amount of time, the original rank order could reestablish itself, though this finding is unrealistic as in real life a cultural producer would not keep a poorly selling product on the market long enough for it to return to its true place in the distribution. So from Salganik’s team we learn that pop music is vulnerable to information cascades and that over the medium run this field is vulnerable to manipulation.
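For readers who like their mechanisms explicit, here is a minimal toy simulation of this logic. It is not the Salganik team’s actual design, just a sketch under simplifying assumptions I’m adding myself: every song is identical in quality, and each listener either imitates the crowd or judges independently.

```python
import random

def simulate_cascade(n_songs=50, n_listeners=5000, p_social=0.9, seed=None):
    """Each listener either imitates (picks a song in proportion to its
    current download count -- popularity as a quality heuristic) or judges
    independently (picks uniformly at random). All songs are identical in
    quality, so any inequality is produced by the cascade itself."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # seed each song with one nominal download
    for _ in range(n_listeners):
        if rng.random() < p_social:
            # follow the crowd: weight the choice by current popularity
            song = rng.choices(range(n_songs), weights=downloads)[0]
        else:
            # independent judgment: uniform random pick
            song = rng.randrange(n_songs)
        downloads[song] += 1
    return sorted(downloads, reverse=True)

# Two independent "worlds" of identical songs, differing only in the
# random choices of the earliest listeners.
world_a = simulate_cascade(seed=1)
world_b = simulate_cascade(seed=2)
```

Run it a few times and a flat field of interchangeable songs produces a steep hierarchy, and a different hierarchy in each world — the multiple-worlds result in miniature.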

This brings us to a classic 1969 Management Science paper by Bass, which distinguishes internal influence (things like cascades, where the more popular something is the more popular it gets) from external influence (things like advertising, where the increase in popularity is unrelated to popularity ex ante). Bass created a mixed-influence model where the two processes co-exist and interact. Early in a product’s history, external influence dominates, but once the product reaches a critical mass, internal influence takes over. Thus one can use things like advertising and hype to prime the pump. Using this model Bass created very accurate models for the postwar diffusion patterns of consumer durables like televisions, electric refrigerators, and clothes dryers. Note that following the literature of the time, Bass’s description for internal influence was contagion (word of mouth). This is implausible in the particular case of pop music, though, since if I were to tell you “hey check out this album, it’s really good” you would probably say “so why don’t you just burn me a copy” and this wouldn’t show up in the sales figures. On the other hand, a cascade model is realistic here since when your grandma goes to the record store to buy you your disappointing Christmas present and asks the clerk what is popular with the kids these days, that shows up in the Billboard chart as a sale.
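The Bass model is also easy to write down: in each period, new adoptions equal (p + q·N/m)(m − N), where N is cumulative adopters so far, m is the market potential, p is the external-influence coefficient, and q the internal-influence coefficient. Here is a discrete-time sketch; the parameter values are illustrative of the “small p, big q” pattern for consumer durables, not estimates from the original paper.

```python
def bass_adoption(p, q, m, periods):
    """Discrete-time Bass mixed-influence diffusion.
    p: external influence (advertising/media); q: internal influence
    (imitation of prior adopters); m: market potential.
    Returns the cumulative number of adopters after each period."""
    cumulative = [0.0]
    for _ in range(periods):
        n = cumulative[-1]
        # external pull on the untapped market, plus internal pull
        # proportional to the share who have already adopted
        new_adopters = (p + q * n / m) * (m - n)
        cumulative.append(n + new_adopters)
    return cumulative

# p small and q large: early growth is driven by external influence,
# then imitation takes over and produces the familiar S-curve.
curve = bass_adoption(p=0.03, q=0.38, m=100.0, periods=30)
increments = [b - a for a, b in zip(curve, curve[1:])]
```

The per-period increments rise to a peak partway through and then fall as the untapped market shrinks, which is exactly the “critical mass” dynamic described above.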

And that’s really the crux of the thing. A lot of people who don’t know very much about the products they are buying, or the tastes of the people they are buying them for, have to make buying decisions at Christmas. Such purchases are almost guaranteed to rely on crude heuristics like popularity. Providers know this and go to great lengths to make sure that they are popular enough to benefit from this heuristic around the time Christmas shopping begins. Since it takes about ten weeks to climb to the top of the radio and sales charts, big artists release their albums in September.

Just remember to write a thank you note when your grandma gives you that copy of “Graduation” (or will it be “Curtis”?) that you had not been hoping for.

Next time I’ll explain why cascades are good for explaining not only 50 Cent but Mexican bread riots and Julius Caesar.

Written by gabrielrossman

September 11, 2007 at 1:46 am