Archive for the ‘culture’ Category
lifting the crimson curtain: Manufacturing Morals: The Values of Silence in Business School Education
As a grad student, I always found crossing the bridge over the Charles River from Harvard University to the Harvard Business School (HBS) to be a bit like approaching Emerald (or more appropriately, Crimson) City. On the Allston side, the buildings seemed shinier (or, as shiny as New England vernacular architecture allows), and the grounds were undergoing constant replantings, thanks to a well-heeled donor. In addition, HBS has loomed large as an institution central to the dissemination of organizational theory and management practices, including Elton Mayo’s human relations.
HBS has certain peculiarities about teaching and learning, like the use of case studies that follow formulaic structures as the basis for directed class discussion.* Moreover, instructors follow a strict grading breakdown: mandatory “III”s are assigned to the lowest-performing students in each class – a source of concern, as students with too many IIIs must justify their performance before a board and possibly go on leave.** To help instructors with grading, hired scribes document students’ discussion comments.***
Such conditions raise questions about the links, as well as disconnects, between classroom and managerial leadership, so I was delighted to see a new ethnography about business school teaching at the UChicago Press book display at ASAs.
Michel Anteby lifts the crimson curtain from HBS with his new book Manufacturing Morals: The Values of Silence in Business School Education (University of Chicago Press, 2013).
Here’s the official blurb:
“Corporate accountability is never far from the front page, and as one of the world’s most elite business schools, Harvard Business School trains many of the future leaders of Fortune 500 companies. But how does HBS formally and informally ensure faculty and students embrace proper business standards? Relying on his first-hand experience as a Harvard Business School faculty member, Michel Anteby takes readers inside HBS in order to draw vivid parallels between the socialization of faculty and of students.
In an era when many organizations are focused on principles of responsibility, Harvard Business School has long tried to promote better business standards. Anteby’s rich account reveals the surprising role of silence and ambiguity in HBS’s process of codifying morals and business values. As Anteby describes, at HBS specifics are often left unspoken; for example, teaching notes given to faculty provide much guidance on how to teach but are largely silent on what to teach. Manufacturing Morals demonstrates how faculty and students are exposed to a system that operates on open-ended directives that require significant decision-making on the part of those involved, with little overt guidance from the hierarchy. Anteby suggests that this model – which tolerates moral complexity – is perhaps one of the few that can adapt and endure over time.”
Check it out! And while you’re at it, have a look at Anteby’s previous book, Moral Gray Zones (2008, Princeton University Press).
Several sociologists (Matt Wray, Jon Stern, and myself) and an anthropologist (S. Megan Heller) have a round table discussion on Burning Man at the Society Pages. We’ve all done research at Burning Man, an annual temporary community in Nevada that has inspired events and organizations worldwide.
Have a peek at our discussion, which includes ideas for future studies. We discuss answers to questions such as:
“Why might the demographics of the Burning Man population be of interest to researchers? For instance, there is a cultural trope that people who go to Burning Man are often marginalized individuals—outsiders in some way. Could the festival’s annual Census be used to measure this rather subjective characteristic of the population? Is there a single “modal demographic” (that is, a specific Burner “type”) or are there many? What else does the Census Lab measure (or not measure)?”
“Burning Man sometimes gets portrayed as little more than a giant rave—a psychedelic party on the playa. It is like a party in many ways, but those of us who go know that the label doesn’t begin to capture the full experience. What larger phenomena does Burning Man represent in your research? In other words, how do you categorize the event and why should we take it seriously?“
Normally, when we think of Latin Jazz, we think of Afro-Cuban, or Bossa Nova. But there are others. For a while in the 60s and 70s, Gato Barbieri created a blend of Argentinian music and free jazz that had quite a following. Merceditas, above, is one example, before he gained popularity doing a type of smooth jazz.
One of the somewhat terrifying or, for some of us, invigorating aspects of being an academic is learning and practicing cross-cultural and local norms, especially for research or travel. Typically, these lessons involve careful observation of what seems unthinkable (cutting in line?!? OR waiting one’s turn in line?!?), inadvertently breaking norms in front of aghast or amused locals, and the thrill of mastering a new skill.
In a few weeks, those of you who are relocating or returning to Britain might find the following links handy for immersion in the local milieu:
- An excruciating obsession with ennui, politeness, and queues – a few sample quotes about VeryBritishProblems:
- “Feeling your life lacks excitement, so dunking your biscuit for an irresponsibly long time”
- “The anxious bewilderment when clocking the stranger deciding to join the queue at your side rather than behind you”
- “The unwelcome surprise of someone telling you how they are after you’ve asked them how they are”
- “Secretly hoping it stays cold so there’s always something to talk about”
- “Feeling guilty taking your M&S Bag For Life into Tesco”
- Habits and mannerisms: Kate Fox’s Watching the English: The Hidden Rules of English Behaviour
Coming to NYC for ASAs? Try season 8 of Curb Your Enthusiasm and brush up your “waiting on line” etiquette. Tip: don’t assume you’re at the back of the right line.
I keep hearing about the coming big data revolution. Data scientists are now using huge data sets, many produced through online interactions and media, that shed light on basic social processes. Big data sets, from sources like Twitter, Facebook, or mobile phones, give social scientists ways to tap into interactions and cultural output at a scale never before seen in social science. The way we analyze data in sociology and organizational theory is bound to change due to this influx of new data.
Unfortunately, the big data revolution has yet to happen. When I see job candidates or new scholars present their research, they are mostly using the same methods that their predecessors did, although with incremental improvements to study design. I see more field experiments for sure, and scholars seem more attuned to identification issues, but the data sources are fairly similar to what you would have seen in 2003. With a few notable exceptions, big data have yet to change the way we do our work. Why is that?
Last week Fabio had a really interesting post about brain drain in academia. One reason we might see less big data than we’d like is that the skills needed for this type of analysis are rare, and much of the talent in this area is finding that research jobs in the for-profit world are more lucrative and rewarding than what they’re being offered in academia. I believe that’s true, especially for the kinds of people who are attracted to data mining techniques. The other problem, though, I think, is that social scientists are having a hard time figuring out how to fit big data techniques into the traditional milieu of social science. Sociologists, for example, want studies to be framed in a theoretically compelling way. Organizational theorists would like scholars to use data that map onto the conceptual problems of the field. It’s not always clear in many of the studies that I’ve read and reviewed that big data analyses are doing anything new other than using big data. If big data studies are going to take over the field, they need to address pressing theoretical problems.
With that in mind, you should really read a new paper by Chris Bail (forthcoming in Theory and Society) about using big data in cultural sociology. Chris makes the case that cultural sociology, a subfield obsessed with understanding the origins and practical uses of meaning, is ripe for a big data surge. Cultural sociology has the theoretical questions, and big data research offers the methods.
More data were accumulated in 2002 than all previous years of human history combined. By 2011, the amount of data collected prior to 2002 was being collected every two days. This dramatic growth in data spans nearly every part of our lives from gene sequencing to consumer behavior. While much of these data are binary and quantitative, text-based data is also being accumulated on an unprecedented scale. In an era of social science research plagued by declining survey response rates and concerns about the generalizability of qualitative research, these data hold considerable potential. Yet social scientists – and cultural sociologists in particular – have ignored the promise of so-called ‘big data.’ Instead, cultural sociologists have left this wellspring of information about the arguments, worldviews, or values of hundreds of millions of people from internet sites and other digitized texts to computer scientists who possess the technological expertise to extract and manage such data but lack the theoretical direction to interpret their meaning in situ….[C]ultural sociologists have made very few ventures into the universe of big data. In this article, I argue inattention to big data among cultural sociologists is particularly surprising since it is naturally occurring – unlike survey research or cross-sectional qualitative interviews – and therefore critical to understanding the evolution of meaning structures in situ. That is, many archived texts are the product of conversations between individuals, groups, or organizations instead of responses to questions created by researchers who usually have only post-hoc intuition about the relevant factors in meaning-making – much less how culture evolves in ‘real time’ (note: footnotes and references removed).
Chris goes on to offer suggestions about how cultural sociology might use big data to address big theoretical questions. For example, he believes that scholars studying discursive fields would be wise to use big data methods to evaluate the content of such fields, the relationships between actors and ideas, and the relationships between different fields. Of course, much of the paper is about how to use big data analysis to enhance or replace traditional methods used in cultural sociology. He discusses how Twitter and Facebook data might supplement newspaper analysis, a fairly common method in cultural and political sociology. Although he doesn’t go into great detail about how you would do it, an implicit argument he makes is that big data analysis might replace some survey methods as ways to explore public opinion.
I continue to think there is enormous potential for using big data in the social sciences. The key for having it accepted more broadly is for data scientists to figure out how to use big data to address important theoretical questions. If you can do that, you’re gold.
Neil Gross cements his position as the leading sociologist of American intellectuals with his new book Why Are Professors Liberal and Why Do Conservatives Care?* This book collects into one text a series of arguments about the American professoriate that Gross and his collaborators have presented in a series of articles. Essentially, Gross argues that American academia is, on average, liberal because of self-selection on the part of conservatives. The specific issue is that academia, for a number of historically specific reasons, has acquired an aura of extreme liberalism. Thus, conservative students say “Why bother? Academia is for liberals. What’s the point?”
What is impressive about Gross and his confederates is that they test all kinds of alternative hypotheses. For example, one might think that academic skills explain conservatives’ lower enrollments in PhD programs. But they don’t. Differences in values don’t explain much either. In other words, Gross et al. systematically test all kinds of hypotheses and show that they are simply not true or that they only explain a small proportion of the differences between conservatives and others.
Eventually, using historical evidence and interview data, Gross makes a good case for self-selection. Sociology is a good example. In principle, there are lots of places for non-liberal sociologists. For example, one could work on non-ideological aspects of sociology, like research methods. Or, as many conservatives have done, they could work in areas of interest like family sociology, where in some cases (like studies of negative divorce effects on kids) they could work on topics that are consistent with their ideology. But if you sit down and ask a typical conservative undergrad why they didn’t take many soc courses, they’ll describe an image of evil ultra-liberals who are bent on political correctness.
Now, where I would criticize this book is the study of conservatives. For example, Gross argues that there isn’t much evidence of bias against conservatives. He uses the example of a study he conducted with Jeremy Freese and Ethan Fosse in which they contacted graduate directors with emails from fake students. Some emails mentioned working for a GOP candidate, some a Democrat, and others none at all. Gross et al. find no differences in how graduate directors responded.
First, there’s the issue, which Gross acknowledges, that graduate directors probably write a lot of boilerplate emails. But there’s a deeper criticism – why didn’t Gross interview people at risk for discrimination from liberal colleagues? For example, why not interview liberal (Keynesian) and conservative economists (monetarists or Austrians)? Or, why not interview Rawlsian philosophers (liberals) and compare their careers with Nozickians (libertarians) or Burkeans (conservatives)? Or, even better, why not collect materials from people who submitted books or articles on conservative topics but were rejected?
I think that Gross is right – anti-conservative bias is not nearly as bad as people think, if it exists at all – but the treatment of conservatives is not nearly as nuanced as the treatment of liberals. This probably speaks to the development of the project, which started with analyzing massive data (like the GSS) that tries to tease out conservative/liberal differences. Developing a theory or map of conservative intellectuals probably came late in the game.
Regardless, this book is massive progress on a central issue in the study of American intellectuals and the academy. This will be required reading for anyone interested in this topic.
* And I’m not saying that because he said nice things about me in the book. But he did. Oh yeah, and I’m not just saying it because he edited another cool forthcoming book about academia with a chapter by moi. But he did. Ok, maybe he buttered up a little. But just a little!
The usual advice about art and investing is “don’t bother.” Buy it because you love it, but don’t expect a decent return. Well, that’s not exactly true. There are at least two ways to consistently make money from art, but neither is easy:
- The Vogel Strategy: Named after the Vogels, who spent their lives collecting art on a postman’s salary, the idea is simple – immerse yourself in art and buy up lots of cheap stuff. But you can’t buy any old art. You go to the cultural center, hang out with impoverished artists, and buy cheap.
- The fussy value buyer: As discussed in a recent Art Market Monitor article, art investment funds do actually manage a decent rate of return. The way they do it is to avoid the fancy auctions and look for somewhat undervalued works by artists that are already on track to having good historical reputations. For example, if Bacon is already famous, go for his lesser known buddy Frank Auerbach. Good work, but probably under-appreciated.
The tricky part with the Vogel strategy is that you need to invest in a lot of stuff, much of it goofy. Most people don’t have the patience or taste needed to spot how today’s bizarre avant-garde might be featured in tomorrow’s history book. The trick with the value-buyer strategy is that you go for people who are relatively cheap, but still expensive in absolute terms. You need a lot of capital to even contemplate this strategy. Also, you need to be confident and ignore the hype that often surrounds “hot artists.” That is hard to do for many investors.
Apparently, a lot of it has to do with working in that cycle of fourths into your lines. This is a very nice video of the post-bop piano master, with a performance of his tune “To Those Who Chant.” I also strongly recommend his composition Coral Keys, ideal for those who want easy listening with a hip edge.
If you follow journalism, you’ve heard that the Chicago Sun-Times has laid off its entire photojournalism staff. Probably a last-ditch effort to save money, the move was pitched as an attempt to go multimedia and have print journalists snap pictures with their smartphones.
One casualty was John H. White, one of Chicago’s best photographers. Above is his picture of a Nation of Islam gathering. More vivid photos can be found in this Chicago.mag article.
Brendan Nyhan has a nice post on the sociology of scandal. He summarizes his research on presidential scandal in this way:
My research suggests that the structural conditions are strongly favorable for a major media scandal to emerge. First, I found that new scandals are likely to emerge when the president is unpopular among opposition party identifiers. Obama’s approval ratings are quite low among Republicans (10-18% in recent Gallup surveys), which creates pressure on GOP leaders to pursue scandal allegations as well as audience demand for scandal coverage. Along those lines, John Boehner is reportedly “obsessed” with Benghazi and working closely with Darrell Issa, the House committee chair leading the investigation. You can expect even stronger pressure from the GOP base to pursue the IRS investigations given the explosive nature of the allegations and the way that they reinforce previous suspicions about Obama politicizing the federal government.
In addition, I found that media scandals are less likely to emerge as pressure from other news stories increases. Now that the Boston Marathon bombings have faded from the headlines, there are few major stories in the news, especially with gun control and immigration legislation stalled in Congress. The press is therefore likely to devote more resources and airtime/print to covering the IRS and Benghazi stories than they would in a more cluttered news environment.
I’d also add that “events” have properties. It is easier to scandalize, say, the IRS investigation issue because it is simple. In contrast, the issue of whether the attack in Libya should have been labeled terrorism is probably too esoteric for most folks. If you buy that argument, you get a nice story about the “scandal triangle.” The likelihood of scandal increases when partisan opposition, bored media, and clearly norm-breaching events come together.
In this last post, I’ll discuss why I fundamentally disagree with the argument presented in Reinventing Evidence. There are two reasons. First, I agree with Andrew Perrin that Biernacki wants us to embrace a textual holism. One of Biernacki’s major arguments is that by isolating a single word, or passage, we lose the entire meaning of the text. Thus, interpretation is the only valid approach to text; coding and quantification are invalid. Perrin points out that lots of things can be isolated. For example, if I see the n-word, I can say that, on average, the text is employing racist language.
Second, Biernacki does not seem to consider cultural competence. In other words, human beings are creatures that can often reliably capture the meaning of utterances made by other humans from the same cultural group. Of course, I am talking about things like everyday speech or short and simple writings like newspaper articles. More complex texts, like novels, will have networks or dense layerings of meaning that go beyond a human’s native capacity for communication. These probably could be coded, but it would require intense training and an elaborate theory of text, which sadly we don’t have in sociology. But my major point remains: there’s a lot of fairly simple text that can be coded. If you believe that people can accurately convey the meaning of a text or label some aspect of it because they are “native speakers” of the culture, then coding is a valid thing to do. To believe otherwise is to assume a world of solipsistic culture where every act of utterance requires a stupendous level of interpretation on the part of the audience.
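The kind of simple coding at issue can be sketched as a dictionary-based coder. This is a minimal illustration, not any researcher's actual procedure; the categories and keyword lists below are hypothetical, chosen only to show the mechanics:

```python
# A minimal sketch of dictionary-based coding of short, simple texts.
# The categories and keyword lists are hypothetical, for illustration only.
CODEBOOK = {
    "economy": {"jobs", "wages", "market"},
    "family": {"marriage", "children", "divorce"},
}

def code_text(text):
    """Return the codebook categories whose keywords appear in the text."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {cat for cat, keywords in CODEBOOK.items() if words & keywords}

docs = [
    "New jobs and rising wages dominated the debate.",
    "The study tracked divorce effects on children.",
]
for d in docs:
    print(code_text(d))  # {'economy'}, then {'family'}
```

The point is that a culturally competent reader could check any one of these assignments at a glance, which is exactly the property that makes simple texts codable.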
So, to wrap things up: I give credit to Biernacki for making us think hard about the quality of coding, which is often lacking. The point that science is presented through ritual is fair, but it doesn’t address whether a particular procedure produces valid measurement or inference. And I think that the view that texts are essentially uncodable is in error.
Speaking of branding with kitties, I have enjoyed academic work that incorporates the iconic Hello Kitty. As a grad student, I attended a conference featuring presentations on kawaii (“cute”) culture. One observant panelist noted that what appeared to be Hello Kitty on the conference posters was actually her twin sister Mimmy, distinguished by a yellow rather than red bow.
Hello Kitty has also graced the covers of at least two American academic books. The book cover of an edited anthology on manners and gender in Japan features Hello Kitty and her not-often-seen boyfriend Daniel both reaching for the same piece of sushi. A fresh-off-the-press book on globalization showcases an over-sized version of Hello Kitty about to trample a metropolis.
Want to share your own favorite cultural icons analyzed from an academic perspective? Put them in the comments.
To summarize: Richard Biernacki claims that coding textual materials (books, speech, etc) is tantamount to committing gross logical errors that mislead social scientists. Overall, I think this point is wrong but I think that Reinventing Evidence does a great service to qualitative research by showing how coding of texts might be critiqued and evaluated. In other words, ironically, by critiquing prior work on text coding, Biernacki draws our attention to the fact that qualitative research can be subjected to the same standards as quantitative research.
What do I mean? Well, a big problem with qualitative research is that it is very hard to verify and replicate. It is rare for ethnographers to return to the same field site, or for informants to be re-interviewed by others. A lot of the strength of quantitative research lies in the fact that other researchers can replicate prior results. For example, if I claim that party ID is correlated with gay marriage attitudes in the GSS, another researcher can download the same data and check the work. If they think the GSS made a mistake in collecting the data, a second survey can be conducted.
Biernacki, in trying to prove that coding qualitative data is pointless, follows a similar strategy: he chooses a few notable articles and tries to reproduce their results. For example, he chooses Bearman and Stovel’s “Becoming a Nazi: A Model for Narrative Networks,” which appeared in Poetics. The article creates a network out of ideas and themes mentioned in the memoir of a Nazi. Assuming that Biernacki reports his results correctly, he’s persuaded me that we need better standards for coding text. For example, he finds that Bearman and Stovel use an abbreviated version of the memoir – not the whole thing. Big problem. Another issue is how the network of text is interpreted. In traditional social network analysis, centrality is often thought to be a good measure of importance. Biernacki makes the reasonable argument that this assumption is flawed for texts. Very important ideas can become “background,” which means they are coded in a way that results in a low centrality score. This leads to substantive problems. For example, the Nazi mentions anti-Semitism briefly, but in important ways. Qualitatively we know it is important, but the coding misses this issue.
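The centrality worry is easy to see in a toy example. The network below is invented (it is not Bearman and Stovel's data, and the theme labels are made up); it shows how a theme mentioned only briefly ends up with a low degree-centrality score even when it is substantively crucial:

```python
from collections import defaultdict

# Invented toy narrative network: nodes are coded themes, and an edge
# links two themes that appear together in the same passage.
edges = [
    ("army", "comradeship"),
    ("army", "nation"),
    ("nation", "comradeship"),
    ("nation", "economy"),
    ("economy", "unemployment"),
    ("nation", "antisemitism"),  # mentioned only briefly: a single tie
]

def degree_centrality(edges):
    """Degree centrality: each node's tie count, normalized by (n - 1)."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

cent = degree_centrality(edges)
# "antisemitism" scores at the bottom even though, qualitatively, it may
# be the most important theme in the narrative.
print(sorted(cent.items(), key=lambda kv: -kv[1]))
```

Nothing in the arithmetic is wrong; the problem is the interpretive leap from "many ties" to "important," which is exactly where Biernacki says texts differ from social networks.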
Next week, I’ll get to my views on Biernacki’s attack on coding. But for now, I’ll give him credit for drawing my attention to these issues. The problems with the coding of the Nazi memoir point to me that there is more work to be done. We need to first start with a theory of text and then build techniques. If you want to use network analysis, you may have to take into consideration that standard network ideas may not be suitable. That will help us address problems like how to judge a text and the way we code data. That may not be the lesson Biernacki intended, but it’s a good one.
This Spring, our book forum will address Richard Biernacki’s Reinventing Evidence in Social Inquiry: Decoding Facts and Variables. In this initial post, I’ll describe the book and give you my summary judgment. Reinventing Evidence, roughly speaking, claims that numerically coding extended texts is a very, very bad idea. How bad? It is soooo bad that sociologists should just stop coding text and abandon any hope of providing a quantitative or numerical coding of texts or speech. It’s all about interpretation. This is an argument that prevents a much-needed integration of the different approaches to sociology, and it deserves a serious hearing.
In support of this point, Biernacki does a few things. He makes an argument about how coding text lacks validity (i.e., associating a number with a text does not correctly measure what we want it to measure). Then he spends three chapters going back to well-known studies that use content analysis and argues, at varying points, that the coding is misleading, obviously incorrect, or that there was no consistent standard for handling the text or the data.
As a proponent of mixed methods, I was rather dismayed to read this argument. I do not agree that coding text is a hopeless task or that we should retreat into the interpretive framework of the humanities. There seem to be regularities in speech, and other text, that make us want to group them together. If you accept that statement, then it follows that a code can be developed. So, on one level, I don’t buy into the main argument of the book.
At a more surface level, I think the book does some things rather well. For example, the meat of the book is in replication, which many of us, like Jeremy Freese, have advocated. Biernacki goes back and examines a number of high profile publications that rely on coding texts and finds a lot to be desired.
Next week, we’ll get into some details of the argument. Also, please check out our little buddy blog, Scatterplot. Andrew Perrin will be discussing the book and offering his own views.
As I posted earlier, I’ll be presiding over a conversation between George Ritzer and Carmen Sirianni from 3:30-5pm on Fri., March 22, 2013 at ESS in the Whittier Room (4th Flr) of the Boston Park Plaza hotel.
In the past several years, disasters like Hurricanes Sandy and Katrina have sparked growing interest in what both conventional and innovative organizations can (and cannot) do under conditions of uncertainty vs. certainty. Both featured scholars’ work covers the limits of particular organizing practices (i.e., Ritzer’s work on McDonaldization), as well as the potential of organized action (i.e., Sirianni’s work on collaborative governance). Thus, I’ve given this particular conversation the broad title “Organizations and Societal Resilience: How Organizing Practices Can Either Inhibit or Enable Sustainable Communities.”
What would you be interested in hearing Ritzer and Sirianni discuss about organizations and society? Please put your qs or comments in the discussion thread.
For those unfamiliar with Ritzer and Sirianni, here is some background about their work:
George Ritzer is best known for his work on McDonaldization and more recently, the spread of prosumption in which people are both producers and consumers.
J. Mike Ryan‘s interview of Ritzer about his McDonaldization work:
J. Mike Ryan’s interview of Ritzer about why we should learn about McDonaldization (corrected link):
Carmen Sirianni is known for his work on democratic governance.
A brief video of Sirianni arguing that citizens should be “co-producers” in building society.
A more extensive video of Sirianni presenting on his book Investing in Democracy: Engaging Citizens in Collaborative Governance (Brookings Press, 2009).
We live in a golden age of papal betting. Within my own lifetime, I will have had at least three opportunities to wager on papal elections (’78, ’05, ’13). Better than bingo. If you need a primer on the possible leaders, click here. Intrade is trading 47% for an Italian pope. For individual cardinal odds, click here. For sociology of Vatican II, check out Melissa Wilde’s ASR article on the topic. Consider this an open thread on the social science (and gaming) of the papacy and/or information markets.
Becoming Right: How Campuses Shape Young Conservatives, by Amy Binder and Kate Wood, is the latest entry in the growing scholarship on conservative politics in America. They ask a simple question: how do campus environments shape conservative political styles? This is an important question for two reasons. First, there is relatively little research on conservative students. Second, culture depends on organizational environment. How ideas are expressed is affected by where ideas are expressed. Definitely a worthy question for a sociologist.
So what do Binder and Wood discover? They focus on two campuses for their case study – big public West Coast and fancy private East Coast. They choose these campuses because they have similarly high-achieving student bodies but the environments are way, way different. West Coast is a huge “multiversity,” to use Clark Kerr’s terminology. East Coast is smaller and more intimate. The same type of students tend to be attracted to campus conservative politics (mainly white, fairly comfortable folks), but the environments encourage different expressions.
You might say that there are two habituses at work – the provocateur and the intellectual. In a big impersonal campus, it is very, very hard to project your voice except in a confrontational manner. Thus, West Coast conservative students rely on sensational tactics, like the affirmative action bake sale. Also, West Coast students feel little attachment to the community. Little is lost by being aggressive. In contrast, East Coast encourages all students to feel as if they have a place, even if they admit that most professors are fairly liberal. They don’t feel alienated or embattled, so they feel little hostility toward the campus. Thus, they resort to more intellectual forms of expression that don’t rely on shocking people. The book also has a nice discussion of the larger field of conservative politics and how that affects campus protest.
Overall, a solid book and one that’s essential to studies of campus politics. If I were to criticize the book, I’d think a little more about the differences between conservative students and the broader field of conservative intellectuals. This does get mentioned in a few passages that allude to Steve Teles’ book on conservative legal academia, which we discussed in detail on this blog. The issue is that the world of conservative intellectuals that have influence is more defined by the East Coast intellectual types than the affirmative action shock jocks at West Coast. The consequences are important, as we’ve seen with the Tea Party mobilization. Conservative grassroots politics is now dominated by shock jocks, not the well-coiffed policy wonks of the Heritage Foundation. More needs to be said about the boundary and links between campus conservatives and this broader network of think tanks, interest groups, and electoral organizations.
The last comment I’ll make is about the inherent irony of much of this stuff. It can be argued that conservative politics at its best is incremental, stodgy, and resistant to radicalism – that it is essentially bourgeois. It retains the hard-won lessons of tradition and skepticism of utopia. There is some irony, then, that the cultural style of contemporary conservatives is at odds with this ideal. It is loud and obnoxious. It mocks one of society’s most ancient and enduring institutions, the university system, which has nurtured Western culture since the end of the Middle Ages. It is skeptical and hostile toward those who are cultured and knowledgeable. It can’t disentangle potentially insightful criticisms of specific intellectual currents from a loathing of the academic system itself. Perhaps the ultimate lesson is that beneath the talk of tradition and values, there is a rank populism that leaves one ultimately disappointed.
Quick reaction: The Academy loves well-crafted films that are about actors or acting, especially when actors save the day. These films often beat other films. Example: Shakespeare in Love beats Saving Private Ryan; The King’s Speech beats Black Swan, Inception, and The Social Network. Bonus: Argo had old Hollywood guys saving the day. I still liked it.
A few weeks ago, I argued that the era of overt racism is over. One commenter felt that I needed to operationalize the idea. There is no simple way to measure such a complex idea, but we can offer measurements of very specific processes. For example, I could hypothesize that it is no longer legitimate to use in public words that have a clearly derogatory meaning, such as n—— or sp–.*
We can test that idea with word frequency data. Google has scanned over 4 million books from 1500 to the present and you can search that database. Above, I plotted the appearance of n—– and sp—, two words which are unambiguously slurs for two large American ethnic groups. I did not plot slurs like “bean,” which are homophones for other neutral non-racial words. Then, I plotted the appearance of the more neutral or positive words for those groups. The first graph shows the relative frequencies for African American and Latino slurs vs. other ethnic terms. Since the frequency for Asian American slurs and other words is much lower, they get a separate graph. Thus, we can now test hypotheses about printed text in the post-racial society:
- The elimination thesis: Slurs drop drastically in use.
- The eclipse thesis: Non-slur words now overwhelm racist slurs, but racist slurs remain.
- Co-evolution: The frequency of neutral and slur words move together. People talk about group X and the haters just use the slur.
- Escalation: Slurs are increasing.
This rough data indicates that #2 is correct. The dominant racial terms are neutral or positive. Most slurs that I looked up seem to maintain some base level of usage, even in the post-civil rights era. The slur use level is non-zero, but it is small in comparison to other words, so it looks as if it is zero. Some slur use may be derogatory, while some of it may be artistic or “reclaiming the term.” I can’t prove it, but I think Quentin Tarantino accounts for 50% or more of post-civil rights use of the n-word.
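For readers who want to play with the same logic, here’s a rough sketch of how one might label a pair of word-frequency series with one of the four hypotheses above. The numbers are made up for illustration – they are not actual Google Books Ngram values – and the thresholds are arbitrary judgment calls, not anything from the original analysis.

```python
def classify(slur_freq, neutral_freq):
    """Crude classifier for the four hypotheses, given per-year
    relative frequencies ordered from earliest to latest year."""
    slur_start, slur_end = slur_freq[0], slur_freq[-1]
    neutral_end = neutral_freq[-1]
    if slur_end == 0:
        return "elimination"   # slur drops out of use entirely
    if slur_end > slur_start:
        return "escalation"    # slur use is increasing
    if neutral_end > 10 * slur_end:
        return "eclipse"       # neutral term dwarfs the slur, which persists at a low level
    return "co-evolution"      # the two series move together

# Illustrative series (per-million-word rates, invented for the example):
slur = [1.2, 1.0, 0.6, 0.3, 0.2, 0.2]       # declines but never hits zero
neutral = [0.5, 1.5, 4.0, 9.0, 14.0, 18.0]  # neutral/positive term takes over

print(classify(slur, neutral))  # → eclipse
```

The “eclipse” verdict here mirrors the post’s conclusion: the slur series stays non-zero but is swamped by the neutral terms.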
Bottom line: Society has changed and we can measure the change. This doesn’t mean that racial status is no longer important, but it does mean that one very important aspect of pre-Civil Rights racist culture has receded in relative importance. Some people just love racial slurs, but it’s likely not the modal way of talking about people. Is that progress? I think so.
* Geez, Fabio, must you censor? Well, it isn’t censoring if it’s voluntary. I just don’t want this blog to be picked up for slurs. Even my book on 1970s Black Power, when people used the n-word a bit, only uses it once, in a footnote when referring to the title of H. Rap Brown’s first book.
This weekend, Omar wrote a detailed post about the “depth” of culture, the degree to which some idea is internalized and serves as a motivation or guide for action. I strongly recommend that you read it. What I’d like to do in this post is use Omar’s comments as a springboard for thinking about organizational behavior.
The reigning theory in sociology of organization is neo-institutionalism. The details vary, but the gist is that the model posits a Parsonian theory of action. There is an “environment” that “imprints” itself on organizations. Myth and ceremony institutionalism posits a “shallow imprinting” – people don’t really believe myth and ceremony. Iron cage institutionalism takes a very “deep” view of culture. Actors internalize culture and then do it.
What Omar posits, I think, is a view of culture that is constitutive (you are the ideas you internalize) and interactive (your use of the idea modifies the cultural landscape). Omar wants to get away from the metaphor of “deep” vs. “shallow” culture. He also discusses dual process theory, which merits its own post.
What is important for organization theorists is that you get away from Parsons’ model:
Note that conceptually the difference is between thinking of “depth” as a property of the cultural object (the misleading Parsonian view) or thinking of “depth” as resulting from the interaction between properties of the person (internalized as dispositions) and qualities of the object (e.g. meaning of a proposition or statement) (the Bourdieusian point).
The implication for orgtheory? Previously, the locus of orgtheory has been the “environment” – all the stuff outside the organization that people care about. That’s highly analogous to “culture” getting internalized deep within the individual. Thus, different institutional theories reflect a deep/shallow dichotomy. If you buy Omar’s post-Swidler/post-Giddens view of things, then what is really interesting is the interaction occurring at the point of contact between environment and organization. Orgs don’t passively await imprinting. Rather, there is variance in how they respond to the environment and there is interesting variation in the adoption/importation of stuff from the environment.
The issue of whether some culture is “deep” versus “shallow” has been a thorny one in social theory. The basic argument is that for some piece of culture to have the requisite effects (e.g. direct action) then it must be incorporated at some requisite level of depth. “Shallow culture” can’t produce deep effects. Thus, for Parsons values had to be deeply internalized to serve as guiding principles for action. Postulating cultural objects that are found at a “deep” level requires we develop a theory that tells us how this happens in the first place (e.g. Parsons and Shils 1951). That is: we need a theory about how the same culture “object” can go from (1) being outside the person, to (2) being inside the person, and (3) once inside, from being shallowly internalized to being deeply internalized. For instance, a value commitment may begin at a very shallow level (a person can report being familiar with that value) but by some (mysterious) “internalization” process it can become “deep culture” (when the value is now held unconditionally and motivates action via affective and other unconscious mechanisms; the value is now “part” of the actor).
One thing that has not been noted very often is that the “cultural depth” discussion in the post-Parsonian period (especially post-Giddens) is not the same sort of discussion that Parsons was having. This is one of those instances in cultural theory where we keep the same set of terms—e.g. “deep” versus “shallow” culture–but change the parameters of the argument, creating more confusion than enlightenment. In contrast to Parsonian theorists, for post-Giddensian theorists, the main issue is not whether the same cultural element can be found at different levels of “depth” (or travel across levels via a socialization process). The key point is that different cultural elements (because of some inherent quality) exist necessarily at a requisite level of “depth.”
These are not the same sort of statement. Only the first way of looking at things is technically “Parsonian”; that is Parsons really thought that
…culture patterns are [for an actor] frequently objects of orientation in the same sense as other [run of the mill physical] objects…Under certain circumstances, however, the manner of his [sic] involvement with a cultural pattern as an object is altered, and what was once an object becomes a constitutive part of the actor” (Parsons and Shils 1951: 8).
So here we have the same object starting at a shallow level and then “sinking” (to stretch the depth metaphor to death) into the actor, so that ultimately it becomes part of their “personality.”
Contrast this formulation to the (post-Giddensian) cultural depth story proposed by Sewell (1992). According to Sewell,
…structures consist of intersubjectively available procedures or schemas capable of being actualized or put into practice in a range of different circumstances. Such schemas should be thought of as operating at widely varying levels of depth, from Levi-Straussian deep structures to relatively superficial rules of etiquette (1992: 8-9).
Sewell (e.g. 1992: 22-26), in contrast to Parsons, decouples the depth from the causal power dimension of culture. Thus, we can find cultural schemas that are “deep but not powerful” (rules of grammar) and schemas that are powerful but not deep (political institutions). Sewell’s proposal is clearly not Parsonian; it is instead (post)structuralist: there are certain things (like a grammar) that have to be necessarily deep, while other things (like the filibuster rule in the U.S. Senate) are naturally found at the surface, and need not sink to the level of deep culture to produce huge effects. Accordingly, Sewell’s cultural depth discussion should not be confused with that of the early Swidler. Swidler (circa 1986) inherited the Parsonian, not the post-structuralist, problematic (because at that stage in American sociology the latter would have been an anachronism). Her point was that for the thing that mattered to Parsons the most (valuation standards) there weren’t different levels of depth, or more accurately that they didn’t need to have that property to do the things that they were supposed to do.
The primary aim of recent work on dual process models of moral judgment and motivation seems to be to revive a modified version of the Parsonian argument. That is, the point is that in order to direct behavior some culture needs to be “deeply internalized” (as moral intuitions/dispositions). However, as I will argue below, the very logic of the dual process argument makes it incompatible with the strict Parsonian interpretation. To make matters even more complicated, we have to deal with the fact that by the time we get to Swidler (2001) the conversation has changed (i.e. Bourdieu and practice theory happened), and she’s modified the argument accordingly. She ingeniously proposes that what Parsons (following the Weberian/Germanic tradition) called “ideas” can now be split into “practices + discourses.” Practices are “embodied” (and thus “deep” in the post-structuralist sense) and discourses are “external” (and thus shallow).
This leads to the issue of how Bourdieu fits into the post-Parsonian/post-structuralist conversation on cultural depth. We can at least be sure of one thing: the Parsonian “deep internalization” story is not Bourdieu’s version (even though Bourdieu used the term “internalization” in Logic of Practice). The reason for this is that habitus is not the sort of thing that was designed to explain why people “learn” to have “attitudes” (orientations) towards “cultural objects,” much less to internalize these “objects” so that they become part of the “personality” (which is, by the way, possibly the silliest thing ever said). There is a way to tell the cultural depth story in a Bourdieusian way without falling into the trap of having to make a cultural object a “constituent” part of the actor, but this would require de-Parsonizing the “cultural depth” discussion (which is something that Bourdieu is really good for). There is one problem: the more you think about it, the more it becomes clear that, insofar as the cultural depth discussion is a pseudo-Parsonian rehash, there might not be much left after it is properly Bourdieusianized.
More specifically, the cultural depth discussion might be a red herring because it still retains an implicit allegiance to the (Parsonian) “internalization” story, and internalization makes it seem as if something that was initially subsisting outside of the person now comes to reside inside the person (as if, for instance, “I disagree with women going to work and leaving their children in daycare” were a sentence stored in long-term memory to which a “value” is attached).
This is a nice Parsonian folk model (shared by most public opinion researchers). But it is clear that if we follow the substantive implications of dual process models, what resides in the person is not a bunch of sentences to which they are oriented; instead the sentence lives in the outside world (of the GSS questionnaire) and what resides “inside” (what has been internalized) is a disposition to react (negatively, positively) to that sentence when I read it, understand it and (technically, if we follow Barsalou 1999) perceptually simulate its meaning (which actually involves running through modal scenarios of women going to work and leaving miserable children behind). This disposition is also presumably the same one that may govern my intuitive reaction to other sorts of items designed to measure my “attitude” towards other related things. I can even forget the particular sentence (but keep the disposition) so that when somebody or some event (I drive past the local daycare center) reminds me of it I still reproduce the same morally tinged reaction (Bargh and Chartrand 1999; Bargh and Williams 2006).
Note that the depth imagery disappears under this formulation, and this is for good reason. If we call “dispositions to produce moral-affective judgments when exposed to certain scenarios or statements in a consistent way through time” deep, so be it. But that is not because there exist some other set of things that are the same as dispositions except that they lack “depth.” Dispositions either exist in this “deep” form or they don’t exist at all (dispositions are the sorts of things that in the post-Giddensian sense are inherently deep). No journey has been undertaken by some sort of ontologically mysterious cultural entity to an equally ontologically spurious realm called “the personality.” A “shallow” disposition is a contradiction in terms, which then makes any recommendation to “make cultural depth a variable” somewhat misleading, as long as that recommendation is made within the old Parsonian framework. The reason why this is misleading is that this piece of advice relies on the imagery of sentences with contents located at “different levels” of the mind travelling from the shallow realm to the deep realm and transforming their causal powers in the process.
If we follow the practice-theoretical formulation more faithfully, the discussion moves from “making cultural depth a variable” to “reconfiguring the theoretical language so that what was previously conceptualized in these terms is now understood in somewhat better terms.” This implies giving up on the misleading metaphor of depth and the misleading model of a journey from shallow-land to depth-land via some sort of internalization mechanism. Thus, there are things to which I have dispositions to react in a certain (e.g. morally and emotionally tinged) distinct way, endowed with all of the qualities that “depth” is supposed to provide, such as consistency and stability. We can call this “deep culture,” but note that the depth thing does not add anything substantive to this characterization. In addition, there are things towards which I (literally) have no disposition whatever, so I form online (shallow?) judgments about these things because this dorky, suit-wearing-in-July interviewer with NORC credentials over here apparently wants me to do so. But this (literally confabulated) “attitude” is like a leaf in the wind and it goes this or that way depending on what’s in my head that day (or, more likely, as shown by Zaller 1992, depending on what was on the news last night). Is this the difference between “shallow” and “deep” culture? Maybe, but that’s where the (Parsonian version of the) internalization language reaches its conceptual limits.
Thus, we come to a place where a dual process argument becomes tightly linked to what was previously being thought of under the misleading “shallow culture/deep culture” metaphor in a substantive way. I think this will “save” anybody who wants to talk about cultural depth from the Parsonian trap, because that person can then say that “deep= things that trigger moral intuitions” and “shallow=attitudes formed by conscious, on-the-fly confabulation.” Note that conceptually the difference is between thinking of “depth” as a property of the cultural object (the misleading Parsonian view) or thinking of “depth” as resulting from the interaction between properties of the person (internalized as dispositions) and qualities of the object (e.g. meaning of a proposition or statement) (the Bourdieusian point).
One of my fondest early memories of starting my research was accepting an invitation to hear a band perform in Oakland, CA. I asked my host, “What kind of music is it?” My host, a Berklee College of Music grad, paused and then gave an intriguing answer, “Well…it’s noise.” That description ushered in a crash course introduction to Burning Man and its art scene, a memorable immersion depicted in the first paragraph of Appendix I of my book.
Since then, participating at Burning Man has provided many introductions to cultural trends, some of which have become mainstream. (Other art forms, like the Aesthetic Meat Foundation, have not yet become mainstream.) Each year, a fellow campmate likes to ruminate about what’s in and what’s out at Burning Man based on our mutual observations. This past year, we agreed that dubstep seemed to be on its way out. For those readers who haven’t tried dancing or listening to dubstep, here’s Key & Peele’s take on this musical genre (warning: squeamish viewers may want to pause around 2:23 or so):
Carmina Burana music chaser after the jump.
Q. You are interested in factors that determine whether particular musical styles, genres, etc., will gain mass appeal — or remain circumscribed to a small niche. Have you discovered something about the process of “influence” or “contagion” that the social network scholars have ignored or underemphasized? What does your work tell us about the role of networks in shaping popular tastes?
A. The most common way for music to blow up from a small scene into global pop is for a controversy to erupt. Music history is littered with examples of “moral panics”: be-bop jazz was blamed for white-on-black race riots in the mid-1940s, just as rap music was blamed when riots erupted in Los Angeles following the Rodney King trial. In both cases, sensationalized news reports and especially a focus on the “dangerous” elements in the music attracted young people in droves. Moral panics, like magnets, repel and attract. This is also true when disputes involve dueling scenes, like the fights between “mods” and “rockers” in the U.K. in the early 1960s or the battles between fans of heavy metal and punk that played out on the pages of Creem magazine in the early 1980s. It is equally true when outsiders attack: the Parents Music Resource Center’s efforts to ban heavy metal and rap music resulted in those Parental Advisory stickers. When rock fans staged the infamous Disco Demolition at Comiskey Park, they may have kept disco in the limelight for an extra year.
The interview is filled with lots of other insights. Self-recommending!