Archive for the ‘omar’ Category

More words on critical realism: Getting clear on the basics

One thing that I found dissatisfying about our earlier “discussion” on CR is that it ultimately left the task of actually getting clear on what CR “is” unfinished (or bungled). Chris tried to provide a “bulletpoint” summary in one of the out-of-control comment threads, but his quick attempt at exposition mixed together two things that I think should be kept separate: what I call high-level principles and the substantively important derivations from those principles. This post tries to follow Chris Smith’s sound advice that “We’ll all do better by focusing on important matters of intellectual substance, and put the others to rest.”

The task of getting clear on the nature of CR is particularly relevant for people who haven’t already formed strong opinions on CR and who are just curious about what it is. My argument here is that neither proponents nor critics do a good job of just telling people what CR is in its most basic form. The reason for this has to do precisely with the complex nature of CR as an ontology, epistemology, theory of science, and (most importantly) a set of interrelated theses about the natural, social, cultural, and mental world that are derived from applying the high-level philosophical commitments to concrete problems. My argument is that CR will continue to draw incoherent reactions and counter-reactions (by both proponents and opponents) unless these aspects are disaggregated and we get clear on what exactly we are disagreeing about. One of these incoherent reactions is the claim that CR is both a “giant” package of meta-theoretical commitments and a fairly “minimalist” set of principles the reasonable nature of which would only be denied by the certifiably insane.

In particular, it is important to separate the high-level “core” commitments from all the substantive derivations, because it is possible to accept the core commitments and disagree with the derivations. In essence, a lot of the stuff (actually most of the stuff) that gets called “CR” consists of a particular theorist’s application of the high-level principles to a given problem. For instance, one can apply (as did Bhaskar in the “original” contributions) the high-level ontology to derive a (general) theory of science. One can (as Bhaskar also did) use the general theory of science to derive a local theory (both descriptive and normative) of social science (via the undemonstrable assumption that social science is just like other sciences). And the same can be done for pretty much any other topic: I can use CR to derive a general theory of social structures, or human action, or culture, or the person, or whatever. Once again, the cautionary point above stands: I can vehemently disagree with all the special theories while still agreeing with the high-level CR principles. In other words, I can disagree with the conclusion while agreeing with the high-level premises, because I believe that you can’t get where you want to go from where you start. This may happen because, say, I can see the CR theorist engaging in all sorts of reasoning fallacies (begging the question, arguing against straw men, helping him or herself to undemonstrable but substantively important sub-theses, and so on) to get from the high-level principles to the particular theory of (fill in the blank: the person, social structure, social mechanisms, human action, culture, and so on).

This is also I believe the best way to separate the “controversial” from the “uncontroversial” aspects of CR, and to make sense of why CR appears to be both trivial and controversial at the same time. In my view the high level principles are absolutely uncontroversial. It is the deployment of these principles to derive substantively meaningful special theories with strong and substantively important implications that results in controversial (because not necessarily coherent or valid at the level of reasoning) theories.

The High-Level Basics.-

One thing that is seldom noted by either proponents or critics of CR is that the fundamental high-level theses are actually pretty simple and in fact fairly uncontroversial. They only become “controversial” when counterposed to nutty epistemologies or theories of science that nobody holds or really believes (e.g. so-called “positivism,” radical social constructionism, or whatever). I argued against this way of introducing CR precisely because it confounds the level at which CR actually becomes controversial.

So what are these theses? As repeatedly pointed out by both Phil and Chris in the ridiculously long comment thread, and as ritualistically introduced by most CR writers in social theory (e.g. Dave Elder-Vass), they are simply a non-reductionist “realism” coupled to a non-reductionist, neo-Aristotelian ontology.

The non-reductionist realism part is usually the one that is much ballyhooed by proponents of CR, but in my view this is actually the least interesting (and least distinctive) part of CR in relation to other options. In fact, if this were all that CR offered, there would be no reason to consider it any further. So the famous empirical/actual/real (EAR) triad is not really a particularly meaningful signature of CR. The only interesting high-level point that CR makes at this level is the injunction “thou shalt not reduce the real to the actual, or worse, to the empirical.” Essentially: the world throws surprises at you because it is not reducible to what you know, and is not reducible to what happens (or has happened or will happen). I don’t think that this is particularly interesting because no reasonable person will disagree with these premises. Yes, there are people who seem to say something different, but once you sit them down for 10 minutes and explain things to them, they will agree that the real is not reducible to our conceptions or our experiences of reality. Even the seemingly more controversial point (that reality is not reducible to the actual) is actually (pun intended) not that controversial. In this sense CR is just a vanilla form of realism.

When we consider the CR conception of ontology, things get more interesting. Most CR people propose an essentially neo-Aristotelian conception of the structure of the world as composed of entities endowed with inherent causal powers. This conception links to the EAR distinction in the following sense: the real causal powers of an entity endow it with a dispositional set of tendencies or propensities to generate actual events in the world; these actual events may or may not be empirically observable. The causal powers of an entity are real in the sense that these powers and propensities exist even if they are never actualized or observed by anyone. To use the standard trite example, the causal power to break a window is a dispositional property of a rock; this property is real in the sense that it is there whether it is ever actualized (an actual window-breaking event happens in the world), and whether anybody ever observes this event.

Reality, then, is just such a collection of entities endowed with causal powers that come from their inherent nature. The nature of an entity is not an unanalyzable monad but is itself the (“emergent,” in the sense outlined below) result of the powers and dispositions of the lower-level constituents of that entity suitably organized in the right configuration. What earlier conceptions of science called “laws of nature” happen to be simply observed events generated by the actualization of a mechanism, where a “mechanism” is simply a regular, coherently organized collection of entities endowed with inherent causal powers acting upon one another in a predictable fashion. Scientists isolate the mechanism when they are able to manipulate the organization of the entities in question so that the event is actualized with predictable regularity; these events are then linked to an observational system to generate the so-called phenomenological or empirical regularities (“the laws”) that formed the core of traditional (Hempelian) conceptions of science.

The laws thus result from the regular operation of “nomological machines” (in Cartwright’s sense). The CR point is thus that the phenomenological “laws” are secondary, because they are just the effect produced by hooking together a real mechanism to produce (potentially) observable events in a regular way. So the CR people would say that Hacking’s aphorism “if you can spray them they are real” is made sense of by noting that the unobservable stuff that you can spray is an entity endowed with causal powers capable of generating observable phenomena when isolated as part of an actualized mechanism. The observability thing is secondary, because the powers are there whether you can observe the entity or not. That’s the CR “theory of science.”

The key to the CR ontology is that the nature of entities is understood using a “layered” ontological picture in which entities are essentially wholes made of parts organized according to a given configuration (a system of relations). These “parts” are themselves other entities, which may be decomposable into further parts (lower-level entities organized in a system of relations, and so on). Causal powers emerge at different levels and are not reducible to the causal powers of some “fundamental” level. Thus, CR proposes a non-reductionist, “layered” ontology, with emergent causal powers at each level.

This emergence is “ontological” and not “epistemic” in the sense that the causal powers at each level are “real” in the standard CR sense: they are not reducible to their actual manifestations, nor are these “emergent” properties simply an epistemic gloss that we throw onto the world because of our cognitive limitations. Thus, CR proposes an ontological democracy that retains the part-whole mereology of standard realist accounts but rejects the reductionist implication that the structure of the world bottoms out at some fundamental level of reality where the really real causal powers can be found (with higher-level causal powers simply being a derivative shadow of the fundamental ones).

Getting controversial.-

Now you can see things getting interesting, because we have a stronger set of position-takings. Note that from our initial vanilla realism and our seemingly innocuous EAR distinction, along with a meatier conceptualization of entities as organized wholes endowed with powers and propensities, we are now living in a world composed of a panoply of real entities at different levels of analysis, endowed with (non-reducible) real causal powers at each level. The key proposition that is beginning to generate premises that we can actually have arguments about is, of course, the premise of ontological emergence. I argue that this premise is not a CR requirement. For instance, why can’t I be a reductionist critical realist (RCR)? Essentially, RCR accepts the EAR distinction but privileges a fundamental level; this fundamental level may or may not ever figure in our theoretical conceptions of reality, but it is the bedrock upon which all actual and empirical events stand. In other words, the only true “mechanisms” that I accept are the ones composed of entities at the most fundamental level of reality, which may or may not ever be uncovered. I don’t seriously intend to defend this position, but just bring it up as an attempt to show that CR hooks together a lot of things that are logically independent (an emergentist ontology, an Aristotelian conception of entities, a part-whole mereology, and a “causal powers” view of causation, among others).

In any case, my argument is that most of the substantively interesting CR theses do not emerge (pun intended) from the Bhaskarian theory of science, or the account of causation, or the EAR distinction. They emerge from hooking together (ontological) emergentism and an Aristotelian conception of entities and dispositional causal powers. For emergentism is what generates the (controversial) explosion of real entities in CR writing. Not only that, emergentism is the only calling card that CR writers have to provide what Dave Elder-Vass has called a “regional ontology” for the social sciences that does not resolve into just repeating the boring EAR distinction or the (increasingly uncontroversial) “theory of science” that Bhaskar developed in A Realist Theory of Science and The Possibility of Naturalism.

How to be a (controversial) Critical Realist in two easy steps.-

So now that we have that covered, it is easy to show how to produce a “controversial” CR argument. First, pick a mereology. Meaning: pick some entities to serve as the parts, preferably entities that do not themselves have a controversial status (most people would agree that the entities exist, form coherent wholes, have natures, and so on), and pick a more controversially coherent whole that these parts could conceivably compose. Then argue that the parts do indeed form such a whole via the ontological emergence postulate. Note that the postulate allows you to fudge on this point, because you do not actually have to specify the mechanism via which this ontological emergence relation is actualized (you can argue that that is the job of empirical science and so on). Then, hooking together the CR notion of causal powers, the EAR distinction, and the postulate of the ontological democracy of all entities, argue that this whole is now a super-addition to the usual vanilla reality. That is, the new emergent entity is real in the same sense that other things (apples, rocks, leptons, cells) are real. It has an inherent nature, a set of dispositions to generate actual events, and, most importantly, causal powers. The powers of this new emergent entity may be manifested at its own level (by affecting same-level entities), or they may be exhibited in the constraining power of that entity upon the lower-level constituent entities (the postulate of “downward causation”). For instance (to mention one thing that could actually be of interest to readers of this blog), Dave Elder-Vass has provided an account of the reality of “organizations” (and the non-reducibility of organizational action to individual action) using just this CR recipe.

Now we have the materials to make some people (justifiably) discomfited about a substantive CR claim (or at least motivated to write a critical paper). For if you look at most of the contributions of CR to various issues, they resolve themselves into just the steps that I outlined above. So the CR “theory” of social structure is precisely what you think. Social structure is composed of individuals, organized by a set of relations that form a coherent (configured) whole. This whole (social structure) is now a real entity endowed with its own causal powers, which now (may) exert “downward causation” on the individuals that constitute it. These causal powers are not reducible to those of the individuals that constitute it. This is how CR cashes in what John Levi Martin has referred to as the “substantive hunch” that animates all sociological research: “the social” emerges from the powers and activities of individuals but never ultimately resolves itself into an aggregation of those powers and activities. Note that CR is opposed to any form of ontological reduction, whether “downwards” or “upwards.” Thus attempts to reduce social structure to the mental or interactional level are “downward conflationist,” and attempts to reduce individuals to social structure (or language or what have you) are “upward conflationist.” Thus, the “first” Archer trilogy can be read in this way: first, on the non-reducibility of (and ontological independence between) social structure in relation to the individual or individual activity; then “culture” in relation to the individual or individual interaction; and later (in reverse) personal agency in relation to either social structure or culture.

Essentially, the stratified ontology postulate must be respected. Any attempt to simplify the ontological picture is rejected as so much covert (or overt) reductionism or “conflation.” Note that “conflation” is not technically a formal error of reasoning (as is begging the question) but simply an attempt by a theorist to simplify the ontological picture by abandoning the ontological democracy or ontological emergence postulates. A lot of the time CR theorists (like Archer) reject conflation as if it were such an error in reasoning, when in fact it is a substantive argument that cannot be dismissed in such an easy way. This is weird, because both the ontological democracy and the ontological emergence arguments are themselves non-demonstrable but substantively important propositions in CR. Thus, most CR attempts to dismiss either reductionist or simplifying ontologies do themselves commit such a formal error of reasoning, namely, begging the question in favor of ontological emergence and ontological democracy.

Another way to make a CR argument is to start with a predetermined high-level entity of choice. This kind of CR argument is more “defensive” than constructive. Here the analyst picks an entity whose real status has (for some reason) become controversial, either because some theorists purport to show that it does not “really” exist (meaning that it is just a shorthand way to talk about some aggregate of actually existing lower-level entities), or that it is not required to generate scientific accounts of some slice of the world (ontological simplification or reduction à la caloric or phlogiston). Here CR arguments essentially use the ontological democracy postulate to say that the preferred whole has ontological independence from either the lower-level constituents or the higher-level entities to which others seek to reduce the focal entity. Moreover, the CR theorist may argue that this ontological independence is demonstrated by the fact that this entity has (actualized and/or empirically observable) causal powers, once again above and beyond those provided by the lower-level (or higher-level) entities or processes usually trotted out to “reduce it away.” This applies in particular to the “humanist” strand of CR that attempts to defend specific causal powers that are seen as inherent properties of persons (e.g. reflexivity in Archer’s case), or even the very notion of the person (in Chris Smith’s What is a Person?) as an emergent whole endowed with specific causal powers, properties, and propensities.

To recap, CR is a complex object composed of many parts. But not all parts are of the same nature. I would distinguish roughly five parts, organized according to the generality of the claim and the specificity of the substantive points made:

1) The parts that CR shares with all “vanilla” realisms. This includes the postulate of ontological realism (the mind-independence of the existence of reality), the transitive/intransitive distinction, the EAR distinction, and so on. In itself, none of these theses makes CR particularly distinctive, unique, or useful. If you disagree with CR at this level, based on irrealist premises, congratulations. You are insane.

2) The Aristotelian ontology.- This specifies the kind of realism that CR proposes. Here things get more interesting, because there is actual philosophical debate about this (nobody seriously defends irrealist positions in philosophy any more, and most sociologists just like to pretend to be irrealists to show off at parties). Here CR could play a role in philosophical debates insofar as a neo-Aristotelian approach to realism and explanation is a coherent position in philosophy of science (although it is not without its challengers). Here belong (among other things) the specific CR conceptualizations of objects and entities, the causal (dispositional) powers ontology (when hooked to the EAR distinction), and the specific “theory of science” and “theory of explanation” that follow from these (essentially endorsing mechanismic and systems explanations over reductive, covering-law stories). This is, I believe, the best ontological move, and CR should be commended in this respect.

3) The stratified ontology.- This comes from yoking (1) and (2) to the ontological emergence and ontological democracy postulates. This is where you can find a lot of “controversial” (where by controversial I mean worth arguing about, worth specifying, worth clarifying, and in some cases worth rejecting) arguments in CR. These are of three types: (a) Ontological emergence arguments augment the standard common-sense ontology of material entities to argue for the existence of higher-level non-material entities; thus “social structures” are as real as the couch that you are lying on. The danger here is a world that comes to be populated with a host of emergent “entities” with no principled way of deciding which ones are in fact real (beyond the theorist’s taste); this is the problem of ontological inflation. (b) “Downward causation” arguments add this postulate to suggest that the emergent (non-material or material) entities not only “exist” in a passive sense but actually exert causal effects on lower-level components or other higher-level entities. (c) “Ontological independence” arguments attempt to show that a particular sort of entity that is usually done violence to in standard (reductionist) accounts has a level of ontological integrity that cannot be impugned and a set of causal powers that cannot be dismissed. In humanist and personalist accounts, this entity is “the person,” along with a host of powers and capacities that are usually blunted in “social-scientific” accounts (e.g. persons as centers of moral purpose), and the enemies are the positions that attempt to explain away these powers or capacities or to show that they don’t matter as much as other entities (e.g. “social structure”).

4) Continuing extensions of the stratified ontology argument.- This is the part of CR that has drawn an (unfair) amount of attention, because it extends the same set of arguments to defend both the reality and the causal powers of a set of entities that (a) a lot of people are diffident about according the same level of reality as the standard material entities, and (b) most people would have difficulty even calling entities. These may be “norms,” “the mental,” “the cultural,” “the discursive,” and “levels of reality” above and beyond the plain old material/empirical world that we all know and love (e.g. super-empirical domains of reality). You can see how CR can get controversial here.

5) Additional stuff.- A lot of other CR arguments do not directly follow from any of these, but are added as supplementary premises to round out CR as a holistic perspective. For instance, the rejection of the fact-value distinction in science is not really a logical derivation from the theory of science or the neo-Aristotelian ontology, and neither is the “judgmental rationality” postulate (that science progresses, gradually gets at the truth, etc.). I mean, all realisms presuppose that we get better at science, but this is not really a logical derivation from realist premises (as argued by Arthur Fine). The fact/value thing is in the same boat, because it requires a detour through a lot of controversial group (3) and group (4) territory to be made to stick. For instance, given that persons are emergent entities endowed with non-arbitrary properties and powers, the “relativist” argument that any social arrangement is as good as any other for the flourishing of personhood is clearly not valid. This means that social scientists have to take a strong stance on the value question (hence sociological inquiry cannot be value-neutral). Because a mixture of Aristotelian ontology and ontological emergentism applied to human nature is incompatible with moral (and social-institutional) relativism, the fact-value distinction in social science is untenable. However, note that to get there a lot of other premises, sub-premises, and substantive arguments for the reality of persons as emergent, neo-Aristotelian entities have to be accepted as valid. In this sense the fact/value thing is only a derivation from certain extensions of CR into controversial territory. As already intimated, What is a Person? is a (well-argued!) piece of controversial CR precisely in this sense.

Note that this clarifies the “giant package” versus “minimalist” CR debate. Let’s go back to the cable analogy. So you are considering signing up for CR? Here’s the deal: the “basic” CR package would (in my view) be acceptance of (1) and (2) (with some but not all elements of (3)). In this sense, I am a Critical Realist (and so should you be). The “standard” CR package includes, in addition to (1), (2), and all of (3), some elements of (4). Here we enter controversial territory, because a lot of CR arguments for the “reality” of this or that are not as tight or well-argued as their proponents suppose. In their worst forms, they resolve themselves into picking your favorite thing (e.g. self-reflexivity) and then calling it “real” and “causally powerful” because “emergent.” It is no surprise that Archer’s weakest work is of this (most recent) ilk. Here the obsession with ontological democracy prevents any consideration of ontological simplification or actual ontological stratification (meaning getting clear on which causal powers matter most rather than assigning each one its preferred, isolated level). Finally, the “turbo” package requires that you sign up for (1) through (5). This, of course, is undeniably controversial, because here CR goes from being a philosophy of scientific practice to being a philosophy of life, the universe, and everything. Sometimes CR people seem surprised that others may be reluctant to adopt a philosophy of life, but I believe that this has to do with their penchant for supposing that once you accept the basic package, the chain of reasoning that leads you to the standard and the turbo follows inexorably and unproblematically.

This is absolutely not the case, and this is where CR folk would benefit most from talking to people who are not fully committed to the turbo package but who (like other sane people) are already 80% into the basic (and maybe even the standard). My sense is that we should certainly be arguing about the right things, and in my view the right things are at the central node (3), because this is where we find the key set of argumentative devices that allows CR people to derive substantively meaningful (“controversial”) conclusions (both at that level, e.g. arguments for the reality of “social structure,” and about type (4) and (5) matters), and where most attempts to provide a workable ontology for the social sciences are either going to be cashed in or rejected as aesthetically pleasing formulations of dubious practical utility.


Written by Omar

September 14, 2013 at 7:48 pm

Three thousand more words on critical realism

The continuing brouhaha over Fabio’s (fallaciously premised) post*, and Kieran’s clarification and response, has actually been much more informative than I thought it would be. While I agree that this forum is not the most adequate place to seriously explore intellectual issues, it does have a (latent?) function that I consider equally valuable in all intellectual endeavors, which is the creation of a modicum of common knowledge about certain stances, premises, and even valuational judgments. CR is a great intellectual object in the contemporary intellectual marketplace precisely because it seems to demand an intellectual response (whether by critics or proponents), thus forcing people (who otherwise wouldn’t) to take a stance. The response may range from (seemingly facile) dismissal (maybe involving dairy products), to curiosity (what the heck is it?), to considered criticism, to ho-hum neutralism, to critical acceptance, to (sock-puppet-aided) uncritical acceptance. But the point is that it is actually fun to see people align themselves vis-à-vis CR, because it provides an opportunity for those people to lay their cards on the table in a way that seldom happens in their more considered academic work.

My own stance vis-à-vis CR is mostly positive. When reading CR or CR-inflected work, I seldom find myself vehemently disagreeing or shaking my head vigorously (this in itself I find a bit suspicious, but more on that below). I find most of the epistemological and meta-methodological recommendations of people who have been influenced by CR (like my colleague Chris Smith, Phil Gorski, George Steinmetz, or Margaret Archer) fruitful and useful, and in some sense believe that some of the most important of these are already part of sociological best practice. I think some of the work on “social structure” that has been written by CR-oriented folk (Doug Porpora and Margaret Archer early on, and more recently Dave Elder-Vass) is important reading, especially if you want to think straight about that hornet’s nest of issues. So I don’t think that CR is “lame.” Like any multi-author, somewhat loose cluster of writings, it does include some work that claims to be CR and is indeed lame. But that would apply to anything (there are examples of lame pragmatism, lame field theory, lame network analysis, lame symbolic interactionism, etc., without making any of these lines of thought “lame” in their entirety).

That said, I agree with the basic descriptive premises of Kieran’s post. So this post is structured as a way to unhook the fruitful observations that Kieran made from the vociferous name-calling and defensive over-reactions to which these sorts of things can lead. So think of this as my own reflections on what this implies for CR’s attempt to provide a unifying philosophical picture for sociology.


congratulations to Omar!

Earlier this week the Theory Section of ASA announced in an email that Omar Lizardo is this year’s winner of the Lewis Coser Award for Theoretical Agenda Setting. Congratulations!  According to the Theory Section’s website, the award is “intended to recognize a mid-career sociologist whose work holds great promise for setting the agenda in the field of sociology.” This is a well-deserved recognition for Omar, whose research covers a diverse range of theory and empirical substance. It’s unlikely that you’ve read everything interesting that Omar has written lately (including his fantastic book reviews). My advice is just to dig in and start reading. You’ll learn something.


Written by brayden king

July 11, 2013 at 4:27 pm

Posted in just theory, omar, sociology

deep culture and organization theory

This weekend, Omar wrote a detailed post about the “depth” of culture, the degree to which some idea is internalized and serves as a motivation or guide for action. I strongly recommend that you read it. What I’d like to do in this post is use Omar’s comments as a springboard for thinking about organizational behavior.

The reigning theory in the sociology of organizations is neo-institutionalism. The details vary, but the gist is that the model posits a Parsonsian theory of action. There is an “environment” that “imprints” itself on organizations. Myth-and-ceremony institutionalism posits a “shallow” imprinting: people don’t really believe the myth and ceremony. Iron-cage institutionalism takes a very “deep” view of culture: actors internalize culture and then enact it.

What Omar posits, I think, is a view of culture that is constitutive (you are the ideas you internalize) and interactive (your use of an idea modifies the cultural landscape). Omar wants to get away from the metaphor of “deep” vs. “shallow” culture. He also discusses dual process theory, which merits its own post.

What is important for organization theorists is that you get away from Parsons’ model:

Note that conceptually the difference is between thinking of “depth” as a property of the cultural object (the misleading Parsonian view) or thinking of “depth” as resulting from the interaction between properties of the person (internalized as dispositions) and qualities of the object (e.g. meaning of a proposition or statement) (the Bourdieusian point).

The implication for orgtheory? Previously, the locus of orgtheory has been the “environment” – all the stuff outside the organization that people care about. That’s highly analogous to “culture” getting internalized deep within the individual. Thus, different institutional theories reflect a deep/shallow dichotomy. If you buy Omar’s post-Swidler/post-Giddens view of things, then what is really interesting is the interaction occurring at the point of contact between environment and organization. Orgs don’t passively await imprinting. Rather, there is variance in how they respond to the environment and there is interesting variation in the adoption/importation of stuff from the environment.


Written by fabiorojas

January 9, 2013 at 12:01 am

Rethinking Cultural Depth

The issue of whether some culture is “deep” versus “shallow” has been a thorny one in social theory. The basic argument is that for some piece of culture to have the requisite effects (e.g. to direct action) it must be incorporated at some requisite level of depth. “Shallow culture” can’t produce deep effects. Thus, for Parsons, values had to be deeply internalized to serve as guiding principles for action. Postulating cultural objects that are found at a “deep” level requires that we develop a theory that tells us how this happens in the first place (e.g. Parsons and Shils 1951). That is: we need a theory about how the same cultural “object” can go from (1) being outside the person, to (2) being inside the person, and (3) once inside, from being shallowly internalized to being deeply internalized. For instance, a value commitment may begin at a very shallow level (a person can report being familiar with that value) but by some (mysterious) “internalization” process it can become “deep culture” (when the value is now held unconditionally and motivates action via affective and other unconscious mechanisms; the value is now “part” of the actor).

One thing that has not been noted very often is that the “cultural depth” discussion in the post-Parsonian period (especially post-Giddens) is not the same sort of discussion that Parsons was having. This is one of those instances in cultural theory where we keep the same set of terms (“deep” versus “shallow” culture) but change the parameters of the argument, creating more confusion than enlightenment. In contrast to Parsonian theorists, for post-Giddensian theorists the main issue is not whether the same cultural element can be found at different levels of “depth” (or travel across levels via a socialization process). The key point is that different cultural elements (because of some inherent quality) exist necessarily at a requisite level of “depth.”

These are not the same sort of statement. Only the first way of looking at things is technically “Parsonian”; that is, Parsons really thought that

…culture patterns are [for an actor] frequently objects of orientation in the same sense as other [run of the mill physical] objects…Under certain circumstances, however, the manner of his [sic] involvement with a cultural pattern as an object is altered, and what was once an object becomes a constitutive part of the actor” (Parsons and Shils 1951: 8).

So here we have the same object starting at a shallow level and then “sinking” (to stretch the depth metaphor to death) into the actor, so that ultimately it becomes part of their “personality.”

Contrast this formulation to the (post-Giddensian) cultural depth story proposed by Sewell (1992). According to Sewell,

…structures consist of intersubjectively available procedures or schemas capable of being actualized or put into practice in a range of different  circumstances. Such schemas should be thought of as operating at widely varying levels of depth, from Levi-Straussian deep structures to relatively superficial rules of etiquette (1992: 8-9).

Sewell (e.g. 1992: 22-26), in contrast to Parsons, decouples the depth dimension from the causal-power dimension of culture. Thus, we can find cultural schemas that are “deep but not powerful” (rules of grammar) and schemas that are powerful but not deep (political institutions). Sewell’s proposal is clearly not Parsonian; it is instead (post)structuralist: there are certain things (like a grammar) that are necessarily deep, while other things (like the filibuster rule in the U.S. Senate) are naturally found on the surface, and need not sink to the level of deep culture to produce huge effects. Accordingly, Sewell’s cultural depth discussion should not be confused with that of the early Swidler. Swidler (circa 1986) inherited the Parsonian, not the post-structuralist, problematic (because at that stage in American sociology the latter would have been an anachronism). Her point was that for the thing that mattered to Parsons the most (valuation standards) there weren’t different levels of depth, or more accurately, that they didn’t need to have that property to do the things that they were supposed to do.

The primary aim of recent work on dual process models of moral judgment and motivation seems to be to revive a modified version of the Parsonian argument. That is, the point is that in order to direct behavior, some culture needs to be “deeply internalized” (as moral intuitions/dispositions). However, as I will argue below, the very logic of the dual process argument makes it incompatible with the strict Parsonian interpretation. To make matters even more complicated, we have to deal with the fact that by the time we get to Swidler (2001) the conversation has changed (i.e. Bourdieu and practice theory happened), and she has modified the argument accordingly. She ingeniously proposes that what Parsons (following the Weberian/Germanic tradition) called “ideas” can now be split into “practices + discourses.” Practices are “embodied” (and thus “deep” in the post-structuralist sense) and discourses are “external” (and thus shallow).

This leads to the issue of how Bourdieu fits into the post-Parsonian/post-structuralist conversation on cultural depth. We can at least be sure of one thing: the Parsonian “deep internalization” story is not Bourdieu’s version (even though Bourdieu used the term “internalization” in Logic of Practice). The reason is that habitus is not the sort of thing that was designed to explain why people “learn” to have “attitudes” (orientations) towards “cultural objects,” much less to internalize these “objects” so that they become part of the “personality” (which is, by the way, possibly the silliest thing ever said). There is a way to tell the cultural depth story in a Bourdieusian way without falling into the trap of having to make a cultural object a “constituent” part of the actor, but this would require de-Parsonizing the “cultural depth” discussion (which is something that Bourdieu is really good for). There is one problem: the more you think about it, the more it becomes clear that, insofar as the cultural depth discussion is a pseudo-Parsonian rehash, there might not be much left after it is properly Bourdieusianized.

More specifically, the cultural depth discussion might be a red herring because it still retains an implicit allegiance to the (Parsonian) “internalization” story, and internalization makes it seem as if something that was initially subsisting outside of the person now comes to reside inside the person (as if, for instance, “I disagree with women going to work and leaving their children in daycare” were a sentence stored in long-term memory to which a “value” is attached).

This is a nice Parsonian folk model (shared by most public opinion researchers). But it is clear that if we follow the substantive implications of dual process models, what resides in the person is not a bunch of sentences to which they are oriented; instead the sentence lives in the outside world (of the GSS questionnaire) and what resides “inside” (what has been internalized) is a disposition to react (negatively, positively) to that sentence when I read it, understand it and (technically, if we follow Barsalou 1999) perceptually simulate its meaning (which actually involves running through modal scenarios of women going to work and leaving miserable children behind). This disposition is also presumably the same one that may govern my intuitive reaction to other sorts of items designed to measure my “attitude” towards other related things. I can even forget the particular sentence (but keep the disposition) so that when somebody or some event (I drive past the local daycare center) reminds me of it I still reproduce the same morally tinged reaction (Bargh and Chartrand 1999; Bargh and Williams 2006).

Note that the depth imagery disappears under this formulation, and for good reason. If we call “dispositions to produce moral-affective judgments when exposed to certain scenarios or statements in a consistent way through time” deep, so be it. But that is not because there exists some other set of things that are the same as dispositions except that they lack “depth.” Dispositions either exist in this “deep” form or they don’t exist at all (dispositions are the sorts of things that, in the post-Giddensian sense, are inherently deep). No journey has been undertaken by some sort of ontologically mysterious cultural entity to an equally ontologically spurious realm called “the personality.” A “shallow” disposition is a contradiction in terms, which then makes any recommendation to “make cultural depth a variable” somewhat misleading, as long as that recommendation is made within the old Parsonian framework. It is misleading because this piece of advice relies on the imagery of sentences with contents located at “different levels” of the mind, travelling from the shallow realm to the deep realm and transforming their causal powers in the process.

If we follow the practice-theoretical formulation more faithfully, the discussion moves from “making cultural depth a variable” to “reconfiguring the theoretical language so that what was previously conceptualized in these terms is now understood in somewhat better terms.” This implies giving up on the misleading metaphor of depth and the misleading model of a journey from shallow-land to depth-land via some sort of internalization mechanism. Thus, there are things toward which I have a disposition to react in a certain (e.g. morally and emotionally tinged) distinct way, a disposition endowed with all of the qualities that “depth” is supposed to provide, such as consistency and stability. We can call this “deep culture,” but note that the depth imagery does not add anything substantive to this characterization. In addition, there are things towards which I (literally) have no disposition whatever, so I form online (shallow?) judgments about these things because this dorky, suit-wearing-in-July interviewer with NORC credentials over here apparently wants me to do so. But this (literally confabulated) “attitude” is like a leaf in the wind; it goes this way or that depending on what’s in my head that day (or, more likely, as shown by Zaller 1992, depending on what was on the news last night). Is this the difference between “shallow” and “deep” culture? Maybe, but that’s where the (Parsonian version of the) internalization language reaches its conceptual limits.

Thus, we come to a place where a dual process argument becomes tightly linked, in a substantive way, to what was previously being thought of under the misleading “shallow culture/deep culture” metaphor. I think this will “save” anybody who wants to talk about cultural depth from the Parsonian trap, because that person can then say that “deep = things that trigger moral intuitions” and “shallow = attitudes formed by conscious, on-the-fly confabulation.” Note that conceptually the difference is between thinking of “depth” as a property of the cultural object (the misleading Parsonian view) or thinking of “depth” as resulting from the interaction between properties of the person (internalized as dispositions) and qualities of the object (e.g. the meaning of a proposition or statement) (the Bourdieusian point).

Special Issue of Social Psychology Quarterly

Hi all, here’s an ad:


At the 2012 ASA meetings, a number of members of the Social Psychology Section sported pins announcing “Social Psychology—it’s actually everywhere!” The same may be true of culture and cultural processes. The Culture section “considers material products, ideas, and symbolic means and their relation to social behavior.” The new American Journal of Cultural Sociology and the long-standing Poetics: Journal of Empirical Work on Culture, Media, and the Arts publish a wide array of studies focused on aspects of culture. The intent of this special issue of Social Psychology Quarterly is to highlight the deep connections between omnipresent cultural contexts/processes and social psychological mechanisms in social life.
The intersection of culture and social psychology may take many forms. Claims that “culture is cognition” raise linkages between cultural meaning-making and a wide array of internal processes such as stereotyping, attribution, schematic processing, and the like. Identity processes involve recognition of shared and negotiated meanings and interpersonal dynamics within cultural contexts that may bolster or alter identity meanings and subsequent behavior. And, the varied status and power processes shaping dynamics among individuals and groups may underlie the production, consumption, and interpretation of cultural objects.
We welcome submissions from a broad range of empirical and theoretical perspectives, demonstrating the links between social psychological mechanisms and cultural processes to explain a wide variety of practices (pertaining, for example, to religion, health, politics, music, art, intergroup dynamics).  The deadline for submitting papers is May 1, 2013. The usual ASA requirements for submission apply (see “Notice for Contributors”). Papers may be submitted at  Please indicate in a cover letter that you would like your submission to be considered for the special issue. Prospective authors should feel free to communicate with the coeditors ( or or special issue editors, Jessica Collett ( and Omar Lizardo ( about the appropriateness of their papers.

Written by Omar

December 5, 2012 at 12:42 am

Posted in culture, omar

Why strong social constructionism does not work I: Arguments from Reference

In this and a series of forthcoming posts, I will attempt to outline an argument showing that, most of the time, claims to have derived a substantively important conclusion from constructionist premises are incoherent. By a substantively important conclusion I refer to strong arguments for the “social construction of X,” where X is some sort of category or natural kind that is usually thought to have general ontological validity in the larger culture (e.g. gender, race, mental illness, etc.).

In a nutshell, I will argue that the reason why these sorts of arguments do not really work is that they require us to draw on a theory of meaning, language, and reference that is itself inconsistent with constructionism. To put it simply: substantively important conclusions derived from constructionist premises require a theory of reference that implies at least the potential for realism about natural kinds and a strong coupling between linguistic descriptions and the real properties of the entities to which those descriptions apply; but constructionism is premised on the a priori denial of realism about natural kinds and of any such strong coupling between language and the world. Thus, most strong claims about something being “socially constructed” cannot be strong claims at all. This argument applies to all forms of social constructionism, whether of the phenomenological, semiotic, or interactionist varieties.

Here I will do three things: 1) give a more “technical” definition of what I mean by a “substantively important conclusion” within a constructionist mode of argumentation (noting that my argument does not apply to “softer” versions of constructionism); 2) nail down the point that constructionism (and any other set of premises designed to draw substantively important conclusions about the natural and social worlds) depends on an “argument from reference” in order to work; and finally, 3) lay out the argument that, because of this dependence, strong constructionist conclusions are usually not warranted (they follow from an incoherent argument).

The shock value in constructionism.- In a constructionist argument, a substantively important conclusion is one that has “shock value.” By shock value, I mean that the argument results in the conclusion that something we thought was “real” in an unproblematic sense is shown to be either a) a fictitious entity that has never been or could never be real, or b) a historically contingent entity endowed with a weaker form of existence (e.g. a collectively sustained fiction or even delusion). This is “shocking” in the sense that the constructionist thesis upsets the “folk ontology” heretofore taken for granted by lay and professional audiences alike.

A useful analogue (because it makes the technical argumentative steps clear) comes from the Philosophy of Mind. There, the most “shocking” argument ever put forth is known as “eliminativism” with respect to the so-called “propositional attitudes” (Stich 1983; Churchland 1981). Note that this argument is actually espoused by people who consider themselves radical materialists, almost blindly committed to a traditional scientific epistemology and an anti-dualist ontology. Thus, I am not claiming a substantive commonality between constructionists and eliminativists. All that I want to do here is point to some formal commonalities in their modes of argumentation in order to set up the subsequent point of common reliance on an argument from reference.

According to the eliminativist thesis, the denizens of the mental zoo that play a role in our ability to account for our own and other people’s behavior (such as beliefs, desires, wants, etc.) do not actually exist. The reason is that the theoretical system in which they play a role (so-called “folk” or “belief-desire psychology”) is actually an empirically false theory, one that relies on the postulation of theoretical entities (mental entities) that have no scientifically defensible ontological status.

According to belief-desire psychology, persons engage in action in order to satisfy desires. Beliefs play a causal role in behavior by providing the person with subjective descriptions of how means connect to desirable ends. Using belief-desire psychology, we can explain why person A engages in behavior B by postulating that “Person A believes that by doing B, she will get C, and she desires/wants C.” A belief is a proposition about the world endowed with a truth value, and a desire is a proposition that describes the sorts of states of affairs that the person would like to bring about. Both are conceived to be mental entities endowed with “intentional” content (they are about something). Their intentional content dictates how they can relate to other entities in a systematic way (e.g. because some propositions logically imply others). We can then “predict” (or retrodict) the behavior of persons by linking desires to beliefs in a way that preserves the rationality of persons.

Accordingly, if I see somebody rummaging through the contents of a refrigerator, I can surmise that this person is engaging in this sort of behavior because she believes that she will find something to eat in there, and she wants something to eat. Relatedly, when persons are questioned as to why they did something, they usually give a “reason” why they did what they did. This reason takes the form of a “motive report.” If I question somebody about why they are rummaging through a refrigerator, they are likely to say “because I’m hungry.”

According to eliminativists, the main causal factors in belief-desire psychology have no ontological status. Thus, neither propositional beliefs of the sort “I think that p” (where p is a proposition such as “there is food in the refrigerator”) nor desires of the sort “I want q” have any ontological status. As such, belief-desire psychology stands to be replaced by a mature neuropsychology, one in which “folk solids” such as desires and beliefs (to use Andy Clark‘s terms) will play no role in explanations and accounts of human behavior. These notions, previously thought to be natural kinds endowed with unquestionable reality, are eliminated from our ontological storehouse and consigned to the dustbin of fictional entities discarded by modern science (such as Phlogiston, Caloric, The Ether, The Four Humors, etc.).

Constructionism and eliminativism.- I argue that most substantively important conclusions within the constructionist paradigm are actually modeled after “eliminativist” arguments in the Philosophy of Mind.

All of the pieces are there. First, a constructionist argument usually takes some (folk or professional) system of “theory” as its target, regardless of whether this is a system of theory currently in existence or one from a previous historical era. This is usually a folk (or sometimes professional) “theory of X” (e.g. the “folk theory of race” or the “folk theory of gender”). Second, within this system the constructionist picks one or more central theoretical categories or concepts (X) which, within the system, are endowed with a non-problematic ontological status as real (e.g. gender or racial “essence”). Third, the constructionist shows the folk theory of X to be false from the point of view of a more sophisticated theory (modern population genetics in the case of the old anthropological concept of “race”). Thus X (e.g. race), as conceptualized in the folk theory, does not really exist, even though it forms a key part of certain contemporary folk theories. The title of the famous PBS documentary, “Race: The Power of an Illusion,” conveys that point well.

The constructionist may also argue for the indirect falsity of the current theory of X by simply using the historical or anthropological record to show that there are cultures/historical periods in which X either was not presumed to exist in the way that it exists today, or was part of a different theoretical system that radically changed its status (the properties that define membership in the concept were radically different). Here the constructionist will agree that X “exists” in the current setting, but it does not have the sort of existence attributed to it in the folk discourse (transhistorical and transcultural); instead it has a weaker form of existence: social, as in “sustained by a historically and culturally contingent social arrangement which could theoretically be subject to radical change.” Foucault’s famous argument for the radically different status of the category of “man” within the so-called “classical episteme” is an example of that sort of claim. The category of man in the modern era has a meaning that is radically incommensurate with the one that it had in the classical episteme. The implication that the category of “man” therefore does not refer, and that we can thus conceive of a possible future in which it plays no actual role, follows.

The common element here is that a category that we take for granted (within the descriptions afforded by some lay or professional theoretical system) to be ontologically “real” (race, gender, the category of “man,” etc.) is shown instead to “actually” have a fictitious status because there is nothing in the world that meets that description. More implicitly, insofar as a concept has undergone radical changes in overall meaning (with meaning determined by its place within a network of other concepts in the form of a folk or professional theory), there cannot be a preservation of reference across the incommensurate meanings. Hence the concept cannot really be picking out an ontologically coherent entity in the world. I refer to this as the “strong constructionist effect.” The basic idea, as I have already implied, is that in order for the effect to be successful, we must already be working from within some theory of reference; otherwise the claim that “there is nothing in the world that meets that description” is either vacuous or incoherent.

Constructivism and arguments from reference.- What are “arguments from reference”? Arguments from reference are those that implicitly or explicitly require a theory of reference for their conclusions to follow (or even make sense), as has been recently pointed out by Ron Mallon (2007). When this is the case, it can be said that the substantively important conclusion is dependent on the (logically autonomous) theory of reference. It is striking how little time most social scientists spend thinking about reference. They should, because even though it is seldom explicit, we all require some theory about how conceptualizations link up (or fail to!) with events in the world in order to make substantive statements about the nature of that world. I argue that in order to produce the strong constructionist effect, and thus derive substantively important conclusions, the argument from social construction requires a particular theory of reference.

One would think that when it comes to theorizing about how conceptual, theoretical, or folk terms “refer” to the world there would be various competing theories. Instead, twentieth-century analytic philosophy was long dominated by a single account of how concepts refer. This was Frege’s suggestion that “intension” (the meaning of a term) determines “extension” (the object in the world that the term picks out). Lewis (1971, 1972) formalized this formulation for the case of so-called theoretical entities in scientific theories. According to Lewis, terms in scientific theories purport to describe objects in the world bearing certain properties or standing in certain relations with other objects; this is the description of that term. On this view, the terms of Folk Psychology are theoretical entities that gain their meaning from their relations to other entities and observational statements within a system of theory. Eliminativists built their argument on this suggestion, arguing that there is nothing in the (scientifically acceptable) world that meets the description for a propositional attitude (a mental entity endowed with “intentional” content); ergo, belief-desire psychology is false, its terms do not refer, and we need a better theory of the mental.

In short, from the viewpoint of a descriptivist theory of reference, a given term or concept defined within a given theoretical system refers if and only if there is an object in the world that bears the properties or stands in the relations specified in the description. According to this theory, terms refer to real-world entities when there exists an object that satisfies the necessary and sufficient conditions of membership in the category defined by the term (which in the limiting case may be an individual). Descriptions that have no counterpart in the real world are descriptions of fictional entities and thus fail to refer (and the validity of the theoretical systems of which they are a part is therefore impugned). When competent speakers use the terms of any theory (scientific or folk) they have a description in mind, which specifies the set of properties that an object would have to have for that term to be said to successfully refer to it.

The basic argument that I want to propose here is that “shock value” constructionism depends on a descriptivist theory of reference. This should already be obvious. The standard constructionist argument begins with a painstaking reconstruction of a given set of folk or professional descriptions. The analyst then moves on to ask the rhetorical question: is there anything in the world that actually satisfies this description? If the answer is no, then the conclusion that the term fails to refer (and names a fictional and not a real entity) readily follows. The standard criteria for satisfaction of these conditions usually boil down to some sort of semantic analysis. For instance, in Orientalism, Edward Said painstakingly reconstructed a Western “image” (read: description) of the Middle East as a kind of place and of the Arab “Other” as a (natural?) kind of person. Said pointed out that this description of Arab peoples (menacing, untrustworthy, exotic, emotional, eroticized, etc.) was not only logically incoherent; it was simply false: there had never been a group of people who met this description; it was a fabrication espoused by a misleading theoretical system: Orientalism. Thus, Orientalism as a culturally influential theory of the nature of the Arab “Orient” needed to be transcended. The main theoretical entity implied by such a theory, the Oriental “other” endowed with a bizarre set of attributes and properties, was thereby eliminated from our ontological storehouse.

Houston, we have a problem.- It would be easy to show that essentially all arguments that produce the “strong constructionist effect” follow a similar intellectual procedure. There are at least two problems with this (largely unacknowledged) dependence of social constructionism on a descriptivist theory of reference. First, constructionism denies the conditions that make a descriptivist strategy an adequate theory of reference, which are, at a minimum, the validity of a truth-conditional semantics and the capacity of words to unambiguously (e.g. literally) refer to objects and events in the world. This is not a problem for Gottlob Frege and David Lewis, or for most descriptivist theorists in analytic philosophy, most of whom subscribe to some version of propositional realism (propositions have truth values that can be unproblematically redeemed by just checking to see whether they “correspond” to the world). However, it is a problem for constructionists because they cannot accept such a strong version of realism.

Thus, if the very theory of the relationship between language and the world that is espoused by social constructionism (skepticism as to the applicability of a truth-conditional semantics and unambiguous reference) is true, then descriptivism has to be false. This means that social constructionism is an inherently contradictory strategy; to produce substantively meaningful conclusions (the strong constructionist effect) it has to rely on a theory of the relationship between meanings and the world that is denied by that very approach. Second, even if this logical argument could be sidestepped, constructionism would still be in trouble. The reason is that there is a competing (and equally appealing on purely argumentative grounds) theory of reference in modern philosophy: the causal-historical theory of reference most influentially outlined by Saul Kripke and Hilary Putnam. The basic issue is not that this is a competing account of reference; the problem is that this account actually denies a key link in the constructionist argument: that in order to refer, there has to be a match between the description of the term and the properties of the object that the term putatively refers to.

Instead, causal-historical theories of reference allow for two possibilities that are seldom taken into account by constructionists: 1) that persons can refer to things in the world even though their mental description of the term that they are using does not at all match the properties of those things, and 2) that the description of a term can undergo radical historical change while the term continues to refer to the same entities or cluster of entities. The first possibility undercuts the capacity of the constructionist to “correct the folk,” because reference is decoupled from the descriptive validity of the terms that are used to refer. The second possibility undercuts the argument for social construction based on the historical and cultural variability of descriptions. It opens up the possibility that there is “rigid designation” of the same set of social or natural realities across cultures in spite of radical differences in the cultural frameworks from within which these referential relations are established.

A reasonable objection is simply to point out that we do not have sufficiently strong grounds for picking descriptivism over causal-historical theories of reference, as equally respectable arguments have been put forth in defense of both. This is in fact the position taken by most philosophers, who instead go on to worry about whether people are cherry-picking one of the two theories of reference to support their preferred argumentative strategy.  However, I believe that most constructionists in social science cannot be content with this non-committal solution. Instead, as in other areas of philosophy (e.g. epistemology, ethics, mind), there is a way to “break the tie” between rival philosophical theories, and that is to naturalize these types of inquiry by looking at which theories seem to be consistent with the relevant sciences.  Here we have good news and bad news for constructionists.

Research in cognitive science, cognitive semantics and cognitive linguistics points to the inadequacy of descriptivist theories of reference from a purely naturalistic standpoint. This should be good news for constructionists, because the upshot is that truth-conditional semantics roundly fails as an account of how persons generate meaning (Lakoff 1987).  The irony is that these theories redeem the original skepticism of constructionism vis a vis any form of truth-conditional semantics and propositional realism, but in so doing they also undercut the ability of constructionists to engage in the sort of argument that results in “shocking” or substantively strong claims for the social construction of X, because the rhetorical force of these arguments depends on descriptivism, and descriptivism implies propositional realism and “objectivism” (that truth is the literal correspondence of statements and reality).  The resulting counter-intuitive conclusion is that it is precisely because linguistic meaning and natural categories meet the constructionist specifications that strong constructionist arguments are actually impossible.  In fact, it is precisely because language and semantics work the way that constructionists (implicitly) presuppose that they do that the norm in historical and cultural change may not be the radical transformation of reference relations (as implied by Foucauldian analysts), but rigid designation of the same (social, or natural) “essences” and relations even in the wake of superficial shifts in the accepted cultural description of those entities.

Written by Omar

March 7, 2012 at 6:57 pm

specifying the agency problematic II: implications for cultural sociology

In a previous post, I suggested that a useful way of (re)specifying the agency problematic requires us to understand that most of the time, talk of agency has nothing to do with “freedom to act” but actually pertains to the freedom to conceptualize the world in a way that is indeterminate in relation to objective reality: that is, agency usually means freedom to think (about the world in a way that is not determined or unilaterally constrained).  I noted that an advantage of specifying the concept of agency in this way is that it allows us to understand a bunch of quirks in the history of social and cultural theory, in particular the Parsonian conflation of “voluntarism” with the Weberian problematic of “ideas” and the subsequent projection of essentially the same debate onto the question of “cultural autonomy” (from biology and conditions) in anthropological theory.  Here I would like to go into greater depth into the reasons why it is useful to think of the agency problematic in this way, with an emphasis on implications for contemporary cultural sociology.

One objection that you might have is that thinking of agency as “freedom of conceptualization” seems like a counter-intuitive, overly convoluted, obscure or simply unhelpful way of specifying and disaggregating what we mean by agency. If that’s what you think, I think you are wrong. This way of thinking about the agency problematic makes a bunch of sense.  First, as I mentioned before, it makes sense of the way that Parsons thought about it.  Why should we care about making sense of Parsons?  Because a lot of the debates that we are having today are still Parsonian debates in code, and cracking the code helps us get clearer about what we are talking about.  To crack the code, all that you need to do is change the words.  As we saw, for Parsons the battle was between “idealism” and “positivism”; change “idealism” to “culture” and change “positivism” to either “materialism” or “structure/structuralism” and you have the modern version of the debate.  That’s why when we set culture against structure, or agency against structure, culture against materiality, agency against social structure, or ideas against the objective world, in an oppositional contrast, the corresponding terms of these interlinked dichotomies match.  Second, this way of thinking about culture and agency accounts for why there will always be a conflation between agency and the mental, and why theories that deny that the mental (or the cognitive) matters are ipso facto theories that “deny agency.”  Third, this way of thinking about it explains the curious contemporary fate of cultural sociology.  This is a field that has actually been built on the ruins of the original debate that was had at the level of individual agency.

Culture versus structure.- For instance, cultural sociologists sometimes get made fun of by “structuralists” (let’s say in the study of inequality) because what they are peddling (the mental) seems like fluff in comparison to non-negotiable realities, especially when it comes to the big stuff (large, structured inequalities). That’s why in the agency/structure debate cultural sociologists have to be on the side of (some) agency. The reason for this is that, as I noted before, the “group” version of the debate is no different from the individual version.  Culture is just socially patterned conceptualization (or shared ideas).  So if we can ascertain that the “mental” matters because different people can conceive of the same “objective” situation in different ways, then when we aggregate individual cognition into the group cognition that we usually refer to as culture, a similar set of inferences follows (see any book by Zerubavel).  This is also why in the “culture and poverty” debate there is a conflation between culture/agency and judgments of responsibility.  In our folk (Western) model, if you had agency, then you are responsible.  When the cultural sociologist then brings “culture” into the study of poverty, he or she is ipso facto saying that the poor are somehow (at least partially) “responsible” for their plight. This creates the odd situation in which only the pure structuralist who removes all agency from the poor can claim that he or she is not blaming them for their condition.

The autonomy of culture(s).- In the anthropological version of the agency=freedom-of-conceptualization formulation, culture is not reducible to (group) biology (e.g. genetic heritage) in the same way that individual mental processes are not driven by biology; culture is not reducible to the (physical) environment or to ecology in the same way that the mental is not reducible to the environmental; finally, culture is not reducible to some sort of “rational” calculus, because if the neo-classical presumption were true, there would not be “cultures” in the plural. Instead all cultures would have the same set of beliefs about the world, and cultural variation would simply be a function of variation in the objective features of the world (e.g. the situation of “same worlds, different cultures” would not arise).  Note that I have essentially described the program of “cultural anthropology” initiated by Boas and sustained by such people as Sapir, Whorf, Mead, Kroeber, etc. during the early and mid-twentieth centuries.  The inference that agency is the “freedom to think differently” is extended to the group level in the form of cultural relativism: culture is not determined by non-cultural forces, therefore groups have the freedom to think differently in forging distinct cultures.  The “autonomy” of culture (from whatever) is formally identical to the autonomy of cognition from conditions.  That’s why it is so easy to move, without conceptual loss, from a position of “voluntarism” at the level of the individual to a position of “autonomism” at the level of cultural analysis (see Wikipedia entry for Alexander, Jeffrey). The reason is that they are the same substantive position, and even the bogey-men that Parsons cursed as positivism re-appear in aggregate form: environmental determinism, biologism and neo-classical rationalism. That’s why cultural anthropology fought valiantly against all three.  The first two were vanquished pretty early on, but the battle of cultural anthropology against the rationalist conception of the actor continues to this day (this usually happens under the heading of the “cognitive unity of mankind” or the “multiple rationalities” debates in economic and cultural anthropology).

Culture versus Rationality.- This explains an otherwise weird mystery: rational action theories (see e.g. Hedstrom, Goldthorpe) take ideas and beliefs seriously, but they seem oddly “a-cultural.” The reason why RAT has an a-cultural flavor is that it has trouble accounting for structured variation in beliefs and ideas that is not traceable to objective conditions; by implication this also makes it a theory that denies agency.  Thus, you can believe that “ideal” stuff matters and still deny that “agency” (or the cultural) matters (that’s why Parsons understood neo-classical economics to be an incoherent mixture of idealism and positivism).  That’s also why rational-choice philosophers (like Elster) have to get into the belief-formation problematic, and in fact have been the only ones who have advanced the normative problematic of belief justification.  Finally, this is why people like Coleman simply don’t make any sense when they think that by bringing action back in they are in fact bringing agency back in. Insofar as they subscribe to a deterministic model of cognition (e.g. the constrained optimization calculus), you can have all of the action in the world without having an iota of agency.

The oddness of normativity in cognition.- It is astounding what a non-problem (or how bizarre a notion) it is for cultural sociologists that we could have a normative theory of the mental (essentially, that we can pass judgment on ideas by looking at their causal history). Cultural sociology inherits the core irrationalism of German Idealism and Boasian anthropology. This is not a “bad” thing; it is just the thing: agency entails a loose coupling between the world and beliefs about the world, and since the only way to get a “normative” theory of belief is to suggest an unacceptably strong coupling, cultural sociologists are happy to give up on this. In fact, I think that most cultural sociologists don’t even think that this normative question (vis a vis a belief: is it rational or not? is it justified or not?) makes any sense.  In this respect the rational action people and the cultural sociologists might as well be from different planets. This is also one of the main ways in which we haven’t made much progress since Parsons.

Where do we stand?.- So we come full circle.  A lot of agency talk is really talk about the mental.  What we mean by agency is really the capacity to conceptualize the world in different ways irrespective of objective reality, and what other people mean by structure is really some sort of non-mental or non-cognitive thing that constrains your capacity to conceptualize the world in this or that way, so that in the limiting case a structuralist can predict what you think without looking into the black box that is your head. So you don’t have agency because you don’t have the freedom to impose your own construal on objective situations (or, in the group sense, cultures are not autonomous because they are linked to non-cultural features of the world).

Does this mean that the world does not constrain conceptualization in any way?  The answer to this question is more complex, but I would say that the weight of the evidence points to no. So the unrestricted version of social constructionism goes out the window. The best work on comparative and typological linguistics, metaphor theory and cross-cultural studies of categorization overwhelmingly shows that there are objective constraints on conceptualization and cognition, although these constraints show up at the level of structure and seldom at the level of content (except when it comes to the so-called basic level).  One hypothesis that can certainly be rejected is the unitary constraint hypothesis (e.g. naive reflection, “realist”, or truth-conditional theories of semantics).  There are very few features of the world that have a monolithic effect on conceptualization.  No domain (space, society, time, etc.) has been found that imposes a non-negotiable structure on our conceptualizations, although there are domains that leave fewer degrees of freedom than others.

But the job of “ranking” domains in this sense has only begun.  The more important point is that the obsession of cultural sociologists with simply making the case for social construction (and leaving the impression that they subscribe to the unrestricted—and ultimately irrationalist—account even though most don’t really) has resulted in a lack of attention to the “limits” of social construction.  Here limits should not be interpreted in terms of the traditional bogey-men (what about biology?) but instead in terms of the relation between agents and the world at a level that abstracts from this.  We know there have to be limits simply because we are embodied and embedded beings, and it is unlikely for instance that we can use conceptual resources that are not “grounded” in that fact. However, the relationship between embodiment, cognition and action is still something that makes cultural sociologists squirm a bit (because the body is kind of, well, biological), but it is clear that this is where these questions will be asked (and hopefully answered).

Written by Omar

March 5, 2012 at 10:50 pm

You say within, I say between…Let’s call the whole thing off!

Suppose we are interested in the effects of some social psychological construct that we are theoretically devoted to (let’s say “symbolic racism”) on support (or lack thereof) for (generous) social welfare policies.  In quantitative social science we would spend a lot of money surveying people, collect some data, and ultimately specify a regression model of the form:

Y=a+bW+cX+e      (1)

Where Y is some sort of scale that lines up individuals in terms of their support for social welfare policies, W is some sort of scale that lines up individuals in terms of their “symbolic racism,” X is a matrix of other “socio-demographic” stuff, and e is a random disturbance.  Suppose further that the model provides support for our theory: b is substantively and statistically significant and its sign goes in the right direction: the more symbolic racism the less support for social welfare policies.  We would then write a paper arguing that individuals who are high in symbolic racism are less likely to support social welfare policies, and that this is a likely source of support for the Republican party in the South; we might even insinuate in the conclusion that trends in income inequality would be much less steep if it weren’t for these darn racists, etc.
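To make this concrete, here is a minimal sketch of fitting a model like (1) by ordinary least squares on simulated data. Everything here is hypothetical: the variable names, the sample size, and the true coefficient values (including a true b of -0.5) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical scales: W = "symbolic racism", X = one socio-demographic input
W = rng.normal(size=n)
X = rng.normal(size=n)

# Simulated outcome with a true b of -0.5: more symbolic racism, less support
Y = 2.0 - 0.5 * W + 0.3 * X + rng.normal(size=n)

# Fit model (1): Y = a + bW + cX + e by ordinary least squares
design = np.column_stack([np.ones(n), W, X])
a_hat, b_hat, c_hat = np.linalg.lstsq(design, Y, rcond=None)[0]
print(b_hat)  # close to the true value of -0.5
```

Even in this toy setting, note what b is: a description of how Y differs across different (simulated) individuals at different levels of W. Nothing in the fitting procedure speaks to any single individual's psychology.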

I would bet you 10,000 dollars,* however, that in actually presenting their results and their implications the authors would say things that are in fact not supported by their statistical model.  In fact we all say or imply these things, especially when W is an attitude (or some other “intra-individual” attribute) and Y is a behavior, and we desire to conclude from a model such as (1) that the attitude is a cause of the behavior (the same thing would apply if the units of analysis are organizations, and W is some organizational attribute, like the implementation of a “strategy,” and Y is an organizational outcome).

Now suppose even further that W passes all of the (usual) hurdles for something to constitute a cause: it precedes Y, the model is correctly specified on the observables, etc.  My point here is that even if that were true, it is not true that from observing a large and statistically significant b we can conclude that at the individual level there is some sort of psychological (or intra-organizational) process with the same structure as our W, called “symbolic racism,” that causes the individual’s support for this or that policy.

An obscure segment of the statistical and psychometrics literature tells us why this is the case (see in particular Borsboom et al 2003):  in order to jump from information that is obtained from a comparison between persons to statements about the data-generating process within persons, we must make what is called the local homogeneity assumption.  This assumption is just that: an assumption.  And for the most part it is a shaky one to make.  For b in (1) only gives us information about the conditional distribution of Y responses among the population of subjects as we move across levels of W; it says nothing about causal processes at the individual level. In fact the model that produces responses at the individual level could be wildly different from (1) above and yet it could generate the between-persons result that we observe.  In this respect, the statements:

1a. Our results provide support for the conclusion that in the contemporary United States a person with a high degree of symbolic racism is less likely to support social welfare policies than another person with a lower degree of symbolic racism.

1b. Our results provide support for the conclusion that a person’s support for punitive welfare policies would decrease if their propensity towards symbolic racism were to decrease.

are empirically and logically independent.  Model (1) only supports 1a; it says nothing about 1b (or would say something about 1b only under the weight of a host of unsupportable assumptions).  However, whenever we write up results obtained from models such as (1), we sometimes present them as if (or insinuate that) they provide support for 1b.

Startlingly, this lack of (necessary or logical) correspondence between a between-subjects result and the DGP (data-generating process) at the individual level implies that most statistical models are useless for the sort of thing that people think they are good for (drawing conclusions about mechanisms at the level of the person/organization).   Not only that, it implies that a model that provides a good explanatory fit for within-individual variation (let’s say a growth curve model of the factors that account for individual support for social welfare across the life course) might be radically different from the one that provides the best fit in the between-persons context.  Finally, it implies a “rule” of sociological method: “whenever a within-subject explanation is extracted from a between-subjects analysis we can be sure that this explanation is (probably) false (at least for most non-trivial outcomes in social science).”
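A small simulation (all numbers invented for illustration) shows how badly things can go when local homogeneity fails: below, every simulated person's within-person slope is negative, yet the between-persons regression on person means yields a large positive slope, because person-level baselines are correlated with person-level averages of W.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_obs = 200, 20

# Each person's W values vary around a person-specific mean
w_mean = rng.normal(size=n_persons)
W = w_mean[:, None] + rng.normal(scale=0.3, size=(n_persons, n_obs))

# Within every person, Y DECREASES with W (slope -1); but persons with a
# higher average W also have much higher baselines (+3 per unit of w_mean)
Y = 3.0 * w_mean[:, None] - 1.0 * W + rng.normal(scale=0.1, size=(n_persons, n_obs))

# Between-persons regression on the person means: the slope comes out positive
slope_between = np.polyfit(W.mean(axis=1), Y.mean(axis=1), 1)[0]

# Within-person slopes, one regression per person: negative for everyone
slopes_within = [np.polyfit(W[i], Y[i], 1)[0] for i in range(n_persons)]

print(slope_between)           # positive (around +2)
print(np.mean(slopes_within))  # negative (around -1)
```

The between-persons estimate is not "wrong" as a description of the population of simulated persons; it is simply silent about the within-person data-generating process, which here has the opposite sign.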

*I don’t actually have 10,000 dollars.

Written by Omar

December 16, 2011 at 4:28 pm

One way of specifying the agency problematic

Conceptions of what we mean by “agency” abound and are unlikely to constitute a single manageable notion (some ritualistic citation to Emirbayer and Mische would go here). Instead, something like the notion of agency is probably a complex, “radial category.” This means that when we take a position on the various versions of the “agency/structure” or any sort of analogous “debate” we end up having unrelated arguments that use similar words, and sometimes the same argument using different words. More deleteriously, we end up speaking in code, so that instead of saying what we mean in specific terms, we develop the bad habit of speaking in “generics” such as agency versus structure. Positions taken in terms of these generics have a socio-logical function in the field (they serve as emblems of membership in groups and schools), but analytically they leave us at the level of a low cognitive equilibrium, one which is pretty hard to shake out from (which is why it seems like we’ve been having the same conversation for like fifty years now).

I think one productive way to proceed is to use a disaggregation and specification strategy, whereby we isolate all of the various sub-arguments that we are having under the agency/structure debate guise (this is roughly the strategy that John Martin used in Social Structures in regards to that concept). After disaggregation and specification we may then have it all out at this lower level of abstraction. Arguments at this level are bound to be more productive, because here hopes for adjudication regarding the desirability and coherence of one position over another are actually much higher than they would be if we stay at the level of ghostly generics.

So, one thing that I think has been meant by agency in the history of social theory is simply freedom to conceptualize the world in a way that is not dictated by the objective features of the world. While the term agency usually has an affinity to “freedom” (as in freedom of the will or freedom to do whatever you want), so that in a lot of debates agency is conceptualized as freedom of action, I think that one of the core meanings (or the most consequential meaning in the history of various influential debates in social theory) is not about freedom to act but about freedom to think or, as I will refer to it from now on, freedom to conceptualize. So agency is simply freedom of cognition from objective aspects of the world, or more precisely the agent’s freedom (insofar as our cognitive capacities are constitutive of our status as agents) to conceptualize the world in alternative ways. One can recognize this as a strictly Kantian version of what we mean by “agency,” and that is no accident. For it is clear that it was the strand of social theory that begins with German Idealism, and which emphasized the creative and constitutive capacities of the subject to conceive of the world in particular ways, that injected the most consequential version of the agency problematic into social theory. Under this model, the individual is “free” insofar as the way in which some state of affairs is conceptualized is not completely determined by some sort of non-negotiable feature of the world (e.g. its materiality or brute facticity). Instead, a conceptualization emerges from a negotiation between features of the world and aspects of cognition that are decidedly contributed by the conceptualizer. This means that a simple inspection of the objective features of the situation is not sufficient to predict the way in which that situation will be understood (read conceptualized) by a person. In this sense, to say that agency is a property of persons is to say that they have the “freedom to” construe a given state of affairs in ways that are not a function of extra-cognitive features of that state of affairs.

As Parsons well understood, a theory that denies agency (or that denies “voluntarism” in his antiquated language) would be a theory that bypasses the mental, or to use more contemporary language, a theory that bypasses cognition (or in Weberian terms, a theory that says that ideas don’t matter). In Parsons there were three examples of such a theory: instinct theory (biologism), radical behaviorism (environmental determinism) and neo-classical economics; he called all of this stuff “positivism” even though that had nothing to do with how the term had been understood in nineteenth-century thought. Regardless, Parsons was right in thinking that any theory that made the cognitive a determinate product of the non-cognitive by definition got rid of the element of “freedom” in action (which he sometimes confusingly referred to as the “value-element”); the reason is that, thanks to Kant, in the social theory tradition the only source of freedom left to the person was the freedom to conceptualize the world in a way that was not determined by non-negotiable features of the world. That’s why the “material” is the site of non-agency and the mental (or the cognitive/ideal) is where “agency” resides.

So one way to specify the whole “agency” thing is to actually argue about this rather than about agency: does your actor model make conceptualization indeterminate given some state of affairs, or does it constrain the actor to conceptualize the world in a single way (e.g. the way that “reality” really is)? Parsons understood that any theory that reduced cognition to “objectivity” was “positivist” in the sense that it left the actor no conceptual choice to construe the world in independence from non-cognitive features of the world. In neo-classical economics the world is only one way (the way described by “modern science”), and if the actor did not have this conceptualization then by definition the actor was irrational. That’s why in Parsons’s work (but curiously not in our versions of his debates) there was a clear connection between agency and “the problem of rationality.” Parsons’s dilemma was that the only theory that had a normative conception of rationality did not leave room for agency (positivism in its neo-classical incarnation), while the only theory that left room for agency (freedom to think otherwise), when taken to its ultimate conclusions, resulted in an irrationalist premise (a form of cultural and cognitive relativism). We still haven’t solved that one, but it would be helpful to bring the rationality debate to the fore again.

Note that a lot of “social construction” talk (and debate) has the same structure. So another advantage of what I propose is that it disaggregates and re-specifies that debate. I think the term social construction is terrible and misleading. First, if you’ve read Berger and Luckmann you know that it is missing a few words. What they really mean by this phrase is the cognitive construction of (the sense of) reality with categories of thought of social origin. This mouthful is of course unwieldy, but it underscores their achievement. In some respects the B&L respecification of the problematic ended up being a better synthesis of German Idealism with French Social Realism (e.g. Weber and Durkheim) than that produced by Parsons. For the point of social construction (and of cultural sociology) is that it implies an “idealist” version of the Marxian dictum (on this “idealization” of Marx B&L were quite clear): you cognitively make your world but not with categories of thought of your own making (a point obviously central to Bourdieu as well; the phenomenological notion of world-making re-appears in analytic philosophy in Nelson Goodman’s work). I think the reason why we like this formulation is that we can have our cake and eat it too. Note that at the individual level this implies the grossest form of (Durkheimian) determinism (not made any more palatable by the invocation of Mead): the categories with which you think are the product of society; at the group level, though, we get the benefits of German Idealism: culture is not reducible to (social structure, environment, physical features of the world, universal rationality), so agency re-appears at that level.

This accounts for why social-construction types of debates are so predictable: on the one side you usually have somebody vigorously stomping his/her feet and saying that there are objective features of the world (e.g. and by “objective” the foot-stomper means features of reality that demand that they be conceptualized in ways that leave no freedom for alternative construals). Let’s call these features non-negotiable features. On the other side you have social constructionists carefully denying that such non-negotiable features exist (or more precisely, claiming that they might exist in a neutral ontological sense but they don’t really constrain thought in the way that the non-constructionist claims that they do; i.e. they are epistemically indeterminate). For the (strict) social constructionist everything that the non-constructionist claims is non-negotiable could be construed otherwise, and that’s why culture is autonomous and people have agency.

For instance, Andy Pickering thinks that his work on quarks demonstrates how (scientific?) “agency” emerges from the “mangle of practice” even though his substantive point is that objective features of the world have not constrained the actual shape of theories about the world in the history of particle physics. Once again, “agency” plays no role in this claim, and we could translate agency as “capacity to conceptualize in divergent ways” without any explanatory loss; agency is a completely decorative term in this whole discussion. Of course, now we can easily explain why it makes this strange appearance; for what Pickering means by “agency” is simply freedom of thought from the slavery of having to reflect a unitary reality. Pickering’s big counter-factual (and this he has in common with every cultural theorist) is that the history of physics (which is essentially a history of conceptualizations) could have been otherwise. Non-constructionists think that he should be committed to the nearest mental institution. The constructionist immediately points out that the very notion of mental illness is a concept that is not determined by objective features of the world and therefore one that has been cognitively constructed in a collective or social way (we can envision an alternative history in which the notion of “mental illness” never arose in the way that it arose in the West, which is the point of Foucault’s early work).

So the point is that a lot of debates about social construction are simply the historical version of the culture/agency thing: the history of collective conceptualizations is contingent and/or driven by the internal features of cultural systems themselves (a point also made by Foucault in his early work). One thing that is not the case is that the history of cultural change can be done as a history of changes in non-cultural features of the world.

The lesson? Agency means many things. One obvious thing that it means is freedom. Yet, a curious quirk in the history of social theory linked “freedom” to cognition or thought (Kant). In the twentieth century this linkage (via Boas, who was said to regularly page through his copy of Critique of Pure Reason during cold nights in the arctic) was “blown up” to the group level in the form of the founding problematic of cultural anthropology; so that the “autonomy” (a synonym for “freedom,” by the way) of culture from “conditions” (biological, environmental, etc.) is the formal equivalent of the Kantian autonomy of conceptualization from the world, or the Parsonian autonomy of values from conditions. Cultural sociology inherits this whole package of debates, problematics and hasty formulations, but in some sense, cultural sociology is simply the contemporary avatar of a position in this debate (shared by Boas and Parsons): this position is that culture, cognition and thus conceptualization is not reducible to objective or non-negotiable features of the world (so when early network theorists counter-posed “social relations” to culture, they took a stance in this debate, one that they’ve recently reneged on); Berger and Luckmann provide us with the modern vocabulary, the one that replaced Parsons’ voluntarism/idealism/positivism lingo: “the social construction of reality.” That’s why this phrase has become shorthand for saying “culture/cognition (depending on whether you are talking about groups or individuals) is not a function of non-negotiable features of reality,” or simply another way of saying “agents have the freedom to construct realities in ways that are not a function of the objective features of the world” (the phenomenological input here is clear in the notion of “multiple realities”).
Edmund Leach, Mary Douglas and Eviatar Zerubavel provide us with another update of the same position: culture is a grid that cuts the blooming, buzzing confusion of the world in group-specific ways. This single fact explains why cultural sociology stands opposed to all sorts of biologism, environmentalism and universalist rationalism. In these debates the capacity to unhook thought from servitude to some sort of non-mental determinant is essentially an argument for "agency." But if that's the case, then we can have the same debates without using the "a" word. It adds nothing to the proceedings, and may in fact confuse matters further.

Written by Omar

December 7, 2011 at 7:28 pm

three ways of talking about the variables that you don’t care about

It would be nice if the world of quantitative data analysis in social science were like the one envisioned by people like Christopher Achen, where every regression is a bivariate regression or at worst a model with 2-4 right-hand-side variables. Instead, we are stuck living in a "garbage can" world where, even though we only care about the relation between one thing and some other thing, we write papers that end up including long vectors of other stuff that we don't care about.

How we talk about the rest of the garbage can, however, is important, since it reflects mini-theories about the research process (sometimes consciously held, most of the time mindlessly deployed) that capture what you (think you) are doing when you regress Y on whatever. Different people recommend different lingo, and this lingo is related to what they think you are accomplishing by augmenting your regression specification. So get clear on what you are doing and modify your vocabulary accordingly.

In my view, there are two broad practical purposes of regression analysis and they are reflected in the relevant vocabulary.

Descriptive inference.- Andrew Gelman, for instance, prefers to talk about "input" variables (rather than independent variables); and what do you do with input variables? Well, you "adjust" for them. This kind of language is in my view appropriate for the bulk of quantitative analysis of observational data in social science. Here the researcher cannot, does not, or should not make strong claims that their favorite X has a causal effect on their favorite Y. Instead, you are just interested in the "net effect" of X on Y, and you adjust for other inputs to make sure that the effect is there within levels of their (additive) combination. This language is nice and neutral and does not commit you to problematic assumptions about inference.

Causal inference.- People like Stephen Morgan, on the other hand, like to talk about "conditioning on" the values of the other Xs. This language is borrowed from a long tradition of causal inference in experimental and non-experimental research, and it is appropriate when that's exactly what the researcher thinks he or she is doing. In particular, if you follow the recent systematization of what those other Xs that you don't directly care about actually do in the context of causal inference using directed acyclic graphs (in the recent writings by Pearl and by Morgan and Winship), then it is clear why it is that you are "conditioning" on them: you block backdoor paths linking your favorite X to your favorite Y by conditioning on those other pesky Xs (and only those) that connect the fave with the outcome. The language of "conditioning" also makes explicit that you are working from within the conditional independence assumption, which is the fundamental bedrock of causal inference with observational data.
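The backdoor logic is easy to see in a simulated toy example (a hypothetical sketch of the general idea, not code from Pearl or Morgan and Winship): generate a confounder Z that causes both X and Y, then compare the naive regression of Y on X alone with one that conditions on Z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Z is a confounder: it opens a backdoor path X <- Z -> Y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)             # X depends on Z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)   # true causal effect of X on Y is 2.0

def ols(y, columns):
    """Least-squares coefficients of y on the given columns (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(y, [x])[1]            # ignores the backdoor path: biased upward
conditioned = ols(y, [x, z])[1]   # "conditioning on" Z blocks the backdoor

print(naive)        # noticeably larger than 2.0 (around 2.7 here)
print(conditioned)  # close to the true effect of 2.0
```

The naive coefficient absorbs Z's effect on Y routed through the X-Z correlation; once Z is in the model, the coefficient on X recovers the causal effect, which is exactly what "blocking the backdoor path" buys you.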

So there you have it. If you are just running descriptive regressions to see what goes with what and are interested in "effects" (as long as you don't fall into the trap of talking about these effects as having anything to do with causes), then when you write your papers you just say "adjust for." I also think it would be good to follow Gelman and stay away from the language of "dependent" and "independent" variables; this is just mindless importation of lingo from the experimental tradition into a setting where it does not belong. On the other hand, if you are working from an explicit framework in which your main interest is the estimation of causal effects, then you should say "conditioning on." This makes clear exactly what you are claiming ("conditional on X2, when I jiggle my favorite X my favorite Y also jiggles").

What does that leave out? Well, it leaves out the language that most people use when talking about the Xs they don't care about: the language of "controlling for." I think this is a generally stupid way of referring to what you are doing. This vocabulary reveals that you may not even be self-conscious about what exactly you are doing in that piece of research. It imports categories from the experimental tradition (the notion of a "control group") that have no business in observational data analysis, and it makes it sound as if you are unaware of the spectacular inappropriateness of such assumptions in that context (every paper I've written that analyzes quantitative data uses this phrase, making me moron number one). So stop saying "control for"!

Of course, I am under no illusions that people will actually stop using this phrase given how mindlessly and insidiously it has become insinuated into the language of social research; but you can always hope.

Written by Omar

December 3, 2011 at 2:42 pm

Theoretical egalitarianism as Primitive Classification

As every sociology grad student knows, the famous thesis of Durkheim and Mauss's (1962: 11) essay on Primitive Classification is that "the classification of things reproduces…[the] classification of men [sic]."  This is a controversial argument that has served as one of the primary inspirations for contemporary work in the sociology of knowledge.  Durkheim and Mauss went on to argue, for instance, that if persons divided themselves into two groups, then they divided the world of animals, plants, etc. into two kinds; if they divided themselves into four groups organized as hierarchies (with subgroups nested within larger groups), we would find an analogous classification system for nature (with sub-kinds nested within larger kinds), and so on.

I want to propose that a form of "Primitive Classification" is alive and well in contemporary social theory.  This argument holds with two minor elaborations.  First, we must allow for the corollary that if the classification of things reproduces the classification of persons, then resistance to classifying persons should result in resistance to classifying things.  Second, the style of classification of persons should result in an analogous style of classifying things.  For instance, one thing that Durkheim and Mauss's "primitives" had no trouble doing was classifying themselves in rigidly hierarchical terms.  From here Durkheim reasoned that an equally rigid classification of things should follow, such that subsumptive relations between groups resulted in analogous part-whole classifications at the level of natural phenomena.  From this it follows that resistance to classifying persons in a hierarchical manner should result in an equal resistance to classifying things in a hierarchical way.

I think that these two principles can help to explain a lot of quirky features of contemporary social theory.  Consider, for instance, the knee-jerk presupposition that any form of dualistic distinction is somehow "wrong" (a priori) and therefore deserves to be "transcended" (the allergy against dual distinctions). Or consider the related (equally knee-jerk) propensity to think that, when postulating the existence of two abstract substances or processes (let us say, structure and agency), the theorist makes a "mistake" if he or she "privileges" one over the other (theoretical egalitarianism), such that the best theory is the one that gives an equal share of (causal?) power to both things (or, if the theorist postulates "three" things, to all three).

I will submit that, for the most part, the "hunch" that "dualisms" are wrong, or that "privileging" one abstract thing over another puts us on the road toward the worst of analytical sins, has nothing to do (in 99% of cases) with the logical virtues of the argument.  Instead, I think this hunch is in effect a form of Primitive Classification unique to certain collectives in the social and human sciences (most other sciences, and most lay persons, have no problem with dualisms or with hierarchically privileging one abstract thing over another). We (e.g. sociologists) want our classification of (abstract) things to reflect our (desired?) classification of persons, so that when there is a mismatch we simply reject the theoretical strategy as wrong, and in fact end up producing bland theoretical classifications ("equal interplay of structure and agency") that reflect our classification of persons.  So dualisms are (perceived as) "wrong," not logically, but socio-logically (Martin 2000).  They are wrong because our desired classification of persons tends to reject dualisms (at least in the humanities and some social sciences).  And even when abstract dualisms are provisionally allowed (e.g. agency and structure), we are forbidden from privileging one over the other (because such privileging is a no-no in our [ideal?] social world).

So, the next time somebody tells you that the very fact that you made an analytical distinction is somehow already a logical or theoretical "fallacy" (see Vaisey and Frye 2011 for an entertaining dispatching of this ridiculous idea), or that the very fact that you argued that something is substantively more important than something else somehow makes you unacquainted with the canons of theoretical logic, or that your theory requires that everything be at the same level as everything else in order to count as "sound" (see Archer 1982 for the classic demolition of this preposterous notion), turn the tables and point them back to Durkheim and Mauss's classic essay.

Environment, Risky Decisions and Post-Tragedy Analysis

Today I woke up to the news that tragedy has struck Oklahoma State University again.  Last night a single-engine plane carrying four people–including the coach and assistant coach of the women's basketball team, Kurt Budke and Miranda Kerna–crashed in central Arkansas, killing everybody on board.  This is of course the same OSU that lost 10 people in a crash 10 years ago. Both the coach and the assistant coach were seen as rising stars in women's basketball (Kerna was only 36), making their tragic loss a bitter pill to swallow.

Details are still sketchy, and I'm sure lots of new information will emerge in the days to come.  There are two things that we should expect.  First, ex post facto analysis will tend to concentrate on micro-decisions that in hindsight will seem very bad and short-sighted.  Some plausible candidates are the age (probably too old) and make (single-engine) of the plane, flying conditions, and possibly the age of the pilot (an octogenarian ex-state senator who had his wife–in her seventies–as co-pilot).  There are already some early reports indicating that the "strict" university policies instituted after the 2001 crash were not followed here (although it is not yet clear whether this particular flight was "covered" by those rules).  Second, post-tragedy analysis will obscure a key component of the (possibly) bad decision-making that we are looking at here: the environment in which the organizational actors in question were operating.  This is particularly important since, if it is true that protocol wasn't followed, we have a case not only of shoddy decision-making but also of an organization failing to follow its own (self-imposed) rules.

As Vaughan (1996) has argued, this sort of phenomenon (tragic decisions resulting from acting in violation of organizational safety rules) has to be put in the context in which the actors are operating, especially if this context provides incentives (both of the "push" and "pull" variety) for actors to take risks, circumvent the rules, and/or ignore obvious warnings.  Two things that I'm sure will get lost in the whole post-mortem are precisely that (1) this was a recruiting trip, and (2) the pilot was not just somebody providing a disinterested service: he was an OSU booster.  Thus, in terms of contextual factors we have an environment of scarcity and competitive pressure in which timing is of the essence (recruitment), coupled with a person who is supposed to be the expert at making the crucial decision (should we fly?) but who was actually caught up in this process rather than being a neutral, objective third party (as would have been the case had the pilot simply been a commercial pilot for hire).

Thus, I think this will ultimately prove to be a case that meets the conditions of a tragic decision resulting from a lack of organizational self-regulation, produced by the placement of certain organizational actors in a competitive environment that induced careless decision-making.

Written by Omar

November 19, 2011 at 7:25 pm

Posted in omar, sociology

Easy way to Erdös #

MathSciNet now has a simple tool to compute your Erdös number.  You go here, put in an author's name and first initial followed by an asterisk, click on the "Collaboration Distance" link, and then click on the "use Erdös" button. In sociology, one of the few open paths to Erdös is via Stan Wasserman (E=6; you can check for yourself here).  Thus, all co-authors of Stan have a finite Erdös number.  One of them is Joe Galaskiewicz (E=7).  Since Jeff Larson is a co-author of Joe G's, and I'm a co-author of Jeff's, that puts my Erdös number in the finite camp (E=9).
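Under the hood, a "collaboration distance" is just a shortest path in the co-authorship graph, computable by breadth-first search. Here is a minimal sketch over a toy graph: the five authors between Erdös and Wasserman are hypothetical placeholders (P1-P5) standing in for whatever MathSciNet's actual path is; only the last three links mirror the chain described above.

```python
from collections import deque

# Toy co-authorship graph. P1..P5 are hypothetical placeholders for the
# intermediate authors on Wasserman's path to Erdös.
edges = [
    ("Erdös", "P1"), ("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"),
    ("P5", "Wasserman"),            # Wasserman: E = 6
    ("Wasserman", "Galaskiewicz"),  # Galaskiewicz: E = 7
    ("Galaskiewicz", "Larson"),     # Larson: E = 8
    ("Larson", "Omar"),             # Omar: E = 9
]

# Build an undirected adjacency structure (co-authorship is symmetric).
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def erdos_number(author, graph):
    """Breadth-first search from Erdös; returns the shortest
    co-authorship distance, or None if no path exists."""
    dist = {"Erdös": 0}
    queue = deque(["Erdös"])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist.get(author)

print(erdos_number("Wasserman", graph))  # 6
print(erdos_number("Omar", graph))       # 9
```

Since BFS explores the graph level by level, the first time it reaches an author is guaranteed to be along a shortest chain of co-authorships.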

Written by Omar

May 13, 2011 at 12:55 pm

aer’s greatest hits

The February issue of the American Economic Review has a nice feature entitled "100 Years of the American Economic Review: The Top 20 Articles," where a distinguished committee of scholars (Arrow, Bernheim, Feldstein, McFadden, Poterba and Solow) present what they see as, well, the top 20 articles published in the AER in the last 100 years.  One thing to note is that these are not the top-20 most heavily cited articles.  The committee followed (ironically) a "qualitative/reputational" approach in which citations were considered but were not the most important factor.  In addition to the (from my own highly uninformed perspective) obvious papers (e.g. Cobb and Douglas 1928, Friedman 1968, Krugman 1980, Kuznets 1955, Lucas 1973), orgtheorists and O&Mers will be happy to find Alchian and Demsetz (1972) occupying the top spot and Hayek (1945) somewhere in the middle.

Written by Omar

March 29, 2011 at 1:19 pm

Fligstein and McAdam on Strategic Action Fields

The most recent issue of Sociological Theory features an article by Fligstein and McAdam entitled "Towards a General Theory of Strategic Action Fields."  In this paper F & M attempt a "grand" conceptual synthesis of (and also attempt to systematically outline the empirical implications of) a series of recent trends toward the integration of organizational, institutional and social movement theories.  This is a place where the literature has been kind of awkwardly moving for a while now (e.g. Schneiberg and Clemens 2006; Armstrong and Bernstein 2008; Rao 2008; Evans and Kay 2008; King and Pearce 2010), but which is finally given a measure of overall conceptual coherence in this piece.

The theoretical motor of the entire paper is a very parsimonious version of field theory.  This is also a place where the literature had been awkwardly moving, with various people inventing and re-inventing a field perspective using all sorts of different language and terms, such as ecologies and multiple institutional logics (e.g. Abbott 2005; see also here).  F & M bring order to what could have been overwhelmingly complicated proceedings through their economical meta-concept of "strategic action fields" (as well as other secondary and very handy distinctions).  This concept is supposed to subsume older versions of the same general thing (including sectors, movement industries, organizational fields and, I would add, Abbottian ecologies); essentially, SAFs are sites where collective actors struggle over what is at stake (what Bourdieu referred to as "illusio"), taking each other into account while doing so.  The general dynamics of SAFs can then be described using the combined resources of "French" field theory (e.g. dominated/dominant, doxa, struggle for recognition, etc.), American reconceptualizations thereof (e.g. Fligstein's theory of social skill), and standard concepts taken from social movement theory (incumbent/challenger, contention, mobilization, framing, etc.) and organizational theory (institutional logics).

This paper is an absolute must-read.  It is easily one of the most important conceptual advances in organizational and social movement theory in recent memory (in fact, one of the ambitious claims of the paper is that these two realms are empirically co-extensive, so they should be brought under a single conceptual framework).

Written by Omar

March 23, 2011 at 6:15 pm

soc networks readings

A Social Networks reading list that I put together with a colleague for a graduate seminar that we taught a couple of years ago might be of interest to some of you.

Written by Omar

January 16, 2011 at 4:59 pm

What is it like to be Bruno Latour?

When you and I wake up in the morning a series of unconscious microhabits of perception and appreciation take over. These habits structure our “common-sense” perception of the physical and the social worlds. In fact these habits dictate a specific partition of the everyday objects that we encounter into those that are “animate” (agents) and “inanimate” (non-agents). Within the subset of agents that we endow with “animacy” we distinguish those that have a resemblance to you and me (we use the term “humans” to refer to them) and those who do not. We treat the “humans” in a special way, for instance, by holding them responsible for their actions, getting mad at them if they do not acknowledge our existence but we have previously acknowledged theirs, saying “Hello” to some of them in the morning, etc. We also ascribe distinct powers and abilities to those humans (and maybe to those furry non-human agents whom we have grown close to).

The most important of these powers is called (by some humans) "agency": the capacity to make things happen and to be the center of a special sort of causation, different from that which befalls non-human agents and non-agents in general (such as my lamp). This is our common-sense ontology.  Bruno Latour does not experience the world in this way. In Bruno's experience, the world is not partitioned into a set of "animated" entities and a set of "non-animated" ones.  After much wrestling with previous habits of thought and experience (which Bruno imbibed from his upbringing in a Western household and his education at Western schools), Bruno has taught himself to perceive something that we usually do not notice (although, I hasten to add, it is available for our perception if we start to make an effort to notice): a bunch of those entities to which the rest of the world does not ascribe that special property of "agency" (because the rest of us continue to hold on to the species-centric habit of thought that dictates that this capacity is held only by our human conspecifics) actually behave and affect the world in a manner that is indistinguishable from humans.  For instance, they act on humans, they make humans do things, and they participate (in concert with humans some of the time; in fact humans can be observed to "recruit" these non-human agents and these "non-agents" for their own self-aggrandizement projects) in the creation of the large socio-technical networks that are responsible for a lot of the "wonders" of modern civilization.

The important thing is that Bruno is now able to directly perceive (in an everyday, unproblematic manner) that these "machines" and these "animals" are the source of as much agency (sometimes even more!) as other humans.  Bruno has gotten so good at practically deploying this new conceptual scheme (along with the radically new ontological partition of the world that it carries with it) that he can transpose these newly acquired and newly mastered habits of perception and appreciation to the history of Science and Politics, discovering evidence of the agentic capacities of entities previously thought not to exercise them.  He has even uncovered evidence of humans being aware of this evidence; but then, he noted, they proceeded to hide it by creating elaborate systems of ontology and metaphysics in which non-human agency was explicitly denied, and in which agency was explicitly conceptualized as an exclusive property of so-called "persons" (where "persons" is a category restricted to humans). These "human" agents were now thought to reside in a special realm that these human apologists called "society." This "society"–these thinkers proposed–was organized by a specific set of properties and laws distinct from those that "governed" (the humans even used a metaphor from their own way of dealing with one another!) the "slice" of the world populated by those entities that "lacked" this agency (the humans called these latter "natural laws").

Giddy with excitement at this discovery, Bruno even wrote a book in which he announced the entire cover-up to the rest of his human counterparts. But the basic point is as follows: when Bruno experiences the world directly, or when Bruno's brain simulates this experience (e.g. when reading a historical account of the discovery of the germ theory of disease), he does not deploy our common-sense ontology. Instead he practically deploys a conceptual scheme that in many ways does "violence" to our common-sense ontology by radically redrawing it and liberally redistributing certain properties that we restrict to a smaller class of entities. Bruno is thus able to perceive the action of these "agents," both in the contemporary world and in past historical eras, in a way that escapes most of us. In fact, Bruno recommends that if you and I want to see the same things that he sees, and if you and I want to escape the limits of our highly restrictive "common-sense" ontology (in which such things as "society," "persons," "animals," "natural laws," etc. figure prominently), we begin by (little by little) divesting ourselves of old habits of thought and perception and acquiring the new habits that he has worked so hard to master.

The epistemological payoff of doing this would be to see the world just as Bruno sees it: a world in which humans are just one class of agents among others, and in which agency is shared equally by a host of entities to which our common-sense ontology fails to ascribe agency (so that we fail to perceive the everyday ways in which these alleged non-agents exercise a sort of "power" and "influence" over our own behavior and action). In this way Bruno recommends that the ontology specified in our common sense be reduced and displaced by that specified in what he now calls "actor-network theory." But this is a terrible name, for this is not a "theory" but a viewpoint; a way of practically reconfiguring our perception of the social and natural worlds. In fact, this last sentence just used categories from the old ontology, for in Bruno's world the "master-frame" that divides the things of "nature" from "social" things (Goffman 1974) is no longer operative and no longer serves to structure our perception.

don’t ask/don’t tell and social change theory

What does the repeal of DADT say about social movement theory?

If you believe in the Burstein hypothesis (movements don't matter, just public opinion), you'd point to the fact that the American government only moved way, way after public opinion had moved. If you believe in the Fetner hypothesis, you might argue that public opinion moved because of movement activists. Which one do you believe? What evidence do you have?

Written by fabiorojas

December 20, 2010 at 12:49 am

X-studies and the “trivialization” of disciplinary scholarship

Here’s a very interesting observation that I recently came across:

The history of, say, the sociology of nationalism or the sociology of religion might be written as one in which an initial competition between Marxist, Durkheimian and Weberian approaches to nationalism or religion–which were also Marxist, Durkheimian and Weberian approaches to anything else—has gradually given way to internal debates about nationalism or religion among specialists in those areas and the development of in-house "theories of" nationalism or religion.  From there it has sometimes been a short step to the jettisoning of the formal theoretical apparatuses altogether and a readiness to embrace increasingly interdisciplinary inquiries into the phenomenon of nationalism or religion.  Indeed, it is now possible to devote an entire career to one specialist area without worrying too much about the integrity of the [theoretical] tools one deploys to study it.  This sequence–sociological inquiry into X on the basis of established disciplinary procedures, expansion of the discipline by means of the increasing proliferation of specialist areas, development of localized substantive theories within those areas, abandonment of formal theory altogether, and embrace of interdisciplinary methods and the establishment of departments or centers of "X studies"—was what the German sociologist F. Tenbruck had in mind in his essay on "the law of trivialization" (Turner 2010: 30).

This seems to me to do a good job of intuitively describing the fate of many fields in sociology and their relationship to their more mature "interdisciplinary" cousins.  For instance, there is the sociology of religion, and there are "religious studies."  There is the sociology of race and ethnicity, and there are "race and ethnic studies."  There is the sociology of gender, and there are "gender studies" (sociology of organizations versus organizational studies?).  This also agrees with my impression that the work done under the interdisciplinary banner tends to be a little loosey-goosey, more descriptive, and generally less interesting (with exceptions) than its more disciplinary classical or post-classical counterparts.  I wouldn't go so far as to call this type of work "trivial," but I'd have to agree that there is certainly a transformation toward less compelling, more delimited and generally more circumscribed questions as we move along the gradient from classical, to post-classical, to interdisciplinary "X-studies."

Conversely, we can surmise that areas that are successful in partially resisting the move may keep their original appeal.  I think the recent move toward performativity in economic sociology represents the first attempt to move the field from its postclassical, disciplinary form to a more interdisciplinary format (e.g. "social studies of the economy," just as Mertonian "sociology of science" was transformed into "social studies of science" during the 1970s and 1980s).  If successful in reorienting the field, we can predict that work on social studies of the economy will become less tied to classical or post-classical questions (e.g. Zelizer's running argument with Marx, Simmel and Weber; Granovetter's via media between under- and over-socialized conceptions of the actor, etc.), more concerned with relatively minute conceptual, epistemological and methodological issues, and more dependent on micro-descriptive case study material.  If unsuccessful, we get to keep economic sociology as it now stands, but it is likely that post-classical "fatigue" will soon set in.

This is already what has happened to much that goes by the name of “organizational studies” in Europe and the U.S. which contrasts sharply with what used to go by name of “the sociology of organizations” along the same dimensions (e.g. circumscription versus ambition, descriptivism versus substantive theory, etc.).  The much ballyhooed “crisis” of organizational theory (which we have devoted some attention to here in the past), may then be recast simply as an abortive transition towards interdisciplinary “trivialization” manifested as (the lingering feeling of) being stuck in an intermediary status quo: neither here (sociology of organizations) nor there (organizational studies).  Organizational theory is in crisis because this type of “theoretical” concern (tethered by an umbilical cord to big or medium-sized classic questions) simply does not fit the organizational studies mold.

Written by Omar

November 2, 2010 at 2:20 pm

Posted in academia, omar

who’s getting screwed on the Nobel thing? cognitive science

We all know that only three real science Nobels exist: Physics, Chemistry, and Medicine/Physiology. Social scientists can get their consolation prize via the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. Fabio has already started some sort of countdown clock until a sociologist (presumably Granovetter) gets that one. Given our facility with adopting victim postures, it is easy to scare up some indignation as to why a sociologist has never won a Nobel, real or fake.

But the real problem with carving up the scientific space in such an ancient way is that the most important scientific cluster actually born in the twentieth century was left out. It is really cognitive science (and to some extent its older cousin, psychology) that has gotten the shaft, with people who have produced scientific contributions of a magnitude that dwarfs that of 90% of the recipients of the SRPIESIMOAN never having received any recognition.

So, for instance, there will never be a "Nobel" for Noam Chomsky. Jean Piaget could never win one either. Stanley Milgram? You might be, bar none, one of the most influential scientists of the 20th century, but sorry! You are a "psychologist" (unless you are a psychologist who studies "economics"-related stuff like DK, you are out). Allen Newell? Too bad; we gave a SRPIESIMOAN to your co-author Herbert Simon for a bunch of random stuff, but ignored the fundamental work in Artificial Intelligence and human problem solving that he did with you. So Herbert Simon will forever be referred to as "a Nobel Laureate," but you are just Allen Newell. George Miller? Your foundational work on the limits of human cognitive processing capacity might have jump-started the cognitive revolution and actually provided the main inspiration for Simon's work, but you get nada. This list could of course be expanded indefinitely (e.g. Marvin Minsky, Schank and Abelson, Eleanor Rosch, Rumelhart and McClelland, etc.).

Written by Omar

October 17, 2010 at 5:51 pm

great orgtheory news!!!

Have you ever asked: What would Omar do? Answer: Get married to Jessica Collett!


Written by fabiorojas

October 4, 2010 at 6:42 pm

learn a bunch of methods, enjoy some good weather

January is about the best time of the year to be in the Southwest.  While most of your friends are out shoveling snow, you are enjoying crisp, sunny days in the 70s and low 80s.  Not directly related to this, but AZ sociology is one of the best places in the world to be introduced to a variety of methodological approaches.  It is certainly the part of my graduate education that I now appreciate the most.

I just found out that all of this will come together in the Arizona Methods Workshops, to be held January 6th-8th of next year (2011, for those of you who are counting).  See the flyer here with cost and lodging info.  Charles Ragin, Erin Leahey, Ron Breiger and Scott Eliason will be holding court on everything from centrality measures, latent variables with multiple indicators, and the difference between coverage and consistency in a QCA analysis, to log-linear modeling strategies. Certainly enough to satisfy even the most demanding methods head!

Written by Omar

August 11, 2010 at 4:20 pm

I guess bob didn’t like the book

While prepping to teach a graduate seminar in classical theory a couple of years ago, I decided to buy this edited volume featuring a series of Durkheimian scholars dealing with the role the notion of “representations” played in Durkheim’s work.

This is one of those Routledge hardback-only deals that is only of interest to a small audience of cognoscenti, which means that you can only find it for exorbitant prices at the usual used-book sites.  The price was indeed exorbitant, but I decided to shell out the big bucks anyways (as opposed to a lot of edited volumes, this one was actually worth it).  The seller said that the book was in good condition, and the pages were generally all clean except for some writing on the first page.  I remember seeing that the book indeed had some writing, but it only consisted of a note written by a person who bought the book and obviously sent it to somebody else as a gift or something.  I didn’t pay too much attention to it at the time.  The note on the first page is shown below.

When I picked up the book again a couple of months ago, it struck me that this scribble might actually be more significant than I first realized.  I don’t want to proffer any grandiose theories here, but I submit to you that the writer (“Bill”) is William S. F. Pickering (the book’s editor, a well-known Durkheim scholar, and founder of The British Center for Durkheimian Studies) and the intended recipient (“Bob Jones”) is not a third-generation hellfire-and-brimstone evangelist, but the equally renowned Durkheim scholar Robert Alun Jones.  Since I now own the book, either Prof. Jones wasn’t very impressed (which I doubt, because the chapters are great), or he figured (correctly) that there was a good market for these kinds of hugely overpriced limited-edition books among suckers like myself (other scenarios are of course possible, maybe involving a forgetful professor lending his book out to a starving grad student).

Written by Omar

July 8, 2010 at 3:01 pm

the new york new school school

It is a truism in the social study of science that innovations in knowledge production occur mostly through informal networks.  By the time you read it in the journals, it is old news for the people at the knowledge frontier.  That’s why it is so important, for most of us who are still getting the hang of things, to learn how knowledge is really produced, or at least to learn the knack of guessing backwards from the finished product(s) to the way in which really good work is actually put together from scratch.

In American Sociology, this general rule is probably most applicable to network analysis.  The basic innovations (and innovators) of the so-called “Harvard School” centered around Harrison White were certainly part of an informal network endowed with their own set of never-published shadow texts in which the basic programmatic theses were written (for a nice discussion of this see Santoro 2008, and this post and this post).

More recently there has been a move towards a more historically nuanced and more culturally sensitive take on networks.  This intellectual movement, like the original network incursion, developed around an informal circle of young and more established scholars.  Once again Harrison White (now at Columbia) was in the middle of things (but this time he was joined by Charles Tilly).  Out of this “school” came such scholars as Mustafa Emirbayer, Ann Mische, David Gibson, Eiko Ikegami, Victoria Johnson and others.

In a fantastic chapter forthcoming in the Sage Handbook of Social Network Analysis, Ann Mische reconstructs this backroom history.  She also outlines some of the recent “turns” that the study of the relationship between culture and networks has taken.

A highly recommended piece.  It goes very well if you pair it with Pachucki and Breiger (2010).

Written by Omar

April 11, 2010 at 1:46 pm

rest in peace pete

Richard “Pete” Peterson, one of the original founders of the ASA section on Culture, died yesterday.  It is a strange feeling: I never actually knew him very well, yet it seems like somebody I was close to is now gone.  Two great culture scholars who actually did know him closely have more personal reflections here and here.  My only major interaction with him was at the Boston ASA, where I actually felt useful for once:  he needed to go to an ASA session that was way across one of those weird tubes that connect the various buildings, and I was going to that same session (Paul DiMaggio was presenting a paper).  When we got there, Pete said to Paul: “Hey Paul, Lizardo helped me get here.” Score!

Peterson (1979) wrote one of the two key theoretical articles (the other one being, of course, Swidler 1986) that helped shake the study of culture in sociology out of its Parsonian hangover.  Peterson was clear that the key was to get away from culture as values and towards the empirical study of culture as “expressive symbols.”  Although a healthy return to studying values empirically has been enacted of late, a lot of us wouldn’t be making a living without this reorientation of the field.  One of Peterson’s major contributions was in taking a good dose of Columbia-style organizational sociology (via Gouldner) and some non-negligible industrial economics to bring empirical specificity to the study of what Hirsch referred to as culture industry systems.  This came packaged as the “production” approach to the study of culture.  His famous distillation of the production approach as a “6-step” model is still one of the greatest things to teach undergrads (Peterson 1985; see also Peterson 1990 on Rock and Roll).

Today, after hundreds of articles and enough books to fill a couple of shelves, we have learned more about the production side of culture than seemed possible a mere 30 years ago (see Peterson and Anand 2004 for the latest review, and DiMaggio 2000 for a retrospective). It is important to underscore, however, that for Peterson the production perspective was a general perspective on the study of all forms of culture, including science and religion in addition to the arts (the classic ABS special issue that he edited in 1976 is still a must-read).

Yet it was in studying the consumption side (it should be noted that very few scholars this side of Pierre Bourdieu have actually been able to make theoretical contributions on both sides of the production/consumption divide) that Peterson would make what may in the end be remembered as his most enduring contribution.  First, by re-defining measures of cultural behavior, away from the loaded terms inherited from mass-culture approaches, as “patterns of cultural choice” (Peterson 1983). Then, in a series of now-classic papers in the early and mid-1990s (Peterson 1992; Peterson and Simkus 1993; Peterson and Kern 1996), by revolutionizing the study of arts participation with his proposal that the best way to describe the cultural stratification system of the United States was not as an “elite-mass” division but as one premised on the division between omnivores and univores. We are still digesting and exploring the wide-ranging implications of this Kuhnian gestalt shift, as omnivores and univores appear to show up everywhere we look (Peterson 2005).

So maybe that’s why I feel like I knew Pete so well even if I really didn’t: those citations (Peterson 1992, Peterson and Kern 1996, Peterson 2005) roll out of my keyboard and into almost every paper that I write almost automatically.  I think they will continue to do so for a long time to come.

Written by Omar

February 6, 2010 at 1:45 am

is your (institutional) theory “Parsonian”? A technical criterion

Many times, sociological analyses of institutionalization, especially those located at the “normative” and “cognitive” poles as defined by Scott (2008), and especially those associated with neo-institutionalism, are accused of putting forth an “oversocialized” conception of the actor, and sometimes even the P-word is used:  institutional theory is “Parsonian” (this is usually followed by a reference to Garfinkel and the whole “cultural dope*” thing).

For instance, Hirsch and Lounsbury perceptively note that an intriguing irony of the neo-institutional defocalization of action, conflict and strategy is

…the close similarity of…neoinstitutionalism to the very Parsonian model from which it has worked so hard to distance itself…To the extent [that] Parsons’s general theories generated serious dissents for being too committed to isomorphism, too quick to accord functionalist legitimacy to existing structures and outcomes, too focused on stability rather than change, and too slow to see conflict or change as endogenous, the new institutionalism seems eerily close to old Parsonian theory.  In Garfinkel’s terms, individuals following the new cognitive scripts are as likely to be “cultural dopes” following taken-for-granted scripts as they were when following out the internalized norms, values and socialization espoused by Parsons (1997:  415).

That neo-institutional theory is “Parsonian” would not in itself be surprising, insofar as it was indeed Parsons who (at least theoretically) pointed the way towards considering the larger environment in which organizations are located as a primary factor. But the main problem with the “Parsonian” criticism as directed at a given deployment of institutional theory is that the adjective is bandied about as a vague characterization, and the reference to either Wrong or Garfinkel ends up coming out more as an “I scored a point against the theory” move (after all, if you can corner some theory into “cultural dope” land you know you’ve got it in trouble) than as a substantive criticism.

In any case, I must confess that until now I never actually understood exactly what the whole cultural dope thing was all about anyways, and by implication never really appreciated what was so bad about it.

But now I think I (kind of) get it.  And I think that it is possible to come up with a more concrete definition of what makes a given account of norms or cognition in institutional analysis Parsonian, which will move this criticism closer to a productive analytical assessment that could be used to fix what’s wrong with a given way of conceptualizing the process or the role of institutions and institutionalization in organizations.  In fact, I think that a more concrete definition of when an account of norms in institutionalism has gone Parsonian can even help the theorist understand when such an account is actually warranted (I happen to be of the opinion “never”) and can be coherently defended.

The technical criterion for “Parsonian” comes from John Heritage’s (1984) brilliant distillation of Garfinkel’s ethnomethodology.  In discussing “the problem of reflexivity” Heritage (1984: 30) notes that the key problem that Garfinkel had vis a vis the Parsonian account of “socialization of norms” is that the actor was denied (by theoretical fiat) a particular sort of subjective orientation towards norms.  This orientation was simple self-consciousness.

That is, Parsons defined “internalization” as “unconscious introjection” which meant that if an actor was socialized into a norm, then the actor was unconscious of how that norm determined her conduct.  In essence, the Parsonian socialized actor cannot take norms as an object of reflexive consideration and strategization, for if that were the case then the norm would lose its status as “normative” and would become just another instrumental resource for action.  This is Parsons (1951: 37) as quoted in Heritage (1984: 30-31):

There is a range of possible modes of orientation in the motivational sense to a value standard. Perhaps the most important distinction is between that attitude of expediency at one pole, where conformity or non-conformity is a function of the instrumental interests of the actor, and at the other pole the ‘introjection’ or internalization of the standard so that to act in conformity with it becomes a need disposition in the actor’s own personality structure, relatively independently of any instrumentally significant consequences of conformity. The latter is to be regarded as the basic type of integration of motivation with a normative pattern-structure of values.

Thus, as Heritage (1984: 30) notes, while the actor may orient herself reflexively (and thus strategically) to all sorts of elements in her environment, Parsons “excludes as objects of the actor’s orientation the cultural values which the actor has internalized” (italics in original).  Why is Parsons adamant about excluding the reflexive mode of orientation towards norms?  Heritage notes that Parsons’s main (theoretical) concern revolved around the fact that if actors were allowed to “adopt a reflexive attitude to their normative environments,” that would open up the possibility of their also “acting manipulatively in relation to them,” which means that “actors who can manipulate their conduct in relation to a normative environment are those who can act strategically.” For theoretical reasons that need not concern us here, Parsons could not have any of that (e.g. actors acting strategically and manipulatively in relation to the norms that they are expected to conform to).

This, then, is the technical criterion of “Parsonian” for any theory.  If a theory does not allow the actor self-reflexive (and thus manipulative) subjective access to institutionalized norms, it is technically Parsonian (notice that the theory would also have to offer some story as to how a norm can become so “deeply internalized” that actors end up not being aware that they are acting in accordance with it).  More precisely, if the theory defines institutionalization as norms acquiring such a character that actors cannot be reflexively oriented towards them, then it is Parsonian.

I have argued before that at the cognitive pole this sort of account of institutionalization is incoherent.  For actors would have to be treated as not being capable of “thinking otherwise,” and no (sociological) theorist is ever in a position to make this judgment about an actor given the data at hand (but maybe in the future a neuroscientist might!).  At the normative pole, this sort of Parsonian institutionalization story is also, for similar reasons, incoherent, and any account that relies on it must suffer the analytic consequences.  For in this Parsonian account, actors cannot bring to conscious awareness the rules that they are presumably following, nor can they take a strategic attitude towards those rules.  No theorist is ever in a position to make that call in a manner that can be defended using the usual canons of evidence.

Moreover, we know that this account of normative institutionalization is empirically false.  Actors take strategic and calculative orientations towards institutionalized normative structures all of the time (Oliver 1991), and those structures are no less institutionalized for it.  In fact, it is clear to me now that Tolbert and Zucker’s (1996: 169) zeroing in on the fact that institutional theory contains “an apparent logical ambiguity…one which involves the phenomenological status of structural arrangements that are the objects of institutionalization processes,” and their recommendation that removing this ambiguity leads to a reconciliation of “rational actor” with “oversocialized” models, is actually a much more insightful criticism than I initially thought.  For by “phenomenology” Tolbert and Zucker are clearly knocking close to the reflexivity problem.  This is what I have called “Goffman’s dilemma.”  The reason why institutional theory becomes indistinguishable from “resource dependence” theory once actors are allowed to be reflexive about their normative commitments is precisely connected to Parsons’s (pseudo)problem.  Once an institutional analysis stops being Parsonian, it indeed becomes indistinguishable from resource dependence theory at the level of empirical implications (at least at the organization level, in terms of incentives for adoption).

Thus, with the abandonment of the notion that something can be strongly institutionalized at the “phenomenological” level (e.g. that actors cannot do or think otherwise, or that a norm that becomes the subject of reflexive calculation all of a sudden loses its status as normative), one part of Tolbert and Zucker’s criticism does lose force: the part that is centered on the anxiety that institutional theory would no longer be “unique” vis a vis resource dependence theory.  My response to the fear that institutional theory might become just a fancy version of (rationalist) resource dependence theory is: so what?  At least it is a more conceptually defensible theory, which does not have to rely on (implausible) Parsonian assumptions about unconscious internalization or about culture as a constraint on behavior.  And anyways, as exemplified in the research done by Tolbert and Zucker, Oliver, Suchman, Zajac/Westphal, etc., it is not that IT and RD become completely indistinguishable, but more that the range of explanatory phenomena over which each theory is in “charge” becomes a fuzzier continuum of behavioral responses rather than a sharp dichotomy between strategic rationality and unconscious normativity.

Tolbert and Zucker make a theoretical mistake by retaining a place (however delimited) for a Parsonian (via Berger and Luckmann) notion of institutionalization, one in which culture keeps its “power to determine behavior” (Tolbert and Zucker 1996: 175).  I think that this view of culture as an internalized “determinant” of behavior (and Brayden agrees) is simply indefensible from a cognitive viewpoint, so it should just be tossed.  It does institutional theory no service to be built on such shaky cognitive micro-foundations.  For any of the things that are subject to institutionalization, whether they be rules, norms or “cognitive templates,” it is clear that nothing can ever be so deeply institutionalized that actors cannot do, conceive or imagine otherwise, nor can something be so deeply institutionalized that actors are simply incapable of taking a calculating, cynical, strategic attitude towards it (in terms of “norm compliance” or “cognitive template adoption” or both).

In fact, the evidence shows that actors are always thinking, doing and imagining otherwise (and are constantly strategizing and cynically adopting), and that this imagining-otherwise capacity and this ability to cynically adopt might (under some circumstances) be the actual reason why something persists.  In fact, “invidious imagining otherwise” (e.g. conservatives obsessed with liberal debauchery, or liberals obsessed with conservative “craziness,” or the constant remembrance of really bad things such as the Holocaust or slavery) is probably a key signal that the opposite of what’s being fantasized about is institutionalized in my neck of the woods (e.g. democracy vis a vis Nazism).

I think that once we get over this conceptual hump, and also get over the Parsonian fear that allowing for the strategic orientation towards norms is equivalent to admitting to their lack of capacity to serve as behavioral referents (a fear that persists in Tolbert and Zucker and even Scott) then the ghost of Parsons will be permanently exorcised from institutional theory.

*The emergence of this “cultural dope” meme is in itself interesting. Garfinkel’s preferred term is “judgmental dope, of a cultural or psychological kind” (1967: 67) which (not to be too much of a theory dork here) carries fundamentally different implications than “cultural dope,” since the cultural dope is a special kind (the sociologist’s) of judgmental dope (1967: 68); this means that there are “psychological dopes,” “anthropological dopes” and maybe even “management dopes”.

Written by Omar

January 16, 2010 at 9:09 pm

confusing german titles: three things about the protestant ethic

In teaching Weber’s Protestant Ethic in my theory course over the last two semesters, I’ve come to the conclusion that the title of the book (essay? long article? chapter in a larger anthology?) is misleading.  I’ve posted about German titles half-jokingly before. The basic idea is that Germans tend to title their books using the template “blah, blah, blah und blah, blah, blah” (or whatever the German equivalent of “blah” is), even when their books are really about three things, not two. The best example, as I noted, is Freud’s classic The Ego and the Id, which is a classic in psychoanalysis because of the theoretical reworking of the role of the Superego in the metapsychology, precisely the entity that is left out of the binary title Das Ich und Das Es.

Weber’s Protestant Ethic is the same way and I’ve come to realize that only after trying to teach it to undergrads.  It is not about two things as suggested by the classic germanic title (Die Protestantische Ethik und der Geist des Kapitalismus), but about three things:  (1) The Protestant Ethic (whatever that is), (2) The (Spirit/Mentality*) of Capitalism (whatever that is), and then the third thing excluded from the title: (3) The Modern Economic Order.  Without this last thing Weber’s argument is incomprehensible and in fact leads to the usual exegetical mangles generated by most people who try to make sense of “The Weber thesis.”

First, let us get rid of the most egregious fallacy. There is no (one) “Weber thesis,” and much less is it (as has been proposed by the theoretically challenged economists who have recently made a game of “testing” it) that Protestantism causes “capitalism” (whatever that is).

Second, the fact that the book is about three things means that there are multiple “Weber theses,” some of which seem more historically plausible to me than others.  But the one that is most important from a pedagogical perspective (if you are in charge of teaching this), and the one that seems most easily defensible, is the following: these three things are independent, and while in a single historical case one led to the other, which then led to the third, the more probable historical possibility is that you find them hanging out all by themselves.

This is how Weber structured the book.  Thus, the first part is about Luther, Calvin and the Protestant sects (kind of boring). In this part, all that Weber wants to do is (1) provide an ideal type of what the “Protestant Ethic” is and (2) most crucially, show that the Protestant Ethic existed prior to the Mentality of Capitalism; ergo the Protestant Ethic is quite independent of the Spirit of Capitalism, and is not bound to lead deterministically to the latter. In fact, most versions of the Protestant Ethic can’t do this, because they are too impregnated with what he called “mysticism” and thus lead towards an otherworldly asceticism (these terms are defined elsewhere in the collected essays).   The causal sequence Protestant Ethic —> Spirit of Capitalism has only happened in a few cases (e.g. Calvinism).

Notice that we have already transcended the simplistic presumptions of the theoretically challenged interpretation of the so-called Weber thesis.  For if there is a “Weber thesis” here, it is the following:  the Protestant Ethic may (under some determinate historical circumstances that are fairly rare) lead to the Mentality of Capitalism (not “capitalism” without qualifications—this confuses the mentality of capitalism with its objective structural foundations—and much less “economic growth” (!!!???)).  As Parsons noted in his Heidelberg dissertation, this is not a “culture —> structure” arrow of causation but a “culture —> culture” one.  We haven’t gotten to “structure” yet.  That’s the other Weber “thesis” (or one of them).

After doing this (part II of Weber’s story), Weber does something puzzling (Chapter 3): he starts talking about Benjamin Franklin’s advice to young people.  This usually throws people for a loop (leading people to think of the “Protestant Ethic” and “The Spirit of Capitalism” as coterminous).  But it is clear what Weber wants to do by bringing up the (largely secular) Ben Franklin (and by implication the historical example of the American Colonies; let us not forget that it was a “trip to America” that finally convinced Weber to write this darn book). He wants to establish another of his theses: that the Spirit of Capitalism is historically independent of the Protestant Ethic, since it can survive even after the Protestant Ethic is gone.  This is the whole point of the Franklin example (well, actually two points, since the Franklin example is also supposed to give us the ideal-typical definition of the Mentality of (rational) Capitalism).  In fact he is clear that what is historically significant about 18th-century America is that there existed the Spirit of Capitalism without the Protestant Ethic and without the (structural) accoutrements of the Modern Economic Order (e.g. the U.S. was a largely agricultural society and whatever industry existed was “proto-capitalist” and small-scale).

Finally, the famous Chapter 5.  This is the most speculative and weakest of the chapters, because it contains the strongest and possibly most dubious of the Weber theses.  First the one that seems least dubious: in this chapter Weber wants to secure the last link in the three-step chain; he wants to show that Ben Franklin’s Mentality of Capitalism is historically transformed into the objective, institutional structures of the Modern Economic Order.  This is a culture —> structure argument, and today we would probably use the language of “institutionalization” to describe it.  The second thesis that he wants to establish (and here Weber substitutes Nietzschean pathos for actual argument) is that modern societies are an example of the fact that the Modern Economic Order can exist without either the Protestant Ethic or the Spirit of Capitalism.  Both of these cultural sources of meaning are exhausted, and the system rests on “mechanical foundations” (never have so few lines of bad argumentation generated more rivers of scholarship from both the left and the right).  So the final step is complete.  Weber’s full argument is: “Protestant Ethic —> Mentality of Capitalism —> Modern Economic Order.”

That’s why it is silly to try to “refute” the Weber thesis by pointing to the fact that, for instance, institutional elements of the Modern Economic Order existed in the (Catholic) Italian city-states. As if Weber (the History Savant) didn’t know this! (He did; see General Economic History.) In fact, he saw this as supporting his “all three things can exist independently of one another” argument.  The Italian city-states had elements of the Modern Economic Order but had neither the Protestant Ethic nor the Spirit of Capitalism.  Also, I think the fact that the chain is three steps long does leave open the possibility that you can get a Modern Economic Order without the Protestant Ethic, as long as you get a Mentality of Capitalism from somewhere.  If some other “[fill in the blank] ethic —> Mentality of Capitalism” link holds, then it is possible to get a (culturally specific version of) the Modern Economic Order.  Obviously this argument has already been made for Japan (by Bellah and Collins), for instance.

So the moral of the story: three things, not two. And all three are independent of one another, not joined in lockstep in some sort of inexorable causal chain.

*The German word “Geist” was translated by Parsons as “spirit” although the less spooky “mentality” is also acceptable and actually preferable (e.g. Geistesleben is translated as “Mental Life” in Simmel’s classic essay).

Written by Omar

January 8, 2010 at 2:49 pm

what could have been

In an article dealing with Black/White differences in arts participation (which I am re-reading for completely unrelated reasons), written with Francie Ostrower and published in Social Forces, Paul DiMaggio cites his empirical chapter in what became the Orange Bible as forthcoming in a book entitled The New Institutionalism in Organizational Theory.  So maybe organizational theory is indeed dead (which might be the reason for the change in the planned title), but organizational analysis is alive and well.

Written by Omar

December 20, 2009 at 8:36 pm

staying away from the grindstone

I’ve been asked to write a bunch of letters of recommendation this semester.  Engaging in this exercise brought back to mind an amusing debate that I had with Fabio what seems like a thousand years ago. In this exchange Fabio defended what I still think is the hopelessly naive proposition that using terms such as “hardworking” in a recommendation letter is a good thing.  I had the unenviable task of reminding my recalcitrant friend that certain adjectives are not “positive” on their own, especially if they (automatically, as cognitive scientists are increasingly finding) bring to mind their opposite (e.g. effortless talent), and especially if said opposite is precisely the ideal that is (implicitly, wordlessly) favored in the elite nooks and crannies of certain culture-production fields (such as academia).

The funny thing is that it never occurred to me until now to consider the possibility that there might actually exist a social science literature on this.  It turns out that an (apparently tiny) literature does exist and that (shocking for some, I suppose; absolutely unsurprising to me) the phenomenon even has a name (and if you read Merton, you know that if a phenomenon has been baptized then it must be important!).  It is called “grindstone words.”  In the content- and discourse-analytic research that I had a chance to carelessly glance at (here and here), grindstone words (including “hardworking”) are negatively correlated (in a sample of biochemistry and chemistry letters) with the use of “standout” keywords (which, as I noted originally, generally refer to intrinsic or effortless ability).  One of the papers (using a medical school sample) finds that grindstone words are used more often in letters describing women candidates than in those describing male candidates (which is what you would expect if the automatic association of grindstone words were negative and not positive).

So for the love of everything that is holy, and pace Fabio and Steve, don’t pepper your recommendation letter with allusions to hard work (I don’t, and I have been able to fill up three or more single-spaced pages), although you should listen to Fabio on everything else that he’s ever, ever said; ever.  Oh, except for that Obama toast thing.

Written by Omar

December 4, 2009 at 11:47 pm

on peer review and butterflies

The mechanics, dynamics, and vicissitudes of peer review have always been a hot topic around these parts (e.g. here and here).  In fact, one of our most famous posts consists of an indecent proposal to reform the peer review system. Usually, clarion calls for radical revolution in the peer review process come from a shared intuition that really good, really innovative papers can get squashed by the forces of homosocial reproduction, cognitive blinders, and plain old inefficiency that are inevitably bred by a system in which producers are their own critics.

I have always been skeptical of the “radical inefficiency” viewpoint, challenging people to show me the brilliant, path-breaking paper that was never published in a top journal due to the recalcitrance of short-sighted reviewers (with the lone Akerlof example predictably thrown in my face).  But I do recognize the validity of the question behind these complaints.

It turns out that PNAS, which has three separate submission tracks (one traditional, one “self-promoting,” and the other “friend-promoting”), provides a window through which to answer the following question: would it make sense to lower the average quality of publications if that meant that a few more highly innovative papers–papers that would not have made it through the standard peer review channels–would find a home in a high-impact journal?  The results of a recent analysis suggest that the answer may be yes (I think registration is required to take a look at the story).

The study finds that the top 10% (in terms of citations) of papers that are published as either "contributed" (where the NAS member gets her own reviews–from presumably friendly peers–and submits those along with the paper) or "communicated" (where an NAS member does the favor for a non-NAS colleague) tend to have more impact than the top 10% of "regularly submitted" papers (blind peer review, editors in charge of getting reviewers, etc.).  This is in spite of the fact that regular papers tend to have a greater overall impact (when considered as a whole).  The authors of the study interpret this as a tradeoff of overall quality for the chance to sneak in a truly revolutionary "hit."  So there might be something to those complaints after all…but of course, there are always those pesky butterflies.
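The logic of that tradeoff is easy to simulate with two hypothetical citation distributions: give the "regular" track a higher average but a thinner tail, and the "contributed" track a lower average but a fatter tail.  This is a minimal sketch; the lognormal parameters below are my own invented assumptions, not anything estimated from the PNAS data:

```python
import random
import statistics

random.seed(1)

# Toy citation distributions.  "Regular" papers: higher average, thinner
# right tail.  "Contributed" papers: lower average, fatter right tail.
# All parameters are illustrative assumptions, not PNAS estimates.
N = 5000
regular = [random.lognormvariate(2.0, 0.5) for _ in range(N)]
contributed = [random.lognormvariate(1.4, 1.0) for _ in range(N)]

def top_decile_mean(citations):
    """Mean citations of the top 10% most-cited papers in a track."""
    ranked = sorted(citations, reverse=True)
    return statistics.mean(ranked[: len(ranked) // 10])

# Regular papers do better on average as a whole...
print(statistics.mean(regular) > statistics.mean(contributed))
# ...but the contributed track's biggest hits hit harder.
print(top_decile_mean(contributed) > top_decile_mean(regular))
```

Under these (made-up) parameters the regular track wins on mean impact while the contributed track wins at the top decile, which is exactly the pattern the study reports: you pay in average quality for a shot at the revolutionary hit.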

Written by Omar

December 3, 2009 at 12:35 am

dark magic?

A recent study just published in PNAS hit the mainstream news and attendant blogosphere circuit yesterday.  In the study, as most of you must have heard by now, the authors (Eugene Caruso, Chicago Booth School of Business; Nicole L. Mead, Tilburg University Marketing; and Emily Balcetis, NYU Psychology) show that respondents who are shown a picture of a biracial political candidate tend to choose doctored photos of that candidate—some "lightened" and others "darkened"—as "most representative" of that candidate depending on how close they are to that candidate's political views. When there is a match, the candidate is seen as better represented by his lighter self, and when there is a mismatch, the candidate is seen as better represented by his darker alter-ego (of course this is an experiment, so exposure to stimuli is randomized).

The study is in the press because the authors of the study were smart enough not only to use a nondescript biracial candidate (they did that in study 1; the (poor?) guy's name is Jarome Iginla, "a 32-year old biracial male whose father was a Black Nigerian and whose mother was a White American"), but then (in studies 2 and 3) they selected the most famous biracial candidate in the history of modern democratic politics.  They also planned well, since study 2 was conducted just before the 2008 election and study 3 was conducted right after.  What they find is that the "photoshop" effect was replicated with Obama and that the extent to which participants judged that a lightened photograph better represented Obama's "true essence"* predicted both voting intentions (study 2, before the election) and retrospective reports of actual voting behavior (study 3, post-election), controlling for both standard lib-con placement and (implicit and explicit) racial attitudes (the last of which were surprisingly impotent in predicting anything in this study; sorry Larry).

This is a very cool study: the authors find substantively (not just statistically) significant effects, and this could probably be easily replicated.  The effect is real and (given the news attention) spectacular.  The issue is of course interpretation.  I believe that the main interpretation that's circulating in the news reports (and which the authors don't appear to be doing much to combat) is a (slight) misinterpretation.  Most news reports are blabbing about how political views alter or skew the perception of the photograph. In fact the study is literally called "Political Partisanship Influences Perception of Biracial Candidates' Skin Tone."  But while this title was obviously designed with the press release in mind, this is not what the study is actually about.  In fact, Marc Ambinder's summary at the Atlantic is actually a more accurate characterization of the effect: "Lighter Skin, More Like Me."

For the key issue in this study is not "partisanship" but the extent to which you rate the candidate's views as similar to your own. The confusion stems from the fact that this was explicit in study 1 (where participants were informed about the unknown biracial candidate's views and then had to judge how representative these views were of their own) but became implicit in studies 2 and 3, since there all that you needed to know was the respondent's lib-con placement (Obama's was presumed to be known, of course).

But most importantly (and I'm sure the psychologist in the study knows this very well), this was not a perception task (e.g. in the psychophysical sense: "estimate how dark or light this photograph is…").  It was a judgment task ("how well do the photographs represent…"), so partisanship is not skewing perception in the raw sense.  In judgment (like judging distance) perception is involved, but it is more complicated than partisanship making you "see" a photograph as lighter or darker. In fact, the best explanation for this effect, in my view, is Daniel Kahneman's "attribute substitution" story.  All of the elements for an attribute substitution effect are there: (1) an ill-defined, loosely bounded judgment task (like judging distance), and (2) a readily available (cognitively and affectively accessible) cue that can be used as a heuristic (the relative distance between the person and the candidate in terms of political views) in order to produce a judgment.

So I propose that the effect occurs in two steps:

1) First, we substitute the (impossible to define) "true essence" of the candidate with the more accessible "how close is this candidate to my values?" heuristic: close to me=good essence, far from me=bad essence. This simplifies the problem to one of being asked to judge which picture best represents a bad essence (if the candidate disagrees with my views) or a good essence (if the candidate agrees).

2) We then answer this (more answerable) question with a second substitution, using a simple and culturally entrenched heuristic (e.g. see Gandalf's robe): dark=bad and light=good.  Both steps occur at an implicit level, of course.  So there is no effect of partisanship on perception.  The partisanship effect is cognitive/affective, and occurs during the first substitution (when judging "distance from self").  The second effect is "cultural" and (you don't have to see the scene in Spike Lee's Malcolm X when the title character goes through a dictionary, or read Durkheim and Mauss's Primitive Classification to get this) thoroughly intuitive (e.g. Dark Magic, etc.).
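The two-step substitution can be sketched as a toy decision rule.  Everything here (the 0-to-1 distance scale, the 0.5 cutoff, the function names) is my own illustrative assumption, not anything the authors measure:

```python
def judged_essence(view_distance):
    """Step 1: substitute the unanswerable 'true essence' question with the
    accessible 'how far is this candidate from my views?' heuristic."""
    # view_distance: 0 = shares all my views, 1 = opposes all of them
    # (the 0.5 cutoff is an arbitrary illustrative threshold)
    return "good" if view_distance < 0.5 else "bad"

def representative_photo(view_distance):
    """Step 2: map the essence judgment onto the culturally entrenched
    light=good / dark=bad association."""
    essence = judged_essence(view_distance)
    return "lightened" if essence == "good" else "darkened"

print(representative_photo(0.1))  # ideologically close candidate -> "lightened"
print(representative_photo(0.9))  # ideologically distant candidate -> "darkened"
```

Note that partisanship never touches the photograph itself in this sketch; it enters only through the distance judgment in step 1, which is precisely the point of the attribute-substitution reading.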

*The actual prompt is as follows:

…participants read instructions describing that photographs can differ in how well they "represent a politician" and capture his or her "true essence." They then rated how much each of three photographs—one lightened, one unaltered, and one darkened—represented the candidate on scales ranging from 1 (not at all) to 7 (a great deal).

Written by Omar

November 25, 2009 at 1:45 pm

are you a top or a bottom?

I just encountered a really fantastic little article by Morris Holbrook, entitled "A Note on Sadomasochism in the Review Process: I Hate When That Happens."  Not only is the piece really funny, but it is chock-full of really good advice. Even though the article was written in 1986 and concerns the field of marketing, its recommendations on both the writing and reviewing sides could have been written yesterday, and they apply to all of the social sciences.  One caveat: while the advice is good, we should be so lucky as to possess the psychological maturity to actually be able to follow it, especially on the reviewing side.

Written by Omar

June 30, 2008 at 10:09 pm

tilly cite

Came across something amusing today.  While reading chapter 7 of Charles Tilly's brilliant Coercion, Capital and European States (in which he speculates about the applicability of the analytic framework developed in the book to the contemporary situation), I stumbled upon the following sentence:

The rise of the Soviet Union is the greater puzzle.  The USSR had endured terrible privations in the war (7.5 million battle deaths, perhaps 20 million in total fatalities, and 60 percent of industrial capacity lost) but had built up a formidable  state organization in the process (Rice 1988) (p. 198).

I immediately thought to myself: Could it be that Rice???  So I quickly flipped to the citations at the end of the book.  And it was!

Rice, Condoleezza.  1988.  "The Impact of World War II on Soviet State and Society."  Working Paper 69, Center for Studies of Social Change, New School for Social Research.

Has anybody out there noticed the condie cite before?

Written by Omar

June 28, 2008 at 8:41 pm

drawing the short end of the long tail

Over at Harvard Business Review, there’s a very interesting article by Anita Elberse dealing with the “long tail” phenomenon (Teppo had a post that mentioned the Chris Anderson book a while back) in markets for cultural products.  I haven’t read the Anderson book, but from Elberse’s description it sounds like one of those “everything is different in the digital age” proclamations that is high on promise but skimpy on hard data (which means that it probably does well on the lecture circuit).

In essence, the Anderson argument, as I understand it from Elberse's exposition, is that the age of "traditional marketing" for cultural goods, in which failure is the norm and you hope for the "big hit" that will pay for all of the failures (i.e. Hirsch 1972), is over.  With the decline of the brick-and-mortar retailer and the emergence of online retailing, customers now have at their disposal a virtually unlimited array of choices, which means that even the most obscure niche taste can be satisfied.  Under these conditions, cultural industries can make money by supporting more niche products rather than betting everything on the blockbuster bank.

The argument has been popularized as the “long tail” phenomenon because of Anderson’s suggestion that under these conditions the curve that plots observed popularity against the total number of products offered flattens, with a larger variety of products experiencing some level of success (and with the concentration ratio declining).   Notice that this implies that a more rational strategy for a given cultural producer is to expand the range of products offered and to transfer resources previously devoted to the promotion and marketing of a very few candidate blockbusters toward the promotion and marketing of a wider range of items with possible niche appeal.

Elberse argues to the contrary.  Based on her research on video and music sales (I found a preprint of one of the papers here), she argues that the emergence of online retailing for cultural goods does not necessarily imply the end of a "superstar" market and the emergence of a long-tailed niche regime.  She finds that while it is true that there has been a secular trend toward a flatter tail during the last 5-7 years, there has also been an increase in the number of products that experience no sales (outright failures), as well as a secular trend toward greater concentration at the fat end of the distribution.

Drawing on the work of sociologist William McPhee (of Berelson, Lazarsfeld and McPhee), she notes that the data don't seem to support the long-tail argument because that argument makes the wrong assumptions about the way that the market is segmented and about the actual dynamics of popularity for cultural goods.  Instead, as argued by McPhee, it seems that the cultural goods market is partitioned along the following binary: "heavy users" and "light users."  Popular products do well because they tend to attract a disproportionate share of light users.  They do really well because they also tend to attract a disproportionate share of heavy users.  Obscure products, on the other hand, are only likely to attract heavy users, not light users.

The data (taken from Quickflix and Soundscan) seem to support this hypothesis.  Persons who are toward the bottom end of purchases per week tend to disproportionately select titles from the top popularity decile. Furthermore, persons who select titles from the lowest popularity decile tend to be disproportionately likely to belong to the top group of purchasers.  However (and I think this is the key to the long-tail argument's empirical failure), while smaller in absolute size than that of light users, the bulk of heavy users' downloads is still composed of products from the top popularity decile.

Just in case I just confused you, here are Elberse's findings in (somewhat) plain English: non-discerning, occasional culture consumers mainly consume popular culture; discerning, frequent culture consumers consume both obscure and popular culture.  For discerning, frequent culture consumers there is no trade-off between niche and popular culture.
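A back-of-the-envelope simulation shows how this segmentation produces both facts at once: the tail is sustained almost entirely by heavy users, and yet heavy users' own baskets remain dominated by the top decile.  All the numbers below (segment sizes, purchase rates, decile shares) are invented for illustration, not taken from Elberse's data:

```python
import random

random.seed(42)

# Toy version of McPhee's heavy-user / light-user segmentation.
# All parameters are illustrative assumptions, not Elberse's estimates.
N_LIGHT, N_HEAVY = 900, 100          # light users far outnumber heavy users
BUYS_LIGHT, BUYS_HEAVY = 2, 40       # but heavy users buy far more each
P_TOP_LIGHT, P_TOP_HEAVY = 0.95, 0.60  # share of buys from the top decile

def purchases(n_users, n_buys, p_top):
    """Return (top-decile buys, tail buys) for one user segment."""
    top = tail = 0
    for _ in range(n_users * n_buys):
        if random.random() < p_top:
            top += 1
        else:
            tail += 1
    return top, tail

light_top, light_tail = purchases(N_LIGHT, BUYS_LIGHT, P_TOP_LIGHT)
heavy_top, heavy_tail = purchases(N_HEAVY, BUYS_HEAVY, P_TOP_HEAVY)

# Tail sales come overwhelmingly from heavy users...
print(heavy_tail / (heavy_tail + light_tail))
# ...yet heavy users' own baskets are still mostly top-decile products.
print(heavy_top / (heavy_top + heavy_tail))
```

With these made-up numbers the tail survives almost entirely on heavy users, while the majority of even heavy users' purchases still come from the top decile: niche products get an audience, but the blockbusters keep theirs.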

Elberse argues that there’s another reason why a “niche” advantage fails to materialize:  both heavy and light users are tougher (in terms of their ratings) on obscure goods than they are on popular goods.  This does not appear to be an expert-being-tougher-on-niche-goods effect, since heavy consumers go as easy on popular goods as do light consumers.

Given this pattern of audience segmentation, it is clear that a "wired" culture distribution regime will not result in any of the consequences that the long-tail gurus predict, but will instead preserve the superstar advantages of blockbusters in the digital distribution era, while producing a slight advantage for some niche items.  This is exactly what the data show.  Furthermore, Hirsch's dictum that failure is the norm in culture-industry systems will probably become even more of an "iron law" than ever before (even after controlling for the increasing number of products offered).

As a side-note, it is also clear that Anderson got carried away with the wrong audience segmentation model.  He imagined a world increasingly composed of niche cognoscenti, literati with very delimited and select tastes who would benefit from a diversified cultural availability regime.  In Peterson's (1992) words, he imagined the expert culture-consuming audience as a "mosaic" of univores, each with delimited tastes for very obscure stuff (and possibly with a distaste for the stuff that was popular and that the "herd" tended to follow). It is clear from what we know from sociological research on audience segmentation that Mr. Anderson was off to the wrong start.  Heavy culture consumers are not niche "highbrows" but "omnivores"; the fact that they do have strong, expert tastes for niche cultures does not imply that they stay away from popular culture (tautologically defined as "that culture which is popular").  There are a bunch of reasons for this, but one powerful one is hitting the theaters today in the form of the latest Pixar flick.

Written by Omar

June 27, 2008 at 4:16 pm