Archive for the ‘just theory’ Category
I am now old enough that I have seen three traditions in American sociology die. In describing them, I am not necessarily saying that I don’t like them. In fact, I am a published practitioner in one of them. Rather, these traditions have not been able to reproduce themselves at the core of the profession. They may be popular in other fields, but not in soc:
- Functionalism (including the neo-functionalist revival)
- Rational choice
- Postmodernism
Each promised a lot and had a moment in American sociology. Munch, Alexander, and others led the charge on neo-functionalism in the 1990s, and Luhmann has a following. Rational choice still has notable adherents, like Doug Heckathorn at Cornell or Richard Breen at Yale. And the AJS and ASR had their share of articles discussing postmodernism (here, for example). But still, it’s hard to say that these traditions aren’t dormant in American sociology. Few students, few placements.
The question is whether there is any commonality. Is American sociology resistant to certain types of theory? If these three cases indicate a deeper process, then I’d make the following guesses:
- “Strong assumptions” – American sociologists don’t like models with what appear to be overly strong assumptions. Rational choice models have smart actors; postmodernism has overly complex actors; and the various functionalisms had actors that were hypersensitive to social norms and communities that were overly structured.
- “High tech” – With the exception of applied statistics, American sociologists don’t like fancy things. The AGIL system in functionalism; math for RCT; European philosophy/social theory for post-modernism.
So the ideal theory would be one that rests on weak assumptions and requires little machinery. Many of the dominant theories these days seem to fit this: institutionalism/field theory; intersectionality theory; theories of racial privilege; etc. Network theory rests on simple, but weak, assumptions and uses only stats.
It is unclear to me if this is a good or bad state of affairs. However, if you think it’s bad, then you have a real problem. The most obvious way to change it is to recruit different kinds of people into the profession who like demanding theory or high tech tools. That seems like a tall order given our undergraduate audience, which is the major talent pool for the profession.
Recently, Elizabeth and Brayden have drawn attention to the institutional position of organizational sociology. Three pertinent facts:
- A lot of organizational sociologists have moved to b-schools.
- The major orgtheory/b-school journal, ASQ, rarely publishes people in sociology programs.*
- The dominance of institutional theory
When I look at these trends, I see two things. One, orgtheory has market value. A low budget discipline like sociology simply won’t retain people. Two, I think there is a “thinning” that is occurring in orgtheory. While orgtheory remains vibrant, it is now, in sociology, a field that has jettisoned much of its heritage. Sociologists have gravitated toward big structural theories, like institutionalism, networks, and ecology (the big three, as Heather might say). But what happened to the rest? Why don’t sociologists care about Carnegie school theory? Why have people stopped working on Blau style middle level theory? Human relations?
The answer is not clear to me. One culprit might be the journal system. To succeed in sociology at the higher levels, you need fast publication in two or three journals, and it’s probably easier to just work on well-established variables/processes (diffusion/density/networks). I certainly did that, and I freely admit that I’d be unemployed if I had tried to hatch new variables. Second, there might simply be a new division of labor in academia. The “sociology of organizations” now simply means structural analysis. And “b-school orgtheory” means other features of orgs, like performance, that sociologists care less about.
* That didn’t have to be the case, Don.
Good opens the article by suggesting that Go is inherently superior to all other strategy games, an opinion shared by pretty much every Go player I’ve met. “There is chess in the western world, but Go is incomparably more subtle and intellectual,” says South Korean Lee Sedol, perhaps the greatest living Go player and one of a handful who make over seven figures a year in prize money. Subtlety, of course, is subjective. But the fact is that of all the world’s deterministic perfect information games — tic-tac-toe, chess, checkers, Othello, xiangqi, shogi — Go is the only one in which computers don’t stand a chance against humans.
This is not for lack of trying on the part of programmers, who have worked on Go alongside chess for the last fifty years, with substantially less success. The first chess programs were written in the early fifties, one by Turing himself. By the 1970s, they were quite good. But as late as 1962, despite the game’s popularity among programmers, only two people had succeeded at publishing Go programs, neither of which was implemented or tested against humans.
Over at Scatterplot, Andy Perrin has a nice post pointing to a recent talk by Rodney Benson on actor-network theory and what Benson calls “the new descriptivism” in political communications. Benson argues that ANT is taking people away from institutional/field-theoretic causal explanation of what’s going on in the world and toward interesting but ultimately meaningless description. He also critiques ANT’s assumption that the world is largely unsettled, with temporary stability as the development that must be explained.
At the end of the talk, Benson points to a couple of ways that institutional/field theory and ANT might “play nicely” together. ANT might be useful for analyzing the less-structured spaces between fields. And it helps draw attention toward the role of technologies and the material world in shaping social life. Benson seems less convinced that it makes sense to talk about nonhumans as having agency; I like Edwin Sayes’ argument for at least a modest version of this claim.
I toyed with the possibility of reconciling institutionalism and ANT in an article on the creation of the Bayh-Dole Act a few years back. But really, the ontological assumptions of ANT just don’t line up with an institutionalist approach to causality. Institutionalism starts with fairly tidy individual and collective actors — people, organizations, professional groups. Even messy social movements are treated as well-enough-defined to have effects on laws or corporate behavior. The whole point of ANT is to destabilize such analyses.
That said, I think institutionalists can fruitfully borrow from ANT in ways that Latour would not approve of, just as they have used Bourdieu productively without adopting his whole apparatus. In particular, the insights of ANT can get us at least two things:
1) ANT not only increases our attention to the role of technologies in shaping organizational and field-level outcomes; it also makes us pay attention to variation in the stability of those technologies. It is simply not possible to fully account for the mortgage crisis, for example, without understanding what securitization is; how tranching restructured, redistributed, and sometimes hid risk; how it was stabilized more or less durably in particular times and places; and so on.
You can’t just treat “securitization” as a unitary explanatory factor. You need to think about the specific configuration of rules, organizational practices, technologies, evaluation cultures and so on that hold “securitization” together more or less stably in a specific time and place. Sure, technologies are sometimes stable enough to treat as unified and causal—for example, a widely used indicator like GDP, or a standardized technology like a new drug. But thinking about this as a question of degree improves explanatory capacity.
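Since the tranching point is doing real explanatory work here, a toy illustration may help. The sketch below is my own, not anything from the securitization literature, and the tranche sizes are made-up numbers. It shows the basic waterfall logic by which tranching redistributes risk: pool losses hit the junior tranche first, and the senior tranche is touched only after the junior tranche is exhausted.

```python
# Minimal loss-waterfall sketch (illustrative numbers only).
# Losses on the loan pool are absorbed by the junior tranche first;
# the senior tranche takes losses only once the junior is wiped out.

def tranche_losses(pool_loss, junior_size=30.0, senior_size=70.0):
    junior_hit = min(pool_loss, junior_size)                          # junior absorbs first
    senior_hit = min(max(pool_loss - junior_size, 0.0), senior_size)  # overflow hits senior
    return senior_hit, junior_hit

# A 20-unit loss is fully absorbed by the junior tranche...
print(tranche_losses(20.0))  # (0.0, 20.0)
# ...while a 40-unit loss wipes out the junior tranche and reaches the senior one.
print(tranche_losses(40.0))  # (10.0, 30.0)
```

Move the tranche boundary and you change who bears the loss, which is one small way of seeing why “securitization” has to be analyzed as a specific, more-or-less stable configuration rather than as a unitary factor.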
An example from my own current work: VSL, the value of a statistical life. Calculations of VSL are critical to the cost-benefit analyses that justify regulatory decisions. They inform questions of environmental justice, of choice of medical treatment, of worker safety guidelines. All sorts of political assumptions — for example, that the lives of people in poor countries are worth less than those of people in rich ones — are baked into them. There is no uniform federal standard for calculating VSL — it varies widely across agencies. ANT sensitizes us not only to the importance of such technologies, but to their semi-stable nature—reasonably persistent within a single agency, but evolving over time and differing across agencies.
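For readers who have never seen a VSL number produced, a common textbook approach is the wage-risk method: divide a compensating wage premium by the change in fatality risk it compensates. The figures below are standard illustrative ones of my choosing, not any agency’s actual parameters, and real agency procedures are more involved and, as noted, vary.

```python
# Textbook wage-risk VSL calculation (illustrative numbers only):
# if workers accept a $1,000 annual wage premium for an extra
# 1-in-10,000 annual risk of death, the implied VSL is $10 million.

def vsl_from_wage_risk(wage_premium, risk_delta):
    return wage_premium / risk_delta

print(round(vsl_from_wage_risk(1_000.0, 1 / 10_000)))  # prints 10000000
```

Different agencies plug in different premiums and risk estimates, which is exactly the cross-agency variation described above.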
2) Second, ANT can help institutionalists deal better with evolving actors and partial institutionalization. For example, I’m interested in how economists became more important to U.S. policymaking over a few decades. The problem is that while you can define “economist” as “person with a PhD in economics,” what it means to be an economist changes over time, and differs across subfields, and is fuzzy around the borders.
I do think it’s meaningful to talk about “economists” becoming more influential, particularly because the production of PhDs happens in a fairly stable set of organizational locations. But you can’t just treat growth theorists of the 1960s and cost-benefit analysts from the 1980s and the people creating the FCC spectrum auctions in the 1990s as a unitary actor; you need ways to handle variety and evolution without losing sight of the larger category. And you need to understand not only how people called “economists” enter government, but also how people with other kinds of training start to reason a little more like economists.
Drawing from ANT helps me think about how economists and their intellectual tools gain a more-or-less durable position in policymaking: by establishing institutional positions for themselves, by circulating a style of reasoning (especially through law and public policy schools), and by establishing policy devices (like VSL). (See also my recent SER piece with Dan Hirschman.) Once these things have been accomplished, then economics is able to have effects on policy (that’s the second half of the book). While the language I use still sounds pretty institutionalist—although I find myself using the term “stabilized” more than I used to—it is definitely informed by ANT’s attention to the work it takes to make social arrangements last. Thus I end up with a very different story from, for example, Fligstein & McAdam’s about how skilled actors impose a new conception of a field — although new conceptions are indeed imposed.
I don’t have a lot of interest in fully adopting ANT as a methodology, and I don’t think the social always needs to be reassembled. The ANT insights also lend themselves better to qualitative, historical explanation than to quantitative hypothesis testing. But all in all, although I remain an institutionalist, I think my work is better for its engagement with ANT.
I am not well read in Benjamin’s oeuvre, but I’ve always been semi-impressed. Moments of brilliance, but I couldn’t quite wrap my head around the big contribution. Well, Walter Laqueur makes the argument that Benjamin is an over-rated thing. From Mosaic:
Yes, his ideas (as in his best-known essay, “The Work of Art in the Age of Mechanical Reproduction”) were often original, and there were flashes of genius. But in what precisely did his genius consist? Had he produced a new philosophy of history, proposed a fundamentally new approach to our understanding of 19th-century European culture, his main area of concern, or revolutionized our thinking about modernity? The answers I received weren’t persuasive then, and the answers provided in the vast secondary literature of the last decades have done no better.
Wherein lay its originality? The figure of the flâneur had been “discovered” earlier in the novels of Honoré de Balzac and others, and the main themes of Baudelaire’s poems had been studied even by German academics, some of whom had offered analyses not dissimilar to Benjamin’s. Were the Parisian arcades, with or without Baudelaire, the right starting point for a new understanding of modernity? Even the most detailed Benjamin biography, by the distinguished French professor Jean Michel Palmier, reaches no satisfying conclusion on this point. (Palmier’s mammoth book, almost 1,400 pages long, remains, like Benjamin’s work, unfinished—which is a comment in itself.)
Defenders – show me the Benjamins!
There is an intrinsic interest in the philosophy of social science. Ideally, we all want well motivated and logical explanations for how we should do our professional work. However, there is usually one question that you don’t hear much about – why does scholarship seem to progress in the absence of a well motivated philosophy? In other words, doctors probably have a bad philosophy of science, but I don’t see philosophers refusing the services of their physicians.
I don’t have an answer to this; I’ve only started to think about the issue. But I raise it in the shadow of our debate over critical realism and the earlier debate over post-modernism. The claim of some supporters is that social scientists really need a new theory of social science (e.g., critical realism) because social scientists rely on a flawed positivist theory. It may be true that positivist social science is wrong and that we should adopt a newer theory. But this view does not take into account two issues: (a) The cost of adopting a new theory is steep. If Kieran can’t quite get critical realism after reading it for 18 months, then I sure won’t get it. (b) A new social science that proceeds along new rules of engagement may not generate enough differences to make it worthwhile. For example, now that Phil Gorski has adopted critical realism, how would his book, The Disciplinary Revolution, be written any differently? Not clear to me, since a lot of what Gorski does in that book is apply a specific theoretical lens in reading various developments in state formation. He might sprinkle a discussion of “multiple levels of causation” at the top, but then he’d probably proceed to make similar arguments with similar data.
The ultimate puzzle, though, is in areas that seem to make progress even when practitioners work with a bad philosophy. This suggests that the demand for better foundations simply isn’t important for generating knowledge. Another datum is that advances in science, or social science, rarely require entirely new foundations. Take sociology. I don’t need to adopt anything new to, say, appreciate Swidler’s attack on functionalism. And I seem to be able to understand most feminist sociology by using meat and potatoes positivism. The bottom line is that, at the very least, there needs to be an explanation for the ubiquitous disjuncture between foundations and practice.
Around 2004 or so, I felt that we were “done” with institutionalism as it was developed from Stinchcombe (1965) to Fligstein (2000). My view was that once you focused on the organizational environment and produced a zillion diffusion studies, there were only so many extra topics to deal with. For example, you could propose a strong coupling argument (DiMaggio and Powell 1983) or weak coupling argument (Weick 1976 or Meyer and Rowan 1977). Then maybe you could do innovation within a field (DiMaggio’s institutional entrepreneur argument) or how people exploited fields (Fligstein’s social skill argument). Finally, starting with Clemens (1999), and then Armstrong (2004), then Bartley (2006), and then the work of the orgs/movement crowd (including Brayden and myself) you got into contention. So what else was left?
Well, it turns out there are two major moves that force you in a new direction. One might be called the “aspects of fields” program – which means that you study some element of an organizational field in depth and really analyze the living daylights out of it. The Ocasio/Thornton/Lounsbury work on institutional logics is one example. Another is the new work by Suddaby and Lawrence on institutional work, which includes some of my work on power building in organizations.
The other program is the Fligstein/McAdam Theory of Fields, which essentially marries “social skill”-era Fligstein with the incumbent-challenger dynamics that were highlighted in the Dynamics of Contention book. In other words, you rub the Orange Bible and DoC together and hope the child is attractive.
The purpose of this post is not to evaluate these programs. That’ll come later, and there will be some special summer action concerning ToF. But here, I am just mapping out institutional theory as it stands these days. The “aspects of institutionalism” program is clearly a deepening and refinement of the theory that emerged in early post-Parsons sociology. On Facebook, I asserted that ToF was our “new new institutionalism,” and there was pushback. I think my position is that, as far as genealogy and content are concerned, ToF is a merger of two separate ideas.
As far as the discipline is concerned, management likes the aspects program because it is relatively easy to stick to firm-level dynamics. Studies of executives, or regulations, or what have you can be pegged to “institutional X” theory. In contrast, sociologists like conflict and movements a bit more, so ToF will prove popular. If nothing else, it provides a simple and intuitive vocabulary for the types of social processes that contemporary macro-sociologists like to talk about.
What the heck, let’s do anarchism week. Let’s start with the following conversation I had at the end of my social theory class a few semesters ago. A student approached me and asked why I didn’t teach anarchism in the course. There are a few good reasons, though none so strong that you couldn’t include it if you really wanted to.
First, the goal of my social theory class is to have people read original texts written by seminal social thinkers. This doubles as a sort of Western civ (since IU doesn’t require it) and people need to understand the core arguments of sociology. So we hit the “classics,” the interactionists, feminists, French theory,* and a little evolutionary psych. The course also needs to prepare a handful of students who will continue in soc, poli sci, or other fields at the graduate level.
Second, I teach things that really drive discussion in contemporary sociology, which means that many topics, including those dear to my heart, must get cut. Since there are very few anarchist sociologists, and little research that uses an anarchist perspective, it simply isn’t a priority.
But that doesn’t mean that anarchism isn’t a real social theory or that it should be actively excluded. On the contrary, there’s now a body of anarchist-themed social writing, mainly in fields other than sociology. For example, anthropologist David Graeber’s writings should count. James Scott, the political scientist, has written about statelessness at length. There are the classic anarchists, like Proudhon, and feminist anarchists like Emma Goldman. You have right-wing anarchists like economist Murray Rothbard or philosopher Michael Huemer. Then you have empirical studies of statelessness like Pete Leeson’s pirate book.
In other words, you have more than enough material and it’s high quality material. But it’s definitely not central to sociology (yet?), so you don’t feel guilty cutting it. But the social theory course isn’t set in stone. I am already tiring of French theory and other topics, so it may be time to rotate some new material in.
* Remember, I don’t teach postmodernism anymore.
From my guy in Ann Arbor:
2014 Junior Theorists Symposium
15 August 2014
SUBMISSION DEADLINE: 15 FEBRUARY 2014
We invite submissions for extended abstracts for the 8th Junior Theorists Symposium (JTS), to be held in Berkeley, CA on August 15th, 2014, the day before the annual meeting of the American Sociological Association (ASA). The JTS is a one-day conference featuring the work of up-and-coming theorists, sponsored in part by the Theory Section of the ASA. Since 2005, the conference has brought together early career-stage sociologists who engage in theoretical work.
We are pleased to announce that Marion Fourcade (University of California – Berkeley), Saskia Sassen (Columbia University), and George Steinmetz (University of Michigan) will serve as discussants for this year’s symposium.
In addition, we are pleased to announce an after-panel on “The Boundaries of Theory” featuring Stefan Bargheer (UCLA), Claudio Benzecry (University of Connecticut), Margaret Frye (Harvard University), Julian Go (Boston University), and Rhacel Parreñas (USC). The panel will examine such questions as what comprises sociological theory, and what differentiates “empirical” from “theoretical” work.
We invite all ABD graduate students, postdocs, and assistant professors who received their PhDs from 2010 onwards to submit a three-page précis (800-1000 words). The précis should include the key theoretical contribution of the paper and a general outline of the argument. Be sure also to include (i) a paper title, (ii) author’s name, title and contact information, and (iii) three or more descriptive keywords. As in previous years, in order to encourage a wide range of submissions, we do not have a pre-specified theme for the conference. Instead, papers will be grouped into sessions based on emergent themes.
Please send submissions to the organizers, Daniel Hirschman (University of Michigan) and Jordanna Matlon (Institute for Advanced Study in Toulouse), at firstname.lastname@example.org with the phrase “JTS submission” in the subject line. The deadline is February 15, 2014. We will extend up to 12 invitations to present by March 15. Please plan to share a full paper by July 21, 2014.
One thing that I found dissatisfying about our earlier “discussion” of CR is that it ultimately left the task of actually getting clear on what CR “is” unfinished (or bungled). Chris tried to provide a “bulletpoint” summary in one of the out-of-control comment threads, but his quick attempt at exposition mixed together two things that I think should be kept separate (what I call high-level principles and substantively important derivations from those principles). This post tries to follow Chris Smith’s sound advice that “We’ll all do better by focusing on important matters of intellectual substance, and put the others to rest.”
The task of getting clear on the nature of CR is particularly relevant for people who haven’t already formed strong opinions on CR and who are just curious about what it is. My argument here is that neither proponents nor critics do a good job of just telling people what CR is in its most basic form. The reason for this has to do precisely with the complex nature of CR as an ontology, an epistemology, a theory of science, and (most importantly) a set of interrelated theses about the natural, social, cultural, and mental worlds that are derived from applying the high-level philosophical commitments to concrete problems. My argument is that CR will continue to draw incoherent reactions and counter-reactions (from both proponents and opponents) unless these aspects are disaggregated and we get clear on what exactly we are disagreeing about. One of these incoherent reactions is the claim that CR is both a “giant” package of meta-theoretical commitments and a fairly “minimalist” set of principles, the reasonable nature of which would only be denied by the certifiably insane.
In particular, it is important to separate the high-level “core” commitments from all the substantive derivations, because it is possible to accept the core commitments and disagree with the derivations. In essence, a lot of the stuff (actually most of the stuff) that gets called “CR” consists of a particular theorist’s application of the high-level principles to a given problem. For instance, one can apply (as did Bhaskar in the “original” contributions) the high-level ontology to derive a (general) theory of science. One can (as Bhaskar also did) use the general theory of science to derive a local theory (both descriptive and normative) of social science (via the undemonstrable assumption that social science is just like other sciences). And the same can be done for pretty much any other topic: I can use CR to derive a general theory of social structures, or human action, or culture, or the person, or whatever. Once again, the cautionary point above stands: I can vehemently disagree with all the special theories while still agreeing with the high-level CR principles. In other words, I can disagree with the conclusion while agreeing with the high-level premises, because I believe that you can’t get where you want to go from where you start. This may happen because, let’s say, I can see the CR theorist engaging in all sorts of reasoning fallacies (begging the question, arguing against straw men, helping him- or herself to undemonstrable but substantively important sub-theses, and so on) to get from the high-level principles to the particular theory of (fill in the blank: the person, social structure, social mechanisms, human action, culture, and so on).
This is also, I believe, the best way to separate the “controversial” from the “uncontroversial” aspects of CR, and to make sense of why CR appears to be both trivial and controversial at the same time. In my view, the high-level principles are absolutely uncontroversial. It is the deployment of these principles to derive substantively meaningful special theories with strong and substantively important implications that results in controversial (because not necessarily coherent or valid at the level of reasoning) theories.
The High Level Basics.-
One thing that is seldom noted by either proponents or critics of CR is that the fundamental high level theses are actually pretty simple and in fact fairly uncontroversial. These only become “controversial” when counterposed to nutty epistemologies or theories of science that nobody holds or really believes (e.g. so-called “positivism”, radical social constructionism, or whatever). I argued against this way of introducing CR precisely because it confounds the level at which CR actually becomes controversial.
So what are these theses? As repeatedly pointed to by both Phil and Chris in the ridiculously long comment thread, and as ritualistically introduced by most CR writers in social theory (e.g. Dave Elder-Vass), these are simply a non-reductionist “realism” coupled to a non-reductionist, neo-Aristotelian ontology.
The non-reductionist realism part is usually the one that is much ballyhooed by proponents of CR, but in my view, this is actually the least interesting (and least distinctive) part of CR in relation to other options. In fact, if this was all that CR offered, there would be no reason to consider it any further. So the famous empirical/actual/real (EAR) triad is not really a particularly meaningful signature of CR. The only interesting high-level point that CR makes at this level is the “thou shall not reduce the real to the actual, or worse, to the empirical.” Essentially: the world throws surprises at you because it is not reducible to what you know, and is not reducible to what happens (or has happened or will happen). I don’t think that this is particularly interesting because no reasonable person will disagree with these premises. Yes, there are people that seem to say something different, but once you sit them down for 10 minutes and explain things to them, they would agree that the real is not reducible to our conceptions or our experiences of reality. Even the seemingly more controversial point (that reality is not reducible to the actual) is actually (pun intended) not that controversial. In this sense CR is just a vanilla form of realism.
When we consider the CR conception of ontology, things get more interesting. Most CR people propose an essentially neo-Aristotelian conception of the structure of the world as composed of entities endowed with inherent causal powers. This conception links to the EAR distinction in the following sense: the real causal powers of an entity endow it with a dispositional set of tendencies or propensities to generate actual events in the world; these actual events may or may not be empirically observable. The causal powers of an entity are real in the sense that these powers and propensities exist even if they are never actualized or observed by anyone. To use the standard trite example, the causal power to break a window is a dispositional property of a rock; this property is real in the sense that it is there whether or not it is ever actualized (an actual window-breaking-with-a-rock event happens in the world), and whether or not anybody ever observes such an event.
Reality, then, is just such a collection of entities endowed with causal powers that come from their inherent nature. The nature of an entity is not an unanalyzable monad but is itself the (“emergent,” in the sense outlined below) result of the powers and dispositions of the lower-level constituents of that entity, suitably organized in the right configuration. What earlier conceptions of science called “laws of nature” happen to be simply observed events generated by the actualization of a mechanism, whereby a “mechanism” is simply a regular, coherently organized collection of entities endowed with inherent causal powers acting upon one another in a predictable fashion. Scientists isolate the mechanism when they are able to manipulate the organization of the entities in question so that the event is actualized with predictable regularity; these events are then linked to an observational system to generate the so-called phenomenological or empirical regularities (“the laws”) that formed the core of traditional (Hempelian) conceptions of science.
The laws thus result from the regular operation of “nomological machines” (in Cartwright’s sense). The CR point is thus that the phenomenological “laws” are secondary, because they are just the effect produced by hooking together a real mechanism to produce (potentially) observable events in a regular way. So the CR people would say that Hacking’s aphorism “if you can spray them they are real” is made sense of by the fact that the unobservable stuff you can spray is an entity endowed with causal powers capable of generating observable phenomena when isolated as part of an actualized mechanism. The observability thing is secondary, because the powers are there whether you can observe the entity or not. That’s the CR “theory of science.”
The key to the CR ontology is that the nature of entities is understood using a “layered” ontological picture in which entities are understood as essentially wholes made of parts organized according to a given configuration (a system of relations). These “parts” are themselves other entities which may be decomposable into further parts (lower level entities organized in a system of relations and so on). Causal powers emerge at different levels and are not reducible to the causal powers of some “fundamental” level. Thus, CR proposes a non-reductionist, “layered” ontology, with emergent causal powers at each level.
This emergence is “ontological” and not “epistemic” in the sense that the causal powers at each level are “real” in the standard CR sense: they are not reducible to their actual manifestations nor are these “emergent” properties simply an epistemic gloss that we throw into the world because of our cognitive limitations. Thus, CR is an ontological democracy which retains the part-whole mereology of standard realist accounts, but rejects the reductionist implication that the structure of the world bottoms out at some fundamental level of reality where the really real causal powers can be found (and with higher level causal powers simply being a derivative shadow of the fundamental ones).
Now you can see things getting interesting, because we have a stronger set of position takings. Note that from our initial vanilla realism and our seemingly innocuous EAR distinction, along with a meatier conceptualization of entities as organized wholes endowed with powers and propensities, we are now living in a world composed of a panoply of real entities at different levels of analysis, endowed with (non-reducible) real causal powers at each level. The key proposition that is beginning to generate premises we can actually have arguments about is, of course, the premise of ontological emergence. I argue that this premise is not a CR requirement. For instance, why can’t I be a reductionist critical realist (RCR)? Essentially, RCR accepts the EAR distinction but privileges a fundamental level; this fundamental level may ultimately figure in our theoretical conceptions of reality, but it is the bedrock upon which all actual and empirical events stand. In other words, the only true “mechanisms” that I accept are the ones composed of entities at the most fundamental level of reality, which may or may not ever be uncovered. I don’t seriously intend to defend this position; I bring it up only to show that CR hooks together a lot of things that are logically independent (an emergentist ontology, an Aristotelian conception of entities, a part-whole mereology, and a “causal powers” view of causation, among others).
In any case, my argument is that most of the substantively interesting CR theses do not emerge (pun intended) from the Bhaskarian theory of science, or the account of causation, or the EAR distinction. They emerge from hooking together (ontological) emergentism, an Aristotelian conception of entities, and dispositional causal powers. For emergentism is what generates the (controversial) explosion of real entities in CR writing. Not only that, emergentism is the only calling card that CR writers have to provide what Dave Elder-Vass has called a “regional ontology” for the social sciences, one that does not resolve into just repeating the boring EAR distinction or the (increasingly uncontroversial) “theory of science” that Bhaskar developed in A Realist Theory of Science and The Possibility of Naturalism.
How to be a (controversial) Critical Realist in two easy steps.-
So now that we have that covered, it is easy to show how to produce a “controversial” CR argument. First, pick a mereology. Meaning: pick some entities to serve as the parts, preferably entities that themselves do not have a controversial status (most people would agree that the entities exist, form coherent wholes, have natures, and so on), and pick a more controversial coherent whole that these parts could conceivably compose. Then argue that the parts do indeed form such a whole via the ontological emergence postulate. Note that the postulate allows you to fudge on this point, because you do not actually have to specify the mechanism via which this ontological emergence relation is actualized (you can argue that that is the job of empirical science and so on). Then, hooking together the CR notion of causal powers, the EAR distinction, and the postulate of the ontological democracy of all entities, argue that this whole is now a super-addition to the usual vanilla reality. That is, the new emergent entity is real in the same sense that other things (apples, rocks, leptons, cells) are real. It has an inherent nature, a set of dispositions to generate actual events, and, most importantly, causal powers. The powers of this new emergent entity may be manifested at its own level (by affecting same-level entities), or they may be exhibited in the constraining power of that entity upon the lower-level constituent entities (the postulate of “downward causation”). For instance (to mention one thing that could actually be of interest to readers of this blog), Dave Elder-Vass has provided an account of the reality of “organizations” (and the non-reducibility of organizational action to individual action) using just this CR recipe.
Now we have the materials to make some people (justifiably) discomfited about a substantive CR claim (or at least motivated to write a critical paper). For if you look at most of the contributions of CR to various issues, they resolve themselves into just the steps that I outlined above. So the CR “theory” of social structure is precisely what you think. Social structure is composed of individuals, organized by a set of relations that form a coherent (configured) whole. This whole (social structure) is now a real entity endowed with its own causal powers, which now (may) exert “downward causation” on the individuals that constitute it. These causal powers are not reducible to those of the individuals that constitute it. This is how CR cashes in what John Levi Martin has referred to as the “substantive hunch” that animates all sociological research. “The social” emerges from the powers and activities of individuals, but it never ultimately resolves itself into an aggregation of those powers and activities. Note that CR is opposed to any form of ontological reduction, whether it is “downwards” or “upwards.” Thus attempts to reduce social structure to the mental or interactional level are “downward conflationist,” and attempts to reduce individuals to social structure (or language, or what have you) are “upward conflationist.” Thus, the “first” Archer trilogy can be read in this way: first, on the non-reducibility of (and ontological independence between) social structure and the individual or individual activity; then “culture” in relation to the individual or individual interaction; and later (in reverse) personal agency in relation to either social structure or culture.
Essentially, the stratified ontology postulate must be respected. Any attempt to simplify the ontological picture is rejected as so much covert (or overt) reductionism or “conflation.” Note that “conflation” is not technically a formal error of reasoning (as is begging the question) but simply an attempt by a theorist to simplify the ontological picture by abandoning the ontological democracy or ontological emergence postulates. A lot of the time, CR theorists (like Archer) reject conflation as if it were such an error in reasoning, when in fact it is a substantive argument that cannot be dismissed so easily. This is weird because both the ontological democracy and the ontological emergence arguments are themselves non-demonstrable but substantively important propositions in CR. Thus, most CR attempts to dismiss either reductionist or simplifying ontologies do themselves commit such a formal error of reasoning, namely, begging the question in favor of ontological emergence and ontological democracy.
Another way to make a CR argument is to start with a predetermined high-level entity of choice. This kind of CR argument is more “defensive” than constructive. Here the analyst picks an entity whose real status has (for some reason) become controversial, either because some theorists purport to show that it does not “really” exist (meaning that it is just a shorthand way to talk about some aggregate of actually existing lower-level entities), or that it is not required to generate scientific accounts of some slice of the world (ontological simplification or reduction a la caloric or phlogiston). Here CR arguments essentially use the ontological democracy postulate to say that the preferred whole has ontological independence from either the lower-level constituents or the higher-level entities to which others seek to reduce the focal entity. Moreover, the CR theorist may argue that this ontological independence is demonstrated by the fact that this entity has (actualized and/or empirically observable) causal powers, once again above and beyond those provided by the lower-level (or higher-level) entities or processes usually trotted out to “reduce it away.” This applies in particular to the “humanist” strand of CR that attempts to defend specific causal powers that are seen as inherent properties of persons (e.g. reflexivity in Archer’s case) or even the very notion of the person (in Chris Smith’s What is a Person?) as an emergent whole endowed with specific causal powers, properties and propensities.
To recap, CR is a complex object composed of many parts, but not all parts are of the same nature. I have distinguished several parts, organized according to the generality of the claim and the specificity of the substantive points made. In this respect, I would distinguish between:
1) The parts that CR shares with all “vanilla” realisms. This includes the postulate of ontological realism (mind-independence of the existence of reality), the transitive/intransitive distinction, the EAR distinction, and so on. In itself, none of these theses make CR particularly distinctive, unique or useful. If you disagree with CR at this level, based on irrealist premises, congratulations. You are insane.
2) The Aristotelian ontology.- This specifies the kind of realism that CR proposes. Here things get more interesting, because there is actual philosophical debate about this (nobody seriously defends irrealist positions in Philosophy any more and most sociologists just like to pretend to be irrealists to show off at parties). Here CR could play a role in philosophical debates insofar as a neo-Aristotelian approach to realism and explanation is a coherent position in Philosophy of Science (although it is not without its challengers). Here belongs (among other things) the specific CR conceptualizations of objects and entities, the causal (dispositional) powers ontology (when hooked to the EAR distinction) and the specific “Theory of Science” and the “Theory of Explanation” that follows from these (essentially endorsing mechanismic and systems explanation over reductive, covering law stories). This is what I believe is the best ontological move and CR should be commended in this respect.
3) The stratified ontology.- This comes from yoking (1) and (2) to the ontological emergence and ontological democracy postulates. This is where you can find a lot of “controversial” (where by controversial I mean worth arguing about, worth specifying, worth clarifying, and in some cases worth rejecting) arguments in CR. These are of three types: (1) Ontological emergence arguments augment the standard common-sense ontology of material entities to argue for the existence of higher-level non-material entities; thus “social structures” are as real as the couch that you are lying on. The danger here is a world that comes to be populated with a host of emergent “entities” with no principled way of deciding which ones are in fact real (beyond the theorist’s taste); this is the problem of ontological inflation. (2) “Downward causation” arguments add this postulate to suggest that the emergent (non-material or material) entities not only “exist” in a passive sense, but actually exert causal effects on lower-level components or other higher-level entities. (3) “Ontological independence” arguments attempt to show that a particular sort of entity that is usually done violence to in standard (reductionist) accounts has a level of ontological integrity that cannot be impugned and a set of causal powers that cannot be dismissed. In humanist and personalist accounts, this entity is “the person,” along with a host of powers and capacities that are usually blunted in “social-scientific” accounts (e.g. persons as centers of moral purpose), and the enemies are the positions that attempt to explain away these powers or capacities or to show that they don’t matter as much as other entities (e.g. “social structure”).
4) Continuing extensions of the stratified ontology argument.- This is the part of CR that has drawn an (unfair) amount of attention, because it extends the same set of arguments to defend both the reality and the causal powers of a set of entities that (a) a lot of people are diffident about according the same level of reality to as the standard material entities, and (b) most people would have difficulty even calling entities. These may be “norms,” “the mental,” “the cultural,” “the discursive,” and “levels of reality” above and beyond the plain old material/empirical world that we all know and love (e.g. super-empirical domains of reality). You can see how CR can get controversial here.
5) Additional stuff.- A lot of other CR arguments do not directly follow from any of these, but are added as supplementary premises to round out CR as a holistic perspective. For instance, the rejection of the fact-value distinction in science is not really a logical derivation from the theory of science or the neo-Aristotelian ontology, and neither is the “judgmental rationality” postulate (that science progresses, gradually gets at the truth, etc.). All realisms presuppose that we get better at science, but this is not really a logical derivation from realist premises (as argued by Arthur Fine). The fact/value thing is in the same boat, because it requires a detour through a lot of controversial group (3) and group (4) territory to be made to stick. For instance, given that persons are emergent entities, endowed with non-arbitrary properties and powers, the “relativist” argument that any social arrangement is as good as any other for the flourishing of personhood is clearly not valid. This means that social scientists have to take a strong stance on the value question (hence sociological inquiry cannot be value neutral). Because a mixture of Aristotelian ontology and ontological emergentism applied to human nature is incompatible with moral (and social-institutional) relativism, the fact-value distinction in social science is untenable. However, note that to get there a lot of other premises, sub-premises, and substantive arguments for the reality of persons as emergent, neo-Aristotelian entities have to be accepted as valid. In this sense the fact/value thing is only a derivation from certain extensions of CR into controversial territory. As already intimated, What is a Person? is a (well-argued!) piece of controversial CR precisely in this sense.
Note that this clarifies the “giant package” versus “minimalist” CR debate. Let’s go back to the cable analogy. So you are considering signing up for CR? Here’s the deal: the “basic” CR package would (in my view) be any acceptance of (1) and (2) (with some but not all elements of (3)). In this sense, I am a Critical Realist (and so should you be). The “standard” CR package includes, in addition to (1), (2) and all of (3), some elements of (4). Here we enter controversial territory, because a lot of CR arguments for the “reality” of this or that are not as tight or well-argued as their proponents suppose. In their worst forms, they resolve themselves into picking your favorite thing (e.g. self-reflexivity) and then calling it “real” and “causally powerful” because “emergent.” It is no surprise that Archer’s weakest work is of this (most recent) ilk. Here the obsession with ontological democracy prevents any consideration of ontological simplification or actual ontological stratification (meaning getting clear on which causal powers matter most rather than assigning each one its preferred, isolated level). Finally, the “turbo” package requires that you sign up for (1) through (5); this of course is undeniably controversial, because here CR goes from being a philosophy of scientific practice to being a philosophy of life, the universe and everything. Sometimes CR people seem surprised that others may be reluctant to adopt a philosophy of life, but I believe that this has to do with their penchant for supposing that once you accept the basic, the chain of reasoning that leads you to the standard and the turbo follows inexorably and unproblematically.
This is absolutely not the case, and this is where CR folk would benefit most from talking to people who are not fully committed to the turbo, but who (like other sane people) are already 80% into the basic (and maybe even the standard). My sense is that we should certainly be arguing about the right things, and in my view the right things are at the central node (3), because this is where we find the key set of argumentative devices that allows CR people to derive substantively meaningful (“controversial”) conclusions (both at that level—arguments for the reality of “social structure”—and about type (4) and (5) matters), and where most attempts to provide a workable ontology for the social sciences are either going to be cashed in, or rejected as aesthetically pleasing formulations of dubious practical utility.
The continuing brouhaha over Fabio’s (fallaciously premised) post*, and Kieran’s clarification and response, has actually been much more informative than I thought it would be. While I agree that this forum is not the most adequate to seriously explore intellectual issues, it does have a (latent?) function that I consider equally valuable in all intellectual endeavors, which is the creation of a modicum of common knowledge about certain stances, premises and even valuational judgments. CR is a great intellectual object in the contemporary intellectual marketplace precisely because it seems to demand an intellectual response (whether by critics or proponents), thus forcing people (who otherwise wouldn’t) to take a stance. The response may range from (seemingly facile) dismissal (maybe involving dairy products), to curiosity (what the heck is it?), to considered criticism, to ho-hum neutralism, to critical acceptance, or to (sock-puppet aided) uncritical acceptance. But the point is that it is actually fun to see people align themselves vis-à-vis CR, because it provides an opportunity for those people to lay their cards on the table in a way that seldom happens in their more considered academic work.
My own stance vis-à-vis CR is mostly positive. When reading CR or CR-inflected work, I seldom find myself vehemently disagreeing or shaking my head vigorously (this in itself I find a bit suspicious, but more on that below). I find most of the epistemological and meta-methodological recommendations of people who have been influenced by CR (like my colleague Chris Smith, Phil Gorski, George Steinmetz, or Margaret Archer) fruitful and useful, and in some sense believe that some of the most important of these are already part of sociological best practice. I think some of the work on “social structure” written by CR-oriented folk (Doug Porpora and Margaret Archer early on, and more recently Dave Elder-Vass) is important reading, especially if you want to think straight about that hornet’s nest of issues. So I don’t think that CR is “lame.” Although, as with any multi-author, somewhat loose cluster of writings, I have indeed come across some work claiming to be CR that is indeed lame. But that would apply to anything (there are examples of lame pragmatism, lame field theory, lame network analysis, lame symbolic interactionism, etc., without any of these lines of thought being “lame” in their entirety).
That said, I agree with the basic descriptive premises of Kieran’s post. So this post is structured as a way to try to unhook the fruitful observations that Kieran made from the vociferous name-calling and defensive over-reactions to which these sorts of things can lead. Think of it as my own reflections on what this implies for CR’s attempt to provide a unifying philosophical picture for sociology.
I often teach our first-year required course on social organization. I usually include a section on constructionism, which means we trot through Berger and Luckmann. One of our BGS* got really upset (enthusiastic?) because (a) she was tired of reading the text and (b) she believed that the text reflected a male viewpoint.
In response, I started with a pedagogical explanation. Since this is an introductory course, many students may not have read B&L (I hadn’t until I went to grad school). Also, there is benefit to rereading classic texts; they bear multiple readings. Then I turned the conversation around: rather than critique B&L, I asked, “What would you do to make this a better piece of theory?”
The discussion went in some positive directions. I’d summarize it this way. The sociology of knowledge in B&L has been modified in a few principal ways:
- Punctuated equilibrium/Kuhnian theory – the cycle of socialization and knowledge production is a stable cycle of habitualization until an event shifts people to a new equilibrium (in the physical sciences, at least)
- Reflexivity – people’s position allows them to observe things that others don’t. This is found in feminist sociology and the work of Patricia Hill Collins
- Foucauldian theory – conflict and resistance disrupt the dominant “episteme,” thus creating subaltern knowledge
It’s unclear to me how performativity works into this. It might be new, or a tweak – stocks of knowledge can be endogenous.
*Brilliant Graduate Student
In the movie The Holy Grail, one of the most insightful scenes is when Sir Lancelot charges a castle to save a maiden in distress. What makes it funny is that he charges across this open field for a few minutes and he is completely ignored by the guards. When he finally reaches the castle door, the guards act totally surprised. But of course, they should have seen it coming.
Sociology is having that moment right now. Right now, the territory of the social sciences is under pressure to expand and reshape itself. And we’ve seen this coming for a while. The forces are many. Increasing knowledge of gene-behavior links. The appearance of “Big Data,” which we’ve argued about on this blog. The demand for experiments from the policy world.
There are a few responses. We can simply ignore these trends and continue as usual. That’s what Nicholas Christakis was arguing against in his column. Or, we can uncritically accept them, a position which has some advocates. The response I’d prefer to see is a more thorough engagement, an integration of these issues into the core social sciences. Otherwise, we’ll become the discipline of 20th century theory and methods, not the place that comprehensively looks at social life.
This is a guest post by Graham Peterson. He has just finished up his master’s degree in economics at UIC and will begin the PhD program in sociology at the University of Chicago. He is interested in economic sociology
and extended blog comments.
It’s a shame sociology took its final-swoop quantitative turn just about the same time economics reissued its permissions to do comparative history and started publishing discourse studies in its mainstream journals. It’s as if estranged siblings would be forever doomed to blow past one another and reissue each other’s mistakes. Politics.
The turn for sociology was of course both theoretical and empirical, but that damned dissertation in 2021 really sealed it: Foundations of Sociological Analysis (Paul Samuelson glowed proud from his grave). It was as if sociologists had been waiting for someone to come along and resolve all the definitional issues in theory and compose an axiomatic graph-theoretic derivation of sociological principles. Now all the theorists do is applied combinatorics on graphs. Useless.
Earlier this week the Theory Section of ASA announced in an email that Omar Lizardo is this year’s winner of the Lewis Coser Award for Theoretical Agenda Setting. Congratulations! According to the Theory Section’s website, the award is “intended to recognize a mid-career sociologist whose work holds great promise for setting the agenda in the field of sociology.” This is a well-deserved recognition for Omar, whose research covers a diverse range of theory and empirical substance. It’s unlikely that you’ve read everything interesting that Omar has written lately (including his fantastic book reviews). My advice is just to dig in and start reading. You’ll learn something.
Now that I’ve been on the faculty for almost ten years, I’ve spent a lot of time reading journal articles, manuscript submissions, book proposals, tenure files, and hundreds (!) of job applications. I’ve noticed that almost no one self-describes as a functionalist or neo-functionalist, except a few senior scholars like Jeffrey Alexander or Paul Colomy. This is surprising because sociologists still do research on social norms, social systems, and social differentiation, which are issues of central importance to classical structural functionalism. Why?
- Maybe people still do neo/functionalism, but they just don’t use the old jargon. Since Parsons got banished in the 1970s, maybe people don’t even know they are functionalists, because they aren’t exposed to it. Call it functionalism on the “down low.”
- Maybe people simply migrated out of American sociology. Luhmann is clearly a neo-functionalist but he’s more popular in areas like media studies. He’s also a Big Deal in European sociology.
- Maybe there is a notable crowd of functionalists, but they simply don’t run in my circles.
- Functionalist ideas were imported/distorted by modern sociologists. A lot of folks have argued that institutionalism is a sort of modern day functionalism.
- Redefinition: The topics of interest to functionalists (e.g., norms) are better analyzed when recast in other theories. For example, the rational choice theory of norms has more appeal than Parsons’s theory.
- Elite abandonment: Maybe it is just the structure of the profession. The elites killed functionalism by not hiring functionalists in leading programs. With only a few functionalists (any other than Alexander?) in top-20 programs, it is nearly impossible to train a self-sustaining cohort of functionalist scholars.
Other ideas? Can any neo/functionalists enlighten me?
When I visited Millsaps College a few weeks ago, I got into a discussion about international relations theory with my host, political scientist Michael Reinhard. I asked him why we (social scientists) needed to study famous political leaders, like Julius Caesar or Winston Churchill. His argument was intriguing. He said that highly successful social actors have often spent a lot of time understanding their social world. They are good at what they do – international relations in this case – because, at the very least, they have an intuition about the world that is important and correct. Some, like Churchill, will even explain their views to others. In other words, political scientists should study great leaders because great leaders actually understand power fairly well.
In sociology, we have no such argument, but it is worth thinking about. We are resistant to great-leader stories, and for good reason: they often devolve into hero worship, or they rely on “Whig” history. But that doesn’t mean great-people scholarship is without use. For example, what did Steve Jobs understand about markets that management scholars should learn? Or, a more sociological example, what does a great religious leader understand about religion that sociologists of religion should know? Taking a turn from Bourdieu, we could look at any social field, identify the “masters,” and then use them as research sites where we can understand how the field is put together.
A while ago I asked, “what happened to resource dependence theory?” Although resource dependence theory seemed to be the dominant macro-organizational theory of the late 1970s, by the early 1990s the theory was eclipsed by institutional theory and population ecology. In the previous post, I offered some reasons for why this might have happened, but I stopped short of doing any serious analysis or a literature review. So I was happy to see that Tyler Wry, Adam Cobb, and Howard Aldrich have a paper in the latest Academy of Management Annals that tackles this question and offers some thoughts about the future of RD theory. Based on their analysis, the problem is worse than I imagined. Not only is RD theory cited less than those other theories, but it also seems to be the case that most citations to RD theory are fairly superficial. On a positive note, RD theory has become associated with a few fragmented communities of scholars who were interested in studying the particular strategies that Pfeffer and Salancik suggested actors/organizations ought to take when seeking to gain control over dependencies. From the Wry et al. paper:
[W]e conducted a systematic analysis of every study that cited External Control in 29 highly regarded management, psychology, and sociology journals between 1978 and 2011. Given the breadth of empirical domains covered by RD, our analysis focused on identifying how, and to what extent, each article used the perspective. Our results indicate that there is merit in Pfeffer’s assertion that RD serves primarily as a metaphorical statement about organizations. Though External Control continues to be cited at an enviable rate, the vast majority of citations are ceremonial—variously used as a nod toward the environment, resources, or power. Results also show that beneath an ever-growing citation count is a fragmented landscape of scholars whose primary interest is in the specific strategies discussed in External Control—mergers and acquisitions (M&A), joint ventures and strategic alliances, interlocking directorates and executive succession—rather than the underlying perspective…. To say that RD has been reduced to a metaphorical statement about organizations, however, belies its considerable impact. Indeed, while RD lacks a coterie of followers and has failed to catalyze a dedicated research program in the vein of NIT or OE, it has had a uniquely broad influence within management scholarship. Scholars have drawn on RD to derive key hypotheses in the study of M&As, joint ventures and strategic alliances, interlocking directorates, and executive succession, with the hypotheses largely supported (Hillman, Withers, & Collins, 2009).
They also suggest that it’s time to revive RD theory in organizational analysis. Why should we do that?
Marcel Fournier’s exegesis of Durkheim’s life and work is much more than a biography of a French academic in fin-de-siecle Europe. It offers the reader an intellectual history of ideas, alongside an insight into the process of knowledge production and the craft and method of empirical analysis. The logic of Durkheim’s argumentation is meticulously (and exhaustively) dissected. Fournier’s forensic examination goes further, though, drawing on a wealth of archival documentation, including correspondence, manuscripts and reports, to re-create the energy, excitement and politically charged atmosphere in which academic sociology in France began to take shape.
Durkheim believed that sociology should concern itself with social facts, the external and objective nature of social reality that exists beyond the individual. Social facts are “the substratum of collective life”, he observed in his Rules of Sociological Method (1895), an early attempt to outline a modus operandi for the discipline of sociology. Social facts have a specific character and are discernible in systems of religious, moral and juridical belief. For Durkheim, man (sic) is both an individual and a social being. Ways of thinking and acting are not simply the work of the individual but are invested in a moral power above him.
Check it out.
In mathematics, there’s a very rough distinction between “linear” things and “combinatoric” things. We are all familiar with linear science, but combinatorics is more subtle. Combinatorics simply means the math you need in order to count different combinations of things. For example, you may ask, “if I have ten red balls and twenty green balls, and I randomly draw three balls, how many different combinations of red and green do I get?” That’s a combinatoric question – counting discrete things.
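The ball-drawing question can be answered mechanically. Here is a minimal sketch in Python (standard library only), using the numbers from the example above:

```python
from math import comb

red, green, draw = 10, 20, 3

# Ways to draw exactly k red balls (and draw - k green balls).
ways = {k: comb(red, k) * comb(green, draw - k) for k in range(draw + 1)}

print(ways)                # {0: 1140, 1: 1900, 2: 900, 3: 120}
print(sum(ways.values()))  # 4060, which equals comb(30, 3)
```

There are only four possible red/green compositions, but they are far from equally likely; keeping that discrete bookkeeping straight is exactly what combinatorics does.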
Social science has lots of tools that exploit linear models: utility maximization, regression analysis, scale construction, etc. But we don’t have a lot of tools that address the combinatoric side of social life. To see what I mean, consider the issue of policy formation – why does government make some policies and not others?
- The linear answer (taken from the Median Voter Theorem in economics): Politicians offer policies designed to attract the median voter. Thus, the utility of a policy is approximated by how popular it is.
- The combinatoric answer (taken from Agendas, Alternatives, and Public Policies): Nature produces a stream of political issues and actors. Think of nature as drawing them from a big box of issues and people. If nature happens to simultaneously choose an issue and actor that “match,” then a policy gets made.
These are not inconsistent views, but they require very different toolkits. The first is about studying distributions of voters. The second is an arrival process. Metaphorically, the first model is a world of smoothness with thresholds. The second is chunky. Over the last hundred years or so of quantitative social science, we have lots of tools for smooth things. We have a few tools for chunky discrete things, like network analysis, but not enough. Ambitious quantitative social science PhD students should carefully think about that last sentence.
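To make the contrast concrete, here is a toy simulation of the combinatoric (“chunky”) view of policy formation sketched above. All names and numbers are invented for illustration; the point is only that policy output is driven by discrete chance matches in an arrival process, not by a smooth function of voter preferences:

```python
import random

random.seed(0)

# Each period, nature draws one issue and one actor from the "big box."
# A policy is made only when the issue and the actor happen to match.
issues = ["health", "tax", "defense", "education"]
champions = {"health": "A", "tax": "B", "defense": "C", "education": "D"}
actors = list(champions.values())

policies = []
for period in range(10_000):
    issue = random.choice(issues)
    actor = random.choice(actors)
    if champions[issue] == actor:  # a chance match opens a policy window
        policies.append((period, issue))

# With 4 issues and 4 actors, the match rate should hover near 1/4.
print(len(policies) / 10_000)
```

Analyzing this model means studying the arrival and matching process (waiting times, match counts), which calls for discrete tools rather than the linear toolkit of the median-voter story.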
Single autocatalytic networks generate life, but they do not generate novel forms of life. There is nothing outside of a single decontextualized network to bring in to recombine with what is already there. Self-organizing out of randomness into an equilibrium of reproducing transformations, the origin of life, was a nontrivial accomplishment, to be sure. But this is not quite speciation, which is emergence of one form of life out of another.
Transpositions and feedbacks among multiple networks are the sources of organizational novelty. In a multiple-network architecture, networks are the contexts of each other. Studying organizational novelty places a premium on measuring multiple social networks in interaction because that is the raw material for innovation. Subsequent cascades of death and reconstruction may or may not turn initial transpositions (innovations) across networks into system-wide invention.
Through fifteen empirical case chapters, Padgett and Powell extracted eight multiple-network mechanisms of organizational genesis:
In response to Kieran and Ezra’s posts, Shreeharsh Kelkar of MIT’s Program in History, Anthropology and STS wrote a lengthy post about the nature of performativity arguments. A representative clip:
To put it yet another way, the difference between constructivists and realists is over the issue of prediction, and in particular over the issue of long-term prediction. Short-term predictions are possible for both the realist and the constructivist. But long-term predictions, say, about housing prices or computer prices 50 years from now, will be more difficult for constructivists to make than for realists. It is difficult only because even objective factors that determine prices can be changed by long-term cultural work; and this cultural work is impossible to predict. The more confident you are about prediction, the more you shift toward the realist side of the spectrum. The less confident you are about prediction, the more of a constructivist you become.
And this explains, finally, some of the arguments that have been happening in the Orgtheory comment threads. Would you like your regulator to be a realist or a constructivist? Realists argue that even the existence of regulators is premised on realism; for if there were no objective factors constraining social facts (like prices) then how would one even begin to regulate something in the first place? I would disagree. I think it depends on the time-frame that the regulator is supposed to regulate. A regulator who is thinking about the future 50 years from now is simply deceiving himself or herself. For a regulator who is thinking 5 or 10 years down the line: it simply doesn’t matter whether he is a realist or a constructivist.
Check it out.
Last week, I argued that there was kind of a big problem in modern sociology: one of our dominant macro theories is highly inconsistent with many of our favorite micro theories. If we look at various popular accounts of individual action in cultural sociology (e.g., toolkit theory), many don’t produce isomorphism.
Here’s the outline of the argument:
- The gist of institutional theories of isomorphism is that people working in org fields experience pressures for conformity. If you don’t follow a pre-existing cultural script, you can’t run your organization.
- For this argument to work, you need to assume that people respond to their environment in fairly uniform ways.
- In the original D&P ’83 article, in the hypotheses section, they admit variance when status orders are weak. Otherwise, the prediction is that when status orders are well established, or when high status actors propagate norms, you get conformity.
- Different authors offer different social psychological mechanisms. D&P ’83 and ’91 (the intro) often appeal to a wide range of scholarship to justify isomorphism. They appeal to Berger and Luckman, as well as Bourdieu. You can also concoct a rational choice version, which is consistent with resource dependency arguments.
- If you actually read the fine print of these social psychology theories, most do not predict isomorphism, except Bourdieu’s habitus theory. For example, Berger and Luckmann’s book describes how people develop a stock of knowledge that defines their social reality. Fair enough. But nowhere do B&L ever say that this social reality is highly uniform, resistant to change, or otherwise offer a mechanism that acts as an iron cage. The slip is that “taken for granted” gets interpreted as “hard to challenge.” Look at Griswold’s theory of cultural objects, or Zelizer, and it’s all about local constructions of meaning. That does not imply isomorphism. Another case is rational choice institutionalism, where you set up a game theory model to predict norm following. Fair enough, but you have lots of hidden assumptions – uniform agents, low enforcement costs, etc. Drop these and you get heterogeneity. Indeed, what you get is the far less popular Meyer and Rowan ’77 institutionalism.
Of course, I am not the only person who has noted these issues. DiMaggio’s idea of the institutional entrepreneur is one attempt to get around this problem. Clemens and Cook (’99) note that even iron cage institutionalism only predicts stasis if you assume perfect reproduction. Admit imperfect reproduction and the theory breaks down. In the 2000s, the focus shifted to logics, institutional work, and conflict/movements. Substantively, it’s an implicit rejection of earlier institutionalism. Theoretically, it’s (almost) a complete reworking of the theory. These may not be institutionalist in the sense of the 70s or 80s, or even the early 90s, but at least they are consistent with how many sociologists describe motivation and action.
Ok, let’s start with the Coleman diagram (or the “bathtub,” as they call it in Germany). For institutionalism, the two “macro” states are culture and isomorphism in an organizational field. That’s what the macro states are in DiMaggio and Powell ’83 or world polity theory.
Now, take your favorite micro sociological theory – maybe you are a Swidlerian toolkit person, or a Goffmanian frame theorist, or a Bourdieu habitus person. Then, complete the Coleman diagram. Except for habitus theory, you’ll notice that a lot of these theories don’t really produce isomorphism on the macro level.
A definition: theory death is when some intellectual group tires of theory based on armchair speculation. Of course, that doesn’t mean that people stop producing theory. Rather, it means that “theory” no longer means endless books based on the author’s insights. Instead, people produce theory that responds to, or integrates, or otherwise incorporates a wealth of normal science research. In sociology, theory death seems to have happened sometime in the 1980s or 1990s. For example, recent theory books like Levi-Martin’s Social Structures or McAdam and Fligstein’s A Theory of Fields are extended discussions of empirical research that culminate in broader statements. The days of endlessly quoting and reinterpreting Weber are over. :(
Now, it seems, theory death is hitting some areas of political science. Consider a recent blog post by political scientist Stephen Saideman called “Leaving Grand Theorists Behind.” Saideman trashes a recent piece by John Mearsheimer and Stephen Walt (“Leaving Theory Behind: Why Hypothesis Testing Has Become Bad for IR“) that urges international relations scholars to downplay empirical work and return to grand thinking. Saideman is pissed:
- My first reaction was: Next title: why too much research is bad for IR….
- As folks pointed out on twitter and on facebook discussions, it seems ironic at the least that someone who made a variety of testable predictions that did not come true (the rise of Germany after the end of the cold war, conventional deterrence, the irrelevance of international institutions, etc) would suggest that testing our hypotheses is over-rated or over-done.
And the critique goes on and on… My take: for reasons that I have yet to understand, political science has not completely washed out old style “theory” in the way that it happened in most other social science disciplines. Therefore you have pockets of people who hold that as their ideal, even in fields that are obviously empirical. When they are very senior and very respected, you get this sort of flare up.
First of all, I’d like to thank Neil Fligstein for guest blogging on orgtheory. Acknowledging his contribution has been long overdue. He wrote a series of really provocative and intriguing posts about his new book, A Theory of Fields (see here and here), which spurred an intense discussion about the various strands of institutional theory, the role of agency and change in institutional theory, and the strategic orientation of actors. Rather than rehash that debate, I wanted to step back and offer my own take on what I see as some of the most important (potential) contributions of field theory to organizational scholarship.
Even though in his posts Neil framed the book as a response to institutional scholarship, I think the book has more ambitious, broader designs. Fligstein and McAdam try to integrate various research strands and subfields – including, but not limited to, institutional theory and social movement theory – and offer a unified theory of fields and action. In this light, they have more in common with John Levi Martin (JLM), who has written his own treatise on fields and social action, than they do with the hordes of institutional scholars. (Their view of fields certainly owes more to Bourdieu than it does to DiMaggio and Powell’s concept of organizational fields.) They are attempting grand theory in a way that is rarely done in contemporary sociology. The grandness of their theoretical lens is apparent once you consider that they mean for it to apply not only to markets or industries but also to fields that exist within organizations or that describe relations between social movement activists.
The major difference between them (F&M) and JLM or other field theorists is the way they conceptualize fields as sites of collective action (strategic action being the most important form of collective action that actors take to reproduce or change fields). In contrast, JLM is more interested in fields as sites of social action, period. According to F&M, the major problem that faces actors in any field – whether you’re talking about American corporations seeking to deregulate an industry or parents addressing the education needs of their children – is figuring out how to cooperate and take collective action so that they can gain advantages over contending groups. Engaging in collective action in order to get an advantage is the motivation that drives field formation, struggle, and change. A strong version of their theory would suggest that changes in meaning systems, rules and norms, or institutional settlements are endogenous to these strategic struggles. In fact, the field itself can be seen as situational, inasmuch as it forms around struggles over ideas and standing. Fields only exist inasmuch as there is some sort of collective action.
I’ve finished writing a brief bibliography on institutionalism, which includes a section on the critics that I blegged about earlier. What did I learn from reading the critics? Well, the critics come in a few flavors:
- Weak model of human behavior – This can be found in Stinchcombe’s Annual Review and the discussion of the “cultural dupe” model. The good news, for institutionalists, is that this problem has been addressed. Between the inhabited institutions folks like Hallett and his buddy Marc Ventresca and the Lawrence/Suddaby institutional work folks (including myself), I think we’ve simply abandoned the DiMaggio and Powell 83 model of behavior and replaced it with an improvement (people have agency, but they must deal with institutions).
- Vague – Jerry and others have claimed that the theory is vague or incoherent. This obviously motivated the Jepperson ’91 chapter and the endless army of books that followed (Scott 2000, the handbook of organizational institutionalism, etc). My verdict is mixed. A lot of basic ideas, like “field,” still retain the “you know it when you see it” flavor and are quite vague.
- Empirically false – No one, I think, has successfully answered the Kraatz and Zajac (1996) article, which speaks to a major chunk of institutional theory – the view that organizations must act in accordance with cultural scripts to ensure survival. Jerry Davis is right about this. Yet, other rather simple neo-institutional hypotheses have been repeatedly tested. For example, there’s a lot of evidence for mimetic isomorphism in various fields. The recent work by Sauder/Espeland/etc. focuses on how practices (such as rankings in higher ed) become taken for granted, which supports a Zucker ’77 kind of institutionalism. Also, it seems nearly impossible to test the “B” hypotheses in DiMaggio and Powell ’83 because you need to compare fields, which seems hard since fields can only be defined inductively.
My verdict? The haters are correct. New institutionalism has a number of severe issues. The good news is that some problems have been dealt with in positive ways, such as a better model of individual action. It’s really a rejection of late 70s/early 80s organizational institutionalism, but that’s ok. Other areas are mixed. I’d say that at least one major hypothesis has been refuted, while others seem to be ok. Finally, there seem to be some fundamental conceptual issues (e.g., how to know a field, what exactly counts as an institution) that really need to be rethought from the ground up.
When I went to grad school in the late 90s/early 2000s, organizational sociology/org theory was taught in the following way:
- Two or three major schools of thought – ecology/demography, institutionalism, and rat choice/neo-rat choice/Carnegie school.
- One “perspective” – networks of/in organizations
- Various topics, such as gender or race in the workplace
What has changed in the last ten years? I’d guess that we’ve deepened ourselves a great deal in networks and institutions, people are less interested in ecological arguments, and there are new topics (e.g., movements and orgs). What I find interesting is that we don’t have many new variables, in the way that the ecologists brought us density or the institutionalists brought us isomorphism/diffusion.
Do you agree with this assessment? What do you think is the new variable in orgtheory? What ought to be the new variable in org theory?
Having taught undergraduate social theory a lot, I’ve come to appreciate that there are four levels of learning:
- Memorization and reading comprehension. For example, students need to know that Weber talked about bureaucracy and that they need to understand his definition.
- The “basic point.” We want students to understand the perspective presented by various theorists. Using Weber as an example, we might say that one of his major ideas was that modern life reflected a rationalization of behavior.
- Applications: We want students to be able to analyze some social phenomena in terms of the underlying variables or concepts found in various theories.
- Theoretical nimbleness: We want students to be able to grasp the subtlety of theory, how one set of ideas (e.g., Marxism) connects to another (e.g., Weberian historical sociology).
A fair number of students hit a brick wall with #1. The level of reading in a typical social theory course is much, much more demanding than what we teach in intro or social problems. For many students, social theory will be the hardest course they will take. But still, most college students can achieve basic reading comprehension if they get some tutoring or if you boil it down to key words and catch phrases.
The next three levels elude most social theory students. Since they don’t have much experience reading challenging texts, they are stumped when asked to extract the main variable or idea of a theory. Since many sociology programs are disorganized – they don’t build linearly on fundamental principles – many sociology majors have never really been forced to think in terms of variables until very late in their program. Asking for applications of theory to real-world examples seems to be a very alien concept for most students. Of course, level 4, facility with theory as theory, is not expected of students, except those aiming for graduate school.