Archive for the ‘social construction’ Category
Consider the following approaches to the same issue – fetal alcohol syndrome (FAS). In 2000, Elizabeth Armstrong and Ernest Abel published an article in the journal Alcohol and Alcoholism arguing that fetal alcohol syndrome had become a moral panic. Even though people had become obsessed with FAS, there was actually very little evidence to suggest that moderate alcohol consumption damaged fetuses. This argument is elaborated in the 2008 book Conceiving Risk, Bearing Responsibility. In 2013, the economist Emily Oster published a book called Expecting Better, which assesses pregnancy advice with a review of the pertinent clinical evidence. Like Armstrong, Oster finds that the norm against moderate alcohol consumption is not supported by the data.
The comparison between Oster and Armstrong is revealing. Most obviously, more people know about Expecting Better because, frankly, economists are more respected than sociologists. But there is a deeper lesson. When Oster frames her work, she presents it as a morally neutral project. Her framing is roughly: “Statistics is hard, people may not have all the facts, and you might have a mistaken belief, but as an economist, I am trained in statistics. I can help you make a better choice.” Thus, the reader is morally blameless.
In contrast, Armstrong’s approach to FAS relies on standard explanations of moral panics in sociology. It goes something like this: “The facts we believe reflect our underlying biases. These biases reflect our evaluations of certain types of people, who may not deserve that stigma.” Thus, if the reader buys FAS, they are implicated in an immoral action – unfairly exercising gender prejudice. Heck, all of society is implicated.
This is an interesting observation about the public image of disciplines. Economists may advocate unpopular policies (e.g., they are often critical of minimum wage laws) but their moral framework is fairly neutral and technocratic: “If you don’t buy my policy, it’s probably because you aren’t aware of all the factors involved. You haven’t calculated the social welfare function properly!” In contrast, sociologists often make arguments that implicate the moral character of the audience. And that doesn’t buy you a lot of friends.
This week the Supreme Court considered whether corporations ought to have constitutional rights of religious freedom, as given to human individuals, in Sebelius v. Hobby Lobby Stores Inc. For many people, the idea that companies ought to be given all of the rights of humans is absurd. But in recent years, this idea has become more and more of a reality, thanks to game-changing cases such as Citizens United v. FEC. How did we get to this place?
In an article on Slate, Naomi Lamoreaux and William Novak briefly go over the history of how corporations evolved from artificial persons to real persons with human rights. They emphasize that this change was a gradual shift, one that still seemed unthinkable to justices as late as the Rehnquist court.
The court’s move toward extending liberty rights to corporations is even more recent. In 1978, the court held in First National Bank of Boston v. Bellotti that citizens had the right to hear corporate political speech, effectively granting corporations First Amendment speech rights to spend money to influence the political process. But even then, the decision was contentious. Chief Justice William H. Rehnquist, in dissent, reminded the court of its own history: Though it had determined in Santa Clara that corporations had 14th Amendment property protections, it soon after ruled that the liberty of the due-process clause was “the liberty of natural, not artificial persons.”
If you find this piece interesting then I would encourage you to read Lamoreaux’s collaboration with Ruth Bloch, “Corporations and the Fourteenth Amendment,” a much more detailed look at this history. One interesting point that emerges from this paper is that our general understanding of how rights became ascribed to corporations is historically inaccurate. Bloch and Lamoreaux show that although the Court in Santa Clara v. Southern Pacific Railroad likened corporations to individuals and allowed that they might have some protected rights, it was careful to distinguish between corporate and human civil rights.
During the late nineteenth and early twentieth centuries, the Supreme Court drew careful distinctions among the various clauses of the Fourteenth Amendment. Some parts it applied to corporations, in particular the phrases involving property rights; but other parts, such as the privileges and immunities clause and the due-process protections for liberty, it emphatically did not. Although this parsing might seem strange to us today, it derived from a remarkably coherent theory of federalism in which the Court positioned itself both as the enforcer of state regulatory authority over corporations and as the guardian of individual (but not corporate) liberty against state intrusion. To the extent that the Court extended constitutional protections to corporations, it did so to protect the interests of the human persons who made them up.
Read the whole paper. It’s fascinating!
Are we a post-racial society? See Vilna Bashi Treitler’s new book The Ethnic Project: Transforming Racial Fiction into Ethnic Factions
Sociologist Vilna Bashi Treitler has a new and especially timely book coming out about race and ethnicity in the US. Look for The Ethnic Project: Transforming Racial Fiction into Ethnic Factions at your local bookstore or at the Stanford University Press table at ASAs.
The Ethnic Project analyzes changing depictions of race relations to help readers understand the development and perpetuation of a racial hierarchy in the US.
A contemporary cartoon and summary after the jump.
Imagine going to the ATM and discovering you can’t withdraw your money because the ATM is out of cash. Not only that, but the bank is closed because of a national holiday so you can’t use the bank teller to withdraw money, electronic transfers of funds are frozen, and the stores refuse to accept credit cards out of concerns that electronic payments won’t be made. If you are able to get to your money, you learn that 6.75% of your funds (or 9.95% if you are a lucky ducky with over 100K euros in your account) will be converted into bank shares under a compulsory levy intended to prop up the banking system. The mortgage payment that you scheduled, the student loan check that you deposited to pay for your education, the vendors that you need to pay for your small business – all are up in the air.
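To make the arithmetic of the proposed levy concrete, here is a minimal sketch. The function name and structure are illustrative, not from any official source; the rates are the ones quoted above, and the actual proposal’s details shifted between drafts.

```python
def deposit_levy(balance_eur: float) -> float:
    """Hypothetical one-off deposit levy, converted into bank shares:
    6.75% on accounts up to 100,000 EUR, 9.95% above that threshold
    (rates as quoted in the post; real proposals varied)."""
    rate = 0.0675 if balance_eur <= 100_000 else 0.0995
    return balance_eur * rate

# A 50,000 EUR account would lose 3,375 EUR; a 200,000 EUR account, 19,900 EUR.
print(deposit_levy(50_000))
print(deposit_levy(200_000))
```

Even under the milder rate, a depositor with modest savings takes a visible haircut overnight, which is why the run-on-the-bank logic in the next paragraph follows so directly.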
Even if you are told “never mind, we’re re-evaluating that policy, back to the drawing board!”, what’s the rational thing to do? Most likely, you as a depositor will lose trust in the banking system and pull out as much as you can. If you are in an adjoining country with a shared currency, the mattress, precious metals, and alternate currencies are looking like more attractive places to keep your money. This is the scenario currently unfolding for residents in Cyprus and those who were parking their money in what seemed like a safe haven.
Less than a year ago, Greece was in a similar situation and is still dealing with the consequences. Now, it’s Cyprus’s turn. These supposedly one-off, “unique” situations involving untested interventions are becoming regularities as banking and governance systems around the world become more tightly coupled. Although Charles “Chick” Perrow‘s Normal Accidents: Living with High Risk Technologies discusses nuclear plants and chemical plants, his account of how reactions in tightly coupled systems, once started, are hard to stop (much less understand) is helpful here. Add to this the erosion of a shared understanding of, and belief in, institutions for a potent mix – that is, the delegitimization of trust in banking and governance that we may be seeing in the EU.
For those of us who have been living under the various rocks of committee work/teaching/research/other commitments, a little background reading: Dealbreaker’s take, with plenty of links to others’ analysis, Reuters, and Zero Hedge’s mordant posts.
A guest post by John Padgett and Woody Powell about their new book The Emergence of Organizations and Markets:
Innovation in the sense of product design is a popular research topic today, because there is a lot of money in that. Innovation, however, in the deeper sense of new actors—new types of people, new organizational forms—is not even much on the research radar screen of contemporary social scientists, even though “speciation” (to use the biologists’ term for this) lies at the heart of historical change over the longue durée, both in biological evolution and in human history. Social science—meaning mostly economics, political science and sociology—is very good at understanding selection, both at the micro level of individual choice and at the macro level of institutional regulation and lock-in. But novelty, especially of actors but also of alternatives, has first to enter from off the stage of our collective imaginary for our existing theories to be able to go to work. Our analytical shears for trimming are sharp, but the life forces that push up novelty to be trimmed tend to escape our attention, much less our understanding. If this book accomplishes anything, we at least hope to put the research topic of speciation—the emergence of new organizational forms and people—on our collective agenda.
It is my pleasure to announce our February guest bloggers: Woody Powell and John Padgett. Professor Powell is Professor of Education and (by courtesy) Sociology, Organizational Behavior, Management Science and Engineering, Communication, and Public Policy at Stanford University. Professor Padgett is Professor of Political Science at the University of Chicago.
Woody and John are both leading figures in the study of organizations and networks. Professor Powell is co-author, with Paul DiMaggio, of the groundbreaking “iron cage” article and went on to publish a series of highly influential papers in social network analysis. John Padgett is one of political science’s leading formal modellers, having written seminal papers on budgeting & garbage can processes, the courts, and state formation. His best-known work is likely the “Medici paper,” which used network analysis to describe the cultivation of political power in early modern Italy and introduced the idea of “robust action” into modern social theory.
They will be discussing their new book: The Emergence of Organizations and Markets. Here’s a summary:
The social sciences are rich with ideas about how choice occurs among alternatives, but have little to say about the invention of new alternatives in the first place. These authors directly address the question of emergence, both of what we choose and who we are. With the use of sophisticated deductive models building on the concept of autocatalysis from biochemistry and rich historical case studies spanning seven centuries, Padgett and Powell develop a theory of the co-evolution of social networks. Novelty in new persons and new organizational forms emerges from spillovers across multiple, intertwined networks. To be sure, actors make relations; but the mantra of this book is that in the long run relations make actors. Through case studies of early capitalism and state formation, communist economic reforms and transition, and technologically advanced capitalism and science, the authors analyze speciation in the context of organizational novelty. Drawing on ideas from both the physical sciences and the social sciences, and incorporating novel computational, historical, and network analyses, this book offers a genuinely new approach to the question of emergence.
This week and next week, I’ll post some thoughts that John and Woody have shared with me. This is *required* reading for sociologists, management scholars, political scientists, and economists. And yes, there will be a quiz!
Last semester, an undergraduate student wrote an essay about the anti-Vietnam War movement. She asked why the movement itself was relatively unpopular even though the public was becoming disillusioned with the war. In other words, the antiwar movement won on policy, but lost on politics. Why?
Her hypothesis was that the antiwar movement became strongly associated with the counterculture. This is an important point. In my research on movements – mainly movements of the left – I have found that activists tend to have, at best, a very tense relationship with mainstream American culture. They think that conventional politics and bourgeois culture are to be mistrusted.
This leads to an issue that I’ve been thinking about – is left politics inherently countercultural? Maybe not. The Civil Rights movement was obsessed with adherence to the social norms of the day. Participants were urged to be polite, look proper, and learn how to work within and against mainstream institutions. Nowadays, most left movements seem to have a hostile relationship to mainstream culture. Occupy Wall Street was a grungy DIY movement. The antiwar movement of the 2000s followed in the steps of the anti-globalization movement in working outside conventional channels. For anyone interested in social change, it is worth thinking about this link and whether it is a necessary development, or merely an affectation of a current generation of activists.
This is the last installment of this Fall’s book forum on Andreas Glaeser’s Political Epistemics. I usually reserve the last installment of the book forum for criticisms and conjectures. This will be no exception. I’ll focus on the limits of the sociology of understanding as it pertains to explaining revolutions.
As you may remember from earlier parts of the book forum, the theoretical mission of Political Epistemics is to develop a “sociology of understanding,” which is a thick description of how people make sense of their social worlds. Glaeser used interview data and archival materials to explain how people developed their identity in East Germany and how that identity eroded in the 1980s to such an extent that the Stasi refused to repress anti-socialist movements in 1989.
What I like about the sociology of understanding is that it effectively undermines Western theories of socialist collapse. It wasn’t about folks reading Hayek. It was about East Germans using socialist ideas to formulate a critique of the whole system. The internal criticism was like tugging at a loose thread.
Now, what I take issue with is the incompleteness of this explanation. It doesn’t really tap into other elements of the socialist system and its eventual collapse. For example, you don’t really get a sense of the extreme violence involved in maintaining East European socialism. The system was imposed by political conquest and sustained by periodic mass repression (e.g., Hungary ’56, Prague ’68). East European states treated dissidents harshly, often violently. I’m a bit surprised that Glaeser didn’t delve into the violence that permeated the entire system.
Another issue is that by itself the sociology of understanding doesn’t explain the timing of the collapse. Why in 1989? Didn’t people question socialism before then? They did and there were uprisings as well. Heck, even Emma Goldman observed in the early 1920s that people weren’t thrilled with what was happening in the Soviet Union.
The key issue is that there was a generational turnover in the elite of the Soviet state and they were willing to let social change occur. This created a chain of protests, first in the Baltic region, then Russia itself, and then East Europe. As usual, various factions tried to repress these movements, but the key elite group – the secret police – refused to do so. Thus, Glaeser doesn’t really, in my view, replace conventional views of revolution that link elite support of protest to success. Rather, he provides an account of why the elite might defect from the state. This fits neatly within current theories of revolution.
Finally, let me add that what I’d like to see is additional work by other scholars. I’d like to see the sociology of understanding applied to other groups, not just the elites. How did, say, farmers in the Ukraine construct their experience of communism? What was it about the Baltic states or those souls in 1956 Hungary that made them come out in the street? I’d love to find out.
I signed on to blog on Orgtheory a couple of months ago with the express purpose of writing about “A Theory of Fields” (Oxford University Press, 2012), my new book with Doug McAdam. So here it goes.
Today I want to explain something about the shape of research in organizational theory over the past 35 years in order to situate “A Theory of Fields” in that research. The cornerstones of the “new institutionalism” in organizational theory are three works: the Meyer and Rowan paper (1977), the DiMaggio and Powell paper (1983), and the book edited by Powell and DiMaggio (1991).
I would like to take the provocative position that since about 1990, most scholars have given up on the original formulation of the new institutionalism even though they are ritually fixated on citing these canonical works. It is worth thinking about why they found that formulation limited.
The Meyer/Rowan and DiMaggio/Powell position on organizations is that actors in organizations do not have interests and that their actions are “programmed” by scripts. Moreover, actors are unable to figure out what to do, so they either follow the leader (i.e. mimic those they perceive as successful), act according to norms often propagated by professionals, or else find themselves coerced by state authorities. The Meyer/Rowan and DiMaggio/Powell world was not only devoid of actors; it was also devoid of change. Once such an order got into place, it became taken for granted and difficult to dislodge. “People” in this world told themselves stories, used myth and ceremony, and they decoupled their stories from what they were doing. This meant that the consequences of their actions were not important. DiMaggio recognized this problem in 1988 when he suggested that in order to explain change we needed another theory, one that involved actors, interests, power, and what he called “institutional entrepreneurs”.
The core of organizational studies since the early 1990s has been to reintroduce interests, actors, power and the problem of change into the center of organizational studies. Indeed, the field of entrepreneurship in management studies is probably, at the moment, the hottest part of organizational theory. If one looks at these papers, one still sees ritual citing of DiMaggio/Powell and Meyer/Rowan. But the core ideas of these papers could not be farther from those works. The focus of entrepreneurship studies is on how new fields are like social movements. They come into existence during crises. They invoke the concept of institutional entrepreneurs who build the space and create new cultural frames, interests and identities. In doing so, the entrepreneurs build political coalitions to dominate the new order. Indeed, the gist of the past 15 years of organizational research is entirely antithetical to the “old” new institutionalism.
I submit to you that the time is now right to reject the “old” new institutionalism entirely, free our minds, and produce a “new” new institutionalism.
Teppo is too humble to let us know that he’s the guest editor of a new special issue of Managerial and Decision Economics. The issue’s theme is the “emergent nature of organization, market, and wisdom of crowds.” The special issue has an impressive lineup of authors, including Nicolai Foss, Robb Willer, Bruno Frey, Peter Leeson, and Scott Page. Teppo’s introduction, as you might expect, is provocative, challenging learning theory and behavioral theories of the firm. Here’s a little teaser:
My basic thesis is that capabilities develop from within—they are endogenous and internal. In order to develop a capability, it must logically be there in latent or dormant form. Capabilities grow endogenously from latent possibility. In some respects, capabilities should be thought about as organs rather than as behavioral and environmental inputs. Experience, external inputs and environments are, in important respects, internal to organisms, individuals and organizations. Although environmental inputs play a triggering and enabling role in the development of capability, the environment is not the cause of capability. Furthermore, the latency of capabilities places a constraint on the set of possible capabilities that are realizable. But these constraints are scarcely deterministic; rather, they also provide the means and foundation for generating novelty and heterogeneity (285).
Teppo offers a real challenge to the typical “blank slate” approaches that dominate organizational theory and sociology. Social construction has limits if you assume that some capabilities are simply latent and waiting to be triggered into action. This reminds me of what my graduate school contemporary theory instructor, Al Bergesen, used to say about the deficiency of most sociological theory. (In fact, he repeated the whole bit to me again when I ran into him in Denver’s airport Monday evening.) Sociology, he’d say, has never fully come to grips with the cognitive revolution of psychology or linguistics. We still assume that individuals are completely shaped by their social world and ignore cognitive structure and the limits this imposes on how we communicate and who we can become. Teppo and Al would have a lot to talk about.
Ever since the NCAA announced they would sanction Penn State for its cover-up of the Sandusky sex abuse scandal, I’ve been thinking about writing a post related to institutional jurisdictions, authority, and reputation. I completely understand the NCAA’s response to the scandal, especially in light of the findings of the Freeh report, and I think this was a very predictable response. Was the punishment harsh? Yes. Was it excessively harsh as a condemnation of the crimes of Sandusky? No. Was the NCAA operating within its jurisdiction and exercising proper use of authority by making these sanctions? That’s debatable (and I’m sure it will be in the months to come).
My colleague Gary Alan Fine, who has thought a lot about scandal and collective memory (e.g., Fine 1997), has offered his thoughts on the sanctions in a New York Times op-ed. Gary questions “history clerks” who attempt to rewrite history as a response to a contemporary event/scandal.
The more significant question is whether rewriting history is the proper answer. And while this is not the first time that game outcomes have been vacated, changing 14 seasons of football history is a unique and disquieting response. We learn bad things about people all the time, but should we change our history? Should we, like Orwell’s totalitarian Oceania, have a Ministry of Truth that has the authority to scrub the past? Should our newspapers have to change their back files? And how far should we go?
This is a tricky issue. Everyone can agree that what happened at Penn State was deplorable. However, I think it’s perfectly reasonable to question whether the NCAA made these moves more as an effort to protect its own reputation and to safeguard the purity of college football, rather than as a reasoned response to the institutional crimes committed by Penn State’s decision-making authorities. This scandal isn’t disappearing anytime soon, and so I expect we’ll hear a lot more about this in the months and years to come.
In this and a series of forthcoming posts, I will attempt to outline an argument showing that most of the time claims to have derived a substantively important conclusion from constructionist premises are incoherent. By a substantively important conclusion I refer to strong arguments for the “social construction of X” where X is some sort of category or natural kind that is usually thought to have general ontological validity in the larger culture (e.g., gender, race, mental illness, etc.).
In a nutshell, I will argue that the reason these sorts of arguments do not really work is that they require us to draw on a theory of meaning, language and reference that is itself inconsistent with constructionism. To put it simply: substantively important conclusions derived from constructionist premises require a theory of reference that implies at least the potential for realism about natural kinds and a strong coupling between linguistic descriptions and the real properties of the entities to which those descriptions apply, but constructionism is premised on the a priori denial of realism about natural kinds and of such a strong coupling between language and the world. Thus, most strong claims about something being “socially constructed” cannot be strong claims at all. This argument applies to all forms of social constructionism, whether of the phenomenological, semiotic, or interactionist varieties.
Here I will first do two things: 1) give a more “technical” definition of what I mean by a “substantively important conclusion” within a constructionist mode of argumentation (noting that my argument does not apply to “softer” versions of constructionism) and 2) nail down the point that constructionism (and any other set of premises designed to draw substantively important conclusions about the natural and social worlds) depends on an “argument from reference” in order to work. Finally, I will lay out the argument that 3) because of this dependence, strong constructionist conclusions are usually not warranted (they follow from an incoherent argument).
The shock value in constructionism.- In a constructionist argument, a substantively important conclusion is one that has “shock value.” By shock value, I mean that the argument results in the conclusion that something that we thought was “real” in an unproblematic sense is shown to be either a) a fictitious entity that has never been or could never be real or b) a historically contingent entity endowed with a weaker form of existence (e.g. a collectively sustained fiction or even delusion). This is “shocking” in the sense that the constructionist thesis upsets the “folk ontology” heretofore taken for granted by lay and professional audiences alike.
A useful analogue (because it makes the technical argumentative steps clear) comes from the Philosophy of Mind. There, the most “shocking” argument ever put forth is known as “eliminativism” in relation to the so-called “propositional attitudes” (Stich 1983; Churchland 1981). Note that this argument is actually espoused by people who consider themselves to be radical materialists almost blindly committed to a traditional scientific epistemology and an anti-dualist ontology. Thus, I am not claiming a substantive commonality between constructionists and eliminativists. All that I want to do here is to point to some formal commonalities in their mode of argumentation in order to set up the subsequent point of common reliance on an argument from reference.
According to the eliminativist thesis, the denizens of the mental zoo that play a role in our ability to account for our own and other people’s behavior (such as beliefs, desires, wants, etc.) do not actually exist. The reason is that the theoretical system in which they play a role (so-called “folk” or “belief-desire psychology”) is actually an empirically false theory, one that relies on the postulation of theoretical entities (mental entities) that have no scientifically defensible ontological status.
According to belief-desire psychology, persons engage in action in order to satisfy desires. Beliefs play a causal role in behavior by providing the person with subjective descriptions of how means connect to desirable ends. Using belief-desire psychology, we can explain why person A engages in behavior B, by postulating that “Person A believes that by doing B, she will get C, and she desires/wants C.” A belief is a proposition about the world endowed with a truth value and a desire is a proposition that describes the sorts of states of affairs that the person would like to bring about. Both are conceived to be mental entities endowed with “intentional” content (they are about something). Their intentional content dictates how they can relate to other entities in a systematic way (e.g. because some propositions logically imply others). We can then “predict” (or retrodict) the behavior of persons by linking desires to beliefs in a way that preserves the rationality of persons.
Accordingly, if I see somebody rummaging through the contents of a refrigerator, I can surmise that this person is engaging in this sort of behavior because she believes that she will find something to eat in there, and she wants something to eat. Relatedly, when persons are questioned as to why they did something, they usually give a “reason” why they did what they did. This reason takes the form of a “motive report.” If I question somebody about why they are rummaging through a refrigerator, they are likely to say “because I’m hungry.”
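The belief-desire schema above is essentially an inference rule, and it can be rendered as a toy program. The sketch below is purely illustrative (all names and data structures are my own invention, not part of any philosophical formalism): it encodes beliefs as (action, expected outcome) pairs and returns a folk-psychological “motive report” when an agent’s behavior matches a believed means to a desired end.

```python
# Toy rendering of "A does B because A believes B yields C, and A desires C."
# Agent names, beliefs, and desires here are all hypothetical examples.
def explain(agent, behavior, beliefs, desires):
    """Return a motive report if belief-desire psychology can supply one."""
    for action, outcome in beliefs.get(agent, []):
        if action == behavior and outcome in desires.get(agent, set()):
            return (f"{agent} does '{behavior}' because she believes it "
                    f"yields '{outcome}', which she desires.")
    return None  # behavior is inexplicable within this folk theory

beliefs = {"Ann": [("rummage through fridge", "food")]}
desires = {"Ann": {"food"}}
print(explain("Ann", "rummage through fridge", beliefs, desires))
```

The eliminativist claim, in these terms, is that nothing in the head actually corresponds to the `beliefs` and `desires` tables: the schema may organize our everyday talk, but its posited entities have no scientific standing.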
According to eliminativists, the main causal factors in belief-desire psychology have no ontological status. Thus, neither propositional beliefs of the sort of “I think that p” where p is a proposition of the sort “there is food in the refrigerator” nor desires of the sort “I want q” have any ontological status. As such, belief-desire psychology stands to be replaced by a mature neuropsychology, one in which “folk solids” such as desires and beliefs (to use Andy Clark‘s terms) will play no role in explanations and accounts of human behavior. These notions, previously thought to be natural kinds endowed with unquestionable reality, are eliminated from our ontological storehouse and consigned to the dustbin of fictional entities discarded by modern science (such as Phlogiston, Caloric, The Ether, The Four Humors, etc.).
Constructionism and eliminativism.- I argue that most substantively important conclusions within the constructionist paradigm are actually modeled after “eliminativist” arguments in the Philosophy of Mind.
All of the pieces are there. First, a constructionist argument usually takes some (folk or professional) system of “theory” as its target. This holds regardless of whether that system of theory is currently in existence or belongs to a previous historical era. This is usually a folk (or sometimes professional) “theory of X” (e.g., the “folk theory of race” or the “folk theory of gender”). Second, within this system the constructionist picks one or more central theoretical categories or concepts (X), which, within the system, are endowed with a non-problematic ontological status as real (e.g. gender or racial “essence”). Third, the constructionist shows the folk theory of X to be false from the point of view of a more sophisticated theory (modern population genetics in the case of the old anthropological concept of “race”). Thus X (e.g. race), as conceptualized in the folk theory, does not really exist, even though it forms a key part of certain contemporary folk theories of race. The title of the famous PBS documentary, “Race: The Power of an Illusion,” conveys that point well.
The constructionist may also argue for the indirect falsity of the current theory of X, by simply using the historical or anthropological record to show that there are cultures/historical periods in which X either was not presumed to exist in the way that it exists today or was part of a different theoretical system which radically changed its status (the properties that define membership in the concept were radically different). Here the constructionist will agree that X “exists” in the current setting, but it does not have the sort of existence attributed to it in the folk discourse (transhistorical and transcultural); instead it has a weaker form of existence: social, as in “sustained by a historically and culturally contingent social arrangement which could theoretically be subject to radical change.” Foucault’s famous argument for the radically different status of the category of “man” within the so-called “classical episteme” is an example of that sort of claim. The category of man in the modern era has a meaning that is radically incommensurate to the one that it had in the classical episteme. The implication follows that the category of “man” therefore does not refer, and that we can conceive of a possible future in which it plays no actual role.
The common element here is that a category that we take for granted (within the descriptions afforded by some lay or professional theoretical system) to be ontologically “real” (race, gender, the category of “man”, etc.) is shown instead to “actually” have a fictitious status, because there is nothing in the world that meets that description. More implicitly, insofar as a concept has undergone radical changes in overall meaning (with meaning determined by its place within a network of other concepts in the form of a folk or professional theory), there cannot be a preservation of reference across the incommensurate meanings. Hence the concept cannot really be picking out an ontologically coherent entity in the world. I refer to this as the “strong constructionist effect.” The basic idea, as I have already implied, is that in order for the effect to be successful, we must already be working from within some theory of reference; otherwise the claim that “there is nothing in the world that meets that description” is either vacuous or incoherent.
Constructivism and arguments from reference.- What are “arguments from reference”? Arguments from reference are those that implicitly or explicitly require a theory of reference for their conclusions to follow (or even make sense), as Ron Mallon (2007) has recently pointed out. When this is the case, it can be said that the substantively important conclusion is dependent on the (logically autonomous) theory of reference. It is striking how little time most social scientists spend thinking about reference. They should, because even though it is seldom explicit, we all require some theory about how conceptualizations link up (or fail to!) with events in the world in order to make substantive statements about the nature of that world. I argue that in order to produce the strong constructionist effect, and thus derive substantively important conclusions, the argument from social construction requires a particular theory of reference.
One would think that when it comes to theorizing about how conceptual, theoretical or folk terms “refer” to the world there would be various competing theories. Instead, twentieth-century analytic philosophy was long dominated by a single account of how concepts refer: Frege’s suggestion that “intension” (the meaning of a term) determines “extension” (the object in the world that the term picks out). Lewis (1971, 1972) formalized this suggestion for the case of so-called theoretical entities in scientific theories. According to Lewis, terms in scientific theories purport to describe objects in the world bearing certain properties or standing in certain relations with other objects; this is the description of the term. On this account, the terms of Folk Psychology are theoretical terms that gain their meaning from their relations to other entities and observational statements within a system of theory. Eliminativists built their argument on this foundation, suggesting that there is nothing in the (scientifically acceptable) world that meets the description for a propositional attitude (a mental entity endowed with “intentional” content); ergo, belief-desire psychology is false, its terms do not refer, and we need a better theory of the mental.
In short, from the viewpoint of a descriptivist theory of reference, a given term or concept defined within a given theoretical system refers if and only if there is an object in the world that bears the properties or stands in the relations specified in the description. According to this theory, terms refer to real-world entities when there exists an object that satisfies the necessary and sufficient conditions of membership in the category defined by the term (which in the limiting case may be an individual). Descriptions that have no counterpart in the real world are descriptions of fictional entities and thus fail to refer (and the validity of the theoretical systems of which they are a part is therefore impugned). When competent speakers use the terms of any theory (scientific or folk) they have a description in mind, which specifies the set of properties that an object would have to have for that term to be said to successfully refer to it.
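The descriptivist criterion just stated can be rendered as a toy formalization. This is only a sketch of the logical shape of the claim, not anything in the source texts; the “world,” the objects, and the property sets are invented for illustration.

```python
# Toy model of descriptivist reference: a term's description is a set of
# predicates, and the term refers iff some object in the world satisfies
# every predicate in that set (necessary and sufficient conditions).

world = [
    {"name": "whale", "properties": {"mammal", "aquatic"}},
    {"name": "trout", "properties": {"fish", "aquatic"}},
]

def refers(description, world):
    """Descriptivism: reference succeeds iff some object bears
    all of the properties specified in the term's description."""
    return any(description <= obj["properties"] for obj in world)

# A description some object satisfies: the term refers.
print(refers({"mammal", "aquatic"}, world))   # True
# A description nothing satisfies: the term names a fiction, fails to refer.
print(refers({"fish", "mammal"}, world))      # False
```

The second call is the schematic form of the strong constructionist effect: show that no object satisfies the folk description, conclude that the category is a fiction.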
The basic argument that I want to propose here is that “shock value” constructionism depends on a descriptivist theory of reference. This should already be obvious. The standard constructionist argument begins with a painstaking reconstruction of a given set of folk or professional descriptions. The analyst then asks the rhetorical question: is there anything in the world that actually satisfies this description? If the answer is no, then the conclusion that the term fails to refer (and names a fictional and not a real entity) readily follows. The standard criteria for satisfaction of these conditions usually boil down to some sort of semantic analysis. For instance, in Orientalism, Edward Said painstakingly reconstructed a Western “image” (read: description) of the Middle East as a kind of place and of the Arab “Other” as a (natural?) kind of person. Said pointed out that this description of Arab peoples (menacing, untrustworthy, exotic, emotional, eroticized, etc.) was not only logically incoherent; it was simply false: there had never been a group of people who met this description. It was a fabrication espoused by a misleading theoretical system, Orientalism. Thus, Orientalism as a culturally influential theory of the nature of the Arab “Orient” needed to be transcended. The main theoretical entity implied by this theory, the Oriental “other” endowed with a bizarre set of attributes and properties, was thereby eliminated from our ontological storehouse.
Houston, we have a problem.- It would be easy to show that essentially all arguments that produce the “strong constructionist effect” follow a similar intellectual procedure. There are at least two problems with this (largely unacknowledged) dependence of social constructionism on a descriptivist theory of reference. First, constructionism denies the conditions that make descriptivism an adequate theory of reference, which are, at a minimum, the validity of a truth-conditional semantics and the capacity of words to unambiguously (e.g. literally) refer to objects and events in the world. This is not a problem for Gottlob Frege and David Lewis, or for most descriptivist theorists in analytic philosophy, most of whom subscribe to some version of propositional realism (propositions have truth values that can be unproblematically redeemed by checking to see if they “correspond” to the world). However, it is a problem for constructionists, because they cannot accept such a strong version of realism.
Thus, if the very theory of the relationship between language and the world that is espoused by social constructionism (skepticism as to the applicability of a truth-conditional semantics and unambiguous reference) is true, then descriptivism has to be false. This means that social constructionism is an inherently contradictory strategy; to produce substantively meaningful conclusions (the strong constructionist effect) it has to rely on a theory of the relationship between meanings and the world that is denied by that very approach. Second, even if this logical argument could be sidestepped, constructionism would still be in trouble. The reason is that there is a competing (and equally appealing on purely argumentative grounds) theory of reference in modern philosophy: the causal-historical theory of reference most influentially outlined by Saul Kripke and Hilary Putnam. The basic issue is not simply that this is a competing account of reference; the problem is that this account actually denies a key link in the constructionist argument: that in order to refer, there has to be a match between the description of the term and the properties of the object that the term putatively refers to.
Instead, causal-historical theories of reference allow for two possibilities that are seldom taken into account by constructionists: 1) that persons can refer to things in the world even though their mental description of the term that they are using does not at all match the properties of those things, and 2) that the description of a term can undergo radical historical change while the term continues to refer to the same entities or cluster of entities. The first possibility undercuts the capacity of the constructionist to “correct the folk,” because reference is decoupled from the descriptive validity of the terms that are used to refer. The second possibility undercuts the argument for social construction based on the historical and cultural variability of descriptions. It opens up the possibility that there is “rigid designation” of the same set of social or natural realities across cultures in spite of radical differences in the cultural frameworks from within which these referential relations are established.
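The contrast with descriptivism can be sketched in the same toy style. Again, this is only an illustration of the logical structure, not a claim from the literature; the “gold” example is the stock case from Kripke and Putnam, but the class and its fields are invented here.

```python
# Toy model of causal-historical reference: a term is attached to its
# referent by an initial "baptism" and a chain of use, not by the
# accuracy of the descriptions speakers currently associate with it.

class Term:
    def __init__(self, name, referent):
        self.name = name
        self.referent = referent      # fixed once, at baptism
        self.description = set()      # speakers' current beliefs about it

    def refers_to(self):
        # Reference follows the causal chain back to the baptism,
        # regardless of what the current description says.
        return self.referent

# Baptism: "gold" is attached to the actual substance.
gold = Term("gold", referent="element-79")
gold.description = {"yellow", "incorruptible", "noble"}   # old folk theory

# The description changes radically (possibility 2 in the text)...
gold.description = {"atomic number 79", "metallic"}

# ...but the term still rigidly designates the same thing.
print(gold.refers_to())   # "element-79"
```

The point of the sketch is that `refers_to` never consults `description`: reference survives even a wholesale replacement of the associated folk theory, which is exactly what blocks the inference from “the description changed radically” to “the term fails to refer.”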
A reasonable objection is simply to point out that we do not have sufficiently strong grounds for picking descriptivism over causal-historical theories of reference, as equally respectable arguments have been put forth in defense of both. This is in fact the position taken by most philosophers, who instead go on to worry about whether people are cherry-picking one of the two theories of reference to support their preferred argumentative strategy. However, I believe that most constructionists in social science cannot be content with this non-committal solution. Instead, as in other areas of philosophy (e.g. epistemology, ethics, mind), there is a way to “break the tie” between competing philosophical theories: naturalize these types of inquiry by asking which theories are consistent with the relevant sciences. Here we have good news and bad news for constructionists.
Research in cognitive science, cognitive semantics and cognitive linguistics points to the inadequacy of descriptivist theories of reference from a purely naturalistic standpoint. This should be good news for constructionists, because the upshot is that truth-conditional semantics roundly fails as an account of how persons generate meaning (Lakoff 1987). The irony is that these findings redeem the original skepticism of constructionism vis-à-vis any form of truth-conditional semantics and propositional realism, but in so doing they also undercut the ability of constructionists to engage in the sort of argument that results in “shocking” or substantively strong claims for the social construction of X, because the rhetorical force of these arguments depends on descriptivism, and descriptivism implies propositional realism and “objectivism” (the view that truth is the literal correspondence of statements and reality). The resulting counter-intuitive conclusion is that it is precisely because linguistic meaning and natural categories meet the constructionist specifications that strong constructionist arguments are actually impossible. In fact, it is precisely because language and semantics work the way that constructionists (implicitly) presuppose they do that the norm may not be the radical transformation of reference relations across historical and cultural change (as implied by Foucauldian analysts), but rather rigid designation of the same (social or natural) “essences” and relations even in the wake of superficial shifts in the accepted cultural description of those entities.