orgtheory.net


critique of a recent ajs genetics paper: levi-martin v. guo, li, wang, cai and duncan

John Levi-Martin has written a comment on a recent paper by Guo, Li, Wang, Cai, and Duncan claiming that the social contagion of binge drinking is concentrated among those of medium genetic propensity. Levi-Martin argues that GLWCD have simply misread their data:

Guo, Li, Wang, Cai and Duncan (2015) recently claimed to have provided evidence for a general theory of gene-environment interaction. The theory holds that those who are labelled as having high or low genetic propensity to alcohol use will be unresponsive to environmental factors that predict binge-drinking among those of moderate propensity. They actually demonstrate evidence against their theory, but do not seem to have understood this.
The main claim is that GLWCD are testing against nulls rather than properly estimating a U-shaped effect:
This is consequential because of the way they choose to examine their data. Although the verbal description of the swing theory here refers to the comparison of magnitudes (“more likely”), the methods used by GLWCD involve successive tests of the null hypothesis across three subsets formed by partitioning the sample by level of what is termed genetic propensity. If we denote these three subsets L, M and H, standing for low, medium and high propensity, then, for the kth predictor, they estimate three slopes, b_Lk, b_Mk, and b_Hk. Because the swing theory does not require that any particular predictor have an effect, but only that if it does, it does not in the extreme propensity tiers, this theory holds that for any k, b_Lk ≈ b_Hk ≈ 0.
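To see the statistical point concretely: failing to reject b_Lk = 0 inside one subset is not a test of whether b_Lk differs from b_Mk. A direct test estimates the differences between slopes, e.g., with propensity-by-predictor interactions. Here is a minimal sketch on simulated data (the variables and numbers are mine, not GLWCD’s):

```python
# Sketch: separate within-tier null tests vs. a direct test of whether
# slopes differ across tiers. Simulated data only, not GLWCD's sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000
tier = rng.choice(["L", "M", "H"], size=n)           # genetic propensity tier
x = rng.normal(size=n)                               # environmental predictor
true_slope = {"L": 0.05, "M": 0.30, "H": 0.05}       # swing-theory pattern
y = np.array([true_slope[t] for t in tier]) * x + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x,
                   "tier": pd.Categorical(tier, categories=["M", "L", "H"])})

# The x:tier coefficients estimate b_Lk - b_Mk and b_Hk - b_Mk directly,
# which is the comparison of magnitudes the swing theory actually requires;
# a non-significant b_Lk in a subset regression is not such a comparison.
fit = smf.ols("y ~ x * tier", data=df).fit()
print(fit.summary().tables[1])
```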
Publishing note: The comment is on SocArXiv for all to read. If the criticism holds water, it’s a shame that it is not in a journal, preferably the AJS. If journals simply aren’t interested in error correction, then they simply aren’t into science.
50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

November 21, 2016 at 3:29 am

dear sociology: go learn some behavioral genetics

An article in Nature Genetics presents a new meta-analysis of twin studies:

Despite a century of research on complex traits in humans, the relative importance and specific nature of the influences of genes and environment on human traits remain controversial. We report a meta-analysis of twin correlations and reported variance components for 17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits. Estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation. This study provides the most comprehensive analysis of the causes of individual differences in human traits thus far and will guide future gene-mapping efforts. All the results can be visualized using the MaTCH webtool.
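For readers who want the mechanics: estimates like these come from comparing monozygotic and dizygotic twin correlations. A minimal sketch of the classical Falconer/ACE decomposition, with illustrative correlations rather than figures from the paper:

```python
# Falconer's classical ACE decomposition, the logic behind twin-study
# variance components. The correlations below are illustrative only.
def ace(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance ("heritability")
    c2 = 2 * r_dz - r_mz     # C: shared (family) environment
    e2 = 1 - r_mz            # E: non-shared environment + measurement error
    return {"A": a2, "C": c2, "E": e2}

# The "twin resemblance solely due to additive genetics" pattern implies
# r_mz = 2 * r_dz, which forces the shared-environment term to zero:
print(ace(r_mz=0.50, r_dz=0.25))   # {'A': 0.5, 'C': 0.0, 'E': 0.5}
```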

Social constructionists, give it your best shot.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

May 29, 2015 at 12:01 am

Posted in uncategorized

stuff that doesn’t replicate

Here’s the list (so far):

Some people might want to hand-wave the problem away or jump to the conclusion that science is broken. There’s a more intuitive explanation: science is “brittle.” That is, once you get past some basic and important findings, you get to findings that are small in size, require many technical assumptions, or rely on very specific laboratory/data-collection conditions.
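Brittleness has a simple statistical face: small effects are badly underpowered at common sample sizes, so replications will often “fail” even when nothing is wrong. A quick sketch (illustrative numbers, no particular study in mind):

```python
# Power of a two-sample t-test for a small effect (Cohen's d = 0.2) at
# common sample sizes; illustrative numbers only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (20, 50, 200, 800):
    p = analysis.power(effect_size=0.2, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:4d} per group -> power = {p:.2f}")
```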

There should be two responses. First, editors should reject submissions that depend on very “local conditions” or report very small effects, or send them to lower-tier journals. Second, other researchers should feel free to try to replicate research. This is appropriate work for early-career academics who need to learn how the work is done. Of course, people who publish in top journals, or obtain famous results, should expect replication requests.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

October 13, 2015 at 12:01 am

does race exist? part trois

This semester we spent a lot of time discussing Shiao et al’s (2012) article in Sociological Theory claiming that recent genetic research provides a reason to believe that “races” exist. Now we’ll discuss the symposium that was recently published.

There are three responses, by Ann Morning, Daniel HoSang, and Fujimura et al. There is a lot in there, so I will focus on what I think is most important:

  1. Genomic analyses are contaminated by racial categories. I.e., genomic clustering results rely on social categories of race.
  2. Genomic analyses are inconsistent in that different algorithms produce different numbers of human clusters (“clinal groups”).
  3. Genomic analyses do not clearly map onto groups that would be identified as racial or ethnic groups.

Let me take each in turn. On a purely logical level, 1 is probably the weakest point. As I noted in my original post on Shiao et al 2012, the contamination of a scientific research program by social bias does not logically imply that the basic idea is flawed. That requires a different argument. My original example: social definitions of non-humans have plagued scientific research, but that doesn’t imply that there aren’t meaningful distinctions among fish or rocks. It only shows that a particular scientist got it wrong.

Point 2 is a much stronger one. Inconsistent results, or results that are very sensitive to initial parameters set during model estimation, should reduce our confidence. I think the respondents do a good job suggesting that genomic research does not show a clear partition of people based on genomic data.

I think Shiao (and other people on his side) have a plausible response: human populations have no clear boundaries, they intermix a bit, and we should expect fuzzy boundaries. To support this point, you would examine the distribution of the number of clusters obtained using different data and different methods. If we get a very “flat” posterior (i.e., any number of groups is equally plausible), the critics win. If the distribution has concentrated mass at some number of clusters greater than one, then Shiao et al win. In other words, meta-analysis is the way we settle this sort of claim. Neither side has done this analysis in the Sociological Theory exchange.
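That meta-analytic test is easy to mock up. A minimal sketch on synthetic data (the blobs stand in for genomic markers; everything here is made up): resample the data, let BIC pick the number of mixture components each time, and look at the distribution of the picks.

```python
# Tally the preferred number of clusters across bootstrap resamples, a toy
# version of the "distribution over cluster counts" test proposed above.
from collections import Counter
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=0)
rng = np.random.default_rng(0)

picks = []
for _ in range(30):                                  # bootstrap resamples
    Xb = X[rng.integers(0, len(X), len(X))]
    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(Xb).bic(Xb)
           for k in range(1, 7)}
    picks.append(min(bic, key=bic.get))

# Concentrated mass at some k > 1 favors Shiao et al; a flat spread
# favors the critics.
print(Counter(picks))
```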

Point 3 is unpersuasive as presented. As I noted in an earlier post, it is logically possible that there is genuine clustering of people that doesn’t match our notions of what counts as a race. For example, maybe I am not Latino but I am Basque-Dutch-Colombian-Sub-Tico. So race exists, but not in the way we understand it. So the mismatch between genomic data and folk notions of race may be beside the point.

Shiao et al’s response hits some common points and focuses on others (e.g., their review of the genomic literature is accurate in contrast to what the critics claim). Shiao et al’s response to point 2 is a bit different in that it goes into detail about what certain algorithms accomplish.

Overall, I am struck by what was accepted by most folks. There seem to be genuine biological differences between people, behavioral genetics is not irrelevant to sociology, and there seem to be meaningful dimensions of variation among people that are tied to geography. This last point is also noted by Shiao et al. Shiao then makes a strong point: if you believe that there is genomic variation by geography, why immediately jump to the strongest constructionist argument? Doesn’t make sense.

A few months ago, I noted that I was a racial agnostic because I don’t possess the technical knowledge to judge rival claims and I don’t immediately assume the constructionist view is true. After reading this exchange, I am moving toward the view that there is indeed systematic variation in people, but “Race” might be a terribly bad way to think about it.

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power

Written by fabiorojas

December 22, 2014 at 7:22 am

Posted in fabio, sociology

race and genomics: comments on shiao et al.

Shiao et al in Sociological Theory, the symposium, Scatterplot’s discussion, Andrew Perrin’s comments, last week’s discussion.

Last week, I argued that many sociologists make a strong argument. Not only are social classifications of race a convention, but there is no meaningful clustering of people that can be derived from physical or biological traits. To make this claim, I suggested that one would need to have a discussion of what meaningful traits would include, get a huge sample of people, and then see if there are indeed clusters. The purpose of Shiao et al (2012) is to claim that when someone conducts such an exercise, there is some clustering.

Before I offer my own view of the evidence that Shiao et al offer, we need to set some ground rules. What are the logical possible outcomes of such an exercise?

  1. The null hypothesis: your clustering methods yield no clusters (e.g., there are no detectable sub-groups of people).
  2. The weak hypothesis: clustering algorithms yield ambiguous results. It’s like getting a small correlation with p = .07 in a regression analysis. This is important because it should shift your prior moderately.
  3. The “conventional” strong hypothesis: unambiguous groups that correspond to social classifications of people. E.g., there really is a “White” group of people corresponding to people from Europe.
  4. The “unconventional” strong hypothesis: unambiguous groups that do not correspond to common social classifications of people. For example, there might be an extremely well defined group of people that combines Hawaiians and Albanians.

A few technical points, which are important. First, any such exercise will need to incorporate robustness checks, because clustering methods require the user to set initial parameters. Clustering algorithms do not tell you how many groups there are. Instead, they answer the question of how well the model fits the hypothesis that you have X groups (see the sketch below). Second, sociologists tend to mix up these possible outcomes. They correctly point out that there is a social construction called “race” which is real in its effects and influence on people. But that doesn’t logically entail anything about the presence or absence of human populations that are differentiated due to random variation of inherent physical traits over time. Also, they fail to consider #4. There might be actual differences, but they might not match up to our common beliefs.
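To see what “the user sets the parameters” means in practice, here is a minimal sketch on synthetic data (my toy example, not anything from the genomics literature). The algorithm never outputs a number of groups; it only scores each k the analyst proposes:

```python
# The algorithm scores a hypothesized k; the analyst supplies it.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.5, random_state=1)
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    print(f"k = {k}: silhouette = {silhouette_score(X, labels):.3f}")
# Nothing here "discovers" the number of groups; each line only reports how
# well that particular k fits, which is why robustness checks matter.
```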

So what does Shiao et al offer, and where does it lie in this spectrum of possibilities? Well, the article is not a systematic review of genomic research that searches for clusters of people. Rather, it offers a few important points drawn from anthropology and genomics. First, Shiao et al point out that there is a now undisputed (among academics) human history. Humans originated in East Africa and then spread out (the “Out of Africa” thesis). Second, as people spread out, genomic variation emerged as people mated with people close by. Third, genetic drift implies that geography will predict variations in genes. As you move from X to Y, you will see measurable differences in people. Fourth, these differences are gradual in character.

Shiao et al then switch gears and talk about the clustering of people using genomic data. They tell us that there are statistically detectable and stable group differences and that these do not rigidly determine behavior. They also cite research suggesting these statistical groups correlate with self-described racial groupings. Then the authors discuss a “bounded” approach to social theory, where biology imposes some constraints on the variation of behavior but in a non-deterministic fashion.

I’ll get to the symposium next week, but here’s my response: 1. There is a real tension. At some points, Shiao et al suggest a world of gradual variation, which suggests no distinct racial groups (outcome #1), but then there’s a big focus on clusters. 2. If we do live in a world of gradual, but real, variation in human biology, then the whole clustering approach is misleading. Instead, we might live in a world that’s like a contour map. It’s all connected, there are no groups, but you see some variables increase as you move along the map. 3. If that’s true, we need an outcome #5: “race is not real but biology is real.” 4. I definitely need more detail on the clustering methods and procedures. Some critics have pointed out that the clusters found in research are endogenously produced, which makes me suspect that the underlying science might be hovering around outcomes #1 (it all depends on the algorithm and its parameters) or #2 (there might be some clustering, but it is very poorly defined).

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power

Written by fabiorojas

October 20, 2014 at 12:01 am

before you say race isn’t real, you need a definition of race

This week, I’d like to focus on the sociology of race. We’ll discuss Shiao et al.’s Sociological Theory article The Genomic Challenge to the Social Construction of Race, which is the subject of a symposium. After you read the article and symposium, you might enjoy the Scatterplot discussion.

In this first post, I’d like to discuss the definitional problems associated with the concept “race.” The underlying concept is that people differ in some systematic way that goes beyond learned traits (like language). One aspect of the “person in the street” view of race is that it reflects common ancestry, which produces correlated physical and social traits. When thinking about this approach to race, most sociologists adopt the constructivist view, which says that: (a) the way we group people together reflects our historical moment, not a genuine grouping of people with shared traits, and (b) the only physical differences between people are superficial.

One thing to note about the constructivist approach to race is that the first claim is very easy to defend and the other is very challenging. The classifications used by the “person on the street” are essentially fleeting social conventions. For example, Americans used the “one drop rule” to classify people, but it makes little sense because putting more weight on Black ancestors than White ancestors is arbitrary. Furthermore, ethnic classifications vary by place and even year to year. The ethnic classifications used in social practice flunk the basic tests of reliability and validity that one would want from any measurement of the social world.

The second claim is that there are no meaningful differences between people in general. This claim is much harder to make. This is not an assessment of the truth of the claim; rather, the evidence needed to make it is a tall order. Namely, to make the strong constructivist argument, you would need (a) a definition of which traits matter, (b) a systematic measurement of those traits from a very large sample of people, (c) criteria for clustering people based on the data, and (d) a clear test that all (or even most) reasonable clustering methods show a single group of people. As you can see, you need *a lot* of evidence to make that work.

That is where Shiao et al get into the game. They never dispute the first claim, but suggest that the second claim is indefensible – there is evidence of non-random clustering of people using genomic data. This is very important because it disentangles two important issues – race as social category and race as intra-group similarity. It’s like saying the Average Joe may be mistaken about air, earth, water, and fire, but real scientists can see that there are elements out there and you can do real science with them.

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power 

Written by fabiorojas

October 14, 2014 at 12:04 am

race agnosticism: commentary on ann morning’s research

Earlier this week, Ann Morning of NYU sociology gave a talk at the Center for Research on Race and Ethnicity in Society. Her talk summarized her work on the meaning of race in varying scientific and educational contexts. In other words, rather than study what people think about other races (attitudes), she studies what people think race is.  This is the topic of her book, The Nature of Race.

What she finds is that educated people hold widely varying views of race. Scientists, textbook writers, and college students seem to have completely independent views of what constitutes race. That by itself is a key finding, and raises numerous other questions. Here, I’ll focus on one aspect of the talk. Morning finds that experts do not agree on what race is. And by experts, she means Ph.D. holding faculty in the biological and social sciences that study human variation (biology, sociology, and anthropology). This finding shouldn’t be too surprising given the controversy of the subject.

What is interesting is the epistemic implication. Most educated people, including sociologists, have rather rigid views. Race is *obviously* a social convention, or race is *obviously* a well defined population of people. Morning’s finding suggests a third alternative: race agnosticism. In other words, if experts in human biology, genetics, and cultural studies themselves can’t agree and these disagreements are random (e.g., biologists themselves disagree quite a bit), then maybe other people should just back off and admit they don’t know.

This is not a comfortable position since fights over the nature of human diversity are usually proxies for political fights. Admitting race agnosticism is an admission that you don’t know what you’re talking about. Your entire side in the argument doesn’t know what it’s talking about. However, it should be natural for a committed sociologist. Social groups are messy and ill defined things. Statistical measures of clustering may suggest that the differences among people are clustered and nonrandom, but jumping from that observation to clearly defined groups is very hard in many cases. Even then, it doesn’t yield the racial categories that people use to construct their social worlds based on visual traits, social norms, and learned behaviors. In such a situation, “vulgar” constructionism and essentialism aren’t up to the task. When the world is that complicated and messy, a measure of epistemic humility is in order.

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power

Written by fabiorojas

October 3, 2014 at 12:01 am

Why strong social constructionism does not work I: Arguments from Reference

In this and a series of forthcoming posts, I will attempt to outline an argument showing that most of the time, claims to have derived a substantively important conclusion from constructionist premises are incoherent. By a substantively important conclusion I refer to strong arguments for the “social construction of X,” where X is some sort of category or natural kind that is usually thought to have general ontological validity in the larger culture (e.g. gender, race, mental illness, etc.).

In a nutshell, I will argue that the reason why these sorts of arguments do not really work is that they require us to draw on a theory of meaning, language and reference that is itself inconsistent with constructionism. To put it simply: substantively important conclusions derived from constructionist premises require a theory of reference that implies at least the potential for realism about natural kinds and a strong coupling between linguistic descriptions and the real properties of the entities to which those descriptions apply, but constructionism is premised on the a priori denial of realism about natural kinds and of such a strong coupling between language and the world. Thus, most strong claims about something being “socially constructed” cannot be strong claims at all. This argument applies to all forms of social constructionism, whether of the phenomenological, semiotic, or interactionist varieties.

Here I will first do two things: 1) give a more “technical” definition of what I mean by a “substantively important conclusion” within a constructionist mode of argumentation (noting that my argument does not apply to “softer” versions of constructionism), and 2) nail down the point that constructionism (and any other set of premises designed to draw substantively important conclusions about the natural and social worlds) depends on an “argument from reference” in order to work. Finally, I will lay out the argument that 3) because of this dependence, strong constructionist conclusions are usually not warranted (they follow from an incoherent argument).

The shock value in constructionism.- In a constructionist argument, a substantively important conclusion is one that has “shock value.” By shock value, I mean that the argument results in the conclusion that something we thought was “real” in an unproblematic sense is shown to be either a) a fictitious entity that has never been or could never be real, or b) a historically contingent entity endowed with a weaker form of existence (e.g. a collectively sustained fiction or even delusion). This is “shocking” in the sense that the constructionist thesis upsets the “folk ontology” heretofore taken for granted by lay and professional audiences alike.

A useful analogue (because it makes the technical argumentative steps clear) comes from the Philosophy of Mind. There, the most “shocking” argument ever put forth is known as “eliminativism” in relation to the so-called “propositional attitudes” (Stich 1983; Churchland 1981). Note that this argument is actually espoused by people who consider themselves radical materialists almost blindly committed to a traditional scientific epistemology and an anti-dualist ontology. Thus, I am not claiming a substantive commonality between constructionists and eliminativists. All that I want to do here is point to some formal commonalities in their mode of argumentation, in order to set up the subsequent point of common reliance on an argument from reference.

According to the eliminativist thesis, the denizens of the mental zoo that play a role in our ability to account for our own and other people’s behavior (such as beliefs, desires, wants, etc.) do not actually exist. The reason is that the theoretical system in which they play a role (so-called “folk” or “belief-desire” psychology) is actually an empirically false theory, one that relies on the postulation of theoretical entities (mental entities) that have no scientifically defensible ontological status.

According to belief-desire psychology, persons engage in action in order to satisfy desires. Beliefs play a causal role in behavior by providing the person with subjective descriptions of how means connect to desirable ends. Using belief-desire psychology, we can explain why person A engages in behavior B by postulating that “Person A believes that by doing B, she will get C, and she desires/wants C.” A belief is a proposition about the world endowed with a truth value, and a desire is a proposition that describes the sorts of states of affairs that the person would like to bring about. Both are conceived to be mental entities endowed with “intentional” content (they are about something). Their intentional content dictates how they can relate to other entities in a systematic way (e.g. because some propositions logically imply others). We can then “predict” (or retrodict) the behavior of persons by linking desires to beliefs in a way that preserves the rationality of persons.

Accordingly, if I see somebody rummaging through the contents of a refrigerator, I can surmise that this person is engaging in this sort of behavior because she believes that she will find something to eat in there, and she wants something to eat. Relatedly, when persons are questioned as to why they did something, they usually give a “reason” for why they did what they did. This reason takes the form of a “motive report.” If I question somebody about why they are rummaging through a refrigerator, they are likely to say “because I’m hungry.”

According to eliminativists, the main causal factors in belief-desire psychology have no ontological status. Thus, neither propositional beliefs of the sort “I think that p,” where p is a proposition such as “there is food in the refrigerator,” nor desires of the sort “I want q” have any ontological status. As such, belief-desire psychology stands to be replaced by a mature neuropsychology, one in which “folk solids” such as desires and beliefs (to use Andy Clark‘s terms) will play no role in explanations and accounts of human behavior. These notions, previously thought to be natural kinds endowed with unquestionable reality, are eliminated from our ontological storehouse and consigned to the dustbin of fictional entities discarded by modern science (such as Phlogiston, Caloric, the Ether, the Four Humors, etc.).

Constructionism and eliminativism.- I argue that most substantively important conclusions within the constructionist paradigm are actually modeled after “eliminativist” arguments in the Philosophy of Mind.

All of the pieces are there. First, a constructionist argument usually takes some (folk or professional) system of “theory” as its target, regardless of whether this is a system of theory currently in existence or one from a previous historical era. This is usually a folk (or sometimes professional) “theory of X” (e.g. the “folk theory of race” or the “folk theory of gender”). Second, within this system the constructionist picks one or more central theoretical categories or concepts (X) which, within the system, are endowed with a non-problematic ontological status as real (e.g. gender or racial “essence”). Third, the constructionist shows the folk theory of X to be false from the point of view of a more sophisticated theory (modern population genetics in the case of the old anthropological concept of “race”). Thus X (e.g. race), as conceptualized in the folk theory, does not really exist, even though it forms a key part of certain contemporary folk theories. The title of the famous PBS documentary, “Race: The Power of an Illusion,” conveys that point well.

The constructionist may also argue for the indirect falsity of the current theory of X by simply using the historical or anthropological record to show that there are cultures/historical periods in which X either was not presumed to exist in the way that it exists today, or was part of a different theoretical system that radically changed its status (the properties that define membership in the concept were radically different). Here the constructionist will agree that X “exists” in the current setting, but it does not have the sort of existence attributed to it in the folk discourse (transhistorical and transcultural); instead it has a weaker form of existence: social, as in “sustained by a historically and culturally contingent social arrangement which could theoretically be subject to radical change.” Foucault’s famous argument for the radically different status of the category of “man” within the so-called “classical episteme” is an example of that sort of claim. The category of man in the modern era has a meaning that is radically incommensurate with the one that it had in the classical episteme. The implication that follows is that the category of “man” does not refer, and that we can thus conceive of a possible future in which it plays no actual role.

The common element here is that a category that we take for granted (within the descriptions afforded by some lay or professional theoretical system) to be ontologically “real” (race, gender, the category of “man,” etc.) is shown instead to “actually” have a fictitious status, because there is nothing in the world that meets that description. More implicitly, insofar as a concept has undergone radical changes in overall meaning (with meaning determined by its place within a network of other concepts in the form of a folk or professional theory), then there cannot be a preservation of reference across the incommensurate meanings. Hence the concept cannot really be picking out an ontologically coherent entity in the world. I refer to this as the “strong constructionist effect.” The basic idea, as I have already implied, is that in order for the effect to be successful, we must already be working from within some theory of reference; otherwise the claim that “there is nothing in the world that meets that description” is either vacuous or incoherent.

Constructivism and arguments from reference.- What are “arguments from reference”? Arguments from reference are those that implicitly or explicitly require a theory of reference for their conclusions to follow (or even make sense), as has been recently pointed out by Ron Mallon (2007). When this is the case, it can be said that the substantively important conclusion is dependent on the (logically autonomous) theory of reference. It is striking how little time most social scientists spend thinking about reference. They should, because even though it is seldom explicit, we all require some theory about how conceptualizations link up (or fail to link up!) to events in the world in order to make substantive statements about the nature of that world. I argue that in order to produce the strong constructionist effect, and thus derive substantively important conclusions, the argument from social construction requires a particular theory of reference.

One would think that when it comes to theorizing about how conceptual, theoretical or folk terms “refer” to the world, there would be various competing theories. Instead, twentieth-century analytic philosophy was long dominated by a single account of how concepts refer. This was Frege’s suggestion that “intension” (the meaning of a term) determines “extension” (the object in the world that the term picks out). Lewis (1971, 1972) formalized this formulation for the case of so-called theoretical entities in scientific theories. According to Lewis, terms in scientific theories purport to describe objects in the world bearing certain properties or standing in certain relations with other objects. This is the description of that term. According to Lewis, the terms of Folk Psychology are theoretical entities that gain their meaning from their relations to other entities and observational statements within a system of theory. Eliminativists built their argument on this suggestion, proposing that there is nothing in the (scientifically acceptable) world that meets the description for a propositional attitude (a mental entity endowed with “intentional” content); ergo, belief-desire psychology is false, its terms do not refer, and we need a better theory of the mental.

In short, from the viewpoint of a descriptivist theory of reference, a given term or concept defined within a given theoretical system refers if and only if there is an object in the world that bears the properties or stands in the relations specified in the description. According to this theory, terms refer to real-world entities when there exists an object that satisfies the necessary and sufficient conditions of membership in the category defined by the term (which in the limiting case may be an individual). Descriptions that have no counterpart in the real world are descriptions of fictional entities and thus fail to refer (and the validity of the theoretical systems of which they are a part is therefore impugned). When competent speakers use the terms of any theory (scientific or folk), they have a description in mind, which specifies the set of properties that an object would have to have for that term to be said to successfully refer to it.

The basic argument that I want to propose here is that “shock value” constructionism depends on a descriptivist theory of reference. This should already be obvious. The standard constructionist argument begins with a painstaking reconstruction of a given set of folk or professional descriptions. The analyst then moves on to ask the rhetorical question: is there anything in the world that actually satisfies this description? If the answer is no, then the conclusion that the term fails to refer (and names a fictional and not a real entity) readily follows. The standard criteria for satisfaction of these conditions usually boil down to some sort of semantic analysis. For instance, in Orientalism, Edward Said painstakingly reconstructed a Western “image” (read: description) of the Middle East as a kind of place and of the Arab “Other” as a (natural?) kind of person. Said pointed out that this description of Arab peoples (menacing, untrustworthy, exotic, emotional, eroticized, etc.) was not only logically incoherent; it was simply false. There had never been a group of people who met this description; it had been a fabrication espoused by a misleading theoretical system: Orientalism. Thus, Orientalism as a culturally influential theory of the nature of the Arab “Orient” needed to be transcended. The main theoretical entity implied by such a theory, the Oriental “other” endowed with a bizarre set of attributes and properties, was thereby eliminated from our ontological storehouse.

Houston, we have a problem.- It would be easy to show that essentially all arguments that produce the “strong constructionist effect” follow a similar intellectual procedure. There are at least two problems with this (largely unacknowledged) dependence of social constructionism on a descriptivist theory of reference. First, constructionism denies the conditions that make a descriptivist strategy an adequate theory of reference, which are at a minimum the validity of a truth-conditional semantics and the capacity of words to unambiguously (e.g. literally) refer to objects and events in the world. This is not a problem for Gottlob Frege and David Lewis, or for most descriptivist theorists in analytic philosophy, most of whom subscribe to some version of propositional realism (propositions have truth values that can be unproblematically redeemed by just checking to see if they “correspond” to the world). However, this is a problem for constructionists, because they cannot accept such a strong version of realism.

Thus, if the very theory of the relationship between language and the world that is espoused by social constructionism (skepticism as to the applicability of a truth-conditional semantics and unambiguous reference) is true, then descriptivism has to be false. This means that social constructionism is an inherently contradictory strategy; to produce substantively meaningful conclusions (the strong constructionist effect), it has to rely on a theory of the relationship between meanings and the world that is denied by that very approach. Second, even if this logical argument could be sidestepped, constructionism would still be in trouble. The reason is that there is a competing (and equally appealing on purely argumentative grounds) theory of reference in modern philosophy: the causal-historical theory of reference most influentially outlined by Saul Kripke and Hilary Putnam. The basic issue is not that this is a competing account of reference; the problem is that this account of reference actually denies a key link in the constructionist argument: that in order to refer, there has to be a match between the description of the term and the properties of the object that the term putatively refers to.

Instead, causal-historical theories of reference allow for two possibilities that are seldom taken into account by constructionists: 1) that persons can refer to things in the world even though their mental description of the term that they are using does not at all match the properties of those things, and 2) that the description of a term can undergo radical historical change while the term continues to refer to the same entities or cluster of entities. The first possibility undercuts the capacity of the constructionist to “correct the folk,” because reference is decoupled from the descriptive validity of the terms that are used to refer. The second possibility undercuts the argument for social construction based on historical and cultural variability of descriptions. It opens up the possibility that there is “rigid designation” of the same set of social or natural realities across cultures, in spite of radical differences in the cultural frameworks from within which these referential relations are established.
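To make the contrast concrete, here is a deliberately crude toy model of the two theories of reference. The “world,” the folk description, and the baptism table are all my inventions; nothing here is serious philosophy of language, just a picture of the argument’s structure:

```python
# Toy contrast between descriptivist and causal-historical reference.
# A caricature for illustration only; everything here is made up.
world = {"whale": {"mammal", "aquatic", "large"}}

# Descriptivism: a term refers only if something satisfies its description.
def refers_descriptivist(description):
    return any(description <= props for props in world.values())

# The false folk description "whales are large aquatic fish" fails to refer,
# so the descriptivist concludes the folk term picks out nothing:
print(refers_descriptivist({"fish", "aquatic", "large"}))   # False

# Causal-historical view: reference is fixed by a "baptism" plus a chain of
# use, so the term keeps referring even though the description is wrong.
baptism = {"whale": "whale"}        # term -> entity fixed at the dubbing
def refers_causal(term):
    return baptism.get(term) in world

print(refers_causal("whale"))       # True, despite the bad description
```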

A reasonable objection is simply to point out that we do not have sufficiently strong grounds for picking descriptivism over causal-historical theories of reference, as equally respectable arguments have been put forth in defense of both. This is in fact the position taken by most philosophers, who instead go on to worry about whether people are cherry-picking one of the two theories of reference to support their preferred argumentative strategy. However, I believe that most constructionists in social science cannot be content with this non-committal solution. Instead, as in other areas of Philosophy (e.g. epistemology, ethics, mind), there is a way to “break the tie” between rival philosophical theories: naturalize these types of inquiry by looking at which theories seem to be consistent with the relevant sciences. Here we have good news and bad news for constructionists.

Research in cognitive science, cognitive semantics and cognitive linguistics points to the inadequacy of descriptivist theories of reference from a purely naturalistic standpoint. This should be good news for constructionists, because the upshot is that truth-conditional semantics roundly fails as an account of how persons generate meaning (Lakoff 1987). The irony is that these theories redeem the original skepticism of constructionism vis-a-vis any form of truth-conditional semantics and propositional realism, but in so doing they also undercut the ability of constructionists to engage in the sort of argument that results in “shocking” or substantively strong claims for the social construction of X, because the rhetorical force of these arguments depends on descriptivism, and descriptivism implies propositional realism and “objectivism” (the view that truth is the literal correspondence of statements and reality). The resulting counter-intuitive conclusion is that it is precisely because linguistic meaning and natural categories meet the constructionist specifications that strong constructionist arguments are actually impossible. In fact, it is precisely because language and semantics work the way that constructionists (implicitly) presuppose that the norm in historical and cultural change may not be the radical transformation of reference relations (as implied by Foucauldian analysts), but rigid designation of the same (social or natural) “essences” and relations even in the wake of superficial shifts in the accepted cultural description of those entities.

Written by Omar

March 7, 2012 at 6:57 pm

creative groups

It’s been a while since we’ve knocked heads with our evil twin blog.  I can’t let this one pass. Peter Klein misrepresents the main point of this Jonah Lehrer New Yorker article, which dissects the myth that brainstorming leads to creativity and greater problem solving. Citing a quote by former orgtheory guest blogger Keith Sawyer – “Decades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas” – Peter implies that groups would be more creative if they’d just let individuals work on their own. This fits nicely with a pure reductionist perspective but it’s not at all what the article is really trying to say.

This is the conclusion that Peter should have drawn from the essay: “[L]ike it or not, human creativity has increasingly become a group process.” Lehrer goes on to cite research by my colleagues at Northwestern, Ben Jones and Brian Uzzi, which shows that both scientists and Broadway teams are more successful and creative when they bring together diverse individuals. From Lehrer’s summary of the article in Science by Wuchty, Jones, and Uzzi:

By analyzing 19.9 million peer-reviewed academic papers and 2.1 million patents from the past fifty years, [Jones] has shown that levels of teamwork have increased in more than ninety-five per cent of scientific subfields; the size of the average team has increased by about twenty per cent each decade. The most frequently cited studies in a field used to be the product of a lone genius, like Einstein or Darwin. Today, regardless of whether researchers are studying particle physics or human genetics, science papers by multiple authors receive more than twice as many citations as those by individuals. This trend was even more apparent when it came to so-called “home-run papers”—publications with at least a hundred citations. These were more than six times as likely to come from a team of scientists.

And summarizing Uzzi’s and Spiro’s AJS paper on Broadway shows:

Uzzi devised a way to quantify the density of these connections, a figure he called Q. If musicals were being developed by teams of artists that had worked together several times before—a common practice, because Broadway producers see “incumbent teams” as less risky—those musicals would have an extremely high Q. A musical created by a team of strangers would have a low Q…..When the Q was low—less than 1.7 on Uzzi’s five-point scale—the musicals were likely to fail. Because the artists didn’t know one another, they struggled to work together and exchange ideas. “This wasn’t so surprising,” Uzzi says. “It takes time to develop a successful collaboration.” But, when the Q was too high (above 3.2), the work also suffered. The artists all thought in similar ways, which crushed innovation. According to Uzzi, this is what happened on Broadway during the nineteen-twenties, which he made the focus of a separate study. The decade is remembered for its glittering array of talent—Cole Porter, Richard Rodgers, Lorenz Hart, Oscar Hammerstein II, and so on—but Uzzi’s data reveals that ninety per cent of musicals produced during the decade were flops, far above the historical norm. “Broadway had some of the biggest names ever,” Uzzi explains. “But the shows were too full of repeat relationships, and that stifled creativity.”

In short, Uzzi argues that teams with intermediate levels of relationship density were more creative and more successful.
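For intuition, here is a toy repeat-collaboration index, loosely inspired by, but in no way identical to, Uzzi’s Q: the share of pairs on a team that have already worked together.

```python
# Toy repeat-collaboration index; not Uzzi's actual formula, just a way
# to see "incumbent team" density versus a team of strangers.
from itertools import combinations

past_teams = [{"A", "B", "C"}, {"A", "B", "D"}, {"E", "F", "G"}]

def repeat_density(team, history):
    """Share of pairs in `team` that have worked together before."""
    pairs = list(combinations(sorted(team), 2))
    seen = {p for t in history for p in combinations(sorted(t), 2)}
    return sum(p in seen for p in pairs) / len(pairs)

print(repeat_density({"A", "B", "C"}, past_teams))   # 1.0: all incumbents
print(repeat_density({"A", "E", "X"}, past_teams))   # 0.0: strangers
```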

It’s not that groups aren’t effective generators of creativity. As these studies show, innovation tends to be produced via group processes. Knowledge production is increasingly a collective outcome. Rather than assume that people work best alone, we should think more carefully about what kinds of groups are optimally designed for producing creativity.  Diverse groups will be more creative than homogeneous groups. Groups that embrace conflict and critical thought will be less susceptible to groupthink than groups that avoid such conflict.  Groups made up of members who have little experience with outsiders will be less creative.  I agree with Peter that brainstorming is ineffectively taught in many classrooms, but rather than throw out the idea altogether, we should try to teach people how to design groups that are good at generating new ideas.

Written by brayden king

February 14, 2012 at 12:05 am

book spotlight: selfish reasons to have more kids

Selfish Reasons to Have More Kids is a new book by economist and blogger Bryan Caplan. It makes a simple argument of extreme importance: you should probably have more children. Though this book is written by an economist, it’s not another cute-o-nomics pop text. It’s a serious book about family planning that’s based on his reading of child development, psychology, genetics, economics, and other fields. It’s about one of life’s most important decisions, and this is what social scientists should be thinking about.

The argument boils down to a simple point. If the evidence shows that you are overestimating the cost of having children, then, on the margin, you should probably have another child. This isn’t to say that everyone should have children, or that you should have lots of children. Rather, if you are indifferent between having one more and not, the cautious thing to do is have one more.

Let me start with the arguments that I think are strongest. One is that people rarely regret having children. According to survey data, people who have children rarely say that they wish they had never had children. Childless people are far more likely to say they wish they had children. Another strong argument is that having children makes the world a better place. There’s little evidence that population size by itself leads to poverty, environmental destruction, or what have you. Rather, bad policies and institutions cause these outcomes. More people means more innovators and more customers who will buy stuff from the innovators.

Another sensible argument is that you don’t need to kill yourself parenting. “The kids will be alright” should be Caplan’s motto. There’s a lot of evidence that all the crazy stuff that parents do really doesn’t have much of an overall effect on life-course outcomes. The piano lessons, the ballet classes: not needed. Unless the child truly enjoys these activities, and some do, it’s better to save money, time, and stress by dropping them. Once you realize that most kids do not need expensive inputs, you can save money and time, and have another kid.

Caplan’s biggest detractors will likely focus on his most controversial argument. He argues that you really don’t need to worry about the kids, because inherited traits, not parenting, are much more likely to determine life-course outcomes. He supports his argument with the now voluminous literature on twins and adopted children that shows strong effects of shared genes, not family environment. Many of his arguments rest on his readings of these twin and adoption studies.

On one level, I agree with this overall point. We often think that we can remake people, ignoring the traits, such as personality and cognitive ability, that are tough to change through socialization. As far as I can tell, twin studies do show that there are really powerful inherited traits that affect social behavior. On another level, I feel that twin and adoption studies can be pushed too far, because they have a very powerful, but very specific, research design.

In my view, twin studies tend to have two important limitations. First, there is non-random selection of parents into adoption. Adopters are, by definition, very unlike the rest of the population. Not in income or demographics, but in personality. Adoption is an enormous investment of resources in someone who is not biologically related to you. In other words, adopters are extraordinarily nice people. Any argument that denies the effect of parenting by appealing to studies with only Very Nice Parents is reaching too far. My hypothesis is that random assignment of twins to randomly selected parents (not just the Very Nice People) would yield model estimates with bigger family coefficients.
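That selection argument can be illustrated with a toy simulation (parameters made up, not calibrated to any real study): restrict “adopters” to the top decile of parenting quality and watch the variance share of family environment shrink, even though parenting matters by construction.

```python
# Toy simulation: selecting "Very Nice" parents restricts the variance of
# the family environment, shrinking its share of outcome variance.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
parenting = rng.normal(size=n)                 # family-environment quality
genes = rng.normal(size=n)
outcome = 0.5 * genes + 0.3 * parenting + rng.normal(size=n)

def family_share(mask):
    """Share of outcome variance due to the family component in a subsample."""
    return np.var(0.3 * parenting[mask]) / np.var(outcome[mask])

everyone = np.ones(n, dtype=bool)
adopters = parenting > np.quantile(parenting, 0.90)   # top decile only
print(f"family share, full population:     {family_share(everyone):.3f}")
print(f"family share, adopter-like sample: {family_share(adopters):.3f}")
```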

The other limitation of twin and adoption studies is that they study variation in existing parenting practices. It may be the case that American parents simply don’t know how to correctly socialize a kid to reach some goal. Therefore, variations in family environment are just variations in failed practices.

Here’s a concrete example: child obesity. A hard-core twin-study advocate would justifiably point to twin studies showing that weight or BMI is more closely linked to shared genes than to shared family environment. However, many Americans eat diets high in carbs, corn syrup and other such ingredients. They also seem to consume many more calories than needed. To be blunt, in a world where *everyone* eats bags of twinkies, there won’t be much of an effect of living in a home where people eat a few more or fewer twinkies.

For that reason, it is too much of a jump to say that family environment can’t possibly affect weight. For example, parents who remove all twinkies and switch to an all-broccoli diet will likely affect their children’s weight. In other words, to correctly conclude that family environment has no or little effect on weight, you would need a sample of families with radically different diets, including at least one option that actually works (e.g., twinkies vs. broccoli). For many important life-course outcomes, I am not persuaded that a sample of twins adopted into American or Western families provides enough variation in family environments, or that such a sample would include enough families who follow the practices that research has shown to work.

After reading the last passage, you might think I am against genetic explanations of behavior, or that I think Caplan’s book is fatally flawed. Instead, I see my critique as a qualification of an important argument. Even if the argument is overstated, and parents in some cases can have a big impact, parenting can be much, much less burdensome because the kids will be alright. In the end, I find Caplan’s book to be a really humane text. Children aren’t a burden or a problem or an investment. They are to be enjoyed. They are a benefit, and we should welcome more of them into the world.

Written by fabiorojas

March 28, 2011 at 12:59 am

Posted in books, fabio

should I drop post-modernism from the theory course?

I want to completely drop post-modernism from my sociological theory teaching. Here’s my argument.

First, a definition. I’ll call someone post-modern if they (a) claim to be post-modern, (b) place themselves within a post-structuralist tradition, or (c) are arguing with post-modernists. This would include Lyotard, Giddens (in his radical modernity text), Jameson, Derrida and all deconstructionists such as De Man, Foucault, Flax, Baudrillard, and the various feminist and sexuality theorists who argue with Foucault. I don’t include people who are just “fancy Europeans,” such as Bourdieu, who never called himself a post-modernist and stems from an earlier modernist sociological tradition.

Here are my reasons for cutting post-modernism theory (PMT):

  1. Professional: American sociology is not really focused on PMT. The major journals simply do not publish much on PMT, at least since the mid-1990s or so. The major books in our field tend not to be the massive “theory” volumes of the past. The one exception is Foucault, who pops up from time to time.
  2. Cognitive: I find it very, very hard to understand. Also, if one of my goals is to teach clear argument about social behavior, it’s immoral to teach PMT.
  3. Empirical: I do not know if I can clearly judge or assess many PMT claims. Those that I do understand strike me as bizarre and unsupported (e.g., the denial of the self asserted by some PMT). I teach stuff I disagree with, but at least I have to understand the theory and its empirical consequences.
  4. Substitutes: Why not teach stuff like networks, globalization, or epigenetics as “theory?” These ideas are really changing the way we think about the social world, which is exactly what a theory course should be about.

The two PMT folks I’d continue teaching might be Foucault and Baudrillard, but I don’t need the whole PMT blah-blah-blah apparatus to teach them. They can easily be folded into my teachings on critical theory and Marxism.

Tell me if I am right or wrong.

Written by fabiorojas

January 3, 2011 at 8:39 am

organizational homes

Having worked at three different universities in the past three years, I’ve been thinking lately about how the places we sit affect the research we do.

First, a brief autobiographical note, which sets the stage for some questions about organizational homes…

I did a PhD in sociology at a large public university on the West Coast.  As I was finishing my dissertation on egg and sperm donation, I was thoroughly enmeshed in conversations with sociologists and sociology, but my physical office was in an interdisciplinary center on genetics.  As a graduate fellow there, I attended talks by geneticists and philosophers, legal scholars and anthropologists, all of whom were concerned in some way with molecular genetics, particularly in how it was changing medicine.

Heading up the coast, I settled in at another large public university in California to do a postdoctoral fellowship in health policy. Surrounded by a handful of other postdocs who had been trained as sociologists, political scientists, and economists, I had an office in the heart of a public health school. Here, I worked on turning some embryonic ideas about studying genetic testing into a full-fledged research agenda, all the while chatting with economists and political scientists about how they conceptualize and study the world. One of those casual conversations resulted in a collaborative project with a political scientist that involves an experimental survey design, which is pretty far removed from my graduate training in qualitative methods.

Then, for the past year, I’ve been an assistant professor in a sociology department at a private university on the East Coast, and it’s too soon to tell in what ways this new setting will influence my research trajectory.

In moving from one organizational home to another, each transition involved meeting new people, working in new surroundings (sociology departments and interdisciplinary programs), and entering a new stage (grad student, postdoc, assistant professor).  In some cases, the effects of sitting in a new place are quite direct, as when my initial interests in genetics developed over time into a new book project, or when talking with a political scientist became a side project.  But I think the effects can be more subtle as well, such as when the preoccupying concerns of those who are nearby influence one’s own thinking, from which research questions to ask to how to go about answering them.

Certainly, people have more and less choice about where they end up sitting, and there are a lot of reasons why people might go sit in one place or another.  Moreover, there are broader institutional and economic factors to consider, with just one example being the funding priorities of different programs.  Putting these kinds of considerations into the background for the moment, and focusing more on the level of interaction, here are a few questions for the readers of OrgTheory…

To what extent do you think your organizational home matters, either for the kinds of research you do or for how you do it?  In what ways does it matter?  Does it matter more for people who are in earlier stages of their careers?

Written by almeling

October 18, 2010 at 10:41 pm

Posted in academia, research

the social world according to searle

John Searle has written a new book that should be of interest to many of you. Following the line of thought of his earlier The Construction of Social Reality, Searle’s Making the Social World tries to explain how we create a world of institutions, like organizations and culture, from a physical world that seems to play by a different set of principles. He starts by identifying a simple principle that he thinks can explain much of what counts for social reality. Here’s an excerpt from the introductory chapter:

It is typical of domains where we have a secure understanding of the ontology, that there is a single unifying principle of that ontology. In physics it is the atom, in chemistry it is the chemical bond, in biology it is the cell, in genetics it is the DNA molecule, and in geology it is the tectonic plate. I will argue that there is similarly an underlying principle of social ontology, and one of the primary aims of the book is to explain it. In making these analogies to the natural sciences I do not imply that the social sciences are just like the natural sciences. That is not the point. The point rather is that it seems to me implausible to suppose that we would use a series of logically independent mechanisms for creating institutional facts, and I am in search of a single mechanism. I claim we use one formal linguistic mechanism, and we apply it over and over with different contents (7).

The claim that I will be expounding and defending in this book is that all of human institutional reality is created and maintained in existence by (representations that have the same logical form as) [Status Function] Declarations, including the cases that are not speech acts in the explicit form of Declarations (13).

Searle isn’t saying that every speech act changes the world and therefore has a declarative effect.  But some sorts of speech are intended to “change the world by declaring that the state of affairs exists and thus bringing that state of affairs into existence” (12). These declarative speech acts, then, are the fundamental units of any institution: without them, humans would be completely constrained by reality as it stands and unable to create anything new.
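For readers who like their ontology in schematic form, the single mechanism Searle has in mind takes the shape of his well-known formula “X counts as Y in context C.” Here is a toy sketch in Python (the class and its fields are my own construction, not Searle’s notation) of what it means to apply one formal mechanism “over and over with different contents”:

    # A toy sketch (my construction, not Searle's notation) of his single
    # mechanism: a Status Function Declaration of the form
    # "X counts as Y in context C", reused with different contents.

    from dataclasses import dataclass

    @dataclass
    class StatusFunctionDeclaration:
        x: str        # the brute or pre-existing entity
        y: str        # the status function it now carries
        context: str  # the context in which the declaration holds

        def __str__(self):
            return f"{self.x} counts as {self.y} in {self.context}"

    # The same logical form, applied over and over with different contents:
    print(StatusFunctionDeclaration("this piece of paper", "a twenty-dollar bill", "the US economy"))
    print(StatusFunctionDeclaration("this person", "the department chair", "the university"))

The point of the sketch is only that one schema does all the work; the institutional variety comes entirely from the contents plugged into it.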

Needless to say, the performativity folks will eat this up.

Written by brayden king

February 25, 2010 at 3:35 pm

gelman votes the rational way

From the home office at Columbia University, Andrew Gelman wrote to offer his explanation of why voting is rational. Here’s the summary. Here’s the long version. Gelman’s cookie: “The very short version is that it makes sense to vote in a national election because, if your vote is decisive, it will make a difference to millions of people.”  We’ll pick this one up later, when I return from Denver, but the argument is interesting.
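To make the arithmetic behind that one-liner concrete, here is a back-of-the-envelope sketch of the expected-value logic; the numbers are hypothetical placeholders for illustration, not Gelman’s estimates:

    # A back-of-the-envelope sketch of the expected-value logic
    # (hypothetical numbers, not Gelman's estimates).

    p_decisive = 1e-8          # rough chance that one vote decides a national election
    n_affected = 300_000_000   # people affected by the outcome
    benefit_per_person = 100   # assumed dollar benefit per person if the better side wins

    # The probability of decisiveness is tiny, but the payoff scales with the
    # whole population, so the expected social benefit is not negligible.
    expected_social_benefit = p_decisive * n_affected * benefit_per_person
    print(f"Expected social benefit of one vote: ${expected_social_benefit:,.0f}")  # $300

The trick, in other words, is that the probability of being decisive shrinks roughly with the size of the electorate while the stakes grow with it, so the two cancel and voting can remain rational for a voter who weights others’ welfare.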

Previous orgtheory voting arguments: the voting paradox, the “real” voting paradox, Casey Mulligan on pivotal elections, genetics and voter turnout, shareholder votes, and voting and negative campaigns.

Written by fabiorojas

August 22, 2008 at 2:21 am

resolving the structure-agency debate

Brayden

Essays by Dick Scott and Thomas Luckmann, based on talks given at the last EGOS conference in Vienna, appear in the new issue of Organization Studies. It’s not often you get two luminaries of this magnitude sharing issue space.

Both papers show a shift in the scholars’ thinking over the last several decades. Luckmann is best known for his classic The Social Construction of Reality, coauthored with Peter Berger. In that book, Berger and Luckmann argue that reality is constructed from ongoing patterns of typification and habitualization – i.e., chains of social interaction settle into routine ways of doing things that become infused with meaning. Reality is gradually reinforced, but through a largely unconscious process. Institutional theory builds on the same idea that institutions seep into daily life, treating the institutionalization of behavior as a mostly top-down process (see Scott’s three pillars). But in these essays Luckmann and Scott take agency much more seriously, exploring the role that intentionality plays in shaping institutions.


Written by brayden king

February 28, 2008 at 8:31 pm

sleeping beauties in science

Omar

I recently discovered this really cool and funky journal with the unfortunate name of scientometrics, which publishes all kinds of cross-disciplinary quantitative studies on science. It is very addictive and I’ve already wasted hours reading through many of their (short and sweet, physical-science-style) papers. One thing that I’ve noticed is that these science research folk love their metaphors, with many “effects” and empirical patterns of citations garnering their own (sometimes funny, sometimes obviously coined by people who speak English as a second language) names. The original inspiration appears to be Robert K. Merton’s coinage of the term Matthew Effect to refer to patterns of cumulative advantage in explaining scientific success, which apparently continues to be (from what I could gather) a vibrant research subfield in scientometrics.

In any case, one of the funniest (and genuinely thought-provoking) effects I found while rummaging through recent issues of the journal is the “sleeping beauty” effect, which some others call the “Mendel Effect.” It refers to the occasionally observed phenomenon of a paper that comes out to a chilly reception and “falls asleep,” operationalized by the authors as receiving one or fewer citations a year, for a “sleeping period” of varied length (5 to 10 years). The paper is then “awakened by a prince” (for those of you with a clear idea of the patriarchal connotations produced by taking the sleeping beauty story to refer to an empirical effect, please don’t shoot the messenger), that is, cited by a recent paper, which sparks an avalanche of interest in the original, thus “awakening” the sleeping beauty. (The authors were studying male-dominated physical-science fields, but I don’t think they thought through very well the homoerotic implications of a “prince” “awakening a sleeping beauty from her slumber,” when odds are that both papers were written by men.) The extent of the awakening is then measured by the number of citations the paper receives after being kissed (operationalized in various ordinal categories, with 60+ being the maximum).

The authors discover that the sleeping beauty effect is an incredibly rare occurrence in science. Out of a database of 20 million papers (1988-1997) with 300 million citations between them, they uncover only one “true” sleeping beauty, defined as a paper that was asleep for 10 years and then received 60+ citations: the well-known (not really) “Massive N=2a supergravity in ten dimensions,” published in Physics Letters B in 1986 (maybe the author should have spiced up the paper with a jazzier title like we do in sociology: Massive or Flaccid?: N=2a supergravity in ten dimensions). The probability of being a sleeping beauty thus follows a steep power law, with the chances of being awakened a rapidly decaying function of sleep time (they even write down a “sleeping beauty equation,” with the power-law exponents estimated from the data). They note that, like Mendel’s genetics, sleeping beauty papers are “ahead of their time,” and their true genius is therefore not recognized until the “times,” or the paradigms, have changed. This particular paper, for instance, dealt with string theory when it was not yet all the rage in physics. There is also a network story in the whole thing, since the awakening “Prince” happened to be a younger physicist who worked in the same lab as the original author at UCSB.
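For the quantitatively inclined, here is a minimal sketch of the operationalization described above; the thresholds are the ones quoted, but the function name and data format are my own:

    # A minimal sketch of the "sleeping beauty" criterion described above.
    # Thresholds follow the quoted operationalization; the function name and
    # input format are hypothetical.

    def is_sleeping_beauty(citations_per_year, sleep_years=10, awake_citations=60):
        """citations_per_year: annual citation counts, starting the year after
        publication. A paper qualifies if it 'slept' (one or fewer citations a
        year) for sleep_years years and then gathered awake_citations or more."""
        if len(citations_per_year) <= sleep_years:
            return False  # not enough post-publication history to judge
        asleep = all(c <= 1 for c in citations_per_year[:sleep_years])
        awakened = sum(citations_per_year[sleep_years:]) >= awake_citations
        return asleep and awakened

    # Example: ten quiet years, then an avalanche of citations
    print(is_sleeping_beauty([0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 5, 20, 40, 30]))  # True

By this criterion, of course, the 20-million-paper database yields exactly one hit, which is the whole point.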

This got me to thinking: are there any sleeping beauties in org theory or sociology? And then I remembered that there is indeed one, an even more dramatic case than the one discussed by the scientometrics guys. As recounted by Jerry Jacobs in his “ASR’s greatest hits” and the web supplement “Further Reflections on…” (page 9, table 3), the sociological sleeping beauty is none other than Stewart Macaulay‘s (now classic) 1963 paper “Non-Contractual Relations in Business: A Preliminary Study.” A pretty neat paper, which fell on deaf ears for the first 10 years of its existence (garnering a grand total of 4 citations) but has been cited 360 times in the last 10 years alone.

The question becomes: who laid the big smooch? This one is easy to answer: there is absolutely no question that the top Prince in this case was Mark Granovetter, who cited the paper in his 1985 AJS classic (although there were surely others, since the paper had already “awoken” by 1985 [my own rummage through JSTOR suggests Jeffrey Pfeffer (1972) as a possible early Prince], but it went into caffeinated insomnia after Granovetter). Thus, Macaulay’s “visionary” piece was awakened by the new economic sociology and modern (open-systems) institutional theory, with their turn toward thinking of economic activity as relationally embedded.

Written by Omar

July 22, 2006 at 7:21 pm