Archive for the ‘epistemology and methods’ Category

waiting for big data to change my world

I keep hearing about the coming big data revolution. Data scientists are now using huge data sets, many produced through online interactions and media, that shed light on basic social processes. Big data sets, from sources like Twitter, Facebook, or mobile phones, give social scientists ways to tap into interactions and cultural output at a scale that has never been seen before in social science. The way we analyze data in sociology and organizational theory is bound to change due to this influx of new data.

Unfortunately, the big data revolution has yet to happen. When I see job candidates or new scholars present their research, they are mostly using the same methods that their predecessors did, although with incremental improvements to study design. I see more field experiments for sure, and scholars seem more attuned to identification issues, but the data sources are fairly similar to what you would have seen in 2003. With a few notable exceptions, big data have yet to change the way we do our work. Why is that?

Last week Fabio had a really interesting post about brain drain in academia. One reason we might see less big data than we’d like is that the skills needed to handle this type of analysis are rare, and much of the talent in this area is finding that research jobs in the for-profit world are more lucrative and rewarding than what they’re being offered in academia. I believe that’s true, especially for the kinds of people who are attracted to data mining techniques. The other problem, though, I think, is that social scientists are having a hard time figuring out how to fit big data techniques into the traditional milieu of social science. Sociologists, for example, want studies to be framed in a theoretically compelling way. Organizational theorists would like scholars to use data that map on to the conceptual problems of the field. It’s not always clear in many of the studies that I’ve read and reviewed that big data analyses are doing anything new other than using big data. If big data studies are going to take over the field, they need to address pressing theoretical problems.

With that in mind, you should really read a new paper by Chris Bail (forthcoming in Theory and Society) about using big data in cultural sociology. Chris makes the case that cultural sociology, a subfield that is obsessed with understanding the origins and practical uses of meaning, is ripe for a big data surge. Cultural sociology has the theoretical questions, and big data research offers the methods.

More data were accumulated in 2002 than all previous years of human history combined. By 2011, the amount of data collected prior to 2002 was being collected every two days. This dramatic growth in data spans nearly every part of our lives from gene sequencing to consumer behavior. While much of these data are binary and quantitative, text-based data is also being accumulated on an unprecedented scale. In an era of social science research plagued by declining survey response rates and concerns about the generalizability of qualitative research, these data hold considerable potential. Yet social scientists – and cultural sociologists in particular – have ignored the promise of so-called ‘big data.’ Instead, cultural sociologists have left this wellspring of information about the arguments, worldviews, or values of hundreds of millions of people from internet sites and other digitized texts to computer scientists who possess the technological expertise to extract and manage such data but lack the theoretical direction to interpret their meaning in situ….[C]ultural sociologists have made very few ventures into the universe of big data. In this article, I argue inattention to big data among cultural sociologists is particularly surprising since it is naturally occurring – unlike survey research or cross-sectional qualitative interviews – and therefore critical to understanding the evolution of meaning structures in situ. That is, many archived texts are the product of conversations between individuals, groups, or organizations instead of responses to questions created by researchers who usually have only post-hoc intuition about the relevant factors in meaning-making – much less how culture evolves in ‘real time’ (note: footnotes and references removed).

Chris goes on to offer suggestions about how cultural sociology might use big data to address big theoretical questions. For example, he believes that scholars studying discursive fields would be wise to use big data methods to evaluate the content of such fields, the relationships between actors and ideas, and the relationships between different fields. Of course, much of the paper is about how to use big data analysis to enhance or replace traditional methods used in cultural sociology.  He discusses how Twitter and Facebook data might supplement newspaper analysis, a fairly common method in cultural and political sociology. Although he doesn’t go into great detail about how you would do it, an implicit argument he makes is that big data analysis might replace some survey methods as ways to explore public opinion.

I continue to think there is enormous potential for using big data in the social sciences. The key for having it accepted more broadly is for data scientists to figure out how to use big data to address important theoretical questions. If you can do that, you’re gold.

Written by brayden king

June 28, 2013 at 8:17 pm

y’know, i’m kind of proud of science right now, even social science

In an age of climate denialism and other chicanery, it’s easy to be a science pessimist. But when I stand back, I become a little more confident about things. Science, as an institution, has not buckled under pressure. For example, I think about vaccine skeptics. Truly bad science that has led to some deaths. However, science did not abandon vaccines and instead went in search of confirmatory evidence and found nil. This was before the retraction of the infamous article in Lancet.

People may sneer at the social sciences, but they hold up as well. Recently, a well known study in economics was found to be in error. People may laugh because it was an Excel error, but there’s a deeper point. There was data, it could be obtained, and it could be replicated. Fixing errors and looking for mistakes is the hallmark of science. In sociology, we often shy away from the mantle of science, but our recent treatment of the Regnerus paper makes me proud. My fellow sociologists obtained the data, analyzed it, and showed that the new data support the long standing finding of no differences between same sex and different sex parents in terms of childhood outcomes.

If you watch the news, the Coburns of the world claim the attention. But when you think about it, the science haters are really standing in the shadow of a much larger enterprise.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

May 3, 2013 at 12:13 am

biernacki book forum, part 1: why we should think about coding very carefully

Read Andrew Perrin’s review at Scatterplot.

This Spring, our book forum will address Richard Biernacki’s Reinventing Evidence in Social Inquiry: Decoding Facts and Variables. In this initial post, I’ll describe the book and give you my summary judgment. Reinventing Evidence, roughly speaking, claims that numerically coding extended texts is a very, very bad idea. How bad? It is soooo bad that sociologists should just stop coding text and abandon any hope of providing a quantitative or numerical coding of texts or speech. It’s all about interpretation. This is an argument that prevents a much needed integration of the different approaches to sociology, and it deserves a serious hearing.

In support of this point, Biernacki does a few things. He makes an argument about how coding text lacks validity (i.e., associating a number with a text does not correctly measure what we want it to measure). Then he spends three chapters going back to well known studies that use content analysis and argues, at varying points, that the coding is misleading, obviously incorrect, or that there was no consistent standard for handling the text or the data.

As a proponent of mixed methods, I was rather dismayed to read this argument. I do not agree that coding of text is a hopeless task and that we should retreat into the interpretive framework of the humanities. There seem to be regularities in speech, and other text, that make us want to group them together. If you accept that statement, then it follows that a code can be developed. So, on one level I don’t buy into the main argument of the book.

At a more surface level, I think the book does some things rather well. For example, the meat of the book is in replication, which many of us, like Jeremy Freese, have advocated. Biernacki goes back and examines a number of high profile publications that rely on coding texts and finds a lot to be desired.

Next week, we’ll get into some details of the argument. Also, please check out our little buddy blog, Scatterplot. Andrew Perrin will be discussing the book and offering his own views.


Written by fabiorojas

April 2, 2013 at 12:45 am

a new era in research on the media and social movements

If you are a social movement researcher, you often want data from the media. But there are serious logistical problems, not to mention the regular problems one has when one tries to interpret media data. Obtaining media data is hard. You need a lot of resources to do any but the most basic analyses. Doug McAdam’s group had a large NSF grant to support a detailed coding of the NY Times. In my own research, I had a team of undergraduates work for a year to scour three major newspapers for reports of Black student protest events.

That era is now over. As long as the media you are interested in is digitized and accessible, you can compile a data set in days, if not hours. There are two general approaches. First, you can use search engines to generate lists of articles with key words. Then the human coders take their turn. Second, if you are merely counting words that clearly tag a concept (e.g., “the Tea Party”) then you can write (or pay someone to write) a program called a “web scraper” to load websites and extract the text you need. For older media, such as newspapers from, say, pre-1990, this is hard. But if you have a question about a recent movement, then it’s orders of magnitude easier. I foresee an era where sociologists routinely partner with computer science geeks to generate powerful data sets cheaply and complete research in months, rather than years.
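The word-counting half of this workflow is simple enough to sketch. The snippet below is a minimal, hypothetical illustration (not anyone’s actual research pipeline): it strips HTML down to visible text with Python’s standard-library parser and counts mentions of a tagging phrase. In real use you would first download pages with `urllib.request` or a similar library; here an inline HTML string stands in for a fetched news page.

```python
# Minimal sketch of the "web scraper" counting approach: extract the
# visible text of an HTML page and count mentions of a concept-tagging
# phrase (e.g., "the Tea Party").
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of an HTML document, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def count_mentions(html, phrase):
    """Case-insensitive count of `phrase` in the page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    return text.count(phrase.lower())

# Stand-in for a downloaded article page.
page = """<html><body>
<h1>Rally report</h1>
<p>The Tea Party held a rally downtown. Tea Party activists spoke.</p>
</body></html>"""

print(count_mentions(page, "tea party"))  # 2
```

Run over thousands of archived pages, counts like these become a time series of media attention to a movement, which is exactly the kind of data set that used to take a coding team a year to build.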


Written by fabiorojas

March 5, 2013 at 12:41 am

poll: is ethnography completely different than other kinds of research?

Click on all answers that apply.

Although I have published an ethnography article and a book of interviews and historical research, I do not have the habitus of the typical qualitative sociologist. In talking to ethnographers and other qualitative researchers, I often get the feeling that they are openly hostile to, or critical of, the ideas that motivate quantitative research. Samples, inferences, and general conclusions are anathema. Is this just a posture? Or do qualitative researchers think that they are doing something completely different? If they aren’t looking for applicable lessons, are they just looking for well documented “just so” stories? Or is it merely an exercise in distinction, where ethnographers and other qualitative scholars use a different rhetoric to bolster their academic standing?


Written by fabiorojas

February 28, 2013 at 12:03 am

glaeser book forum 2: understanding the sociology of understanding

This Fall’s book forum is about Andreas Glaeser’s Political Epistemics, a historical ethnography of East German socialism. This week’s installment will focus on the theoretical purpose of the book, which is to articulate and defend “the sociology of understanding.”

What is this “sociology of understanding?” Well, it draws on a number of ideas that should be familiar to cultural sociologists. First, it’s fairly Schutz/Berger and Luckmann in nature. There is a “lifeworld” built upon a common stock of knowledge. “We all know that this is true.” Second, it’s also interactional. In Glaeser’s model, people develop their understanding of the world through affirmation/negation from other people or institutions.

So far, I think the picture is well rooted in cultural sociology. What Glaeser adds is an argument about the institutionalization of the self. Rather than assume that people have fairly independent interests and beliefs about the world, he argues that selves are built out of affirmation and negation from the social environment. Now, Glaeser isn’t making a Foucault-style argument about how we lose ourselves in a network of signifiers. Quite the contrary, he’s arguing about the rootedness of one’s understanding of the world. Some historical events affirm one’s understanding of the world, while others disrupt that notion of self.

How does this sociology of understanding (SoU) help us to do political sociology, such as analyzing the dissolution of communism? Well, if you believe SoU, the locus of attention should be on understanding how people construct their world in both abstract terms and in daily life. Abstract theories, like Marxism-Leninism, provide a basic vocabulary for people to assess their world and produce collective action. At the same time SoU theory suggests that these understandings can only sustain a type of self when reinforced by exogenous events and institutional life. A lot of daily political life is a response to the juxtaposition of these worldviews and observation, with actors often scrambling to make sense of events that would be unsurprising to others.

The SoU theory has interesting implications. For example, SoU theory implies that Western arguments about freedom would be moot. The ideals of individual liberty only resonate in nations with specific institutional arrangements. Instead, people in socialist nations would criticize the system from within. And there is much truth to this observation. Dissidents and reformers rarely waved their copy of Road to Serfdom in the air. Rather, they often relied on arguments articulated by dissident socialist intellectuals. Thus, the collapse of communism, in this view, is less about external pressures and more about the management or mismanagement of contradictions.

The result of SoU theory is that one should understand how historical events, ideologies, organizational behavior, and personal biography intertwine to create the political system. Social change happens when these factors shift, not so much when outsiders, like Reagan or Kennedy, stand by a wall and proclaim freedom. Next week, we’ll see the sociology of understanding in action, when I discuss the world of the Stasi and Berlin peace activists.


Written by fabiorojas

October 17, 2012 at 12:01 am

book spotlight: interpretation and social knowledge by isaac a. reed

A lot of people have bugged me about Isaac Reed‘s book, Interpretation and Social Knowledge: On the use of theory in the human sciences. It’s a book that offers an explanation (sorry!) of the different ways that social researchers construct explanations. I think this is a wonderful and engaging book, but it has some major points that I disagree with. Let’s just say that me and this book are frenemies.

This book argues that there are three types of explanations to be found in the social sciences. There is “naturalism” or “positivism,” where explanations are tied to a social reality that is “out there.” There are normative explanations, which focus on social processes because of what they say about some ideal state derived from ethical theory (e.g., Habermas’ public sphere account vs. the theory of communicative action). The third style of explanation situates human action within worlds of meaning, which Reed calls interpretive sociology.

Let’s start with what I like. Despite the occasional wordiness that is typical of the social theory genre, this is actually a short and elegant book. I enjoyed this book. I think Reed’s typology of social research is valuable and on target. If I were to teach graduate theory, I’d assign this book. Substantively, Reed is correct in pointing out that what makes social research distinctive is meaning. Indeed, with the exception of rational choice, nearly every major development in the social sciences addresses the role of meanings and beliefs. Institutionalists talk about cultural templates. There’s toolboxes, schema, habitus, and so forth. These are all attempts to integrate theories of action with theories of psychology and beliefs. Reed is also to be applauded for arguing that social explanation, to be effective, must situate an individual’s moods or dispositions within a “cultural landscape.”

I level a few criticisms at this book. One is purely stylistic. The book is filled with loving references to the likes of Roy Bhaskar and post-modernism. I don’t think their work adds much to Reed’s main point. I can easily imagine that some sociologists would just stop reading. Why would a demographer or labor market researcher bother with such a book? There’s a lot of preaching to the choir.

Second, there’s a big argument that interpretive sociology is inherently different from the naturalist or positivist sociology that takes its cues from the physical sciences. My view is different. Ideas about falsification, inference, data collection, hypothesis testing, and so forth can be applied to systems of symbols and meaning. In linguistics, for example, there are successful research programs that focus on how systems of language evolve and are put together. No reason that can’t be applied to the historical study of colonialism, Christianity, or whatever. In fact, there is something called schema theory in psychological anthropology, which takes Reed’s idea of “cultural landscapes” and converts it into a positivist research agenda.

The separation of interpretation from naturalism is even more implausible once we consider how the same argument would play out in the natural sciences. Let’s take biology. It’s fairly clear that you can’t understand animal behavior without thinking about the organism’s history and ecosystem. So what should a biologist do? Option A: Develop a general principle that will help us explain variation in ecosystems, organisms, and evolution. Option B: Ditch the ideas of normal science and do ad hoc interpretations of different animals and their ecosystems. I hope that the reader thinks, along with Darwin, that option A is very desirable.

Those that separate qualitative and interpretive research from positivist modes of social science are missing something important. Meaning systems, or cultural landscapes, are complicated systems built up from simpler structures that are embedded in larger systems. “American culture” is emergent from American words, emotions, norms, social practices, and so forth. If you buy that argument, then the link between interpretive work and naturalist social science is obvious. You need a positivist explanation of how these complex systems are born, evolve, and operate. It’s not an easy problem by any means, but it’s one that easily fits within the ideas that we associate with natural science.

Reed does make some points in this direction. For example, in chapter four, he says that interpretations should be “locally consistent.” But he needs to go farther. Interpretation needs to always have an eye on general principles. Interpretations of different groups and historical eras need to be consistent with each other in ways that provide guidance for future research. Without such an imperative, interpretive sociology threatens to devolve into the solipsism of historical specificity.


Written by fabiorojas

August 2, 2012 at 12:01 am

retractions are good for science

There’s a scandal brewing over Japanese scientist Yoshitaka Fujii, who is accused of fabricating 168 (!) published studies, many in well established American journals. This led me to the blog called Retraction Watch, which chronicles papers that have been pulled from publication due to error or fraud. My favorite is a paper on transcendental meditation, which was originally scheduled for publication in the prestigious Archives of Internal Medicine.

When we see retractions, we laugh at the peer reviewers or editors who let bad work slip by, or at the authors, whose careers may be ruined. But laughter misses the point. Retractions are good for science. A retraction is an institutionalized form of correcting error. There is no other institution where public error correction matters so much. Don’t go to church to see corrections to the Bible, and don’t turn on the news to see your favorite politician admit the errors of his policy. They are always right. Retractions may be humiliating in science, but they are part of what makes science work.


Written by fabiorojas

July 7, 2012 at 12:01 am

why behaviorism isn’t satanism

Here’s a recent book chapter worth reading: “Why Behaviorism Isn’t Satanism.”


The history of comparative evolutionary psychology can be characterized, broadly speaking, as a series of reactions to Cartesian versus pragmatist views of the mind and behavior. Here, a brief history of these theoretical shifts is presented to illuminate how and why contemporary comparative evolutionary psychology takes the form that it does. This brings to the fore the strongly cognitivist research emphasis of current evolutionary comparative research, and the manner in which alternative accounts based on learning theory and other behaviorist principles generally receive short shrift. I attempt to show why many of these criticisms of alternative accounts are unjustified, that cognitivism does not constitute the radical lurch away from behaviorism that many imagine, and that an alternative “embodied and embedded” view of cognition—itself developing in reaction to the extremes of cognitivism—reaches back to a number of behaviorist philosophical principles, including the rejection of a separation between brain and body, and between the organism and environment.

Key Words: animal, cognition, behavior, cognitivism, behaviorism, evolution, learning, psychology

Written by teppo

June 19, 2012 at 5:48 pm

ethnography is totally generalizable

I’m still mulling over some of the issues raised at the Chicago ethnography and causal inference conference. For example, a lot of ethnographers say “sure, we can’t generalize but ….” The reason they say this is that they are making a conceptual mistake.

Ethnography is generalizable – just not within a single study. Think of it this way. Data is data, whether it is from a survey, experiment or field work. The reason that surveys are generalizable is in the sampling. The survey data is a representative sub-group of the larger group.

What’s the deal with ethnography? Usually, we want to say that what we observe in fieldwork is applicable in other cases. The  problem is that we only have one (or a few) field sites. The solution? Increase the number of field sites. Of course, this can’t be done by one person. However, there can be teams. Maybe they aren’t officially related, but each ethnographer could contribute to the field of ethnography by randomly selecting their field site, or choosing a field site that hasn’t been covered yet.

Thus, over the years, each ethnographer would contribute to the validity of the entire enterprise. As time passes, you’d observe new phenomena, but by linking field site selection to prior questions  you’d also be expanding the sample of field sites. This isn’t unheard of. The Manchester School of anthropology did exactly that – spread the ethnographers around – to great effect. Maybe it’s time that sociological ethnographers do the same.


Written by fabiorojas

June 15, 2012 at 12:01 am

philosophy of science bleg

Orgheads: What is the canonical citation for “Kuhn’s model doesn’t work so well in the social sciences?” Thanks.


Written by fabiorojas

June 14, 2012 at 8:14 pm

fall book forum: political epistemics by andreas glaeser

This Fall’s book forum will be dedicated to Political Epistemics: The Secret Police, the Opposition, and the End of East German Socialism, Andreas Glaeser’s recent book on the collapse of the East German communist state. We will start on October 1. It’s a long time away because this is, like, a totally long book. It reads well, so it will be worth it. I promise.


Written by fabiorojas

June 9, 2012 at 12:01 am

kieran healy on the philosophy profession

Our friend Kieran has a series of posts on his research at Leiter Reports, the leading academic philosophy blog. Aside from writing on economic sociology, Kieran has begun an ambitious project analyzing the way that philosophers evaluate each other. Three posts so far, each well worth reading:

I’ve seen this project presented in workshops. There is much more and it is very good. Can’t wait to see more posts.


Written by fabiorojas

March 22, 2012 at 12:03 am

mathematical sociology argument: transparency vs. truth

Over at Scatterplot, there was a question about mathematical sociology. What are the major controversies in that area?

First, mathematical sociology is a vastly under-developed field. It is in its infancy compared to mathematical economics or mathematical psychology. As I’ve noted in the past, there is not a core set of theorems that define the major results of the field. The results we do have are often descriptive or definitional. There is little that resembles a core math econ theorem, like the existence of Nash equilibria.

Instead, there is a smattering of results in collective behavior, networks, social psychology, and stochastic process models. I doubt that most sociologists could even tell you what these are, except for a few specialists. Doesn’t mean that it’s a bad area. Rather, we need a critical mass of people to hit the topic, develop it, and then teach the rest of the profession. Right now, most soc departments don’t even teach mathematical sociology. Since the field isn’t developed, it’s fair to say that there is no major debate among formal model builders/theorem provers beyond those that exist in specific topics (e.g., network models or collective action).

Second, there is actually a major and extremely important debate among computational sociologists. I call this the truth vs. transparency debate. If you make a computer simulation of a social system, you have a trade-off to make. Accurate models can be made, but they require tons of variables and complex data. Since they have so many moving parts, it’s hard to tell what is really happening in the model. In contrast, you can make toy models that are easy to analyze. Simple models are transparent, but you know they are highly unrealistic.

So that’s the debate: what’s the right trade-off between transparent models and accurate models? Unclear. It’s a major research problem.
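To make the “transparency” end of the trade-off concrete, here is a sketch of one classic transparent toy model from the collective behavior literature: a Granovetter-style threshold model of riot participation. This is my own minimal construction for illustration, not a model from any particular study. Each agent joins once the number of current participants meets their personal threshold; every moving part is visible, but the model is obviously unrealistic.

```python
# A "transparent" toy model: Granovetter-style threshold dynamics.
# Each agent has a threshold; an agent joins the riot once the current
# number of participants is at least their threshold. Iterate to the
# fixed point.
def riot_size(thresholds):
    """Return the equilibrium number of participants."""
    joined = 0
    while True:
        new = sum(1 for t in thresholds if t <= joined)
        if new == joined:       # fixed point reached
            return joined
        joined = new

# Thresholds 0,1,2,3,4: the zero-threshold instigator triggers a full
# cascade, because each join satisfies the next agent's threshold.
print(riot_size([0, 1, 2, 3, 4]))   # 5

# Raise the second agent's threshold from 1 to 2 and the cascade stalls
# after the instigator: a tiny parameter change, a qualitatively
# different outcome -- easy to see precisely because the model is a toy.
print(riot_size([0, 2, 2, 3, 4]))   # 1
```

An “accurate” model of the same phenomenon would add heterogeneous networks, media exposure, repression, and so on, and the source of any given outcome would become correspondingly harder to isolate. That is the trade-off in miniature.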



Written by fabiorojas

March 21, 2012 at 12:01 am

You say within, I say between…Let’s call the whole thing off!

Suppose we are interested in the effects of some social psychological construct that we are theoretically devoted to (let’s say “symbolic racism”) on support (or lack thereof) for (generous) social welfare policies.  In quantitative social science we would spend a lot of money surveying people, collect some data, and ultimately specify a regression model of the form:

Y = a + bW + cX + e      (1)

where Y is some sort of scale that lines up individuals in terms of their support for social welfare policies, W is some sort of scale that lines up individuals in terms of their “symbolic racism,” X is a matrix of other “socio-demographic” stuff, and e is a random disturbance.  Suppose further that the model provides support for our theory: b is substantively and statistically significant and its sign goes in the right direction: the more symbolic racism, the less support for social welfare policies.  We would then write a paper arguing that individuals who are high in symbolic racism are less likely to support social welfare policies, and that this is a likely source of support for the Republican party in the South; we might even insinuate in the conclusion that trends in income inequality would be much less steep if it weren’t for these darn racists, etc.

I would bet you 10,000 dollars,* however, that in actually presenting their results and their implications the authors would say things that are in fact not supported by their statistical model.  In fact we all say or imply these things, especially when W is an attitude (or some other “intra-individual” attribute) and Y is a behavior, and we desire to conclude from a model such as (1) that the attitude is a cause of the behavior (the same thing would apply if the unit of analysis is organizations, and W is some organizational attribute–like the implementation of a “strategy”–and Y is an organizational outcome).

Now suppose even further that W passes all of the (usual) hurdles for something to constitute a cause: it precedes Y, the model is correctly specified on the observables, etc.  My point here is that even if all that were true, we cannot conclude from a large and statistically significant estimate of b that at the individual level there is some sort of psychological (or intra-organizational) process with the same structure as our W, called “symbolic racism,” that causes the individual’s support for this or that policy.

An obscure segment of the statistical and psychometrics literature tells us why this is the case (see in particular Borsboom et al 2003): in order to jump from information that is obtained from a comparison between persons to statements about the data-generating process within persons, we must make what is called the local homogeneity assumption.  This assumption is just that: an assumption.  And for the most part it is a shaky one to make.  For b in (1) only gives us information about the conditional distribution of Y responses among the population of subjects as we move across levels of W; it says nothing about causal processes at the individual level. In fact, the model that produces responses at the individual level could be wildly different from (1) above and yet could generate the between-persons result that we observe.  In this respect, the statements:

1a. Our results provide support for the conclusion that in the contemporary United States a person with a high degree of symbolic racism is less likely to support social welfare policies than another person with a lower degree of symbolic racism.

1b. Our results provide support for the conclusion that a person’s support for punitive welfare policies would decrease if their propensity towards symbolic racism were to decrease.

are empirically and logically independent.  Model (1) only supports 1a; it says nothing about 1b (or would only say something about 1b under the weight of a host of unsupportable assumptions).  However, whenever we write up results obtained from models such as (1), we sometimes present them as if (or insinuate that) they provide support for 1b.

Startlingly, this lack of (necessary or logical) correspondence between a between-subjects result and the DGP (data-generating process) at the individual level implies that most statistical models are useless for the sort of thing people think they are good for (drawing conclusions about mechanisms at the level of the person or organization).  Not only that, it implies that a model that provides a good explanatory fit for within-individual variation (let’s say a growth curve model of the factors that account for individual support for social welfare across the life course) might be radically different from the one that provides the best fit in the between-persons context.  Finally, it implies a “rule” of sociological method: “whenever a within-subject explanation is extracted from a between-subjects analysis we can be sure that this explanation is (probably) false (at least for most non-trivial outcomes in social science).”

*I don’t actually have 10,000 dollars.

Written by Omar

December 16, 2011 at 4:28 pm

the decentralization of science

While in graduate school, I had a very interesting discussion with one of my advisers, Terry Clark. Before he became an active figure in urban sociology, Terry was a fairly accomplished org theory guy with a taste for the sociology of science. For example, he wrote a nice book called Prophets and Patrons, which compared the development of sociology in different higher education systems.

So anyways, we were chatting one day and I said that the sociology of science is kind of depressing because there are so many studies showing that scientists are conformists. Because we need the recognition of older colleagues and the right journals, it is very hard for scientists to be risk takers. How can science progress if it is so hierarchical?

Terry had a very wise answer, in my view. Science is a decentralized order. Yes, within specific laboratories, or disciplines, the hierarchy is very strong. However, there is a sort of competition between scientific communities. New ideas may pop up in other disciplines, or a new laboratory may be set up that doesn’t fit into the rest of the system.  Even though specific research communities gravitate toward normal science and stasis, the overall structure of science is enriched by ideas popping up in unexpected places.

Written by fabiorojas

August 11, 2011 at 12:44 am

the order of things, part dos

This week, I’ll address a question about The Order of Things (Toot) and Foucault more generally. How do we evaluate such books? As I noted last week, Foucault does not operate in the American sociology way. What Foucault does in many of his texts, including Toot, is a highly philosophical reading of intellectual history. Furthermore, most of the text is Foucault – he’ll drop one snippet from a text and then offer his own lengthy interpretation. Honestly, it’s hard to tell if he’s being selective or faithful. That’s a criticism that’s always being lobbed his way: a fast and loose way of handling primary sources. Unless one had a really, really deep knowledge of, say, Renaissance botany, it would be hard to judge the accuracy of his interpretations.

But let’s say that I believe Foucault’s readings. Should I believe the bigger shift? And, should I care? Well, I think the second question is probably the easiest. A core concern in sociology is “culture,” which, roughly, means the ways that we collectively define the world and attach meaning to our actions. Foucault’s claim, that the modern self is emaciated due to the relationship between the self and the system of signs, is important. It’s a fundamental statement about how we constitute ourselves in the structure of meaning that we employ. Clearly important. Toot, if we believe it, also offers a way of understanding the evolution of sciences, which is also an important sociological issue.

Now, do I actually believe it? Well, Foucault’s evidence in Toot (though not in all other texts) is from intellectual elites. It’s an argument about the way they put their intellectual system together. It’s important, but as presented, I don’t see any reason to immediately jump to a broader conclusion about Western culture, as many Foucauldians might insist. At best, this suggests something about elite Western intellectual traditions, and not all of them. For example, American intellectual culture is notoriously pragmatist and has resisted the slide toward phenomenology described by Foucault at the end of Toot. Also, I wonder if Foucault is relying on a canon that was retrospectively created by us, rather than a wide reading of what everyone was doing at the time. I.e., when we rely on Adam Smith as a representative of classical economics, is he really the central figure? What about the mercantilists? How do they fit into the story of the classical episteme?

Second, Foucault relies on a number of structuralist arguments that have never quite sat well with me. The entire first half of Toot relies on an explanation of how people interpreted signs. Does he assume that the symbolic systems of the classical era have more order or structure than is warranted? As a network oriented person, I tend to believe that semantic systems are more sedimented and ecological than orderly. Also, just because the system of signs in classical and post-classical Western cultures was more decentered, it doesn’t mean that the subject is gone. I think Giddens made a similar point in his book against the post-modernists: people are still able to hash out their subjectivities in a post-modern world.

Overall, I remain intrigued by Foucault’s ability to pinpoint the common threads of Western intellectual culture and its instabilities. And the idea of episteme is clearly valuable. My gut sense is that if I immersed myself in textual readings, I’d probably agree that the shift in episteme is real, but I’d probably disagree on the nature of the difference.

Written by fabiorojas

July 12, 2011 at 12:42 am

the order of things, part uno

The next stage of Foucault week is a long commentary on The Order of Things (Toot from here on). This summer, I began to read Toot for a few reasons. First, I needed some theoretical mojo. After a few referee reports argued that my work (yes, they used my name specifically) was un-theoretical, I decided to dip into some heavy texts to help me develop an Omarian Schwarzeneggerian level of theoretical rigor. Second, of all of Foucault’s major works, Toot is probably the one that least appeals to the typical sociologist. No organizations, institutions, or repression. It’s all about paintings and verbs and stuff. It’s the major Foucault work that is closest to the spirit of the humanities. I wanted to see if my memory was correct on this second point and see if maybe there’s a message for mainstream American sociology in there.

So “Toot, part uno” is simply diagnostic and exegetical. What, exactly, is in Toot? Well, it’s kind of hard to explain, but here goes:

  • Foucault claims that there is a shift in culture from the Renaissance to the Classical era (early modernity in some terminologies)
  • The shift had to do with how scholars and thinkers viewed the world. In short, there was an underlying logic to pre-scientific thought that changed in the transition to early modernity.
  • The “before” was a social world where scholars tended to think in a cross-sectional fashion and relied on similarities between things. In the “before,” representations of things were imprints of reality.
  • The “after” is a social world where we think of things in a dynamic, evolving fashion. It’s about developing a grammar of things that go together. In the “after,” it’s less about figuring out how to represent reality, it’s more about setting up systems of signs and it’s more about how meaning is added to symbols.
  • Foucault goes on to show this shift in three areas of classical thought: linguistics, biology, and economics. About 40% of the book is an interpretation of Renaissance and Classical texts in these areas.
  • He then goes on to make a bigger argument about the “detachment of thought,” which I take to mean that the relation of the self to the body of knowledge is severed. As people shifted to thinking that emphasized formalization and symbol manipulation, the individual’s role in this whole system became blurry and hard to pin down, which is why you had a whole round of “me” centered thinking (e.g., Cartesian philosophy).
  • The result of all this is numerous attempts at creating “man” as a stable thing in Western thought, which in Foucault’s view, is hard to pull off.


Written by fabiorojas

July 7, 2011 at 3:50 am

this week is foucault week

I am re-reading Foucault’s “The Order of Things” and I have a few posts summarizing my thoughts in the pipeline. So if you want a Foucault/Order post on a particular topic, just comment/email/tweet me about it. Two posts in mind: one on how sociologists absorb Foucault and another on evaluating the main claim of Order. Other ideas are welcome.

Written by fabiorojas

July 3, 2011 at 12:06 am

jeffrey alexander on social reality and theory


Here’s part two.

Written by teppo

July 1, 2011 at 8:01 pm

What is at stake for Sociology in Walmart?

Much has been discussed about the Walmart case and ASA Amicus Brief in the postings and comments on the orgtheory [with subsequent posts 1, 2] and scatterplot blogs. Little, however, has been said about the literature review in the ASA Amicus Brief, though it spans a little more than half the main body of the Brief. Some have even suggested that the only thing the Brief does is take the position that the methods that Bill uses are those of science and sociology in particular. Clearly it does much more. [In providing the analysis below, I want to be quite clear that I am not making any claims about what people’s motives were in writing and submitting the ASA Brief.  Laura Beth has been quite clear about hers and I believe her.]


Written by Chris Winship

May 28, 2011 at 11:07 pm

Goals and a Few Answers

I spent last night reading through all the comments on orgtheory and scatterplot. My key goal in writing my initial post was to get a discussion going about the role of sociology in the courts and the particular problems involved. I guess I succeeded! My interest in the Walmart case was only secondary and I discussed it, the ASA Amicus Brief, and Bill’s expert report because it was current, was potentially important, and exemplified many of the issues that I thought needed to be discussed. I did not write it to attack the ASA as Sally Hillsman has accused me of in an email to the Council. Truthfully, I do not know enough about what was done to know whether I would believe it to be unproblematic or not. If the Council, the ASA members’ elected representatives, had the time to seriously consider the matter, read the materials involved, appreciated the issues, and voted to submit an Amicus Brief to the Supreme Court, then I think I and others should not complain. Of course the Mitchell et al. paper does attack the ASA brief, but on scientific, not procedural grounds. [I should also note that Sally’s claim that I offered Laura Beth the opportunity to publish her reply to Mitchell et al. in SMR and withdrew that offer is factually incorrect. I withdrew the offer for her to write a quite different paper, for quite defensible reasons. All that said, what will go in the SMR special issue is still evolving.]

In reading through all the comments last night I was amazed by the number of times various people said I said particular things (using their words, not mine), and claimed that I thought various things (with no access that I am aware of to my mind). Amy’s post is perhaps the extreme example of this. In an actual court proceeding this may be appropriate. I don’t think it is appropriate for blogging, assuming the goal should be to try to understand each other’s thinking–why they believe what they think is reasonable–and that by hearing what each other thinks, we might improve and deepen our own thinking. Let’s not put words in people’s mouths or thoughts in their heads. If a position someone has taken is important for a point you want to make, then quote the person. If you believe someone thinks a particular thing and that is why they are taking the position they do, then ask them whether that is what they think. More generally, as Laura Beth has asked, let’s keep it as diplomatic as possible. In doing so, this will vastly increase the likelihood of having a constructive dialogue.


Written by Chris Winship

May 25, 2011 at 12:55 am

Walmart and the ASA (a guest post by Chris Winship)

Note: Chris is a professor of sociology at Harvard University and the Harvard Kennedy School of Government and, since 1995, he has edited Sociological Methods and Research, which is a peer-reviewed scholarly methodology journal. SMR content is also available on the SMRblog.

The current employment discrimination case against Walmart raises the important question of whether social science, and sociology in particular, can effectively participate in court cases and at the same time maintain its scientific integrity. If the answer is yes, there is then the further question of what criteria need to be met for scientific integrity to be maintained. These are important questions requiring discussion, even debate. But first some history.

By early fall, if not sooner, the Supreme Court will make a key decision in the largest employment discrimination suit in history: Dukes v. Walmart. Oral arguments in the case were heard on March 29. The suit itself, involving a class of as many as 1.5 million women, alleges that Walmart has systematically discriminated against women in its salary and promotion decisions. Potentially, billions of dollars in damages are at stake. The question before the Court, however, is not whether Walmart in fact discriminated against its employees but rather whether such a large case, involving women working in varied circumstances in thousands of different stores and involving different supervisors, can be thought to constitute a single class and thus whether the class should be certified.


Written by Chris Winship

May 18, 2011 at 12:10 am

theories of entrepreneurship: an exercise in dichotomies

There’s a certain resistance to dichotomizing: the truth is somewhere in between, it’s more nuanced, processual, interactional etc — both “x” and “y” need to be considered — so we’ll call it “z” (say, “structuration”).  But, as I’m preparing for an entrepreneurship-related PhD class tomorrow, most of the papers we read indeed tend to set up a dichotomous relationship between two things.  Despite problems with these types of contrasts (it’s usually pretty easy to see where the argument is going), I still find the exercise of extremes very valuable.  Theories, after all, idealize and need to focus on something (usually in reaction to its opposite, sorta).

So, here are some of the entrepreneurship-related dichotomies that popped up:

  • structure versus agency
  • macro versus micro
  • exogenous versus endogenous
  • observation versus theory
  • experience versus thought
  • supply versus demand
  • backward- versus forward-looking
  • discovery versus creation
  • something versus nothing
  • actual versus possible

(The truth can be found on the right-hand side.)

Many of the above dichotomies — in one way or another — hearken to classic debates in philosophy: rationalism versus empiricism, realism versus constructionism, etc.   I don’t think that organizational scholars will solve any of these classic problems, though obviously there are comparative opportunities vis-a-vis the things that we study: collective action, social process and interaction, value creation and so forth.

Below the fold you’ll find some of the (somewhat eclectic) readings that somehow relate to the above dichotomies of entrepreneurship.

Written by teppo

May 10, 2011 at 9:52 pm

workflow articles in “the political methodologist”

I’ve written a few times before about how to choose the software you work with, and what you should and should not care about when making those choices. I maintain a page with various resources related to this, if you’re interested, most notably the Emacs Starter Kit for the Social Sciences. A revised version of an article of mine on this topic called “Choosing Your Workflow Applications”, which I’ve had online for a while, has now been published in The Political Methodologist, the newsletter of the Society for Political Methodology. (The source document for my article is also available, as I wanted the piece to walk its own talk.) There are also some great contributions from others along similar lines, covering different aspects of setting up and running your research so that you can collaborate easily, remember what you did, easily revisit work when needed, and do good, reproducible social science in a relatively hassle-free way. I think the issue as a whole is something that grad students in any social science program—especially those just starting out—could benefit from reading, and there’s a lot there for faculty to chew on, too.

Written by Kieran

April 1, 2011 at 2:41 pm

i think everyone is a scientist: the poverty of stimulus argument

There is a disconnect between how some social scientists see themselves versus how they see their subjects.  Scientists theorize about the world — they develop hypotheses, models, they reason, imagine, simulate, then test and revise, etc — and regular folks, well, learn more myopically via observation and experience. Behaviorism of course represented an extreme case of the latter – a stimulus-driven, passive view of human behavior.

But I’ll go out on a limb and say that I think that the “scientist model” is a far better conception of all human activity.  Everyday living and interaction is scientific activity of a sort: we have models of the world that we constantly update and revise.  Importantly, these models have an a priori nature, decoupled from experience.  Does experience matter?  Sure.  But, I think the a priori factors matter just as much, even more.  How one conceptualizes the a priori depends on one’s field and purposes, but it includes the following types of things – human nature, choice, reason, imagination, intention, conjectures, hypotheses and theories and so forth.

Readers will of course recognize the above dichotomy as the rationalism versus empiricism debate: reason versus experience.  Empiricism, very often, looks deceptively scientific.  After all, it’s easy to count things that we can observe.  Experience and history are master mechanisms behind gobs of theories — tracing, counting what happened in the past appears scientific.  In some cases it is.   But, the stuff that we observe and perceive is heavily theory-laden (no, not in that sense), and observations and perceptions might simply be epiphenomena of a priori “stuff.”  And, experience might simply “trigger” rather than cause outcomes.  Furthermore, experience and history are only one of many, possible worlds.

The “poverty of stimulus” argument relates to this.  Varieties of the poverty of stimulus argument show up in developmental psychology, linguistics, philosophy, ethology and other areas.  In short, the upshot of the poverty of stimulus argument is that the outputs and capabilities manifested by organisms far outstrip inputs such as experiences and stimuli. The work on infants, by folks like Elizabeth Spelke and Alison Gopnik, highlights this point: children have clear, a priori conceptions of their surroundings.  Wilhelm Von Humboldt’s notion of language capabilities as the “infinite use of finite means” relates to the poverty of stimulus argument.  Some varieties of decision-making models (depending on what types of “priors” they allow) also fit.  Ned Block’s “productivity argument” fits into this.  As does, perhaps, Charles Peirce’s notion of “abduction.” Etc.

The above discussion of course is a very Chomskyan view of human nature and science.  But, this tradition goes back much further (well, to Plato).  In my mind, one of the best, historical primers on some of these issues is Chomsky’s Cartesian Linguistics: A Chapter in the History of Rationalist Thought (be sure to get the 2003 edition, with McGilvray’s excellent introduction).  A very, very under-rated book.

Overall — I’ll go out on a limb, again (no one reads the last paragraph of loose, jargon-laden rants/posts like this anyways) — I don’t think the social sciences have come to terms with the scientific problems associated with experience-heavy arguments and the crucial importance of the a priori (however conceived).  I think there are lots of research opportunities in this space.

Written by teppo

March 18, 2011 at 7:06 am

should i listen to people who don’t agree with me?

First, congratulations to co-blogger Kieran Healy. His political philosophy group blog, Crooked Timber, was mentioned in the NY Times by Paul Krugman. Now, I want to focus on what Krugman wrote in that post after he praised Crooked Timber:

Some have asked if there aren’t conservative sites I read regularly. Well, no. I will read anything I’ve been informed about that’s either interesting or revealing; but I don’t know of any economics or politics sites on that side that regularly provide analysis or information I need to take seriously. I know we’re supposed to pretend that both sides always have a point; but the truth is that most of the time they don’t. The parties are not equally irresponsible; Rachel Maddow isn’t Glenn Beck; and a conservative blog, almost by definition, is a blog written by someone who chooses not to notice that asymmetry. And life is short …

I am in agreement with Krugman’s point. I really don’t feel any need to analyze what Glenn Beck or Rush Limbaugh say. And sometimes being “open minded” turns you into this guy. They’re entertainers and not serious thinkers. Also, they spew garbage.

But hold on, let’s apply the economic way of thinking here. Not everyone who disagrees with me is Glenn Beck or Sarah Palin. Also, I am not infallible. So it seems that the optimal amount of listening to people who disagree with me is somewhere between 0% and 100%. What’s the percentage? How do I optimize input from people who appear to be wrong?

I don’t know, but maybe it helps to provide a checklist:

  1. Experts. If you have spent the time mastering a topic, maybe I should listen to you.
  2. Truth seeking. If you seem to care about logic and evidence, maybe I should listen to you. I should not listen to you if you ignore evidence, fabricate it, or distort it to suit yourself.
  3. Clear communication. If you can translate your ideas into terms I can understand, maybe I should listen.
  4. Novelty. If you satisfy #1-3 and you can show me a new way of looking at something, I might listen.

I should not pay attention if:

  1. You know almost nothing about the topic and yet pontificate.
  2. You engage in ad-hominem attacks.
  3. Your point is entertainment rather than communication.
  4. Repetition. If I’ve heard it before, I can tune out.

Using these rules of thumb, I can probably tune out most mass media. It clearly doesn’t exist to transmit knowledge. I can also tune out much political discourse as it repeats, it is ad-hominem, and not truth seeking. Blogs are probably out, including this one when it veers into fun topics that aren’t management or sociology. And of course, I should definitely pay attention to Kieran when he explains the subtleties of organ donation.

Written by fabiorojas

March 17, 2011 at 3:23 am

voice and social control: comparative organization

“Voice is a means of social control: that is to say, the voice is a means of influencing the behavior of individuals so as to bring them into cooperation, one with another.”

That’s from a 1908 American Journal of Sociology article by biologist and ethologist Wallace Craig –  “The Voices of Pigeons Regarded as a Means of Social Control.”  Yes, the article indeed is about pigeons. I don’t know whether AJS still publishes articles by ethologists.  Probably not.

I think ethology can offer some interesting meta-theoretical, comparative and methodological insights for studying activity, behavior and social interaction across and within various contexts (from various types of animals to humans).  Sure, one-to-one borrowing across species can be lame (directly applying insights from biology can lead to sloppy reasoning), and is all too frequent.  Of course humans are not like pigeons – or ants or bees – though some abstract similarities might exist and specifying the underlying nature of an organism makes for an intriguing, comparative exercise.

More importantly, the nature of the thing itself, the thing that is being studied, needs to be vetted (Craig, Lorenz etc were brilliant at this), rather than resorting to studying the thing’s environment.  That’s a personal pet peeve of mine.  (Is that vague enough?  Good.)

If any of you are interested in ethology, its origins, the emergence of a field, etc — I would highly, highly recommend Richard Burkhardt Jr’s brilliant book Patterns of Behavior: Konrad Lorenz, Niko Tinbergen, and the Founding of Ethology, University of Chicago Press.  It is one of the best books I have read during the last three years (was just re-skimming it).

Written by teppo

February 28, 2011 at 8:26 pm

why we could use more experiments

One thing that organizational and economic sociology could use more of is experimental methods. While sociologists are not completely averse to experiments (see their prominent use in exchange theory), the method seems to occupy a small niche. Some sociologists express a real distaste for experiments. Our love of context and history seems to bias us against experiments, which emphasize internal validity over external validity and random assignment over sampling from real populations.

My sense though is that a number of theoretical areas could be more fully developed by using experiments. The real value of experiments comes from being able to more precisely identify theoretical mechanisms, especially at the cognitive level. (If you have any doubt of the utility of experiments, check out Correll, Benard, and Paik’s beautiful study of the motherhood penalty.) Given the calls to explore the micro/cognitive foundations of social theories, experiments could be very useful. Here are just a few conceptual areas that could benefit from experiments.

  • Networks and relationship formation – what cognitive dynamics explain homophily? How does framing affect relationships (see, for example, this paper in Psych Science). What sorts of social cues trigger relationship formation? What is the role of emotion in choosing friends?
  • Institutions and cultural persistence – Zucker (1977) broke ground in this area but since then experimental methods have rarely been used. What cognitive dynamics explain habituation? What role does social influence play in the transfer of cultural preferences? What situational dynamics lead to rule conformity?
  • Collective action frames – why are some frames more resonant than others? How important is shared identity to frame resonance?
  • Categories and legitimacy – to what extent does categorical contrast lead to perceptions about legitimacy? How different does something have to be from others in a category before individuals perceive a fit problem? What is the relationship between categorical fit and valuation?
  • Status and power – why are individuals so biased by status? How sensitive are individuals to status differences? What are the cognitive dimensions of status deference?

What else would you add to the list?

Written by brayden king

February 25, 2011 at 10:19 pm

satoshi kanazawa, intelligence and all its correlates

Satoshi Kanazawa seems to believe that intelligence explains, well, a lot of stuff.  Here’s what intelligence is correlated with:

  • a preference for classical music — Kanazawa, Satoshi and Kaja Perina.  Forthcoming.  “Why More Intelligent Individuals Like Classical Music.”  Journal of Behavioral Decision Making.
  • physical attractiveness — Kanazawa, Satoshi.  2011.  “Intelligence and Physical Attractiveness.”  Intelligence. 39:  7-14.
  • substance abuse — Kanazawa, Satoshi and Josephine E. E. U. Hellberg.  2010.  “Intelligence and Substance Use.”  Review of General Psychology. 14:  382-396.
  • being a night owl — Kanazawa, Satoshi and Kaja Perina.  2009.  “Why Night Owls Are More Intelligent.”  Personality and Individual Differences. 47:  685-690.
  • being a liberal and an atheist — Kanazawa, Satoshi.  2010.  “Why Liberals and Atheists Are More Intelligent.”  Social Psychology Quarterly. 73:  33-57.
  • all kinds of other stuff — Kanazawa, Satoshi.  2010.  “Evolutionary Psychology and Intelligence Research.”  American Psychologist.  65:  279-289.

We all want sharp graduate students and colleagues, so based on the above we could almost develop a Kanazawa-quotient, a simple heuristic for hiring and selection.  If you meet, say, three or four of the criteria, you should receive serious consideration: you like classical music, are attractive, have a substance abuse problem, are a night owl, liberal and an atheist.

More Kanazawa here.

Written by teppo

January 19, 2011 at 6:20 am

management journals ranking, crowdsourced

Is Administrative Science Quarterly really the #9 journal in management (as suggested by ISI/impact factors a few years ago)?  Pl-eez!  Is Management Science really #24 (as ranked by ISI in 2009) among management journals?  Is the Journal of Product Innovation Management, ahem, really a better management journal than Organization Science (relegated to #13! in 2008)?

Now you can decide.

Inspired by Kieran and Steve’s ranking initiative (of sociology departments, see here), here’s an effort to crowdsource management journal rankings:


Sure, a ranking like this has lots of problems: apples and oranges (organizational behavior, strategy, org theory journals all in one), the lack of disciplinary journals (for now), etc. It’s certainly not definitive.  But I think a crowdsourced ranking of management journals might nonetheless be quite informative, and it certainly won’t make the mistake of keeping ASQ, Organization Science or Management Science out of the top 5.  Well, we’ll see.

Updated map of where the votes are coming from:

Written by teppo

January 15, 2011 at 12:20 am

What is it like to be Bruno Latour?

When you and I wake up in the morning a series of unconscious microhabits of perception and appreciation take over. These habits structure our “common-sense” perception of the physical and the social worlds. In fact these habits dictate a specific partition of the everyday objects that we encounter into those that are “animate” (agents) and “inanimate” (non-agents). Within the subset of agents that we endow with “animacy” we distinguish those that have a resemblance to you and me (we use the term “humans” to refer to them) and those who do not. We treat the “humans” in a special way, for instance, by holding them responsible for their actions, getting mad at them if they do not acknowledge our existence but we have previously acknowledged theirs, saying “Hello” to some of them in the morning, etc. We also ascribe distinct powers and abilities to those humans (and maybe to those furry non-human agents whom we have grown close to).

The most important of these powers is called (by some humans) “agency.” That is the capacity to make things happen and to be the centers of a special sort of causation that is different from that which befalls non-human agents and non-agents in general (such as my lamp). This is our common-sense ontology. Bruno Latour does not experience the world in this way. In Bruno’s experience, the world is not partitioned into a set of “animated” entities and a set of “non-animated” ones. After much wrestling with previous habits of thought and experience (which Bruno imbibed from his upbringing in a Western household and his education at Western schools), Bruno has taught himself to perceive something that we usually do not notice (although, I hasten to add, it is available for our perception if we start to make an effort to notice): a bunch of those entities that the rest of the world does not ascribe that special property of “agency” to (because the rest of us continue to hold on to our species-centric habit of thought that dictates that this capacity is only held by our human conspecifics) actually behave and affect the world in a manner that is indistinguishable from humans. For instance, they act on humans, they make humans do things, they participate (in concert with humans some of the time; in fact humans can be observed to “recruit” these non-human agents and these “non-agents” for their own self-aggrandizement projects) in the creation of large socio-technical networks that are responsible for a lot of the “wonders” of modern civilization.

The important thing is that Bruno is now able to directly perceive (in an everyday, unproblematic manner) that these “machines” and these “animals” are the source of as much agency as other humans (sometimes even more!). Bruno has gotten so good at practically deploying this new conceptual scheme (along with the radically new ontological partition of the world that it carries with it) that he can transpose these newly acquired and newly mastered habits of perception and appreciation to the history of Science and Politics, where he discovers evidence of the agentic capacities of entities that were previously thought not to exercise them. He has even uncovered evidence of humans being aware of this evidence, who then proceeded to hide it by creating elaborate systems of ontology and metaphysics in which non-human agency was explicitly denied, and in which agency was explicitly conceptualized as the exclusive property of so-called “persons” (where “persons” is a category restricted to humans). These “human” agents were now thought to reside in a special realm that these human apologists called “society.” This “society,” these thinkers proposed, was organized by a specific set of properties and laws distinct from those that “governed” (the humans even borrowed a metaphor from their way of dealing with one another!) the “slice” of the world populated by those entities that “lacked” this agency (the humans called these latter laws “natural laws”).

Giddy with excitement at this discovery, Bruno even wrote a book in which he announced the entire cover-up to the rest of his human counterparts. But the basic point is as follows: when Bruno experiences the world directly, or when Bruno’s brain simulates this experience (e.g. when reading a historical account of the discovery of the germ theory of disease), he does not deploy our common-sense ontology. Instead he practically deploys a conceptual scheme that in many ways does “violence” to our common-sense ontology by radically redrawing it and liberally redistributing certain properties that we restrict to a smaller class of entities. Bruno is thus able to perceive the action of these “agents,” both in the contemporary world and in past historical eras, in a way that escapes most of us. In fact, Bruno recommends that, if you and I want to see the same things that he sees and to escape the limits of our highly restrictive “common-sense” ontology (in which such things as “society,” “persons,” “animals,” “natural laws,” etc. figure prominently), we begin by divesting ourselves (little by little) of old habits of thought and perception and acquiring the new habits that he has worked so hard to master.

The epistemological payoff of doing this would be to see the world just as Bruno sees it: a world in which humans are just one class of agents among many, and in which agency is shared equally by a host of entities to which our common-sense ontology fails to ascribe it (so that we fail to perceive the everyday ways in which these alleged non-agents exercise a sort of “power” and “influence” over our own behavior and action). In this way, Bruno recommends that the ontology specified in our common sense be reduced and displaced by the one specified in what he now calls “actor-network theory.” But this is a terrible name, for it is not a “theory” but a viewpoint: a way of practically reconfiguring our perception of the social and natural worlds. In fact, that last sentence itself used categories from the old ontology, for in Bruno’s world the “master-frame” that divides the things of “nature” from “social” things (Goffman 1974) is no longer operative and no longer serves to structure our perception.

the scientific method fallacy

Lieberson and Horwich’s article on implication (profiled yesterday) raised an issue that I think is very important in social research. I call it the “scientific method fallacy.” Here’s how I would explain it:

The scientific method fallacy is when a researcher mistakenly assumes that a tool used by physical scientists is the only legitimate way to do research. In other words, they move from “X is a great tool for science” to “X is the only way to do science.”

Examples of the scientific method fallacy: “Experiments are the only way you can really know anything.” “You really don’t have a clear theory unless it is expressed mathematically.”

The underlying philosophical claim is that science is pragmatic. The world is too complex and hard to be captured by any single tool, so we need multiple tools. A formal model, or an ethnographic observation, is a map of the world, not the world itself, which suggests the need for more kinds of maps.

If you actually look at what scientists do, you see that no single method rules, even though some are clearly more popular than others. Instead of insisting that “real scientists use X,” one should read science journals. A medical journal might include randomized controlled trials, qualitative case studies, observational data, and even opinion pieces. In engineering, it is common to find reports on prototypes; you can learn a lot from building something, even if the theory isn’t nice and neat. There’s even “grounded theory” in the physical sciences from time to time. For example, when the first particle colliders were invented, physicists had a great time just seeing what new particles were made and how that might lead to new theory. And of course, you also see lots of formal models and controlled experiments across scientific areas.

The bottom line is that reality is more complicated than those pushing the scientific method fallacy suggest. Real science is messy, which means that science progresses on multiple fronts. If an experiment can be done to convincingly settle an issue, great. But a lot of the time, that’s not possible, or even desirable. The lesson for social scientists is that we should stop listening to those who say “real science is done this way” and instead have the courage to make science’s many tools work for us. That’s the way real science works.

Written by fabiorojas

November 10, 2010 at 12:11 am

let’s ask michele lamont a bunch of questions

The Social Science History Association will have a panel on Michele Lamont’s How Professors Think. The panelists include Regina Werum, Steven Epstein, James Evans and myself. It will be in Chicago on November 18, 2010 at 2:30pm at the Palmer House Hilton. [Anyone have the room #?]

My comments will be along the lines of my earlier post, where I responded to a talk Lamont gave at the University of Michigan. Post your own questions on the book in the comments.  I will summarize them and include them in my own comments.

Written by fabiorojas

November 3, 2010 at 12:57 am

learn a bunch of methods, enjoy some good weather

January is about the best time of the year to be in the Southwest.  While most of your friends are out shoveling snow, you are enjoying crisp, sunny days in the 70s and low 80s.  Not directly related to this, but AZ sociology is one of the best places in the world to be introduced to a variety of methodological approaches.  It is certainly the part of my graduate education that I now appreciate the most.

I just found out that all of this will come together in the Arizona Methods Workshops to be held on January 6th-8th of next year (2011 for those of you who are counting).  See the flyer here with cost and lodging info.  Charles Ragin, Erin Leahey, Ron Breiger and Scott Eliason will be holding court on everything from centrality measures and latent variables with multiple indicators to the difference between coverage and consistency in a QCA analysis and log-linear modeling strategies. Certainly enough to satisfy even the most demanding methods head!
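For readers who haven’t run into the coverage/consistency distinction before, it has a simple arithmetic core. As a minimal sketch (with made-up fuzzy-set membership scores, not anything from the workshop itself): consistency measures how well cases with condition X fit inside outcome Y, while coverage measures how much of Y the condition X accounts for.

```python
# Hypothetical fuzzy-set membership scores for a condition X and an outcome Y,
# one pair per case. These numbers are invented for illustration only.
xs = [0.8, 0.6, 0.9, 0.3, 0.7]
ys = [0.9, 0.7, 0.8, 0.6, 0.9]

# The overlap between X and Y: sum of the pointwise minimums.
overlap = sum(min(x, y) for x, y in zip(xs, ys))

# Consistency: how well X behaves as a subset of Y (overlap relative to X).
consistency = overlap / sum(xs)

# Coverage: how much of the outcome Y is accounted for by X (overlap relative to Y).
coverage = overlap / sum(ys)

print(round(consistency, 2), round(coverage, 2))  # 0.97 0.82
```

With these invented scores, X is highly consistent as a subset of the outcome (≈0.97) but covers somewhat less of it (≈0.82); the two numbers can diverge sharply in real analyses, which is exactly the distinction the workshop topic refers to.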

Written by Omar

August 11, 2010 at 4:20 pm