normal science, social problems, and plugging and chugging


When I was a sophomore in high school, I had a math teacher who called some of our homework problems “plug-and-chug”: we knew whatever formula we had to use, and we just plugged in the numbers and chugged out the answer. I use the term now to describe certain kinds of articles, most of them quantitative, that identify some particular sociological problem, usually also a social problem (say, racial disparity in school discipline), and then find either a new dataset or a new way to approach an old dataset, plugging in the data and chugging out some relevant findings.

It’s a normal-science approach to sociology, and some might scoff at it, but there’s a compelling argument that one reason sociology is less powerful than, say, economics is precisely that there are too many sociologist chefs trying to paradigm-shift the kitchen. And in those subdisciplines that take a more normal-science approach (such as the sociology of education), there is a core problem that various scholars approach from different angles. Some have bigger projects than others, but everyone’s basically putting water in the same bucket.

For what it’s worth, for the sociology of education, I think that bucket is inequality within and because of organized schooling, with that inequality understood to be along lines of SES, race, ethnicity, gender, or sexuality. It’s hard for folks like me, who study schools without really looking at inequality, to fit into the sociology of education, but that might just be the cost of a subdiscipline with an admirably focused commitment to a particular social problem. As such, sociologists of education like me wind up doing work in culture or theory, or somewhere else in sociology’s pretty big tent. (For example, I sent a paper to the education section this year; it was rejected, but then picked up by the culture section.)

Of course, there are lots of articles in the sociology of education that are not plug-and-chug in the way I’ve described, but what I’m arguing here is that a kind of normal science approach makes plug-and-chug articles easier to pass muster: if there’s a set list of problems, then new data on those problems (data that isn’t necessarily acquired in a methodologically or theoretically interesting way) is important in and of itself.

There are other kinds of plug-and-chug sociology, of course. There’s a qualitative species, which takes certain ethnographic or interview data and shows how some theorist would interpret it, without really telling us much about either the theorist or the empirical site. And there’s plug-and-chug work in all of the many sociological subdisciplines. In fact, I’m going to propose a hypothesis that I think is testable but that I don’t really have time to test: the closer an article is to a question about stratification or some other social problem about which sociologists are deeply concerned, the less it has to provide anything interesting in its methods or theory.

I don’t think that’s a bad thing (I want to fix stratification too!) but it does wind up having an interesting side-effect, which is that those who don’t study stratification or specific social problems more central to the discipline’s identity (prejudice or discrimination for example) have to develop particular theoretical or methodological chops to justify their work, in a way that those who study stratification or these other social problems do not. That winds up furthering the idea that certain subfields are “less theoretical” than others when there seems to me no obvious reason any one subfield should be more or less theoretical than any other.

Thanks to my comparative-historical cabdriver, Barry Eidlin, who I talked to about this, and who confirmed all of my findings in a pithy way that I will use to open my monograph. (Actually, he showed me how his own very interesting work might well disprove the perhaps facile categorization I described above, which is sort of always the way, I think. But that’s okay. That just means he’ll be the cabdriver anecdote in the conclusion.)

Written by jeffguhin

June 10, 2016 at 11:18 pm

One Response


  1. I think a contributing problem is that we, as a discipline, scoff at the research note. This is a shame. Often, the analysis we produce is interesting in its own right, even if it is what we might consider normal science (or, even more simply, data). The pressure to concoct a novel theoretical twist with each new piece of research can result in theoretical gymnastics that the data don’t really require to be interesting. It is true that any interpretation of data, and indeed any measurement task, is theoretical, but we don’t need to reinvent the wheel with each new project. Sure, these notes and pieces of normal science might not end up in SF/ASR/AJS, but even regional journals push authors to do this work in an effort (I suppose) to mimic top journals. What’s worse, it all contributes to the “framing” problem, where any paper can be rejected for any reason, assuming that reason is phrased as a framing problem. So, not only do we have to innovate theoretically again and again, those innovations can be the source of a rejection all their own – even while we (as reviewers) admit the case and analysis are interesting and important. That sounds like lunacy to me.



    June 11, 2016 at 3:59 pm

