orgtheory.net

Archive for the ‘knowledge’ Category

answering the “so what?” question: chuck tilly’s 2003 guide

One of the perennial issues for novice and expert researchers alike is answering the “so what” question of why bother researching a particular phenomenon.  In particular, sociologists must justify their places in a big-tent discipline, and orgheads swim in the murky expanse of interdisciplinary waters.  For such researchers, this question must be answered in presentations and publications, particularly in the contributions section.

While it’s easy for expert researchers to melt into a potentially crippling existential sweat about the fathomless unknown unknowns, novice researchers, unburdened by such knowledge, face a broader vista.  According to Chuck Tilly,* researchers need to decide whether to enter existing conversations, bridge two different conversations, initiate a new conversation, or…???**

Since I couldn’t remember Tilly’s exact quote about conversations despite hearing it at least twice during his famous Politics and Protest workshop (before at Columbia, now at the GC), I pinged CCNY colleague John Krinsky.

Krinsky responded to my inquiry by sharing this great questionnaire and chart of low/high risk/reward research: TillyQuestionnaire_2003.  This document offers helpful exercises for discerning possible contributions for research projects at all stages.

*For Krinsky’s (and others’) tribute to Tilly’s mentorship and scholarship, go here.

** If anyone remembers Tilly’s exact quote about conversations, please share in the comments.

Written by katherinechen

October 24, 2018 at 3:16 pm

the antitrust equilibrium and three pathways to policy change

Antitrust is one of the classic topics in economic sociology. Fligstein’s The Transformation of Corporate Control and Dobbin’s Forging Industrial Policy both dealt with how the rules that govern economic life are created. But with some exceptions, it hasn’t received a lot of attention in the last decade in econ soc.

In fact, antitrust hasn’t been on the public radar that much at all. After the Microsoft case was settled in 2001, antitrust policy just hasn’t thrown up a lot of issues that have gotten wide public attention, beyond maybe griping about airline mergers.

But in the last year or so, it seems like popular interest in antitrust is starting to bubble up again.

Just in the last few months, there have been several widely circulated pieces on antitrust policy. Washington Monthly, the Atlantic, ProPublica (twice), the American Prospect—all these have criticized existing antitrust policy and argued for strengthening it.

This is timely for me, because I’ve also been studying antitrust. As a policy domain that is both heavily technocratic and heavily influenced by economists, it’s a great place to think about the role of economics in public policy.

Yesterday I put a draft paper up on SocArXiv on the changing role of economics in antitrust policy. The 1970s saw a big reversal in antitrust, when we went from a regime that was highly skeptical of mergers and all sorts of restraints on trade to one that saw them as generally efficiency-promoting and beneficial for consumers. At the same time, the influence of economics in antitrust policy increased dramatically.

But while these two developments are definitely related—there was a close affinity between the Chicago School and the relaxed antitrust policy of the Reagan administration, for example—there’s no simple relationship here: economists’ influence began to increase at a time when they were more favorable to antitrust intervention, and after the 1980s most economists rejected the strongest Chicago arguments.

I might write about the sociology part of the paper later, but in this post I just want to touch on the question of what this history implies about the present moment and the possibility of change in antitrust policy.


Written by epopp

January 9, 2017 at 6:51 pm

does making one’s scholarly mark mean transplanting the shoulders of giants elsewhere?

The Society for the Advancement of Socio-Economics (SASE) website has made Neil Fligstein‘s powerpoint slides on the history of economic sociology available for general viewing as a PDF.  (Here are the slides in powerpoint form: 1469704310_imagining_economic_sociology_-socio-economics-fligstein) It’s a fascinating read of the development of a sub-field across continents, and it also includes discussion of a challenge that some believe plagues the sociology discipline:

Both Max Weber and Thomas Kuhn recognized that Sociology as a discipline might be doomed to never cumulate knowledge.

  • Sociology would proceed as a set of research projects which reflected the current concerns and interests of a small set of scholars
  • When the group hit a dead end in producing novel results, the research program would die out only to be replaced by another one
  • Progress in economic sociology is likely to be made by putting our research programs into dialogue with one another to make sense of how the various mechanisms that structure markets interact
  • Failure to do so risks fragmenting the field into ever smaller pieces and remaining subject to fashion and fad

Fligstein traces these field-fragmenting tendencies to the current structure of the academic field.  He depicts sociology as rewarding scholars for applying ideas from one area to another area where current theorizing is insufficient, rather than expanding existing research:

  • … the idea is not to work on the edge of some mature existing research program with the goal of expanding it
  • But instead, one should be on the lookout for new ideas from different research programs to borrow to make sense for what should be done next

In short, scholars tend to form intellectual islands where they can commune with other like-minded scholars.  Bridging paths to other islands can generate rewards, but the efforts needed to disseminate knowledge more widely – even within a discipline – can exceed any one person’s capacity.

 

Written by katherinechen

October 10, 2016 at 6:30 pm

bad reporting on bad science

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn’t seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity will drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.

The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.
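If you want a feel for how that kind of dynamic works, here’s a toy replicator-style simulation in Python. To be clear, the parameters and payoff function are mine, invented purely for illustration (this is not Smaldino and McElreath’s actual model), but it captures the gist: labs that put in less methodological effort crank out more publications, new labs copy the most-published labs, and average rigor drifts down even though no one chooses bad science on purpose.

```python
import random

# Toy sketch of a selection-on-publication-count dynamic (illustrative only).
# Each "lab" has an effort level in (0, 1). Lower effort means more studies
# per generation. New labs copy existing labs with probability proportional
# to publication count, plus a little mutation in effort.

N_LABS = 100
GENERATIONS = 200
MUTATION_SD = 0.02

labs = [random.uniform(0.2, 0.8) for _ in range(N_LABS)]

def publications(effort):
    # More effort means fewer, slower studies; only quantity is rewarded here.
    return 10 * (1 - effort) + 1

def false_positive_rate(effort):
    # Illustrative: rigor buys a lower false-positive rate.
    return 0.5 * (1 - effort) + 0.05

for gen in range(GENERATIONS):
    payoffs = [publications(e) for e in labs]
    new_labs = []
    for _ in range(N_LABS):
        parent = random.choices(labs, weights=payoffs, k=1)[0]
        effort = min(1.0, max(0.0, parent + random.gauss(0, MUTATION_SD)))
        new_labs.append(effort)
    labs = new_labs

mean_effort = sum(labs) / N_LABS
print(f"mean effort after {GENERATIONS} generations: {mean_effort:.2f}")
print(f"implied false-positive rate: {false_positive_rate(mean_effort):.2f}")
```

Run it a few times and mean effort collapses toward zero, because nothing in the payoff rewards being right. That, in cartoon form, is the point the authors make with their (much more careful) model.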

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on papers that reported small effect sizes. (The logic is that such small effects are not only most common in these fields, but also more likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.

 

 

The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.

The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or if the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm

slave names no longer forgotten

The Virginia Historical Society has a website that brings together many documents from the antebellum period of American history so that you can search for the names of African Americans who might otherwise be lost to history. From the website:

This database is the latest step by the Virginia Historical Society to increase access to its varied collections relating to Virginians of African descent. Since its founding in 1831, the VHS has collected unpublished manuscripts, a collection that now numbers more than 8 million processed items.

Within these documents are numerous accounts that collectively help tell the stories of African Americans who have lived in the state over the centuries. Our first effort to improve access to these stories came in 1995 with publication of our Guide to African American Manuscripts. A second edition appeared in 2002, and the online version is continually updated as new sources enter our catalog (http://www.vahistorical.org/aamcvhs/guide_intro.htm).

The next step we envisioned would be to create a database of the names of all the enslaved Virginians that appear in our unpublished documents. Thanks to a generous grant from Dominion Resources and the Dominion Foundation in January 2011, we launched the project that has resulted in this online resource. Named Unknown No Longer, the database seeks to lift from the obscurity of unpublished historical records as much biographical detail as remains of the enslaved Virginians named in those documents. In some cases there may only be a name on a list; in others more details survive, including family relationships, occupations, and life dates.

Check it out.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

 

Written by fabiorojas

June 24, 2016 at 12:07 am

a history of “command and control”, or, thomas schelling is behind every door

I’m working on a paper about the regulatory reform movement of the 1970s. If you’ve read anything at all about regulation, even the newspaper, you’ve probably heard the term “command and control”.

“Command and control” is a common description for government regulation that prescribes what some actor should do. So, for example, the CAFE standards say that the cars produced by car manufacturers must, on average, have a certain level of fuel efficiency. Or the EPA’s air quality standards say that ozone levels cannot exceed a certain number of parts per billion. Or such regulations may simply forbid some things, like the use of asbestos in many types of products.

This is typically contrasted with incentive-based regulation, or market-based regulation, which doesn’t set an absolute standard but imposes a cost on an undesirable behavior, like carbon taxes, or provides some kind of reward for good (usually meaning efficient) performance, as utility regulators often do.

The phrase “command and control” is commonly used in the academic literature, where it is not explicitly pejorative. Yet it’s kind of a loaded term. Who wants to be “commanded” and “controlled”?

So as I started working on this paper, I became more and more curious about the phrase, which only seemed to date back to the late 1970s, as the deregulatory movement really got rolling. Before that, it was a military term.

To the extent that I had thought about it at all, I assumed it was a clever framing coined by some group like the American Enterprise Institute that wanted to draw attention to regulation as a form of government overreach.

So I asked Susana Muñiz Moreno, a terrific graduate student working on policy expertise in Mexico, to look into it. She found newspaper references starting in 1977, when the New York Times cited CEA chair Charles Schultze’s argument that “the current ‘command-and-control’ approach to social goals, which establishes specific standards to be met and polices compliance with each standard, is not only inefficient ‘but productive of far more intrusive government than is necessary.’”


From the Aug. 21, 1977 edition of the New York Times

And sure enough, Schultze’s influential book of that year, The Public Use of Private Interest, uses the phrase a number of times. Which makes sense, as Schultze was instrumental in advancing regulatory reform and plays a key role in my story. But he’s clearly not the AEI type I would have imagined coining such a phrase—before becoming Carter’s CEA chair Schultze was at Brookings, and before that he was LBJ’s budget director.

Nevertheless, given Schultze’s influence and the lack of earlier media use of the term, I figured he probably came up with it and it took off from there.

But I started poking around Google Scholar, mostly because I wondered if some more small-government-oriented reformer of regulation had been using it prior to Schultze. I thought James C. Miller III might be a possibility.

I didn’t find any early uses of the term from Miller, but you know what I did find? An obscure book chapter called “Command and Control” written by Thomas Schelling in 1974.

Sociologists probably best know Schelling from his 1978 book, Micromotives and Macrobehavior, and its tipping point model, which shows how the decisions of agents who prefer that even a relatively small proportion of their neighbors be like them (read: of the same race) can quickly lead to a highly segregated space. Its insights are regularly referenced in the literature on neighborhoods and segregation.

(If you haven’t seen it, you should totally check out this brilliant visualization of the model, “Parable of the Polygons”.)
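For those who’d rather see the mechanism than the animation, here is a bare-bones version in Python. The grid size, empty share, and 30% threshold are illustrative choices on my part, not Schelling’s original parameters, but even this stripped-down sketch reproduces the basic result: mild individual preferences produce strong aggregate segregation.

```python
import random

# Minimal Schelling-style tipping model (illustrative parameters).
# Agents of two types sit on a grid; anyone with too few same-type
# neighbors moves to a random empty cell.

SIZE = 30          # 30 x 30 grid
EMPTY_SHARE = 0.1  # fraction of cells left empty
THRESHOLD = 0.3    # want at least 30% of occupied neighbors to match
ROUNDS = 50

cells = [1, 2] * int(SIZE * SIZE * (1 - EMPTY_SHARE) / 2)
cells += [0] * (SIZE * SIZE - len(cells))  # 0 = empty
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def occupied_neighbors(r, c):
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                if grid[r + dr][c + dc] != 0:
                    out.append(grid[r + dr][c + dc])
    return out

def same_share(r, c):
    nbrs = occupied_neighbors(r, c)
    return sum(n == grid[r][c] for n in nbrs) / len(nbrs) if nbrs else 1.0

for _ in range(ROUNDS):
    unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] != 0 and same_share(r, c) < THRESHOLD]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    random.shuffle(unhappy)
    for r, c in unhappy:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))

n_agents = SIZE * SIZE - cells.count(0)
avg = sum(same_share(r, c) for r in range(SIZE) for c in range(SIZE)
          if grid[r][c] != 0) / n_agents
print(f"average share of same-type neighbors: {avg:.2f}")  # typically well above 0.3
```

Agents only demand that 30% of their neighbors look like them, yet the average neighborhood ends up far more homogeneous than that: the macro outcome nobody asked for.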


Economists know him for a broader range of game theoretic work on decision-making and strategy—work that was recognized in 2005 with a Nobel Prize.

Anyway, I just checked out the chapter—and I’m pretty sure this is the original source. Like most of Schelling’s work, it’s written in crystal-clear prose. The chapter itself is only secondarily about government regulation; it’s in an edited book about the social responsibility of the corporation. It hasn’t been cited often—29 times on Google Scholar, often in the context of business ethics.

Schelling muses on the difficulty of enforcing some behavioral change—like making taxi passengers fasten their seat belts—even for the head of a firm, and considers how organizations try to accomplish such goals: for example, by supporting government requirements that might be more effective than their own policing efforts.

It’s a wandering but fascinating reflection, with a Carnegie-School feel to it. And the “command and control” of the title doesn’t refer to government regulation, but to the difficulties faced by organizational leaders who are trying to command and control.

In fact, if I didn’t know the context I’d think this was a completely coincidental use of the phrase. But the volume, Social Responsibility and the Business Predicament, is part of the same Brookings series, “Studies in the Regulation of Economic Activity,” that published Schultze’s lectures in 1977, and which catalyzed a network of economists studying regulation in the early 1970s.

So while Schultze adapts the phrase for his own needs, and it’s possible that he could have borrowed the military phrase directly, my strong hunch is that he is lifting it from Schelling. Which actually fits my larger story—which highlights how the deregulatory movement built on the work of McNamara’s whiz kids from RAND, a community Schelling was an integral part of—quite well.

I can’t resist ending with one other contribution Schelling made to the use of economics in policy beyond his strategy work: the 1968 essay, “The Life You Save May Be Your Own.” (He was good with titles—this one was borrowed from a Flannery O’Connor story.) This introduced the willingness-to-pay concept as a way to value life—the idea that one could calculate how much people valued their own lives based on how much they had to be paid in order to accept very small risks of death. Controversial at the time, the proposal eventually became the main method policymakers used to place a monetary value on life.
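To see how the arithmetic works, with numbers invented purely for illustration: suppose workers demand roughly an extra $500 a year to take a job carrying an additional 1-in-10,000 annual risk of death. The implied value of a statistical life is then $500 / 0.0001 = $5 million. That’s the basic move behind the figures regulatory agencies now plug into cost-benefit analyses.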

Thomas Schelling. He really got around.

Written by epopp

June 8, 2016 at 3:05 pm

tying our own noose with data? higher ed edition

I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”

But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.


Maybe this wasn’t such a good idea.

The knowledge I have in mind is the student-unit record. See? I told you it was boring.

The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.

There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.

There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.

But there’s currently no comprehensive federal student-unit record system. In fact at the moment creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.

Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.

In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.

But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.

So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.

First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.

But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.

Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”

Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.

Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.

Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.

Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, selection effects — or maybe it’s just because they’re richer — but college graduates are happier and healthier than nongraduates.

The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.

What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.

Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.

But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.

Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But using it as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (among the top five on Brookings’ list) provide the most value — versus that they have select students who are distinctive in ways that aren’t reflected by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.

So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.

If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.

Written by epopp

June 3, 2016 at 2:06 pm

that chocolate milk study: can we blame the media?

A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”

The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.

The summary also raises questions about the sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of these got concussions during one season? I would hope not enough to provide statistical power — this NAS report suggests high schoolers get 11 concussions per 10,000 football games and practices.
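A rough back-of-envelope, using my own guesses rather than anything in the press release: if a player logs something like 100 practices and games in a season, that rate works out to 0.0011 × 100 ≈ 0.11 expected concussions per player, or roughly 50 concussed players across the full sample of 474, and that is before you split them between the milk and no-milk groups. Not a lot of statistical power to work with.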

And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
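For what it’s worth: if each of the 36 measures was tested at the conventional p < .05 cutoff with no correction for multiple comparisons (the press release doesn’t say), you’d expect about 36 × 0.05 ≈ 2 “significant” differences by chance alone even if the milk did nothing at all. Four out of thirty-six is not a result to reorganize a school district’s beverage contract around.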

Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meanwhile, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.

Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.

There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.

But in our focus on universities’ challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there’s a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it’s got a heavy bent toward controversial science, like climate change or fracking.

More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.

It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more than male hurricanes. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)

This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.

None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.

After all, if academics themselves can’t resist the urge to favor the counterintuitive over the mundane, we can hardly blame journalists for doing the same.

Written by epopp

January 18, 2016 at 1:23 pm

asr reviewer guidelines: comparative-historical edition

[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]

Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.

These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.

With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.

One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.

One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly-held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?

Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of events presented. Since the quality of historical sociologists’ employment of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, for instance, archives are often incomplete records of events collected by directly “interested” actors (often states) whose documents themselves remain interpretive slices of history, rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this to be a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.

Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway in format constraints (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR.  Beyond the constraints of word-limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can sometimes rely on radically different and particularly unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since the reviewer may find their knowledge of the case — and especially of relevant archives — strained to its limit.

In sum, we welcome efforts by ASR to provide review guidelines for historical sociology.  At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.

Written by epopp

September 8, 2015 at 5:51 pm

book spotlight: in defense of disciplines by jerry jacobs

Sadly, I could not be present at the SSS meetings, so I wrote out my comments about Jerry Jacobs’ wonderful In Defense of Disciplines. Go buy the book!

I want to start by thanking Sarah Winslow and the Southern Sociological Society for organizing this session. Jerry is a leading sociologist of higher education and his work merits sustained attention and critique. It is an honor to be allowed to participate in this event.

In a way, this is an awkward critique to write because I agree with much of what In Defense of Disciplines has to say. For example, on the basic conceptual issue of what counts as a discipline, Jerry’s definition is very close to my own feeling on the subject – disciplines are closed social fields of self-certifying intellectuals who are institutionalized in universities. In my work on the Black Studies movement, I found this approach to be very useful in that it identifies how interdisciplinary fields like Ethnic Studies are different from older fields like history or sociology. They haven’t yet achieved closure and rely on allied disciplines for personnel. I call fields like Black Studies “permanent inter-disciplines” because they can’t quite reach the status of a discipline and they don’t seem to be going anywhere.


Written by fabiorojas

May 15, 2015 at 12:01 am