Archive for the ‘guest bloggers’ Category
[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]
Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.
These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.
With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.
One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.
One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?
Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of the events presented. Since the quality of historical sociologists’ use of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, archives, for instance, are often incomplete records of events collected by directly “interested” actors (often states), whose documents themselves remain interpretive slices of history rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this as a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.
Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway on format (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR. Beyond word limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can rely on radically different and unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since reviewers may find their knowledge of the case — and especially of relevant archives — strained to its limit.
In sum, we welcome efforts by ASR to provide review guidelines for historical sociology. At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.
Cristobal Young is an assistant professor in Stanford’s Department of Sociology. He works on quantitative methods, stratification, and economic sociology. In this post, co-authored with Aaron Horvath, he reports on an attempt to obtain replication packages for 53 sociological studies. Spoiler: we need to do better.
Do Sociologists Release Their Data and Code? Disappointing Results from a Field Experiment on Replication.
Replication packages – releasing the complete data and code for a published article – are a growing currency in 21st century social science, and for good reasons. Replication packages help to spread methodological innovations, facilitate understanding of methods, and show confidence in findings. Yet, we found that few sociologists are willing or able to share the exact details of their analysis.
We conducted a small field experiment as part of a graduate course in statistical analysis. Students selected sociological articles that they admired and wanted to learn from, and asked the authors for a replication package.
Of the 53 sociologists contacted, only 15 (28 percent) provided a replication package. This is a missed opportunity for the learning and development of new sociologists, as well as an unfortunate marker of the state of open science within our field.
Some 19 percent of authors never replied to repeated requests, or first replied but never provided a package. More than half (56 percent) directly refused to release their data and code. Sometimes there were good reasons. Twelve authors (23 percent) cited legal or IRB limitations on their ability to share their data. But only one of these authors provided the statistical code to show how the confidential data were analyzed.
Why So Little Response?
A common reason for not releasing a replication package was that the author had lost the data, often due to reported computer or hard-drive malfunctions. Many others said they were too busy or felt that providing a replication package would be too complicated. One author said they had never heard of a replication package. The solution here is simple: compiling a replication package should be part of a journal article’s final copy-editing and page-proofing process.
More troubling is that a few authors openly rejected the principle of replication, saying in effect, “read the paper and figure it out yourself.” One articulated a deep opposition, on the grounds that replication packages break down the “barriers to entry” that protect researchers from scrutiny and intellectual competition from others.
The Case for Higher Standards
Methodology sections of research articles are, by necessity, broad and abstract descriptions of their procedures. However, in most quantitative analyses, the exact methods and code are on the author’s computer. Readers should be able to download and run replication packages as easily as they can download and read published articles. The methodology section should not be a “barrier to entry,” but rather an on-ramp to an open and shared scholarly enterprise.
When authors released replication packages, it was enlightening for students to look “under the hood” of research they admired and see exactly how results were produced. Students finished the process with deeper understanding of – and greater confidence in – the research. Replication packages also serve as a research accelerator: their transparency instills practical insight and confidence – bridging the gap between chalkboard statistics and actual cutting-edge research – and invites younger scholars to build on the shoulders of success. As Gary King has emphasized, replications have become first publications for many students and have helped launch many careers – all while ramping up citations to the original articles.
In our small sample, little more than a quarter of sociologists released their data and code. Top journals in political science and economics now require on-line replication packages. Transparency is no less crucial in sociology for the accumulation of knowledge, methods, and capabilities among young scholars. Sociologists – and ultimately, sociology journals – should embrace replication packages as part of the lasting contribution of their research.
Table 1. Response to Replication Request
| Response | N | % |
| --- | --- | --- |
| Yes: Released data and code for paper | 15 | 28% |
| No: Did not release | 38 | 72% |
| *Reasons for “No”* | | |
| IRB / legal / confidentiality issue | 12 | 23% |
| No response / no follow-up | 10 | 19% |
| Don’t have data | 6 | 11% |
| Don’t have time / too complicated | 6 | 11% |
| Still using the data | 2 | 4% |
| ‘See the article and figure it out’ | 2 | 4% |
Note: For replication and transparency, a blinded copy of the data is available online. Each author’s identity is blinded, but the journal name, year of publication, and response code are available. Half of the requests addressed articles in the top three journals, and more than half of the articles were published in the last three years.
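In the spirit of the post, the percentages in Table 1 can themselves be reproduced from the raw counts. Here is a minimal Python sketch of that tabulation; the counts come from the table above, while the variable names and the `Counter`-based layout are our own illustrative choices, not code from the study:

```python
from collections import Counter

# Outcomes of the 53 replication-package requests (counts from Table 1).
responses = Counter({
    "released data and code": 15,
    "IRB / legal / confidentiality issue": 12,
    "no response / no follow-up": 10,
    "don't have data": 6,
    "don't have time / too complicated": 6,
    "still using the data": 2,
    "'see the article and figure it out'": 2,
})

total = sum(responses.values())  # 53 requests in all
for outcome, n in responses.most_common():
    print(f"{outcome}: {n} ({n / total:.0%})")

released = responses["released data and code"]
refused = total - released
print(f"Release rate: {released}/{total} = {released / total:.0%}")
print(f"Did not release: {refused}/{total} = {refused / total:.0%}")
```

Running this recovers the 28% release rate and 72% non-release rate reported in the table, which is exactly the kind of check a replication package makes possible.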
Figure 1: Illustrative Quotes from Student Correspondence with Authors:
- “Here is the data file and Stata .do file to reproduce [the] Tables…. Let me know if you have any questions.”
- “[Attached are] data and R code that does all regression models in the paper. Assuming that you know R, you could literally redo the entire paper in a few minutes.”
- “While I applaud your efforts to replicate my research, the best guidance I can offer is that the details about the data and analysis strategies are in the paper.”
- “I don’t keep or produce ‘replication packages’… Data takes a significant amount of human capital and financial resources, and serves as a barrier-to-entry against other researchers… they can do it themselves.”
Victoria Reyes is an assistant professor at Bryn Mawr in the Growth and Structure of Cities Department. Her research is about specific urban sites in the global system. This post addresses her recent article in Theory and Society.
Thanks to Fabio for allowing me to post about my work. I study global inequality through a cultural and relational lens, and am particularly interested in places of foreign control, by which I mean places that are either foreign-owned or heavily influenced by foreigners. I have two recent articles about this (see below for citations and abstracts).
In one, recently published in Theory and Society, I draw on and extend work on global cities and cities along geopolitical borders to develop a concept I call “global borderlands”—semi-autonomous, foreign-controlled, geographic locations geared toward international exchange. These are places like overseas military bases, embassies, tourist resorts, international branch campuses (e.g. NYU Abu Dhabi), and special economic zones, where tariff barriers are relaxed. When I speak of global borderlands, I do not necessarily assume negative connotations. Indeed, some people may enjoy or prefer working, visiting, and/or living within global borderlands, while others are excluded from these places.
I argue that these places have three features in common. First, semi-autonomy and foreign control: these are places where the question of “who rules?” is fluid and negotiated, and where regulation depends on nationality. Second, like many other places, global borderlands are defined by geographic and symbolic boundaries. Third, these places are built on unequal relations, by which I refer to structural inequality that, again, does not necessarily come bundled with negative connotations. For example, in my work I examine the Harbor Point mall within the Subic Bay Freeport Zone, Philippines, and compare it to both the SM mall in Olongapo City—which is 30 feet away, on the other side of the Freeport’s gate—and Hanjin Shipping, a Korean-owned shipping and manufacturing company within the Freeport that is known for human rights violations. Although most Harbor Point mall employees cannot afford to purchase lunch within the mall, they prefer working there because of the higher wages and relatively more stable employment compared to similar work outside the Freeport.
The Subic Bay Freeport Zone, Philippines occupies the site of the former U.S. Subic Bay Naval Base. In another article, in City & Community, I examine how the legacies of the U.S. military continue to influence present-day practices and discourses, and Filipino elites’ role in institutionalizing these legacies.
Is management research a folly? If not, whose interests does it serve? And whose interests should it serve?
The questions of good for what and good for whom are worth revisiting. There is reason to worry that the reward system in our field, particularly in the publication process, is misaligned with the goals of good science.
There can be little doubt that a lot of activity goes into management research: according to the Web of Knowledge, over 8,000 articles are published every year in the 170+ journals in the field of “Management,” each adding more and more new rooms to the field’s Winchester Mystery House. But how do we evaluate this research? How do we know what a contribution is, or how individual articles add up? In some sciences, progress can be measured by finding answers to questions, not merely reporting significant effects. In many social sciences, however, including organization studies, progress is harder to judge, and the kinds of questions we ask may not yield firm answers (e.g., do nice guys finish last?). Instead we seek to measure the contribution of research by its impact.
Management of humans by other humans may be increasingly anachronistic. If managers are not our primary constituency, then who is? Perhaps it is each other. But this might lead us back into the Winchester Mystery House, where novelty rules. Alternatively, if our ultimate constituency is the broader public that is meant to benefit from the activities of business, then this suggests a different set of standards for evaluation.
Businesses and governments are making decisions now that will shape the life chances of workers, consumers, and citizens for decades to come. If we want to shape those decisions for public benefit, on the basis of rigorous research, we need to make sure we know the constituency that research is serving.
Hi all, I’m Ellen Berrey. I’ll be guest blogging over the next few weeks about inequality, culture, race, organizations, law, and multi-case ethnography. Thanks for the invite, Katherine, and the warm welcomes! Here’s what I’m all about: I’m an assistant professor of sociology at the University at Buffalo-SUNY and an affiliated scholar of the American Bar Foundation. I received my PhD from Northwestern in 2008. This fall, I jet off from the Midwest to join the faculty of the University of Denver (well, I’m actually going to drive the fading 2003 Toyota I inherited from my mom).
As a critical cultural sociologist, I study organizational, political, and legal efforts to address inequality. My new book, The Enigma of Diversity: The Language of Race and the Limits of Racial Justice (University of Chicago Press), is officially out next Monday (yay!). I’ll dive into that in future posts, for sure. I’m writing up another book on employment discrimination litigation with Robert Nelson and Laura Beth Nielsen, Rights on Trial: Employment Civil Rights in Work and in Court. These and my articles and other projects explore organizational symbolic politics, affirmative action in college admissions (also here and here), affirmative action activism (and here), corporate diversity management, fairness in discrimination litigation, discrimination law and inequality (and here), gentrification politics, and benefit corporations.
I’ll kick off today with some thoughts about a theme that I’ve been exploring for many years:
How can powerful, elite-led organizations advance broad progressive causes like social justice or environmental protection? I’m not just referring to self-identified activists but also corporations, universities, community agencies, foundations, churches, and the like. Various arms of the state, too, are supposed to forward social causes by, say, ending discrimination at work or alleviating poverty. To what extent can organizational decision-makers create positive social change through discrete initiatives and policies—or do they mostly just create the appearance of effective action? Time and again, perhaps inevitably, top-down efforts to address social problems end up creating new problems for those they supposedly serve.
To the point: Have you come across great research that examines how organizations can bring about greater equality and engages organizational theory?
I think this topic is especially important for those of us who study organizations and inequality. We typically focus on the harms that organizations cause. We know, for example, that employers perpetuate racial, class, and gender hierarchies within their own ranks through their hiring and promotion strategies. I believe we could move the field forward if we could also point to effective, even inspiring ways in which organizations mitigate inequities. I have in mind here research that goes beyond applied evaluations and that resists the Pollyanna-ish temptation to sing the praises of corporations. Critical research sometimes asks these questions, but it often seems to primarily look for (and find) wrongdoing. Simplistically, I think of this imperative in terms of looking, at once, at the good and the bad of what organizations are achieving. Alexandra Kalev, Frank Dobbin, and Erin Kelly’s much-cited American Sociological Review article on diversity management programs is one exemplar. There is room for other approaches as well, including those that foreground power and meaning-making. Together with the relational turn in the study of organizational inequality, this is a promising frontier to explore.
More soon. Looking forward to the conversation.