orgtheory.net

Archive for the ‘epistemology and methods’ Category

is ethnography the most policy-relevant sociology?

The New York Times – the Upshot, no less – is feeling the love for sociology today. Which is great. Neil Irwin suggests that sociologists have a lot to say about the current state of affairs in the U.S., and perhaps might merit a little more attention relative to you-know-who.

Irwin emphasizes sociologists’ understanding of “how tied up work is with a sense of purpose and identity,” quotes Michèle Lamont and Herb Gans, and mentions the work of Ofer Sharone, Jennifer Silva, and Matt Desmond.

Which all reinforces something I’ve been thinking about for a while—that ethnography, that often-maligned, inadequately scientific method, is the sociology most likely to break through to policymakers and the larger public. Besides Matt Desmond with Evicted, which other sociologists have made it into the consciousness of policy types in the last couple of years? Of the four who immediately pop to mind—Kathy Edin, Alice Goffman, Arlie Hochschild, and Sara Goldrick-Rab—three are ethnographers.

I think there are a couple reasons for this. One is that as applied microeconomics has moved more and more into the traditional territory of quantitative sociology, it has created a knowledge base that is weirdly parallel to sociology, but not in very direct communication with it, because economists tend to discount work that isn’t produced by economics.

And that knowledge base is much more tapped into policy conversations because of the status of economics and a long history of preexisting links between economics and government. So if anything I think the Raj Chettys of the world—who, to be clear, are doing work that is incredibly interesting—probably make it harder for quantitative sociology to get attention.

But it’s not just quantitative sociology’s inability to be heard that comes into play. It’s also the positive attraction of ethnography. Ethnography gives us stories—often causal stories, about the effects of landlord-tenant law or the fraying safety net or welfare reform or unemployment policy—and puts human flesh on statistics. And those stories, about how social circumstances or policy changes lead people to behave in particular, understandable ways, can change people’s thinking.

Indeed, Robert Shiller’s presidential address at the AEA this year argued for “narrative economics”—that narratives about the world have huge economic effects. Of course, his recommendation was that economists use epidemiological models to study the spread of narratives, which to my mind kind of misses the point, but still.

The risk, I suppose, is that readers will overgeneralize from ethnography, when that’s not what it’s meant for. They read Evicted, find it compelling, and come up with solutions to the problems of low-income Milwaukeeans that don’t work, because they’re based on evidence from a couple of communities in a single city.

But I’m honestly not too worried about that. The more likely impact, I think, is that people realize “hey, eviction is a really important piece of the poverty problem” and give it attention as an issue. And lots of quantitative folks, including both sociologists and economists, will take that insight and run with it and collect and analyze new data on housing—advancing the larger conversation.

At least that’s what I hope. In the current moment all of this may be moot, as evidence-based social policy seems to be mostly a bludgeoning device. But that’s a topic for another post.

 


Written by epopp

March 17, 2017 at 2:04 pm

that chocolate milk study: can we blame the media?

A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”

The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.

The summary also raises questions about sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of these got concussions during one season? I would hope not enough to provide much statistical power — this NAS report suggests high schoolers sustain about 11 concussions per 10,000 football games and practices.

And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
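For a rough sense of how little “four of thirty-six” proves on its own, here is a back-of-envelope check in Python (a sketch, not the study’s actual analysis: it assumes the conventional p < .05 cutoff, which the release doesn’t state, and, unrealistically, that the 36 tests are independent):

    # If the milk had no effect at all and each of the 36 measures were tested
    # independently at p < .05, how surprising would 4 "significant" results be?
    from scipy.stats import binom

    n_tests = 36   # measures mentioned in the press release
    alpha = 0.05   # assumed significance cutoff

    print(n_tests * alpha)              # expected false positives by chance: 1.8
    print(binom.sf(3, n_tests, alpha))  # P(4 or more by chance alone): ~0.10

That is roughly a one-in-ten shot at four or more “significant” measures even if the milk does nothing, before we ask which four, or worry about correlated outcomes.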

Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meantime, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.

Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.

There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.

But in our focus on universities’ challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there’s a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it’s got a heavy bent toward controversial science, like climate change or fracking.

More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.

It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more than male hurricanes. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)

This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.

None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.

After all, if academics themselves can’t resist the urge to favor the counterintuitive over the mundane, we can hardly blame journalists for doing the same.

Written by epopp

January 18, 2016 at 1:23 pm

asr reviewer guidelines: comparative-historical edition

[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]

Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.

These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.

With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.

One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.

One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly-held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?

Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of events presented. Since the quality of historical sociologists’ employment of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, for instance, archives are often incomplete records of events collected by directly “interested” actors (often states) whose documents themselves remain interpretive slices of history, rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this as a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.

Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway in format constraints (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR.  Beyond the constraints of word-limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can sometimes rely on radically different and particularly unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since the reviewer may find their knowledge of the case — and especially of relevant archives — strained to its limit.

In sum, we welcome efforts by ASR to provide review guidelines for historical sociology.  At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.

Written by epopp

September 8, 2015 at 5:51 pm

inside higher education discusses replication in psychology and sociology

Science just published a piece showing that only about a third of studies from major psychology journals could be replicated. That is, if you reran the experiments, only a third would produce statistically significant results. The details of the studies matter as well: the higher the original p-value, the less likely a study was to replicate, and “flashy” results were less likely to replicate.

Inside Higher Ed spoke to me and other sociologists about the replication issue in our discipline. A major issue is that there is no incentive to actually assess research since it seems to be nearly impossible to publish replications and statistical criticisms in our major journals:

Recent research controversies in sociology also have brought replication concerns to the fore. Andrew Gelman, a professor of statistics and political science at Columbia University, for example, recently published a paper about the difficulty of pointing out possible statistical errors in a study published in the American Sociological Review. A field experiment at Stanford University suggested that only 15 of 53 authors contacted were able or willing to provide a replication package for their research. And the recent controversy over the star sociologist Alice Goffman, now an assistant professor at the University of Wisconsin at Madison, regarding the validity of her research studying youths in inner-city Philadelphia lingers — in part because she said she destroyed some of her research to protect her subjects.

Philip Cohen, a professor of sociology at the University of Maryland, recently wrote a personal blog post similar to Gelman’s, saying how hard it is to publish articles that question other research. (Cohen was trying to respond to Goffman’s work in the American Sociological Review.)

“Goffman included a survey with her ethnographic study, which in theory could have been replicable,” Cohen said via email. “If we could compare her research site to other populations by using her survey data, we could have learned something more about how common the problems and situations she discussed actually are. That would help evaluate the veracity of her research. But the survey was not reported in such a way as to permit a meaningful interpretation or replication. As a result, her research has much less reach or generalizability, because we don’t know how unique her experience was.”

Readers can judge whether Gelman’s or Cohen’s critiques are correct. But the broader issue is serious. Sociology journals simply aren’t publishing error corrections or replications, with the honorable exception of Sociological Science, which published a replication/critique of the Brooks/Manza (2006) ASR article. For now, debate on the technical merits of particular research seems to be the purview of blog posts and book reviews that are quickly forgotten. That’s not good.


Written by fabiorojas

August 31, 2015 at 12:01 am

replication and the future of sociology


Sociology, we can do better. Here is what I suggest:

  • Dissertation advisers should insist on some sort of storage of data and code for students. For those working with standard data like the GSS or Add Health, this should be easy. For others, some version of the data should accompany the code. There are ways of anonymizing data (a minimal sketch follows this list), or people can sign non-disclosure forms. Perhaps universities can create digital archives of dissertation data, just as they keep paper copies of dissertations. Secure servers can hold relevant field notes and interview transcripts.
  • Journals and book publishers should require quant papers to have replication packages. Qualitative paper authors should be willing to provide complete source information for archival work and transcription samples for interview-based research. The jury is still out on what ethnographers might provide.
  • IRBs should allow all authors to come up with a version of the data that others might read or consult.
  • Professional awards should only be given to research that can be replicated in some fashion. For example, as Phil Cohen has argued, no dissertation awards should be given for dissertations that were not deposited in the library.
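On the anonymization point, here is a minimal sketch of what depositing a readable version of sensitive data might involve. The file layout, column names, and salting scheme are purely illustrative assumptions, not anyone’s IRB-approved protocol:

    # Replace respondent names with stable pseudonyms before depositing a CSV.
    import csv
    import hashlib

    SALT = "keep-this-offline"  # stored separately so pseudonyms cannot be reversed by outsiders

    def pseudonym(name):
        return hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()[:10]

    with open("interviews.csv", newline="") as src, \
         open("interviews_anon.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["respondent_id", "transcript"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "respondent_id": pseudonym(row["name"]),  # no real names in the deposited file
                "transcript": row["transcript"],          # direct identifiers still need hand redaction
            })

Hashing only handles the identifier column; names and places inside the free text still have to be redacted by hand, which is part of why the jury is still out on ethnography.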

Let’s try to improve.


Written by fabiorojas

August 17, 2015 at 12:01 am

sociologists need to be better at replication – a guest post by cristobal young

Cristobal Young is an assistant professor in Stanford’s Department of Sociology. He works on quantitative methods, stratification, and economic sociology. In this post, co-authored with Aaron Horvath, he reports on an attempt to obtain replication packages for 53 published sociological studies. Spoiler: we need to do better.

Do Sociologists Release Their Data and Code? Disappointing Results from a Field Experiment on Replication.

 

Replication packages – releasing the complete data and code for a published article – are a growing currency in 21st century social science, and for good reasons. Replication packages help to spread methodological innovations, facilitate understanding of methods, and show confidence in findings. Yet, we found that few sociologists are willing or able to share the exact details of their analysis.

We conducted a small field experiment as part of a graduate course in statistical analysis. Students selected sociological articles that they admired and wanted to learn from, and asked the authors for a replication package.

Out of the 53 sociologists contacted, only 15 of the authors (28 percent) provided a replication package. This is a missed opportunity for the learning and development of new sociologists, as well as an unfortunate marker of the state of open science within our field.

Some 19 percent of authors never replied to repeated requests, or first replied but never provided a package. More than half (53 percent) directly refused to release their data and code. Sometimes there were good reasons. Twelve authors (23 percent) cited legal or IRB limitations on their ability to share their data. But only one of these authors provided the statistical code to show how the confidential data were analyzed.

Why So Little Response?

A common reason for not releasing a replication package was that the author had lost the data, often due to reported computer or hard drive malfunctions. Many authors also said they were too busy or felt that providing a replication package would be too complicated. One author said they had never heard of a replication package. The solution here is simple: compiling a replication package should be part of a journal article’s final copy-editing and page-proofing process.
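To make “part of the copy-editing process” concrete, a package mostly needs the analysis file plus one script that regenerates every table. Here is a minimal sketch, with purely illustrative file names and model (most sociologists would write the equivalent in Stata or R):

    # run_all.py: regenerate the paper's tables from the deposited data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("data/analysis_sample.csv")   # cleaned, shareable analytic file

    # Table 1: descriptive statistics
    df.describe().to_csv("output/table1_descriptives.csv")

    # Table 2: main model reported in the paper
    model = smf.ols("outcome ~ treatment + C(region) + age", data=df).fit()
    with open("output/table2_main_model.txt", "w") as f:
        f.write(model.summary().as_text())

A script like this is not a heavy lift if it is assembled while the analysis is still fresh, which is exactly why it belongs in the page-proof stage rather than years later when the hard drive has died.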

More troubling is that a few authors openly rejected the principle of replication, saying in effect, “read the paper and figure it out yourself.” One articulated a deep opposition, on the grounds that replication packages break down the “barriers to entry” that protect researchers from scrutiny and intellectual competition from others.

The Case for Higher Standards

Methodology sections of research articles are, by necessity, broad and abstract descriptions of their procedures. However, in most quantitative analyses, the exact methods and code are on the author’s computer. Readers should be able to download and run replication packages as easily as they can download and read published articles. The methodology section should not be a “barrier to entry,” but rather an on-ramp to an open and shared scholarly enterprise.

When authors released replication packages, it was enlightening for students to look “under the hood” on research they admired, and see exactly how results were produced. Students finished the process with deeper understanding of – and greater confidence in – the research. Replication packages also serve as a research accelerator: their transparency instills practical insight and confidence – bridging the gap between chalkboard statistics and actual cutting-edge research – and invites younger scholars to build on the shoulders of success. As Gary King has emphasized, replications have become first publications for many students, and helped launch many careers – all while ramping up citations to the original articles.

In our small sample, little more than a quarter of sociologists released their data and code. Top journals in political science and economics now require on-line replication packages. Transparency is no less crucial in sociology for the accumulation of knowledge, methods, and capabilities among young scholars. Sociologists – and ultimately, sociology journals – should embrace replication packages as part of the lasting contribution of their research.

Table 1. Response to Replication Request

Response                                      Frequency   Percent
Yes: Released data and code for paper             15         28%
No:  Did not release                               38         72%
Reasons for “No”:
    IRB / legal / confidentiality issue            12         23%
    No response / no follow-up                     10         19%
    Don’t have data                                 6         11%
    Don’t have time / too complicated               6         11%
    Still using the data                            2          4%
    ‘See the article and figure it out’             2          4%
Total                                              53        100%

Note: For replication and transparency, a blinded copy of the data is available on-line. Each author’s identity is blinded, but the journal name, year of publication, and response code are available. Half of the requests addressed articles in the top three journals, and more than half were published in the last three years.

Figure 1: Illustrative Quotes from Student Correspondence with Authors:

Positive:

  1. “Here is the data file and Stata .do file to reproduce [the] Tables….  Let me know if you have any questions.”
  2. “[Attached are] data and R code that does all regression models in the paper. Assuming that you know R, you could literally redo the entire paper in a few minutes.”

Negative:

  1. “While I applaud your efforts to replicate my research, the best guidance I can offer is that the details about the data and analysis strategies are in the paper.”
  2. “I don’t keep or produce ‘replication packages’… Data takes a significant amount of human capital and financial resources, and serves as a barrier-to-entry against other researchers… they can do it themselves.”


Written by fabiorojas

August 11, 2015 at 12:01 am

working with computer scientists

[Image: a “doof wagon.” In North Carolina, this is called the “Vaisey Cart.”]

I recently began working with a crew of computer scientists at Indiana after I was recruited to help with a social media project. It’s been a highly informative experience that has reinforced my belief that sociologists and computer scientists should team up. Some observations:

  • CS and sociology are complementary. We care about theory. They care about tools and application. Natural fit.
  • In contrast, sociology and other social sciences are competing over the same theory space.
  • CS people have a deep bucket of tools for solving all kinds of problems that commonly occur in cultural sociology, network analysis, and simulation studies.
  • CS people believe in timely solutions and efficient workflow. Rather than writing over a period of years, they believe in “yes, we can do this next week.”
  • Since their discipline runs on conferences, the work is fast and it is expected that it will be done soon.
  • Another benefit of the peer-reviewed conference system is that work is published “for real” quickly and there is much less emphasis on a few elite publication outlets. Little “development.” Either it works or it doesn’t.
  • Quantitative sociologists are really good at applied stats and can help most CS teams articulate data analysis plans and execute them, assuming that the sociologist knows R.
  • Perhaps most importantly, CS researchers may be confident in their abilities, but they are less likely to think that they know it all and have no need for help from others. CS is simply too messy a field for that, which makes it similar to sociology.
  • Finally: cash. Unlike in the arts and sciences, there is no sense that everyone is broke. While you still have to work extra hard to get money, it isn’t a lost cause the way it is in sociology, where the NSF hands out only a handful of grants. There is money out there for entrepreneurial scholars.

Of course, there are downsides. CS people think you are crazy for working on a 60-page article that takes five years to get published. Also, some folks in data science and CS are more concerned with tools and nice visuals than with theory and understanding. As a corollary, some CS folks may not appreciate sampling, bias, non-response, and other issues that normally inform sociological research design. But still, my experience has been excellent, the results exciting, and I think more sociologists should turn to computer science as an interdisciplinary research partner.


Written by fabiorojas

June 24, 2015 at 12:01 am