orgtheory.net

Archive for the ‘epistemology and methods’ Category

answering the “so what?” question: chuck tilly’s 2003 guide


One of the perennial issues for novice and expert researchers alike is answering the “so what?” question: why bother researching a particular phenomenon?  In particular, sociologists must justify their place in a big-tent discipline, and orgheads swim in the murky expanse of interdisciplinary waters.  For such researchers, this question must be answered in presentations and publications, particularly in the contributions section.

While it’s easy for expert researchers to melt into a potentially crippling existential sweat about the fathomless unknown unknowns, novice researchers, unburdened by such knowledge, face a broader vista.  According to Chuck Tilly,* researchers need to decide whether to enter existing conversations, bridge two different conversations, initiate a new conversation, or…???**

Since I couldn’t remember Tilly’s exact quote about conversations, despite hearing it at least twice during his famous Politics and Protest workshop (formerly at Columbia, now at the GC), I pinged CCNY colleague John Krinsky.

Krinsky responded to my inquiry by sharing this great questionnaire and chart of low/high risk/reward research: TillyQuestionnaire_2003.  This document offers helpful exercises for discerning possible contributions for research projects at all stages.

*For Krinsky’s (and others’) tribute to Tilly’s mentorship and scholarship, go here.

** If anyone remembers Tilly’s exact quote about conversations, please share in the comments.


Written by katherinechen

October 24, 2018 at 3:16 pm

new book spotlight: approaches to ethnography

New book alert!  For those prepping a methods course or wanting additional insight into ethnography as a research method, sociologists Colin Jerolmack and Shamus Khan*  have co-edited an anthology Approaches to Ethnography: Analysis and Representation in Participant Observation (2017, Oxford University Press).**

[Cover image: Approaches to Ethnography]

In Approaches to Ethnography, several ethnographers, including myself, have contributed chapters that delve into our experiences with ethnography across the subfields of urban sociology, poverty and inequality, race and ethnicity, culture, political economies, and organizational research.  For example, in his chapter, Douglas Harper explains how he integrated visual ethnography to get farmers to discuss experiences of farming past and present, capture the itinerant lives and transitory relations among tramps, and document food traditions in Bologna, Italy.

My own chapter “Capturing Organizations as Actors” was particularly difficult to write, with several major chunks jettisoned and sections rewritten several times to incorporate feedback from an ever-patient Khan.  Eventually, I realized I was struggling with how to advocate for what is taken for granted among organizational researchers.  Normally, organizational researchers write for audiences who readily accept organizations as the unit of analysis and as important and consequential actors worthy of study.  However, for sociologists and social scientists who are not organizational researchers, the organization fades into the background as static, interchangeable scenery.  Given this anthology’s audience, I had to make an explicit argument for studying organizations to readers who might otherwise be inclined to ignore them.

With this in mind, my chapter focused on explaining how to use ethnography to bring organizations to the foreground.  To illustrate how researchers can approach different aspects of organizations, I drew on my ethnographic data collected on the Burning Man organization.  Most of the vignettes tap never-before-seen data, including discussions from organizers’ meetings and my participant-observations as a volunteer in Playa Info’s Found.  With these examples, I show how organizational ethnography can help us understand:

  • how informal relations animate organizations
  • how organizations channel activities through routines and trainings
  • how organizations and their subcultures communicate and inculcate practices
  • how organizations handle relations with other actors, including the state

Here is Approaches to Ethnography‘s table of contents:

Introduction: An Analytic Approach to Ethnography
Colin Jerolmack and Shamus Khan

1. Microsociology: Beneath the Surface
Jooyoung Lee

2. Capturing Organizations as Actors
Katherine Chen

3. Macro Analysis: Power in the Field
Leslie Salzinger and Teresa Gowan

4. People and Places
Douglas Harper

5. Mechanisms
Iddo Tavory and Stefan Timmermans

6. Embodiment: A Dispositional Approach to Racial and Cultural Analysis
Black Hawk Hancock

7. Situations
Monica McDermott

8. Reflexivity: Introspection, Positionality, and the Self as Research Instrument – Toward a Model of Abductive Reflexivity
Forrest Stuart

* Jerolmack and Khan have also co-authored a Socius article “The Analytic Lenses of Ethnography,” for those interested in an overview.

** I have a flyer from the publisher for a small discount that I hope is still valid; if you need it, send me an email!

Written by katherinechen

January 13, 2018 at 4:55 pm

is ethnography the most policy-relevant sociology?

The New York Times – the Upshot, no less – is feeling the love for sociology today. Which is great. Neil Irwin suggests that sociologists have a lot to say about the current state of affairs in the U.S., and perhaps might merit a little more attention relative to you-know-who.

Irwin emphasizes sociologists’ understanding of “how tied up work is with a sense of purpose and identity,” quotes Michèle Lamont and Herb Gans, and mentions the work of Ofer Sharone, Jennifer Silva, and Matt Desmond.

Which all reinforces something I’ve been thinking about for a while—that ethnography, that often-maligned, “inadequately scientific” method, is the sociology most likely to break through to policymakers and the larger public. Besides Matthew Desmond with Evicted, which other sociologists have made it into the consciousness of policy types in the last couple of years? Of the four who immediately pop to mind—Kathy Edin, Alice Goffman, Arlie Hochschild and Sara Goldrick-Rab—three are ethnographers.

I think there are a couple of reasons for this. One is that as applied microeconomics has moved more and more into the traditional territory of quantitative sociology, it has created a knowledge base that is weirdly parallel to sociology’s, but not in very direct communication with it, because economists tend to discount work produced outside economics.

And that knowledge base is much more tapped into policy conversations, because of the status of economics and a long history of preexisting links between economics and government. So if anything I think the Raj Chettys of the world—who, to be clear, are doing work that is incredibly interesting—probably make it harder for quantitative sociology to get attention.

But it’s not just quantitative sociology’s inability to be heard that comes into play. It’s also the positive attraction of ethnography. Ethnography gives us stories—often causal stories, about the effects of landlord-tenant law or the fraying safety net or welfare reform or unemployment policy—and puts human flesh on statistics. And those stories, about how social circumstances or policy changes lead people to behave in particular, understandable ways, can change people’s thinking.

Indeed, Robert Shiller’s presidential address at the AEA this year argued for “narrative economics”—that narratives about the world have huge economic effects. Of course, his recommendation was that economists use epidemiological models to study the spread of narratives, which to my mind kind of misses the point, but still.

The risk, I suppose, is that readers will overgeneralize from ethnography, when that’s not what it’s meant for. They read Evicted, find it compelling, and come up with solutions to the problems of low-income Milwaukeeans that don’t work, because they’re based on evidence from a couple of communities in a single city.

But I’m honestly not too worried about that. The more likely impact, I think, is that people realize “hey, eviction is a really important piece of the poverty problem” and give it attention as an issue. And lots of quantitative folks, including both sociologists and economists, will take that insight and run with it and collect and analyze new data on housing—advancing the larger conversation.

At least that’s what I hope. In the current moment all of this may be moot, as evidence-based social policy seems to be mostly a bludgeoning device. But that’s a topic for another post.

 

Written by epopp

March 17, 2017 at 2:04 pm

that chocolate milk study: can we blame the media?

A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”

The problem is that the “study” was not only funded in part by the milk producer; it is also unpublished and unavailable to the public, and, based on the press release — all the info we’ve got — it raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.

The summary also raises questions about the sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of these got concussions during one season? I would hope not enough to provide statistical power — this NAS report suggests high schoolers get 11 concussions per 10,000 football games and practices.
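
To put rough numbers on that worry, here’s a back-of-envelope sketch of my own. The concussion rate comes from the NAS figure above; the exposures-per-player number is an assumption, not anything in the press release.

```python
# Back-of-envelope: how many concussed players could the study plausibly
# have had? The exposures-per-player figure is an assumption.
players = 474               # total sample, per the press release
rate = 11 / 10_000          # concussions per athlete-exposure (NAS estimate)
exposures_per_player = 100  # assumed games + practices per season (hypothetical)

expected_concussions = players * exposures_per_player * rate
print(f"Expected concussions in one season: {expected_concussions:.0f}")  # ~52
```

Even under that generous assumption, you end up with roughly 50 concussed players split across the milk and no-milk conditions — a thin basis for subgroup comparisons.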

And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
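
A quick calculation shows why four of thirty-six invites skepticism. Assume 36 independent tests at α = .05 — an assumption, since the release reports neither the cutoff nor the dependence among measures. Under the null, about 1.8 measures come up “significant” by chance alone, and four or more is not a rare event:

```python
# How surprising is "significant on 4 of 36 measures" if the drink does
# nothing? Assumes independent tests at alpha = .05.
from scipy.stats import binom

n_tests, alpha = 36, 0.05
expected_by_chance = n_tests * alpha          # 1.8
p_four_or_more = binom.sf(3, n_tests, alpha)  # P(X >= 4) under the null

print(f"Expected false positives: {expected_by_chance:.1f}")
print(f"P(4+ significant by chance): {p_four_or_more:.2f}")  # ~0.10
```

In other words, a study of a completely inert drink would show “significant” improvement on four or more of thirty-six measures roughly one time in ten.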

Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meantime, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.

Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.

There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.

But in our focus on universities’ challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there’s a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it’s got a heavy bent toward controversial science, like climate change or fracking.

More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.

It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more people than male-named ones. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)

This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.

None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.

After all, if academics themselves can’t resist the urge to favor the counterintuitive over the mundane, we can hardly blame journalists for doing the same.

Written by epopp

January 18, 2016 at 1:23 pm

asr reviewer guidelines: comparative-historical edition

[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]

Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.

These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.

With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.

One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.

One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly-held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?

Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of events presented. Since the quality of historical sociologists’ employment of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, for instance, archives are often incomplete records of events collected by directly “interested” actors (often states) whose documents themselves remain interpretive slices of history, rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this to be a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.

Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway on format constraints (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR.  Beyond the constraints of word limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can sometimes rely on radically different and unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since reviewers may find their knowledge of the case — and especially of relevant archives — strained to its limit.

In sum, we welcome efforts by ASR to provide review guidelines for historical sociology.  At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.

Written by epopp

September 8, 2015 at 5:51 pm

inside higher education discusses replication in psychology and sociology

Science just published a piece showing that only about a third of studies from major psychology journals could be replicated. That is, when the experiments were rerun, only about a third produced statistically significant results. The details of the studies mattered as well: the higher the original p-value, the less likely a result was to replicate, and “flashy” results were also less likely to replicate.
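
The p-value pattern is what you’d expect once statistical power enters the picture. Here’s a toy simulation — my own illustration, not the Science paper’s method — in which many modestly powered studies are run and only the “significant” originals get rerun once:

```python
# Toy simulation: modestly powered two-group studies; only originals
# with p < .05 get a single replication attempt.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, trials = 30, 0.3, 20_000  # per-group n; modest true effect

orig_p, replicated = [], []
for _ in range(trials):
    p1 = stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
    if p1 < 0.05:  # only "published" results get rerun
        p2 = stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue
        orig_p.append(p1)
        replicated.append(p2 < 0.05)

orig_p, replicated = np.array(orig_p), np.array(replicated)
strong = orig_p < 0.01
print(f"Replication rate, original p < .01:      {replicated[strong].mean():.2f}")
print(f"Replication rate, original p in .01-.05: {replicated[~strong].mean():.2f}")
```

Originals that barely scraped under p = .05 were disproportionately lucky draws, so honest reruns fail them more often than results with stronger initial evidence.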

Inside Higher Ed spoke to me and other sociologists about the replication issue in our discipline. A major issue is that there is no incentive to actually assess research, since it seems to be nearly impossible to publish replications and statistical criticisms in our major journals:

Recent research controversies in sociology also have brought replication concerns to the fore. Andrew Gelman, a professor of statistics and political science at Columbia University, for example, recently published a paper about the difficulty of pointing out possible statistical errors in a study published in the American Sociological Review. A field experiment at Stanford University suggested that only 15 of 53 authors contacted were able or willing to provide a replication package for their research. And the recent controversy over the star sociologist Alice Goffman, now an assistant professor at the University of Wisconsin at Madison, regarding the validity of her research studying youths in inner-city Philadelphia lingers — in part because she said she destroyed some of her research to protect her subjects.

Philip Cohen, a professor of sociology at the University of Maryland, recently wrote a personal blog post similar to Gelman’s, saying how hard it is to publish articles that question other research. (Cohen was trying to respond to Goffman’s work in the American Sociological Review.)

“Goffman included a survey with her ethnographic study, which in theory could have been replicable,” Cohen said via email. “If we could compare her research site to other populations by using her survey data, we could have learned something more about how common the problems and situations she discussed actually are. That would help evaluate the veracity of her research. But the survey was not reported in such a way as to permit a meaningful interpretation or replication. As a result, her research has much less reach or generalizability, because we don’t know how unique her experience was.”

Readers can judge whether Gelman’s or Cohen’s critiques are correct. But the broader issue is serious. Sociology journals simply aren’t publishing error corrections or replications, with the honorable exception of Sociological Science, which published a replication/critique of the Brooks/Manza (2006) ASR article. For now, debate on the technical merits of particular research seems to be the purview of blog posts and book reviews that are quickly forgotten. That’s not good.


Written by fabiorojas

August 31, 2015 at 12:01 am

replication and the future of sociology

Consider the following:

Sociology, we can do better. Here is what I suggest:

  • Dissertation advisers should insist that students archive their data and code. For those working with standard data sets like the GSS or Add Health, this should be easy. For others, some version of the data should accompany the code. There are ways of anonymizing data (a minimal sketch of one approach appears after this list), or people can sign non-disclosure forms. Perhaps universities can create digital archives of dissertation data, just as they keep paper copies of dissertations. Secure servers can hold relevant field notes and interview transcripts.
  • Journals and book publishers should require quant papers to have replication packages. Qualitative paper authors should be willing to provide complete information for archival work and transcription samples for interview-based research. The jury is still out on what ethnographers might provide.
  • IRBs should allow all authors to come up with a version of the data that others might read or consult.
  • Professional awards should only be given to research that can be replicated in some fashion. E.g., as Phil Cohen has argued, no dissertation award should be given for a dissertation that was not deposited in the library.
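
On the anonymization point in the first bullet, here is a minimal sketch of one common approach: replacing direct identifiers with salted hashes before depositing a data set. The file and column names are hypothetical, and real de-identification usually also requires scrubbing indirect identifiers (dates, places, rare attributes), not just IDs.

```python
# Minimal de-identification sketch: replace an identifier column with a
# salted hash so rows can still be linked across files without exposing
# names. File and column names are hypothetical.
import csv
import hashlib
import secrets

SALT = secrets.token_hex(16)  # store separately and securely; discard it
                              # entirely if re-linking is never needed

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for an identifier under a fixed salt."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

with open("interviews.csv", newline="") as src, \
     open("interviews_deidentified.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["respondent_id"] = pseudonymize(row["respondent_id"])
        writer.writerow(row)
```

The salt matters: an unsalted hash of a small identifier space (names, student IDs) can be reversed by brute force.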

Let’s try to improve.


Written by fabiorojas

August 17, 2015 at 12:01 am