orgtheory.net

Archive for the ‘epistemology and methods’ Category

answering the “so what?” question: chuck tilly’s 2003 guide

One of the perennial issues for novice and expert researchers alike is answering the “so what” question: why bother researching a particular phenomenon?  In particular, sociologists must justify their places in a big-tent discipline, and orgheads swim in the murky expanse of interdisciplinary waters.  For such researchers, this question must be answered in presentations and publications, particularly in the contributions section.

While it’s easy for expert researchers to melt into a potentially crippling existential sweat about the fathomless unknown unknowns, novice researchers, unburdened by such knowledge, face a broader vista.  According to Chuck Tilly,* researchers need to decide whether to enter existing conversations, bridge two different conversations, initiate a new conversation, or…???**

Since I couldn’t remember Tilly’s exact quote about conversations despite hearing it at least twice during his famous Politics and Protest workshop (before at Columbia, now at the GC), I pinged CCNY colleague John Krinsky.

Krinsky responded to my inquiry by sharing this great questionnaire and chart of low/high risk/reward research: TillyQuestionnaire_2003.  This document offers helpful exercises for discerning possible contributions for research projects at all stages.

*For Krinsky’s (and others’) tribute to Tilly’s mentorship and scholarship, go here.

** If anyone remembers Tilly’s exact quote about conversations, please share in the comments.


Written by katherinechen

October 24, 2018 at 3:16 pm

new book spotlight: approaches to ethnography

New book alert!  For those prepping a methods course or wanting additional insight into ethnography as a research method, sociologists Colin Jerolmack and Shamus Khan*  have co-edited an anthology Approaches to Ethnography: Analysis and Representation in Participant Observation (2017, Oxford University Press).**


In Approaches to Ethnography, several ethnographers, including myself, have contributed chapters that delve into our experiences with ethnography across the subfields of urban sociology, poverty and inequality, race and ethnicity, culture, political economies, and organizational research.  For example, in his chapter, Douglas Harper explains how he integrated visual ethnography to get farmers to discuss experiences of farming past and present, capture the itinerant lives and transitory relations among tramps, and document food traditions in Bologna, Italy.

My own chapter, “Capturing Organizations as Actors,” was particularly difficult to write, with several major chunks jettisoned and sections rewritten several times to incorporate feedback from an ever-patient Khan.  Eventually, I realized I was struggling with how to advocate for what is taken for granted among organizational researchers.  Normally, organizational researchers write for audiences who readily accept organizations as the unit of analysis and as important and consequential actors worthy of study.  However, for sociologists and social scientists who are not organizational researchers, the organization falls into the background as static, interchangeable scenery.  Given this anthology’s audience, I had to make an explicit argument for studying organizations to readers who might be inclined to ignore organizations.

With this in mind, my chapter focused on explaining how to use ethnography to bring organizations to the foreground.  To illustrate how researchers can approach different aspects of organizations, I drew on my ethnographic data collected on the Burning Man organization.  Most of the vignettes tap never-before-seen data, including discussions from organizers’ meetings and my participant-observations as a volunteer in Playa Info’s Found.  With these examples, I show how organizational ethnography can help us understand:

  • how informal relations animate organizations
  • how organizations channel activities through routines and trainings
  • how organizations and their subcultures communicate and inculcate practices
  • how organizations handle relations with other actors, including the state

Here is Approaches to Ethnography‘s table of contents:

Introduction: An Analytic Approach to Ethnography
Colin Jerolmack and Shamus Khan

1. Microsociology: Beneath the Surface
Jooyoung Lee

2. Capturing Organizations as Actors
Katherine Chen

3. Macro Analysis: Power in the Field
Leslie Salzinger and Teresa Gowan

4. People and Places
Douglas Harper

5. Mechanisms
Iddo Tavory and Stefan Timmermans

6. Embodiment: A Dispositional Approach to Racial and Cultural Analysis
Black Hawk Hancock

7. Situations
Monica McDermott

8. Reflexivity: Introspection, Positionality, and the Self as Research Instrument-Toward a Model of Abductive Reflexivity
Forrest Stuart

* Jerolmack and Khan have also co-authored a Socius article “The Analytic Lenses of Ethnography,” for those interested in an overview.

** I have a flyer for a slight discount that I hope is still good from the publisher; if you need it, send me an email!

Written by katherinechen

January 13, 2018 at 4:55 pm

is ethnography the most policy-relevant sociology?

The New York Times – the Upshot, no less – is feeling the love for sociology today. Which is great. Neil Irwin suggests that sociologists have a lot to say about the current state of affairs in the U.S., and perhaps might merit a little more attention relative to you-know-who.

Irwin emphasizes sociologists’ understanding “how tied up work is with a sense of purpose and identity,” quotes Michèle Lamont and Herb Gans, and mentions the work of Ofer Sharone, Jennifer Silva, and Matt Desmond.

Which all reinforces something I’ve been thinking about for a while—that ethnography, that often-maligned, inadequately scientific method—is the sociology most likely to break through to policymakers and the larger public. Besides Evicted’s Matthew Desmond, which other sociologists have made it into the consciousness of policy types in the last couple of years? Of the four who immediately pop to mind—Kathy Edin, Alice Goffman, Arlie Hochschild and Sara Goldrick-Rab—three are ethnographers.

I think there are a couple reasons for this. One is that as applied microeconomics has moved more and more into the traditional territory of quantitative sociology, it has created a knowledge base that is weirdly parallel to sociology, but not in very direct communication with it, because economists tend to discount work that isn’t produced by economics.

And that knowledge base is much more tapped into policy conversations because of the status of economics and a long history of preexisting links between economics and government. So if anything I think the Raj Chettys of the world—who, to be clear, are doing work that is incredibly interesting—probably make it harder for quantitative sociology to get attention.

But it’s not just quantitative sociology’s inability to be heard that comes into play. It’s also the positive attraction of ethnography. Ethnography gives us stories—often causal stories, about the effects of landlord-tenant law or the fraying safety net or welfare reform or unemployment policy—and puts human flesh on statistics. And those stories, about how social circumstances or policy changes lead people to behave in particular, understandable ways, can change people’s thinking.

Indeed, Robert Shiller’s presidential address at the AEA this year argued for “narrative economics”—that narratives about the world have huge economic effects. Of course, his recommendation was that economists use epidemiological models to study the spread of narratives, which to my mind kind of misses the point, but still.

The risk, I suppose, is that readers will overgeneralize from ethnography, when that’s not what it’s meant for. They read Evicted, find it compelling, and come up with solutions to the problems of low-income Milwaukeeans that don’t work, because they’re based on evidence from a couple of communities in a single city.

But I’m honestly not too worried about that. The more likely impact, I think, is that people realize “hey, eviction is a really important piece of the poverty problem” and give it attention as an issue. And lots of quantitative folks, including both sociologists and economists, will take that insight and run with it and collect and analyze new data on housing—advancing the larger conversation.

At least that’s what I hope. In the current moment all of this may be moot, as evidence-based social policy seems to be mostly a bludgeoning device. But that’s a topic for another post.

 

Written by epopp

March 17, 2017 at 2:04 pm

that chocolate milk study: can we blame the media?

A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”

The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.

The summary also raises questions about the sample size. The total sample included 474 high school football players, but that covers both concussed and non-concussed players. How many of them got concussions during one season? I would hope not enough to provide statistical power — this NAS report suggests high schoolers get 11 concussions per 10,000 football games and practices.
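A rough back-of-envelope check of that worry is easy to run. In the sketch below, the concussion rate comes from the NAS report mentioned above, but the exposures-per-player figure is purely my own guess, since the press release gives no denominator:

```python
# Rough estimate of how many concussions a 474-player sample might yield
# in one season, at the NAS rate of ~11 concussions per 10,000
# athlete-exposures (games + practices combined).
rate_per_exposure = 11 / 10_000
players = 474
exposures_per_player = 100  # hypothetical guess; the press release gives no figure

expected_concussions = players * exposures_per_player * rate_per_exposure
print(round(expected_concussions))  # ~52 under these assumed exposures
```

Even under a generous guess at exposures, the concussed subsample is a few dozen players at best — and it shrinks further once split into milk and no-milk comparison groups.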

And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
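That “four of thirty-six” figure invites a standard multiple-comparisons check. As a sketch — assuming, since the release doesn’t say, that each measure was tested separately at p < 0.05 — the count of false positives under a global null is Binomial(36, 0.05):

```python
# How surprising are 4 "significant" results out of 36 tests if no
# measure has any true effect? Model false positives as Binomial(36, 0.05).
from math import comb

n, alpha, hits = 36, 0.05, 4
expected = n * alpha  # about 1.8 false positives expected by chance alone
p_at_least_4 = 1 - sum(
    comb(n, k) * alpha**k * (1 - alpha)**(n - k) for k in range(hits)
)
print(f"expected false positives: {expected:.1f}")
print(f"P(>=4 significant under the null): {p_at_least_4:.2f}")  # about 0.10
```

Roughly a one-in-ten chance of an outcome at least this good with no real effect at all — before even getting to the other methodological problems.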

Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meanwhile, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.

Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.

There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.

But in our focus on universities’ challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there’s a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it’s got a heavy bent toward controversial science, like climate change or fracking.

More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.

It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more people than male-named ones. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)

This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.

None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.

After all, if academics themselves can’t resist the urge to favor the counterintuitive over the mundane, we can hardly blame journalists for doing the same.

Written by epopp

January 18, 2016 at 1:23 pm

asr reviewer guidelines: comparative-historical edition

[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]

Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.

These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.

With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.

One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.

One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly-held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?

Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of events presented. Since the quality of historical sociologists’ employment of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, for instance, archives are often incomplete records of events collected by directly “interested” actors (often states) whose documents themselves remain interpretive slices of history, rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this to be a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.

Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway in format constraints (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR.  Beyond the constraints of word-limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can sometimes rely on radically different and particularly unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since the reviewer may find their knowledge of the case — and especially of relevant archives — strained to its limit.

In sum, we welcome efforts by ASR to provide review guidelines for historical sociology.  At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.

Written by epopp

September 8, 2015 at 5:51 pm

inside higher education discusses replication in psychology and sociology

Science just published a piece showing that only a third of articles from major psychology journals can be replicated. That is, if you reran the experiments, only a third would have statistically significant results. The details of the studies matter as well: the higher the p-value, the less likely the result was to replicate, and “flashy” results were less likely to replicate.

Inside Higher Ed spoke to me and other sociologists about the replication issue in our discipline. A major issue is that there is no incentive to actually assess research, since it seems to be nearly impossible to publish replications and statistical criticisms in our major journals:

Recent research controversies in sociology also have brought replication concerns to the fore. Andrew Gelman, a professor of statistics and political science at Columbia University, for example, recently published a paper about the difficulty of pointing out possible statistical errors in a study published in the American Sociological Review. A field experiment at Stanford University suggested that only 15 of 53 authors contacted were able or willing to provide a replication package for their research. And the recent controversy over the star sociologist Alice Goffman, now an assistant professor at the University of Wisconsin at Madison, regarding the validity of her research studying youths in inner-city Philadelphia lingers — in part because she said she destroyed some of her research to protect her subjects.

Philip Cohen, a professor of sociology at the University of Maryland, recently wrote a personal blog post similar to Gelman’s, saying how hard it is to publish articles that question other research. (Cohen was trying to respond to Goffman’s work in the American Sociological Review.)

“Goffman included a survey with her ethnographic study, which in theory could have been replicable,” Cohen said via email. “If we could compare her research site to other populations by using her survey data, we could have learned something more about how common the problems and situations she discussed actually are. That would help evaluate the veracity of her research. But the survey was not reported in such a way as to permit a meaningful interpretation or replication. As a result, her research has much less reach or generalizability, because we don’t know how unique her experience was.”

Readers can judge whether Gelman’s or Cohen’s critiques are correct. But the broader issue is serious. Sociology journals simply aren’t publishing error corrections or replications, with the honorable exception of Sociological Science, which published a replication/critique of Brooks and Manza’s (2006) ASR article. For now, debate on the technical merits of particular research seems to be the purview of blog posts and book reviews that are quickly forgotten. That’s not good.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 31, 2015 at 12:01 am

replication and the future of sociology

Sociology, we can do better. Here is what I suggest:

  • Dissertation advisers should insist on some sort of storage of data and code for students. For those working with standard data like the GSS or Add Health, this should be easy. For others, some version of the data should accompany the code. There are ways of anonymizing data, or people can sign non-disclosure forms. Perhaps universities can create digital archives of dissertation data, as they keep paper copies of dissertations. Secure servers can hold relevant field notes and interview transcripts.
  • Journals and book publishers should require quant papers to have replication packages. Qualitative paper authors should be willing to provide complete information for archival work and transcription samples for interview-based research. The jury is still out on what ethnographers might provide.
  • IRBs should allow all authors to come up with a version of the data that others might read or consult.
  • Professional awards should only be given to research that can be replicated in some fashion. E.g., as Phil Cohen has argued, no dissertation awards should be given for dissertations that were not deposited in the library.
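On the anonymization point, one simple approach is keyed pseudonymization: replace raw identifiers with a keyed hash, so rows stay linkable across deposited files while the key — and hence the identities — never leaves the researcher. A minimal sketch, in which the key, IDs, and record layout are all hypothetical:

```python
# Pseudonymize respondent IDs with a keyed hash before depositing data.
# Identical raw IDs map to identical pseudonyms, so linkage is preserved.
import hashlib
import hmac

SECRET_KEY = b"keep-this-offline"  # hypothetical key, never deposited

def pseudonymize(raw_id: str) -> str:
    digest = hmac.new(SECRET_KEY, raw_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]  # short, stable pseudonym

records = [{"id": "interviewee-07", "site": "Midwest"},
           {"id": "interviewee-12", "site": "Northeast"}]
deposited = [{**r, "id": pseudonymize(r["id"])} for r in records]

# The raw identifiers never appear in the deposited copy:
assert all(not d["id"].startswith("interviewee") for d in deposited)
```

Anyone holding only the deposited file cannot recover names, but the researcher can re-link rows at any time by re-running the hash with the key.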

Let’s try to improve.


Written by fabiorojas

August 17, 2015 at 12:01 am

sociologists need to be better at replication – a guest post by cristobal young

Cristobal Young is an assistant professor at Stanford’s Department of Sociology. He works on quantitative methods, stratification, and economic sociology. In this post co-authored with Aaron Horvath, he reports on the attempt to replicate 53 sociological studies. Spoiler: we need to do better.

Do Sociologists Release Their Data and Code? Disappointing Results from a Field Experiment on Replication.

 

Replication packages – releasing the complete data and code for a published article – are a growing currency in 21st century social science, and for good reasons. Replication packages help to spread methodological innovations, facilitate understanding of methods, and show confidence in findings. Yet, we found that few sociologists are willing or able to share the exact details of their analysis.

We conducted a small field experiment as part of a graduate course in statistical analysis. Students selected sociological articles that they admired and wanted to learn from, and asked the authors for a replication package.

Out of the 53 sociologists contacted, only 15 of the authors (28 percent) provided a replication package. This is a missed opportunity for the learning and development of new sociologists, as well as an unfortunate marker of the state of open science within our field.

Some 19 percent of authors never replied to repeated requests, or first replied but never provided a package. More than half (53 percent) directly refused to release their data and code. Sometimes there were good reasons. Twelve authors (23 percent) cited legal or IRB limitations on their ability to share their data. But only one of these authors provided the statistical code to show how the confidential data were analyzed.

Why So Little Response?

A common reason for not releasing a replication package was that the author had lost the data, often due to reported computer or hard-drive malfunctions. Many authors also said they were too busy or felt that providing a replication package would be too complicated. One author said they had never heard of a replication package. The solution here is simple: compiling a replication package should be part of a journal article’s final copy-editing and page-proofing process.

More troubling is that a few authors openly rejected the principle of replication, saying in effect, “read the paper and figure it out yourself.” One articulated a deep opposition, on the grounds that replication packages break down the “barriers to entry” that protect researchers from scrutiny and intellectual competition from others.

The Case for Higher Standards

Methodology sections of research articles are, by necessity, broad and abstract descriptions of their procedures. However, in most quantitative analyses, the exact methods and code are on the author’s computer. Readers should be able to download and run replication packages as easily as they can download and read published articles. The methodology section should not be a “barrier to entry,” but rather an on-ramp to an open and shared scholarly enterprise.

When authors released replication packages, it was enlightening for students to look “under the hood” of research they admired and see exactly how results were produced. Students finished the process with a deeper understanding of, and greater confidence in, the research. Replication packages also serve as a research accelerator: their transparency instills practical insight and confidence, bridging the gap between chalkboard statistics and actual cutting-edge research, and invites younger scholars to build on the shoulders of success. As Gary King has emphasized, replications have become first publications for many students and have helped launch many careers, all while ramping up citations to the original articles.

In our small sample, little more than a quarter of sociologists released their data and code. Top journals in political science and economics now require on-line replication packages. Transparency is no less crucial in sociology for the accumulation of knowledge, methods, and capabilities among young scholars. Sociologists – and ultimately, sociology journals – should embrace replication packages as part of the lasting contribution of their research.

Table 1. Response to Replication Request

Response                                   Frequency   Percent
Yes: Released data and code for paper           15        28%
No:  Did not release                            38        72%
Reasons for “No”:
    IRB / legal / confidentiality issue         12        23%
    No response / no follow-up                  10        19%
    Don’t have data                              6        11%
    Don’t have time / too complicated            6        11%
    Still using the data                         2         4%
    ‘See the article and figure it out’          2         4%
Total                                           53       100%

Note: For replication and transparency, a blinded copy of the data is available on-line. Each author’s identity is blinded, but the journal name, year of publication, and response code is available. Half of the requests addressed articles in the top three journals, and more than half were published in the last three years.
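The table’s percentages are just each count over the 53 requests, rounded; a quick sketch recomputing them (category labels abbreviated from the table):

```python
# Recompute Table 1's percentages from the raw response counts.
counts = {
    "Released data and code": 15,
    "IRB / legal / confidentiality": 12,
    "No response / no follow-up": 10,
    "Don't have data": 6,
    "Too busy / too complicated": 6,
    "Still using the data": 2,
    "'Figure it out yourself'": 2,
}
total = sum(counts.values())  # 53 requests in all
for label, n in counts.items():
    print(f"{label:32s} {n:3d}  {round(100 * n / total):3d}%")
released = counts["Released data and code"]
print(f"Did not release: {total - released} ({round(100 * (total - released) / total)}%)")
```

Running this reproduces the table’s figures, including the 28% release rate and the 72% who did not release.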

Figure 1: Illustrative Quotes from Student Correspondence with Authors:

Positive:

  1. “Here is the data file and Stata .do file to reproduce [the] Tables….  Let me know if you have any questions.”
  2. “[Attached are] data and R code that does all regression models in the paper. Assuming that you know R, you could literally redo the entire paper in a few minutes.”

Negative:

  1. “While I applaud your efforts to replicate my research, the best guidance I can offer is that the details about the data and analysis strategies are in the paper.”
  2. “I don’t keep or produce ‘replication packages’… Data takes a significant amount of human capital and financial resources, and serves as a barrier-to-entry against other researchers… they can do it themselves.”

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 11, 2015 at 12:01 am

working with computer scientists


In North Carolina, this is called the "Vaisey Cart."

I’ve recently begun to work with a crew of computer scientists at Indiana when I was recruited to help with a social media project. It’s been a highly informative experience that has reinforced my belief that sociologists and computer scientists should team up. Some observations:

  • CS and sociology are complementary. We care about theory. They care about tools and application. Natural fit.
  • In contrast, sociology and other social sciences are competing over the same theory space.
  • CS people have a deep bucket of tools for solving all kinds of problems that commonly occur in cultural sociology, network analysis, and simulation studies.
  • CS people believe in the timely solution of problems and workflow. Rather than write over a period of years, they believe in “yes, we can do this next week.”
  • Since their discipline runs on conferences, the work is fast and it is expected that it will be done soon.
  • Another benefit of the peer reviewed conference system is that work is published “for real” quickly and there is much less emphasis on a few elite publication outlets. Little “development.” Either it works or it doesn’t.
  • Quantitative sociologists are really good at applied stats and can help most CS teams articulate data analysis plans and execute them, assuming that the sociologist knows R.
  • Perhaps most importantly, CS researchers may be confident in their abilities, but they are less likely to think they know it all and need no help from others. CS is simply too messy a field for that, which makes it similar to sociology.
  • Finally: cash. Unlike the arts and sciences, there is no sense that we are broke. You still have to work extra hard to get money, but it isn't a lost cause, as it is in sociology, where the NSF hands out only a handful of grants. There is money out there for entrepreneurial scholars.

Of course, there are downsides. CS people think you are crazy for working on a 60-page article that takes 5 years to get published. Also, some folks in data science and CS are more concerned about tools and nice visuals at the expense of theory and understanding. As a corollary, some CS folks may not appreciate sampling, bias, non-response, and other issues that normally inform sociological research design. But still, my experience has been excellent, the results exciting, and I think more sociologists should turn to computer science as an interdisciplinary research partner.


Written by fabiorojas

June 24, 2015 at 12:01 am

can powerful, elite-led organizations lessen inequality?

Hi all, I’m Ellen Berrey. I’ll be guest blogging over the next few weeks about inequality, culture, race, organizations, law, and multi-case ethnography. Thanks for the invite, Katherine, and the warm welcomes! Here’s what I’m all about: I’m an assistant professor of sociology at the University at Buffalo-SUNY and an affiliated scholar of the American Bar Foundation. I received my PhD from Northwestern in 2008. This fall, I jet off from the Midwest to join the faculty of the University of Denver (well, I’m actually going to drive the fading 2003 Toyota I inherited from my mom).  

As a critical cultural sociologist, I study organizational, political, and legal efforts to address inequality. My new book, The Enigma of Diversity: The Language of Race and the Limits of Racial Justice (University of Chicago Press), is officially out next Monday (yay!). I'll dive into that in future posts, for sure. I'm writing up another book on employment discrimination litigation with Robert Nelson and Laura Beth Nielsen, Rights on Trial: Employment Civil Rights in Work and in Court. These and my articles and other projects explore organizational symbolic politics, affirmative action in college admissions (also here and here), affirmative action activism (and here), corporate diversity management, fairness in discrimination litigation, discrimination law and inequality (and here), gentrification politics, and benefit corporations.

I’ll kick off today with some thoughts about a theme that I’ve been exploring for many years:

How can powerful, elite-led organizations advance broad progressive causes like social justice or environmental protection? I’m not just referring to self-identified activists but also corporations, universities, community agencies, foundations, churches, and the like. Various arms of the state, too, are supposed to forward social causes by, say, ending discrimination at work or alleviating poverty. To what extent can organizational decision-makers create positive social change through discrete initiatives and policies—or do they mostly just create the appearance of effective action? Time and again, perhaps inevitably, top-down efforts to address social problems end up creating new problems for those they supposedly serve.

To the point: Have you come across great research that examines how organizations can bring about greater equality and engages organizational theory?

I think this topic is especially important for those of us who study organizations and inequality. We typically focus on the harms that organizations cause. We know, for example, that employers perpetuate racial, class, and gender hierarchies within their own ranks through their hiring and promotion strategies. I believe we could move the field forward if we also could point to effective, even inspiring ways in which organizations mitigate inequities. I have in mind here research that goes beyond applied evaluations and that resists the Pollyanna-ish temptation to sing the praises of corporations. Critical research sometimes asks these questions, but it often seems to primarily look for (and find) wrongdoing. Simplistically, I think of this imperative in terms of looking, at once, at the good and bad of what organizations are achieving. Alexandra Kalev, Frank Dobbin, and Erin Kelly's much-cited American Sociological Review article on diversity management programs is one exemplar. There is room for other approaches, as well, including those that foreground power and meaning making. Together with the relational turn in the study of organizational inequality, this is a promising frontier to explore.

More soon. Looking forward to the conversation.


Written by ellenberrey

May 13, 2015 at 2:08 pm

open borders… in the new york times?

An op-ed in the New York Times makes the case for open borders. From Debunking the Myth of the Job Stealing Immigrant by Adam Davidson:

… Few of us are calling for the thing that basic economic analysis shows would benefit nearly all of us: radically open borders.

And yet the economic benefits of immigration may be the most settled fact in economics. A recent University of Chicago poll of leading economists could not find a single one who rejected the proposition. (There is one notable economist who wasn't polled: George Borjas of Harvard, who believes that his fellow economists underestimate the cost of immigration for low-skilled natives. Borjas's work is often misused by anti-immigration activists, in much the same way a complicated climate-science result is often invoked as "proof" that global warming is a myth.) Rationally speaking, we should take in far more immigrants than we currently do.

Outstanding. I hope this spurs more discussion of open borders.


Written by fabiorojas

March 26, 2015 at 12:01 am

questionable hypothesizing

Heads turn whenever accusations are made about academic impropriety, but it is especially provocative when one of the researchers involved in a study makes accusations about his/her own research. A forthcoming article in the Journal of Management Inquiry written by an anonymous senior scholar in organizational behavior does exactly this. The author, who remains anonymous to protect the identity of his/her coauthors, claims that research in organizational behavior routinely violates norms of scientific hypothesis testing.

I want to be clear: I never fudged data, but it did seem like I was fudging the framing of the work, by playing a little fast and loose with the rules of the game—as I thought I understood how the game should be played according to the rules of the scientific method. So, I must admit, it was not unusual for me to discover unforeseen results in the analysis phase, and I often would then create post hoc hypotheses in my mind to describe these unanticipated results. And yes, I would then write up the paper as if these hypotheses were truly a priori. In one way of thinking, the theory espoused (about the proper way of doing research) became the theory-in-practice (about how organizational research actually gets done).

I’m certain that some people reading this will say, “Big deal, people reformulate hypotheses all the time as they figure out what their analysis is telling them.”  The author recognizes this is the case and, I believe, relates his/her experience as a warning of how the field’s standards for writing are evolving in detrimental ways. For example, the author says, “there is a commonly understood structure or “script” for an article, and authors ask for trouble when they violate this structure. If you violate the script, you give the impression that you are inexperienced. Sometimes, even the editor requires an author to create false framing.” Sadly, this is true.

All too often reviewers feel that it is their role to tell the authors of a paper how to structure hypotheses, rewrite hypotheses, and explain analysis results. Some reviewers, especially inexperienced ones, may do this because they feel that they are doing the author(s) a favor – they're helping make the paper cleaner and more understandable. But the unintended consequence of this highly structured way of writing and presenting results is that it forces authors into a form of mild academic dishonesty in which they do not allow the exploratory part of the analytical process to be transparent.

Some journals have a much stronger ethos about hypothesis testing than others. AJS is looser and allows authors more freedom in this regard. But some social psych journals (like JPSP) have become extremely rigid in wanting to see hypotheses stated a priori and then tested systematically.  I would love to see more journals encourage variability in reporting of results and allow for the possibility that many of our results were, in fact, unexpected. I would love it if more editors chastised reviewers who want to force authors into a highly formulaic style of hypothesizing and writing results. It simply doesn’t reflect how most of us do research.

Perhaps the anonymous author’s tale will ignite a much needed discussion about how we write about social scientific analysis.

Written by brayden king

February 8, 2015 at 3:57 pm

ethnographers looking back

One on-going aspect of ethnographic work is the never-ending reflection and re-evaluation of conclusions made months, years, or decades prior. Retrospection invites extended analysis of findings that were otherwise cut short; it also facilitates a shift from a worm's-eye to a bird's-eye contextualization of a case. Michael Burawoy's "Ethnographic Fallacies: Reflections on Labour Studies in the Era of Market Fundamentalism" offers one such contemplation.*

In this research note, Burawoy re-examines several decades of his participant-observations in workplaces in various nations; he reveals the actual names of his most famous disguised field sites. Looking back, he summarizes six revelations while imparting a warning to those overly invested in the merits of particular methodologies:

From the ethnographer’s curse, therefore, I turn to the ethnographic fallacies that limited my vision of market fundamentalism. First, there are three traps that await the ethnographer who seeks to comprehend the world beyond the field site: the fallacies of ignoring, reifying and homogenizing that world. Second, there are three traps awaiting the ethnographer who fails to give the field site a dynamic of its own: the fallacies of viewing the field site as eternal or, when the past is examined, the danger of treating the present as a point of arrival rather than also as a point of departure; and finally the danger of wishful thinking, projecting one’s own hopes onto the actors we study.
I describe these six fallacies not to indict ethnography but to improve its practice, to help ethnographers grapple with the limitations of their method. No method is without fallacies, it is a matter of how honestly and openly we approach them. Being accountable to the people we study requires us to recognize our fallibility and, thus, to wrestle with that fallibility. The methodological dogmatists, who declare they have found the flawless method and spend their time condemning others for not following the golden trail, are the real menace to our profession.

Read the rest of this entry »

Written by katherinechen

February 7, 2015 at 6:39 pm

the fda, calorie counts, and the lost pleasure of junk food

Tyler Cowen has his markets in everything. Some days I feel like we need another tagline: the commensuration of everything.

Cost-benefit analysis is ubiquitous in government. But the question of what counts as a cost and a benefit, and how to measure them, has been up for debate pretty much since people started tallying them up.

The FDA has, somewhat controversially, been calculating “lost pleasure” as one impact of its regulations. This came up most recently around requirements that restaurants post calorie counts of their menu items. Headlines crowed that consumers might lose some $5.27 billion in pleasure from giving up their high-calorie junk foods, although the value of the health benefits still, according to the FDA, justified displaying calorie counts.

The FDA analysis was based on economics — a working paper by Yale economist Jason Abaluck plus calculations by internal staff, according to Reuters — but a lot of economists disagree that it makes sense to think about it this way. Calorie counts are just providing information. So if you order differently upon being presented with this information — you give up your Gordita for the bean burrito — you’re basically by definition making a choice you prefer, and thus can’t be losing from the decision.

This was a second round in this battle for the FDA, the first having been over the monetary costs of the lost pleasure of smoking. Earlier this year, the FDA, in an analysis of the costs and benefits of graphic warning labels on cigarettes, similarly gave considerable weight to the lost pleasure of smoking — to the extent that the avoidance of death and disease caused by quitting smoking would have to be discounted by 70% to account for the loss in pleasure.

This did not go over well with many economists. In fact, an all-star team wrote a response paper detailing their disagreement with the analysis. What I found interesting was that here they made a slightly different argument than for calorie counts. It’s harder to argue that graphic warning labels just provide information. And if they’re not just providing information, perhaps smokers—rationally choosing to smoke but deterred by gross-out pictures of rotting lungs—are really losing consumer surplus.


So the critics bring addiction into it. Smoking is addictive and therefore not rational. Moreover, people often become addicted before they are adults. Perhaps some people have rationally chosen to smoke, but surely not those who were daily smokers by the age of 18. Since 70% of American smokers started smoking before they turned 17, at least that fraction of any lost consumer surplus should be ignored.

For those who started smoking as legal adults, the authors turn to behavioral economics. Addiction causes us to discount the future too heavily, rendering decisions irrational. Thus even the smokers who started later in life aren’t really losing consumer surplus, either.

I don’t follow current cost-benefit politics closely, but I’ve spent a fair amount of time reading about cost-benefit politics of the 1950s, 60s, and 70s. And what strikes me about this debate that is so similar to conversations that people were having about building dams or calculating the benefits of education or the polio vaccine is how fundamentally political they are.

There’s no avoiding it. In many cases, there’s simply no absolute way to determine that one method of calculating costs and benefits is better than another, and so people make methodological choices based on other factors. Sometimes those may be overtly political, as in, “I think the FDA should do everything it can to deter smoking and I’m going to come up with reasons that make sense for it to do that.” Sometimes they may be based on disciplinary politics, as in “This is my intellectual team, and I’m going to defend our position here, wherever that takes me.” And of course they can be less purposeful, too, and well-intentioned, but such decisions are still judgment calls.

Eventually, practitioners may reach consensus over which choices are the right ones, and then people don’t have to argue anymore over whether to value life based on people’s expected future earnings or on their willingness to pay to avoid risks (as they did some decades back). But just because the choices are settled doesn’t make them any less normative.

In the end, I tend to think these debates are more than a little insane, because they lend false precision to decisions that involve judgment call upon judgment call, and questionable assumption upon questionable assumption. But they matter. Cost-benefit calculations aren’t determinative, but they make a real difference in what decisions are seen as no-brainers, and which find their way into the dustbin. And powerful interest groups — hello, tobacco and junk food lobbies — have stakes in which methods get chosen, not just economists.

Myself, I’d vote for just listing multiple types of costs and benefits and quantifying them by order of magnitude, rather than pretending to an unachievable precision. That’s not likely to happen. If we’re stuck with trying to commensurate the incommensurable, though, at the very least we need to keep making explicit the choices that go into such calculations. That means laying out the assumptions — both normative and methodological — that lay behind them.


Written by epopp

December 30, 2014 at 2:15 pm

so, you wanna be fabio’s student?

In this post, I want to discuss my style as a dissertation advisor. This is mainly for potential students, but I also want to start a thread on how to best advise doctoral students in sociology and related areas.

1. Statue of Liberty: With a few exceptions, I will accept any student who needs a dissertation advisor. This is a personal decision on my part. In my career, I've been in institutions where students couldn't find advisers. It's a problem when faculty get too picky about who they take on and a few advisers get saddled with most of the load. I will not contribute to the problem. The exceptions to the Statue of Liberty policy are where (a) the student is really having academic problems (I've never been able to help these students, much as I have tried), and (b) you happen to be in a specialty where having a non-specialist advisor will really create problems for you.

2. Even though I accept the masses, I have a few general areas where I am most helpful: orgs/economic sociology; political sociology; education/higher education; sociology of knowledge and science; formal methods/computational sociology. Specifically: institutional theory, networks, movements, social media, rational choice, higher education/disciplines, computational sociology. I am also developing my knowledge of health.

3. General approach I: I think it is important to tailor the CV to the student. If you want an R1 job, I will encourage publication. If you want liberal arts, we will work on your teaching CV. For policy jobs, we will aim for speedy completion and research in a policy-related area.

4. General approach II: I focus on nuts and bolts “American social science.” In other words, I like clearly stated problems, high quality data and a focus on description or inference. I don’t care if you are qualitative or quantitative. Just make it good.

5. General approach III: In general, I don't tell people what their dissertation will be about. I do try to tell them if it is good or bad. In other words, I don't say "this will never work." Instead, I'll tell you about what's been done, what sounds good, what might get you a job, and so forth. But making a decision is what the process is about. If you want to do it, convince me!

6. General approach IV: Hands on. I believe in solving problems now rather than later. Some of my students come by all the time, others once or twice a semester. In general, I believe in constant interaction so we move students forward. For this reason, I think an open doors policy is good.

7. Philosophy of the dissertation: First, my default for most sociology students is "three chapters." Why? The dissertation is a pedagogical exercise meant to show that the student can do research. It is not a masterpiece. Also, most students will start their careers with articles, so this format serves them well. I note that this is a default – not a rule. If a student really needs a book-format dissertation, that's ok.

8. Dissertation quality: It is important that students be judged according to their career goals. All students must submit a good dissertation but how good can vary. The research oriented PhD student should be held to a higher standard than the student who will find non-academic work.

9. Graduation: For students oriented toward academia, article = graduation. For other students, we can start the graduation process as soon as I have two or three complete empirical chapters.

Use the comments to discuss how you approach PhD training.


Written by fabiorojas

December 30, 2014 at 12:01 am

is economics partisan? is all of social science just wrong?

Last week, fivethirtyeight.com put up a piece titled, “Economists Aren’t As Nonpartisan As We Think.” Beyond the slightly odd title (do we think they’re nonpartisan? should we expect them to be?), it’s an interesting write-up of a new working paper by Zubin Jelveh, Bruce Kogut, and Suresh Naidu. It went up a week ago, but since it gives me a chance to write about three of my favorite things, I thought it was still worth a post.

Favorite thing 1: Economists

The research started by identifying the political positions of economists, using campaign contributions and signatures on political petitions. This suggested economists lean left, by about 60-40. Not surprising so far. Then Jelveh et al. used machine learning techniques to identify phrases in journal articles most closely associated with left or right positions. Some of these are not unexpected (“Keynesian economics” versus “laissez faire”), while others are less obvious (“supra note” is left, and “central bank” is right).
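Jelveh et al.'s actual pipeline uses supervised machine learning over full article texts, but the core idea, scoring phrases by how disproportionately they appear in left- versus right-authored writing, can be sketched with a simple smoothed log-odds measure. Everything below (the function name, the toy phrase lists) is invented for illustration, not taken from the paper:

```python
from collections import Counter
import math

def partisan_phrases(left_docs, right_docs, min_count=2):
    """Score phrases by smoothed log-odds of appearing in left- vs
    right-authored documents; positive = left-leaning, negative = right."""
    left = Counter(p for doc in left_docs for p in doc)
    right = Counter(p for doc in right_docs for p in doc)
    n_left, n_right = sum(left.values()), sum(right.values())
    scores = {}
    for phrase in set(left) | set(right):
        l, r = left[phrase], right[phrase]
        if l + r < min_count:  # ignore phrases too rare to score
            continue
        # add-one smoothing avoids log(0) for phrases used by only one side
        scores[phrase] = (math.log((l + 1) / (n_left + 1))
                          - math.log((r + 1) / (n_right + 1)))
    return scores

# toy example: each "document" is a list of extracted phrases
left_docs = [["keynesian economics", "minimum wage"], ["keynesian economics"]]
right_docs = [["laissez faire", "central bank"], ["laissez faire"]]
scores = partisan_phrases(left_docs, right_docs)
```

On real data, the phrase lists would come from n-gram extraction over journal articles, and the resulting scores would need validation against held-out authors before anyone trusted them.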

Read the rest of this entry »

Written by epopp

December 15, 2014 at 2:11 pm

the Q words

Regular orgtheory commenter Howard Aldrich has an interesting and provocative piece up at the OOW blog, Work in Progress, and the LSE Impact blog. His plea is that we should abandon the Q words — qualitative and quantitative — in describing our research. They aren’t terribly descriptive of what we’re actually doing, they create unnecessary divisions within social science, and using them inappropriately devalues qualitative work:

I’ve endured this distinction for so long that I had begun to take it for granted, a seemingly fixed property in the firmament of social science data collection and analysis strategies. However, I’ve never been happy with the distinction and about a decade ago, began challenging people who label themselves this way. I was puzzled by the responses I received, which often took on a remorseful tone, as if somehow researchers had to apologize for the methodological strategies they had chosen. To the extent that my perception is accurate, I believe their tone stems from the persistent way in which non-statistical approaches have been marginalized in many departments. However, it also seemed as though the people I talked with had accepted the evaluative nature of the distinction. As Lamont and Swidler might say, these researchers had bought into “methodological tribalism.”

Having recently argued that Sociological Science needs more “qualitative” work, I read this with interest. Certainly the terms are not the most descriptive, and they do reinforce a division within sociology that might better be blurred post-Methodenstreit.

But I think the distinction is likely to persist, despite Howard’s good intentions, for two reasons.

Read the rest of this entry »

Written by epopp

November 28, 2014 at 5:58 pm

history, the stock market, and predicting the future

So the stock market has been freaking out a bit the last couple of weeks. Secular stagnation, Ebola, a five-year bull market—who knows why. Anyway, over the weekend I was listening to someone on NPR explain what the average person should do under such circumstances (answer: hang tight, don’t try to time the market). This reminded me of one of my pet quibbles with financial advice, which I think applies to a lot of social science more generally.

For years, the conventional wisdom around what ordinary folks should do with their money has gone something like this. Save a lot. Put it in tax-favored retirement accounts. Invest it mostly in index funds—the S&P 500 is good. Don’t mess with it. In the long run this is likely to net you a reliable 7% return after inflation, about the best you’re likely to do.

Now, it’s not that I think this is bad advice. In fact, this is pretty much exactly what I do, with some small tweaks.

But it has always struck me how, in news stories and advice columns and talk shows, people talk about how this is a good strategy because it’s worked for SO LONG. For 30 years! Or since 1929! Or since 1900! (Adjust returns accordingly.)

And yes, 30 years, or 85, or 114, are all a long time relative to human life. And we have to make decisions based on the knowledge we’ve got.

But it’s always seemed to me that if what you’re interested in is what will happen over the 30+ years of someone’s earning life (more if you’re not in academia!), you’ve basically got an N of 1 to 4 here. I mean, sure, this may be a reasonable guess, but I don’t think there’s any strong reason to believe that the next 100 years are likely to look very similar to the last 100. Odds are better if you’re just interested in the next 30, but even then, I’m always surprised by just how confident the conventional wisdom is around the idea that the market always coming out ahead over a 25- or 30-year period—going ALL THE WAY BACK TO 1929—is rock solid evidence that it will do so in the future.

Of course, there are lots of people who don’t believe this, too, as evidenced by what happened to gold prices after the financial crisis. Or by, you know, survivalists.

Anyway, I think this overconfidence in the lessons of the recent past is something we as social scientists tend to be susceptible to. The study that comes most immediately to mind here is the Raj Chetty study on value-added estimates of teachers (paper 1, paper 2, NYT article).

The gist of the argument is that teachers’ effects on student test scores, net of student characteristics (their value added), predicts students’ eventual income at age 28. Now, there’s a lot that could be discussed about this study (latest round of critique, media coverage thereof).

But I just want to point to it—or rather, broader interpretations of it—as illustrating a similar overconfidence in the ability of the past to predict the future.

Here we have a study based on a massive (2.5 million students) dataset over a twenty-year period (1989-2009). Just thinking about the scale of the study and taking its results at face value, it’s hard to imagine how much more certain one could be in social science than at the end of such an endeavor.

And much of the media coverage takes that certainty and projects it into the future (see the NYT article again). If you replace a low value-added teacher with an average one, the classroom’s lifetime earnings will increase by more than $250,000.

And yet to make such a leap, you have to be willing to assume so many things about the future will be like the past: not only that incentivizing teachers differently and making tests more important won’t change their predictive effects (which the papers acknowledge), but, just as importantly, that the effects of education on earnings—or, more specifically, of teacher value-added on earnings—will be similar in future 20-year periods as it was from 1989-2009. And that nothing else meaningful about teachers, students, schools, or earnings will evolve over the next 20 years in ways that mess with that relationship in a significant way.

I think we do this a lot—project into the future based on our understanding of a past that is, really, quite recent. Of course knowledge about the (relatively) recent past still should inform the decisions we make about the future. But rather a lot of modesty is called for when making blanket claims that assume the future is going to look just like the past. Maybe it’s human nature. But I think that modesty is often missing.

Written by epopp

October 20, 2014 at 11:01 am

do we need multi-disciplinary organization research? a guest post by siri ann terjesen

Siri Ann Terjesen is an assistant professor of management and international business at Indiana University and an Associate Editor of the Academy of Management Learning & Education. She is an entrepreneurship researcher and she also does work on supply chains and related issues. This guest post addresses multidisciplinary scholarship.

I am interested in orgtheory readers' perspectives on a critical but under-examined issue in academia, including scholarship about organizations. That is, in academia, individual scholars are incentivized to focus on a particular issue in a particular discipline and discouraged from developing deep expertise in multiple fields. For example, business scholars examine the same universe (e.g., firms, employees, etc.), albeit through different branches (disciplines such as strategy, organizational behavior, operations management, finance, accounting, ethics, law, etc.) which do not dialogue actively with one another—and there are very few academics who develop a real repertoire across multiple fields, that is, who are truly multidisciplinary 'protean' scholars who contribute to leading journals in multiple disciplines (e.g., disciplines as distinct as ethics and operations management or accounting and organizational behavior) and have a profound influence across these distinct arenas.

This is surprising because history shows us that some of the greatest learning and paradigm shifts come from the contributions of polymaths, individuals whose expertise draws on a wide range of knowledge, from early historical examples (Francis Bacon, Erasmus, Galileo Galilei, Hildegard von Bingen, and Ben Franklin) to more recent scholars (Michael Polanyi and Linus Pauling). Researchers in the applied sciences are beginning to recognize the power of polymath, protean scholars who bring new innovations through their openness to variety and flexibility and operations across multidisciplinary spaces. There are also personal motivations: individuals who have many repertoires of knowledge may develop a broader understanding and appreciation of all human accomplishments, are personally able to enjoy the pursuit of multiple paths to excellence, and have more peak experiences across these fields. Certainly there are prevailing counterarguments concerning a jack-of-all-trades but master of none, and the sheer costs of operating in multiple institutions with distinct players, particularly gatekeepers. I welcome orgtheory readers' insights and debates on this issue in any respect: theoretical perspectives, pros/cons, examples, personal experiences, etc.

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power 

Written by fabiorojas

October 15, 2014 at 12:01 am

before you say race isn’t real, you need a definition of race

This week, I’d like to focus on the sociology of race. We’ll discuss Shiao et al.’s Sociological Theory article, “The Genomic Challenge to the Social Construction of Race,” which is the subject of a symposium. After you read the article and symposium, you might enjoy the Scatterplot discussion.

In this first post, I’d like to discuss the definitional problems associated with the concept “race.” The underlying idea is that people differ in some systematic way that goes beyond learned traits (like language). One aspect of the “person on the street” view of race is that it reflects common ancestry, which produces correlated physical and social traits. When thinking about this approach to race, most sociologists adopt the constructivist view, which says that: (a) the way we group people together reflects our historical moment, not a genuine grouping of people with shared traits, and (b) the only physical differences between people are superficial.

One thing to note about the constructivist approach to race is that the first claim is very easy to defend and the other is very challenging. The classifications used by the “person on the street” are essentially fleeting social conventions. For example, Americans used the “one drop rule” to classify people, but it makes little sense because putting more weight on Black ancestors than White ancestors is arbitrary. Furthermore, ethnic classifications vary by place and even year to year. The ethnic classifications used in social practice flunk the basic tests of reliability and validity that one would want from any measurement of the social world.

The second claim is that there are no meaningful differences between people in general. This claim is much harder to make. This is not an assessment of the truth of the claim, but the evidence needed to make it is a tall order. Namely, to make the strong constructivist argument, you would need (a) a definition of which traits matter, (b) a systematic measurement of those traits from a very large sample of people, (c) criteria for clustering people based on data, and (d) a clear test that all (or even most) reasonable clustering methods show a single group of people. As you can see, you need *a lot* of evidence to make that work.
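The evidentiary chain in (a) through (d) can be sketched concretely. A minimal illustration, using synthetic trait data and scikit-learn (the traits and parameters here are hypothetical stand-ins, not real measurements): cluster the sample at several candidate numbers of groups and ask, via a criterion like the silhouette score, whether any multi-group partition is well supported.

```python
# Hypothetical illustration of steps (b)-(d): measure traits, cluster,
# and test whether any multi-group partition is well supported.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# (b) synthetic "trait" measurements: 300 people x 5 traits drawn from
# a single population (so there is no true grouping to find)
traits = rng.normal(size=(300, 5))

# (c) cluster at several candidate numbers of groups
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(traits)
    scores[k] = silhouette_score(traits, labels)

# (d) uniformly low silhouette scores across all k are evidence that
# the sample is best described as a single group
print(scores)
```

On data with genuine structure, one would instead see a pronounced silhouette peak at some k > 1, which is why step (d) requires checking many reasonable methods, not just one.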

That is where Shiao et al get into the game. They never dispute the first claim, but suggest that the second claim is indefensible – there is evidence of non-random clustering of people using genomic data. This is very important because it disentangles two important issues – race as social category and race as intra-group similarity. It’s like saying the Average Joe may be mistaken about air, earth, water, and fire, but real scientists can see that there are elements out there and you can do real science with them.


Written by fabiorojas

October 14, 2014 at 12:04 am

race agnosticism: commentary on ann morning’s research

Earlier this week, Ann Morning of NYU sociology gave a talk at the Center for Research on Race and Ethnicity in Society. Her talk summarized her work on the meaning of race in varying scientific and educational contexts. In other words, rather than study what people think about other races (attitudes), she studies what people think race is.  This is the topic of her book, The Nature of Race.

What she finds is that educated people hold widely varying views of race. Scientists, textbook writers, and college students seem to have completely independent views of what constitutes race. That by itself is a key finding, and raises numerous other questions. Here, I’ll focus on one aspect of the talk. Morning finds that experts do not agree on what race is. And by experts, she means Ph.D. holding faculty in the biological and social sciences that study human variation (biology, sociology, and anthropology). This finding shouldn’t be too surprising given the controversy of the subject.

What is interesting is the epistemic implication. Most educated people, including sociologists, have rather rigid views. Race is *obviously* a social convention, or race is *obviously* a well defined population of people. Morning’s finding suggests a third alternative: race agnosticism. In other words, if experts in human biology, genetics, and cultural studies themselves can’t agree and these disagreements are random (e.g., biologists themselves disagree quite a bit), then maybe other people should just back off and admit they don’t know.

This is not a comfortable position since fights over the nature of human diversity are usually proxies for political fights. Admitting race agnosticism is an admission that you don’t know what you’re talking about. Your entire side in the argument doesn’t know what it’s talking about. However, it should be natural for a committed sociologist. Social groups are messy and ill defined things. Statistical measures of clustering may suggest that the differences among people are clustered and nonrandom, but jumping from that observation to clearly defined groups is very hard in many cases. Even then, it doesn’t yield the racial categories that people use to construct their social worlds based on visual traits, social norms, and learned behaviors. In such a situation, “vulgar” constructionism and essentialism aren’t up to the task. When the world is that complicated and messy, a measure of epistemic humility is in order.


Written by fabiorojas

October 3, 2014 at 12:01 am

that’s (not) interesting!

So I’m teaching a graduate class this semester that’s got sort of a logic-of-qualitative-inquiry thing going. One of the pieces we just read was Murray Davis’s “That’s Interesting!”

I imagine many of you know the article (it’s come up several times before on orgtheory), but for those who don’t, Davis attempts to taxonomize what makes social theories “interesting.” An interesting piece of research, he argues, does four things:

  • It articulates taken-for-granted assumptions.
  • Then it challenges one of those assumptions.
  • It demonstrates that the challenge is correct.
  • And it suggests the practical consequences of that correction.

People thought that suicide resulted from individual proclivities. But Durkheim argued that suicide could be explained by social factors. Sociologists assumed that people who were deviant committed deviant acts. But Becker showed that being labeled as deviant causes people to act in deviant ways. And so on.

It’s a great article for teaching, because it gets students thinking about how to frame their research so that readers perceive it as “interesting!” Because “interesting!” totally works for getting your research published. And Davis’s taxonomy is easy to apply to a wide range of sociological arguments. I’m not likely to stop using it in class.

But rereading it got me to thinking about the limitations of our fixation on “interesting!”

One is that “interesting!” isn’t a good criterion for normal science. Or rather, normal science can be interesting, but it’s got to include lots of not-so-interesting stuff, too, if progress is to be made.

A fixation on “interesting” is what leads us not to publish replications. Or to test scope conditions. Or to refine existing theories. Instead, we end up in a cycle of the novel and counterintuitive. One result is the situation in experimental psychology, in which the whole field seems skeptical of its own findings.

Another, I suspect, is the endless churn toward new theories and concepts that sociology is susceptible to. The quest for the interesting can produce the fractal dynamics Abbott describes. One rejects what has become mainstream in one’s subfield, arguing that an alternative approach can in fact produce new insights or challenge old assumptions. But unlike in Hegel, thesis and antithesis never seem to reach synthesis. And rejecting the status quo by upholding the status quo ante is hardly “interesting.” Instead, a new, narrower community develops around the “interesting” finding that challenges old assumptions, often in freshened-up language.

Finally, the quest for “interesting!” makes it harder to convey what we know beyond academia. Media attention goes to the research that challenges existing assumptions. So we “learn” that we were wrong about eating Mediterranean: it’s a low-carb, high-fat diet that will keep you healthy. At the extreme, this becomes the click-bait of academic research: You Thought You Knew About Social Mobility. But This One Weird Social Theory Will Prove You Wrong.

In the end, I still like the interesting: the unexpected finding, the surprising result. But it’s probably worth considering, every now and then, when we should pay some attention to the uninteresting, too.

Written by epopp

September 11, 2014 at 12:42 pm

let’s hear it for null results

A common, and important, critique of journals is that they don’t want to publish null results. So when I saw a new piece in Socio-Economic Review yesterday reporting essentially null findings, I thought it was worth a shout-out. The article, by economist Stefan Thewissen, is titled, “Is It the Income Distribution or Redistribution That Affects Growth?” (paywalled; email me for a copy). Here’s the abstract:

This study addresses the central question in political economy how the objectives of attaining economic growth and restricting income inequality are related. Thus far few studies explicitly distinguish between effects of income inequality as such and effects of redistributing public interventions to equalize incomes on economic growth. In fact, most studies rely on data that do not make this distinction properly and in which top-coding is applied so that enrichment at the top end of the distribution is not adequately captured. This study aims to contribute using a pooled time-series cross-section design covering 29 countries, using OECD, LIS, and World Top Income data. No robust association between inequality and growth or redistribution and growth is found. Yet there are signs for a positive association between top incomes and growth, although the coefficient is small and a causal interpretation does not seem to be warranted.

Okay, so there’s the “signs for a positive association” caveat. But “the coefficient is small and a causal interpretation does not seem to be warranted” seems pretty close to null to me.

In light of the attention this report from S&P has been getting — e.g. from Krugman today (h/t Dan H.) — all solid findings, null and otherwise, on the inequality-growth relationship warrant publication. Hats off to SER for publishing Thewissen’s.

 

Written by epopp

August 8, 2014 at 4:35 pm

is sociology a poor source of policy stories?

A few years ago, I bought a copy of Charles Tilly’s Why?, just for fun sociology reading. All the Important sociology reading got in the way, and I never read Why?

But while I was unpacking this week I came across it and thought I’d bring it along on a car ride to Providence over the weekend. Not only is it a fun read, as well as touchingly personal at times, it turned out to be surprisingly relevant to stuff I’ve been thinking about lately.

The book is organized around four types of reasons people give for things…any things: their incarceration in mental hospitals, why a plane just flew into the World Trade Center, whether the last-minute change of an elderly heiress’s will should be honored. In grand social science tradition, the reasons are organized into a 2 x 2 table:

|                       | Popular     | Specialized        |
|-----------------------|-------------|--------------------|
| Formulas              | Conventions | Codes              |
| Cause-Effect Accounts | Stories     | Technical Accounts |

 

Why? illustrates these types with a wide range of engaging examples, from eyewitness accounts of September 11th to the dialog between attending physicians and interns during hospital rounds.

Conventions are demonstrated by etiquette books: they are reasons that don’t mean much of anything and aren’t necessarily true, but that follow a convenient social formula: “I lost track of the time.” Stories are reasons that provide an explanation, but one focused on a protagonist—human or otherwise—who acts, and which often contain a moral edge: evangelist Jerry Falwell’s account of how he came to oppose segregation after God spoke to him through the African-American man who shined his shoes every week. Both conventions and stories are homely, everyday kinds of reasons.

Codes and technical accounts, on the other hand, are the reasons experts give. Reasons that conform to codes explain how an action was in accordance with some set of specialized rules. The Department of Public Works did not repair the air conditioning because they lacked a form 27B/6. While law is the quintessential code, Tilly shows that medicine follows codes to a surprising extent as well.

Finally, technical accounts attempt to provide cause-effect explanations of why some outcome occurs. Jared Diamond argues that Europe developed first because it had domesticable plants and animals and sufficient arable land, and lacked Africa’s north-south axis. Technical accounts draw on specialized bodies of knowledge, and attempt to produce truth, not just conformity with rules.


 

I’ve spent a lot of time in recent months thinking about what experts do in policy, and thinking about the different paths through which they can have effects. Lots of these effects are technical, of course. Expert opinion may not determine the outcome in debates over the macroeconomic effects of tax policy changes or what standards nutrition guidelines should be set at, but there’s no question that they’re informed by technical accounts.

But at least as important in influencing a wider audience are the stories experts can tell. Deborah Stone wrote about these “policy stories” decades ago, though she wasn’t especially focused on experts’ role in creating them. Political scientists like Ann Keller, however, have shown that scientists, too, translate their expertise into policy stories—for example, that human activity was creating the sulfur and nitrogen oxides that produce acid rain, destroying fisheries and making water undrinkable. These stories are grounded in technical accounts, but are simplified versions with moral undertones that point toward a particular range of policy solutions—in this case, doing something about the SOx and NOx emissions that the story identifies as creating the problem.

Some kinds of expertise, or rather some kinds of technical accounts, are more amenable than others to translation into policy stories. Economic models, in particular, are often friendly to such translation. For example, although this isn’t the language I use there, my book in part argues that U.S. science policy changed because of a model-turned-story. Robert Solow’s growth model, which includes technology as a factor that affects economic growth (by increasing the productivity of labor), became by the late 1970s the basis of a powerful policy story in which the U.S. needed to improve its capacity for technological innovation so that it could restore its economic position in the world.

Similarly, a basic human capital model in which investment in training results in higher wages easily becomes a story in which we need to improve or extend education so that people’s income increases.

Sociological models, even the formal ones, seem less amenable on average to these kinds of translations. Though Blau and Duncan’s well-known status attainment model could be read as suggesting education as a point of intervention to improve occupational status, it seems fairer to read it as saying that occupational status is largely determined by your father’s occupation and education. While this certainly has policy implications, they are not as natural an extension from the model itself. It hearkens back to that old saw—economics is about how people make choices; sociology is about how they don’t have any choices to make.

[Figure: Blau & Duncan’s status attainment model]

I guess part of the appeal of Why? for me was that it mapped surprisingly well onto these questions that were already on my mind. Mostly I’ve thought about this in the context of economic models becoming policy stories. I wonder, though, whether my quick generalization about the technical accounts of sociology lending themselves less readily to compelling policy stories actually holds up. What are the obvious examples I’m missing?

Written by epopp

July 9, 2014 at 7:00 pm

on facebook and research methods

Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
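Analytically, the design is a two-arm experiment: randomly assign users to an altered feed and compare their subsequent posting behavior against controls. A toy sketch of that comparison (the outcome measure and effect size are invented for illustration; this is not Facebook's actual analysis or data):

```python
# Sketch of the two-sample comparison behind an A/B feed experiment.
# All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000
# Hypothetical outcome: percent of positive words in users' own posts
control = rng.normal(loc=5.2, scale=1.0, size=n)        # normal feed
treated = rng.normal(loc=5.2 - 0.1, scale=1.0, size=n)  # fewer positive posts shown

t_stat, p_value = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()
print(f"effect = {effect:.3f}, p = {p_value:.4f}")
```

With samples this large, even a tiny shift in the mean is statistically detectable, which is one reason the substantive size of the contagion effect became part of the debate.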

What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.

While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.

1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)

2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.

My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.

Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.

Written by epopp

June 29, 2014 at 3:00 am

“you can’t fire your way to finland”

Last week a judge struck down tenure for California teachers on civil rights grounds. (NYT story here, court decision here.) Judge Rolf Treu based his argument on two claims. First, effective teachers are critical to student success. Second, it is poor and minority students who are most likely to get ineffective teachers who are still around because they have tenure — but moved from school to school in what Treu calls, colorfully, the “dance of the lemons.”*

To be honest, I have mixed feelings about teacher tenure. I’d rather see teachers follow a professional model of the sort Jal Mehta advocates than a traditional union model. This has personal roots as much as anything: I’m the offspring of two teachers who were not exactly in love with their union. But at the same time, the attack on teacher tenure just further chips away at the idea that organizations have any obligation to their workers, or that employees deserve any level of security.

But I digress. The point I want to make is about evidence, and how it is used in policy making — here, in a court decision.

Read the rest of this entry »

Written by epopp

June 18, 2014 at 3:00 pm

a rant on ant

Over at Scatterplot, Andy Perrin has a nice post pointing to a recent talk by Rodney Benson on actor-network theory and what Benson calls “the new descriptivism” in political communications. Benson argues that ANT is taking people away from institutional/field-theoretic causal explanation of what’s going on in the world and toward interesting but ultimately meaningless description. He also critiques ANT’s assumption that the world is largely unsettled, with temporary stability as the development that must be explained.

At the end of the talk, Benson points to a couple of ways that institutional/field theory and ANT might “play nicely” together. ANT might be useful for analyzing the less-structured spaces between fields. And it helps draw attention toward the role of technologies and the material world in shaping social life. Benson seems less convinced that it makes sense to talk of nonhumans as having agency; I like Edwin Sayes’ argument for at least a modest version of this claim.

I toyed with the possibility of reconciling institutionalism and ANT in an article on the creation of the Bayh-Dole Act a few years back. But really, the ontological assumptions of ANT just don’t line up with an institutionalist approach to causality. Institutionalism starts with fairly tidy individual and collective actors — people, organizations, professional groups. Even messy social movements are treated as well-enough-defined to have effects on laws or corporate behavior. The whole point of ANT is to destabilize such analyses.

That said, I think institutionalists can fruitfully borrow from ANT in ways that Latour would not approve of, just as they have used Bourdieu productively without adopting his whole apparatus. In particular, the insights of ANT can get us at least two things:

1)      ANT not only increases our attention to the role of technologies in shaping organizational and field-level outcomes; it also makes us pay attention to variation in the stability of those technologies. It is simply not possible to fully account for the mortgage crisis, for example, without understanding what securitization is; how tranching restructured, redistributed and sometimes hid risk; how it was stabilized more or less durably in particular times and places; and so on.

You can’t just treat “securitization” as a unitary explanatory factor. You need to think about the specific configuration of rules, organizational practices, technologies, evaluation cultures and so on that hold “securitization” together more or less stably in a specific time and place. Sure, technologies are sometimes stable enough to treat as unified and causal—for example, a widely used indicator like GDP, or a standardized technology like a new drug. But thinking about this as a question of degree improves explanatory capacity.

An example from my own current work: VSL, the value of a statistical life. Calculations of VSL are critical to cost-benefit analyses that justify regulatory decisions. They inform questions of environmental justice, of choice of medical treatment, of worker safety guidelines. All sorts of political assumptions — for example, that the lives of people in poor countries are worth less than those of people in rich ones — are baked into them. There is no uniform federal standard for calculating VSL — it varies widely across agencies. ANT sensitizes us not only to the importance of such technologies, but to their semi-stable nature—reasonably persistent within a single agency, but evolving over time and different across agencies.
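The hedonic-wage logic behind a VSL figure is simple arithmetic: if workers accept, say, an extra $900 a year in pay for an added 1-in-10,000 annual fatality risk, the implied VSL is $9 million. A toy computation (the numbers are illustrative, not any agency's actual parameters):

```python
# Toy VSL computation via the hedonic-wage approach.
# Illustrative numbers only; actual agency VSLs differ and, as noted
# in the text, vary across agencies.

def implied_vsl(wage_premium_per_year: float, added_fatality_risk: float) -> float:
    """Wage compensation demanded per unit of annual fatality risk."""
    return wage_premium_per_year / added_fatality_risk

vsl = implied_vsl(wage_premium_per_year=900.0, added_fatality_risk=1 / 10_000)
print(f"implied VSL: ${vsl:,.0f}")  # implied VSL: $9,000,000
```

Every input here is contestable — which wage premiums count, whose risk preferences are measured — which is exactly why the device is only semi-stable across agencies.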

2)      Second, ANT can help institutionalists deal better with evolving actors and partial institutionalization. For example, I’m interested in how economists became more important to U.S. policymaking over a few decades. The problem is that while you can define “economist” as “person with a PhD in economics,” what it means to be an economist changes over time, and differs across subfields, and is fuzzy around the borders.

I do think it’s meaningful to talk about “economists” becoming more influential, particularly because the production of PhDs happens in a fairly stable set of organizational locations. But you can’t just treat growth theorists of the 1960s and cost-benefit analysts from the 1980s and the people creating the FCC spectrum auctions in the 1990s as a unitary actor; you need ways to handle variety and evolution without losing sight of the larger category. And you need to understand not only how people called “economists” enter government, but also how people with other kinds of training start to reason a little more like economists.

Drawing from ANT helps me think about how economists and their intellectual tools gain a more-or-less durable position in policymaking: by establishing institutional positions for themselves, by circulating a style of reasoning (especially through law and public policy schools), and by establishing policy devices (like VSL). (See also my recent SER piece with Dan Hirschman.) Once these things have been accomplished, then economics is able to have effects on policy (that’s the second half of the book). While the language I use still sounds pretty institutionalist—although I find myself using the term “stabilized” more than I used to—it is definitely informed by ANT’s attention to the work it takes to make social arrangements last. Thus I end up with a very different story from, for example, Fligstein & McAdam’s about how skilled actors impose a new conception of a field — although new conceptions are indeed imposed.

I don’t have a lot of interest in fully adopting ANT as a methodology, and I don’t think the social always needs to be reassembled. The ANT insights also lend themselves better to qualitative, historical explanation than to quantitative hypothesis testing. But all in all, although I remain an institutionalist, I think my work is better for its engagement with ANT.

Written by epopp

June 5, 2014 at 8:31 pm

the real philosophy of social science puzzle

There is an intrinsic interest in the philosophy of social science. Ideally, we all want well motivated and logical explanations for how we should do our professional work. However, there is usually one question that you don’t hear much about – why does scholarship seem to progress in the absence of a well motivated philosophy? In other words, doctors probably have a bad philosophy of science, but I don’t see philosophers refusing the services of their physicians.

I don’t have an answer to this; I’ve only started to think about this issue. But I raise it in the shadow of our debate over critical realism and the earlier debate over post-modernism. The claim of some supporters is that social scientists really need a new theory of social science (e.g., critical realism) because social scientists rely on a flawed positivist theory. It may be true that positivist social science is wrong and that we should adopt a newer theory. This view does not take into account two issues: (a) The cost of adopting a new theory is steep. If Kieran can’t quite get critical realism after reading it for 18 months, then I sure won’t get it. (b) A new social science that proceeds along new rules of engagement may not generate enough differences to make it worthwhile. For example, now that Phil Gorski has adopted critical realism, how would his book, The Disciplinary Revolution, be written any differently? Not clear to me, since a lot of what Gorski does in that book is apply a specific theoretical lens in reading various developments in state formation. He might sprinkle a discussion of “multiple levels of causation” at the top but then he’d probably proceed to make similar arguments with similar data.

The ultimate puzzle, though, is in areas that seem to make progress even when practitioners work with a bad philosophy. This suggests that the demand for better foundations simply isn’t important for generating knowledge. Another datum is that advances in science, or social science, rarely require entirely new foundations. Take sociology. I don’t need to adopt anything new to, say, appreciate Swidler’s attack on functionalism. And I seem to be able to understand most feminist sociology by using meat and potatoes positivism. The bottom line is that, at the very least, there needs to be an explanation for the ubiquitous disjuncture between foundations and practice.


Written by fabiorojas

March 13, 2014 at 12:30 am

IRB blues: s. venkatesh v. new york sex workers

A group of sex workers in New York City has openly criticized Sudhir Venkatesh’s recent ethnography of New York sex workers. There are many criticisms, but one stands out for me. An article from the Museum of Sex blog relates how SWOP-NYC and SWANK, two sex worker groups, thought that Venkatesh’s work increased the risk to prostitutes by reporting that clients could opt out of condoms for a 25% surcharge:

His conclusions, for example about large numbers sex workers advertising on Facebook, were easily shown by other researchers and commentators to be incorrect. Other conclusions such as the fiction that “there’s usually a 25% surcharge” to have sex without a condom not only bore no relationship to reality but also endangered sex workers and public health programs working with them.

We were so concerned by what we uncovered that in October 2011 we wrote a letter to the Columbia University Institutional Review Board (IRB) and to the Sociology Department asking for some clarity about Sudhir Venkatesh’s research. Specifically, we asked for the research project titles, dates of research, and IRB approval numbers for each of the years he claimed to have conducted research while at Columbia University. We also wished to make Columbia University’s IRB and the Sociology Department aware that the research appeared to create additional harms and risks for sex workers in the New York area. Our action is an example of the degree to which communities of sex workers have organized and the degree to which we will question research that we find harmful. We are no longer a “gift that keeps on giving” for Venkatesh, we are a community that speaks for itself.

For me, the IRB issue sticks out for legalistic reasons. How exactly does a third party appeal to an IRB board? It’s obvious if the aggrieved person is a research subject. But what about third parties? Let’s say that SWOP & SWANK are correct that this book/article increases risk; what responsibility (if any) does an IRB board have?

The issue is unclear because IRBs themselves are muddled institutions. They don’t operate through statute or contract. They are ad hoc administrative units set up by universities to make sure research complies with federal guidelines. At most, they can interfere in research if you cross them. But they aren’t penal institutions – there’s no IRB police. There’s no “human subjects 911.” Even though I am sympathetic to the claim that ethnographic publications may endanger at-risk groups, it is unclear to me how third parties may leverage genuine concern into an actionable complaint.


Written by fabiorojas

November 11, 2013 at 2:24 am

take the pill

Today, I’ll directly address Sam Lucas’ article, “Beyond the Existence Proof: Ontological conditions, epistemological implications, and in-depth interview research,” published in 2012 in Quantity and Quality. In it, he argues that there is no basis at all for generalizing conclusions from the types of unrepresentative samples that are used by interview researchers. The best you can do is use the sample to document some fact (“an existence proof”), not make any out of sample generalizations. You can read Andrew Perrin’s commentary here.

To illustrate his argument, let’s return to yesterday’s hypothetical about unrepresentative samples. I said: “What if Professor Lucas suddenly found out that his heart medication was tested with an unrepresentative sample of white people from Utah? Should he continue taking the medication?” I’ll outline two answers to this question.


Written by fabiorojas

October 11, 2013 at 12:01 am

three cheers for PLOS ONE!

Disclaimer: I’ve been a long time advocate for journals like PLoS One and I have an article that’s working its way through that journal, which I will shamelessly self-promote at a later time.

Last week, John Bohannon announced a hoax. He intentionally wrote an obviously flawed article on cancer research and submitted it to a bunch of open access journals. About two thirds of the journals accepted the paper. I'm glad such chicanery was exposed. Once you've been in academia for a few years, you quickly learn that there are a lot of publishers who have no scruples. The sting even caught journals managed by "legitimate" vendors such as Elsevier. Bring the sunlight.

Interestingly, one of the journals that did not fall for the hoax was the much-maligned PLOS ONE (e.g., Andrew Gelman recently called it a "crap journal"). From Bohannon's article:

The rejections tell a story of their own. Some open-access journals that have been criticized for poor quality control provided the most rigorous peer review of all. For example, the flagship journal of the Public Library of Science, PLOS ONE, was the only journal that called attention to the paper’s potential ethical problems, such as its lack of documentation about the treatment of animals used to generate cells for the experiment. The journal meticulously checked with the fictional authors that this and other prerequisites of a proper scientific study were met before sending it out for review. PLOS ONE rejected the paper 2 weeks later on the basis of its scientific quality.

Good for them. This speaks well of the PLOS ONE model. Normally, journals employ two criteria – technical competence ("is this study correctly carried out?") and impact ("how important do we think this study is?"). PLOS ONE sticks with the first criterion while rejecting the second. It's an experiment that asks: "What happens when a journal publishes technically correct articles, but lets the scientific community – not the editors – decide what is important?"

Now we have part of the answer. A forum that drops editorial taste can still retain scientific integrity. By meticulously sticking to scientific procedure, bad science is likely to be weeded out. And you'd be surprised how much gets weeded out. Even though PLOS ONE is not competitive in any normal sense of the word, it still rejects over 30% of all submissions. In other words, almost one in three articles does not meet even the most basic standards of scientific competence.

Well managed open access journals like PLOS ONE will never replace traditional journals because we really do want juries to pick out winners. But having a platform where scientists can “let the people decide” is a good thing.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 8, 2013 at 12:01 am

unrepresentative samples, part deux

Today, I’ll situate my argument about unrepresentative samples a little bit more. Next week, I’ll provide some commentary on Sam Lucas’ (2012) article, “Beyond the Existence Proof: Ontological conditions, epistemological implications, and in-depth interview research,” which provides a counter-argument to my view.

First, it’s a relief to not be alone on this issue. For example, Pete Warden, who runs a consulting firm, made the following comment on his blog:

It’s uncontroversial in the commercial world that biased samples can still produce useful results, as long as you are careful. There are techniques that help you understand your sample, like bootstrapping, and we’re lucky enough to have frequent external validation because we’re almost always measuring so we can make changes, and then we see if they work according to our models. The comments on this post are worth reading because the approach seems to offend some sociologists viscerally.

The main issue is that academics strive toward purity. We want perfect experiments, logical proofs, and random samples. These aren’t necessarily bad things, but they aren’t needed in many cases when you have evidence that a non-random sample is close enough. We prefer “a reviewer can’t argue with me” to “this is probably correct.” Another commenter also draws our attention to a debate in the epidemiology literature, which is worth reading.
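Warden's mention of bootstrapping can be made concrete. The numbers below are hypothetical; this is a minimal sketch of using the bootstrap to gauge how stable an estimate from a small, non-random sample is. Note the important caveat in the comments: the bootstrap describes the sample's internal variability, it cannot remove selection bias.

```python
import random

random.seed(42)

# Hypothetical, non-random sample of measurements (e.g., survey responses)
sample = [4.1, 3.8, 5.2, 4.7, 4.0, 3.9, 5.5, 4.3, 4.6, 4.2]

def bootstrap_means(data, n_boot=10_000):
    """Resample with replacement and collect the mean of each resample."""
    n = len(data)
    return [
        sum(random.choice(data) for _ in range(n)) / n
        for _ in range(n_boot)
    ]

means = sorted(bootstrap_means(sample))

# A rough 95% interval for the mean under resampling. This tells us how
# sensitive the estimate is to which cases landed in the sample -- it does
# NOT correct for the sample being unrepresentative in the first place.
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(f"point estimate: {sum(sample) / len(sample):.2f}")
print(f"bootstrap 95% interval: [{lo:.2f}, {hi:.2f}]")
```

That distinction is exactly the one Warden draws: techniques like this help you understand your sample, while external validation is what tells you whether the sample is close enough to the population.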

Second, I wanted to clarify a few things. For example, no one should read my post and think that I don’t like random samples. Just for the record:

  • Random samples are good!
  • If you have a choice between a good representative/random sample and one with bias, go for the random sample!
  • If you are conducting research, random samples should be your first option in research design. Reject random samples only if you have an overwhelmingly good reason.
  • You can’t just automatically make broad conclusions with non-random samples.
  • I use random samples in my research. When I can’t, I try to approximate them as best as I can.

So, where do I differ with mainstream consensus?

  • The lack of randomness doesn’t automatically mean that model estimates are biased. The situation, for lack of a better word, is “indeterminate.” The model may be way off, or a little off, or off in a way that can be corrected. Or it may be right on. You just don’t know. This is simply the basic logic of mathematics (e.g., X –> Y does not imply not X –> not Y).
  • When we have good reason to rely on them, non-random samples should be the focus of further research that provides external assessment.
  • For specific research designs that produce unrepresentative samples, we can probably provide a well-motivated analysis of when a design yields data whose model estimates are very close to the true model, or can be systematically corrected.
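The "indeterminate" point above can be illustrated with a toy simulation (all numbers hypothetical). When sample selection depends on a trait unrelated to the X–Y relationship, the slope estimate from the non-random sample stays close to the truth; when selection depends on the outcome itself, the estimate is badly attenuated. Non-randomness per se is not the problem — it is *how* the sample is non-random.

```python
import random

random.seed(0)

def slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# True model: y = 2x + noise; z is an unrelated trait used for selection.
population = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    y = 2 * x + random.gauss(0, 1)
    z = random.gauss(0, 1)
    population.append((x, y, z))

# Non-random sample 1: keep only units with z > 0 (selection unrelated to x and y).
s1 = [(x, y) for x, y, z in population if z > 0]
# Non-random sample 2: keep only units with y > 1 (selection on the outcome).
s2 = [(x, y) for x, y, z in population if y > 1]

b1 = slope([x for x, _ in s1], [y for _, y in s1])
b2 = slope([x for x, _ in s2], [y for _, y in s2])
print(f"selection on unrelated z: slope = {b1:.2f} (truth: 2)")
print(f"selection on outcome y:   slope = {b2:.2f} (truth: 2)")
```

The first estimate lands near 2 despite the sample being non-random; the second is pulled well below 2 by truncation on the dependent variable. That is the sense in which the situation is indeterminate: you need to know the selection mechanism before you can say whether the estimate is off.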

Why is this an important issue?

  • Some populations are resistant to random sampling. The homeless population, for example, is notoriously hard to sample.
  • Other populations are impossible to sample. For example, there is no way to randomly sample Paris residents circa 1500. We're stuck with non-random samples from the historical record.
  • Some fields are riddled with vagueness. In social movement studies, there is no universally accepted definition of "activist." Any definition will probably introduce some biases.

So we have to work with non-random samples to make progress in many important social science areas.

Finally, let me lay out my basic assumptions. At heart, I’m a Bayesian. I don’t believe that every study lives by itself. Instead, I have lots of information. That doesn’t mean that I accept all unrepresentative samples. Rather, it means that I accept them when I have additional information that suggests that the bias is small or systematic and correctable.
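The "systematic and correctable" case can be made concrete with a toy post-stratification example (all numbers hypothetical): if additional information tells us how the sample's composition differs from the population's, reweighting recovers the population quantity from the biased sample.

```python
# Hypothetical: a sample over-represents group A (70% of the sample vs. 40%
# of the population), but we know the true group shares, so the bias is
# systematic and correctable by reweighting the group means.
sample = {"A": {"n": 700, "mean": 5.0}, "B": {"n": 300, "mean": 3.0}}
population_share = {"A": 0.4, "B": 0.6}

naive = sum(g["n"] * g["mean"] for g in sample.values()) / sum(
    g["n"] for g in sample.values()
)
weighted = sum(population_share[k] * sample[k]["mean"] for k in sample)

print(f"naive sample mean:   {naive:.2f}")    # 4.40, pulled toward group A
print(f"reweighted estimate: {weighted:.2f}")  # 3.80, the population value
```

The correction only works because of the extra information (the true group shares); without it, the bias really is indeterminate, which is precisely the Bayesian point above.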

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 1, 2013 at 12:01 am

More words on critical realism: Getting clear on the basics

One thing that I found dissatisfying about our earlier "discussion" on CR is that it ultimately left the task of actually getting clear on what CR "is" unfinished (or bungled). Chris tried to provide a "bulletpoint" summary in one of the out-of-control comment threads, but his quick attempt at exposition mixed together two things that I think should be kept separate: what I call high level principles, and the substantively important derivations from those principles. This post tries to follow Chris Smith's sound advice that "we'll all do better by focusing on important matters of intellectual substance, and put the others to rest."

The task of getting clear on the nature of CR is particularly relevant for people who haven’t already formed strong opinions on CR and who are just curious about what it is. My argument here is that neither proponents nor critics do a good job of just telling people what CR is in its most basic form. The reason for this has to do with precisely the complex nature of CR as an ontology, epistemology, theory of science, and (most importantly) a set of interrelated theses about the natural, social, cultural, mental, world that are derived from applying the high level philosophical commitments to concrete problems. My argument is that CR will continue to draw incoherent reactions and counter-reactions (by both proponents and opponents) unless these aspects are disaggregated, and we get clear on what exactly we are disagreeing about.  One of these incoherent reactions is that CR is both a “giant” package of meta-theoretical commitments and that CR is actually a fairly “minimalist” set of principles the reasonable nature of which would only be denied by the certifiably insane.

In particular it is important to separate the high level "core" commitments from all the substantive derivations, because it is possible to accept the core commitments and disagree with the derivations. In essence, a lot of the stuff (actually most of the stuff) that gets called "CR" consists of a particular theorist's application of the high level principles to a given problem. For instance, one can apply (as did Bhaskar in the "original" contributions) the high level ontology to derive a (general) theory of science. One can (as Bhaskar also did) use the general theory of science to derive a local theory (both descriptive and normative) of social science (via the undemonstrable assumption that social science is just like other sciences). And the same can be done for pretty much any other topic: I can use CR to derive a general theory of social structures, or human action, or culture, or the person, or whatever. Once again, the cautionary point above stands: I can vehemently disagree with all the special theories while still agreeing with the high level CR principles. In other words, I can disagree with the conclusion while agreeing with the high-level premises, because I believe that you can't get where you want to go from where you start. This may happen because, let's say, I can see the CR theorist engaging in all sorts of reasoning fallacies (begging the question, arguing against straw men, helping him or herself to undemonstrable but substantively important sub-theses, and so on) to get from the high level principles to the particular theory of (fill in the blank: the person, social structure, social mechanisms, human action, culture, and so on).

This is also I believe the best way to separate the “controversial” from the “uncontroversial” aspects of CR, and to make sense of why CR appears to be both trivial and controversial at the same time. In my view the high level principles are absolutely uncontroversial. It is the deployment of these principles to derive substantively meaningful special theories with strong and substantively important implications that results in controversial (because not necessarily coherent or valid at the level of reasoning) theories.

The High Level Basics.-

One thing that is seldom noted by either proponents or critics of CR is that the fundamental high level theses are actually pretty simple and in fact fairly uncontroversial. These only become “controversial” when counterposed to nutty epistemologies or theories of science that nobody holds or really believes (e.g. so-called “positivism”, radical social constructionism, or whatever). I argued against this way of introducing CR precisely because it confounds the level at which CR actually becomes controversial.

So what are these theses? As repeatedly pointed to by both Phil and Chris in the ridiculously long comment thread, and as ritualistically introduced by most CR writers in social theory (e.g. Dave Elder-Vass), these are simply a non-reductionist “realism” coupled to a non-reductionist, neo-Aristotelian ontology.

The non-reductionist realism part is usually the one that is much ballyhooed by proponents of CR, but in my view, this is actually the least interesting (and least distinctive) part of CR in relation to other options. In fact, if this were all that CR offered, there would be no reason to consider it any further. So the famous empirical/actual/real (EAR) triad is not really a particularly meaningful signature of CR. The only interesting high-level point that CR makes at this level is the injunction "thou shalt not reduce the real to the actual, or worse, to the empirical." Essentially: the world throws surprises at you because it is not reducible to what you know, and is not reducible to what happens (or has happened or will happen). I don't think that this is particularly interesting because no reasonable person will disagree with these premises. Yes, there are people who seem to say something different, but once you sit them down for 10 minutes and explain things to them, they would agree that the real is not reducible to our conceptions or our experiences of reality. Even the seemingly more controversial point (that reality is not reducible to the actual) is actually (pun intended) not that controversial. In this sense CR is just a vanilla form of realism.

When we consider the CR conception of ontology, things get more interesting. Most CR people propose an essentially neo-Aristotelian conception of the structure of the world as composed of entities endowed with inherent causal powers. This conception links to the EAR distinction in the following sense: the real causal powers of an entity endow it with a dispositional set of tendencies or propensities to generate actual events in the world; these actual events may or may not be empirically observable. The causal powers of an entity are real in the sense that these powers and propensities exist even if they are never actualized or observed by anyone. To use the standard trite example, the causal power to break a window is a dispositional property of a rock; this property is real in the sense that it is there whether it is ever actualized (an actual window-breaking-with-a-rock event happens in the world), and whether anybody ever observes this event.

Reality, then, is just such a collection of entities endowed with causal powers that come from their inherent nature. The nature of entities is not an unanalyzable monad but is itself the ("emergent," in the sense outlined below) result of the powers and dispositions of the lower level constituents of that entity suitably organized in the right configuration. What earlier conceptions of science called "laws of nature" happen to be simply observed events generated by the actualization of a mechanism, whereby a "mechanism" is simply a regular, coherently organized collection of entities endowed with inherent causal powers acting upon one another in a predictable fashion. Scientists isolate the mechanism when they are able to manipulate the organization of the entities in question so that the event is actualized with predictable regularity; these events are then linked to an observational system to generate the so-called phenomenological or empirical regularities ("the laws") that formed the core of traditional (Hempelian) conceptions of science.

The laws thus result from the regular operation of "nomological machines" (in Cartwright's sense). The CR point is thus that the phenomenological "laws" are secondary, because they are just the effect produced by hooking together a real mechanism to produce (potentially) observable events in a regular way. So the CR people would say that Hacking's aphorism "if you can spray them they are real" is made sense of by noting that the unobservable stuff you can spray is an entity endowed with causal powers capable of generating observable phenomena when isolated as part of an actualized mechanism. The observability thing is secondary, because the powers are there whether you can observe the entity or not. That's the CR "theory of science."

The key to the CR ontology is that the nature of entities is understood using a “layered” ontological picture in which entities are understood as essentially wholes made of parts organized according to a given configuration (a system of relations). These “parts” are themselves other entities which may be decomposable into further parts (lower level entities organized in a system of relations and so on). Causal powers emerge at different levels and are not reducible to the causal powers of some “fundamental” level. Thus, CR proposes a non-reductionist, “layered” ontology, with emergent causal powers at each level.

This emergence is “ontological” and not “epistemic” in the sense that the causal powers at each level are “real” in the standard CR sense: they are not reducible to their actual manifestations nor are these “emergent” properties simply an epistemic gloss that we throw into the world because of our cognitive limitations. Thus, CR is an ontological democracy which retains the part-whole mereology of standard realist accounts, but rejects the reductionist implication that the structure of the world bottoms out at some fundamental level of reality where the really real causal powers can be found (and with higher level causal powers simply being a derivative shadow of the fundamental ones).

Getting controversial.-

Now you can see things getting interesting, because we have a stronger set of position takings. Note that from our initial vanilla realism, and our seemingly innocuous EAR distinction, along with a meatier conceptualization of entities as organized wholes endowed with powers and propensities, we are now living in a world composed of a panoply of real entities at different levels of analysis, endowed with (non-reducible) real causal powers at each level. The key proposition that is beginning to generate premises that we can actually have arguments about is, of course, the premise of ontological emergence. I argue that this premise is not a CR requirement. For instance, why can't I be a reductionist critical realist (RCR)? Essentially, RCR accepts the EAR distinction, but privileges a fundamental level; this fundamental level may ultimately figure in our theoretical conceptions of reality but it is the bedrock upon which all actual and empirical events stand. In other words, the only true "mechanisms" that I accept are the ones composed of entities at the most fundamental level of reality, which may or may not ever be uncovered. I don't seriously intend to defend this position, but just bring it up as an attempt to show that CR hooks together a lot of things that are logically independent (an emergentist ontology, an Aristotelian conception of entities, part-whole mereology, and a "causal powers" view of causation, among others).

In any case, my argument is that most of the substantively interesting CR theses do not emerge (pun intended) from the Bhaskarian theory of science, or the account of causation, or the EAR distinction. They emerge from hooking together (ontological) emergentism and an Aristotelian conception of entities and dispositional causal powers. For emergentism is what generates the (controversial) explosion of real entities in CR writing. Not only that, emergentism is the only calling card that CR writers have to provide what Dave Elder-Vass has called a "regional ontology" for the social sciences, one that does not resolve into just repeating the boring EAR distinction or the (increasingly uncontroversial) "theory of science" that Bhaskar developed in A Realist Theory of Science and The Possibility of Naturalism.

How to be a (controversial) Critical Realist in two easy steps.-

So now that we have that covered, it is easy to show how to produce a "controversial" CR argument. First, pick a mereology. Meaning, pick some entities to serve as the parts, preferably entities that themselves do not have a controversial status (most people would agree that the entities exist, form coherent wholes, have natures, and so on), and pick a more controversially coherent whole that these parts could conceivably compose. Then argue that the parts do indeed form such a whole via the ontological emergence postulate. Note that the postulate allows you to fudge on this point, because you do not actually have to specify the mechanism via which this ontological emergence relation is actualized (you can argue that that is the job of empirical science and so on). Then, hooking together the CR notion of causal powers, the EAR distinction, and the postulate of the ontological democracy of all entities, argue that this whole is now a super-addition to the usual vanilla reality. That is, the new emergent entity is real in the same sense that other things (apples, rocks, leptons, cells) are real. It has an inherent nature, a set of dispositions to generate actual events, and most importantly it has causal powers. The powers of this new emergent entity may be manifested at its own level (by affecting same-level entities), or they may be exhibited in the constraining power of that entity upon the lower level constituent entities (the postulate of "downward causation"). For instance (to mention one thing that could actually be of interest to readers of this blog), Dave Elder-Vass has provided an account of the reality of "organizations" (and the non-reducibility of organizational action to individual action) using just this CR recipe.

Now we have the materials to make some people (justifiably) discomfited about a substantive CR claim (or at least motivated to write a critical paper). For if you look at most of the contributions of CR to various issues, they resolve themselves into just the steps that I outlined above. So the CR "theory" of social structure is precisely what you think. Social structure is composed of individuals, organized by a set of relations that form a coherent (configured) whole. This whole (social structure) is now a real entity endowed with its own causal powers, which now (may) exert "downward causation" on the individuals that constitute it. These causal powers are not reducible to those of the individuals that constitute it. This is how CR cashes in what John Levi Martin has referred to as the "substantive hunch" that animates all sociological research: "the social" emerges from the powers and activities of individuals but it never ultimately resolves itself into an aggregation of those powers and activities. Note that CR is opposed to any form of ontological reduction, whether it is "downwards" or "upwards." Thus attempts to reduce social structure to the mental or interactional level are "downward conflationist," and attempts to reduce individuals to social structure (or language or what have you) are "upward conflationist." Thus, the "first" Archer trilogy can be read in this way: first, on the non-reducibility of (and ontological independence between) social structure in relation to the individual or individual activity; then "culture" in relation to the individual or individual interaction; and later (in reverse) personal agency in relation to either social structure or culture.

Essentially, the stratified ontology postulate must be respected. Any attempt to simplify the ontological picture is rejected as so much covert (or overt) reductionism or "conflation." Note that "conflation" is not technically a formal error of reasoning (as is begging the question) but simply an attempt by a theorist to simplify the ontological picture by abandoning the ontological democracy or ontological emergence postulates. A lot of the time CR theorists (like Archer) reject conflation as if it were such an error in reasoning, when in fact it is a substantive argument that cannot be dismissed in such an easy way. Note that this is weird because both the ontological democracy and the ontological emergence arguments are themselves non-demonstrable but substantively important propositions in CR. Thus, most CR attempts to dismiss either reductionist or simplifying ontologies themselves commit such a formal error of reasoning, namely, begging the question in favor of ontological emergence and ontological democracy.

Another way to make a CR argument is to start with a predetermined high level entity of choice. This kind of CR argument is more "defensive" than constructive. Here the analyst picks an entity whose real status has (for some reason) become controversial, either because some theorists purport to show that it does not "really" exist (meaning that it is just a shorthand way to talk about some aggregate of actually existing lower level entities), or because it is not required to generate scientific accounts of some slice of the world (ontological simplification or reduction a la caloric or phlogiston). Here CR arguments essentially use the ontological democracy postulate to say that the preferred whole has ontological independence from either the lower level constituents or the higher level entities to which others seek to reduce the focal entity. Moreover, the CR theorist may argue that this ontological independence is demonstrated by the fact that this entity has (actualized and/or empirically observable) causal powers, once again above and beyond those provided by the lower level (or higher level) entities or processes usually trotted out to "reduce it away." This applies in particular to the "humanist" strand of CR that attempts to defend specific causal powers that are seen as inherent properties of persons (e.g. reflexivity in Archer's case) or even the very notion of the person (in Chris Smith's What is a Person?) as an emergent whole endowed with specific causal powers, properties and propensities.

To recap, CR is a complex object composed of many parts. But not all parts are of the same nature. I distinguish roughly five parts, organized according to the generality of the claim and the specificity of the substantive points made. In this respect, I would distinguish between:

1) The parts that CR shares with all “vanilla” realisms. This includes the postulate of ontological realism (mind-independence of the existence of reality), the transitive/intransitive distinction, the EAR distinction, and so on. In itself, none of these theses make CR particularly distinctive, unique or useful. If you disagree with CR at this level, based on irrealist premises, congratulations. You are insane.

2) The Aristotelian ontology.- This specifies the kind of realism that CR proposes. Here things get more interesting, because there is actual philosophical debate about this (nobody seriously defends irrealist positions in Philosophy any more and most sociologists just like to pretend to be irrealists to show off at parties). Here CR could play a role in philosophical debates insofar as a neo-Aristotelian approach to realism and explanation is a coherent position in Philosophy of Science (although it is not without its challengers). Here belongs (among other things) the specific CR conceptualizations of objects and entities, the causal (dispositional) powers ontology (when hooked to the EAR distinction) and the specific “Theory of Science” and the “Theory of Explanation” that follows from these (essentially endorsing mechanismic and systems explanation over reductive, covering law stories). This is what I believe is the best ontological move and CR should be commended in this respect.

3) The stratified ontology.- This comes from yoking (1) and (2) to the ontological emergence and ontological democracy postulates. This is where you can find a lot of "controversial" (where by controversial I mean worth arguing about, worth specifying, worth clarifying, and in some cases worth rejecting) arguments in CR. These are of three types: (1) "Ontological emergence" arguments augment the standard common-sense ontology of material entities to argue for the existence of higher level non-material entities; thus "social structures" are as real as the couch that you are lying on. The danger here is a world that comes to be populated with a host of emergent "entities" with no principled way of deciding which ones are in fact real (beyond the theorist's taste); this is the problem of ontological inflation. (2) "Downward causation" arguments add this postulate to suggest that the emergent (non-material or material) entities not only "exist" in a passive sense, but actually exert causal effects on lower level components or other higher-level entities. (3) "Ontological independence" arguments attempt to show that a particular sort of entity that is usually done violence to in standard (reductionist) accounts has a level of ontological integrity that cannot be impugned and a set of causal powers that cannot be dismissed. In humanist and personalist accounts, this entity is "the person," along with a host of powers and capacities that are usually blunted in "social-scientific" accounts (e.g. persons as centers of moral purpose), and the enemies are the positions that attempt to explain away these powers or capacities or that attempt to show that they don't matter as much as other entities (e.g. "social structure").

4) Continuing extensions of the stratified ontology argument.- This is the part of CR that has drawn an (unfair) amount of attention, because it extends the same set of arguments to defend both the reality and the causal powers of a set of entities that (a) a lot of people are diffident about according the same level of reality as the standard material entities, and (b) most people would have difficulty even calling entities. These may be "norms," "the mental," "the cultural," "the discursive," and "levels of reality" above and beyond the plain old material/empirical world that we all know and love (e.g. super-empirical domains of reality). You can see how CR can get controversial here.

5) Additional stuff.- A lot of other CR arguments do not directly follow from any of these, but are added as supplementary premises to round out CR as a holistic perspective. For instance, the rejection of the fact-value distinction in science is not really a logical derivation from the theory of science or the neo-Aristotelian ontology, and neither is the "judgmental rationality" postulate (that science progresses, gradually gets at the truth, etc.). I mean, all realisms presuppose that we get better at science, but this is not really a logical derivation from realist premises (as argued by Arthur Fine). The fact/value thing is in the same boat, because it requires a detour through a lot of controversial group (3) and group (4) territory to be made to stick. For instance, given that persons are emergent entities endowed with non-arbitrary properties and powers, the "relativist" argument that any social arrangement is as good as any other for the flourishing of personhood is clearly not valid. This means that social scientists have to take a strong stance on the value question (hence sociological inquiry cannot be value neutral). Because a mixture of Aristotelian ontology and ontological emergentism applied to human nature is incompatible with moral (and social-institutional) relativism, the fact/value distinction in social science is untenable. However, note that to get there a lot of other premises, sub-premises, and substantive arguments for the reality of persons as emergent, neo-Aristotelian entities have to be accepted as valid. In this sense the fact/value thing is only a derivation from certain extensions of CR into controversial territory. As already intimated, What is a Person? is a (well-argued!) piece of controversial CR precisely in this sense.

Note that this clarifies the “giant package” versus “minimalist” CR debate. Let’s go back to the cable analogy. So you are considering signing up for CR? Here’s the deal: The “basic” CR package would (in my view) be any acceptance of (1) and (2) (with some but not all elements of (3)). In this sense, I am a Critical Realist (and so should you be). The “standard” CR package includes, in addition to (1), (2), and all of (3), some elements of (4). Here we enter controversial territory, because a lot of CR arguments for the “reality” of this or that are not as tight or well-argued as their proponents suppose. In their worst forms, they resolve themselves into picking your favorite thing (e.g. self-reflexivity), and then calling it “real” and “causally powerful” because “emergent.” It is no surprise that Archer’s weakest work is of this (most recent) ilk. Here the obsession with ontological democracy prevents any consideration of ontological simplification or actual ontological stratification (meaning getting clear on which causal powers matter most rather than assigning each one their preferred, isolated level). Finally, the “turbo” package requires that you sign up for (1) through (5); this, of course, is undeniably controversial, because here CR goes from being a philosophy of scientific practice to being a philosophy of life, the universe and everything. Sometimes CR people seem to act surprised that people may be reluctant to adopt a philosophy of life, but I believe that this has to do with their penchant to suppose that once you accept the basic, the chain of reasoning that will lead you to the standard and the turbo follows inexorably and unproblematically.

This is absolutely not the case, and this is where CR folk would benefit most from talking to people who are not fully committed to the turbo, but who (like other sane people) are already 80% into the basic (and maybe even the standard). My sense is that we should certainly be arguing about the right things, and in my view the right things are at the central node (3), because this is where the key set of argumentative devices lies that allows CR people to derive substantively meaningful (“controversial”) conclusions (both at that level—arguments for the reality of “social structure”—and about type (4) and (5) matters), and where most attempts to provide a workable ontology for the social sciences are either going to be cashed in, or rejected as aesthetically pleasing formulations of dubious practical utility.

Written by Omar

September 14, 2013 at 7:48 pm

Three thousand more words on critical realism

The continuing brouhaha over Fabio’s (fallaciously premised) post*, and Kieran’s clarification and response, has actually been much more informative than I thought it would be. While I agree that this forum is not the most adequate place to seriously explore intellectual issues, it does have a (latent?) function that I consider equally valuable in all intellectual endeavors, which is the creation of a modicum of common knowledge about certain stances, premises, and even valuational judgments. CR is a great intellectual object in the contemporary intellectual marketplace precisely because it seems to demand an intellectual response (whether by critics or proponents), thus forcing people (who otherwise wouldn’t) to take a stance.  The response may range from (seemingly facile) dismissal (maybe involving dairy products), to curiosity (what the heck is it?), to considered criticism, to ho-hum neutralism, to critical acceptance, or to (sock-puppet aided) uncritical acceptance.  But the point is that it is actually fun to see people align themselves vis-à-vis CR because it provides an opportunity for those people to actually lay their cards on the table in a way that seldom happens in their more considered academic work.

My own stance vis-à-vis CR is mostly positive. When reading CR or CR-inflected work, I seldom find myself vehemently disagreeing or shaking my head vigorously (this in itself I find a bit suspicious, but more on that below). I find most of the epistemological and meta-methodological recommendations of people who have been influenced by CR (like my colleague Chris Smith, Phil Gorski, George Steinmetz, or Margaret Archer) fruitful and useful, and in some sense believe that some of the most important of these are already part of sociological best practice. I think some of the work on “social structure” that has been written by CR-oriented folk (Doug Porpora and Margaret Archer early on, and more recently Dave Elder-Vass) is important reading, especially if you want to think straight about that hornet’s nest of issues. So I don’t think that CR is “lame.” Like any multi-author, somewhat loose cluster of writings, CR does include some work that is indeed lame, but the same could be said of anything (there are examples of lame pragmatism, lame field theory, lame network analysis, lame symbolic interactionism, etc., without any of these lines of thought being “lame” in their entirety).

That said, I agree with the basic descriptive premises of Kieran’s post. So this post is structured as a way to try to unhook the fruitful observations that Kieran made from the vociferous name-calling and defensive over-reactions to which these sorts of things can lead. So think of this as my own reflections on what this implies for CR’s attempt to provide a unifying philosophical picture for sociology.


united against critical realism

Written by fabiorojas

September 4, 2013 at 2:32 am