orgtheory.net

ethnography… what is it good for?

In the comments of my last post about guides for conducting ethnography, Teppo raised the question of “[what is] the role of ethnography in terms of generating local description versus general theories/mechanisms?”  Similarly, Gabriel asked, “have you found it to be a conflict in applying a method that is often very thick and substantively oriented to a subfield that is often very abstract and mechanism oriented? would you recommend any of the works in the list as being relevant to this issue?”

Here’s my attempt at a partial answer: Thick description has its place, especially when we are just starting to observe new or understudied phenomena.  (Indeed, in the sciences, researchers describe and categorize animals, plants, stars, etc. based on their characteristics.)  The difficult, but often mind-blowing (in a good way), challenge is to think through how an observed case can shed insight on phenomena more generally.  Often, this involves channeling C. Wright Mills’ The Sociological Imagination (1959, Oxford University Press), intensive reading of others’ work, and building upon others’ research to make links.

In that vein, rather than take a stance on the quantitative vs. qualitative debate or on the question of how to make qualitative research more “scientific,” I will comment on how these issues suggest a shift in the conduct of research more generally.  For this, I will need to tell a story.  About a month ago, I had a conversation with an established physicist about governmental funding for experimental research.  This researcher noted that before making a decision about which experiments to fund, one governmental agency now demands that scientists run simulation models of the proposed experiment and submit these results as part of the grant application process.  However, to the chagrin of this physicist, the simulation models were often wrong, mostly due to the division of labor (experts in simulations develop and run the models, while physicists write the grants and manage the experiments).  When I asked why the government would demand these apparently useless simulation models, the physicist ventured that this was an attempt by the government to ensure that these experiments were likely to produce desired results.

Here’s what I interpreted as the take-away point: the conduct of research is experiencing an increasing demand for rationality.   From year-end reports about university researchers’ productivity to demands for simulation models in grant applications, the conduct of research is undergoing what George Ritzer calls McDonaldization.  We are subject to the demands for:

1. efficiency: for example, preferentially fund those who build or work with large, sharable data sets;

2. calculability: for example, judge the quality of research by counts of interviews/cases/etc., assess productivity by the number of publications, decide funding based on projected results, and use technologies such as MRI (in economics, psychology, neuroscience, etc.) to quantify findings;

3. predictability: for example, fund those with a long list of publications over those who do not yet have a track record; and

4. control through nonhuman technology: for example, use computer programs to manipulate data.

Of course, this drive toward rationality can be problematic, as interesting discoveries are sometimes unplanned or unanticipated (penicillin, vulcanized rubber, X-rays, ceramic superconductors, Viagra, etc.).  Given all this, we should ask: are we ceding control over the conduct of research?  Should we learn how to lobby for research?  Should we be EDUCATING our students and policy-makers about the importance of research, as well as the conduct of research?

To return to the original request, here is a starting point for recommended readings (thanks to Mario Small and Matthew Hughey for two of the suggestions!):

Using physicist Richard Feynman’s cargo cult analogy, Mario Small discusses the difficulty of satisfying the dreaded call for “generalizability” in his Ethnography article (2009), “‘How Many Cases Do I Need?’: On Science and the Logic of Case Selection in Field-Based Research.”

Howard S. Becker provides a counterpoint to the 2009 NSF attempt to establish interdisciplinary standards with “How to Find Out How to Do Qualitative Research.”

In chapter 1 of Doormen (2005, University of Chicago Press), Peter Bearman argues for the need to link models and thick description.

Other ethnographies, such as Rachel Sherman’s Class Acts: Service and Inequality in Luxury Hotels (2007, University of California Press), show how to link qualitative data to larger phenomena.

Please write your suggestions for other readings in the comments!

Written by katherinekchen

October 1, 2009 at 6:08 pm

14 Responses

  1. […] observes and/or participates in activities, as well as the conduct of research more generally in a guest post on orgtheory.net. […]

  2. “Of course, this drive toward rationality can be problematic, as interesting discoveries are sometimes unplanned or unanticipated…”

I agree that the drive towards rationalization can be problematic in many ways, but I don’t see how this would undermine the possibility of unanticipated findings. As long as you’re conducting research you’ll get those relatively often, right? The case you cite of a funding agency that requires simulations that (according to one source) aren’t that predictive seems like a case where unanticipated findings are probably quite common.

    Thorstein Veblen

    October 1, 2009 at 6:34 pm

  3. I really like Ragin’s and Becker’s edited book, What is a Case? The book is full of wonderful essays, including the final chapter in which Becker wrote the following:

    A major problem in any form of social research is reasoning from the parts we know to something about the whole they and parts like them make up. This is not a sampling question in the conventional sense. We are not trying to find out, by learning the proportion of cases which have property X in our sample, what the similar proportion is in the universe from which our cases come, or anything formally similar to that. Rather, we want to create an image of the entire organization or process, based on the parts we have been able to uncover. The logic of such an analysis is different. We ask: What kind of organization could accommodate a part like this? What would the rest of the organization have to be like for this part to be what it is? What would the whole story have to be for this step to occur as we have seen it occur?

    This kind of detailed detective work is hard if not impossible to do with quantitative data alone. Ethnography helps flesh out both the details of a particular locale and understand better how that locale fits with the whole. This type of analysis seems especially crucial in organizational research in which we have interdependent, complex systems of interaction.

    brayden

    October 1, 2009 at 6:51 pm

Thorstein, let me add a follow-up to that sentence. Yes, one would hope that rationality would not preclude interesting discoveries. But let’s imagine you’re a young scientist who is lucky enough to get a tenure track position, which you would like to keep. You need to set up a lab, often with expensive equipment and space, and your experiments may take years to set up and complete. In addition to publishing, teaching, guiding students, and supervising postdocs and technicians, you need to bring in X million dollars of funding to run your lab. You know that no funding means no grad students, postdocs, or equipment, which will lead to no publications, which then will lead to the inability to secure funding to do the research that could get you out of the hole, and you might lose your position or (if already established) standing in the field, your spouse leaves you, your kid calls you a loser, etc etc etc.

    So, what kind of research agenda are you going to pursue? Will you decide to settle for what is accepted in the field and promises a steady stream of papers? Will you gamble everything on something you deeply believe in or could possibly move the field? Where are you going to invest your sweat and tears? I’m guessing that most, who may have tenure requirements or funding guidelines in mind, are going to be moved to pick the first route. Thus, the potential to generate interesting results (or even follow-up on interesting results) may be ignored for expediency.

    katherinekchen

    October 1, 2009 at 6:54 pm

Thanks for tackling this so seriously. It seems like there are two issues going on, which can broadly be divided into what happens before the fieldwork (i.e., grants) and what happens after the fieldwork (i.e., write-up). You make a very compelling case that for all sorts of methods, not just ethnography, the grant review process is problematic (though of course it’s easy to understand why the funders are attempting to manage uncertainty like this). The write-up thing is what I was originally asking about, as it seems like ethnographers who make the theoretical hook central are sometimes criticized by some of their methodological peers as not providing enough thick description, and those who focus exclusively on thick description are sometimes criticized by some of their theoretical peers as atheoretical. (I’ve witnessed both sorts of criticism.)

    Anyway, I sincerely thank you for offering your thoughts and some more cites, which sound very good and which I look forward to checking out. I find this to be an interesting issue precisely because I see ethnographers getting such good data (and am frustrated with those cases that by omission or principle don’t find appropriate ways for that good data to speak to theory). It sounds like your cites speak to this without overreaching or making a fetish of theoretical relevance. (Likewise the Suddaby piece Brayden linked was very good on this issue as it basically argued that you can agree with everything you said about the pre-field stage and still have a write-up that speaks to theory).

    gabrielrossman

    October 1, 2009 at 6:56 pm

  6. “So, what kind of research agenda are you going to pursue? Will you decide to settle for what is accepted in the field and promises a steady stream of papers? Will you gamble everything on something you deeply believe in or could possibly move the field?”

I agree that there are some significant pressures to adopt a more conservative research agenda (though it’s also worth noting that funding agencies also emphasize novelty and significance, criteria that help correct for some of these problems). But the examples you cite of accidental findings of great value don’t, to me, argue against rationalization so much as apologize for it, since I believe they were all instances where someone was doing routine work in an area and stumbled on something much more significant or interesting by accident. Isn’t the take-away from this that you can do normal science and still get unanticipated findings as well?

    Thorstein Veblen

    October 1, 2009 at 7:28 pm

  7. Dear Thorstein, I believe our comments to each other are at cross-purposes. If you would prefer to read the post minus the references to interesting discoveries, then you’ll see (hopefully) that my take-away point concerns how the conflicts in the field may reflect larger issues about how the risk to reward ratio has shifted for the conduct of research, and that particular institutions have encouraged this shift.

Sociologists who study science might be better equipped to weigh in on your question about whether “normal” science allows for unanticipated findings.

    katherinekchen

    October 1, 2009 at 7:48 pm

  8. Lovely post, Katherine, as have been your others. If I can ask more about one piece of your argument, some of the comments that led you here, I think, posit thick description as a point on a continuum, the other point being alternatives more applicable to the kinds of abstract theory generation/testing that increasingly characterize the field. (I think of this as separate from your points about the lower-stakes-center-of-the-field vs. higher-stakes-break-down-doors decision making for the intrepid asst. professor).

Your answer seems reasonable, as I read it: to be attentive to how your case fits theory and other cases, no matter what your method. So you can have thick description, but as long as it pulls a Mills/Becker, it’s going to be necessarily theory-helping as well.

This helps with that favorite representative vs. in-depth tug-of-war. But I was wondering about differences in (non-thick-description) ethnographic methods that might accomplish similar tasks. I have in mind the extended case method, but perhaps others would choose differently. That is a kind of method applying “reflexive science to ethnography in order to extract the general from the unique” (as Burawoy would say).

    Or is the problem of the dreary Weberian march of rationalism going to swamp whatever ethnographic methods are around?

    Peter

    October 1, 2009 at 8:17 pm

  9. Katherine – great post! I wonder though, how new are these trends you note, particularly the pressures from funding agencies to be efficient, predictable, calculable, and to draw on technological innovation. Has there been a big shift in recent times (the last 5? 10? 20? years)? Some of these trends seem much older, at least as broad trends of funding patterns for social science. The simulation, for example, could be part of a long, slow trend of requiring whatever the newest tools are (like requiring MRIs or whatnot). So is this just more of the same?

    Or perhaps there has been a large shift, but only with regards to qualitative research? One example would be the recent push to “scientize” qualitative research expressed in the two NSF guides to getting funding for qualitative research that Becker is criticizing in his piece. But I’ve not been doing this long enough to note the changes, and I’m curious as to your sense of the timing of all of this.

    Dan Hirschman

    October 2, 2009 at 1:40 pm

  10. […] on ethnographic methods October 2, 2009 Check out Katherine Chen’s short but useful post on ethnography. She includes some great links towards the bottom. Katherine’s post and her […]

  11. Thanks, all, for comments and additional suggestions. In answer to questions such as Peter’s, I think we should have room for different ways of understanding phenomena. But pressures such as universities’ push for research funding, the competitiveness of research funding, the need for researchers to demonstrate regular or immediate productivity, and the willingness or ability of publication outlets to promote research may encourage people to do particular kinds of research.

    Understanding these factors may help answer Dan’s question about whether things have changed for the conduct of research in general or just in particular for qualitative research. I haven’t been around the block enough to see the scope of changes, but more senior colleagues can weigh in, or draw on what we know from the experiences of other disciplines such as physics.

    KatherineKChen

    October 3, 2009 at 8:24 pm

  12. […] research access to organizations, research, the IRB and risk, conducting ethnographic research, ethnography – what is it good for?, and writing up […]

  13. Those working under a relational sociology/ANT perspective have also debated this point.
    An interesting discussion can be found in this paper.

    http://oss.sagepub.com/content/30/12/1391.abstract

    Felipe

    July 6, 2012 at 9:53 am

  14. […] research access to organizations, research, the IRB and risk, conducting ethnographic research, ethnography – what is it good for?, and writing up ethnography, I discussed various questions and challenges of conducting […]


Comments are closed.
