should sociologists stop interviewing people?

My friend Colin Jerolmack and scatterista Shamus Khan have a new article in Sociological Methods and Research that criticizes the way many social scientists use interview data. From “Talk is Cheap:”

This article examines the methodological implications of the fact that what people say is often a poor predictor of what they do. We argue that many interview and survey researchers routinely conflate self reports with behavior and assume a consistency between attitudes and action. We call this erroneous inference of situated behavior from verbal accounts the attitudinal fallacy. Though interviewing and ethnography are often lumped together as ‘‘qualitative methods,’’ by juxtaposing studies of ‘‘culture in action’’ based on verbal accounts with ethnographic investigations, we show that the latter routinely attempts to explain the ‘‘attitude–behavior problem’’ while the former regularly ignores it. Because meaning and action are collectively negotiated and context-dependent, we contend that self-reports of attitudes and behaviors are of limited value in explaining what people actually do because they are overly individualistic and abstracted from lived experience.

Overall, I find much to like in the article, but I wouldn’t get carried away. First, interviews and surveys vary in their degree of bias. I probably trust a question about educational history more than one about, say, racial attitudes. On a related point, you can also assess the quality of questions: political scientists have documented biases in survey questions, and that tells you how good a question is. Second, in some cases you have no choice but to work with interviews and surveys. For example, interviews are crucial for historical work. So when I do interview research, I use it with caution.


Written by fabiorojas

March 17, 2014 at 12:42 am

Posted in fabio, mere empirics

10 Responses


  1. Also, interviews can be used to examine meanings and understandings. We shouldn’t stop doing a whole method, but rather we should seek to analyze data while taking into account its limitations as well as the insights it provides.



    March 17, 2014 at 1:46 am

  2. I think there is a danger in these discussions of conflating data and method. Jerolmack and Khan (from my first quick read) seem concerned with a kind of methodology that attempts to find the causes of individual action in the narratives provided by individuals in interviews – a common methodological approach usually referred to as “qualitative interviews” or the like. But using interviews for historical sociology looks very different, and will necessarily approach the interviews-as-data differently (as well as asking different questions). The “case” is rarely the individual, for example, and the purpose of the interview may be as much to reconstruct the pattern of happenings in a larger event, rather than trying to understand a pattern of action-narrative linkages across individuals. Thus, for one example, Jerolmack and Khan’s concern about reading culture at the individual level is likely to be less of an issue for historical sociology with events-as-cases that relies on individual-level interviews to help piece those events together.

    Put differently, I don’t know that there’s much in the piece either way for historical sociology which draws on interviews as data (usually alongside many other sources) except a reminder that people’s accounts and their actual past behavior may not coincide (a problem that historians and historical sociologists are, I think, reasonably comfortable with). Which is not a criticism of the article, since those are not its aims (history and historical sociology are not discussed, as far as I can tell), but rather a criticism of our tendency to conflate “kind of data” with “methodological approach.”


    Dan Hirschman

    March 17, 2014 at 2:51 am

  3. People like David Silverman, Jaber Gubrium, and more recently the authors operating under the label of “practice-based studies” (see ch. 1 of Nicolini’s 2013 book) have been making the same point for years. None of these authors are named in this otherwise nice, clear, and agreeable article, or at least in the copy freely available online.

    So maybe your next post should be on how, amid the current explosion of scholarly communities, niches, and journals, apparent novelty is at times an effect of fragmentation, lack of communication, or the fact that reviewers and editors do not read as much as they should or do not do their work properly…


    Davide Nicolini

    March 17, 2014 at 9:55 am

  4. Just to add to Dan’s comment, the benefit of historical sociology is that we already know what people did since the focus is the analysis of events. Interviews are really meant to help give interpretation and meaning to those events. Even if we can’t precisely identify why people did what they did, we can use interviews to help piece together different motivations, processes, and mechanisms that may explain why certain events occurred as they did.


    brayden king

    March 17, 2014 at 3:35 pm

  5. Reading through the article itself and the rest of the debate in Sociological Methods and Research, my feeling is that this blog post could have been titled more thoughtfully.

    For a rejoinder of sorts, I encourage folks to look at Lamont and Swidler’s piece on methodological pluralism, in addition to the responses in SMR by Vaisey and (coming soon) DiMaggio, Cerulo, and others.



    March 17, 2014 at 4:13 pm

  6. Hello all. As one of the authors I just want to chime in: Shamus and I are in agreement with Dan. While we may appear to disparage interviews in general, our beef is specifically with using verbal data about schemas, frames, dispositions, attitudes, toolkits, and the like, gathered in an interview setting, to make claims about behavior in the absence of any other data about behavior. Lamont and Swidler are correct that observations without question-asking have major drawbacks for explaining social action, but any competent ethnographer includes interviews/question-asking and cares deeply about meaning. Aside from the “behavior in public places” model of fieldwork [William Whyte, Laud Humphreys, Jack Katz’s How Emotions Work, Erving Goffman], I cannot actually think of a single ethnography that ignores meaning. So this is not about interviews vs. observations but rather about observations AND interviews [i.e., ethnography] vs. interviewing ALONE. If interviewers are totally uninterested in behavior, then our critique is irrelevant for them. But we find that many interviewers who study “sense-making” imply that understanding an actor’s “frames” or “schemas” allows them to make inferences about behavior. This is what we call the attitudinal fallacy, because studies show that talk and action are often unrelated. When ethnographers interview people, the meaning they make out of the verbal data comes from relating it to data gathered by other means [observations, documents/archives, etc.; this seems similar to what Dan is saying about what many historians do with verbal data]. Of course, some people who primarily do interviews triangulate too, but a lot of the standard-bearers in cultural sociology and other subfields make or imply behavioral claims based solely on interview data.



    March 17, 2014 at 5:16 pm

  7. To piggyback on what Colin said… our paper doesn’t argue that sociologists should stop interviewing people (indeed, we both do this in our own work). It’s about delineating the kinds of conclusions one can draw from the kinds of data one gathers. So the basic point is that if you gather attitudinal data, you should be very cautious about making behavioral claims. It certainly could be that attitudes tell us something (and in some cases, that they’re strongly related to behaviors). But we need to be clear about what attitudes tell us if they don’t reflect behavior; or, we need positive evidence of a strong relationship before jumping to arguments about behaviors from attitudinal evidence. In addition to the Lamont and Swidler paper (which strangely reads like an elaborate response to our paper without discussing our paper), I would encourage interested people to check out Allison Pugh’s work on this issue (which, contra our own paper, argues for a range of things that interviews are good for — many of which we’re in agreement with):
    Steve Vaisey has written a reply to that paper:


    Allison Pugh has responded to Vaisey:
    Steve has also written a reply to our paper:
    And we’ve replied to him (and to Paul DiMaggio, Karen Cerulo, and Doug Maynard, whose replies to our paper aren’t up yet, but should be soon):
    We certainly don’t cite everything we could cite in the paper — but really the problem we talk about isn’t “our” problem; sociologists have been writing about it for almost a century (and others even longer). Indeed, there’s the classic 1934 LaPiere study on this:

    Finally, those at ASA might be interested in a presidential panel on these issues. It will be me, Howard Becker, Al Young, and Allison Pugh.



    March 17, 2014 at 8:26 pm

  8. Great. I’ve been waiting for this paper to come out. Can’t wait to read it. In general, I agree.

    I think the impetus for “interview only” studies in many cases is practicality. In other words, they’re easier to get off the ground. Someone with a decent research budget can arrange 25-30 interviews, pay to have them transcribed, and have a data set ready to go relatively quickly. The quickest ethnographic projects (participant-observation and interviews) that I’ve seen are 6-9 months in the field. Mine was about a year and a half all told. The best are much longer. Also, you can’t pay someone to type up fieldnotes (although I think you can train someone to code them).

    I also agree that people’s words are a poor measure of their deeds. However, my workaround (if participants are talking about something I didn’t observe) is to get them to ground their talk in a specific example. This keeps participants from relying on the “generalized past” (Weiss). For example, if you ask people what they like to eat for breakfast, they might respond: “eggs and bacon.” However, with a little prodding, you can get them to walk you through what they’ve had the past ten days (it takes some time, but most people can do it if you prod them to remember the events before/after breakfast time, what day it was, what else they were doing, etc.). I trust those memories more than the generalized/hypothetical vision they have of themselves. From those data (what they actually ate), I can then draw some type of conclusion.

    The weak point of interviewing is that it becomes tempting to take participants’ conclusions for granted. This is fine (if you are analyzing how they make sense of things), but in the end their conclusions are effectively their folk-analysis: interesting in its own right, but not a replacement for sociological analysis (the conditions under which they draw those conclusions, how they justify them, the consequences of their conclusions for themselves and others, etc.).



    March 18, 2014 at 1:44 pm

  9. I’ve basically said what I have to say about this in my comments on Colin and Shamus’s paper and Allison Pugh’s paper (which you can get on my website). I will just say here that it doesn’t make sense to focus on a couple of areas where (a) attitudes don’t predict behavior well or (b) self-reports are inaccurate in order to make claims about both these phenomena in general. The facts are that:

    (1) relevant attitudes do a pretty good job predicting behavior in most (not all) cases;

    (2) people self-report their behavior pretty accurately in most (not all) cases.

    There are exceptions and they are indeed interesting. But that doesn’t make them the rule. So people should keep doing surveys, and interviews, and ethnography. But they should definitely think about what they’re doing.


    Steve Vaisey

    March 19, 2014 at 12:47 am

  10. Three issues that occur to me as part of the broader consideration of this issue:
    1) the impetus toward interviewing from the increasing ethical constraints on participant observation;
    2) the implications of different kinds of interviewing strategies, most particularly the open-ended interview;
    3) the difference between the description of (past) behaviour and the prediction of (future) behaviour.


    Neil Maclean

    March 22, 2014 at 12:09 am

