biernacki book forum, part 3: the role of cultural competence

Part 1, Part 2. Scatterplot review by Andrew Perrin.

In this last post, I’ll discuss why I fundamentally disagree with the argument presented in Reinventing Evidence. There are two reasons. First, I agree with Andrew Perrin that Biernacki wants us to embrace a textual holism. One of Biernacki’s major arguments is that by isolating a single word or passage, we lose the meaning of the text as a whole. Thus, interpretation is the only valid approach to text; coding and quantification are invalid. Perrin points out that lots of things can be isolated. For example, if I see the n-word, I can say that, on the average, the text is employing racist language.

Second, Biernacki does not seem to consider cultural competence. In other words, human beings can often reliably capture the meaning of utterances made by other humans from the same cultural group. Of course, I am talking about things like everyday speech or short and simple writings like newspaper articles. More complex texts, like novels, have networks or dense layerings of meaning that go beyond a human’s native capacity for communication. These probably could be coded, but it would require intense training and an elaborate theory of text, which sadly we don’t have in sociology. But my major point remains: there’s a lot of fairly simple text that can be coded. If you believe that people can accurately convey the meaning of a text, or label some aspect of it, because they are “native speakers” of the culture, then coding is a valid thing to do. To believe otherwise is to assume a world of solipsistic culture where every utterance requires a stupendous level of interpretation on the part of the audience.
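To make “reliably” concrete: coding projects usually operationalize it by checking how often independent coders assign the same label to the same text, corrected for chance agreement. Here is a minimal sketch, assuming Python and Cohen’s kappa; the labels and the ten snippets they describe are entirely made up for illustration.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label lists."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two "native speaker" coders on ten snippets.
a = ["racist", "neutral", "racist", "neutral", "neutral",
     "racist", "neutral", "racist", "racist", "neutral"]
b = ["racist", "neutral", "racist", "racist", "neutral",
     "racist", "neutral", "racist", "neutral", "neutral"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.60 here; 1.0 is perfect agreement
```

A high kappa doesn’t answer Biernacki’s deeper objection, but it is the standard evidence behind the claim that “native speakers” can reliably label simple text.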

So, to wrap things up: I give credit to Biernacki for making us think hard about the quality of coding, which is often lacking. The point that science is presented as ritual is fair, but it doesn’t address whether a particular procedure produces valid measurement or inference. And I think that the view that texts are essentially uncodable is in error.

Written by fabiorojas

April 22, 2013 at 12:15 am

Posted in books, culture, fabio, research

16 Responses

  1. It’s important to distinguish the question of whether or not the meaning of a text can be grasped from the question of whether or not it is a fact that can be discovered by science. Roughly speaking, there is a difference between understanding the meaning of a text and knowing what a text means. In your critique of Biernacki you don’t observe this distinction. Basically, you’re saying that anything that can be understood at all can be understood by science.

Biernacki’s argument is that some things can be understood but not “known” scientifically. At best, the ritual of coding conceals the process by which the analyst came to an understanding of the text in question; at worst, it allows the analyst to claim the text means something that it simply does not mean (as an ordinary reading of the text can verify).

To consider your example: there’s a long discussion about the appearance of the n-word in Mark Twain, and the question is precisely whether or not Twain is “employing racist language”. Your suggested “coding” will say that “on average” (though I’m a bit unclear about what that means here) he indeed is. But many readers of Twain disagree. The discussion might eventually turn on the meaning of “employ”. Twain might not be using the words to express his own (non-existent?) racism. But the word itself might constitute racist language, which Twain’s characters are “employing”. Etc.

    In the end, the “coding” lets us confirm that the n-word is used, say, 215 times. But it does not establish the “fact” of its meaning. It’s only once we ritualize the coding procedure and combine it with the analyst’s (unacknowledged) hermeneutic work that Twain’s “racist language” becomes a fact.

    Thomas

    April 22, 2013 at 7:14 am

  2. fabio I love your reviews. your neutrality & insights are so needed!

    Personally I think Biernacki’s social context for his argument (cuz all arguments are directed at something or someone) is that he’s seen a lot of sociologists who are producing publications/findings to meet the requirement of the field. When any field optimizes their work for metrics in a vacuum (e.g. publications & refusal to engage with a public that isn’t educated in sociological theory), we see consequences and in this case, biernacki is saying that he sees big problems in interpretive findings.

    So your point that he doesn’t attribute enough cultural competence to humans = I wonder if it’s because sociology as a field and the kind of sociologists he’s around are so inward focused that he deems them to be culturally incompetent!

    well anyways looove your review!

    tricia wang

    April 22, 2013 at 5:32 pm

  3. Garfinkel’s goldfish are adept at coding other goldfish.

    Randy

    April 22, 2013 at 7:10 pm

4. To re-phrase Thomas’s response (not that it needs re-phrasing), there is a difference between subjective, inter-subjective and objective knowledge. In Reinventing Evidence, Biernacki is not advocating for a purely subjective interpretation of reality, a morass of solipsism where no one can possibly understand anyone else. Rather, Biernacki critiques social scientists’ ability to produce worthwhile *objective* knowledge about a text (Thomas calls this ‘scientific’ knowledge). Simple word counts are possible — but without interpretation, they tell us nothing. To the sociologist, language itself, outside of its meaning (or its performance), is rarely interesting. (Of course the linguist would disagree.)

If we believe that language does not have an objective kernel beyond its subjective meaning, then at best, coding can obtain high intersubjective reliability and never pure objectivity. Biernacki does not critique the possibility (or desirability) of intersubjective understanding — in fact, all understanding is intersubjective. He simply points out that coding is not ‘objective,’ and should not be treated as such. Perhaps many sociologists would agree with Bearman and Stovel’s analysis of a Nazi’s memoir; Biernacki does not disabuse them of their right to agree (to use your words, he does not dismiss the possibility of a shared “cultural competence”). However, he does point out that the metric of intersubjective reliability could never reach 100%, or pure objectivity, unless all people interpreted language identically.

    If we see coding as inherently subjective, then we are forced to reevaluate our coding. In essence, it becomes simply another method of interpretation, another method of thinking about the meaning of language — but one that takes itself as somehow more objective. And therein lies the problem. To get all Marx-y about it, coders who think they are being ‘objective’ alienate themselves from the product of their labor, and confront their own reified codes as if they were fact. They then assert objective truths — but it’s always possible for someone to re-interpret the source material (i.e., reinvent the evidence)! Alternatively, when sociologists realize that they are engaging in an act of interpretation, they make more modest claims and don’t fetishize their data as “telling them” what is meaningful.

    Kelan Steel Lowney

    April 22, 2013 at 8:42 pm

5. We can also put it this way: if we allow that, say, “close reading” can add to our understanding of a text, even after its meaning has been determined by coding analysis, then we must also grant (on my view at least) that a close reading can correct the coded analysis. Perhaps the quantitative prevalence of certain words suggests that a text is racist, but a closer examination (an actual reading) determines that this interpretation is the least likely. Contextualization can no doubt have similar corrective effects.

That’s roughly what Biernacki is doing. He’s showing that particular coding analyses have led to the wrong conclusions, about how one “becomes” a Nazi for example. If we want coding to be a harmless activity then it will have to confine itself to saying things like: the n-word occurs 215 times in Huckleberry Finn.

    Consider: suppose we follow Fabio’s line of reasoning: “if I see the n-word, I can say that, on the average, the text is employing racist language.” And suppose we go from this to the conclusion that the text is racist in its meaning. Now, not long ago a cleaned-up version of Huck Finn was published, the n-word replaced with “slave” (it sounds odd, and I’m getting this from Wikipedia, but even as a thought-up example this would work). This text would yield a very different result under coding, but will we really expect the underlying racism of the text that the original coding revealed to have been expunged? Doesn’t it lie much more in the characterization and plotting of the story than in the use and distribution of words?

    Can we turn an anti-semitic or homophobic novel into one that isn’t simply by removing words like “kike” and “fag” (or placing certain other linguistic markers at a safe distance from each other)? Coding analysis leaves me with the impression that we could. Since we obviously can’t (the relevant imagery and morality would obviously remain, just as it remains in Huck Finn … or, as I’m inclined to believe, is revealed to have never been there once the objectionable language is removed) that’s a strike against coding.

    In fact, there’s a disturbing consequence here for linguistic norms. If we buy into the coding approach, we can call people racists and bigots simply based on the words (or combinations of words) that they use, without considering context. And probably “terrorists” or “sympathizers” too. Since almost everything we say, meanwhile, is available for data analysis, it will become necessary to censor oneself according to how one’s words might appear to quantitative, coded analysis. The effects of such anxieties on language aren’t pleasant to think about. (And they are already with us in some form, of course.)

Compare Gladwell’s somewhat hasty celebration of techniques to see whether people are lying by studying their facial expressions through video analysis. Suppose we really began to believe this is possible, that there is a “science” of telling whether people are lying by looking at their faces, and started to judge the veracity of what people said by how their faces performed (or would perform) under the relevant analysis. We’d all have some pretty weird looking faces. (The parallel here, of course, includes the fact that the technique doesn’t actually work.)

The best way to understand a person or a text is to interpret their words charitably. What we’re dealing with here is what John Law once called a “ruthless” semiotic (meaning something different, I’m sure).

    Thomas

    April 23, 2013 at 9:08 am

6. I think Kelan and Thomas are being exceedingly… charitable to Biernacki’s book.

@Kelan: Biernacki goes well beyond complaining about the scientific aura surrounding coding; he claims explicitly that coding is epistemologically false. As I showed in my review on scatterplot, he’s simply wrong about the scientific-aura claim, as in each case he examines the author(s) is/are careful to circumscribe the method and its claims.

@Thomas: I don’t see any justification for “call[ing] people racists and bigots.” While I think Fabio’s example is not the best (since the presence of the n-word is not inherently racist but context-dependent, a point with which any scholar would agree and which can certainly be implemented through coding), it’s one thing to identify patterns, including racism, in texts and another to assign the requisite attitudes to those texts’ authors.

    Finally: the discussion seems to conflate the practice of coding–which is typically done by a human being, or several of them, based on reading a chunk of text and assigning it to one or more category/ies–and the practice of automated word-counting, which is typically done by a computer programmed with a rule-set. These are quite distinct in theory and practice.
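    For concreteness, the automated variety amounts to something like this minimal sketch (Python; the file name and term list are hypothetical, not any study’s actual rule-set):

    ```python
    import re
    from collections import Counter

    # Hypothetical rule-set: the only terms the program is told to count.
    TERMS = {"slave", "master"}

    # Hypothetical plain-text copy of the novel.
    with open("huck_finn.txt", encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())

    counts = Counter(t for t in tokens if t in TERMS)
    print(counts)  # raw frequencies only -- no category judgment, no reading
    ```

    Human coding inserts a judgment step between reading and labeling; the program above has none, which is exactly why the two shouldn’t be conflated.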

    andrewperrin

    April 23, 2013 at 11:59 am

7. Fair point about texts vs. authors, Andrew. But what I worry about is what happens after the approach is popularized. I don’t expect the mass media or the policy makers who fear them to maintain the distinction. If someone determines that my blog is “on the average, employing racist language”, there will be a general sentiment that I’m a racist. And the really work is of course when I’m found to employ “terrorist language”!

I actually agree with much of your review about the tone of the book. So, yes, I’ll grant I’m being a bit generous in my reading. I think it’s unfortunate that the book is vulnerable to your critique, because his argument is on target. As I’ve said elsewhere, I’m working on a few cases of my own, and I hope I can avoid some of the same problems.

I didn’t find your epistemological argument convincing, though, for reasons that are connected to my response on Mark Whipple’s blog. Biernacki is essentially doing a kind of “social epistemology”: he’s showing what counts as knowing in a particular community, and it’s the epistemic standards of that community that he’s critiquing.

    That said, I’ve still got a lot of reading to do to verify the accuracy of his particular arguments/readings. And your review is of course going to guide me there. Thanks.

    Thomas

    April 23, 2013 at 12:46 pm

  8. Oopps: “really work” = “real worry”

    Thomas

    April 23, 2013 at 12:48 pm

  9. @Thomas, thanks for the link to Whipple’s blog, I hadn’t seen your response there.

    I agree, of course, that he’s critiquing the “epistemic standards” of this community. What I disagree with is the accuracy of that critique! If the three studies are subject to specific critiques about their use of coding, it follows that there is a better way of using coding–and that, therefore, coding is not epistemologically bankrupt. If, on the other hand, the problem is that coding is epistemologically bankrupt because chunks of text can’t be understood in any way outside their specific textual and cultural contexts (an absurd claim, but one that Biernacki advances), then the replications are pointless because the problem is foundational, not practical.

    andrewperrin

    April 23, 2013 at 1:05 pm

  10. @Andrew: I sort of thought it was a strength of the book that he does both. If he’d just said that coding can’t capture meaning in principle and left it at that, I don’t think he would have ruffled any feathers. He decided to carry it through and show what happens in practice when you try to do something that’s wrongheaded in principle. He’s more or less convinced me that the results of the coding stand in an arbitrary relation to the meaning that is imputed to the text. Sure, I guess that doesn’t prove that it has to be that way, but it does mean that if coding is ever going to work it’ll have to be done very differently than it is today.

    Finally, I really don’t think Biernacki’s argument hinges on the claim that “chunks of text can’t be understood in any way outside their specific textual and cultural contexts”. It’s that “their” that is wrong. All he is saying is that meaning depends on context, i.e., the context in which a text is read, and the coding analysis detaches those chunks from context as such, assigning a meaning to them anyway. It gives the impression that a text can mean something even when no one is reading it.

    Thomas

    April 23, 2013 at 1:23 pm

11. @Thomas: with all due respect, I don’t think that “all he is saying” is that meaning depends on context. To wit:

    “[coding] demands that we fancifully expunge from view the multiple and idiosyncratic communicative projects inside of which each text feature was originally configured to function” (p. 24, emphasis added)
    “Dramatic omission and suggestive possibility in narratives stimulate our engagement as readers but render consistent coding nearly impossible.” (p. 37)
“…the autobiography as a verbal whole is a structure of paradigmatic themes, which meaningfully configure all the way down how incidents come to feature in the story as facts.” (p. 49, contrasting to scientific technique’s “assemblage of sentence clauses”)
    “Since outside texts brought into relation with texts inside a sample generate fresh meanings overall, a ‘scientifically’ insulated selection knows not of what it speaks.” (p. 57)
“for any noteworthy purpose in cultural appreciation of textual evidence, the concept of the sample is unreservedly inapplicable.” (p. 136)
    “meanings in action remain tied to concrete prototypes.” (p. 146)

    These all substantiate the much stronger claim that texts’ true meaning is the context in which they were written, and that the meaning of chunks of text is always about the context in which it is set (“concrete”) and “originally configured to function.”

    andrewperrin

    April 23, 2013 at 6:04 pm

  12. Again, it may be that I am being too charitable, but I don’t think Biernacki claims that “the multiple and idiosyncratic communicative projects inside of which each text feature was originally configured to function” determine the “true meaning” of those features (leaving aside for the moment what we mean by “features” and “chunks”). He’s saying that the fact that a feature was originally configured within multiple and idiosyncratic projects undermines the effort to establish “the (one, true) meaning” of a text (as a “fact”). Some of these projects (and this may be my assumption) are of course ongoing and some are not part of the original context at all. Also, our access to some of the projects, which affect the meaning of the features, is of course limited or entirely absent today (even in the case of contemporary projects).

    I read Biernacki as a reminder that there are “known unknowns” in our interpretation of texts. The ritual of coding, however, lets us proceed as though they are not relevant, i.e., in ignorance of them. Biernacki wants us to keep our interpretations open to recontextualization, and this is precisely what scholarship occasions. Coding (and this is very much my experience talking) protects an analysis from this kind of criticism, which is “merely” based on actually reading the same source text and pointing out an alternative, and perhaps more natural meaning.

    Thomas

    April 24, 2013 at 6:27 am

  13. @Thomas, thank you for your thoughtful discussion. I think we’ll have to leave these two alternative interpretations to stand in parallel. I agree that the case you make would be a reasonable and appropriate case. I just don’t think it’s the case that Biernacki makes in this book. But readers can draw their own conclusions.

    andrewperrin

    April 24, 2013 at 11:29 am

  14. Likewise, Andrew. I’m going to be talking about this at a conference on Friday. So this has been good prep. All the best!

    Thomas

    April 24, 2013 at 2:00 pm

  15. […] question and these possible answers remain front and center, as a sociology web debate about coding, textual data, and the Biernacki book Reinventing […]

16. I apologize for the late reply. I’m happy that Mr. Perrin joined the discussion, if only because I also disagreed with his interpretation of Mr. Biernacki’s book on his own blog (referenced by Mr. Rojas). Since many of Perrin’s arguments came up again here, please allow me to attempt to refute them.

    As to your first comment, above, the existence of the “scientific aura” surrounding coding practices is not separate from the epistemological issue. Biernacki’s point is that the quantification process hides the act of interpretation, making it nearly impossible to replicate or refute, and turning acts of coding into tautologies. It does not matter, as you claim on your blog, that the authors “circumscribe the method,” or are demure about whether their research is “authoritative.” It doesn’t matter whether they sneak ‘weasel words’ into their truth claims (you say that Bearman only claims that his article “can provide insight”). The issue at hand is the validity of the claim that coding makes hidden evidence visible and subject to quantification. But this quantifiable “evidence” is only ever the residue of an act of prior interpretation, stripped of any contextual meaning and presented as a ‘fact.’

    Second, and relatedly, is the point of “textual wholism.” Far from being his preferred method, textual wholism is altogether at odds with the hermeneutic epistemological claims made by Biernacki. Textual wholism claims that the “actual” meaning of a piece of language can be discerned from its context in an identifiable whole. Biernacki’s hermeneutic approach provides no such privileged contextual foundation, as the “whole” is forever expanding to include new situations and audiences. Each new reader “possesses the capacity of beginning something anew,” in the words of Arendt. Or as Biernacki explicitly writes, “We all know that novel questions and contexts elicit fresh meanings from sources, which is enough to intimate that meaning is neither an encapsulated thing to be found nor a constructed fact of the matter” (p. 131). To use Thomas’ example, any interpretation of language in ‘Huckleberry Finn’ would be different in 1884 England, 1885 Kentucky, or 2013 California.

But interpretation does not end at personal, cultural or historical situatedness. The rest of a text from which a fragment has been removed (the “verbal whole” of page 49) can and should matter to the interpreter. As can the “communicative project” (p. 24), otherwise known as the “intent of the author.” But to discern both the ‘wholeness’ of the text (the sentence? the oeuvre?) and the intent of the author still requires acts of interpretation. Interpretation necessarily stands upon shifting sands – contingent foundations.

    When Biernacki invokes the ‘whole,’ he does not advocate a textual wholism so much as he shows the epistemic flaws in textual isolation. Just as there can never be a ‘whole’ in which a piece of language can be universally and ahistorically situated, there can also never be a meaningful, isolated piece of language. To claim that the n-word in Huck Finn has a specific (objective) meaning outside of its context is just as epistemologically bankrupt as to claim that the n-word can, in fact, be totally (objectively, factually) understood within a ‘proper’ context!

Instead, Biernacki argues for the middle ground of interpretive understanding, which acknowledges that interpretation requires context, even if contexts always change (i.e., even if there is no such thing as a ‘textual whole’). This is exactly Biernacki’s project (as I stated in my original comment): the substitution of an honest inter-subjectivity for a false linguistic objectivity – but Perrin’s (and Rojas’) critique swings from one extreme of objectivity to the other. The irony is that Biernacki specifically relegates “meaning” to the conclaves of contingent “cultural competence” (i.e., inter-subjective interpretation) that Rojas puts forth as an alternative.

Third (fourth in Perrin’s blog post), on the point of ‘problematic interpretive choices’: Perrin claims that any critique of coding implies a better type of coding, which implies that coding is epistemologically tenable. This is reminiscent of Weber’s account of bureaucracy: the solution to any errant bureaucracy is always better bureaucracy. The solution to any errant coding procedure is always better coding. This is a logical trap.

Biernacki ably shows how the practice of coding results in debatable codes. Not because there are better, “true” coding practices, but because all codes are exercises in interpretation and are, thus, debatable (when we are aware that they have occurred!). The book actually goes out of its way to choose three texts that had been lauded as examples of quality content analysis (Evans’ book won several awards, Bearman runs an influential research center, etc.), to show that even the best attempts at coding are flawed. Only after Biernacki has meticulously dissected them does Perrin have the luxury of saying, “Well, he only picked flawed studies. Let him try it with better research.” Logically, Perrin would only be satisfied once Biernacki showed the debatable choices made in every act of coding, ever. But the whole point of using examples instead of making a dry philosophical argument is to show, directly, what happens when we take our interpretations as factual. The solution to errant coding is transparency – the ability of others to challenge our interpretations – and, relatedly, reflexivity – awareness that our interpretations are debatable and not “factual.”

Fourth (second in Perrin’s blog post), the problem of transparency is not that researchers sometimes stash their codebooks in attics. The problem is that, “were all coding sheets in hand, referents of the constructed ‘data’ might remain imperceptible” (p. 127). Again, Biernacki is making an epistemological argument about the nature of texts as facts, not using an N=3 to complain about squirrel condos. The imperceptibility of interpretation is not even an epistemological problem until the codes are taken as “evidence” separate from the act of interpretation.

    Fifth … well, let’s stop there for now.

    Kelan Steel Lowney

    May 8, 2013 at 10:32 pm

