
on transparency

The word “transparency” implies a clear lens through which an object is seen more easily or in more detail. Much like those advertisements for X-Ray Spex* that once populated comic books (right next to the ads for 10,000 paper army men), tools of transparency claim to allow outside audiences to look right through the opaque barriers of objects/organizations and see exactly what is going on underneath/inside.

Proponents argue, or at least suggest, that rankings and other measures produce transparency: they provide a lens through which everyone can see what is going on inside schools more clearly. The implication is that if schools know that their activities are exposed, they will be more likely to act efficiently or effectively. These proponents—many of whom are economists—seem to be drawing an analogy to recent tactics, such as the Toxic Release Inventory (see Regulation Through Revelation by James Hamilton), designed to improve corporate activities. This inventory, originally aimed at forcing companies to disclose their polluting activities to the government, turned into a device the public could use to hold organizations accountable, through shaming and negative publicity, for the pollution they produced. This public measure, then, lifted the veil, increased transparency, and provided outsiders with a tool to alter the behavior of corporations in new ways. Everyone wins.

The analogy, however, fails in two important ways when applied to rankings, and the insights here may reflect back in informative ways on measures like the TRI. First, the rankings are far from clear lenses through which the true nature of school activity or quality is exposed. To be fair, neither are they funhouse mirrors—although they do make some similar distortions by magnifying the quality of, say, the top 10 schools (the gigantic head in the mirror) and reducing the apparent merits of those ranked lower (the pencil-thin waist). They are probably most accurately characterized as a kind of prism that captures the general shape of the legal education field fairly well but that also misrepresents the details. The problem with mistaking a prism for a lens is that schools adapt their behavior so as to excel according to the rankers’ definition of law school quality; the prism becomes the reality. While we are generally pretty pleased when corporations pollute less because of public measures, altering educational standards according to the criteria invented by a magazine is less appetizing.

One reason why—and the second way in which the analogy fails—has to do with the type of transparency provided by these two different measures; this reason also speaks to important differences between ratings and rankings. The TRI is a rating and, at root, is most concerned with informing the government and the public which organizations exceed a particular threshold in the amount of pollution they produce; transparency works better here because the measure only sets limits, and general ones at that, that organizations are discouraged from crossing. Other types of ratings may have more categories, but they also remain most concerned with thresholds and are not zero-sum games.

Rankings, however, create a much different kind of representation because they attempt to make very fine and nuanced distinctions within the field.  They are not only trying to define, for example, whether or not a school is “of good quality”—something, by the way, that accrediting bodies do quite well—they are also trying to say precisely (way too precisely, many argue) that school X is better than school Y. These fine distinctions create even greater pressures to adjust behavior to the measures because schools are fighting hard for each specific ordinal improvement rather than just making sure they pass a general threshold. And, of course, each ordinal improvement comes at the expense of a competitor’s standing.
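
A minimal sketch may make the threshold-versus-ordinal contrast concrete; the schools, scores, and cutoff below are invented purely for illustration. A rating judges each school independently against a fixed threshold, so in principle everyone can pass; a ranking sorts schools against one another, so one school's gain in position is exactly a competitor's loss.

```python
# Hypothetical schools and scores, invented purely for illustration.
scores = {"Alpha": 71, "Beta": 68, "Gamma": 90, "Delta": 55}

# Rating: each school is compared to a fixed threshold, independently of
# the others. One school improving demotes no one else; all can "meet".
THRESHOLD = 60
rating = {school: ("meets" if score >= THRESHOLD else "fails")
          for school, score in scores.items()}

# Ranking: schools are sorted against one another, so positions are
# zero-sum. If Beta's score rises past Alpha's, Alpha necessarily falls.
ranking = sorted(scores, key=scores.get, reverse=True)

print(rating)   # {'Alpha': 'meets', 'Beta': 'meets', 'Gamma': 'meets', 'Delta': 'fails'}
print(ranking)  # ['Gamma', 'Alpha', 'Beta', 'Delta']
```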

So, not only is the apparent transparency created by these measures actually a distortion, but this distortion also comes to play an important role in defining the field. This is far removed from simply gaining a view into organizational activity.

The more general point I would like to argue, though, is that we should be skeptical whenever transparency is invoked; after all, true transparency is as impossible as some Borgesian map made to 1:1 scale. Such skepticism involves recognizing the rhetorical uses of the word and the motivations behind those uses. Aside from looking (Hawthorne-style) at the effects the measurements themselves are having, it entails paying more attention to which aspects of organizations are made transparent and which are not, and why this might be.

 

*“Oh Bondage, Up Yours!” (gratuitous but necessary X-Ray Spex shout out)

 

Written by Michael

June 18, 2008 at 5:34 am

Posted in uncategorized

4 Responses


  1. “On Exactitude in Science” is one of my favorite Borges pieces. Umberto Eco has an excellent essay where he spends about 20 pages just trying to work out that single paragraph.

    Is there a name for the general problem in this form: a rating system or other quantification (SAT score, GPA, etc.) alters the situation it intends to simplify or measure, thus making the rating less useful for its original intended purpose?


    Dan Hirschman

    June 18, 2008 at 1:19 pm

  2. […] has just posted a thoughtful piece on transparency, looking at the ability of rankings to distort organizational activity that they are […]


  3. Dan:

    Sorry to be so slow to reply–it’s still a little chaotic around here. I love that Borges piece, too. Although there are a lot (too many) names for altering what is measured, I don’t know of a term for the resulting decline in the measure’s usefulness. It would be a good thing to have a word for, since it is reasonable to think that the problem could grow worse over time. Maybe “Measure Degradation.”


    Michael

    June 20, 2008 at 1:37 am

  4. The name for one aspect of “measure degradation” is the “running down” of performance measures. See Rethinking Performance Measurement by Marshall Meyer (Cambridge University Press, 2002).


    C.W. Lee

    September 12, 2008 at 6:33 pm


