the rhetorics of numbers

As an extension of my work on rankings, I’ve been thinking lately about how different actors understand or interpret the same numbers differently.  This is still a little vague (ok, I’m being modest here—it’s a lot vague), but I thought this might be a good place to get some early feedback, especially in terms of some organizational applications.

A central and consistent finding of work on public evaluation is that once measures are released, they are very much unleashed: they are interpreted in unpredictable ways, they are used for a wide variety of political purposes, and they have the potential to generate as many unintended as intended consequences.  Most of the attention given to this disjuncture between how numbers are designed and how they are used has focused on the back end of this process—how numbers, once created, are used and misused. We know far less, however, about how those who make numbers, especially those who do so professionally, think and talk about these numbers as they produce them.  A better understanding of the front end of the process—this “culture of number production”—would, I think, provide some fresh insights into why some measures stray so far from the purposes for which they were designed  (although I’m sure it won’t prevent them from straying in the future).

It would then be interesting (at least to me) to compare the views of producers to how various users of these numbers think and talk about them. This might be best done with a very public evaluation such as the SAT, comparing how the people at ETS talk about the meaning of these numbers to the views of student test-takers (possibly both before they take the test and when they are applying to colleges or deciding not to), the parents of these test-takers, admissions officers at colleges, etc. An important part of this would be trying to see if and how it mattered that some people talked about these numbers in terms of standard deviations and confidence intervals, others in terms of “smart” or “dumb,” and still others in terms of “college material.” But such an approach could also be an informative way to look at any third-party evaluation of organizations: What was the measure intended to capture, and how was it designed to do so? How do those who are evaluated interpret the results? How do these interpretations differ, if at all, from those of interested constituents?

Finally, and a bit tangentially, it would be interesting to study how the presentation of numbers and statistics affects how people react to them. For example, one of the ways in which the medical field is combating the spread of deadly but largely preventable infections within hospitals is by framing the statistics in a different way so that doctors and nurses take them more seriously. Knowing that 10 out of 1000 patients contract staph infections may not stir medical personnel to change their behavior, but discussing the case histories of the 15 people who have died in your hospital, along with the specific accidents or acts of negligence that led to their deaths, might be more likely to motivate people to wash their hands more or be more careful with those darned catheters. It seems possible, then, that the disjuncture between the creation and use of evaluative measurements could be narrowed through a better understanding of how people respond to variations in how these measurements are “packaged.”
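To make the framing contrast concrete, here is a minimal sketch in Python (my own illustration; the infection rate comes from the example above, while the hospital’s annual patient volume is an assumed number) of how the same underlying statistic can be packaged either as an abstract proportion or as a concrete count of people:

    # A minimal sketch of "packaging" the same statistic two ways.
    # The annual patient volume below is a hypothetical assumption.

    infection_rate = 10 / 1000   # 10 staph infections per 1,000 patients (from the example above)
    annual_patients = 12_000     # assumed yearly patient volume for one hospital

    expected_cases = infection_rate * annual_patients

    # Abstract framing: a small-sounding proportion.
    print(f"Framing A: {infection_rate:.1%} of patients contract a staph infection.")

    # Concrete framing: a count of actual people in this particular hospital.
    print(f"Framing B: about {expected_cases:.0f} patients here will contract one this year.")

Whether the second framing actually changes behavior is, of course, exactly the kind of empirical question raised above.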

Written by Michael

July 10, 2008 at 5:38 am

Posted in uncategorized

2 Responses

  1. Interesting post. I agree that it would be interesting to compare the way numbers are treated across organizational contexts. In some organizations the interpretation of numbers relating to quality/value measurements is much less ambiguous. The more institutionalized a system of value is, as it is in the world of corporate finance, the more likely it is that insiders and the audience will agree on the meaning of performance measures (of course, this only relates to financial value; rankings of corporations along other dimensions such as corporate responsibility are much more ambiguous and require more interpretation). The less accepted a performance metric is, the more divergence you’ll see in the way insiders and audiences interpret that metric.

    brayden

    July 10, 2008 at 1:20 pm

  2. I think that the roll-out of different NCLB testing schemes across different states — and even different schools/school systems within states — would be another fascinating application of this. I have heard so many conflicting views on the tests, divided partly along political lines, but not entirely. For example, parents of children with special needs value the required testing and the mandates ensuring that their children are served by schools that, without those mandates, easily ignored this population of students. On the other hand, parents of college-track students think that the testing dumbs down education and is an unnecessary burden on their children. And this is just parents. Teachers, school administrators, test administrators, politicians, and national lobbying groups (e.g., the for-profit education companies that would benefit from failing schools by getting grants to run them) all have different evaluations. Additionally, the different types of programs administered in different states could provide some very intriguing comparisons.

    mike3550

    July 11, 2008 at 12:32 pm

