from prices to prizes: competitions

In his Theory of Valuation (1939), John Dewey makes the following observation:

“[W]hen attention is confined to the usage of the verb ‘to value’, we find that common speech exhibits a double usage. For a glance at the dictionary will show that in ordinary speech the words ‘valuing’ and ‘valuation’ are verbally employed to designate both prizing, in the sense of holding precious, dear (and various other nearly equivalent activities, like honoring, regarding highly) and appraising in the sense of putting a value upon, assigning value to. This is an activity of rating, an act that involves comparison, as is explicit, for example, in appraisals in money terms of goods and services.”

Dewey thus points out that, in addition to market pricing, valuation can also occur through prizing – suggesting that valuation in modern economies can take place not only through market competition but also through organized (or semi-organized) competitions.  While awaiting Wendy Espeland’s new book on ratings and rankings, I briefly discuss how tests and contests interplay with ratings and rankings, and then point to some promising new research on the sociology of competitions.

A ranking, an ordinal list, can result from tests or from contests. Start with contests, and take first the forms in which competitors play against each other. The score in such a contest indicates which player (or team) performed better (earned more runs or goals, ran faster, jumped higher) against another or others on a given day. The score of a soccer match is the result of a direct, head-to-head competition.  And the aggregation of these scores (in, for example, win-loss records) results in rankings – whether of a soccer league, of all the professional tennis players, or of all the Grand Master chess players in the world.  Note that in such contests there are referees and timekeepers but not judges. Technology contests (e.g., solving the problem of determining longitude on the open sea, building the fastest computer, the lightest airplane, robotic cars on ever-more challenging terrain) operate according to similar principles.
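The aggregation of head-to-head results into a ranking can be sketched in a few lines. This is a minimal illustration with invented teams and match outcomes, not a description of any real league's rules (which typically also weigh draws, points, and tie-breakers):

```python
# Hypothetical sketch: a ranking derived from head-to-head contests.
# Each match result records a (winner, loser) pair; no judges, only
# referees and timekeepers, so the scores speak for themselves.
match_results = [
    ("team_a", "team_b"),
    ("team_a", "team_c"),
    ("team_c", "team_b"),
]

# Aggregate the win-loss record: count each team's wins.
wins = {}
for winner, loser in match_results:
    wins[winner] = wins.get(winner, 0) + 1
    wins.setdefault(loser, 0)  # losers still appear in the table

# The ranking is simply the teams ordered by wins, most first.
table = sorted(wins, key=wins.get, reverse=True)
print(table)  # ['team_a', 'team_c', 'team_b']
```

The point of the sketch is that no evaluative criteria enter anywhere: the ordinal list falls out of the aggregated outcomes alone.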

But there is another kind of scoring in contests where judges are involved.  Contestants do compete with each other in a given event at a given time.  But the scores, from which rankings are derived, indicate the degree of conformity to some set of relatively standardized criteria for evaluating performance.  Think of Olympic sports such as gymnastics, with their indices of “technical” and (contested) “artistic” scoring.  In a sense, these are contests organized around more or less simultaneous tests.  In principle, judges are not supposed to be ranking the performers directly but, instead, should be rating them according to how well they pass the set of tested performance criteria.  Thus, in contests organized around tests, rankings result from ratings.  These can be the averaged scores of several judges (e.g., various Olympic sports), the aggregations of scores across multiple judges (e.g., cumulative grade point averages), or an aggregation or index of the scores of a single judge across several evaluation criteria (e.g., rankings by critics in technology fields such as software, or think of FICO scores and bond ratings).
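The contrast with the first kind of contest can also be made concrete: here the ranking is derived from ratings. Below is a minimal sketch with invented performers and scores, loosely modeled on averaging several judges' scores (real Olympic scoring systems involve further rules, such as dropping extreme scores):

```python
# Hypothetical sketch: a ranking derived from ratings.
# Judges rate each performer against test criteria; the ranking
# is a by-product of the averaged ratings. All scores are invented.
scores = {
    "performer_a": [9.2, 8.8, 9.0],  # one score per judge
    "performer_b": [8.5, 9.1, 8.7],
    "performer_c": [9.4, 9.3, 9.5],
}

# Rate: average each performer's scores across the judges.
ratings = {name: sum(s) / len(s) for name, s in scores.items()}

# Rank: order performers by averaged rating, highest first.
ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # ['performer_c', 'performer_a', 'performer_b']
```

Unlike the win-loss table, every number here encodes a judgment of conformity to criteria; the ordinal list is downstream of those judgments.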

Contests in grant and fellowship competitions (e.g., most consequentially in terms of budget expenditures, the grants competitions at the National Science Foundation) frequently mix scoring à la tests with head-to-head agonistic competition (see Michèle Lamont).  In such a mixed system, judges, juries, or “scientific review panels” use scoring procedures (“rate this candidate”) to produce a “short list” of finalist competitors, frequently available at the outset of their face-to-face meeting.  Jurors typically concur that this is merely a “provisional” or “rough” ranking.  The subsequent head-to-head competition directly comparing finalist proposals frequently overturns the initially scored “rankings.”  It is telling that panelists often refer to this moment of agonistic competition as “agonizing” work.

The mixed character of grants and fellowship competitions also points to an important feature of certain types of competitions:  the selection criteria guiding the judges are not given at the outset but emerge during the jury’s deliberations.  Such is frequently the case in architectural competitions, as Kristian Kreiner demonstrates in a series of exemplary studies.  At first glance, the evaluative principles governing the jurors’ decision seem to be fixed at the outset: they are established in the “program,” the brief specifying the problems that the architectural design must solve.  But the various features of the client’s desiderata are frequently contradictory:  not all can be optimized or even harmoniously satisficed.  Indeed, as Kreiner shows, the greater the elaboration of multiple performance criteria, the more likely the winning entry will ignore the program, with aesthetic principles trumping other evaluative principles in the jury’s decision.

More importantly, Kreiner examines, in detail, the processes and practices whereby jurors (and hence clients) use the entries to learn more about the actual problems that can be solved and the operative principles for assessing a successful performance. What seems to be a case of analytic problem solving turns out to be a situation of interpretation (see Lester and Piore). Architectural competitions are an example of Dewey’s pragmatist approach, through which we discover our principles of evaluation in the activity of valuation. They are a social technology for exploratory search – for when we don’t know what we are looking for but can recognize it when we find it.

Written by dstark

July 15, 2010 at 12:58 pm

