is “public intellectual” oxymoronic?
A guest post by Jerry Davis, the Wilbur K. Pierpont Collegiate Professor of Management at the Ross School of Business at the University of Michigan.
By this point everyone in the academy is familiar with the arguments of Nicholas Kristof and his many, many critics regarding the value of academics writing for the broader public. This weekend provided a crypto-quasi-experiment that illustrated why aiming to do research that is accessible to the public may not be a great use of our time. It also showed how the “open access” model can create bad incentives for social scientists to write articles that are the nutritional equivalent of Cheetos.
Balazs Kovacs and Amanda Sharkey have a really nice article in the March issue of ASQ called “The Paradox of Publicity: How Awards Can Negatively Affect the Evaluation of Quality.” (You can read it here: http://asq.sagepub.com/content/59/1/1.abstract) The paper starts with the intriguing observation that when books win awards, their sales go up but their evaluations go down on average. One can think of lots of reasons why this should not be true, and several reasons why it should, all implying different mechanisms at work. The authors do an extremely sophisticated and meticulous job of figuring out which mechanism was ultimately responsible. (Matched sample of winning and non-winning books on the short list; difference-in-difference regression; model predicting reviewers’ ratings based on their prior reviews; several smart robustness checks; and transparency about the sample to enhance replicability.) As is traditional at ASQ, the authors faced smart and skeptical reviewers who put them through the wringer, and a harsh and generally negative editor (me). This is a really good paper, and you should read it immediately to find out whodunit.
The paper has gotten a fair bit of press, including write-ups in the New York Times and The Guardian (http://www.theguardian.com/books/2014/feb/21/literary-prizes-make-books-less-popular-booker). And what one discovers in the comments section of these write-ups is that (1) there is no reading comprehension test to get on the Internet, and (2) everyone is a methodologist. Wrote one Guardian reader:
The methodology of this research sounds really flawed. Are people who post on Goodreads representative of the general reading public and/or book market? Did they control for other factors when ‘pairing’ books of winners with non-winners? Did they take into account conditioning factors such as cultural bias (UK readers are surely different from US, and so on). How big was their sample? Unless they can answer these questions convincingly, I would say this article is based on fluff.
Actually, answers to some of these questions are in The Guardian’s write-up: the authors had “compared 38,817 reader reviews on GoodReads.com of 32 pairs of books. One book in each pair had won an award, such as the Man Booker prize, or America’s National Book Award. The other had been shortlisted for the same prize in the same year, but had not gone on to win.” And the authors DID answer these questions convincingly, through multiple rounds of rigorous review; that’s why the paper was published in ASQ. The Guardian included a link to the original study, where the budding methodologist could read through tables of difference-in-difference regressions, robustness checks, data appendices, and more. But that would require two clicks of a functioning mouse, and an attention span greater than that of a 12-year-old.
This is a non story based on very iffy research. Like is not compared with like. A positive review in the New York Times is compared with a less complimentary reader review on GoodReads…I’ll wait to fully read the actual research in case it’s been badly reported or incorrectly written up
Evidently this person could not even be troubled to read The Guardian’s brief story, much less the original article, and I’m a bit skeptical that she will “wait to fully read the actual research” (where her detailed knowledge of Heckman selection models might come in handy). After this kind of response, one can understand why academics might prefer to write for colleagues with training and a background in the literature.
Now, on to the “experimental” condition of our crypto-quasi-experiment. The Times reported another study this weekend, this one published in PLoS One (of course), which found that people who walked down a hallway while texting on their phones walked more slowly, in a more stilted fashion, with shorter steps, and in a less straight line than those who were not texting (http://well.blogs.nytimes.com/2014/02/20/the-difficult-balancing-act-of-texting-while-walking/). Shockingly, this study did not attract wannabe methodologists, but a flood of comments about how pedestrians who text are stupid and deserve what they get. Evidently the meticulousness of the research shone through the Times write-up.
One lesson from this weekend is that when it comes to research, the public prefers Cheetos to a healthy salad. A simple bite-sized chunk of topical knowledge goes down easy with the general public. (Recent findings that are frequently downloaded on PLoS One: racist white people love guns; time spent on Facebook makes young adults unhappy; personality and sex influence the words people use; and a tiny cabal of banks controls the global economy.)
A second lesson is that there are great potential downsides to the field embracing open access journals like PLoS One, no matter how enthusiastic Fabio is. Students enjoy seeing their professors cited in the news media, and deans like to see happy students and faculty who “translate their research.” This favors the simple over the meticulous, the insta-publication over work that emerges from engagement with skeptical experts in the field (a.k.a. reviewers). It will not be a good thing if the field starts gravitating toward media-friendly, Cheeto-style work.