It is rather obvious that scholars have almost no incentive to replicate or verify others' work. Even the guys who busted LaCour will get little for their efforts aside from a few pats on the back. But what is less often noted is that editors have little incentive to issue corrections or to publish replications and commentaries:
- Editing a journal is a huge workload. Imagine an additional steady stream of replication notes that need to be refereed.
- Replication notes will never get cited like the original, so they drag down your journal’s impact factor.
- Replication studies, except in cases of fraud (e.g., the LaCour case), will rarely change the minds of people who have already read the original. For example, the Bendor, Moe, and Shotts APSR replication essentially pointed out that a key claim of garbage can theory is wrong, yet the garbage can model still gets piles of cites.
- Processing replication notes creates more conflict that editors need to manage.
It’s sad that correcting the record and verifying results receive so little reward. It’s a very anti-intellectual situation. Still, I think there are some good alternatives. One possible model is that folks interested in replication simply archive their work on arXiv, SSRN, or other venues. Very important replications can be published as formal articles in venues like PLOS ONE or Sociological Science. Thus, there can be a record of which studies hold water and which don’t, without demanding that journals spend time as arbiters between replicators and the original authors.
Some people might want to hand-wave the problem away or jump to the conclusion that science is broken. There’s a more intuitive explanation – science is “brittle.” That is, once you get past some basic and important findings, you get to findings that are small in magnitude, require many technical assumptions, or rely on very specific laboratory or data-collection conditions.
There should be two responses. First, editors should reject submissions that seem to depend on “local conditions” or very small effects, or send them to lower-tier journals. Second, other researchers should feel free to try to replicate research. This is appropriate work for early-career academics who need to learn how research is done. Of course, people who publish in top journals or obtain famous results should expect replication requests.
A late response to the very interesting conversation.
Pat, Mike, and I are currently writing a new chapter on the institutional logics perspective (ILP) for a revised Handbook of Organizational Institutionalism. Elizabeth’s post is a timely prompt for us to clarify some points.
(1) In our 2012 book we envisioned ILP as a scientific research program, one that builds on, but differs from, the scientific research program of neoinstitutional theory. The introductory chapter draws on Berger and Zelditch (1993), Stanford theoreticians who were influenced by Lakatos’s goal of understanding scientific progress through research programs.
(2) Our ontological claim is that institutional logics are real phenomena. Institutional logics are real the same way bureaucracy is real and culture is real. We understand that not all users of ILP follow a realist ontology.
(3) The ideal types are not the institutional logics per se but an analytical representation of the logics. This is an area where we have seen much misunderstanding of what we intended, and perhaps we were not clear. The societal ideal types in our 2012 book provide an ideal-typical model of societal-level logics drawn from a reading of canonical texts such as Weber’s Economy and Society and contemporary organization theory. They are meant to be an example, not the only possible model. The cells are intended to vary significantly with specific research questions and specific instantiations. Other forms of representing and measuring logics besides ideal types are both possible and desirable, for example, vocabularies of practice.
(4) As noted throughout our work, institutional logics are historically contingent. Institutional orders and their corresponding logics evolve and change over time. There is very little research examining historical change in logics within an institutional order; most compares across institutional orders, though such research is starting to emerge.
(5) ILP as a scientific research program seeks to uncover mechanisms that explain the origin, translation, transformation, and consequences of logics. Cumulative knowledge about institutional logics is based on the accumulation of knowledge about the underlying mechanisms, not primarily on whether the same instantiations of logics are observed across contexts. One broad empirical finding is that market logics have greatly increased in instantiations across contexts (Thornton, Ocasio, and Lounsbury, 2015, Emerging Trends in the Social Sciences, Wiley). But as Elizabeth’s work suggests, this instantiation is not an isomorphic diffusion; it is subject to translation across contexts and hybridization with other logics.
(6) Institutional logics are cross-level phenomena and can be observed at different levels of analysis: societal, field, and organizational. There are top-down, bottom-up and horizontal mechanisms by which institutional logics at different levels affect one another. More research is needed on these cross-level effects and on the differences and similarities in logics across different levels.
(7) The meta-theory presented in the 2012 book outlined some mechanisms to explain the cross-level dynamics of institutional logics. The perspective differs from a purely structural or purely agentic meta-theory. ILP is not a closed-form theory, although closed-form theories have been, and will continue to be, developed from the meta-theory.
(8) ILP has been quite generative of research hypotheses, propositions, and theory so far, and over time the generativity of the research program will prove its continued usefulness and scientific progress, or not (Berger and Zelditch, 1993). Time will tell as to the generality of the theory and its conceptual mechanisms. Scope conditions may develop around the mechanisms by which logics become instantiated and have effects: generalizable bits of theory (mechanisms) that hold across space and time subject to scope conditions. Individual mechanisms might end up becoming components of a broader category of mechanisms, à la Tilly.
Willie, Pat, and Mike
Every October when the Nobel prize in economics is announced, you hear the same trite, hackneyed things. Already, the Guardian has run one of those tedious “economics is not a science” articles just to prepare for tomorrow. To help you save time, I’ve collected the following cliches so you can just copy and paste them into your tweets, Facebook messages, and blog posts:
- Economics is not a science.
- Actually, there is no Nobel Prize in economics.
- The so-called Economics Nobel prize.
- This prize refutes the policies of [insert politician you hate].
- This prize supports the policies of [insert politician you love].
- This prize is long overdue.
- This prize rewards [my favorite field].
- This prize rewards free-market fundamentalists.
- This prize proves that free-market fundamentalists are wrong.
- This person did not deserve the prize.
- This person deserved the prize.
- This is a rather mathematical/statistical prize for a technical point that I can’t summarize here.
- This prize is for proving the obvious.
- I predicted this all along.
- I am completely surprised by this.
- I can’t believe they gave this to a non-economist.
- I can’t believe they gave this to a person not from [circle one: Harvard/MIT].
- Harvard is slipping, straight to the toilet.
- Steve Levitt does/does not know the work of these prize winners.
Actually, I have a Granovetter post ready to go if he ever wins, since he is the sociologist whose work is best known in economics. Add your own cliches in the comments.
Last week I was finishing up a volume introduction and it prompted me to catch up on the last couple of years of the institutional logics literature. This gave me some thoughts, and now I can’t sleep, so I’m putting them out there. This is long so it’s broken into three parts. The first two reflect my personal saga with institutional logics. They set up the rest, but you can also skip to the third and final section for the punchline.
From the econ blogs this week, a paper by Chang and Li about whether you can replicate econ papers. Take a guess at what the answer is before you read the abstract:
Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say “Usually Not”, by Andrew C. Chang and Phillip Li, Finance and Economics Discussion Series 2015-083. Washington: Board of Governors of the Federal Reserve System.

Abstract: We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.
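The percentages in the abstract follow directly from the raw counts it reports. A quick sketch that recomputes them (the counts and labels are taken from the abstract above; the variable names are my own):

```python
# Counts reported in the Chang & Li abstract: (obtained or replicated, total).
counts = {
    "files obtained, journals requiring them": (29, 35),
    "files obtained, journals not requiring them": (11, 26),
    "replicated without contacting authors": (22, 67),
    "replicated with authors' help": (29, 59),
}

def pct(got, total):
    """Rate as a whole-number percentage, as quoted in the abstract."""
    return round(100 * got / total)

# The 59-paper denominator excludes 6 confidential-data papers
# and 2 papers using software the authors did not possess.
assert 67 - 6 - 2 == 59

for label, (got, total) in counts.items():
    print(f"{label}: {got}/{total} = {pct(got, total)}%")
```

Running this reproduces the abstract's 83%, 42%, 33%, and 49% figures, which makes the headline claim concrete: even with author assistance, fewer than half of the papers could be replicated.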
Houston, we have a problem.