orgtheory.net

questionable hypothesizing

Heads turn whenever accusations are made about academic impropriety, but it is especially provocative when one of the researchers involved in a study makes accusations about his/her own research. A forthcoming article in the Journal of Management Inquiry written by an anonymous senior scholar in organizational behavior does exactly this. The author, who remains anonymous to protect the identity of his/her coauthors, claims that research in organizational behavior routinely violates norms of scientific hypothesis testing.

I want to be clear: I never fudged data, but it did seem like I was fudging the framing of the work, by playing a little fast and loose with the rules of the game—as I thought I understood how the game should be played according to the rules of the scientific method. So, I must admit, it was not unusual for me to discover unforeseen results in the analysis phase, and I often would then create post hoc hypotheses in my mind to describe these unanticipated results. And yes, I would then write up the paper as if these hypotheses were truly a priori. In one way of thinking, the theory espoused (about the proper way of doing research) became the theory-in-practice (about how organizational research actually gets done).

I’m certain that some people reading this will say, “Big deal, people reformulate hypotheses all the time as they figure out what their analysis is telling them.” The author recognizes this is the case and, I believe, relates his/her experience as a warning of how the field’s standards for writing are evolving in detrimental ways. For example, the author says, “there is a commonly understood structure or ‘script’ for an article, and authors ask for trouble when they violate this structure. If you violate the script, you give the impression that you are inexperienced. Sometimes, even the editor requires an author to create false framing.” Sadly, this is true.

All too often reviewers feel that it is their role to tell the authors of a paper how to structure their hypotheses, rewrite them, and explain their analysis results. Some reviewers, especially inexperienced ones, may do this because they feel that they are doing the author(s) a favor – they’re helping make the paper cleaner and more understandable. But the unintended consequence of this highly structured way of writing and presenting results is that it forces authors into a form of mild academic dishonesty in which they do not allow the exploratory part of the analytical process to be transparent.

Some journals have a much stronger ethos about hypothesis testing than others. AJS is looser and allows authors more freedom in this regard. But some social psych journals (like JPSP) have become extremely rigid in wanting to see hypotheses stated a priori and then tested systematically.  I would love to see more journals encourage variability in reporting of results and allow for the possibility that many of our results were, in fact, unexpected. I would love it if more editors chastised reviewers who want to force authors into a highly formulaic style of hypothesizing and writing results. It simply doesn’t reflect how most of us do research.

Perhaps the anonymous author’s tale will ignite a much needed discussion about how we write about social scientific analysis.

Written by brayden king

February 8, 2015 at 3:57 pm

12 Responses

  1. It is fairly interesting that the *H1 then H2 then test* style is nonexistent in economics. The overwhelming majority of papers explicitly state that the model is there to explain the results, or that the model is consistent with the empirical findings, or that it can help to interpret quantitative results. Indeed, it is not uncommon to introduce a model *after* showing results.

    When I first read an article in the strategy/org journals, I was very confused by the Popperesque framing. That reviewers push for it is even more crazy.

    afinetheorem

    February 8, 2015 at 4:58 pm

  2. aft – yes, not all social science disciplines have such norms about stating hypotheses. My sense is that research done in lab settings (i.e., experiments) tends to be more rigid about hypothesis testing, but gradually this norm has filtered throughout the rest of the OB world and has strongly influenced OT and economic sociology as well.

    It would be interesting to track the historical evolution of this norm in the social sciences (like Erin Leahey did with the use of p < .05 significance tests).

    brayden king

    February 8, 2015 at 5:10 pm

  3. That’s an interesting paper, but the author seems unaware of Norbert Kerr’s article “HARKing: Hypothesizing After the Results Are Known,” which came out in the late 1990s. Management scholars aren’t necessarily aware of what’s going on in personality and social psych, so I’m not trying to malign “anonymous,” but just to point out that this is a rediscovery of something that Kerr wrote about.

    The problem in psychology is that the canonical text on professional development, *The Compleat Academic*, contains an article by Daryl Bem that subtly encourages HARKing. It will probably be removed before the next edition comes out (if another edition comes out).

    Chris M

    February 8, 2015 at 6:47 pm

  4. Agreed! I’ve had the experience of an editor insisting that I recast my post hoc analysis of an unanticipated finding as an a priori hypothesis in the final version of a paper. We tried to argue for why it should remain post hoc, but got nowhere. Definitely the opposite of what the quant types say we are supposed to do. And then they say that my qualitative work is ‘less scientific’!

    corporation360

    February 8, 2015 at 7:15 pm

  5. I wonder if some of the problem is that we don’t have a standard form for inductive presentation, which makes these papers harder to read and annoys reviewers and editors. I could imagine a format of a priori hypotheses → analyses → theorization of unanticipated results, i.e., just moving around parts of the current format, that would be clear and useful.

    cwalken

    February 8, 2015 at 7:25 pm

  6. I’ve long felt that papers should be published in two parts, perhaps not even by the same authors. The literature and theory section could stand alone, culminating in a set of hypotheses. Then the author or others could take up the empirical tests in a separate publication.

    Bob Faris

    February 8, 2015 at 7:29 pm

  7. A worthy subject indeed, Brayden! I wonder what scholars who are provoked by the JMI article and your post would find if they examined what is being taught to current graduate students in what is referred to as Research Methodology. Would those students be taught tired, inappropriate ideas like Popperian falsifiability, the hypothetico-deductive model of scientific inquiry, and THE scientific method? That is, is this portion of graduate training stuck in the 1960s, while theory and quant and qual methods have advanced? Walk over to a nearby philosophy department — one with philosophy of science as a concentration area — and show them the JMI piece. Then ask how to approach epistemology and methodology in the sociology of organizations in this century.

    Eventually, perhaps, the behavioral patterns you note among researchers, editors, and reviewers can be altered to permit publications based upon abductive reasoning.

    Randy

    February 8, 2015 at 10:41 pm

  8. Interesting post, and thanks for alerting me to it, Brayden. My view is that the fetish for hypotheses is actually a symptom of a deeper problem, which is the failure to recognize the difference between a theory and a list of hypotheses. I don’t really mind if an hypothesis was written up after peeking at the data, as long as it rests on a theory that is superior to past theories in (a) its explanatory power; (b) its logical coherence; and/or (c) the verisimilitude of its assumptions. The proof of the pudding is not in whether the hypotheses in the paper happen to match the data presented in that paper as much as it is in how valuable that theory is when taken to other data (including those that constitute the reader’s experience).

    The problem with reducing theory to lists of hypotheses (usually sprinkled with citations to authoritative references and perhaps a bit of light gloss on that literature) is that it is generally unclear what the theory is (i.e., what are the underlying assumptions, what logical moves are made to derive theorems from those assumptions), and what is the balance between explanatory power, realism, and parsimony that the theory is striking.

    Another way of making this point is as follows: I never regard a paper as compelling because the hypotheses are supported. What gets me excited is a clear idea that sheds insight on something I see in the world (perhaps shown to me for the first time by the authors) that I couldn’t understand before. Otherwise, I’m uninterested.

    ezrazuckerman

    February 9, 2015 at 2:52 am

  9. Thanks for all of the thought-provoking comments.

    Just to be clear, I’m not against hypothesis testing. I’m against a review process that encourages people to revise hypotheses or make hypotheses up to fit the data. Hypotheses can be useful, especially when they help clarify a theoretical argument or point out what your theoretical priors are. Unfortunately, our field’s use of hypotheses is becoming standardized and is stifling creativity and academic honesty.

    Neal Caren made a super interesting plot of the mentions of the word “hypothesis” in sociology articles published on JSTOR.

    brayden king

    February 9, 2015 at 4:46 pm

  10. I think this wouldn’t be an issue at all if replications were numerous and easy to publish. In that case, what the author describes would be just a peculiar convention for writing up results. However, given the field’s fascination with *new* and *interesting* findings, the practice becomes dangerous. Combined with the use of p < 0.05 as a decision tool, one could conclude that a large share of the results published in major journals are purely spurious findings.
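
    To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (all numbers are illustrative assumptions): suppose every tested effect is truly null, each study tries several specifications, and any specification that clears p < 0.05 gets written up as if it had been hypothesized a priori.

    ```python
    import random

    random.seed(1)

    N_STUDIES = 10_000    # illustrative number of studies, all with truly null effects
    TESTS_PER_STUDY = 10  # illustrative number of specifications tried per study
    ALPHA = 0.05          # nominal false-positive rate of a single test

    # Under the null, each test comes out "significant" with probability ALPHA.
    reporting_spurious = 0
    for _ in range(N_STUDIES):
        if any(random.random() < ALPHA for _ in range(TESTS_PER_STUDY)):
            reporting_spurious += 1  # written up as a supported, "a priori" hypothesis

    print(f"All-null studies reporting a supported hypothesis: {reporting_spurious / N_STUDIES:.1%}")
    # Expected value is 1 - 0.95**10, roughly 40%, despite the nominal 5% threshold.
    ```

    Whether the real numbers are anywhere near these assumptions is an empirical question, but this mechanism is why HARKing plus a fixed significance threshold, in the absence of replication, can fill journals with spurious findings.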

    Ivan Z.

    February 10, 2015 at 2:58 pm

  11. Thanks, Brayden. I’ve been teaching about this in Methods for years: it is completely normal during quantitative analysis to test a number of model specifications, develop hypotheses to interpret unexpected findings, and then present them as if they were a priori hypotheses.

    A lot of the problem, I think, has to do with Research Methods textbooks, which mischaracterize both quantitative and qualitative research by equating the former with a deductive approach and the latter with an inductive approach.

    One common table, which can be found in a variety of publications, associates quantitative research with being “deductive … objective … conclusive … aims at truth,” while denigrating qualitative research as “inductive … subjective … impressionistic … aims at understanding.” The original source seems to be Patton, M.Q. (1990) Qualitative Evaluation and Research Methods. SAGE. An example of this table is here (p 7): http://www.practitionerrenewal.ca/documents/MicrosoftPowerPoint-QualitativeResearchPresentation.pdf

    That is an egregious example, but it is symptomatic of a deep problem with Methods textbooks, which is that they seem to endlessly repeat and reproduce a caricature of the social research process by presenting quantitative research as following a strictly Popperian hypothetico-deductive model and qualitative research as its inductive opposite. In contrast, as Charles Ragin notes in his *Constructing Social Research*, all social research is retroductive, always bringing theory and data to bear on each other to produce findings.

    matt vidal

    February 12, 2015 at 9:33 am

  12. Update: The terrible table that I discussed above cites as its sources Patton (1990) and Chisnall (2001), *Marketing Research*. I’ve now checked both of those sources. Patton does not even discuss quantitative methods and seems to have a good understanding of qualitative data. The table does not appear in Chisnall either, but a two-minute skim turned up a description of qualitative data as “intrinsically subjective” and “impressionistic.”

    In any case, here is the same table in another location: http://cuppacoffeewithme.blogspot.co.uk/2014/06/quantitative-vs-qualitative.html.

    matt vidal

    February 12, 2015 at 5:13 pm

