stuff that doesn’t replicate

Here’s the list (so far):

Some people might want to hand-wave the problem away or jump to the conclusion that science is broken. There’s a more intuitive explanation – science is “brittle.” That is, once you get past some basic and important findings, you get to findings that have small effect sizes, require many technical assumptions, or rely on very specific laboratory or data-collection conditions.
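To make the “small effect sizes” point concrete, here is a quick back-of-the-envelope simulation (my own illustration; the effect size d = 0.2 and n = 50 per group are assumed for the example, not taken from any particular study). With a true but small effect, a standard two-group comparison is badly underpowered, so an original study and a same-sized replication rarely both come out significant:

```python
import numpy as np

# Minimal sketch with assumed numbers: why small effects are "brittle."
# True effect d = 0.2 standard deviations, two groups of n = 50 each.
# Simulate an original study and a same-sized replication, and count
# how often both reach p < .05 (two-sided).

rng = np.random.default_rng(0)
d, n, trials = 0.2, 50, 10_000

def significant(rng, d, n):
    a = rng.normal(0.0, 1.0, n)   # control group
    b = rng.normal(d, 1.0, n)     # treatment group with true effect d
    t = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    return abs(t) > 1.98          # approx. critical t for df ~ 98

orig = np.array([significant(rng, d, n) for _ in range(trials)])
rep = np.array([significant(rng, d, n) for _ in range(trials)])

print(f"one study detects the effect:       {orig.mean():.2f}")          # ~0.17
print(f"original AND replication detect it: {(orig & rep).mean():.2f}")  # ~0.03
```

Under these assumptions a single study has roughly a one-in-six chance of “finding” a perfectly real effect, and a successful original-plus-replication pair happens only about 3% of the time. That is one way to read “brittle”: the finding can be true and still fail to replicate.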

There should be two responses. First, editors should reject submissions that might depend on “local conditions” or report very small effects, or send them to lower-tier journals. Second, other researchers should feel free to try to replicate research. This is appropriate work for early-career academics who need to learn how research is done. Of course, people who publish in top journals, or obtain famous results, should expect replication requests.

Written by fabiorojas

October 13, 2015 at 12:01 am

7 Responses

  1. Lack of generalizability of findings across studies may reflect the complexity of biological and social reality: population diversity and the strength of contextual, social, and cultural forces.

    In organization theory, practically no findings are truly generalizable. Perhaps the S-curved diffusion of innovations comes close, but even here there are exceptions.

    The world is complex. In the search for local parsimony, complexity is often ignored.

    wocasio

    October 13, 2015 at 12:29 am

  2. It’s not science that is broken or brittle; it’s the funding mechanisms, egos, and big-pharma hedge-fund ROI expectations. Editors at any publishing tier should reject submissions that have not been replicated by an independent lab.

    Dan W

    October 13, 2015 at 12:33 am

  3. @Dan W: Great idea – who is paying? Because out here in the real world, there’s no money to get the study done the first time, never mind paying someone else to do it.

    Mikaila

    October 13, 2015 at 1:41 am

  4. @Mikaila – if the study findings are reported to have a significant positive or negative impact on human risk factors (e.g., health, safety, financial viability), then government subsidy should pay for the independent validation and verification. Based on fabiorojas’ findings, it sounds like, out in the real world, researchers may be rationalizing an unexpectedly high volume of availability bias into their frequently unreplicable study results.

    Dan W

    October 13, 2015 at 1:42 pm

  5. I can imagine a system in which NSF/NIH/DOJ/etc. grants build in payments for such replication – but what are people who do not have federal funding to do? This is a serious question: I agree this is a major problem, but remember that publication pressures have grown immensely for faculty and researchers in all sectors of higher ed. Unless there is a systematic procedure for ensuring that replication is paid for by the sectors that retain slightly deeper pockets, the rest of us will still find that we need to publish without an avenue for replication.

    Mikaila

    October 13, 2015 at 1:55 pm

  6. Good points, Mikaila. Perhaps this is where fabiorojas’ top-journal/famous-results recommendation is the best compromise?

    Dan W

    October 13, 2015 at 2:12 pm

  7. Fabio:

    I like your post. But I disagree with commenters who suggest that certain things shouldn’t get published. I think it’s better to recognize that everything will get published, and to move away from the idea that publication, whether in arXiv/PLOS ONE or Science/Nature/ASR, confers some badge of correctness.

    Andrew Gelman

    October 18, 2015 at 9:41 pm

