the scientific method fallacy

Lieberson and Horwich’s article on implication (profiled yesterday) raised an issue that I think is very important in social research. I call it the “scientific method fallacy.” Here’s how I would explain it:

The scientific method fallacy occurs when a researcher mistakenly assumes that a tool used by physical scientists is the only legitimate way to do research. In other words, they move from “X is a great tool for science” to “X is the only way to do science.”

Examples of the scientific method fallacy: “Experiments are the only way you can really know anything.” “You really don’t have a clear theory unless it is expressed mathematically.”

The underlying philosophical claim is that science is pragmatic. The world is too complex and hard to be captured by any single tool, so we need multiple tools. A formal model, or an ethnographic observation, is a map of the world, not the world itself, which suggests the need for more kinds of maps.

If you actually look at what scientists do, you see that no single method rules, even though some are clearly more popular than others. Instead of saying that “real scientists use X,” one should read science journals. A medical journal might include randomized controlled trials, qualitative case studies, observational data, and even opinion pieces. In engineering, it is common to find reports on prototypes. You can learn a lot from building something, even if the theory isn’t nice and neat. There’s even “grounded theory” in the physical sciences from time to time. For example, when the first particle colliders were invented, physicists had a great time just seeing what new particles were produced and how that might lead to new theory. And of course, you also see lots of formal models and controlled experiments across scientific areas.

The bottom line is that reality is more complicated than suggested by those pushing the scientific method fallacy. Real science is messy and that means that science progresses on multiple fronts. If an experiment can be done to convincingly settle an issue, great. But a lot of times, it’s not possible, or even desirable. The lesson for social scientists is that we should stop listening to those who say “real science is done this way” and instead have the courage to make science’s many tools work for us. And that’s the way real science works.


Written by fabiorojas

November 10, 2010 at 12:11 am

25 Responses


  1. You may like Susan Haack’s *Defending Science—Within Reason*

    She writes (clearly, without jargon) about natural and social science from a classical pragmatist perspective.


    November 10, 2010 at 2:28 am

  2. [...] “The scientific method fallacy” – [...]

  3. Your formulation of the “scientific method fallacy” is rather strong; I think there are very few people who actually endorse the claim that Method X is the *only* way to do (social) science. The more common claim seems to be that “Method X is the *best* way to do science, and to be preferred if available.” As such, it is much less clear that this is actually a fallacy…


    November 10, 2010 at 5:24 pm

  4. Like Rense, I wonder if anyone actually commits the SMF. I would think most people think something like: “I’m good at this method, and it works well in cases I’m interested in, so I use it. Other methods? Sure, they might work, especially for other things. But I don’t know them very well, and certainly wouldn’t claim to be competent.” I wonder if there is an example of someone who says, “This method is the only one real scientists use, so we should use it too, and it alone.”


    November 10, 2010 at 7:45 pm

  5. Thomas and Rense: The SMF sounds silly when you say it explicitly. But there are people who really do believe some version of the SMF. For example, in policy circles, there are people who believe that you can learn nothing about policy unless you have an experimental design.


    November 10, 2010 at 8:04 pm

  6. I’m not very well versed in the policy field, but it sounds absurd (or, yes, silly) to say that you can’t learn anything about policy without experiment. Document analysis. Discourse analysis. Historical methods. Again, I don’t know much about the object (“policy”), but surely there are other legitimate ways in. You can get me to believe that there are people who don’t study policy except by experiment. But not that anyone actually proposes to confine the study of policy to experimental methods. At the very least, give me an example of someone who makes that argument.


    November 10, 2010 at 8:27 pm

  7. I think a similar version of this fallacy is often made by extremely rigid KKVers. To wit, a boutique industry has risen up around critiquing KKV (see Brady & Collier, Ragin, Bennett & George, Gerring, Mahoney, etc.). It comes down to the attitudes that many have about quantitative vs. qualitative research; they pay lip service to the merits of both but, in private, will often espouse an opinion such as “this is really interesting, but you need to increase your N.”


    November 10, 2010 at 8:37 pm

  8. Thomas and Rense: If you want a concrete example, I was listening to the Econ Talk podcast with Russ Roberts. I think it was in a discussion with UCLA prof Ed Leamer about econometrics and macro that he says once or twice that something is “real science” only when you can do the experiment. This “common sense” idea that “controlled experiments = the only way of learning anything for certain” is more common than you might think.


    November 10, 2010 at 8:44 pm

  9. Maybe I’m too tired right now, but I just listened to some of that podcast and isn’t Leamer arguing precisely that there’s more to knowing than experiment? Experiment, he says, is available to some sciences, but most must find other ways. Instead of theory + evidence, we’ve got pattern recognition + storytelling. Maybe he does suggest that “some people” are too hooked on one approach rather than the other. But he doesn’t, I think, say that there’s only one way.


    November 10, 2010 at 9:41 pm

  10. I wrote incorrectly. The host, Russ Roberts, is the one who makes the connection between experiment and science. Leamer has a long history of being a bit more pragmatic.


    November 10, 2010 at 9:43 pm

  11. But Roberts seems like a very sympathetic ear for Leamer’s pragmatism. So, again, Roberts might have a certain image of “science”, but he’s not imposing it on everyone (even in his dreams). If anything, both Leamer and Roberts are burning the same straw man you’re after here. “There are people who believe science offers only one way to knowledge.” We still don’t know who they are. I have some ideas about why we like that straw man, of course; but, in response to your comment before your last one, I think *pragmatism* is more common than *you* think.


    November 10, 2010 at 10:02 pm

  12. True, Roberts is not an identification fundamentalist, but he does seem to have this idea of “Experiment = science” that does appear from time to time. My purpose wasn’t to paint Roberts as an id fanatic, but merely to point out that many people do subscribe to this odd view of science.


    November 10, 2010 at 10:26 pm

  13. But what’s so odd about it? If he isn’t a fanatic about it, it seems perfectly reasonable to say that “science” is essentially grounded in experiment. (On some days, I feel that way too.) As long as I don’t say, “Science (thus defined) is the only legitimate form of inquiry,” I am not saying anything very strange. I’m just defining science quite narrowly.

    I’m reminded of Martin Orans’s view that anthropologists can choose between “scientific” and “humanistic” approaches. He isn’t saying that the latter won’t bring “knowledge”; he’s just asking us to be clear about what we’re doing. Such a view is only objectionable to people who are determined to call their research “science” at the outset.

    Also, I’m assuming that people who say “experiment = science” would grant that observatory-based research (like astronomy) is just as scientific as laboratory-based research (like chemistry). So what we mean by “experiment” is itself open to discussion: if we widen the definition to include all structured “experience”, the view also becomes less odd.


    November 11, 2010 at 5:05 am

  14. I think the Scientific Method Fallacy is very much existent. I am a psychologist by training, and in my field, there are quite a number of people advocating that quantitative (and especially experimental) research is the only way to go.
    One way of endorsing this fallacy is to simply not teach a certain kind of method. In my time as an undergrad and grad student, we had one course on observational methods, and even that was pretty “quantitative”. Nothing about interviews. Instead, about 4-5 mandatory courses on quantitative methods. As I was interested in qualitative methods, I attended courses at the sociology department, only to be told by those guys that intensive narrative interviews are “the only way to go”… Go figure.


    November 11, 2010 at 8:44 am

  15. I think this fallacy and similar lines of reasoning have more to do with how deeply institutionalized neo-Humeanism has become (which is probably a result of most social scientists getting their philosophy from statisticians and literature departments rather than from actual philosophers) than with physics envy. If you really believe that regularities among sequences of events are all there is to the world (or, slightly less forcefully, that they are all we can reliably know about it), then you will probably want to stick to methodologies that help you uncover those regularities and stay away from those that don’t.

    The only way for methodological pluralism to make sense in a scientific context is if you leave Hume behind (as you should) and repaint science as an activity that not only uncovers regularities, but also the stuff (powers, processes, mechanisms, or whatever) that produces those regularities. Alternatively, you could (but shouldn’t) argue that Hume was really right and that science really is all about uncovering regularities, but that because this is generally problematic in the social realm we should be doing something else entirely (Verstehen and what have you) and leave science to the scientists.

    Basically, I think this issue is just an effect of more fundamental (ontological and epistemological) problems that would probably resolve itself if we just abandoned a few well-established (but probably false) assumptions about the way the world works.


    November 11, 2010 at 9:34 am

  16. @Johann: it’s always hard to assess that sort of anecdotal evidence. Who are “those guys” (sociologists doing intense narrative interviews)? And have they ever put their position in writing?

    Obviously, at the curriculum-design level you’d want teaching to reflect the methodological pluralism (or lack thereof) of the field. My hunch is that you misunderstood an engaged teacher of interview methods and another engaged teacher of quantitative methods as saying “This is the only method that brings knowledge” when all they said was: “this method totally rawks!”

    I’m still looking for examples of people who actually make the SMF in writing. I.e., people who in fact claim that knowledge (about a certain class of phenomena) only comes by one route.

    @Mike: the idea that the SMF can be traced back to Hume seems to me vaguely to commit the same kind of fallacy. “There’s a deep, underlying cause of ignorance”. Actually, Fabio already sort of commits that error in positing the SMF in the first place. In actual fact, I think the great majority of errors in research are unrelated to the choice of methodologies and the degree of pluralism in a field. They stem from simple errors in the application of methods.

    It’s not a lack of pragmatism that’s holding us back, but a lack of sheer practical competence. “Pluralism” takes the wrong lesson from Feyerabend. When he said “anything goes” he meant that any method, if carefully and thoughtfully applied to the problem of overcoming our ignorance, can do the trick.

    This error of emphasis can also be seen in what Rorty aptly called “the consequences of pragmatism” (which, I think, greatly exceed his wildest nightmares … certainly Dewey’s). The truth is not *anything* that works, but precisely *what* works.

    Feyerabend, in any case, didn’t mean that it was wrong to zealously pursue a method of choice. In fact, he would be fine with that. As long as you don’t meddle in other people’s epistemologies.

    The tricky question to Fabio here is: are you suggesting greater methodological tolerance, or are you actually just intolerant of people who seriously believe in their own methods for their own purposes? This is not a rhetorical question. It’s a topic worthy of a long conversation.


    November 11, 2010 at 1:51 pm

  17. I don’t think anything goes, as it seems pretty plain to me that some strategies are always going to be intrinsically better (by whichever measure of goodness we want to use) than others at doing whatever it is we want to do. For example, I’m pretty confident that engaging with the scholarly community on a given matter is generally going to produce more desirable results than if you were to ignore all previous work on the topic and go at it alone. So in a sense I am indeed committed to the SMF as Fabio formulated it; there are good scientific practices and there are bad scientific practices.

    But as I tried to explain, I don’t think Fabio’s way of putting it is very helpful. A better way to think of the SMF is to treat it as concerning the relationship between ontology and methodology, because the efficacy of the latter obviously depends on the former. So, if you buy into the regularity view, then (presumably unlike Fabio) I’d say that it makes good sense to employ methodologies that uncover regularities while dismissing those that don’t. But if the regularity view is false, then such a dismissal becomes fallacious. Naturally, this also holds true for any other relationship between a particular ontology and a particular methodology (that is, it would be equally fallacious to e.g. dismiss a methodology because it isn’t good at uncovering causal powers if there are no causal powers to be uncovered).

    So I suppose that when I wrote “the only way for methodological pluralism to make sense in a scientific context is if you leave Hume behind”, I really should have written something closer to “a higher degree of methodological inclusiveness only makes sense in a scientific context if you leave Hume behind”.


    November 11, 2010 at 4:02 pm

  18. I like the idea of locating the problem between ontology and methodology. As long as there’s a good fit there (and both are in themselves sound, of course), you’re going to get knowledge. And it’ll work even if you’re somewhat narrow-minded about methodology.


    November 11, 2010 at 4:17 pm

  19. In reply to @Thomas:
    When I think back about my psychology curriculum, I remember the lack of courses on qualitative methods, as I mentioned. I remember lessons in the philosophy of science, where we learned about Popper, and no others. I remember responses from fellow students to my interest in qualitative methods like “But that’s not scientific!” One of the “fathers” of the institute wrote a book on “General Experimental Psychology” that was treated almost like a Bible. I would have to look it up (if I still have it), but I am pretty certain that other scientific methods are denounced (at least indirectly) in this book. So, in sum, it was not just the enthusiastic statistics professor. After my time there, I was relieved to see that other universities are much more open towards other methodological approaches. I felt a little bit “indoctrinated”. I still think that putting loads of students through this narrow curriculum is irresponsible.
    And about the sociologist whose courses on narrative interviews I took, yes, maybe she would fit into your description of “method enthusiast”. But even she wrote a book called “Interpretative Social Research”, in which she describes her narrative-biographical approach in about 5 chapters, whereas Grounded Theory and Qualitative Content Analysis get one chapter, and in it, she chooses an example to show the inadequacy of Qualitative Content Analysis.
    But I have to add that this all took place in Germany. And it seems to me that in Europe, the methods dispute is and has always been much more fierce than in the US. See for example:


    November 12, 2010 at 9:52 am

  20. Hi Johann, I don’t doubt your experience, just whether it proves that there’s a widespread belief in the SMF. Your philosophy of science curriculum certainly seems inadequate (the field is much more pluralistic than the class you describe). That suggests that the same may be true in your other classes as well. I don’t think it is possible today to publish a paper in psychology that explicitly commits the SMF. It is not the positive part of the belief (“my method is great”) that is fallacious, after all, but the negative one (“all other methods suck”). Now, some people do actually believe that. I don’t doubt it (though, like I say, I wonder how many there actually are). And I also don’t doubt that, if put in charge of a teaching program, they will destroy it with their zealotry.

    I just don’t think their personal views are especially tenable in the professional world of research. (Witness the failure of Jeffrey Pfeffer to win adherents to his very moderate position in the “paradigm wars”.) Unfortunately, the opposite view, namely, “anything goes”, is altogether tenable.


    November 12, 2010 at 11:28 am

  21. My favorite example of a SMF is the testing of medical treatment efficacy. Only large, randomized double blind experiments are accepted. These tests can invalidate treatments that are effective for a small subgroup of people. Granted, individual results are suspect because of placebo effects or results not caused by the treatment being tested, but large studies tend to ignore the fact that individuals can respond very differently to the same treatment. Unless statistical analysis is granular enough to adequately separate people with different responses, a treatment which is valuable for some is discarded as not statistically significant.
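    The dilution effect described here can be sketched in a quick simulation (all numbers hypothetical): a treatment that helps only a small responder subgroup looks nearly ineffective in the aggregate comparison, while a sufficiently granular subgroup comparison reveals a large effect.

```python
import random
import statistics

# Hypothetical setup: 200 patients per arm, 10% are "responders" who
# gain 2.0 outcome units from the treatment; everyone else gains nothing.
random.seed(0)

N = 200
RESPONDER_RATE = 0.1
EFFECT = 2.0
NOISE_SD = 1.0

# Treatment arm: only responders actually benefit.
responder = [random.random() < RESPONDER_RATE for _ in range(N)]
treated = [random.gauss(EFFECT if r else 0.0, NOISE_SD) for r in responder]

# Control arm: no effect for anyone.
control = [random.gauss(0.0, NOISE_SD) for _ in range(N)]

# Aggregate estimate: the ~10% who respond are diluted by the ~90% who
# don't, so the overall mean difference sits near 0.2, not 2.0.
overall_diff = statistics.mean(treated) - statistics.mean(control)

# Subgroup estimate: among responders the effect is large and clear.
responder_outcomes = [x for x, r in zip(treated, responder) if r]
subgroup_diff = statistics.mean(responder_outcomes) - statistics.mean(control)

print(f"overall effect estimate:   {overall_diff:+.2f}")
print(f"responder subgroup effect: {subgroup_diff:+.2f}")
```

    A trial powered to detect the full 2.0-unit effect will often fail to detect the diluted ~0.2-unit aggregate effect, which is the mechanism by which a treatment valuable to some is discarded as not statistically significant.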


    November 16, 2010 at 3:43 am

  22. What you describe is a cognitive bias called the “Focusing Effect.”
    There’s no indictment to be made of the Scientific Method but, instead, of the scientist falling victim to this cognitive bias and not correcting his mistake.


    November 16, 2010 at 4:32 am

  23. [...] the scientific method fallacy [...]

    Sunday links « info what now

    February 13, 2011 at 7:23 am

  24. [...] we saying we should become like economists? Dear Lord, no. As a group, economists have committed the scientific method fallacy. They assume that one really good tool for science accounts for all of science. They have [...]

  25. I am surprised no one has brought up Hayek’s Nobel prize lecture yet. I read it in the first year of my PhD studies – at the time viewing Hayek as a very authoritative writer. I think it has deeply influenced me, working at a technical university as I am.

    You can find the prize lecture, labeled The pretense of knowledge, here:

    Marcus Linder

    March 2, 2011 at 8:31 am

Comments are closed.

