orgtheory.net

making fun of identification nuts

I get tired, really tired, when I hear another “experiments are the gold standard” or “that’s just observational data.” I’m not against experiments, but, like all methods, they have important strengths and weaknesses. So it was with great delight that I saw the following satire in the British Medical Journal:

Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

Objectives To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.

Design Systematic review of randomised controlled trials.

Data sources Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists.

Study selection Studies showing the effects of using a parachute during free fall.

Main outcome measure Death or major trauma, defined as an injury severity score > 15.

Results We were unable to identify any randomised controlled trials of parachute intervention.

Conclusions As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

Have at it.


Written by fabiorojas

July 19, 2013 at 12:21 am

Posted in fabio, mere empirics

10 Responses


  1. I am curious about the weaknesses you perceive with experiments. There are obvious ones: the inability to randomize many domains of social life (whether for cost or ethical reasons), the lack of realism (the face validity problem), not to mention skeptical journal reviewers (who might be unfamiliar with experiments or suspicious of them), etc. In my own experience conducting survey experiments, I have also noticed that the kinds of questions one asks become narrower. It is also difficult to notice and incorporate new information from the ground up, since the researcher creates every detail of the research design. In contrast, interviews or ethnographies are great at showing you unexpected processes one had not considered.

    curious

    July 19, 2013 at 2:59 am

  2. To stay on the fun side, the footnotes are also worth reading:
    Footnotes
    Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right.

    Bernard Forgues

    July 19, 2013 at 7:31 am

  3. Peter Bearman cited this in his comments (2009, here) on the anti-vaccine mentality, science, and common sense. The issue becomes even more relevant with the apotheosis of Jenny McCarthy.

    Jay Livingston

    July 19, 2013 at 3:14 pm

  4. curious: I think you summarized it well: experimental conditions aren’t the same as real-world conditions; narrowness of questions; many processes can’t be replicated in lab settings; biases in the selection of experimental subjects; etc.

    fabiorojas

    July 19, 2013 at 6:14 pm

  5. Where experiments DO matter and ARE especially appropriate is in applied arenas like education, medicine, welfare, or criminal justice policy, where you do things to people in the belief that those things produce desired outcomes. Any situation that involves some kind of “treatment” is a situation that IS, in its inherent logic, an experimental situation. Thus, paradoxically, the statement “experimental conditions are not the same as real world conditions” is one of the most wrong statements that sociologists typically make about experiments.

    olderwoman

    July 19, 2013 at 8:37 pm

  6. @o.w.: When I wrote that, I had in mind experiments that make broad claims using samples of mainly White 22-year-old college students. I stand by that.

    But I do concede your broader point about the importance of experiments for policy. However, very few policy experiments are conducted in ways that make me confident that they really have clear implications for the real world. For example, how many policy experiments stand up to the standards of the RAND health insurance experiment? Precious few. Many experiments are conducted in ways that make me doubt the treatment would work the same way in everyday life.

    Finally, let me note that this cautious attitude toward experiments is not limited to social science. There is now an entire field of biomedical research called “translational science,” which investigates why experimental results don’t carry over to clinical settings. While we trumpet an uncritical view of experiments in social science, other scientists know that a pill administered in a university hospital, where patients follow a strict compliance regime, simply ain’t the same thing as grandma taking a generic pill from CVS. It just isn’t.

    fabiorojas

    July 19, 2013 at 8:44 pm

  7. Fabio: How is anything you said any different for any other method of research or analysis used in sociology?

    Your opening sentence is complaining about the sample, not the method, by the way.

    olderwoman

    July 19, 2013 at 9:51 pm

  8. @o.w.:

    Exactly! Experiments are not flawless and final arbiters of science! This may be obvious to you, but it’s not to the identification nuts in poli sci and econ (and many physical sciences).

    P.S. Method and sample are connected for experiments. Many experiments require that people take time out to go to a lab, which creates an incentive to use convenient samples like college students. Aside from survey-based experiments, how many experiments actually get a truly random sample of the target population? Not many.

    fabiorojas

    July 20, 2013 at 3:41 am

  9. I realize this is methodological baby talk we are exchanging, but I actually think the ill-informed prejudice against experiments is a very big problem in sociology (a problem I see your flip post and other remarks as contributing to), even as I acknowledge the problems with ill-informed adulation of experiments. My point is that it is very important to distinguish the distinct elements of a project and to understand how different methodological imperatives are met by each. External validity, the ability to generalize from a sample to a larger population by way of random sampling, is only ONE criterion for good research. The external validity issue involves not only a “random sample of the target population” but also the definition of the target population (the sampling frame) and the problem of non-response, which has become so huge in real-life research (response rates are now around 25% if you are lucky) that “random” survey samples are really volunteer samples.

    Experiments are not meant to maximize external validity; they are meant to maximize internal validity. The crux of an experiment is a manipulated independent variable and the use of a control group procedure, with randomized assignment to treatments being the “gold standard.” Of course you cannot do an experiment if your independent variable of interest is not manipulable; that is why sociologists don’t use experiments much: we are mostly studying the consequences of demographic or institutional factors we cannot manipulate. This tradition also considers the matter of blind and double-blind assessment to minimize observer bias. Both of these concepts are central and (in my opinion) too stupidly thrown out by people who complain that experiments are “not realistic.” Experimental design also considers the problem of demand effects (also a problem in interviewing and survey methods) and has a tradition of thinking seriously about extraneous variables that might affect causal attributions. This tradition has long engaged the question of whether a finding that emerged in one setting can be replicated in another.

    As I said before, the whole “not realistic” argument is irrelevant to the large number of field experiments. Instead, the problem in field experiments is to gain adequate control over alternative sources of variation (to increase internal validity) and to trade that control off against external generalization. Sociologists need to actually understand experimental methods, not just reject them out of prejudice, so they can think carefully and logically about the basis for their own causal attributions in any research project. (For a toy illustration of this internal validity logic, see the sketch after the comments below.)

    olderwoman

    July 20, 2013 at 3:18 pm

  10. What olderwoman said.
    I have found that even if you don’t conduct experiments, adopting an “experimenter” perspective (i.e., being cognizant of sources of variance threatening internal validity) is very instructive. It helps you assess where problems in causal inference may arise. For those interested, the book by Shadish, Cook, and Campbell (2002), Experimental and Quasi-Experimental Designs for Generalized Causal Inference, has been a great resource for me.
    As for the issue of external validity, I guess the extent to which it is problematic depends in part on what you are studying. Psychologists, who use experiments a lot, usually don’t want to recreate the real world, only the impression of a real situation, which they (hopefully) check for. Using only undergrads as samples is still problematic, of course, although psych journals are decreasingly tolerant of this practice, as far as I know. Conducting experiments in sociology (or related fields) is probably more complicated: since the variables of interest lie not inside people’s heads but in the social situation, there is a much stronger trade-off between internal and external validity.

    HB

    July 22, 2013 at 7:57 am
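
A postscript to make olderwoman’s point in comment 9 concrete: below is a minimal Python sketch. It is an editorial illustration, not anything from the thread, and the effect size, the strength of confounding, and every name in it are illustrative assumptions. It draws one population, then compares a confounded observational “study,” where units with a higher unobserved confounder are more likely to end up treated, against a randomized experiment, where a coin flip assigns treatment. The naive difference in means is biased in the first case and recovers the true effect in the second; that is what internal validity buys, and it says nothing about external validity.

```python
# Editorial sketch (not from the thread): a toy simulation of the claim
# that randomized assignment buys internal validity. The effect size,
# confounding strength, and all names below are illustrative assumptions.
import random

random.seed(42)
N = 100_000
TRUE_EFFECT = 2.0  # assumed true causal effect of the treatment

# Each unit carries an unobserved confounder that raises both the chance
# of ending up treated and the outcome itself.
confounder = [random.gauss(0, 1) for _ in range(N)]

def outcome(treated, c):
    """Outcome depends on treatment, the confounder, and noise."""
    return TRUE_EFFECT * treated + 1.5 * c + random.gauss(0, 1)

# Observational "study": units with a higher confounder value are more
# likely to take the treatment, so treatment status is not exogenous.
obs_treated = [int(c + random.gauss(0, 1) > 0) for c in confounder]
obs_y = [outcome(t, c) for t, c in zip(obs_treated, confounder)]

# Experiment: a coin flip assigns treatment, severing the link between
# the confounder and treatment status.
exp_treated = [random.randint(0, 1) for _ in range(N)]
exp_y = [outcome(t, c) for t, c in zip(exp_treated, confounder)]

def diff_in_means(ys, ts):
    """Naive treatment-effect estimate: mean(treated) - mean(control)."""
    treated = [y for y, t in zip(ys, ts) if t == 1]
    control = [y for y, t in zip(ys, ts) if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"observational estimate: {diff_in_means(obs_y, obs_treated):.2f}")  # biased upward
print(f"randomized estimate:    {diff_in_means(exp_y, exp_treated):.2f}")  # close to 2.0
```

Under these assumptions the observational estimate prints well above the true effect of 2.0 while the randomized estimate lands close to it; whether either number would generalize beyond these simulated units is the separate external validity question the thread raises.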

