Archive for the ‘psychology’ Category
Longtime readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity. That is, they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe these earlier studies were flawed, or based on limited samples, or better research methods provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive not to make). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
OK, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of letter of recommendation research in education in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
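To make that .40 figure concrete, here is a minimal simulation (my own illustration with made-up numbers, not data from any of the cited studies) of two letter writers whose ratings of the same applicants share only a weak common signal:

```python
import numpy as np

# Illustrative simulation: two raters score the same 1,000 applicants.
# Each rating mixes a common signal (the applicant's actual standing)
# with a larger dose of rater-specific noise. With signal variance 0.4
# and noise variance 0.6, the expected interrater correlation is
# 0.4 / (0.4 + 0.6) = .40, the figure reported in the literature.
rng = np.random.default_rng(0)
n = 1000
true_quality = rng.normal(size=n)
rater_a = np.sqrt(0.4) * true_quality + np.sqrt(0.6) * rng.normal(size=n)
rater_b = np.sqrt(0.4) * true_quality + np.sqrt(0.6) * rng.normal(size=n)

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"simulated interrater correlation: {r:.2f}")  # ~0.40
```

At r = .40, most of what any single rater says is idiosyncratic noise rather than shared signal, which is exactly the reliability problem the quote describes.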
That’s the name of an article in the New Yorker that explores the work of my good friend, political scientist Brendan Nyhan. The essence is pretty simple: people don’t change their beliefs if doing so would somehow challenge their identity:
Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from a Centers for Disease Control and Prevention pamphlet about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focussed on a lack of evidence connecting vaccines and autism—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it would. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without. That effect held even when no additional information was presented—that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Read the whole thing.
Psych experiments show that we tend to overvalue objects that we possess – in the classic coffee mug experiments, people given a mug demanded more money to sell it than other participants were willing to pay for the identical mug. What happens when the object is a non-human family member?
When negotiating the sale of their home, one Australian family was willing to give up their cat Tiffany to the new homeowners for $140,000 (about $120K in US dollars). Some readers of the article announcing this exchange felt their pets were priceless, while others pointed out that cats are territorial and may not tolerate moves.
Don’t expect some cats to reciprocate your affectionate feelings – according to one medical examiner, cats will consume your lips and other edibles should you expire in your home. Sweet dreams, kitty owners.
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
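For the curious about the mechanics: the paper classified a post as emotional based on whether it contained words from the LIWC emotion word lists. Here is a toy sketch of that kind of word-count classifier, with hypothetical word lists standing in for the actual LIWC dictionaries:

```python
# Toy word-count sentiment classifier in the spirit of the LIWC-based
# approach described in the PNAS paper. These word lists are hypothetical
# stand-ins; the real LIWC dictionaries are proprietary and far larger.
POSITIVE = {"happy", "love", "great", "wonderful", "excited"}
NEGATIVE = {"sad", "angry", "awful", "terrible", "lonely"}

def classify_post(text: str) -> str:
    """Label a post by the first emotion word list it matches."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

print(classify_post("Feeling great about this wonderful day!"))  # positive
print(classify_post("So lonely tonight."))                       # negative
```

Matching single words this way is a crude proxy for emotional state, which is part of the methods complaint mentioned below.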
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.
One of the awesome aspects of grad school (besides the occasional “free” pizza as you listen to the latest in research) is the sharing of resources among colleagues who are undergoing the same experiences. One grad school friend gave out copies of David Burns’s seminal Feeling Good, an exercise book that explains how to practice cognitive behavioral therapy (CBT).
Back in the fall, the Stanford alumni magazine had an article about how Burns became convinced of CBT’s efficacy over prescription drugs as a tool for treating depression, anxiety, perfectionism, and other paralyzing feelings:
What Burns did in Feeling Good, the first mass-market, evidence-based, self-help book for the relief of depression, was explain the tenets of cognitive behavioral therapy (CBT) for the lay person: that depression is caused by self-defeating beliefs and negative thoughts—thoughts like “I’m not good enough,” “I’ll never amount to anything,” or “I have no friends.” Feeling Good included exercises readers could use to change how they reacted to such thoughts and to stop depression before it spiraled down into an endless abyss of despair and pain. Study after study has since demonstrated CBT’s effectiveness.
Burns did not invent CBT; its philosophical underpinnings can be traced back to the Buddha or to Epictetus, the Stoic. Credit for laying the foundation of modern CBT generally goes to Philadelphia psychiatrist Aaron T. Beck and the late New York psychologist Albert Ellis. Burns remembers when he, like most psychiatrists, didn’t believe that something as simple as how we think could cause depression.
Working at the University of Pennsylvania’s Depression Research Unit in the 1970s, Burns researched the theory that low serotonin levels cause depression, an idea widely accepted as the “chemical imbalance theory” and conventional wisdom among popular media, many physicians and much of the public. Although Burns won the A. E. Bennett award from the Society of Biological Psychiatry in 1975 for his research on brain serotonin metabolism, he was not convinced that the chemical imbalance theory was valid. In one study, he and his colleagues gave massive daily doses of the amino acid l-tryptophan to depressed veterans in a double-blind study. L-tryptophan goes directly from the stomach to the blood to the brain, where it is transformed into serotonin. If depression results from a deficiency of brain serotonin, the massive increase should have triggered clinical improvement, but it didn’t.
The study was published in a top research journal but did little to dim the growing excitement about the chemical imbalance theory. In 1988, Lilly launched the world’s first blockbuster SSRI antidepressant, a drug with powerful effects on brain serotonin receptors. During its first 13 years, Prozac generated $21 billion in sales, or 30 percent of Lilly’s revenues. Burns still wasn’t convinced.
“I always wanted to see people’s lives transformed from depression and anxiety to joy and peace,” he says. In his clinical work, he didn’t see that happening very often, no matter how many pills he prescribed. His department chair suggested that he sit in on one of Dr. Aaron Beck’s weekly cognitive therapy seminars.
At first, Burns thought Beck’s presentation sounded like “pure hucksterism”; still, he began using CBT methods if only to prove to himself that they didn’t work. Soon, many patients he’d been treating with drugs and “you talk, I’ll listen” therapy started to get better. A lot better.
Burns felt torn. He had just won a five-year grant to develop a brain serotonin lab at Penn. Yet he wasn’t convinced serotonin played a role in depression or any other psychiatric disorder. After three agonizing months, Burns decided he’d “rather spend my life doing something that works.” He left Penn and opened a private practice “in a storeroom with a window,” two stories below Beck’s Center for Cognitive Therapy.
Burns’s doubts were vindicated by a landmark 2002 metastudy conducted by psychologist Irving Kirsch, now at Harvard, of all trials submitted to the FDA by the manufacturers of the six most widely prescribed antidepressants approved between 1987 and 1999. Not widely publicized until a 60 Minutes report in February 2012, it showed only a slight difference in patient response between the drugs and placebos.
Ezra Klein interviews Kevin Roose, who has a new book about young Ivy League graduates who work on Wall Street. The take-home point is simple: people who graduate from competitive schools gravitate toward these jobs not because they love business, but because they want security. Wall Street jobs are highly paid, require little experience, and carry a bit of prestige. On the origins of the short-term Wall Street job:
Wall Street invented this new way of recruiting in the early 80s. Before that they hired like any other industry. If you wanted to be a banker you applied for a job at a bank and they hired you or they didn’t. But in the early 80s Goldman Sachs and others figured out they could broaden their net and get lots of really smart people if they made it a temporary position rather than a permanent one.
So they created the two-and-out program. The idea is you’re there for two years and then you move on to something else. That let them attract not just hardcore econ majors but people majoring in other subjects who had a passing interest in finance and didn’t know what else to do. People now think going to a bank for two years will help prepare them for the next thing and keep them from having to make these hard decisions about the rest of their life. It made it like an extension of college. And it was genius. It led to this huge explosion in recruitment and something like a third of Ivy League graduates going to Wall Street.
Of course, it’s a mixed bag for the grads:
EK: So after writing this book, what would you say to a college senior thinking of going to Wall Street?
KR: First I would ask them why they wanted to work in an investment bank. If the answer is “because I’m tremendously in debt and need to pay it off” or “I’ve been reading Barron’s since I was 12 years old and I desperately want to be an investment banker” then those are legitimate reasons. Go ahead. But if it’s just about taking risk off the table and doing the safe prestigious thing, I’d tell them first that it will make them truly miserable, the kind of miserable it could take years to recover from, and that it also no longer has that imprimatur. It can actually hinder you. I’ve spoken to tech recruiters who say they only hire bankers in their first year or two because after that banking ruins them.
EK: How does it ruin them?
KR: It makes them too risk conscious. It gets them used to a standard of lifestyle they may not be able to replicate in any other industry. And it has a deleterious effect on creativity. Of the eight people I followed, a few came out very damaged by the experience. And not in a way a vacation can cure. It’s not about having bags under your eyes. It destroys your ability to think in creative ways about what it means to build something of value. The people I followed would admit they got a lot out of being a banker but I don’t think they’re all that tuned into the ways the experience changed them.
Check it out.
This guest post on the politics of sociology is written by Chris Martin, a doctoral student in sociology at Emory University.
Conservatism doesn’t seem to be a unipolar thing, according to much of the social psychological research on political attitudes. Rather, you can be conservative by being high in either social dominance orientation (SDO) or right-wing authoritarianism (RWA). Of course, the two dimensions are moderately correlated, but they’re not the same thing: high-SDO people dislike socially subordinate groups, and high-RWA people dislike socially deviant (or unconventional) groups. As a centrist, however, I’ve found that there’s a lack of research on the opposite poles of these scales, even though there clearly seems to be a subset of liberals who like socially subordinate groups and a subset who like socially deviant groups. Again, there’s considerable overlap between these two subsets. And there’s a small subset of libertarian liberals who don’t lean toward either pole.
This comes across in social psychological work on religious freedom. Early research showed that high-RWA people are more supportive of mandatory Christian prayer than mandatory Muslim prayer, while low-RWA people oppose both types of prayer equally. However, if you change “mandatory” to “voluntary,” you find that low-RWA people no longer treat the two types equally. Rather, they more strongly favor Muslim school prayer than Christian school prayer.
To some degree, I’ve found that sociology has become so ideologically homogeneous that it’s now the disciplinary norm to avoid using “inequality” to describe preferential treatment of subordinate or deviant groups. In the race domain, in fact, centrists can get accused of supporting colorblind ideology or denying White privilege, even if they have a well-reasoned critique of preferential treatment. And in the gender/sexuality domain, the norm is for 50% of the research to focus on people who are deviant by conventional standards. But this skewness of focus isn’t termed inequality. My point isn’t about race or gender, though, but the larger issue of whether there’s a place for centrists in sociology—people who neither valorize nor condemn subordinate and deviant groups. Psychological social scientists have begun to address this issue—see Jonathan Haidt and Lee Jussim in particular—focusing on how this political homogeneity harms science. Where does sociology stand?