Archive for the ‘psychology’ Category
This week, I will spend quite a bit of time discussing a book called The Triumphs of Experience by George Vaillant. I’ve written briefly about the book before, but I didn’t appreciate its magnitude until I assigned it for a class. Roughly speaking, the book follows a cohort of college men from the 1940s to the mid-2000s. Thus, the book tracks people from young adulthood to old age. It’s a powerful book in that it uses enormously rich data to analyze the life course and identify factors that contribute to our well-being. You won’t find many other books with such deep data to address life’s most important questions: What makes us happy? What is the good life?
In this first post, I want to briefly summarize the book and then note a few drawbacks. Later this week, I want to delve into two topics in more detail: alcoholism and parental bonds. To start: the Grant Study of Human Development randomly selected a few hundred male Harvard undergrads for a long-term study of health and the life course. It’s a biased sample, but it’s well suited for studying long life and work (remember, many women became homemakers in that era) while controlling for educational attainment. The strength of this book is its ability to mine rich qualitative data on the life course and then map the associations over decades. The data are rich enough that the author can actually consider alternative hypotheses and build multi-cause explanations.
A few drawbacks: Rhetorically, I thought the book was wordier and longer than it needed to be. Also, I wish the book had a glossary or appendix where one could look up definitions. More importantly, this book will not be convincing to folks who are obsessed with identification. It is very “1960s” in that the researchers collected a lot of data and then channeled their energies into looking at cross-group differences. But still, considering that running an RCT on your family is not possible, and given the importance of the data, I’m willing to forgive. Wednesday: The importance of your family.
There’s been a paper making the rounds, and a few folks have asked me for comments. It is called “Political Diversity Will Improve Social Psychological Science.” It is forthcoming in Behavioral and Brain Sciences and is co-authored by Jose Duarte, Jarett Crawford, Jonathan Haidt, Lee Jussim and Phil Tetlock. Duarte et al. make the following claims:
- Social psychology, like most academic areas, is politically homogeneous.
- Intellectual diversity is a good thing.
- “The underrepresentation of non-liberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination.”
My overall reaction is sympathetic, but critical. In my comments, I will start with evidence that is specific to social psychology, but also comment on the broader issue of professorial partisanship.
The lopsided political slant of academia is to be lamented. Since social scientists study human values and ethical behavior, we definitely lose something if only one side of the argument is represented. I also think that when sociologists move from disagreement to hostility, they do a disservice. All students should feel that it is permissible to disagree with an instructor, and no one wants to be judged on their political views when it comes to graduate school admissions or faculty appointments.
On other points, I am more critical. For example, their coverage of the debate over discrimination is lacking. It is true that many academics exhibit confirmation bias – they are more likely to approve of studies that support their ideological views. That is a logically consistent story for why, say, sociologists might be overwhelmingly liberal: we deal with lots of research that has social implications. But it doesn’t really explain other facts, like the fact that a majority of physical scientists vote Democratic (see page 29 of Gross and Simmons’ book on professors*). How would people possibly know the party preference of mathematicians? It’s not on the CV, and in the eight years I spent in math departments, I never learned the preferences of my fellow students and teachers.
Another point of criticism is that they uncritically accept the evidence from self-reports of willingness to discriminate. They cover a number of studies showing that liberals admit they would discriminate, while others do not. They see this as strong evidence. I do not, because of social desirability bias. The default response is for people to say they would not discriminate. My hypothesis is that overly zealous academic liberals are simply more motivated to admit personal fault, which means they deviate from the socially desirable answer at much, much higher rates.
One point that is never brought up is that liberal disciplines can become noticeably more conservative. For example, it is my impression that economics was dominated by Keynesians until about 1960 or so; now, many notable conservative and libertarian economists are very prominent. Similarly, there are disciplines with an even Republican/Democrat balance, such as engineering (see page 29 of Gross and Simmons). Those who think discrimination is the smoking gun in this story need to explain why economics has become more conservative over time, how discrimination is supposed to work in apolitical fields like math, and why liberals never conquered other areas. Most of the story about liberal over-representation is about the humanities and social sciences, which do indeed have lopsided tendencies.
Perhaps the point that I always think about is one that Duarte et al. and others always seem to miss. A major finding of Gross’s 2013 book on academic liberals is that there is indeed a self-image problem. And yes, much of it has to do with the fact that conservative students don’t think they will “fit in” with liberal professors. But there is another very strong reason, which Gross covers – money. Academia is a low-paying profession, and conservative students value income more than other students do.
This finding dovetails well with an observation about professions. Liberals tend to dominate in areas that are low paying and focus on issues like education, learning, caregiving, and culture. These include the arts, entertainment, academia, K-12 teaching, nursing, social work, and science. Once you add some high income, conservatives start appearing in large numbers (e.g., the Dem/GOP ratio is way different for doctors vs. nurses; artists vs. art managers; lawyering vs. other humanities-oriented work; physical science vs. engineering).
That summarizes my response to Duarte et al. The basic point is correct – social psychology, and by implication other areas, are politically lopsided, and that’s likely not a good thing. On other points, I think they over-read the evidence. Discrimination surely occurs at times, but when you look at a broad range of evidence, the story gets complicated very quickly.
* Disclosure: I have a chapter in this book on the history of ethnic studies.
Long-time readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity; i.e., they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe those earlier studies were flawed, or based on limited samples, or better research methods provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoR for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive not to say). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
Ok, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of LoR research in education, published in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
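The .40 interrater reliability figure quoted above can be read as an ordinary Pearson correlation between two letter-writers rating the same set of applicants. A minimal sketch in Python, with ratings invented purely for illustration (the numbers are not from any of the cited studies):

```python
# Interrater reliability as the Pearson correlation between two raters'
# scores for the same applicants. All data below is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two raters scoring the same eight applicants on a 1-5 scale.
rater_a = [4, 3, 5, 2, 4, 3, 5, 2]
rater_b = [3, 4, 4, 3, 2, 4, 5, 3]

print(round(pearson(rater_a, rater_b), 2))  # ≈ 0.39 for this made-up data
```

A correlation near .40 means the two raters share only about 16% of their variance (r² ≈ .16) – most of what a letter says reflects the writer, not the applicant, which is the point the Baxter et al. finding makes so vividly.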
That’s the name of an article in the New Yorker that explores the work of my good friend, political scientist Brendan Nyhan. The essence is pretty simple: people don’t change their beliefs if doing so somehow challenges their identity:
Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from the Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focussed on a lack of evidence connecting vaccines and autism—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it would. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without. That effect held even when no additional information was presented—that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Read the whole thing.
Psych experiments show that we tend to overvalue objects that we possess – in the classic coffee mug experiment, owners demand a higher price to part with a mug than non-owners are willing to pay for the very same mug. What happens when the object is a non-human family member?
When negotiating the sale of their home, one Australian family was willing to give up their cat Tiffany to the new homeowners for $140,000 (about $120K in US dollars). Some readers of the article announcing this exchange felt their pets were priceless, while others pointed out that cats are territorial and may not tolerate moves.
Don’t expect some cats to reciprocate your affectionate feelings – according to one medical examiner, cats will consume your lips and other edibles should you expire in your home. Sweet dreams, kitty owners.
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. He also noted that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.