Archive for the ‘political science’ Category
That’s the name of an article in the New Yorker that explores the work of my good friend, political scientist Brendan Nyhan. The essence is pretty simple: people don’t change their beliefs if doing so somehow challenges their identity:
Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from a Centers for Disease Control and Prevention case report about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focussed on a lack of evidence connecting vaccines and autism—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it would. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without. That effect held even when no additional information was presented—that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Read the whole thing.
Last week, I argued that academics face poor incentives. We are rewarded for solving hard problems, but rarely for solving simple but important ones. On Twitter, Eric Crampton suggested that my argument could be seen as a vote for think tanks as policy vehicles:
There’s a simple logic here: policy is the whole point of think tanks. In practice, there would probably be a bias in favor of simple solutions, since voters and politicians would have a tough time understanding complex ones.
Still, I don’t see most think tanks as immune from perverse incentives. Rather, they have a different audience, which imposes its own incentives. For example, an Atlantic article chronicles the decline of the Heritage Foundation as the primary source of high-quality conservative policy work. The story is straightforward: the need for funding made it hard to resist the Tea Party. Heritage flipped on so many issues, from health care to immigration, that it’s hard to recognize it as the same organization.
Academia has the perverse incentive of rewarding people for technical skill at the expense of real-world importance. The think tank world has a different problem: these organizations depend on fickle donors. So yes, simple is good, until the winds change.
At Overcoming Bias, Robin Hanson observes that his fellow economists don’t always focus on the policies that enjoy broad consensus and are easy to understand and implement. He uses the example of road pricing:
Heavy traffic is a problem every economist in the world knows how to solve: price road access, and charge high prices during rush hour. With technologies like E-ZPass and mobile apps, it’s easier than ever. That we don’t pick this low-hanging fruit is a pretty serious indictment of public policy. If we can’t address what is literally a principles-level textbook example of a negative spillover with a fairly easy fix, what hope do we have for effective public policy on other margins?
I agree. Think about status in economics: what sorts of work get you the rewards? For a while, it was really, really hard math. Also macroeconomics, which is a notoriously hard field. Recently, insanely clever identification work. What do these have in common? They are hard. In contrast, how many John Bates Clark Medals or Nobel Prizes have been awarded for simple, high-impact work like road pricing? Nearly zero is my guess.
The same is true in sociology. Sociologists often imagine themselves coming up with marvelous approaches to solving deeply rooted social inequalities. For example, a few months ago, we discussed research on gender inequality and how it might be explained, in part, by the relative over- or under-confidence of men and women. In other words, it might be that women are overly cautious about putting themselves forward for promotion.
One simple solution would be to require all eligible people to apply for promotion (e.g., require that all associate profs apply for full professorship after a few years). It is a simple rule, and it would almost certainly help. The response in the comments? The solution doesn’t remedy gender prejudice. Well, of course not, but that wasn’t the point. The point was to fix a specific issue: the underrepresentation of women in applicant pools. I have no idea how to eliminate bias against women, but I can make sure they are considered for promotion more often, and that’s easy!
Bottom line: Social scientists have their priorities reversed. They get rewarded for trying to solve insanely hard problems, while leaving a lot of simple problems alone. That’s leaving cash on the table.