leaps of faith
(I swear I drafted this before Fabio posted about economic imperialism! I guess it’s just economics week here at orgtheory.)
People who make high-quality decisions are wealthier, even after controlling for income and other factors. This is the takeaway from a study in the latest American Economic Review, “Who Is (More) Rational?”
I got kind of obsessed with this paper the other day. It’s interesting in its own right, but the real reason I’m stuck on it is because it illustrates a much larger issue: the gap between the modest discoveries we humans can actually make about ourselves and the much broader conclusions we draw about how the social world works and how we should act in it.
Although there are the standard academic hedges, the clear story of the paper is that some people are more rational than others, and that those people become wealthier, net of other factors. The paper ends with a policy implication:
If differences in decision-making ability are important sources of heterogeneity in economic outcomes, then even quite costly policy changes aimed at ‘soft’ or ‘libertarian’ paternalism may hold substantial promise.
Now, that last bit may have been a throwaway line. I would hate to be judged on whatever thing I threw into the final paragraph of the 18 millionth revision of some paper. And the issues I’m raising here aren’t at all specific to economics. They probably apply to 95% of sociology papers with some kind of policy takeaway. The leap in logic just grabbed me, for some reason, in this particular case.
That all said, what does the paper show? (Feel free to skip down to “Okay. Still with me?” for the takeaway. Also, standard disclaimer, I am not an economist — please let me know if I’m getting something wrong.)
The authors conducted a web-based experiment on a representative sample of Dutch households. Respondents were given a budget line that looked something like the one below. (Presumably without the 45-degree line, and with numbers instead of letters.)
The respondents had to play a game. It’s a bit hard to describe, but imagine that each round involves a coin flip. Before the flip, you have to pick a point on the budget line above. Let’s say that B = 3 on the x-axis, and A = 10 on the y-axis.
You have to pick a point somewhere on that line. Let’s say you pick B: (3, 0). Now the coin is flipped. If the coin turns up heads, you get the number of points on the x-axis: 3. But if the coin turns up tails, you get the number on the y-axis: 0.
Alternatively, you can pick point A: (0, 10). Now, if the coin is heads, you get 0 points. But you get 10 if it turns up tails.
Someone who likes risk would pick point A, where you have a 50% chance of getting the full 10 points. But someone who is risk-averse would pick point C, where you will get the same (lower) return whether the coin turns up heads or tails. Nobody who likes money would pick point B, unless they thought it was a weighted coin. (Points were redeemable for cash at the end of the game.)
The respondents played the game 25 times. Each time, the line was at a different angle – so perhaps B crossed the x-axis at 6, and A crossed the y-axis at 4. But there would always be a tradeoff between the riskier, higher-payout option, and the no-risk, lower-payout option.
Now, if your points accumulated across all the games, you'd presumably pick the risky option, because over 25 games its higher expected payoff would win out. But here's the catch: at the end of the 25 rounds, the computer picks one round at random, and you get paid for the experiment based on your choice in that round.
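(If it helps, here's a toy sketch of the payoff mechanics as I understand them; the function names and the expected-value arithmetic are mine, using the example line with x-intercept 3 and y-intercept 10.)

```python
import random

def payoff(point, heads):
    """One round: heads pays the x-coordinate, tails the y-coordinate."""
    x, y = point
    return x if heads else y

def expected_value(point):
    """Expected payoff of a chosen point under a fair coin."""
    x, y = point
    return 0.5 * x + 0.5 * y

# The example line from the text: x-intercept 3, y-intercept 10.
A = (0, 10)          # risky, high-payout corner
B = (3, 0)           # risky AND low-payout corner
C = (30/13, 30/13)   # where x/3 + y/10 = 1 meets the 45-degree line

def play_session(lines, choose, seed=0):
    """25 rounds, one budget line each. `choose` maps (x_max, y_max) to a
    point on that line. At the end, the computer pays out a single round
    picked at random -- the incentive scheme described above."""
    rng = random.Random(seed)
    payoffs = [payoff(choose(xm, ym), heads=rng.random() < 0.5)
               for xm, ym in lines]
    return rng.choice(payoffs)
```

With a fair coin, the ranking is clear: A pays 5 in expectation, C a guaranteed 30/13 ≈ 2.31, and B only 1.5 in expectation, which is why nobody who likes money picks B.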
Okay. Still with me?
So what was the experiment measuring? It was measuring whether decisions were internally consistent with respect to risk preferences. If you were a risk minimizer, being "rational" in this case meant minimizing risk consistently across games. A rational risk-taker, on the other hand, would always take the high-payoff bet. Or your preferences could lie somewhere in between, so long as you were consistent about it. The independent variable in the study is how consistent your decisions were with the risk preference that your own choices implied.
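One crude way to make that idea operational (this is entirely my own toy construction, not the paper's actual, more sophisticated consistency measure): find the single fixed rule, mixing between "always play it safe" and "always grab the bigger corner," that best explains all of a respondent's choices, and score how much remains unexplained.

```python
def safe_point(xm, ym):
    """Intersection of the budget line x/xm + y/ym = 1 with the 45-degree line."""
    s = xm * ym / (xm + ym)
    return (s, s)

def risky_corner(xm, ym):
    """The corner with the higher maximum payoff (the risk-lover's bet)."""
    return (xm, 0) if xm >= ym else (0, ym)

def inconsistency(lines, choices, grid=101):
    """Toy score: search over a single mixing weight w between 'always the
    big corner' (w=0) and 'always safe' (w=1), and return the smallest
    leftover mean squared distance between predicted and actual choices.
    0 = perfectly consistent with one fixed rule in this family."""
    best = float("inf")
    for i in range(grid):
        w = i / (grid - 1)
        err = 0.0
        for (xm, ym), (cx, cy) in zip(lines, choices):
            sx, sy = safe_point(xm, ym)
            rx, ry = risky_corner(xm, ym)
            # Both anchor points lie on the budget line, so the mix does too.
            px, py = w * sx + (1 - w) * rx, w * sy + (1 - w) * ry
            err += (cx - px) ** 2 + (cy - py) ** 2
        best = min(best, err / len(lines))
    return best
```

A respondent who always picks the safe point, or always the high-payout corner, scores 0; someone who flips between rules from round to round scores higher.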
The study then went on to show that “a standard deviation increase in consistency is associated with 15-19% more household wealth.” This finding held up not only after controlling for income and demographic factors, but also for performance on a simple three-question cognitive test.
One important question is why, exactly, this kind of consistency would be associated with greater wealth. The paper doesn’t have a clear answer for this.*
More interesting to me, though, are the leaps made in interpreting the results. Here, two big things stand out.
One is that the actual finding—that for some fairly mysterious reason, people who make decisions logically consistent with having a particular taste for risk have higher household wealth—is stated in terms that imply a much broader conclusion: more rational people become wealthier.**
The other is the political interpretation. The authors suggest we should make policies that will improve people’s decision-making—that is, structure choices so that people’s decisions are more consistent with their own implied preferences. Of course, doing this in practice would be incredibly messy.
But another political reading, not mutually exclusive with the first, would focus not on decision quality as a cause of wealth, but on (lack of) wealth as a cause of (bad) decision-making: poorer people make worse choices because poverty itself is cognitively taxing.
And there’s another, uglier one as well. To be clear, I don’t at all think the authors intend or mean to imply this. But I can imagine other people, engaging perhaps in a little motivated reasoning, reading the title and abstract and coming away with the following: Who is (more) rational? The wealthy are more rational. The poor are just reaping the results of their own bad decisions.
The point is, though, that even though the research itself is thoughtful, thorough, and interesting, any of these takeaways would be way too big, particularly given the need to define "high-quality decision-making" as consistency with one's own fixed utility function. The experiment can provide only modest bits of information about how the human world works, and even less about what policy can achieve in it. Yet should the paper gain any policy attention, it is these broader inferences that are likely to stick. Indeed, almost all our knowledge about the social world, and our decisions about how to govern it, is built on such modest pieces of information, followed by great leaps of faith.
* It does have some exploration of why this happens. It shows, for example, that rational decision-makers are more likely to own a house, and less likely to keep their money in low-interest savings and checking accounts. But since “rational” can mean a consistent preference for low risk in this case, that’s not completely compelling.
** As Jason Kerwin pointed out to me, the working version of the paper was less focused on the causal claim that better decisions → wealth; it was also a little vaguer on the policy takeaway. Perhaps a reviewer somewhere out there is responsible for the change?