So the stock market has been freaking out a bit the last couple of weeks. Secular stagnation, Ebola, a five-year bull market—who knows why. Anyway, over the weekend I was listening to someone on NPR explain what the average person should do under such circumstances (answer: hang tight, don’t try to time the market). This reminded me of one of my pet quibbles with financial advice, which I think applies to a lot of social science more generally.
For years, the conventional wisdom around what ordinary folks should do with their money has gone something like this. Save a lot. Put it in tax-favored retirement accounts. Invest it mostly in index funds—the S&P 500 is good. Don’t mess with it. In the long run this should net you a reliable 7% annual return after inflation, which is about the best you’re likely to do.
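For concreteness, here’s what that kind of return implies arithmetically. This is a back-of-the-envelope sketch: the 7% figure comes from the conventional wisdom above, but the $10,000 principal and 30-year horizon are hypothetical numbers picked purely for illustration.

```python
# Sketch: compound growth of a hypothetical lump sum at a steady 7% real return.
# The 7% rate is the conventional-wisdom figure quoted above; the principal
# and horizon are made-up numbers for illustration.

def future_value(principal, annual_return, years):
    """Compound a lump sum at a fixed annual rate for a given number of years."""
    return principal * (1 + annual_return) ** years

balance = future_value(10_000, 0.07, 30)
print(f"${balance:,.0f}")  # → $76,123 — roughly 7.6x the money over 30 years
```

Of course, the whole point of the post is that the steady-7%-forever assumption is exactly the thing being projected from a short past.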
Now, it’s not that I think this is bad advice. In fact, this is pretty much exactly what I do, with some small tweaks.
But it has always struck me how, in news stories and advice columns and talk shows, people talk about how this is a good strategy because it’s worked for SO LONG. For 30 years! Or since 1929! Or since 1900! (Adjust returns accordingly.)
And yes, 30 years, or 85, or 114, are all a long time relative to human life. And we have to make decisions based on the knowledge we’ve got.
But it’s always seemed to me that if what you’re interested in is what will happen over the 30+ years of someone’s earning life (more if you’re not in academia!), you’ve basically got an N of 1 to 4 here. I mean, sure, this may be a reasonable guess, but I don’t think there’s any strong reason to believe that the next 100 years are likely to look very similar to the last 100. Odds are better if you’re just interested in the next 30, but even then, I’m always surprised by how confident the conventional wisdom is that the market’s having always come out ahead over a 25- or 30-year period—going ALL THE WAY BACK TO 1929—is rock-solid evidence that it will do so in the future.
Of course, there are lots of people who don’t believe this, too, as evidenced by what happened to gold prices after the financial crisis. Or by, you know, survivalists.
Anyway, I think this overconfidence in the lessons of the recent past is something we as social scientists tend to be susceptible to. The study that comes most immediately to mind here is the Raj Chetty study on value-added estimates of teachers (paper 1, paper 2, NYT article).
The gist of the argument is that teachers’ value added—their effects on student test scores, net of student characteristics—predicts students’ eventual income at age 28. Now, there’s a lot that could be discussed about this study (latest round of critique, media coverage thereof).
But I just want to point to it—or rather, broader interpretations of it—as illustrating a similar overconfidence in the ability of the past to predict the future.
Here we have a study based on a massive (2.5 million students) dataset over a twenty-year period (1989-2009). Just thinking about the scale of the study and taking its results at face value, it’s hard to imagine how much more certain one could be in social science than at the end of such an endeavor.
And much of the media coverage takes that certainty and projects it into the future (see the NYT article again). If you replace a low value-added teacher with an average one, the classroom’s lifetime earnings will increase by more than $250,000.
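To see where a headline number like that comes from, here’s the back-of-the-envelope arithmetic. Note that the per-student gain and class size below are hypothetical values, chosen so the product lands near the ~$250,000 figure; the papers’ own estimates should be consulted for the real inputs.

```python
# Sketch of the arithmetic behind the headline classroom number.
# Both inputs are illustrative, not the papers' actual estimates.

per_student_gain = 9_000   # hypothetical lifetime-earnings gain per student ($)
class_size = 28            # hypothetical number of students in the classroom

classroom_gain = per_student_gain * class_size
print(f"${classroom_gain:,}")  # → $252,000
```

The arithmetic is trivial; the heavy lifting is in the assumption that a per-student effect estimated on 1989–2009 data carries forward unchanged.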
And yet to make such a leap, you have to be willing to assume so many things about the future will be like the past: not only that incentivizing teachers differently and making tests more important won’t change their predictive effects (which the papers acknowledge), but, just as importantly, that the effects of education on earnings—or, more specifically, of teacher value-added on earnings—will be similar in future 20-year periods to what they were from 1989 to 2009. And that nothing else meaningful about teachers, students, schools, or earnings will evolve over the next 20 years in ways that significantly mess with that relationship.
I think we do this a lot—project into the future based on our understanding of a past that is, really, quite recent. Of course knowledge about the (relatively) recent past still should inform the decisions we make about the future. But rather a lot of modesty is called for when making blanket claims that assume the future is going to look just like the past. Maybe it’s human nature. But I think that modesty is often missing.
In response to Siri’s post about multi-disciplinary work, Peter Levin wrote the following:
For what it’s worth, working in a corporate environment, on big hairy systemic questions like, ‘How can we design an ecosystem for technologies to support precision agriculture over the next 2 decades?’ I work with a psychologist, an engineer, two anthropologists, an MBA/physicist, and a French literature PhD.
It’s a specifically-academia problem.
I agree. But I want to add a few comments. First, the evidence indicates that the problem is worse in the social sciences than in the physical sciences. Social scientists are very territorial, as this article by Lada Adamic & co. shows. Second, this system is reinforced by journal editors and tenure committees. Deans and administrators may sing the praises of interdisciplinary work, but they routinely allow departments to punish faculty who don’t publish within their discipline, and journal editors are happy to let reviewers shoot down articles that use out-of-discipline ideas.
So, yes, interdisciplinary work is important and needed, but until the academy’s system of rewards changes, it will remain the rhetoric of enthusiastic administrators.
The IGM panel of economic experts got some recent buzz because 63% of their experts — 81%, when weighted by confidence — disagree with the Piketty-inspired argument that r > g is driving recent wealth inequality in the U.S.
I always enjoy reading these surveys. The panel includes 50 or so top academic economists, from a variety of subfields and political orientations, and asks them whether they agree or disagree with a policy-relevant economic statement. Respondents answer on a Likert scale, and indicate their degree of certainty as well as their level of agreement. Sometimes they add a short comment.
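For readers curious how confidence weighting can move a number like 63% to 81%, here’s a minimal sketch. The responses and confidence scores below are invented for illustration, and IGM’s actual weighting procedure may differ in its details.

```python
# Sketch of confidence-weighted vote shares, in the spirit of the IGM panel's
# reporting. All responses and confidence levels here are invented.

responses = [
    # (answer, self-reported confidence on a 1-10 scale)
    ("disagree", 9),
    ("disagree", 8),
    ("agree", 3),
    ("agree", 2),
    ("uncertain", 5),
]

def weighted_share(responses, answer):
    """Share of total confidence mass held by one answer."""
    total = sum(conf for _, conf in responses)
    return sum(conf for a, conf in responses if a == answer) / total

raw = sum(1 for a, _ in responses if a == "disagree") / len(responses)
weighted = weighted_share(responses, "disagree")
print(f"raw disagree: {raw:.0%}, weighted: {weighted:.0%}")
# → raw disagree: 40%, weighted: 63%
```

Because the disagree-ers in this toy panel are also the most confident, weighting pushes their share up, which is the same pattern as the 63%-to-81% jump in the actual poll.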
The results usually aren’t incredibly surprising. Not really shocking that 100% of economists agree that
Letting car services such as Uber or Lyft compete with taxi firms on equal footing regarding genuine safety and insurance requirements, but without restrictions on prices or routes, raises consumer welfare.
They’re a little more nervous about selling kidneys (45% favor, but nearly 30% find themselves “uncertain” — the highest proportion for any recent question besides whether ending net neutrality is a good thing). The most interesting ones are those where there’s disagreement (Have the last decade of airline mergers improved things for travelers?) or that counter the stereotype (54% disagree that giving holiday presents — rather than cash — is inefficient. Okay, counters it a little).
Anyway, this got me wondering. What if sociology had a similar panel? I mean, aside from the fact that no one would care. I can think of empirical findings we’d have broad confidence in that much of the public wouldn’t buy — for example, that there’s lots of hiring discrimination against African-Americans. But are there policy prescriptions we’d agree on — ones grounded in the discipline, as opposed to coming solely from our left-leaning tendencies, though of course the two are hard to separate — that would tell us, Yep, sociologists WOULD say that?
EDITED TO ADD: Yes, I know that Piketty does not actually argue r > g is the cause of recent inequality growth in the US, which is what the question asks. But if they can headline the poll “Piketty on Inequality,” it seems fair to call the statement “Piketty-inspired.”
Siri Ann Terjesen is an assistant professor of management and international business at Indiana University and an Associate Editor of the Academy of Management Learning & Education. She is an entrepreneurship researcher and she also does work on supply chains and related issues. This guest post addresses multidisciplinary scholarship.
I am interested in orgtheory readers’ perspectives on a critical but under-examined issue in academia, including scholarship about organizations: individual scholars are incentivized to focus on a particular issue in a particular discipline and discouraged from developing deep expertise in multiple fields. For example, business scholars examine the same universe (e.g., firms, employees, etc.), albeit through different branches (disciplines such as strategy, organizational behavior, operations management, finance, accounting, ethics, law, etc.) that do not dialogue actively with one another. Very few academics develop a real repertoire across multiple fields—that is, become truly multidisciplinary ‘protean’ scholars who contribute to leading journals in multiple disciplines (e.g., disciplines as distinct as ethics and operations management, or accounting and organizational behavior) and have a profound influence across these distinct arenas.
This is surprising because history shows us that some of the greatest learning and paradigm shifts come from the contributions of polymaths—individuals whose expertise draws on a wide range of knowledge—from early historical examples (Francis Bacon, Erasmus, Galileo Galilei, Hildegard von Bingen, and Ben Franklin) to more recent scholars (Michael Polanyi and Linus Pauling). Researchers in the applied sciences are beginning to recognize the power of polymath, protean scholars who produce new innovations through their openness to variety and flexibility and their operation across multidisciplinary spaces. There are also personal motivations: individuals with many repertoires of knowledge may develop a broader understanding and appreciation of human accomplishment, and are personally able to enjoy the pursuit of multiple paths to excellence and to have more peak experiences across these fields. Certainly there are prevailing counterarguments concerning the “Jack of all trades but master of none” and the sheer costs of operating in multiple institutions with distinct players, particularly gatekeepers. I welcome orgtheory readers’ insights and debates on this issue in any respect—theoretical perspectives, pros/cons, examples, personal experiences, etc.
This week, I’d like to focus on the sociology of race. We’ll discuss Shiao et al.’s Sociological Theory article The Genomic Challenge to the Social Construction of Race, which is the subject of a symposium. After you read the article and symposium, you might enjoy the Scatterplot discussion.
In this first post, I’d like to discuss the definitional problems associated with the concept of “race.” The underlying idea is that people differ in some systematic way that goes beyond learned traits (like language). One aspect of the “person in the street” view of race is that it reflects common ancestry, which produces correlated physical and social traits. When thinking about this approach to race, most sociologists adopt the constructivist view, which holds that: (a) the way we group people together reflects our historical moment, not a genuine grouping of people with shared traits, and (b) the only physical differences between people are superficial.
One thing to note about the constructivist approach to race is that the first claim is very easy to defend, while the second is very challenging. The classifications used by the “person on the street” are essentially fleeting social conventions. For example, Americans used the “one-drop rule” to classify people, but it makes little sense because putting more weight on Black ancestors than on White ancestors is arbitrary. Furthermore, ethnic classifications vary from place to place and even year to year. The ethnic classifications used in social practice flunk the basic tests of reliability and validity that one would want from any measurement of the social world.
The second claim—that the physical differences between people are only superficial—is much harder to make. This is not an assessment of the truth of the claim; rather, the evidence needed to support it is a tall order. Namely, to make the strong constructivist argument, you would need (a) a definition of which traits matter, (b) a systematic measurement of those traits from a very large sample of people, (c) criteria for clustering people based on the data, and (d) a clear test that all (or even most) reasonable clustering methods show a single group of people. As you can see, you need *a lot* of evidence to make that work.
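As a small illustration of step (d), here’s a sketch of how one might check whether two clustering methods agree on how they group the same people, using a simple pair-counting agreement measure (the Rand index). The data and labelings are toy values; a real test would use genomic measurements and many clustering methods.

```python
# Sketch of step (d): quantifying agreement between two clusterings of the
# same people. The labelings below are toy values, not real data.

from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of pairs on which two clusterings agree: a pair counts as
    agreement if it is same-cluster in both labelings or split in both."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Two hypothetical methods clustering the same six people:
method_1 = [0, 0, 0, 1, 1, 1]
method_2 = [0, 0, 1, 1, 1, 1]
print(rand_index(method_1, method_2))  # agreement on 10 of 15 pairs ≈ 0.667
```

The constructivist test in (d) would amount to showing that scores like this stay low (no stable grouping) across essentially all reasonable methods, which is exactly why the evidentiary bar is so high.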
That is where Shiao et al. get into the game. They never dispute the first claim, but they suggest that the second claim is indefensible: there is evidence of non-random clustering of people in genomic data. This is very important because it disentangles two distinct issues—race as social category and race as intra-group similarity. It’s like saying the Average Joe may be mistaken about air, earth, water, and fire, but real scientists can see that there are elements out there and you can do real science with them.