Archive for the ‘psychology’ Category
Starting on July 1, we will discuss “The Suffocation Model: Why Marriage in America Is Becoming an All-or-Nothing Institution” by Finkel et al., which appeared last summer in Current Directions in Psychological Science. The discussion will be less intense than a full-blown book forum, but we will dedicate one or two posts to it. This article was suggested by Chris Martin.
My colleague Johan Bollen and his collaborators have been working on a project that tries to measure and verify the “happiness paradox,” an extension and elaboration of the “friendship paradox.” From the MIT Technology Review:
The friendship paradox is straightforward to explain. It comes about because of the skewed way people collect friends on online social networks such as Twitter and Facebook. Most people have a small number of friends—a few dozen or so. But a tiny fraction of people have huge numbers of friends: millions or tens of millions of followers in some cases.
This has two effects. First, it makes them much more likely to appear in a random person’s list of friends. And second, it dramatically skews the answer when calculating the average number of friends that a person’s friends have.
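The mechanism described above is easy to see in a toy simulation. Below is a minimal sketch (my own illustration, not Bollen et al.'s code) that builds a network in which a handful of hypothetical “hub” users have hundreds of friends while everyone else has a few, then compares the average friend count with the average friend count of people's friends:

```python
import random

# Illustrative toy network, not the authors' data or method.
random.seed(1)
n = 1000
friends = {i: set() for i in range(n)}

def connect(a, b):
    """Add an undirected friendship edge (no self-loops)."""
    if a != b:
        friends[a].add(b)
        friends[b].add(a)

# A tiny fraction of "hubs" collect many friends...
for h in range(10):
    for _ in range(300):
        connect(h, random.randrange(n))
# ...while everyone else makes only a few connections.
for i in range(n):
    for _ in range(3):
        connect(i, random.randrange(n))

# Average number of friends per person.
avg_friends = sum(len(f) for f in friends.values()) / n

# Average, over people, of their friends' mean friend count.
with_friends = [fs for fs in friends.values() if fs]
avg_friends_of_friends = sum(
    sum(len(friends[f]) for f in fs) / len(fs) for fs in with_friends
) / len(with_friends)

print(avg_friends, avg_friends_of_friends)
# The second number comes out much larger: on average,
# your friends have more friends than you do.
```

Because the hubs show up in so many people's friend lists, they drag the friends-of-friends average far above the plain average — exactly the skew the article describes.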
Bollen et al. then explain the analogous happiness paradox:
Bollen and co begin by analyzing the most recent 3,000 tweets sent by some 40,000 Twitter users. They use a standard algorithm to analyze each tweet to determine its sentiment—whether positive or negative—and then assume this gives a sense of the user’s happiness level. In other words, they assume that people who are less happy send more negative tweets. They also include in the analysis the number of followers and followees for each individual.
The results make for interesting reading. Bollen and co say there is a clear friendship paradox at work in this network, as expected. But they also say there is a less striking but nonetheless significant happiness paradox at work, too.
Indeed, Bollen and co say their evidence suggests that the more unhappy the individual, the stronger the happiness paradox they face. “Although happy and unhappy groups of subjects are both affected by a significant happiness paradox, unhappy subjects are most strongly affected,” they say.
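For readers curious what lexicon-based sentiment scoring looks like, here is a minimal sketch. The word lists and function names are made up for illustration; the actual algorithm Bollen and co used is more sophisticated:

```python
# Hypothetical word lists -- real sentiment lexicons contain thousands of
# scored terms; this is only a sketch of the general idea.
POSITIVE = {"happy", "great", "love", "good", "fun"}
NEGATIVE = {"sad", "awful", "hate", "bad", "tired"}

def tweet_sentiment(tweet):
    """Score one tweet: +1 per positive word, -1 per negative word."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def user_happiness(tweets):
    """A user's happiness proxy: mean sentiment across recent tweets."""
    return sum(tweet_sentiment(t) for t in tweets) / len(tweets)

print(tweet_sentiment("I love this great day"))       # prints 2
print(user_happiness(["I love this great day",
                      "so tired and sad"]))           # prints 0.0
```

Averaging such scores over a user's last 3,000 tweets gives the per-user happiness estimate; the paradox is then tested by comparing each user's score against the mean score of the people they follow.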
The original paper is here. Recommended.
Psychology Today has an article on a new analysis of play and the mental health of young people. The gist is that (a) in recent decades, we have allowed kids less unstructured play, and (b) unstructured play increases the belief that one has direct control over one's life, which in turn has a positive effect on mental health and various measures of well-being. From the article:
The standard measure of sense of control is a questionnaire developed by Julian Rotter in the late 1950s called the Internal-External Locus of Control Scale. The questionnaire consists of 23 pairs of statements. One statement in each pair represents belief in an Internal locus of control (control by the person) and the other represents belief in an External locus of control (control by circumstances outside of the person). The person taking the test must decide which statement in each pair is more true. One pair, for example, is the following:
- (a) I have found that what is going to happen will happen.
- (b) Trusting to fate has never turned out as well for me as making a decision to take a definite course of action.
In this case, choice (a) represents an External locus of control and (b) represents an Internal locus of control.
Many studies over the years have shown that people who score toward the Internal end of Rotter’s scale fare better in life than do those who score toward the External end. They are more likely to get good jobs that they enjoy, take care of their health, and play active roles in their communities—and they are less likely to become anxious or depressed.
In a research study published a few years ago, Twenge and her colleagues analyzed the results of many previous studies that used Rotter’s Scale with young people from 1960 through 2002. They found that over this period average scores shifted dramatically—for children aged 9 to 14 as well as for college students—away from the Internal toward the External end of the scale. In fact, the shift was so great that the average young person in 2002 was more External than were 80% of young people in the 1960s. The rise in Externality on Rotter’s scale over the 42-year period showed the same linear trend as did the rise in depression and anxiety.
[Correction: The locus of control data used by Twenge and her colleagues for children age 9 to 14 came from the Nowicki-Strickland Scale, developed by Bonnie Strickland and Steve Nowicki, not from the Rotter Scale. Their scale is similar to Rotter’s, but modified for use with children.]
It is reasonable to suggest that the rise of Externality (and decline of Internality) is causally related to the rise in anxiety and depression. When people believe that they have little or no control over their fate they become anxious: “Something terrible can happen to me at any time and I will be unable to do anything about it.” When the anxiety and sense of helplessness become too great people become depressed: “There is no use trying; I’m doomed.”
Wow. Later in the article, they talk about how this shift correlates with mental health outcomes. So don't schedule them or boss them around; let them play.
This week, I will spend quite a bit of time discussing a book called The Triumphs of Experience by George Vaillant. I’ve written briefly about the book before, but I didn’t appreciate its magnitude until I assigned it for a class. Roughly speaking, the book follows a cohort of college men from the 1940s to the mid-2000s. Thus, it tracks people from young adulthood to old age. It’s a powerful book in that it uses enormously rich data to analyze the life course and identify factors that contribute to our well-being. You won’t find many other books with such deep data to address life’s most important questions: What makes us happy? What is the good life?
In this first post, I want to briefly summarize the book and then note a few drawbacks. Later this week, I want to delve into two topics in more detail: alcoholism and parental bonds. To start: the Grant Study of Human Development randomly selected a few hundred male Harvard undergraduates for a long-term study on health and the life course. It’s a biased sample, but it’s well suited for studying long life and work (remember, many women became homemakers in that era) while controlling for educational attainment. The strength of this book is its ability to mine rich qualitative data on the life course and then map the associations over decades. The data are rich enough that the author can actually consider alternative hypotheses and build multi-cause explanations.
A few drawbacks: Rhetorically, I thought the book was a bit wordier and longer than it needed to be. Also, I wish that the book had a glossary or appendix where one could look up definitions. More importantly, this book will not be convincing to folks who are obsessed with identification. It is very “1960s” in that the researchers collected a lot of data and then channeled their energies into looking at cross-group differences. But still, considering that running an RCT on your family is not possible, and given the importance of the data, I’m willing to forgive. Wednesday: The importance of your family.
There’s been a paper making the rounds, and a few folks have asked me for comments. It is called “Political Diversity Will Improve Social Psychological Science.” It is forthcoming in Behavioral and Brain Sciences and is co-authored by Jose Duarte, Jarret Crawford, Jonathan Haidt, Lee Jussim, and Phil Tetlock. Duarte et al. make the following claims:
- Social psychology, like most academic areas, is politically homogeneous.
- Intellectual diversity is a good thing.
- “The underrepresentation of non-liberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination.”
My overall reaction is sympathetic, but critical. In my comments, I will start with evidence that is specific to social psychology, but also comment on the broader issue of professorial partisanship.
The lopsided political slant of academia is to be lamented. Since social scientists study human values and ethical behavior, we definitely lose something if only one side of the argument is represented. I also think that when sociologists move from disagreement to hostility, they do a disservice. All students should feel like it is permissible to disagree with an instructor and no one wants to be judged on their political views when it comes to graduate school admissions and appointment to the faculty.
On other points, I am more critical. For example, their coverage of the debate over discrimination is lacking. It is true that many academics exhibit confirmation bias – they are more likely to approve of studies that support their ideological view. That is a logically consistent story for why, say, sociologists might be overwhelmingly liberal, because we deal with lots of research that has social implications. But it doesn’t really explain other facts, like that a majority of physical scientists vote Democrat (see page 29 of the Gross and Simmons book on professors*). How would people possibly know the party preference of mathematicians? It’s not on the CV, and in the eight years I spent in math departments, I still have no idea what the preferences of my fellow students and teachers were.
Another point of criticism is that they uncritically accept self-reports of willingness to discriminate. They cover a number of studies showing that some liberals admit they would discriminate, while others do not. They see this as strong evidence. I do not, because of social desirability bias. The default response is for people to claim they do not discriminate. My hypothesis is that overly zealous academic liberals are simply more motivated to admit personal fault, which means they deviate from the socially desirable answer at much, much higher rates.
One point that is never brought up is that supposedly liberal disciplines can become noticeably more conservative. For example, it is my impression that economics was dominated by Keynesians until about 1960 or so. Now, there are many notable conservative and libertarian economists who are very prominent. Similarly, there are disciplines with an even Republican/Democrat balance, such as engineering (see page 29 of Gross and Simmons). Those who think discrimination is the smoking gun in this story need to explain why economics has become more conservative over time, how discrimination is supposed to work in apolitical fields like math, and why liberals never conquered other areas. Most of the story about liberal over-representation is about the humanities and social sciences, which do indeed have lopsided tendencies.
Perhaps the point that I always think about is one that Duarte et al. and others always seem to miss. A major finding of Gross’s 2013 book on academic liberals is that there is indeed a self-image problem. And yes, much of it has to do with the fact that conservative students don’t think they will “fit in” with liberal professors. But there is another very strong reason which Gross covers – money. Academia is a low-paying profession, and conservative students value income in jobs more than other types of students do.
This finding dovetails well with an observation about professions. Liberals tend to dominate in areas that are low paying and focus on issues like education, learning, caregiving, and culture. These include the arts, entertainment, academia, K-12 teaching, nursing, social work, and science. Once you add some high income, conservatives start appearing in large numbers (e.g., the Dem/GOP ratio is way different for doctors vs. nurses; artists vs. arts managers; lawyering vs. other humanities-oriented work; physical science vs. engineering).
That summarizes my response to Duarte et al. The basic point is correct – social psychology and, by implication, other areas are politically lopsided, and that’s likely not a good thing. On other points, I think they overread the evidence. Discrimination may well occur, but when you look at a broad range of evidence, the story gets complicated very quickly.
* Disclosure: I have a chapter in this book on the history of ethnic studies.
Long-time readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity; i.e., they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe these earlier studies were flawed, or based on limited samples, or better research methods provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive to avoid). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
Ok, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of LoR research in education, published in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.