Archive for the ‘psychology’ Category
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.
One of the awesome aspects of grad school (besides the occasional “free” pizza as you listen to the latest in research) is the sharing of resources among colleagues who are undergoing the same experiences. One grad school friend gave out copies of David Burns’s seminal Feeling Good, an exercise book that explains how to practice cognitive behavioral therapy (CBT).
Back in the fall, the Stanford alumni magazine had an article about how Burns became convinced of CBT’s efficacy over prescription drugs as a tool for treating depression, anxiety, perfectionism, and other paralyzing feelings:
What Burns did in Feeling Good, the first mass-market, evidence-based, self-help book for the relief of depression, was explain the tenets of cognitive behavioral therapy (CBT) for the lay person: that depression is caused by self-defeating beliefs and negative thoughts—thoughts like “I’m not good enough,” “I’ll never amount to anything,” or “I have no friends.” Feeling Good included exercises readers could use to change how they reacted to such thoughts and to stop depression before it spiraled down into an endless abyss of despair and pain. Study after study has since demonstrated CBT’s effectiveness.
Burns did not invent CBT; its philosophical underpinnings can be traced back to the Buddha or to Epictetus, the Stoic. Credit for laying the foundation of modern CBT generally goes to Philadelphia psychiatrist Aaron T. Beck and the late New York psychologist Albert Ellis. Burns remembers when he, like most psychiatrists, didn’t believe that something as simple as how we think could cause depression.
Working at the University of Pennsylvania’s Depression Research Unit in the 1970s, Burns researched the theory that low serotonin levels cause depression, an idea widely accepted as the “chemical imbalance theory” and conventional wisdom among popular media, many physicians and much of the public. Although Burns won the A. E. Bennett award from the Society of Biological Psychiatry in 1975 for his research on brain serotonin metabolism, he was not convinced that the chemical imbalance theory was valid. In one study, he and his colleagues gave massive daily doses of the amino acid L-tryptophan to depressed veterans in a double-blind study. L-tryptophan goes directly from the stomach to the blood to the brain, where it is transformed into serotonin. If depression results from a deficiency of brain serotonin, the massive increase should have triggered clinical improvement, but it didn’t.
The study was published in a top research journal but did little to dim the growing excitement about the chemical imbalance theory. In 1988, Lilly launched the world’s first blockbuster SSRI antidepressant, a drug with powerful effects on brain serotonin receptors. During its first 13 years, Prozac generated $21 billion in sales, or 30 percent of Lilly’s revenues. Burns still wasn’t convinced.
“I always wanted to see people’s lives transformed from depression and anxiety to joy and peace,” he says. In his clinical work, he didn’t see that happening very often, no matter how many pills he prescribed. His department chair suggested that he sit in on one of Dr. Aaron Beck’s weekly cognitive therapy seminars.
At first, Burns thought Beck’s presentation sounded like “pure hucksterism”; still, he began using CBT methods if only to prove to himself that they didn’t work. Soon, many patients he’d been treating with drugs and “you talk, I’ll listen” therapy started to get better. A lot better.
Burns felt torn. He had just won a five-year grant to develop a brain serotonin lab at Penn. Yet he wasn’t convinced serotonin played a role in depression or any other psychiatric disorder. After three agonizing months, Burns decided he’d “rather spend my life doing something that works.” He left Penn and opened a private practice “in a storeroom with a window,” two stories below Beck’s Center for Cognitive Therapy.
Burns’s doubts were vindicated by a landmark 2002 meta-analysis by psychologist Irving Kirsch, now at Harvard, of all trials submitted to the FDA by the manufacturers of the six most widely prescribed antidepressants approved between 1987 and 1999. Not widely publicized until a 60 Minutes report in February 2012, it showed only a slight difference in patient response between the drugs and placebos.
Ezra Klein interviews Kevin Roose, who has a new book about young Ivy League graduates who work on Wall Street. The take-home point is simple: people who graduate from competitive schools gravitate toward these jobs not because they love business, but because they want security. Wall Street jobs are highly paid, require little experience, and carry a bit of prestige. On the origins of the short-term Wall Street job:
Wall Street invented this new way of recruiting in the early 80s. Before that they hired like any other industry. If you wanted to be a banker you applied for a job at a bank and they hired you or they didn’t. But in the early 80s Goldman Sachs and others figured out they could broaden their net and get lots of really smart people if they made it a temporary position rather than a permanent one.
So they created the two-and-out program. The idea is you’re there for two years and then you move on to something else. That let them attract not just hardcore econ majors but people majoring in other subjects who had a passing interest in finance and didn’t know what else to do. People now think going to a bank for two years will help prepare them for the next thing and keep them from having to make these hard decisions about the rest of their life. It made it like an extension of college. And it was genius. It led to this huge explosion in recruitment and something like a third of Ivy League graduates going to Wall Street.
Of course, it’s a mixed bag for the grads:
EK: So after writing this book, what would you say to a college senior thinking of going to Wall Street?
KR: First I would ask them why they wanted to work in an investment bank. If the answer is “because I’m tremendously in debt and need to pay it off” or “I’ve been reading Barron’s since I was 12 years old and I desperately want to be an investment banker” then those are legitimate reasons. Go ahead. But if it’s just about taking risk off the table and doing the safe prestigious thing, I’d tell them first that it will make them truly miserable, the kind of miserable it could take years to recover from, and that it also no longer has that imprimatur. It can actually hinder you. I’ve spoken to tech recruiters who say they only hire bankers in their first year or two because after that banking ruins them.
EK: How does it ruin them?
KR: It makes them too risk conscious. It gets them used to a standard of lifestyle they may not be able to replicate in any other industry. And it has a deleterious effect on creativity. Of the eight people I followed, a few came out very damaged by the experience. And not in a way a vacation can cure. It’s not about having bags under your eyes. It destroys your ability to think in creative ways about what it means to build something of value. The people I followed would admit they got a lot out of being a banker but I don’t think they’re all that tuned into the ways the experience changed them.
Check it out.
This guest post on the politics of sociology is written by Chris Martin, a doctoral student in sociology at Emory University.
Conservatism doesn’t seem to be a unipolar thing, according to much of the social psychological research on political attitudes. Rather, you can be conservative by being high in either social dominance orientation (SDO) or right-wing authoritarianism (RWA). Of course, the two dimensions are moderately correlated, but they’re not the same thing: high-SDO people dislike socially subordinate groups, and high-RWA people dislike socially deviant (or unconventional) groups. As a centrist, however, I’ve found that there’s a lack of research on the opposite poles of these scales, even though there clearly seems to be a subset of liberals who like socially subordinate groups and a subset who like socially deviant groups. Again, there’s considerable overlap between these two subsets. And there’s a small subset of libertarian liberals who don’t lean toward either pole.
This comes across in social psychological work on religious freedom. Early research showed that high-RWA people are more supportive of Christian than Muslim mandatory prayer, while low-RWA people oppose both types of prayer equally. However, if you change “mandatory” to “voluntary,” you find that low-RWA people no longer disfavor both types. Rather, they more strongly favor Muslim than Christian school prayer space.
To some degree, I’ve found that sociology has become so ideologically homogeneous that it’s now the disciplinary norm to avoid using “inequality” to describe preferential treatment of subordinate or deviant groups. In the race domain, in fact, centrists can get accused of supporting colorblind ideology or denying White privilege, even if they have a well-reasoned critique of preferential treatment. And in the gender/sexuality domain, the norm is for 50% of the research to focus on people who are deviant by conventional standards. But this skewness of focus isn’t termed inequality. My point isn’t about race or gender, though, but the larger issue of whether there’s a place for centrists in sociology—people who neither valorize nor condemn subordinate and deviant groups. Psychological social scientists have begun to address this issue—see Jonathan Haidt and Lee Jussim in particular—focusing on how this political homogeneity harms science. Where does sociology stand?
The Atlantic has a new article called “The Confidence Gap.” Katty Kay and Claire Shipman review the academic literature to discuss one source of gender inequality – systematic differences in confidence. Roughly speaking, Kay and Shipman suggest that one reason men are more likely to rise faster in their careers is that men are simply overconfident. The fortune cookie version of the argument is that women will apply for a job only if they are sure that they are 100% qualified, while men will take a shot if they are half qualified.
A few comments: While I believe that sexism exists, the article is consistent with a “sexism without sexists” style argument as well. In other words, if groups A and B each make up half the population but members of A apply for raises 66% of the time and members of B apply 33% of the time, you will very quickly get inequality even when bosses do not consider gender.
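A minimal back-of-the-envelope sketch of that arithmetic. The 66%/33% application rates come from the example above; the grant rate and number of review cycles are made-up numbers purely for illustration:

```python
# Hypothetical numbers: two equally sized groups with equal ability;
# only the application rates differ. Bosses are gender-blind -- every
# application is granted with the same probability.
APPLY_RATE = {"A": 0.66, "B": 0.33}
GRANT_RATE = 0.5   # identical for everyone; no bias at the decision point
CYCLES = 10        # e.g., ten annual review cycles

expected_raises = {g: APPLY_RATE[g] * GRANT_RATE * CYCLES
                   for g in APPLY_RATE}

for group, raises in expected_raises.items():
    print(f"group {group}: {raises:.2f} expected raises over {CYCLES} cycles")

# Group A ends up with twice as many raises as group B -- inequality
# emerges even though the decision itself never considered group membership.
```

The point of the sketch is that the gap compounds mechanically from the application rates alone; nothing about the grant decision needs to be biased.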
A policy observation from some of the experimental work: Kay and Shipman describe an experiment in which men and women try to solve a puzzle. Initially men do better because they attempt almost all the questions, while women try only when they are sure of the answer. When women are required to attempt the puzzles, the scores equalize. The policy implication is that consideration for raises and promotions should be routine: everyone is evaluated automatically, rather than only those who put themselves forward.
The article has a lot to think about for folks interested in gender and inequality.
Are humans by nature social animals? My colleague, Adam Waytz, argues in a provocative essay for Edge.org that the idea that humans are naturally social may be more myth than reality. That is, if we define human sociability as the tendency to be cooperative with others, compassionate, and empathetic, it’s hardly the case that humans will always act or think in a social way. Adam’s essay is geared toward psychologists, a field where the trend has been to describe humans’ brains, hormones, and cognition as innately social.
He points out various ways in which psychological research shows that this is just not true. Humans are as competitive as they are cooperative, and in certain situations competition overrides cooperation. Empathy isn’t an automatic response. Humans have a strong in-group bias and a tendency to treat people outside their group with suspicion and distrust. Social behaviors seem to be triggered by certain situational characteristics rather than being the default. Moreover, our capacity to be social may be much more limited than we have previously recognized.
Because motivation and cognition are finite, so too is our capacity to be social. Thus, any intervention that intends to increase consideration of others in terms of empathy, benevolence, and compassion is limited in its ability to do so. At some point, the well of working memory on which our most valuable social abilities rely will run dry.
Rather than sociability being the natural response to human interaction, it may actually be an achievement of society that we have created the right institutions that enable sociability. Sociologists, of course, have a lot to say about the latter.
A recent Washington Post op-ed describes recent research showing that interviews are poor predictors of future job performance. The idea is old, but the results elaborate in new ways. From Daniel Willingham, a psychologist at the University of Virginia:
You do end up feeling as though you have a richer impression of the person than that gleaned from the stark facts on a resume. But there’s no evidence that interviews prompt better decisions (e.g., Huffcutt & Arthur, 1994).
A new study (Dana, Dawes, & Peterson, 2013) gives us some understanding of why.
The information on a resume is limited but mostly valuable: it reliably predicts future job performance. The information in an interview is abundant–too abundant actually. Some of it will have to be ignored. So the question is whether people ignore irrelevant information and pick out the useful. The hypothesis that they don’t is called dilution. The useful information is diluted by noise.
Dana and colleagues also examined a second possible mechanism. Given people’s general propensity for sense-making, they thought that interviewers might have a tendency to try to weave all information into a coherent story, rather than to discard what was quirky or incoherent.
Three experiments supported both hypothesized mechanisms.
In other words, interviews encourage people to see patterns in the data where none exist. They also distract us with irrelevant information. Toss this in the file of “we have evidence it don’t work, but people will do it anyway.”