orgtheory.net

on facebook and research methods

Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention: for a week in January 2012, FB tested whether it could affect users’ emotions by skewing the feeds of roughly 700,000 people toward “happier” or “sadder” updates. It could: people shown happier content posted more positive updates themselves, and people shown sadder content posted more negative ones. In addition, when FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)

What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered, for example, whether FB’s IRB (apparently the study was IRB-approved — is that an internal process?) considered its effects on depressed people.

While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.

1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)

2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.

My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, since you could imagine people responding better to ads either when they’re happy or when they’re sad. He also noted that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.

Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.

Written by epopp

June 29, 2014 at 3:00 am

15 Responses

  1. Beth, I think you’re right that we’re in a “brave new world” where our technology may be moving faster than our laws, institutions and social norms.

    Interestingly, PNAS editor Susan Fiske went from being mildly defensive earlier today…

    …to admitting being “a little creeped out” in a matter of hours.

    http://m.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/

    Presumably, the IRBs at Cornell and UCSF vetted and approved this research. To what degree is it the responsibility or role of the editor of PNAS – or any journal – to second-guess or vet the work of university IRBs? Since private companies such as Google, Microsoft, Facebook and Apple do extensive research and have access to goldmines of data, researchers affiliated with those companies – or at least collaborating with them – will likely play a big role in social science research in the future. If private companies have less onerous IRBs, does or should this shift the burden of vetting research ethics to editors and reviewers? Presumably, somebody’s going to have to do it. As Fiske wisely and ominously pointed out, “who knows what other research they’re doing.”

    Perhaps it’s a moot point, but ethics aside, I’m not sure this research was worth publishing at all. Maybe I’m missing something here, but the finding that if you flood/prime people with negative (positive) stories and emotions, they’ll begin to feel more negatively (positively) strikes me as a banal finding. I think Ezra mentioned that certain journals tend to be overly enthralled with flashy data, sometimes at the expense of substantive findings…

    Kyle Siler

    June 29, 2014 at 3:28 am

  2. […] at orgtheory, Elizabeth Popp Berman agrees that “the whole idea is creepy” but also argues […]

  3. I completely agreed with this defense of the study.

    My main response is that this hysterical reaction to the study is why we can’t have nice things (i.e., big-data randomized field trials run in cooperation with industry). But I guess some people like reading respondents 500 words of legalese before asking them Likert-scale questions about whether we should remove books written by atheists from the local library.

    gabrielrossman

    June 29, 2014 at 2:18 pm

  4. I’m not sure the biggest issues here are the IRB/ethics/consent issues. Rather, I think that Zeynep Tufekci gets it exactly right when she writes:

    “I’m struck by how this kind of power can be seen as no big deal. Large corporations exist to sell us things, and to impose their interests, and I don’t understand why we as the research/academic community should just think that’s totally fine. That is the key strength of independent academia: we can speak up in spite of corporate or government interests.

    To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!”

    I recommend the whole (short) post as I think it moves the discussion from the narrow realm of IRBs and the ethics of consent to the problem of a new era of subtle, data-driven corporate influence, and academics’ role in that entire conversation.

    Dan Hirschman

    June 29, 2014 at 5:46 pm

  5. […] sociologist Elizabeth Popp Berman points out, the study raises some pretty massive questions that are going to be important for Facebook, […]

  6. The ethical gymnastics here are pretty entertaining to watch. The OP – is she “hysterical” too, Gabriel? – and GR are sure on the same page about one thing: only dummies have scruples about being unwittingly manipulated by corporations and their academic lackeys. The hip kids know that the sole _purpose_ of FB is such manipulation and that, anyway, they are always, everywhere being manipulated. I mean, really, _everyone_ knows that, right? I bet GR has “I don’t care about informed consent” on vinyl.

    Rich

    June 29, 2014 at 7:35 pm

  7. Anybody concerned about how huge corporations might manipulate innocent individuals should be thrilled about this study. It shows that Facebook actually can’t really manipulate users – the effect size is ridiculously small, even though it has to incorporate the effects of people posting about funerals, cancer, etc., which surely make them less likely to post all sorts of silly positive stuff. So this is a good story for anyone worried about the ethical implications of experimental manipulations.

    Anonymous

    June 29, 2014 at 8:14 pm

  8. […] with what it shows you in order to understand how you will react,” sociologist Elizabeth Popp Berman writes. “That is how they stay in […]

  9. Rich,

    Don’t be so gloomy. After all it’s not that awful. Like the fella says, in Silicon Valley for 30 years under the VCs they had acquisitions, IPOs, and patent suits, but they produced the PC, Google, and the iPhone. In sociology they had brotherly love – they had 100 years of theory and analysis, and what did that produce? The gut major.

    You know, I never feel comfortable on these sort of things. Human subjects? Don’t be melodramatic. Look down there. Tell me. Would you really feel any pity if one of those dots felt mildly bummed out for a few seconds? If I offered you a PNAS for every dot that had an “awesome day at the beach” status update randomly deleted from their feed, would you really, old man, tell me to keep my publication, or would you calculate how many dots you’d need for a statistically significant result?

    gabrielrossman

    June 30, 2014 at 4:25 am

  10. I think social media should be run as a public utility under rules set by government bureaucrats subject to legislative control. Then we could set rules for this stuff publicly instead of having this constant agonizing. It’s amazing to me how disempowered everyone feels (and how inevitable and OK everyone assumes that feeling is) about such massive operations being run for profit instead of for the public good.

    Philip N. Cohen

    June 30, 2014 at 10:47 am

  11. …and a 35mm negative of The Third Man as well, apparently. I think you make my point GR. But what do I know? I’m just a dummy typing on the internets who doesn’t care about your major or your iphone. Cuckoo clock or Mark Zuckerberg? Are those really your options?

    Rich

    June 30, 2014 at 1:27 pm

  12. I think the reason many of us were insufficiently outraged by the NSA revelations is that we know we are being spied on and manipulated constantly by large private companies, and we feel powerless to do anything about it. With this scandal, I’m afraid my outrage is focused less on what FB did than on:

    a) the fact that no legitimate academic researcher would ever get a project like this through an IRB;

    b) the fact that my IRB, at least, says directly that academics OUGHT to be held to higher standards than private businesses, and that a research project’s proposing to use personal information readily available on the Internet is not sufficient justification for an exemption if the IRB can think up some scenario in which a person might be hurt by that information (including saying “well, it may be public, but they should not have made it public”);

    c) the fact that my IRB, at least, requires all FB research to abide by FB’s “privacy statement,” which basically says that FB owns everything on its pages and that anyone else needs individualized consent to do anything with the content;

    d) the fact that non-academic researchers, who are not subject to the increasingly intrusive IRB regulations, are now intruding into publishing in academic journals.

    I’m not saying my outrage should be so narrowly focused, but it is.

    olderwoman

    June 30, 2014 at 2:01 pm

  13. Well, various back-and-forth ugly snark aside, I mostly agree with the post Dan recommended. One reason I think it’s completely disingenuous to dismiss the lack of informed consent in the study is that it would have been completely trivial for Facebook to debrief/notify people. But imagine if they had: “By the way, ye producer-of-our-marketable-lifestyle-content-in-the-aggregate, we just messed with your feed because we wanted to see if we could manipulate how you feel! Just so you know!” It would give the lie to their constant mythologizing refrain about connecting people, making the world better, etc. To say the least, there is a deep conflict of interest for them between minimal research ethics and the marketing that sustains their business. Shouldn’t we keep that in mind?

    I’m also uncomfortable with some of the drooling over the data itself, as though having lots of information from this trial (and a glitzy PNAS) somehow justifies the deception on its face. Besides, the (to my eye, perverse) defense of the deception offered by one of the study authors was that the effect size was so small that it didn’t really affect people. So either the findings are substantively inconsequential (even if they are statistically significant) OR they’re justified by the benefit to science of a corporation manipulating its producers (after all, people who participate on Facebook produce the raw material constituting Facebook’s product).

    So I’m looking for the least-worst option here. I see a bunch of corporations (Google, Facebook, Apple) manipulating me pretty much on a whim (if we want to talk about mission creep: actually read the ToS of any service you use lately? Makes me want to LAMP a home server, but that violates my ISP’s ToS!); I see a bunch of academics trying to excuse this because they seem to really like the data they’re getting; and I see a lurching, overreaching bureaucratic cluster-jam called the IRB that seems mostly concerned with self-perpetuation and limiting liability. Life is good.

    Bellerophon

    June 30, 2014 at 9:48 pm

  14. Late to the party, and it looks like the thread has already been hijacked by Jeremy over at scatterplot, but this story seems to have some legs. The study in question was just discussed on CNN as yet another FB abuse of its users. The panel of talking heads was (remarkably) of one mind in judging it a very bad idea, if only from a PR perspective.

    lisa

    July 1, 2014 at 12:39 am

  15. […] colleagues are currently going at it on Scatterplot and Orgtheory about the ethics of the recent Facebook study that nudged people’s emotions without telling […]

