Archive for the ‘networks’ Category
Sociological Science has a new paper by Sara Cowan that discusses when people share information, using data on abortion:
Though abortion is a more common event in the United States than miscarriage, this article shows that more Americans hear of women who have had miscarriages than they hear of women who have had abortions. This is a result of both the patterns of secret telling and keeping: more Americans tell miscarriage secrets to more people than abortion secrets, and more Americans keep abortion secrets from more people than miscarriage secrets.

In the introduction, I described two scenarios: one in which people tend to hear secrets they previously approved, and this pattern would contribute to a stasis in public opinion and a second scenario in which people hear secrets they previously condemned and this scenario would inspire social influence and facilitate social change. The data analyzed here illustrate the first scenario. They show a strong trend whereby individuals who hold restrictive views toward abortion are less likely than their liberal peers to report knowing someone who has had one. People tend to hear those secrets about which they already approve and are less likely to hear secrets about which they disapprove. Secret keeping and selective disclosure intensify this experience of homophily above and beyond any objective network segregation.
This is not a post about Ello. Because Ello is so last Friday. But the rapid rise of and backlash against upstart social media network Ello (if you haven’t been paying attention, see here, here, here) reminded me of something I was wondering a while back.
Lots of people are dissatisfied with Facebook — ad-heavy, curated in a way the user has little control over, privacy-poor. And it looks like Twitter, which really needs to bring in more revenue, is taking steps to move in the same direction: algorithmic display of tweets, with the ultimate goal of making users more valuable to advertisers.
The question is, what’s the alternative? There have been a lot of social network flavors of the month, built on a variety of business models. Some of them, like Google Plus, are owned by already-large companies that would be subject to similar business pressures as Facebook and Twitter. Others, like Diaspora (remember Diaspora?), were startups with an anti-Facebook mission (privacy, decentralization), but collapsed under the weight of their own hype.
I can’t imagine that a public utility model would work for a social network — I just don’t see “government-owned” and “fast-moving technological change” going together successfully. But I keep wondering why a Wikipedia model couldn’t work. Make it a 501(c)3. Attract some foundation funding — it’s a pro-democracy project. Solicit gifts from pro-privacy people in the tech industry — there are lots of those. Then once it’s off the ground, ask users for donations.
Sure, there is the huge, huge hurdle of getting enough of a network base to attract new users. But it seems like the costs should not be insane. If it only takes 200 employees to run Wikipedia, as large as it is, how many would it take to get a big social network off the ground? Facebook employs 7000, but a lot of them have to be in the business of figuring out how to sell Facebook.
Maybe there have been (failed) efforts like this and I just haven’t noticed. Or maybe the getting-the-user-base issue is really insurmountable. But it seems like if a real Facebook alternative is to emerge, it can’t just be from a corporate competitor (e.g. Google), and the startup/VC model (e.g. Ello) is going to be susceptible to all the same problems as it grows. Why not a different model?
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.
university of chicago visit – everything you wanted to know about tweets and votes, but were afraid to ask
I will be a guest of the computational social science workshop at the University of Chicago this coming Friday. I will present a very detailed talk on the more tweets/more votes phenomenon called “Everything You Wanted to Know About the Tweets-Votes Correlation, but Were Afraid to Ask.” If you want to chat or hang out, please email me.
Refreshments will be served.
Last Saturday, Andrew Gelman responded to a post about a discussion in my social network analysis course. In that post, my student asked about the different strengths of a network effect reported in a paper. Gelman and Cosma Shalizi both noted that the paper does not show a statistically significant difference. I quote the concluding paragraphs of Andrew’s commentary:
I’m doing this all not to rag on Rojas, who, after all, did nothing more than repeat an interesting conversation he had with a curious student. This is just a good opportunity to bring up an issue that occurs a lot in social science: lots of theorizing to explain natural fluctuations that occur in a random sample. (For some infamous examples, see here and here.) The point here is not that some anonymous student made a mistake but rather that this is a mistake that gets made by researchers, journalists, and the general public all the time.
I have no problem with speculation and theory. Just remember that if, as is here, the data are equivocal, that it would be just as valuable to give explanations that go in the opposite direction. The data here are completely consistent with the alternative hypothesis that people follow their spouses more than their friends when it comes to obesity.
Fair enough. Let me add a pedagogical perspective. When I teach network science to undergrads, I generally have a few goals. First, I want to show them how to convert social tie data into a matrix that can be analyzed. Second, I want students to learn how network concepts might operationalize social science concepts (e.g., how group cohesion might be described as high density). Third, I want to spark their imagination a little and see how network analysis can be used to describe or analyze a wide range of phenomena and thus encourage students to generate explanations. Given that students have very, very modest math skills and real problems generating hypotheses, getting down into the weeds with the papers is often last.
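To make the first two goals concrete, here is a minimal sketch of the kind of exercise I have in mind: turning a roster of social ties into an adjacency matrix and computing density as a simple operationalization of group cohesion. The names and ties are hypothetical, and the code uses only plain Python so students can follow each step.

```python
# Hypothetical roster of undirected friendship ties
ties = [("Ana", "Ben"), ("Ben", "Cal"), ("Ana", "Cal"), ("Cal", "Dia")]

# Index each actor so ties map to matrix positions
actors = sorted({person for tie in ties for person in tie})
idx = {name: i for i, name in enumerate(actors)}
n = len(actors)

# Build a symmetric (undirected) adjacency matrix of 0s and 1s
adj = [[0] * n for _ in range(n)]
for a, b in ties:
    adj[idx[a]][idx[b]] = 1
    adj[idx[b]][idx[a]] = 1

# Density = observed ties / possible ties; for an undirected
# network the number of possible ties is n * (n - 1) / 2
observed = sum(sum(row) for row in adj) / 2
density = observed / (n * (n - 1) / 2)
print(density)  # 4 ties among 4 actors -> 4/6, about 0.667
```

A denser matrix (density near 1) would correspond to a more cohesive group, which is exactly the kind of concept-to-measure translation the course tries to teach.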
So when I teach the week on networks and health, my discussion questions are like this: “Why do you think health might be transmitted from one person to another? How would that work?” I also try to get into basic research design: “How do you measure health? Do you know what BMI is?” So the C&F paper has many upsides. The downside is that the paper’s interesting hypothesis can easily distract you from the methodological controversy it has generated, or even from some very sensible observations about confidence intervals. The bottom line is that when you have to teach everything (theory, methods, research design and topic), you don’t quite get everything. But still, if a student, who self-admittedly knows little math or stats, can get to the point of asking about mechanisms, then that’s a teaching victory.