Archive for the ‘networks’ Category
Next week, we’ll discuss sex and sociology. Here are the topics:
- Why sex is important for sociologists to study
- My experience teaching social science research on sex
- Lessons from Laumann et al. (1994)
- Professional lessons from my first article on networks and STDs
- The unexpected literature that sprang up from that article
If you want to discuss other topics, mention them in the comments and we'll work them in.
Everyone wants to know the secrets to academic success. But despite the sizable academic self-help genre, actual evidence on whether scholars who pursue certain strategies are more successful than others is fairly thin on the ground.
Erin Leahey has written about the returns to research specialization, and I know of a couple of papers on the characteristics of highly cited scientists (gated links, sorry). There’s probably more in the voluminous scientometrics literature.
Some of our standard theories in organization theory suggest different answers to this question — and in particular, to the question of what research topic you should pick. (Assuming maximum academic success is your goal and not, say, following your passion.)
A whole line of research following from Ezra Zuckerman’s 1999 article on the penalty to category breaching suggests that not fitting into predefined categories can hurt a product. Audiences, for example, find genre-spanning work less appealing. On the flip side, though, Ron Burt’s work on structural holes would seem to imply that academics who bridge poorly connected networks are in a good position to benefit from their brokerage.
Of course, none of this work (at least the stuff I know) has looked specifically at academic research. But both theories fit plausible narratives of scholarly success.
It makes a lot of sense that people who bridge disconnected research communities would be in a position to bring useful ideas from one into the other, and reap the rewards that result. On the other hand, I can think of several examples of folks who seem to achieve less success than they merit because their work falls outside, or fits awkwardly between, well-defined research communities. A penalty to category-breaching or genre-spanning sounds entirely plausible too.
If I had to guess, I'd suspect that these two patterns both exist in academia but intersect in fairly complex ways. The network broker can benefit from her ability to borrow insights from another discipline or community, but only if those insights are recognizable enough to her home discipline that others can mentally place them in an understandable location within their field — that is, in an existing category.
The question is whether there’s a sweet spot — being just enough of a broker to benefit, without being so radical as to trigger a category-breaching penalty. Or maybe there’s a benefit to brokerage, but only in certain structural holes — ones that don’t cause the category problem. Or maybe there are a couple of mutually exclusive strategies for success.
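Purely as a thought experiment, here is a toy payoff function, in Python, that produces such a sweet spot. None of it comes from the research above; the functional form and every number are invented:

```python
# A toy payoff function, purely to make the "sweet spot" idea concrete.
# Nothing here comes from Zuckerman or Burt; every parameter is made up.

def payoff(brokerage, benefit=2.0, threshold=0.5, penalty=6.0):
    """Hypothetical returns to brokerage in [0, 1]: a linear benefit,
    plus a category-breaching penalty once work gets too hard to place."""
    bonus = benefit * brokerage
    cost = penalty * max(0.0, brokerage - threshold) ** 2
    return bonus - cost

if __name__ == "__main__":
    for i in range(11):
        b = i / 10
        print(f"brokerage={b:.1f}  payoff={payoff(b):+.2f}")
```

In this caricature, payoff peaks at an intermediate level of brokerage: below the legibility threshold the benefit grows linearly, and beyond it a category-breaching penalty takes over. Steepen the penalty and the peak slides toward the home category; remove it and more brokerage is always better.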
What do you think? Will academic brokers be hit with an illegitimacy penalty for their category breaching? Or are these in fact orthogonal issues for ambitious academics? Maybe there’s actual research that speaks to this.
(H/T to Tim Bartley for the conversation that spurred these musings.)
Vox has a nice interview with Dartmouth political scientist Brendan Nyhan about vaccine skeptics. What can be done to convince them? Brendan does research on political beliefs and has shown that, in experimental settings, people resist changing their beliefs even when confronted with corrective information. His experiments show that this is true not only for political beliefs but also for controversial health beliefs, such as belief in a vaccine-autism link.
But there was an additional section of the interview that I found extremely interesting. Nyhan notes that it is easier to be a vaccine skeptic when you don't actually see much disease: “… many of the diseases that vaccines prevent today are essentially invisible in the US. Vaccines are a victim of their own success here.” This reminded me of a 2002 paper I wrote on STD/HIV transmission. In a model that Kirby Schroeder and I worked out, in which people propose risky sex to one another, we noted an unusual prediction: if people decide to propose risky sex based on how many of their friends are infected, you may get unexpected outbreaks of disease:
In the models we have presented, there is no replacement; the population is stable. If we allow for replacement, then we arrive at a novel prediction: as uninfected individuals enter the population (through birth, migration, etc.) and HIV+ individuals leave (through illness), the proportion of infected individuals will decrease. Once this proportion falls, prior beliefs about the proportion of infected individuals will fall, and if this new prior belief is low enough, then HIV-negative individuals will switch from protected to unprotected sex. The long-term effect of replacement in our model, then, is an oscillation of infection rates… There is some evidence that oscillations in infection rates do occur… An intriguing avenue for research would be to link these patterns in infection rates to the behavior depicted in our model.
In other words, if your model of the world assumes that people take risks based on the infection rates among their friends, then it is entirely possible, even predictable, that you will see sudden spikes or outbreaks as people "let their guard down." For HIV, as more people use condoms and other protective measures, infection rates fall, and people may then engage in riskier sex because few of their friends are infected. For measles and other childhood infections, people who live in very safe places may feel free to deviate from the standard practices that created that safety in the first place. I don't know how to make vaccine skeptics change their minds, but I do know that movements like vaccine skepticism are somewhat predictable, and we can prepare for them.
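To see the feedback loop in action, here is a minimal simulation sketch in Python. To be clear, this is not the model from the 2002 paper; the functional form and all parameters are invented for illustration. Agents protect themselves when prevalence looks high and let their guard down when it looks low:

```python
# A toy sketch of the oscillation story above. Not the actual model from
# the paper; every number here is invented purely for illustration.

def step(prevalence, threshold=0.10, beta_risky=0.40, beta_safe=0.05,
         exit_rate=0.15):
    """One period: behavior responds to perceived prevalence."""
    # When prevalence looks low, people "let their guard down."
    risky = prevalence < threshold
    beta = beta_risky if risky else beta_safe
    new_infections = beta * prevalence * (1 - prevalence)
    # Infected individuals exit; uninfected replacements enter.
    exits = exit_rate * prevalence
    return max(0.0, prevalence + new_infections - exits), risky

prevalence = 0.20
for t in range(30):
    prevalence, risky = step(prevalence)
    print(f"t={t:2d}  prevalence={prevalence:.3f}  risky_behavior={risky}")
```

Prevalence falls while people protect themselves, dips below the threshold, rebounds once they take risks again, and the cycle repeats: the oscillation the quoted passage predicts.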
Sociological Science has a new paper by Sara Cowan that examines when people share information, using data on abortion and miscarriage:
Though abortion is a more common event in the United States than miscarriage, this article shows that more Americans hear of women who have had miscarriages than they hear of women who have had abortions. This is a result of both the patterns of secret telling and keeping: more Americans tell miscarriage secrets to more people than abortion secrets, and more Americans keep abortion secrets from more people than miscarriage secrets.

In the introduction, I described two scenarios: one in which people tend to hear secrets they previously approved, and this pattern would contribute to a stasis in public opinion, and a second scenario in which people hear secrets they previously condemned, and this scenario would inspire social influence and facilitate social change. The data analyzed here illustrate the first scenario. They show a strong trend whereby individuals who hold restrictive views toward abortion are less likely than their liberal peers to report knowing someone who has had one. People tend to hear those secrets about which they already approve and are less likely to hear secrets about which they disapprove. Secret keeping and selective disclosure intensify this experience of homophily above and beyond any objective network segregation.
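The mechanism rewards a back-of-the-envelope illustration. In the sketch below, all the numbers are mine, not Cowan's; it simply shows how an event can be more common and yet less often heard about when those who experience it tell fewer confidants:

```python
# Invented illustrative numbers (not from Cowan's paper): an event that
# is more common but more closely kept can be heard about less often.

N_CONFIDANTS = 20  # assume everyone has about 20 potential confidants

events = {
    # name: (prevalence among your contacts, confidants each teller tells)
    "abortion":    (0.25, 1),   # more common, kept more secret
    "miscarriage": (0.20, 4),   # less common, told more widely
}

for name, (prevalence, told) in events.items():
    # Chance a given contact had the event AND happened to tell you:
    p_hear_from_one = prevalence * told / N_CONFIDANTS
    # Chance you hear of at least one such event across your network:
    p_hear_any = 1 - (1 - p_hear_from_one) ** N_CONFIDANTS
    print(f"{name:12s} P(hear of at least one) = {p_hear_any:.2f}")
```

With these made-up rates, the more common event is heard about far less often, which is exactly the reversal Cowan documents.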
This is not a post about Ello. Because Ello is so last Friday. But the rapid rise of and backlash against upstart social media network Ello (if you haven’t been paying attention, see here, here, here) reminded me of something I was wondering a while back.
Lots of people are dissatisfied with Facebook — ad-heavy, curated in a way the user has little control over, privacy-poor. And it looks like Twitter, which really needs to bring in more revenue, is taking steps to move in the same direction: algorithmic display of tweets, with the ultimate goal of making users more valuable to advertisers.
The question is, what’s the alternative? There have been a lot of social-network flavors of the month, built on a variety of business models. Some of them, like Google Plus, are owned by already-large companies subject to the same business pressures as Facebook and Twitter. Others, like Diaspora (remember Diaspora?), were startups with an anti-Facebook mission (privacy, decentralization) that collapsed under the weight of their own hype.
I can’t imagine that a public utility model would work for a social network — I just don’t see “government-owned” and “fast-moving technological change” going together successfully. But I keep wondering why a Wikipedia model couldn’t work. Make it a 501(c)(3). Attract some foundation funding — it’s a pro-democracy project. Solicit gifts from pro-privacy people in the tech industry — there are lots of those. Then, once it’s off the ground, ask users for donations.
Sure, there is the huge, huge hurdle of getting enough of a network base to attract new users. But it seems like the costs should not be insane. If it only takes 200 employees to run Wikipedia, as large as it is, how many would it take to get a big social network off the ground? Facebook employs 7000, but a lot of them have to be in the business of figuring out how to sell Facebook.
Maybe there have been (failed) efforts like this and I just haven’t noticed. Or maybe the getting-the-user-base issue is really insurmountable. But it seems like if a real Facebook alternative is to emerge, it can’t just be from a corporate competitor (e.g. Google), and the startup/VC model (e.g. Ello) is going to be susceptible to all the same problems as it grows. Why not a different model?
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
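For those curious about the mechanics, the design is easy to caricature in code. The sketch below is my own schematic, not Facebook's code or the paper's exact procedure; the word lists and the omission rate are stand-ins:

```python
import random
import re

# Schematic of the design: randomly assign users to see fewer positive
# or fewer negative posts, then compare the tone of what they post next.
# Word lists and the 50% omission rate are invented stand-ins.

POSITIVE = {"happy", "great", "love"}
NEGATIVE = {"sad", "awful", "hate"}

def tone(post):
    words = set(re.findall(r"[a-z']+", post.lower()))
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filtered_feed(posts, condition):
    """Omit some posts of the targeted tone from this user's feed."""
    target = "positive" if condition == "fewer_positive" else "negative"
    return [p for p in posts if tone(p) != target or random.random() > 0.5]

condition = random.choice(["fewer_positive", "fewer_negative"])
feed = ["feeling happy today", "this is awful", "lunch was fine"]
print(condition, "->", filtered_feed(feed, condition))
# Outcome measure: the emotional word rate in the user's own later posts.
```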
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
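The "like" generalization is worth spelling out, since it explains the dead-people example. The snippet below is a caricature of the inference as that blog post describes it, not Facebook's actual system:

```python
# Caricature of the inference: one past "like" of an item from a source
# is treated as an endorsement of everything that source publishes later.

liked_items = {("alice", "CoolNews: cat video, 2011")}

def endorses(user, item, source):
    # Generalize any past like of this source to its new content.
    return any(u == user and i.startswith(source) for u, i in liked_items)

# Alice liked one CoolNews item years ago; she may since have died, yet
# her friends can be shown ads listing her as "liking" today's story.
print(endorses("alice", "CoolNews: today's story", "CoolNews"))  # True
```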
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.