orgtheory.net

Archive for the ‘technology’ Category

this is not a post about ello

This is not a post about Ello. Because Ello is so last Friday. But the rapid rise of and backlash against upstart social media network Ello (if you haven’t been paying attention, see here, here, here) reminded me of something I was wondering a while back.

Lots of people are dissatisfied with Facebook — ad-heavy, curated in a way the user has little control over, privacy-poor. And it looks like Twitter, which really needs to bring in more revenue, is taking steps to move in the same direction: algorithmic display of tweets, with the ultimate goal of making users more valuable to advertisers.

The question is, what’s the alternative? There have been a lot of social network flavors of the month, built on a variety of business models. Some of them, like Google Plus, are owned by already-large companies that would be subject to similar business pressures as Facebook and Twitter. Others, like Diaspora (remember Diaspora?), were startups with an anti-Facebook mission (privacy, decentralization), but collapsed under the weight of their own hype.

I can’t imagine that a public utility model would work for a social network — I just don’t see “government-owned” and “fast-moving technological change” going together successfully. But I keep wondering why a Wikipedia model couldn’t work. Make it a 501(c)3. Attract some foundation funding — it’s a pro-democracy project. Solicit gifts from pro-privacy people in the tech industry — there are lots of those. Then once it’s off the ground, ask users for donations.

Sure, there is the huge, huge hurdle of getting enough of a network base to attract new users. But it seems like the costs should not be insane. If it only takes 200 employees to run Wikipedia, as large as it is, how many would it take to get a big social network off the ground? Facebook employs 7000, but a lot of them have to be in the business of figuring out how to sell Facebook.

Maybe there have been (failed) efforts like this and I just haven’t noticed. Or maybe the getting-the-user-base issue is really insurmountable. But it seems like if a real Facebook alternative is to emerge, it can’t just be from a corporate competitor (e.g. Google), and the startup/VC model (e.g. Ello) is going to be susceptible to all the same problems as it grows. Why not a different model?

Written by epopp

September 30, 2014 at 2:21 pm

sample computer science/sociology syllabus

Loyal orgtheorista and sociologist Amy Binder has forwarded me the syllabus for a course at UC San Diego called Soc 211: Computational Methods in Social Science, taught by Edward Hunter and Akos Rona-Tas. The authors are working on a textbook, the course was open to a wide range of students, and it was supported by the Dean at UCSD. I heard people had a nerdy good time. Click here to read the soc211_syllabus.

50+ chapters of grad skool advice goodness: Grad Skool Rulz/From Black Power

Written by fabiorojas

September 23, 2014 at 12:01 am

why the facebook/okcupid conversation needs the history of technology

I’m in the Poconos this week with old college friends and only intermittently paying attention to the larger world. And I’m hesitant to opine about the latest in the world of online experimentation (see here, here, or here) because honestly, it’s not my issue. I don’t study social media. I don’t have deep answers to questions about the ethics of algorithms, or how we should live with, limit, or reshape digital practices. And plenty of virtual ink has already been spilled by people more knowledgeable about the details of these particular cases.

But I do want to make the case that it’s important to have this conversation at this particular moment. Here is why:

If there’s one thing the history of technology teaches us, it’s that technology is path-dependent, and as a particular technology becomes dominant, the social and material world develops along with it in ways that have a lasting impact.

The QWERTY story, in which an inefficient keyboard layout was created to slow down the users of jam-prone typewriters but long outlasted those machines, may be apocryphal. Perhaps a better example is streetcars.

Historian Kenneth Jackson, in the classic Crabgrass Frontier, showed how U.S. cities were first reshaped by streetcars. The new technology made it possible to commute some distance from home to work, and helped give dense, well-bounded cities a hub-and-spokes shape, with the spokes made up of rail lines.

Early in the 20th century, another new technology became widely available: the automobile. The car made suburbanization, in the American sense involving sprawl and highways and a turn away from center cities, possible.

But the car alone was not enough to suburbanize the United States. Jackson’s real contribution was to show how technological developments intersected with 1) cultural responses to the crowded, dirty realities of urban life, and 2) government policies that encouraged both white homeownership and white flight, to create the diffuse, car-dependent American suburbs we know and love. The two evolve together: technological possibilities and social decisions about how to use the technologies. As they lock in, both become harder to change — until the next disruptive technology (((ducking))) comes along.

So what does all this have to do with OKCupid?

The lesson here is that technologies and their uses can evolve in multiple ways. European cities developed very differently from American cities, even though both had access to the same transportation technologies. But there are particular moments, periods of transition, when we start to lock in a set of institutions — normative, legal, organizational — around a developing new technology.

We’re never going to be able to predict all the effects that a particular social decision will have on how we use some technology. Government support of racist red-lining practices is one reason for the white flight that encouraged suburbanization. But even if the 1930s U.S. mortgage policy hadn’t been racist, other aspects of it — for example, making the globally uncommon fixed-rate mortgage the U.S. norm — still would have promoted decentralization and encouraged the car-based suburbs. Some of that was probably unforeseeable.*

But some of it wasn’t. And I can’t help but think that more loud and timely conversation about the decisions and nondecisions the U.S. was making in the early decades of the 20th century might have led the country down a less car-dependent path. Once the decisions are made, though, they become very difficult to change.

Right now, it is 1910. We have the technology to know more about individuals than it has ever been possible to know, and maybe to change their behavior. We don’t know how we’re going to govern that technology. We don’t really know what its effects will be. But this is the time to talk about the possibilities, loudly and repeatedly if necessary. Maybe the effects of online experimentation will turn out to be harmless. Maybe just trusting that Facebook and OKCupid aren’t setting us on the wrong path will work out. But I’d hate to think that we unintentionally create a new set of freedom-restricting, inequality-reproducing institutions that look pretty lousy in a few decades just because we didn’t talk enough about what might — or might not — be at stake.

“That’s how websites work” isn’t a good enough answer. If 62% of Facebook users don’t even know there’s an algorithm, there’s plenty of room for conversation.

* There is a story that GM drove the streetcars out of business by buying up streetcar companies and then dismantling the streetcars. There are a number of accounts purporting to debunk this story. This version, which splits the difference (GM tried, but it wasn’t a conspiracy, and it was only one of several causes) seems knowledgeable, but I’d love a pointer to an authoritative source on GM’s role.

 

Written by epopp

July 29, 2014 at 7:32 pm

how much to quantify the self?

Over at Scatterplot, Jeremy’s been writing about his life gamification experiment, which involves giving himself points for various activities he’d like to be doing more of. I find this sort of thing totally compelling and have to admit I’m now giving myself all sorts of points in my head. (Finish unpacking one box — 5 points! Send an email I’ve been procrastinating on — 5 points!) Although not in 100 million years could I get my husband to play along with me, even for brunch, of which he is fond.

Anyway, the game brought to mind this post from Stephen Wolfram, in which Wolfram presents a bunch of data from the last 25 years of his life. Here, for example, are all the emails he’s sent since 1989. (Note the sharp time shift in 2002, when he stopped being completely nocturnal.) He’s also got keystroke data, times of calendar events, time on the phone, and physical activity.

[Figure from Wolfram’s post: a plot with a dot marking the time of each of the third of a million emails he has sent.]

Fascinating to read about, but perhaps not terribly healthy to pursue in practice. Although in Wolfram’s case, it sounds like he was mostly just collecting the data, not using it to guide his day-to-day decisions. Others become more obsessive. I don’t know if David Sedaris has really been spending nine hours a day walking the English countryside, a slave to his Fitbit, or if he’s taking poetic license, but it’s a heck of an image.

Clearly there are a lot of people into this sort of thing. In fact, there is a whole Quantified Self movement, complete with conferences and meet-up groups. One obvious take on this is that we’re all becoming perfect neoliberal subjects, rational, entrepreneurial and self-disciplined.

For me, though, what is fun and appealing as a choice — and I do think it’s a choice — becomes repellent and dehumanizing when someone pushes it on me. So while I’ll happily track my work hours and tally my steps just because I like to — and yes, I realize that’s kind of weird — I hate the idea of judging tenure cases based on points for various kinds of publications, and am uneasy with UPS’s use of data to ding drivers who back up too frequently.

It’s possible that I’m being inconsistent here. But really, I think it’s authority I have the problem with, not quantification.

Written by epopp

July 15, 2014 at 10:27 pm

on facebook and research methods

Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)

What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.

While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.

1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)

2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.

My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.

Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.

Written by epopp

June 29, 2014 at 3:00 am

twitter publics

The first “tweets/votes” paper established the basic correlation between tweet share and vote share in a large sample of elections. Now, we’re working on papers that try to get a sense of who is driving the correlation. A new paper in Information, Communication, and Society reports on some progress. Authored by Karissa McKelvey, Joe DiGrazia and myself, “Twitter publics: how online political communities signaled electoral outcomes in the 2010 US house election” argues that the tweet-votes correlation is strongest when people compose syntactically simple messages. In other words, the people online who use social media in a very quotidian way are a sort of “issue public,” to use a political science term. They tend to follow politics, and their talk correlates with the voting, especially if it is simple talk. We call this online audience for politics a “twitter public.” Thus, one goal of sociological research on social media is to assess when online “publics” act as a barometer or leading indicator of collective behavior.
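To make the basic measure concrete, here is a minimal sketch of the kind of calculation behind the tweets/votes correlation: compute each candidate’s share of within-race tweet mentions, compute their share of the vote, and correlate the two across candidates. The data frame, column names, and numbers below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: hypothetical data, not from the actual study.
import pandas as pd

# One row per candidate within a House race (made-up example numbers)
races = pd.DataFrame({
    "district":  ["IN-04", "IN-04", "OH-12", "OH-12"],
    "candidate": ["A", "B", "C", "D"],
    "tweets":    [1200, 800, 300, 700],        # tweets mentioning the candidate
    "votes":     [95000, 70000, 60000, 90000],
})

# Each candidate's share of tweet mentions and of votes within their race
races["tweet_share"] = races["tweets"] / races.groupby("district")["tweets"].transform("sum")
races["vote_share"] = races["votes"] / races.groupby("district")["votes"].transform("sum")

# Simple Pearson correlation between tweet share and vote share
print(races["tweet_share"].corr(races["vote_share"]))
```

The interesting step in the papers, of course, is asking who drives this correlation — which here would amount to recomputing it on subsets of messages, such as the syntactically simple ones.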

50+ chapters of grad skool advice goodness: From Black Power/Grad Skool Rulz 

Written by fabiorojas

March 18, 2014 at 12:01 am

world wide heavy metal

My good friend Jeffrey Timberlake and his MA student Adam Mayer have a forthcoming paper on the world wide diffusion of heavy metal (early version here). It is coming out in Sociological Perspectives:

The purpose of this paper is to explain the timing and location of the diffusion of heavy metal music. We use data from an Internet archive to measure the population-adjusted rate of metal band foundings in 150 countries for the 1991–2008 period. We hypothesize that growth in “digital capacity” (Internet and personal computer use) catalyzed the diffusion of metal music. We include time-varying controls for gross national income, political regime, global economic integration, and degree of metal penetration of countries sharing a land or maritime border with each country. We find that digital capacity is positively associated with heavy metal band foundings, but, net of all controls, the effect is much stronger for countries with no history of metal music prior to 1990. Hence, our results indicate that increasing global digital capacity may be a stronger catalyst for between-country than for within-country diffusion of cultural products.

My inner Beavis yearns to come out.

50+ chapters of grad skool advice goodness: From Black Power/Grad Skool Rulz 

Written by fabiorojas

March 10, 2014 at 12:05 am
