Archive for the ‘technology’ Category
I’m in the Poconos this week with old college friends and only intermittently paying attention to the larger world. And I’m hesitant to opine about the latest in the world of online experimentation (see here, here, or here) because honestly, it’s not my issue. I don’t study social media. I don’t have deep answers to questions about the ethics of algorithms, or how we should live with, limit, or reshape digital practices. And plenty of virtual ink has already been spilled by people more knowledgeable about the details of these particular cases.
But I do want to make the case that it’s important to have this conversation at this particular moment. Here is why:
If there’s one thing the history of technology teaches us, it’s that technology is path-dependent, and as a particular technology becomes dominant, the social and material world develop along with it in ways that have a lasting impact.
The QWERTY story, in which an inefficient keyboard layout was created to slow down the users of jam-prone typewriters but long outlasted those machines, may be apocryphal. Perhaps a better example is streetcars.
Historian Kenneth Jackson, in the classic Crabgrass Frontier, showed how U.S. cities were first reshaped by streetcars. Streetcars made it possible to commute some distance from home to work, and helped give dense, well-bounded cities a hub-and-spokes shape, with the spokes made up of rail lines.
Early in the 20th century, another new technology became widely available: the automobile. The car made suburbanization, in the American sense involving sprawl and highways and a turn away from center cities, possible.
But the car alone was not enough to suburbanize the United States. Jackson’s real contribution was to show how technological developments intersected with 1) cultural responses to the crowded, dirty realities of urban life, and 2) government policies that encouraged both white homeownership and white flight, to create the diffuse, car-dependent American suburbs we know and love. The two evolve together: technological possibilities and social decisions about how to use the technologies. As they lock in, both become harder to change — until the next disruptive technology (((ducking))) comes along.
So what does all this have to do with OKCupid?
The lesson here is that technologies and their uses can evolve in multiple ways. European cities developed very differently from American cities, even though both had access to the same transportation technologies. But there are particular moments, periods of transition, when we start to lock in a set of institutions — normative, legal, organizational — around a developing new technology.
We’re never going to be able to predict all the effects that a particular social decision will have on how we use some technology. Government support of racist red-lining practices is one reason for the white flight that encouraged suburbanization. But even if the 1930s U.S. mortgage policy hadn’t been racist, other aspects of it — for example, making the globally uncommon fixed-rate mortgage the U.S. norm — still would have promoted decentralization and encouraged the car-based suburbs. Some of that was probably unforeseeable.*
But some of it wasn’t. And I can’t help but think that more loud and timely conversation about the decisions and nondecisions the U.S. was making in the early decades of the 20th century might have led the country down a less car-dependent path. Once the decisions are made, though, they become very difficult to change.
Right now, it is 1910. We have the technology to know more about individuals than it has ever been possible to know, and maybe to change their behavior. We don’t know how we’re going to govern that technology. We don’t really know what its effects will be. But this is the time to talk about the possibilities, loudly and repeatedly if necessary. Maybe the effects of online experimentation will turn out to be harmless. Maybe just trusting that Facebook and OKCupid aren’t setting us on the wrong path will work out. But I’d hate to think that we unintentionally create a new set of freedom-restricting, inequality-reproducing institutions that look pretty lousy in a few decades just because we didn’t talk enough about what might — or might not — be at stake.
* There is a story that GM drove the streetcars out of business by buying up streetcar companies and then dismantling the streetcars. There are a number of accounts purporting to debunk this story. This version, which splits the difference (GM tried, but it wasn’t a conspiracy, and it was only one of several causes) seems knowledgeable, but I’d love a pointer to an authoritative source on GM’s role.
Over at Scatterplot, Jeremy’s been writing about his life gamification experiment, which involves giving himself points for various activities he’d like to be doing more of. I find this sort of thing totally compelling and have to admit I’m now giving myself all sorts of points in my head. (Finish unpacking one box — 5 points! Send an email I’ve been procrastinating on — 5 points!) Although not in 100 million years could I get my husband to play along with me, even for brunch, of which he is fond.
Anyway, the game brought to mind this post from Stephen Wolfram, in which Wolfram presents a bunch of data from the last 25 years of his life. Here, for example, are all the emails he’s sent since 1989. (Note the sharp time shift in 2002, when he stopped being completely nocturnal.) He’s also got keystroke data, times of calendar events, time on the phone, and physical activity.
Fascinating to read about, but perhaps not terribly healthy to pursue in practice. Although in Wolfram’s case, it sounds like he was mostly just collecting the data, not using it to guide his day-to-day decisions. Others become more obsessive. I don’t know if David Sedaris has really been spending nine hours a day walking the English countryside, a slave to his Fitbit, or if he’s taking poetic license, but it’s a heck of an image.
Clearly there are a lot of people into this sort of thing. In fact, there is a whole Quantified Self movement, complete with conferences and meet-up groups. One obvious take on this is that we’re all becoming perfect neoliberal subjects, rational, entrepreneurial and self-disciplined.
For me, though, what is fun and appealing as a choice — and I do think it’s a choice — becomes repellent and dehumanizing when someone pushes it on me. So while I’ll happily track my work hours and tally my steps just because I like to — and yes, I realize that’s kind of weird — I hate the idea of judging tenure cases based on points for various kinds of publications, and am uneasy with UPS’s use of data to ding drivers who back up too frequently.
It’s possible that I’m being inconsistent here. But really, I think it’s authority I have the problem with, not quantification.
Twitter is, well, a-twitter with people worked up about the Facebook study. If you haven’t been paying attention, FB tested whether they could affect people’s status updates by showing 700,000 folks either “happier” or “sadder” updates for a week in January 2012. This did indeed cause users to post more happy or sad updates themselves. In addition, if FB showed fewer emotional posts (in either direction), people reduced their posting frequency. (PNAS article here, Atlantic summary here.)
What most people seem to be upset about (beyond a subset who are arguing about the adequacy of FB’s methods for identifying happy and sad posts) is the idea that FB could experiment on them without their knowledge. One person wondered whether FB’s IRB (apparently it was IRB approved — is that an internal process?) considered its effects on depressed people, for example.
While I agree that the whole idea is creepy, I had two reactions to this that seemed to differ from most.
1) Facebook is advertising! Use it, don’t use it, but the entire purpose of advertising is to manipulate your emotional state. People seem to have expectations that FB should show content “neutrally,” but I think it is entirely in keeping with the overall product: FB experiments with what it shows you in order to understand how you will react. That is how they stay in business. (Well, that and crazy Silicon Valley valuation dynamics.)
2) This is the least of it. I read a great post the other day at Microsoft Research’s Social Media Collective Blog (here) about all the weird and misleading things FB does (and social media algorithms do more generally) to identify what kinds of content to show you and market you to advertisers. To pick one example: if you “like” one thing from a source, you are considered to “like” all future content from that source, and your friends will be shown ads that list you as “liking” it. One result is dead people “liking” current news stories.
My husband, who spent 12 years working in advertising, pointed out that this research doesn’t even help FB directly, as you could imagine people responding better to ads when they’re happy or when they’re sad. And that the thing FB really needs to do to attract advertisers is avoid pissing off its user base. So, whoops.
Anyway, this raises interesting questions for people interested in using big data to answer sociological questions, particularly using some kind of experimental intervention. Does signing a user agreement when you create an account really constitute informed consent? And do companies that create platforms that are broadly adopted (and which become almost obligatory to use) have ethical obligations in the conduct of research that go beyond what we would expect from, say, market research firms? We’re entering a brave new world here.
The first “tweets/votes” paper established the basic correlation between tweet share and vote share in a large sample of elections. Now, we’re working on papers that try to get a sense of who is driving the correlation. A new paper in Information, Communication, and Society reports on some progress. Authored by Karissa McKelvey, Joe DiGrazia and myself, “Twitter publics: how online political communities signaled electoral outcomes in the 2010 US house election” argues that the tweet-votes correlation is strongest when people compose syntactically simple messages. In other words, the people online who use social media in a very quotidian way are a sort of “issue public,” to use a political science term. They tend to follow politics, and their talk correlates with the voting, especially if it is simple talk. We call this online audience for politics a “twitter public.” Thus, one goal of sociological research on social media is to assess when online “publics” act as a barometer or leading indicator of collective behavior.
My good friend Jeffrey Timberlake and his MA student Adam Mayer have a forthcoming paper on the world wide diffusion of heavy metal (early version here). It is coming out in Sociological Perspectives:
The purpose of this paper is to explain the timing and location of the diffusion of heavy metal music. We use data from an Internet archive to measure the population-adjusted rate of metal band foundings in 150 countries for the 1991–2008 period. We hypothesize that growth in “digital capacity” (Internet and personal computer use) catalyzed the diffusion of metal music. We include time-varying controls for gross national income, political regime, global economic integration, and degree of metal penetration of countries sharing a land or maritime border with each country. We find that digital capacity is positively associated with heavy metal band foundings, but, net of all controls, the effect is much stronger for countries with no history of metal music prior to 1990. Hence, our results indicate that increasing global digital capacity may be a stronger catalyst for between-country than for within-country diffusion of cultural products.
My inner Beavis yearns to come out.
This coming August 15, Dan McFarland of Stanford University and I will host a conference on the new computational sociology at the Stanford campus. The goal is to bring together social scientists, informatics researchers, and computer scientists who are interested in how modern computation can be brought to bear on issues that are of central importance to sociology and related disciplines. Interested people should go to the following web site for information on registration and presentation topics. I hope to see you there.
three visiting fellowships on innovation at the Technische Universitat in Berlin – due Feb. 15, 2014
One of our orgtheory readers, Jan-Peter Ferdinand, forwarded a flier about a fellowship opportunity at the Technische Universität in Berlin, Germany. This sounds like a great opportunity for grad students and prospective post-docs who are studying innovation.
Here’s an overview:
The DFG graduate school “Innovation society today” at the Technische Universität Berlin, Germany, is pleased to advertise 3 visiting fellowships. The fellowships are available for a period of three months, either from April to June 2014 or October to December 2014.
The graduate school addresses the following key questions: How is novelty created reflexively; in which areas do we find reflexive innovation; and which actors are involved? Practices, orientations, and processes of innovations are studied in and between various fields, such as (a) science and technology, (b) the industrial and service sectors, (c) arts and culture, and (d) political governance, social planning of urban and regional spaces. More information about the graduate school can be found on our website: http://www.innovation.tu-berlin.de (click on the flag at the top of the page for an English version).
By following an extended notion of innovation, the graduate school strives to develop a sophisticated sociological view on innovation, which is more encompassing than conventional economic perspectives. Our doctoral students are currently undertaking a first series of case studies to promote a deeper and empirically founded understanding of the meaning of innovation in contemporary society and of the social processes it involves.
See this PDF (GW_Ausschreibung-2014) for more info, including deadline (Feb. 15, 2014) and application materials needed.
Michael Corey is a PhD candidate in sociology at the University of Chicago. This guest post explains his experiences working for Facebook, the world’s leading social networking website (as if you didn’t know that!).
Another Dispatch from Industry
Last summer I moved from Chicago to the Bay Area to work as a quantitative researcher at Facebook. I’d done six years in the PhD program at Chicago and left with drafts of all my dissertation papers but without a cohesive dissertation to turn in (three-paper dissertations aren’t exactly allowed). Six months at Facebook has been eye-opening and weird. Below I’ll try to give readers a feel for what it is like to go from an academic track to an industry job.
The FB Culture:
The culture at Facebook is really fun. I work at the main campus in Menlo Park, where a few thousand people work on the various FB platforms and the associated companies (Parse, Onavo, Instagram, etc.). My mother-in-law describes it as an Oxford College designed by Willy Wonka, which is pretty fair. The campus houses everything you need to reduce any external friction that would take you off-campus during the day [http://cnettv.cnet.com/barber-candy-shop-bank-among-deluxe-perks-facebook/9742-1_53-50153870.html]. It is pretty easy to drink the Kool-Aid about how great FB is, and I would imagine that it is hard to work here if you don’t. I wasn’t the biggest FB user when I started here, but having been off the site for a long time, I came to recognize how much I had missed by not being on it. For so many of my peers it is the only medium for communicating news, baby pictures, or cat memes to weak ties. Risk taking is encouraged and speed is considered a virtue.
university of chicago visit – everything you wanted to know about tweets and votes, but were afraid to ask
I will be a guest of the computational social science workshop at the University of Chicago this coming Friday. I will present a very detailed talk on the more tweets/more votes phenomenon called “Everything You Wanted to Know About the Tweets-Votes Correlation, but Were Afraid to Ask.” If you want to chat or hang out, please email me.
Refreshments will be served.
Jim Moody and I are writing an article on data visualization in Sociology. Here’s a picture that won’t be in the final version, but I like it all the same.
I keep hearing about the coming big data revolution. Data scientists are now using huge data sets, many produced through online interactions and media, that shed light on basic social processes. Big data sets, from sources like Twitter, Facebook, or mobile phones, give social scientists ways to tap into interactions and cultural output at a scale never before seen in social science. The way we analyze data in sociology and organizational theory is bound to change due to this influx of new data.
Unfortunately, the big data revolution has yet to happen. When I see job candidates or new scholars present their research, they are mostly using the same methods that their predecessors did, although with incremental improvements to study design. I see more field experiments for sure, and scholars seem more attuned to identification issues, but the data sources are fairly similar to what you would have seen in 2003. With a few notable exceptions, big data have yet to change the way we do our work. Why is that?
Last week Fabio had a really interesting post about brain drain in academia. One reason we might see less big data than we’d like is that the skills needed to handle this type of analysis are rare, and much of the talent in this area is finding that research jobs in the for-profit world are more lucrative and rewarding than what they’re being offered in academia. I believe that’s true, especially for the kinds of people who are attracted to data mining techniques. The other problem, though, I think, is that social scientists are having a hard time figuring out how to fit big data techniques into the traditional milieu of social science. Sociologists, for example, want studies to be framed in a theoretically compelling way. Organizational theorists would like scholars to use data that map on to the conceptual problems of the field. It’s not always clear in many of the studies that I’ve read and reviewed that big data analyses are doing anything new other than using big data. If big data studies are going to take over the field they need to address pressing theoretical problems.
With that in mind, you should really read a new paper by Chris Bail (forthcoming in Theory and Society) about using big data in cultural sociology. Chris makes the case that cultural sociology, a subfield that is obsessed with understanding the origins of and practical uses of meaning, is prime for a big data surge. Cultural sociology has the theoretical questions, and big data research offers the methods.
More data were accumulated in 2002 than in all previous years of human history combined. By 2011, the amount of data collected prior to 2002 was being collected every two days. This dramatic growth in data spans nearly every part of our lives from gene sequencing to consumer behavior. While much of these data are binary and quantitative, text-based data is also being accumulated on an unprecedented scale. In an era of social science research plagued by declining survey response rates and concerns about the generalizability of qualitative research, these data hold considerable potential. Yet social scientists – and cultural sociologists in particular – have ignored the promise of so-called ‘big data.’ Instead, cultural sociologists have left this wellspring of information about the arguments, worldviews, or values of hundreds of millions of people from internet sites and other digitized texts to computer scientists who possess the technological expertise to extract and manage such data but lack the theoretical direction to interpret their meaning in situ….[C]ultural sociologists have made very few ventures into the universe of big data. In this article, I argue inattention to big data among cultural sociologists is particularly surprising since it is naturally occurring – unlike survey research or cross-sectional qualitative interviews – and therefore critical to understanding the evolution of meaning structures in situ. That is, many archived texts are the product of conversations between individuals, groups, or organizations instead of responses to questions created by researchers who usually have only post-hoc intuition about the relevant factors in meaning-making – much less how culture evolves in ‘real time’ (note: footnotes and references removed).
Chris goes on to offer suggestions about how cultural sociology might use big data to address big theoretical questions. For example, he believes that scholars studying discursive fields would be wise to use big data methods to evaluate the content of such fields, the relationships between actors and ideas, and the relationships between different fields. Of course, much of the paper is about how to use big data analysis to enhance or replace traditional methods used in cultural sociology. He discusses how Twitter and Facebook data might supplement newspaper analysis, a fairly common method in cultural and political sociology. Although he doesn’t go into great detail about how you would do it, an implicit argument he makes is that big data analysis might replace some survey methods as ways to explore public opinion.
I continue to think there is enormous potential for using big data in the social sciences. The key for having it accepted more broadly is for data scientists to figure out how to use big data to address important theoretical questions. If you can do that, you’re gold.
In the More Tweets, More Votes paper, we established that Twitter share correlates with future Congressional election results (e.g., % of tweets that mention GOP in a district correlates with the GOP vote share in the district). The deeper question – why? We’ve got a working paper that suggests an answer: Twitter, in some respects, mimics conventional text, which means it is close enough to the grassroots. In other words, people are more likely to use technology if it resembles what they know – an idea going back to a classic paper by Kwon and Zmud.
We can tease out testable implications. Specifically, technologies that are more sophisticated will be less likely to correlate with mass politics. In other words, social media that are easy to use and rely mainly on pre-existing language skills are more likely to correlate with social trends than social media that require higher levels of technical skill.
We test this with our tweets/votes data. We measured three types of candidate tweet share – “free text,” @mentions, and #hashtags. Free text is the “people’s” method of tweeting, while @mentions and #hashtags are syntaxes that require more knowledge. The grassroots hypothesis implies free text mentions of candidates will have a stronger correlation with election outcomes than @mentions or #hashtags. The results? Free texts correlate (as per the original paper) but the others are not significantly different from zero. The picture says it all.
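The three categories boil down to how a candidate's name appears in a tweet. As a rough illustration (this is my own sketch, not the authors' actual coding scheme; the function name and regexes are invented), the distinction might be coded like this:

```python
# Hypothetical sketch of classifying how a tweet refers to a candidate.
# The regexes and category labels are illustrative, not the paper's code.
import re

def reference_style(tweet, name):
    """Return how `name` appears in `tweet`: @mention, #hashtag, or free text."""
    if re.search(r"@\w*" + re.escape(name), tweet, re.IGNORECASE):
        return "@mention"   # requires knowing the candidate's handle
    if re.search(r"#\w*" + re.escape(name), tweet, re.IGNORECASE):
        return "#hashtag"   # requires knowing Twitter hashtag syntax
    if re.search(re.escape(name), tweet, re.IGNORECASE):
        return "free text"  # plain language, the "people's" method
    return "no mention"

print(reference_style("Just voted for Smith!", "Smith"))  # free text
print(reference_style("Go @RepSmith!", "Smith"))          # @mention
```

The point of the paper's design is that only the last, syntactically simplest category tracks election outcomes.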
Stark result. The implication is profound for social scientific studies of social media. If your data requires distinctly Internet-based skills, it is less likely to speak to population-level trends. Sophistication is probably the mark of a connoisseur. Indeed, additional analysis of our data shows that @mention and #hashtag users are “intense” Internet users. For example, they have higher median follower counts and are more likely to be “verified” by Twitter.
Last week, it was revealed that the NSA collects important data about all Verizon phone calls and has access to the servers of most major Internet firms like Facebook and Google. Of course, this sort of behavior is exactly what civil rights activists had warned about for years.
But there is a deeper lesson – the Internet has made it remarkably easy for the Federal government to collect enormous amounts of information on many aspects of our lives. If the reports are to be believed, the Prism program, which allows the Feds to search Internet firms, costs only $20 million. I can’t imagine that downloading the Verizon data is much more expensive. When communication was mainly done through voice and paper, this simply was not possible at the same scale.
So it has come to this. The Internet gives us cheap and easy communication, but it also makes a low-cost copy of everything that third parties can hold onto, whether we like it or not. It is clear that the Courts, Congress, and the President aren’t in a rush to make sure that searches are done only with probable cause. As I type this, President Obama asserts that it’s OK because they can’t hear your calls; they just know who you are calling, all the time. It’s not clear to me that there is anything that will reverse the erosion of privacy in the Internet age.
Definition: Given an alphabet of X characters, a twiggle is the total number of possible tweets, or X^140. Since most English speakers will use the ASCII characters, one standard English twiggle is 95^140:
This was computed using the Online Long Number Calculator.
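For anyone who wants to reproduce it without an online calculator, Python's arbitrary-precision integers handle this directly (the variable names here are mine):

```python
# A "twiggle" for an alphabet of X characters is X**140
# (140 being the old tweet length cap).
ascii_printable = 95            # number of printable ASCII characters
twiggle = ascii_printable ** 140

print(len(str(twiggle)))        # 277 — the number has 277 decimal digits
```

Python computes the full integer exactly; a 277-digit number gives a sense of why an online long-number calculator was needed.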
Over at A Programmer’s Tale, “jewsin” argues that Facebook is a failure because it encourages people to post junk. Two clips:
I am signed into Facebook right now. At a quick glance, the entire list of posts on the first screen are irrelevant to me. If I scrolled down I can find 4 stories I actually care about, from a list of about 30. The most important page on Facebook has more than three-fourths of absolutely useless content.
Surprising. Facebook is a company with a very large number of talented people. They know a lot about me. Yet, their product looks like one of those spam filled mailboxes from the nineties.
Since everybody is on Facebook, one can expect that it will in some way mirror the behavior of society in general. In the real world however, people’s opinions only have a limited reach.
Facebook is godsent for people who love to talk, but have nothing to say. Here is a network that doesn’t care about originality or the quality of content. In the time it takes to create something original, they could share dozens of things.
I agree, but “jewsin” is missing the point. The point of Facebook isn’t to produce high quality content. It’s a tool for getting millions of people to divulge precious marketing data (however coarse) in exchange for creating a platform where they hang out and stalk their high school crushes. On that count, it’s a mind blowing success.
A number of people have asked me a very important question about the More Tweets, More Votes paper. Do relative tweet rates merely correlate with elections or is there is a causal link?
The paper itself does not settle the issue. The purpose of the paper is merely to document this striking correlation. Given that qualification, let me explain the argument from both sides and my priors.
- Correlation: Twitter is a passive record of how excited people are. If a candidate somehow garners the attention of the public, they get excited and start talking about it, which translates into a higher twitter presence.
- Causal: The unusual attention that a candidate attracts in social media sways undecided or weakly committed voters. In a sense, highly active twitter users are the “opinion leaders” of modern society.
My prior: 75% correlation, 25% cause. How would we tease out these arguments? For example, what variable could instrument the district-level tweet counts? It would be interesting to find out.
When people read our More Tweets, More Votes paper, they often wonder – where is the “sentiment analysis?” In other words, why don’t we try to measure whether a tweet is positive or negative? Joe DiGrazia, the lead author, addressed this in a recent interview with techpresident.com:
DiGrazia said the researchers were “kind of surprised” that they saw a correlation without doing sentiment analysis of the Tweets. “We thought we were going to have to look at the sentiment,” he said. He speculated that one reason for the correlation could be a so-called Pollyanna Hypothesis, “that people are more likely to gravitate toward subjects that they are positive about and are more likely to talk about candidates that they support.”
The idea is simply this: the frequency of speech is often a relatively decent approximation of how important people think that topic is relative to salient alternatives. If people say “Obama” a little more often than the competition, then it’s not unreasonable to believe that he is more favored. And you don’t need content analysis to suss that out.
Unit of analysis: US House elections in 2010 and 2012. X-Axis: (# of tweets mentioning the GOP candidate)/(# of tweets mentioning either major party candidate). Y-axis: GOP margin of victory.
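The measure underlying that figure is simple enough to sketch in a few lines. This is my own toy illustration, with invented data and function names, not the paper's actual pipeline:

```python
# Hypothetical sketch: district-level GOP tweet share vs. GOP vote margin.
# The districts and numbers below are invented for illustration.
from statistics import mean

def tweet_share(gop_mentions, dem_mentions):
    """Share of candidate mentions that name the GOP candidate."""
    total = gop_mentions + dem_mentions
    return gop_mentions / total if total else 0.5

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# toy districts: (GOP mentions, Dem mentions, GOP margin of victory)
districts = [(80, 20, 0.30), (55, 45, 0.05), (30, 70, -0.25)]
shares = [tweet_share(g, d) for g, d, _ in districts]
margins = [m for _, _, m in districts]
print(pearson(shares, margins))  # strongly positive in this toy data
```

The actual analysis controls for incumbency, partisanship, media coverage, and demographics, but the raw bivariate relationship is what the scatterplot shows.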
I have a new working paper with Joe DiGrazia*, Karissa McKelvey and Johan Bollen asking if social media data actually forecasts offline behavior. The abstract:
Is social media a valid indicator of political behavior? We answer this question using a random sample of 537,231,508 tweets from August 1 to November 1, 2010 and data from 406 competitive U.S. congressional elections provided by the Federal Election Commission. Our results show that the percentage of Republican-candidate name mentions correlates with the Republican vote margin in the subsequent election. This finding persists even when controlling for incumbency, district partisanship, media coverage of the race, time, and demographic variables such as the district’s racial and gender composition. With over 500 million active users in 2012, Twitter now represents a new frontier for the study of human behavior. This research provides a framework for incorporating this emerging medium into the computational social science toolkit.
The working paper (short!) is here. I’d appreciate your comments.
* Yes, he’ll be in the market in the Fall.
The interesting thing about technology is that early adopters tend to be very technical people. The average person who owned a computer in 1982 was probably educated and very interested in technology. A Popular Mechanics reader, if you will. Later, there is nothing remarkable about computer owners. Scientific literacy is not a precondition for computer use.
That leads me to a distinction: computer literacy vs. digital natives. The computer literate is someone who is steeped in the ways of computing. Not a professional engineer, but they approach a computer the way some people approach a car. It’s a machine: you can take it apart, make it do things, and so forth. The digital native is someone who is comfortable with computers because they grew up around them. They are consumers of computers, not builders. They know how to use computers, but they can’t really write code or otherwise command a computer. This isn’t necessarily a bad thing. It should be expected that when a technology is well diffused, it is easy to use and requires little training or knowledge.
Here’s Joel West giving a primer (at Berkeley) on open and user innovation.
I’m a sucker for nutty futurist speculations. So bear with me on this one.
A few nights ago I was watching Neal Stephenson’s talk on “getting big stuff done,” where he bemoans the lack of aggressive technological progress in the past forty or so years. There’s obviously some debate about this, though he makes some good points. He raises the question of why, for example, we haven’t yet built a 20km tall building despite the fact that it appears to be technologically very feasible with extant materials. Nutty. But an interesting question. From a sci-fi writer.
Stephenson ends his talk on an organizational note and asks:
What is going on in the financial and management worlds that has caused us to narrow our scope and reduce our ambitions so drastically?
I like that question. Even if you think that ambitions have not been lowered, I think all of us would like to see the big problems of the world addressed more aggressively. (Unless one subscribes to the Leibnizian view that we live in the “best of all possible [organizational] worlds.”) Surely organization theory is central to this, particularly in cases where the technologies and solutions for big problems seemingly already exist – it is the social technologies and organizational solutions that appear to be sub-optimal. So, how can more aggressive forms of collective action and organizational performance be realized? I don’t see org theorists really wrestling with these types of questions, at least not systematically. It would be great to see some wide-eyed speculation about the organizational forms and theories that might facilitate more aggressive technological, social, and human progress.
I can see several reasons why organization theorists don’t engage with these types of “futurist” questions. First, theories of organization tend to lag practice: organizational scholars describe and explain the world in its current or past state, but they don’t often engage in speculative forecasting about possible future states. Second, many of the organizational sub-fields suited for wide-eyed speculation are in a bit of a lull, or represent small niches. For example, organization design isn’t a particularly “hot” area these days (with exceptions, certainly), despite its obvious importance. Institutional and environmental theories of organization have taken hold in many quarters, while agentic theories are often seen as naive. Environmental and institutional theories are of course valuable, but they are delimiting and incremental, perhaps even self-fulfilling, and thus may not always be practically helpful for thinking about the future.
That’s my (very speculative) two cents.
I’ve been reading up on intellectual property of late. Here are some sources worth perusing and reading (some of them can be downloaded for free), along with some interviews and clips.
- Boldrin, M. and Levine, D. (2008). Against Intellectual Monopoly. Cambridge University Press. (You can download all the chapters on the website.)
- Boyle, J. (2008). The Public Domain: Enclosing the Commons of the Mind. Yale University Press. (Here’s a short lecture based on the book.)
- Cohen, J. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press. (Here’s the open version, and a lecture at Berkman.)
- Johns, A. (2010). Piracy: The Intellectual Property Wars from Gutenberg to Gates. University of Chicago Press. (Here’s a C-SPAN interview.)
- Lessig, L. (2001). The Future of Ideas: The Fate of the Commons in a Connected World. Random House.
- Merges, R. (2011). Justifying Intellectual Property. Harvard University Press.
- Zemer, L. (2007). The Idea of Authorship in Copyright. Ashgate Publishing.
Interestingly, there isn’t really any sociology of intellectual property that I am aware of (feel free to correct me). Several of the above scholars do call for increased dialogue between law and the social sciences (e.g., Julie Cohen), but this seems to be a relatively nascent area.
There is of course the “social construction” argument (e.g., that authorship or ownership is a myth)—a favorite argument of mine (e.g., see Beethoven and the Construction of Genius)—or the ubiquitous and tired references to “networks” (help!), but it seems that there is much opportunity in this space.
I’m sort of intrigued by the various innovations emerging from the Occupy Wall Street movement (I posted at strategyprofs about some of the tech ones, specifically apps).
One of the cooler, more low-tech innovations (ok, ok, these have been around for a long time – but still) is the use of the “human microphone” – note that the wiki entry was initiated just two weeks ago. Occupy also has its own hand signals (and, check out the hand signals for consensus decision-making). Cool. Twinkles.
Here’s a hand signal tutorial:
[link via David Lazer]
Twitter is getting lots of interest from social scientists. Here’s a piece from the current issue of Science about how “social scientists wade into the tweet stream” (the figure below is from this article). And here’s an NPR piece on a forthcoming Science article by Macy and Golder on affect and mood on Twitter.