Archive for the ‘research’ Category
For seventy-five years, Harvard University has conducted a longitudinal study of 268 men, beginning with Harvard undergraduates in 1938. It’s an attempt to learn, in detail, about the factors that might contribute to a good life. Business Insider has a nice summary of a new book, Triumphs of Experience, that presents the results of the study. A few take-home points:
- “Alcoholism is a disorder of great destructive power.” Alcoholism was the main cause of divorce between the Grant Study men and their wives; it was strongly correlated with neurosis and depression (which tended to follow alcohol abuse, rather than precede it); and—together with associated cigarette smoking—it was the single greatest contributor to their early morbidity and death.
- Above a certain level, intelligence doesn’t matter. I assume that it doesn’t matter for the types of life course outcomes social scientists measure (employment, health, happiness, marriage).
- Relationships matter, a lot: “Men who had ‘warm’ childhood relationships with their mothers earned an average of $87,000 more a year than men whose mothers were uncaring,” and “Late in their professional lives, the men’s boyhood relationships with their mothers—but not with their fathers—were associated with effectiveness at work.”
- Dad matters as well: “warm childhood relations with fathers correlated with lower rates of adult anxiety, greater enjoyment of vacations, and increased “life satisfaction” at age 75—whereas the warmth of childhood relationships with mothers had no significant bearing on life satisfaction at 75.”
The formula for a good life: no alcohol or smoking; be nice to people, especially your kids; and you’re probably good enough to get what you want out of life.
Last Friday, I attended a talk by Sarah Babb of Boston College. In her talk, titled “Beyond the Horror Stories: Non-Experimental Social Researchers’ Encounters with Institutional Review Boards (IRB),” Babb presented findings about misconceptions regarding federal guidelines for human subjects research. Contrary to what some IRBs demand from principal investigators (PIs) undertaking qualitative research, the federal guidelines do not require:
- signed consent from a low risk population
- an institutional research permission slip
To repeat, the above two are “not in federal regulations at all.”
Babb noted that at larger institutions, IRBs often involve nonprofessionals – that is, those who lack appropriate professional expertise – in decision-making about proposals. Moreover, qualitative research doesn’t fit well into the one-size-fits-all medical template often used to vet research proposals. Compounding these challenges is the lack of accountability regarding IRBs’ responsibilities to PIs. Only 20% of the IRBs that Babb examined had an appeals procedure that would allow PIs to contest decisions.
Not surprisingly, this talk evoked spirited discussion of the myriad problems encountered by researchers going through the IRB process at their institutions, as well as the unintended consequences of a review process ostensibly intended to protect human subjects. The audience noted the following unintended and undesired consequences: (1) normalized deviance,* (2) a chilling effect upon the types of research undertaken, and (3) mission creep, in which IRBs critique the suitability or worth of the research design rather than evaluating risk to human subjects. In particular, senior researchers worried that tenure-track faculty and graduate students face great uncertainty about whether their project proposals will successfully navigate the IRB process in a timely fashion.
Audience members asked whether the sociologists’ professional association, the American Sociological Association (ASA), had taken an official position on IRB guidelines. None present were aware of any such activities (if you know of anything brewing from this or other associations, do write them in the comments). Attendees noted that because a tenured faculty member may be more able to surmount IRB issues on his/her own (or not need to go through the IRB process because of the type of research conducted), fashioning IRB standards that are more appropriate for a wider variety of research methods is a collective action problem.
I opined that these identified problems need to be considered a commons issue. Those with more power should consider it a professional responsibility to help budding researchers – undergraduate students, graduate students, junior faculty – go through an IRB process that is appropriate to their research methods and questions, especially if researchers hope to have future generations of audiences and colleagues. Unfortunately, dark humor may not be sufficient to get the point across – when a psychology colleague sent his IRB board a proposal to reproduce the Stanley Milgram experiment on April Fool’s Day, an IRB staffer called to inquire if the proposal was serious.
* One of my past posts discussing the IRB draws a steady stream of traffic from those searching for the answer to one of the quiz questions on the online Collaborative Institutional Training Initiative (CITI), a certification program mandatory for researchers and students at some institutions.
Jenn Lena broke the news before I could. I’ll add my excitement and say that creating an open source sociology journal with a fast and limited review process that allows online comments and community engagement is something that needed to happen. And it IS happening. In Fall 2013 you can submit your papers to Sociological Science and, if you get through the evaluation process, you can see your paper published within months of submission. One of the most exciting aspects of the journal is how reviews work. Rather than forcing authors to go through months (or years) of agonizing back-and-forth with reviewers, the editors will make an up-or-down decision based on an initial review. The reviews will be evaluative, not developmental. Once published, readers can respond to articles and “challenge or extend other people’s work.” Publication will be continuous, and so as soon as your article has been accepted and edited, it will go online as a published article.
I think the journal is going to fill an important niche in sociology. I hope that one consequence of the journal will be to pressure other journals to speed up the process and to make publications more interactive. It’s still too early to tell how the journal will fare in attracting high-quality papers. I sincerely hope that people will send some of their best stuff to the journal. If they do, then I wonder what consequence this will have for the vast set of secondary/specialist journals in our field. Journals like Social Forces and Social Problems will be those most likely to take hits.
I have a bleg. What do you think are the best organizational theory papers published in a sociology or management journal in 2012? I’m on a nominations committee and I don’t want to miss anything. Let me know what you think in the comments.
Unit of analysis: US House elections in 2010 and 2012. X-Axis: (# of tweets mentioning the GOP candidate)/(# of tweets mentioning either major party candidate). Y-axis: GOP margin of victory.
I have a new working paper with Joe DiGrazia*, Karissa McKelvey and Johan Bollen asking if social media data actually forecasts offline behavior. The abstract:
Is social media a valid indicator of political behavior? We answer this question using a random sample of 537,231,508 tweets from August 1 to November 1, 2010 and data from 406 competitive U.S. congressional elections provided by the Federal Election Commission. Our results show that the percentage of Republican-candidate name mentions correlates with the Republican vote margin in the subsequent election. This finding persists even when controlling for incumbency, district partisanship, media coverage of the race, time, and demographic variables such as the district’s racial and gender composition. With over 500 million active users in 2012, Twitter now represents a new frontier for the study of human behavior. This research provides a framework for incorporating this emerging medium into the computational social science toolkit.
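The paper’s headline measure is easy to state in code. Below is a minimal sketch, with invented district counts (nothing here reproduces the paper’s 406 districts, its controls, or its actual estimates): compute the Republican share of candidate-name mentions per district, then fit a simple bivariate line against the GOP vote margin.

```python
# Hypothetical illustration of the tweet-share predictor. All district
# numbers below are invented; the paper uses real FEC and Twitter data.
import numpy as np

def tweet_share(gop_mentions, dem_mentions):
    """Fraction of candidate-name tweets mentioning the GOP candidate."""
    total = gop_mentions + dem_mentions
    return gop_mentions / total if total else 0.5  # neutral value if no tweets

# toy districts: (GOP mentions, Dem mentions, GOP margin in pct points)
districts = [(900, 100, 18.0), (600, 400, 4.0),
             (300, 700, -9.0), (150, 850, -21.0)]

x = np.array([tweet_share(g, d) for g, d, _ in districts])
y = np.array([m for _, _, m in districts])

# bivariate OLS: margin = a + b * share (polyfit returns slope first)
b, a = np.polyfit(x, y, 1)
print(f"slope={b:.1f}, intercept={a:.1f}")
```

A positive slope on the toy data mirrors the paper’s correlational claim; the actual analysis adds controls for incumbency, partisanship, media coverage, and demographics.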
The working paper (short!) is here. I’d appreciate your comments.
* Yes, he’ll be on the market in the Fall.
In this last post, I’ll discuss why I fundamentally disagree with the argument presented in Reinventing Evidence. There are two reasons. First, I agree with Andrew Perrin that Biernacki wants us to embrace a textual holism. One of Biernacki’s major arguments is that by isolating a single word, or passage, we lose the entire meaning of the text. Thus, interpretation is the only valid approach to text; coding and quantification are invalid. Perrin points out that lots of things can be isolated. For example, if I see the n-word, I can say that, on average, the text is employing racist language.
Second, Biernacki does not seem to consider cultural competence. In other words, human beings can often reliably capture the meaning of utterances made by other humans from the same cultural group. Of course, I am talking about things like everyday speech or short and simple writings like newspaper articles. More complex texts, like novels, will have networks or dense layerings of meaning that go beyond a human’s native capacity for communication. These probably could be coded, but it would require intense training and an elaborate theory of text, which sadly we don’t have in sociology. But my major point remains: there’s a lot of fairly simple text that can be coded. If you believe that people can accurately convey the meaning of a text or label some aspect of it because they are “native speakers” of the culture, then coding is a valid thing to do. To believe otherwise is to assume a world of solipsistic culture where every act of utterance requires a stupendous level of interpretation on the part of the audience.
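If “native speakers” of a culture really can label simple texts reliably, that reliability is itself checkable. Here is a minimal sketch (the coder labels are invented for illustration) of Cohen’s kappa, a standard chance-corrected measure of agreement between two coders:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement if each coder labeled at their own marginal rates
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# two coders labeling ten short texts as 'racist' (R) or 'neutral' (N)
a = list("RRNNRNNNRR")
b = list("RRNNRNNNNR")
print(round(cohens_kappa(a, b), 2))  # → 0.8
```

Here the coders agree on 9 of 10 labels, which after correcting for chance yields kappa = 0.8; values near zero would mean agreement is no better than guessing, which is roughly what Biernacki’s position predicts even for simple texts.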
So, to wrap things up: I give Biernacki credit for making us think hard about the quality of coding, which is often lacking. The point that science is presented through ritual is fair, but it doesn’t address whether a particular procedure produces valid measurement or inference. And I think that the view that texts are essentially uncodable is in error.
I am not sure what this accomplishes, but some journalists are trying to get further records from the University of Central Florida, where the editor of Social Science Research works. From the website of activist and author John Becker:
Despite the wide reach of the New Family Structures Study, much about the process by which it was peer-reviewed and published by the journal Social Science Research remains unknown. We know that the timetable was extraordinarily compressed — according to data from the University of Texas and SSR, Regnerus submitted his paper 20 days before the end of the data collection period and 23 days before the data file was delivered to the university. Sounds fishy, doesn’t it? And the entire process, including the paper’s initial submission, review, revision, and acceptance, took place within six weeks. But why? What are the reasons for moving so quickly? Did Regnerus just catch a lucky break, or is there more to the story? We already know that his funders had an anti-gay agenda and the study itself was plagued by troubling conflicts of interest; were the peer review and publication processes similarly compromised?
Last month, I filed a Freedom of Information Act request with the University of Central Florida, which houses Social Science Research, seeking public records relating to the peer review and publication of the New Family Structures Study. My goal is simply to discover the truth: whether everything was above board and best practices and ethical standards were followed, or whether something more sinister occurred. The documents I requested from UCF may help to answer these important questions.
This Spring, our book forum will address Richard Biernacki’s Reinventing Evidence in Social Inquiry: Decoding Facts and Variables. In this initial post, I’ll describe the book and give you my summary judgment. Reinventing Evidence, roughly speaking, claims that numerically coding extended texts is a very, very bad idea. How bad? It is soooo bad that sociologists should just stop coding text and abandon any hope of providing a quantitative or numerical coding of texts or speech. It’s all about interpretation. This is an argument that, if accepted, would prevent a much-needed integration of the different approaches to sociology, and it deserves a serious hearing.
In support of this point, Biernacki does a few things. He makes an argument about how coding text lacks validity (i.e., associating a number with a text does not correctly measure what we want it to measure). Then he spends three chapters going back to well-known studies that use content analysis and argues, at varying points, that the coding is misleading, obviously incorrect, or that there was no consistent standard for handling the text or the data.
As a proponent of mixed methods, I was rather dismayed to read this argument. I do not agree that coding text is a hopeless task and that we should retreat into the interpretive framework of the humanities. There seem to be regularities in speech, and other text, that make us want to group them together. If you accept that statement, then it follows that a code can be developed. So, on one level, I don’t buy into the main argument of the book.
At a more surface level, I think the book does some things rather well. For example, the meat of the book is in replication, which many of us, like Jeremy Freese, have advocated. Biernacki goes back and examines a number of high profile publications that rely on coding texts and finds a lot to be desired.
Next week, we’ll get into some details of the argument. Also, please check out our little buddy blog, Scatterplot. Andrew Perrin will be discussing the book and offering his own views.
As I posted earlier, I’ll be presiding over a conversation between George Ritzer and Carmen Sirianni from 3:30-5pm on Friday, March 22, 2013 at ESS in the Whittier Room (4th floor) of the Boston Park Plaza hotel.
In the past several years, disasters like Hurricanes Sandy and Katrina have sparked growing interest in what both conventional and innovative organizations can (and cannot) do under conditions of uncertainty versus certainty. Both featured scholars’ work covers the limits of particular organizing practices (e.g., Ritzer’s work on McDonaldization), as well as the potential of organized action (e.g., Sirianni’s work on collaborative governance). Thus, I’ve given this particular conversation the broad title “Organizations and Societal Resilience: How Organizing Practices Can Either Inhibit or Enable Sustainable Communities.”
What would you be interested in hearing Ritzer and Sirianni discuss about organizations and society? Please put your questions or comments in the discussion thread.
For those unfamiliar with Ritzer and Sirianni, here is some background about their work:
George Ritzer is best known for his work on McDonaldization and more recently, the spread of prosumption in which people are both producers and consumers.
J. Mike Ryan’s interview of Ritzer about his McDonaldization work:
J. Mike Ryan’s interview of Ritzer about why we should learn about McDonaldization (corrected link):
Carmen Sirianni is known for his work on democratic governance.
A brief video of Sirianni arguing that citizens should be “co-producers” in building society.
A more extensive video of Sirianni presenting on his book Investing in Democracy: Engaging Citizens in Collaborative Governance (Brookings Press, 2009).
Over a week ago, a colleague called to let me know that our advisor, Harvard Prof. J. Richard Hackman, had passed. For months, I knew that this news would eventually come, but it’s still painful to accept. I will miss hearing Richard’s booming voice, having my eyeglasses crushed to my face from a bear hug (Richard was well over 6 feet tall), or being gleefully gifted with a funny hand-written note imparting his sage advice on a matter.
Richard was a greatly respected researcher of work redesign and teams. At Harvard, his classes included a highly popular course on teamwork (despite its “early” morning time slot). For the undergraduate and graduate students lucky enough to take it, the course interwove concept and practice: students had to work in teams, something that most of us get very little practice with outside of organized sports or music.
In July 2012, Richard emailed several of his former teaching fellows asking us to join him in Cambridge and help him rework this course. On short notice, we assembled at the top floor of William James Hall and went over the materials, with Richard expertly leading us as a team, with clearly designated boundaries (those of us assembled for the task), a compelling direction (revising the material to attract students across disciplines), enabling structure (norms that valued contributions of team members, no matter their place in the academic hierarchy), and a supportive context (reward = tasty food, an incentive that always works on former graduate students, and good fellowship).
During this last meeting, Richard asked us about how we thought his course on teamwork could most impact individuals. I opined that his biggest impact wouldn’t be through just the students who took his course, but via those of us who would continue to teach teamwork and conduct research in other settings. This question may have been Richard’s gentle way of telling us that he was passing on the baton.
Here are several ways that I think Richard’s legacy lives on.
Recently, at a faculty meeting of professors and graduate students from several disciplines, discussion turned to the IRB’s interpretation of human subjects guidelines and the implications for students’ efforts to document phenomena for class assignments. Participants pointed out a variety of problems, including changes over the years in IRB decisions about whether results of projects could be publicly shared – in this case, whether students’ video-recorded interview of a retired elected official could be publicly shared under today’s IRB guidelines. Faculty and graduate students also described delays in getting feedback from their IRBs, raising concerns about how the lack of accountability on the part of some IRBs increases the uncertainty of planning class research, students’ timely graduation, and faculty productivity.
At orgtheory, we’ve discussed how researchers face challenges concerning the IRB here and here. Although the IRB offers detailed guidelines that can protect human subjects in medical research, it is less clear how IRB and human subjects concerns apply to the conduct of qualitative research, particularly organizational ethnography.
Several recent publications offer researchers’ experiences with these issues.
Is there any relationship between accusations of corporate deviance and the diffusion of new practices? My coauthor, Ed Carberry, and I think so. In a new paper that just came out in the Journal of Management Studies, we show that firms began using stock option expensing, a practice that executives and boards used to see as quite problematic and undesirable, after a series of scandals rocked the corporate world in the early 2000s, causing firms to look for new ways to restore their credibility. Stock option expensing became a tool that companies could use to distance themselves from the stigma associated with corporate scandal. Our analyses show that firms facing media scrutiny around claims of corporate fraud and firms that were targets of shareholder activism around corporate governance were much more likely to adopt stock option expensing. Firms that faced both intense media scrutiny and shareholder activism were especially likely to adopt the practice. We argue that in the period directly following the Enron scandal, stock option expensing came to be seen as an impression management tactic that firms could use to restore confidence in their accountability to the public.
The title of the paper is “Defensive Practice Adoption in the Face of Organizational Stigma: Impression Management and the Diffusion of Stock Option Expensing.” You can download the paper on my website. Here is the abstract.
Although most diffusion research focuses on firms adopting new practices to maintain their legitimacy, this paper examines a setting in which firms adopted a controversial practice to defend themselves against accusations relating to corporate deviance. We argue that understanding defensive adoption requires attending to both the dynamics of organizational stigma and impression management, and test our theoretical claims by analysing the diffusion of an accounting practice, stock option expensing (SOPEX), following the Enron scandal. We first provide evidence that the media and shareholder activists transformed the practice into a defensive device by theorizing it as a solution to problems relating to corporate fraud and corporate governance. Using event history analysis, we then show that corporations that became targets of stigma-inducing threats were more likely to adopt SOPEX and that the media were a key force channeling these threats.
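The event history analysis itself is beyond a blog post, but the basic data structure behind it – firm-years at risk of adoption, with hazards compared across exposure to stigma-inducing threats – can be sketched with invented data (the real models add controls and time dependence):

```python
# Toy discrete-time adoption setup. All firm-years below are invented.
# Each record: (firm, year, faced_media_scrutiny, adopted_this_year).
# Firms leave the risk set once they adopt, so no rows follow an adoption.
firm_years = [
    ("A", 2002, True,  False), ("A", 2003, True,  True),
    ("B", 2002, False, False), ("B", 2003, False, False),
    ("C", 2002, True,  True),
    ("D", 2002, False, False), ("D", 2003, False, True),
]

def hazard(records, scrutinized):
    """Adoptions per firm-year at risk, within one exposure group."""
    at_risk = [r for r in records if r[2] == scrutinized]
    return sum(r[3] for r in at_risk) / len(at_risk)

print(hazard(firm_years, True), hazard(firm_years, False))
```

In this toy data the scrutinized firms adopt at a much higher yearly rate than the unscrutinized ones, which is the pattern (suitably controlled) that the paper reports for SOPEX.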
Academy of Management Review (AMR) has a call out for a special issue on “communication, cognition and institutions.” Be sure to check it out.
Former orgtheory guester Eero Vaara is involved as an editor, along with John Lammers, Joep Cornelissen, Rodolphe Durand and Peer Fiss. Submissions are due July 2013.
The most recent Nature features an article by a team of political scientists and network scholars who did an experiment using Facebook to show that strong ties influenced voting behavior in the last election. You may say, so what? We’ve known for a long time that social influence operates through strong ties in interpersonal networks. That’s not a new insight. But I think the study is innovative for a couple of reasons. The first is that the impact of using direct messaging through Facebook was substantively significant – that is, just messaging people reminders to go out and vote increased the likelihood that the person would vote – but the larger effect was transmitted indirectly via social contagion. Consider the setup of the experiment.
What makes a study interesting? Is it the empirical phenomena that we study or is it the theoretical contribution? For those of you who are really paying attention (and I applaud you if you are), you’ll notice that I’ve asked this question before. It’s become a sort of obsession of mine. For the field of organizational theory, it’s an important discussion to have, although it’s not one that will likely yield any consensus. Scholars tend to have very strong opinions about this. Some people feel that as a field we’ve fetishized theory to the point of making our research inapplicable to the bigger world we live in. Others claim that by making “theoretical contribution” such a key component of any paper’s value, we ignore really important empirical problems. But in contrast, some scholars maintain that what makes our field lively and essential is that we are linked to one another (and across generations) via a stream of ideas that constitute theory. What makes an empirical problem worthy of study is that it can be boiled down to a crucial theoretical problem that makes it generalizable to a class of phenomena and puzzles.
At this year’s Academy of Management meetings, I was involved in a couple of panels where this issue came up. It was posed as a question, should we be interested in problems or theory? If we are interested in studying problems, we shouldn’t let theoretical trends bog us down. We should just study whatever real world problems are most compelling to us. If we’re interested primarily in theory, we need to let theory deductively guide us to those problems that help us solve a particular theoretical puzzle. Some very senior scholars in the field threw their weight behind the former view. I don’t want to name any names here, but one of the scholars who suggested we should be more interested in real-world problems is now the editor of a major journal of our field. He offered several examples of papers recently published in that journal that were primarily driven by interesting observations about empirical phenomena.
One of the new assistant professors in the crowd threw a pointed objection to the editor. And I paraphrase, “This all sounds great. I’d love to study empirical problems, but reviewers won’t let me! They keep asking me to identify the theoretical gap I’m addressing. They demand that I make a theoretical contribution.” Good point, young scholar. Reviewers do that a lot. We’ve had it drilled into us from our grad school days that this is what makes a study interesting. If the paper lacks a theoretical contribution, reject it (no matter how interesting the empirical contribution may be)! This is a major obstacle, and I don’t think the esteemed editor could offer a strong counter-argument to the objection. Editors, after all, are somewhat constrained by the reviews they get. I think what we need is a new way to think about what makes a study valuable. We need new language to talk about research quality.
Finally got around to reading the Regnerus SSR article. A few comments:
1. Up front – my politics: I am against laws that distinguish between people based on sexual orientation. I also believe that people should be tolerant of many sexual orientations, not just heterosexuality.
2. My prior scientific belief: I believe that sexual orientation may be correlated with family outcomes in positive or negative ways, just as there might be differences between other groups (e.g., Latinos may be better or worse parents according to some measure).
3. My prior legal belief: The presence of group differences doesn’t entail policy change unless the differences are extreme. For example, we might discover that alcoholics are worse parents, but I would be against a law that banned alcoholics from having children. In other words, I can believe that some group (e.g., gays) may be better or worse parents than other groups, but that doesn’t mean we should discriminate.
4. What I’ve been told: I am not a family sociologist, but multiple people have told me that prior research tends to find little or no difference between children of straight parents and gay parents. They could be wrong and I’d be willing to update my belief if a sufficiently strong study came out.
5. The actual Regnerus study contains a modest, but interesting, result. According to multiple measures, people seem to be worse off if they had a parent who had same-gender sexual relations. This isn’t surprising given that most reported two-person families were mixed gender. That suggests that same-sex contacts were outside the family. In other words, the study measured the “Larry Craigs” of the world. I am not shocked that their children may be worse off in some way.
6. The issue, to me, seems to be the claim that the data provide evidence against same-sex marriage. First, as even the author admits, there are very few people who reported two parents of the same gender (17, to be exact). Second, there is a severe selection effect. Most of the survey respondents grew up when same-sex marriage was illegal, thus preventing what might be the equilibrium state in an environment where same-gender marriage is legal. To be blunt, the gay people who set up families a decade or more ago are not the same people who might set up families in the current environment.
7. There was a petition asking the Social Science Research editor to explain how this paper went through the review process. As an author whose papers have gotten stuck for *years* at a time, I was shocked to learn that it went through in a matter of weeks.
8. Critics claim that the outcry was a “witch hunt” (see the Scatter discussion). That’s a vague and charged term, so I will ignore it. But a few things are safe to say. Science is built on skepticism. If a paper comes out claiming that all previous work on the topic was flawed and produces a controversial result, it is normal for people to ask questions. The proper response is to provide an explanation of how the research was conducted, not to accuse people of a “witch hunt.” No one is asking that anyone be fired or banned from doing sociology. The petitioners are merely asking, “why was this published?”
9. There is some truth to the charge that the outcry is political. Consider a thought experiment, what if a researcher produced a flawed article that supported a liberal policy? Has there been an equivalent level of outcry against bad research that supports liberal points of view? This doesn’t mean that the Regnerus critics should stop. They were right to ask questions. Rather, it means that we should bring the same skepticism to all research, regardless of policy implications. Liberals and conservatives should be equally fearful of sociology’s methodology police.
10. Darren “BMX” Sherkat was asked by the SSR editor to do an audit of the paper and its review process. Personally, I think this is excessive. The editor, James Wright, is an accomplished scholar and likely knew that the paper would be controversial, even used as ammunition in a political dispute. We give editors great leeway. They may agree with reviewers or override them. He chose to publish this paper after getting some feedback, which is normal. Darren found that the reviewers had some connection with the author. This isn’t always bad. I’m sure that the reviewers of some of my rejected papers know me personally, and that I’ve rejected the papers of people who I like and admire. The bottom line is that James Wright is an adult and sociology is a contact sport.
11. Bottom line: I think this is a modest paper that presents an intuitive result. If one of your parents is gay but is with a different-gender partner, then the kids may be worse off. A family where there is a severe mismatch in orientation between parents is likely to be stressful, to say the least. At best, the paper would have to be severely rewritten so that the text matches the results. At worst, as Darren notes, the paper should just be rejected along with other papers where the claims don’t match the data. The extremely fast publication process suggests that these options were not seriously considered.
Last week, I argued that retractions are good for science. Thomas Basbøll correctly points out that retractions are hard. Nobody wants to retract. Good point, but my argument wasn’t about how easy it is to retract. Rather, it’s about the fact that science is exceptional in that it has a built in error correction mechanism.
In reviewing the debate, Andrew Gelman wrote:
One challenge, though, is that uncovering the problem and forcing the retraction is a near-thankless job…. OK, fine, but let’s talk incentives. If retractions are a good thing, and fraudsters and plagiarists are not generally going to retract on their own, then somebody’s going to have to do the hard work of discovering, exposing, and confronting scholarly misconduct. If these discoverers, exposers, and confronters are going to be attacked back by their targets (which would be natural enough) and they’re going to be attacked by the fraudsters’ friends and colleagues (also natural) and even have their work disparaged by outsiders who think they’re going too far, then, hey, they need some incentives in the other direction.
A few thoughts. First, fraud busting should be done by those who have some security – the tenured folks – or folks who don’t care so much (e.g., non-tenure track researchers in industry). Second, data with code should be made available on journal websites, with output files. Already, some journals are doing that. That reduces fraud. Third, we should revive the tradition of the research note. Our journals used to publish short notes. These can be used for replications, verifications, error reporting and so forth. Fourth, we should rely on journal models like PLoS. In other words, the editors will publish any competent piece of research and do so in a low-cost and timely way. Fraud busting and error correction will never be easy, but we can make it easier.
Thanks to those who suggested additional examples of self-managing organizations on my previous post about self-managing organizations! In the comments, Usman has also kindly provided a link to a documentary, The Take. Such examples show how people use self-managing organizations to reverse economic decline or stagnation, as well as defend their community, dignity, and livelihoods. For more examples of how grassroots organizations and democratic organizations can underpin economic revitalization, Orgheads might be interested in Jeremy Brecher’s Banded Together: Economic Democratization in the Brass Valley (2011, University of Illinois Press). Drawing on archival research, participant-observations of meetings, and interviews conducted about efforts to revitalize Western Connecticut’s Naugatuck Valley in residents’ and workers’ interests using Alinskyite methods, Brecher delves into several case studies of reorganizing the workplace, from factory to home-care. (See my review of Brecher’s book in Contemporary Sociology for a more detailed synopsis.)
Participatory practices are also spreading to local governance in the US. Last fall, with the help of local organization Community Voices Heard, the Participatory Budgeting Project, scholars and other groups, and trained volunteers such as myself, four districts in NYC experimented with participatory budgeting. Those who live, work, or attend school in these districts could propose and then prioritize projects on how to allocate several million dollars of city funds to improve community life. Volunteer budget delegates then developed proposals selected at the neighborhood assemblies, which they presented to the public. Residents aged 18 years and older voted for their top choices. Elected officials then allocated funding to these choices; some allocated additional funds for proposals that hadn’t won the popular vote. For more info on this experiment, see a PBS segment, which includes an interview with Celina Su, one of the advisers to this experiment. (Su published Streetwise for Book Smarts: Grassroots Organizing and Education Reform in the Bronx, which compares Freirean and Alinskyite organizing tactics.) See also my op-ed about this experiment and its implications for otherwise underrepresented voices in a local paper.
Think these practices might work in your hometown or organizations? Add your comments and recommendations below.
Hello fellow orgtheory readers! Orgtheory was kind enough to invite me back for another stint of guest blogging. For those of you who missed my original posts, you can read my 2009 series of posts on analyzing “unusual” cases, gaining research access to organizations, research, the IRB and risk, conducting ethnographic research, ethnography – what is it good for?, and writing up ethnography.
Those of you who are familiar with my research know that I have studied an organization that mixed democratic or collectivist practices with bureaucratic practices. Here’s a puzzle: although we operate in a democracy, most of our organizations, including voluntary associations, rely upon top-down bureaucracy. However, this doesn’t mean that alternative ways of organizing can’t thrive.
Valve, the game developer behind Portal, has attracted much buzz (for example, see this article in the WSJ and various tech blog entries, such as here and here) about its self-managing processes. The company prides itself on having no bosses, and their employee handbook details their unusual workplace practices. For example, instead of waiting for orders from above, workers literally vote with their feet by moving their desks to join projects that they deem worthy of their time and effort. Similarly, anthropologist Thomas Malaby describes how Linden Lab workers, who developed the virtual world Second Life, vote on how to allocate efforts among projects proposed by workers. Sociologist David Stark has described how workers mixed socialist and capitalist practices in a factory in post-Communist Hungary to get jobs done, dubbing these arrangements heterarchies.
Interestingly, several of the conditions specified by Joyce Rothschild and J. Allen Whitt as allowing collectivist organizations to survive may also apply to these workplace organizations – for example, recruiting those like existing members and staying small in size. However, my research on Burning Man suggests that these are not always necessary or desirable conditions, particularly if members value diversity.
Although these self-managing practices may seem revolutionary to contemporary workers, orgtheory readers might recall that prior to the rise of management and managerial theories such as Taylor’s scientific management, workers could self-determine the pacing of projects. Could we make a full circle?
Any thoughts? Know of other interesting organizations or have recommendations for research that we can learn from? Put them in the comments.
Several people have pointed out Neal Caren’s lists of the most cited works. I appreciate how hard it is to do something like this, and I appreciate the work Neal Caren has done. So my criticism is intended more to get us closer to the truth here and to caution against this list getting reified. I also have some suggestions for Neal Caren’s next foray.
The idea, as I understand it, is to try and create a list of the 100 most cited sociology books and papers in the period 2005-2010. Leaving aside the fact that the SSCI undercounts sociology cites by a wide margin (maybe by a factor of 400-500% if you believe what comes out of Google Scholar), the basic problem with the list is that it is not based on a good sample of all of the works in sociology. Because the journals were chosen on an ad hoc basis, one has no idea as to what the bias is in making that choice. The theory Neal Caren is working with is that these journals are somehow a sample of all sociology journals and that their citation patterns reflect the discipline at large. The only way to make this kind of assertion is to randomly sample from all sociology journals.
The idea here is that if Bourdieu’s Distinctions is really the most cited work in sociology (an inference people are drawing from the list), then it should be equally likely to appear in all sociology articles and all sociology journals at a similar rate. The only way to know if this is true is to sample all journals or all articles, not some subset chosen purposively. Adding ASQ to this does not matter, because it only adds one more arbitrary choice in a nonrandom sampling scheme.
I note that the Social Science Citation Index follows 139 sociology journals. A random sample of 20% would yield 28 journals, and looking at the papers across such a random sample would give us a much better idea of which works are the most cited in sociology.
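The sampling step itself is trivial to carry out. A minimal Python sketch (the journal names below are hypothetical placeholders for the 139 SSCI titles):

```python
import random

random.seed(1)

# Placeholder names standing in for the 139 SSCI sociology journals
journals = [f"journal_{i}" for i in range(1, 140)]

# A 20% simple random sample, as suggested above
k = round(0.2 * len(journals))
sample = random.sample(journals, k)

print(k)  # 28 journals, drawn without replacement
```

The hard part, of course, is not drawing the sample but tallying every cite in every sampled journal afterwards.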
Is there any evidence that the nonrandom sample chosen by Neal Caren is biased? The last three cites on his list include one by Latour (49 cites), Bryk (49 cites), and Blair Loy (49 cites). If one goes to the SSCI and looks up all of the cites to these works from 2005-2010, not just the ones that appear in these journals, one comes to a startling result: Latour has 1266 cites, Bryk 124, and Blair Loy 152. At the top of the list, Bourdieu’s Distinctions has 218 on Neal Caren’s list, but the SSCI shows Distinctions as having 865 cites overall. Latour’s book should put him at the top of the list, but the way the journals are chosen here puts him at the bottom. It ought to make anyone who looks at this list nervous that Latour’s real citation count is 25 times larger than reported and that it puts him ahead of Bourdieu’s Distinctions.
The list is also clearly nonrandom for what is left off. Brayden King mentioned that the list is light on organizational and economic sociology. So, I did some checking. Woody Powell’s 1990 paper called “Neither markets nor hierarchies” has 464 cites from 2005-2010, and his paper with three other colleagues that appeared in the AJS in 2005, “Network dynamics and field evolution,” has 267 cites. In my own work, my 1996 ASR paper “Markets as politics” has 363 cites and my 2001 book The Architecture of Markets has 454 from 2005-2010. If, without much work, I can find four articles or books that have more cites than two of the three bottom cites on the list (i.e., Bryk’s 124 and Blair Loy’s 152, counted the same way), there must be lots more missing.
This suggests that if we really want to understand which are the most cited and core works in sociology in any time period, we cannot use purposive samples of journals. What is required is a substantial number of journals drawn randomly, with all of the cites to the papers or books appearing in them then tallied from the SSCI, in order to see which works really are the most cited. I assume that many of the books and papers on the list would still be there, i.e., things like Bourdieu, Granovetter, DiMaggio and Powell, Meyer and Rowan, Swidler, and Sewell. But because of the nonrandom sampling, lots of things that appear to be missing are probably, well, missing.
Neal Caren has compiled a list of the 102 most cited works in sociology journals over the last five years. There are a lot of familiar faces at the top of the list. Bourdieu’s Distinction, Raudenbush’s and Bryk’s Hierarchical Linear Models, Putnam’s Bowling Alone, Wilson’s The Truly Disadvantaged, and Granovetter’s “Strength of Weak Ties” make up the top 5. It’s notable that Granovetter’s 1973 piece is the only article in the top 5. The rest are books. I was also interested to see that people are still citing Coleman. He has three works on the list, including his 1990 book at the number 6 spot. Sadly, Selznick is nowhere to be found on the list (but then neither is Stinchcombe). Much of the work is highly theoretical and abstract. There is a smaller, but still prominent, set of work dedicated to methods (e.g., Raudenbush and Bryk). I’m glad to see there is still a place for big theory.
It’s striking, however, how little organizational theory there is on the list. Not counting Granovetter, whose work is really about networks and the economy broadly, no organizational theory appears on the list until numbers 15 and 16, where Hochschild’s The Managed Heart (which might be there due to the number of citations it gets from gender scholars) and DiMaggio’s and Powell’s 1983 paper show up. There are several highly influential papers in organizational theory that I was surprised were not on the list. One could deduce from the list that sociology and organizational theory have parted ways.
I don’t think this is really true, but I think it speaks to some trends in sociology. The first is that most organizational sociology, excluding research on work and occupations, no longer appears in generalist sociology journals outside of the American Sociological Review and the American Journal of Sociology. Journals like Social Forces or Social Problems just don’t publish a lot of organizational theory. Now, there are a lot of great organizational papers that get published in ASR and AJS, but that is a very small subset of the entire population of sociology articles. The second is that Administrative Science Quarterly no longer seems to count in most sociologists’ minds as a sociology journal. Perhaps its omission leads to some significant pieces of organizational sociology being underrepresented (or perhaps not, since ASQ publishes fewer articles than many of the sociology journals). To be fair to Neal, I don’t think he’s unique among sociologists in failing to recognize ASQ as an important source of sociology.* One reason for this, I’m guessing, is that a lot of non-sociologists publish in it. But a lot of non-sociologists publish in other journals that are on the list as well, including Social Psychology Quarterly, Mobilization, and Social Science Research. Another reason may be that a lot of organizational sociology is no longer taking place in sociology departments, making the subfield invisible to our peer sociologists. Although I have no data to support this, my intuition is that fewer organizational theory classes are taught in sociology PhD programs today than were taught twenty years ago. Because of this, younger sociologists are not coming into contact with organizational theory, and so they are not citing it. Again, I have no evidence that this is the case.
I don’t think organizational research is waning in quality. A lot of organizational research still gets published in ASR and AJS. But a lot of it is probably not read or consumed by most sociologists.
UPDATE: Neal has updated the analysis to include ASQ. The major effect has been to boost DiMaggio and Powell to number 10.
*And yes, I’m lobbying Neal to include ASQ in future citation analyses.
Here’s an interesting piece extending Tajfel et al by studying 5-year-olds and intergroup bias: “Consequences of ‘Minimal’ Group Affiliations in Children,” Child Development. So, do 5-year-olds have a bias toward members of their in-group, even if they are arbitrarily assigned to these groups? They do.
Interesting paper. The paper also raises questions about whether in-group bias is learned (“enculturation,” Spielman, 2000), or whether it perhaps is an evolutionary-survival-type thing, or something driven by expectations of reciprocity or competition. Or something else.
Here’s the abstract:
Three experiments (total N = 140) tested the hypothesis that 5-year-old children’s membership in randomly assigned “minimal” groups would be sufficient to induce intergroup bias. Children were randomly assigned to groups and engaged in tasks involving judgments of unfamiliar in-group or out-group children. Despite an absence of information regarding the relative status of groups or any competitive context, in-group preferences were observed on explicit and implicit measures of attitude and resource allocation (Experiment 1), behavioral attribution, and expectations of reciprocity, with preferences persisting when groups were not described via a noun label (Experiment 2). In addition, children systematically distorted incoming information by preferentially encoding positive information about in-group members (Experiment 3). Implications for the developmental origins of intergroup bias are discussed.
Adam Grant and Tim Pollock, two very prolific senior editors at the Academy of Management Journal, tell us how to write a compelling introduction to a scholarly article.
We only get one chance to make a first impression, and in academic publishing the introduction to your submission or your article is that chance. A good introduction hooks the reader by elucidating the topic’s impact; what scholars now know, what we do not know, and why that matters; and how the research contributes to an ongoing research conversation or starts a new conversation.
They interviewed 16 past winners of AMJ’s best article award about the process of writing introductions. Here are some of the key findings from their interviews:
At what point in the drafting process did they write their introductions? Nine percent wrote it when they first developed the idea; 23 percent wrote it at the very beginning of the drafting process; 9 percent wrote it at the very end of the process; and 59 percent wrote it somewhere in the middle of the process, often times jotting notes when they first developed the idea and/or before data collection and analysis were finished….The average award winner estimated spending 24 percent of the total writing time on the introduction.
As noted earlier, the average winner reported rewriting the introduction ten times. The minimum was three, and 45 percent reported rewriting it ten or more times.
For further insights, we asked the Best Article Award winners for their advice on how to write a great introduction. A content analysis revealed three primary categories: focusing (45%), engaging the reader (32%), and problematizing the literature (23%).
I usually give my PhD students the advice that they should write the introduction as if they are laying out a puzzle that needs to be solved (see also Ezra Zuckerman’s helpful advice about this point). Dave Whetten once told me that I ought to write the introduction as if I were speaking to just two or three people whom I’d like to convince of something. Picking those two or three people helps focus your argument. Lately I’ve found it useful when starting a new paper to write the first draft in a loose, conversational manner, ignoring academic conventions and just getting the core of the argument out there. I’ve found this helps me overcome the initial writer’s block I always face whenever starting a new project. I think it also clarifies my thinking. Rather than getting bogged down in a lengthy (and boring) literature review, writing in a more conversational tone focuses my writing on what I really want to say in the paper.
I think it’s kind of weird for someone to review a single command, but here goes. The most recent version of Stata includes a command/package called “mi impute.” It is supposed to be a flexible all-purpose utility for addressing missing data using multiple imputation (e.g., filling in missing data through constrained random draws, and then combining the estimates). I’ve used it on and off since the fall and I want to talk about my experiences.
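To make the underlying idea concrete, here is a toy sketch of that multiple-imputation logic in Python rather than Stata. The crude normal-draw imputation model and the pooling step (Rubin’s rules) are my simplifying assumptions for illustration; this is not what mi impute actually does under the hood:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: a variable with roughly 20% of values missing at random
x = rng.normal(loc=10.0, scale=2.0, size=200)
missing = rng.random(200) < 0.2
x_obs = x.copy()
x_obs[missing] = np.nan

m = 5  # number of imputations
means, variances = [], []
for _ in range(m):
    filled = x_obs.copy()
    obs = x_obs[~np.isnan(x_obs)]
    # Constrained random draws: impute from the observed distribution
    draws = rng.normal(obs.mean(), obs.std(ddof=1), size=int(missing.sum()))
    filled[np.isnan(filled)] = draws
    means.append(filled.mean())
    variances.append(filled.var(ddof=1) / filled.size)

# Combine the m estimates (Rubin's rules): pooled estimate plus
# within-imputation and between-imputation variance
q_bar = float(np.mean(means))
within = float(np.mean(variances))
between = float(np.var(means, ddof=1))
total_var = within + (1 + 1 / m) * between

print(q_bar, total_var)
```

The pooled estimate lands near the true mean, and the between-imputation term is what inflates the variance to reflect the uncertainty added by the missing data.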
First, as with most Stata software, mi impute is rather impressive. When you type “mi impute,” you enable a whole package of tools for doing multiple imputation analysis. It’s much like “svy,” “st” and other commands that allow you to do all kinds of operations that are needed for specific types of analyses (Cox models, time series, etc.). The documentation is extensive and the options available would help most run-of-the-mill social scientists, like me.
Second, there are some serious drawbacks. Let’s start with speed. Mi impute is very expensive in terms of time. A student of mine recently worked with a very well known data set with 10,000 cases. The UNIX machines took hours to impute. My desktop comes to a halt for a few minutes doing 5 imputations for N=690.
Another drawback is that mi impute is very fussy. Once you deviate from linear variables, mi impute produces a multitude of errors and warnings. It is not entirely obvious that using the various fixes is the correct and proper way to do things.
Finally, as with many multiple imputation methods, you are fairly constrained with what you can do with the final model. Because mi requires you to combine data sets, there is often no confidence interval for the coefficients, which very much limits post-estimation commands.
Overall, I admire mi impute and I’m glad it’s part of Stata 11. But at the same time, the cost-benefit ratio is out of whack. I can get similar and valid answers by using much simpler imputation methods without crashing my machine or making lots of dubious choices with the options.
From the home office in Hyde Park, guest blogger emeritus Mario draws my attention to a conference that should be of special interest to Midwest ethnographers:
The University of Chicago Urban Network is sponsoring a conference, “Causal Thinking and Ethnographic Research,” devoted to understanding the contributions of ethnographic research to contemporary causal thinking and scientific inference. Is counterfactual thinking useful to ethnographers? Does ethnographic research help identify its flaws? Are the deductive methods underlying QCA appropriate to a research endeavor primarily driven by induction and abduction? Do mechanism-based explanations simply push the difficulties of causal inference deeper? What approaches to inference in ethnographic research would constitute a better alternative? Many of the most interesting and promising ethnographers in sociology will be addressing these and other questions.
Here’s the conference website. Check it out.
It would be nice if the world of quantitative data analysis in social science were like the one envisioned by people like Christopher Achen, where every regression is a bivariate regression or, at worst, a model with 2-4 right-hand-side variables. Instead, we are stuck living in a “garbage can” world where, even though we only care about the relation between one thing and some other thing, we end up writing papers that include long vectors of other stuff that we don’t care about.
How we talk about the rest of the garbage can, however, is important, since it reflects mini-theories about the research process (sometimes consciously held, most of the time mindlessly deployed) that reflect what you (think you) are doing when you regress Y on whatever. Different people recommend different lingo, and this lingo is related to what they think you are accomplishing by augmenting your regression specification. So get clear on what you are doing and modify your vocabulary accordingly.
In my view, there are two broad practical purposes of regression analysis and they are reflected in the relevant vocabulary.
Descriptive inference.- For instance, Andrew Gelman prefers to talk about “input” variables (rather than independent variables); and what do you do with input variables? Well, you “adjust” for them. This kind of language is, in my view, appropriate for the bulk of quantitative analysis of observational data in social science. Here, the researcher cannot, does not, or should not be making strong claims that their favorite X has a causal effect on their favorite Y. Instead, the researcher is just interested in the “net effect” of X on Y, and adjusts for other inputs to make sure that the effect is there within levels of their (additive) combination. This language is nice and neutral and does not commit you to problematic assumptions about inference.
Causal inference.- People like Stephen Morgan, on the other hand, like to talk about “conditioning on” the values of the other Xs. This language is borrowed from a long tradition of causal inference in experimental and non-experimental research, and it is appropriate when that’s exactly what the researcher thinks that he or she is doing. In particular, if you follow the recent systematization of what those other Xs that you don’t directly care about actually do in the context of causal inference using directed acyclic graphs (in the recent writings by Pearl and by Morgan and Winship), then it is clear why it is that you are “conditioning” on them (you block backdoor paths linking your favorite X to your favorite Y by conditioning on those other pesky Xs – and only those – that connect the fave with the outcome). The language of “conditioning” here also makes explicit that you are working from within the conditional independence assumption, which is the fundamental bedrock of causal inference with observational data.
So there you have it. If you are just running descriptive regressions to see what goes with what and are interested in “effects” (as long as you don’t fall into the trap of talking about these effects as having anything to do with causes), then when you write your papers, you just say “adjust for.” Also, I think it would be good to follow Gelman and stay away from the language of “dependent” and “independent” variables; this is just mindless importation of lingo that comes from the experimental tradition into a setting where it does not belong. On the other hand, if you are working from an explicit framework in which your main interest is in the estimation of causal effects, then you should say “conditioning on.” This makes explicit what you are claiming (“conditional on X2, when I jiggle my favorite X, my favorite Y also jiggles”).
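Whatever vocabulary you prefer, the mechanics are the same: the estimate for your favorite X moves once the other inputs enter the specification. A toy simulation in Python (the data-generating coefficients here are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

z = rng.normal(size=n)                       # the other input (a confounder)
x = 0.8 * z + rng.normal(size=n)             # the favorite X, correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x on y is 1.0

# Bivariate regression: y on x alone (no adjustment / no conditioning)
X1 = np.column_stack([np.ones(n), x])
b_unadj = np.linalg.lstsq(X1, y, rcond=None)[0][1]

# Regression that adjusts for / conditions on z: y on x and z
X2 = np.column_stack([np.ones(n), x, z])
b_adj = np.linalg.lstsq(X2, y, rcond=None)[0][1]

print(b_unadj, b_adj)  # the unadjusted slope is biased upward; the adjusted one sits near 1.0
```

Here z sits on a backdoor path from x to y, so the bivariate slope picks up z’s effect; once z is in the specification, the coefficient on x recovers the value built into the simulation.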
What does that leave out? Well, it leaves out the language that most people use when talking about the Xs that they don’t care about: the language of “controlling for.” I think this is a generally stupid way of referring to what you are doing. This vocabulary reveals that you may not even be self-conscious as to what exactly you are doing in that piece of research. It imports categories from the experimental tradition (the notion of a “control group”) that have no business in observational data analysis, and it makes it sound as if you are not aware of the spectacular inappropriateness of such assumptions in that context (every paper I’ve written that analyzes quantitative data uses this phrase, making me moron number one). So stop saying “control for”!
Of course, I am under no illusions that people will actually stop using this phrase given how mindlessly and insidiously it has become insinuated into the language of social research; but you can always hope.
Perspectives on Psychological Science has a short piece on using Amazon’s Mechanical Turk as a subject pool: “Amazon’s Mechanical Turk: A New Source for Inexpensive, Yet High-Quality, Data?”
As Google Scholar shows, Mechanical Turk is being used in lots of clever ways.
Mechanical Turk has been called a digital sweatshop. Here are two perspectives – an Economic Letters piece: “The condition of the Turking class: Are online employers fair and honest?” And, a piece calling for intervention: “Working the crowd: Employment and labor law in the crowdsourcing industry.”
@viil linked to a Boston Herald article that talks about how crowdsourcing is changing science. Lots of cool initiatives going on related to “citizen science” — for example, check out zooniverse.org and scistarter.com and projectnoah.org (very cool, including iPhone app to help catalogue species).
Earlier this morning, I discussed the Federal government’s proposed changes to the system of institutional review boards. In the comments, Gabriel asked – where is the ASA? He is responding to the NY Times article on IRB reform:
I was partly pissed off that ASA didn’t appear anywhere in that article (as compared to anthro, three different historical societies, and the consortium). The only thing I could find about this on asanet.org was a reference in the April Footnotes to us paying our dues to COSSA (which in turn does good work on the human subjects issue). Assuming that I didn’t simply miss something, I have to wonder why ASA hasn’t done anything directly about one of the biggest problems facing the discipline.
It would be one thing if ASA deliberately took a minimalist mandate limited to publishing the journals and organizing the annual meeting, but it’s back asswards for ASA to ignore an issue that is core to our professional interests and where we could actually sway administrative lawmaking while taking a position on every hot button political issue that is irrelevant to the practice of sociology and on which we all know the association’s efforts are completely swamped by more powerful political actors. Remind me why exactly we’re paying for them to be on K-Street instead of in (say) Nashville? The way I see it, if the Consortium of Social Science Associations is doing the heavy lifting on lobbying for issues that actually matter to the discipline then I’m all in favor of ASA paying them for their efforts and passing along the costs to the membership. However this also implies that there’s no point in keeping up ASA’s independent lobbying efforts (and the expensive locational decisions that go with that).
So, in good faith, where is the ASA’s statement on this issue? What have they done to lobby the Federal government so we can get a less onerous regulation of human subject research? Can someone direct me to the ASA website that explains what they have done?
Looking back on it, getting an MBA at the University of Chicago (1981) is really what led me to academia. Back then, course readings were 30-40 academic journal articles. Rarely did a textbook accompany a class. As students, we knew we were there to learn the latest-and-greatest academic thinking. In our view, courses based upon some textbook anybody could get at their student bookstore for $50 had to be worth little more than, well, $50. Forget about classes taught by the grey-hairs (you know, classes in which some big-shot ex-executive sits around and regales students with war stories) — total waste of time, in our view. No, we wanted the meaty stuff. The stuff that wouldn’t be “best-practices” for another 10 years. Commercializing that knowledge, yeah, that’s where the money was.
So, I specialized in Finance (what else?) and launched into an exciting decade+ of business practice. At some point, I started consulting and, at some point after that, I was asked to work on a strategy project. I knew nothing about strategy at the time — BUT! — I knew how to read academic journals. No problem. Off to the library to read the pink strategy journal! Up to speed and 10 years ahead of practice in a few sittings. That b-school training was truly awesome. (In case you are wondering, btw, years later when I was a rookie PhD, I interviewed at Chicago. My old Micro prof, Sam Peltzman, took me to dinner. When I asked him what journal articles he was putting in his MBA course, he did a spit-take and said, “Wall Street Journal articles.” More on this later.)
I guess it would be fair to say that I found the strategy literature sadly wanting in comparison to the precision and mathematical sophistication I was used to in the Finance literature (mind you, this was as a practitioner). My reaction was: big opportunity here. This was the 90s and, for those who are not aware of it, the methodological advances in economics were really expanding at that time: game theoretic learning, evolutionary economics, behavioral economics, computational methods … cool technological approaches that held some promise in tackling the complexities inherent in the strategy problem domain. Off I went to get a math econ degree and I’ve never looked back with any regrets. (I do look back and marvel at the level of hubris that propelled me on my way — though, without it, where would any of us be in this academic hustle?)
Over time, outside of trying to stay up on promising methodological developments, I became less attentive to what people were doing in economics. Early-on, I tried to get my IO friends interested in the issues that so animated my own research. Typically, 3 minutes into describing something I was working on to a respected IO colleague, I could see the eyes glaze over and hear the responses go on autopilot. I really was a strategy guy and, clearly, the strategy literature was where my career would rise or fall. When asked, I explained it in this way, “The central question in strategy is who gets what, why and for how long. IO economists, IO being in many ways a mirror field, are interested in how the most value gets created. The dichotomy is one between distributional vs. efficiency issues. We want to tell Apple how to make more profit. They want to tell the FTC how to increase social welfare.”
This is not to say there weren’t always great economists in the bi-curious category. Of course there were. But, they were not the majority and I was smugly comfortable in my belief that, regardless of how frustratingly slow progress in strategy was, the field had little to worry about from economics. In fact, just as recently as last year, I had this discussion with one of my dearest colleagues, Jan Rivkin. I was somewhat surprised when he, in so many words, told me I was full of it. I felt sorry (for him) that I couldn’t bring him around in that discussion. Eventually, though, I knew I would win him over.
That was until about a month ago. That was about the time the paper by Chad Syverson (2011) started making the rounds. Entitled “What Determines Productivity?”, it is a wide-ranging survey paper that collects and organizes work in economics on persistent differences in firm productivity levels. Almost all the papers are from 2000 on. I found the quantity and quality of work cited, frankly, jaw-dropping. Now, those who have followed the narrative to this point will say, “Yes, but it’s work on productivity — that means the interest is still all about efficiency!” True. But, here’s the catch: “efficiency” in this work is typically measured as Revenue/Cost. Take the numerator and subtract the denominator and — PRESTO — you have the object of focus in strategy.
I’m still digesting this. It could be good news. After all, I’d love to have more outlets for my work. On the other hand, young scholars like Syverson are smart … and teched-up … and full of youthful energy. What I can say is that the bar for strategy research has stealthily gone up over the last decade.