Archive for the ‘research’ Category
For those grad students who are studying non-profits, voluntary associations, and philanthropy, here’s an opportunity to work alongside colleagues and Prof. Peter Frumkin this summer:
Join PhD students from around the country (and world) to critically examine issues in the nonprofit sector and to work on your own research in nonprofit management, volunteerism, international civil society, social entrepreneurship and philanthropic studies.
Under the direction of Dr. Peter Frumkin, students participate in an intensive four-week seminar that culminates in the completion of a publishable paper ready to be submitted to a peer-reviewed journal. Students are expected to submit a draft research paper that they would like to refine and prepare for academic publication during the summer program. This is a continuation of the program that Dr. Frumkin ran for five years at the RGK Center in Austin, Texas, and that helped dozens of students advance their careers.
Graduate students enrolled in doctoral programs are invited to apply for the Penn Summer Fellows Program:
Dates: June 7 – July 1, 2014
- Application process is competitive and takes into consideration the academic potential of the student and the working paper topic
- $3,000 stipends are provided to each Summer Fellow
- Housing in Philadelphia, PA will be arranged and paid for by the Nonprofit Leadership Program
- Application Deadline: March 14, 2014
- Email a current resume, draft paper, and abstract to Leeamy1 [at] sp2 [dot] upenn [dot] edu.
- Selection is based on past record and academic potential
People often complain, justifiably, that “big data” is a catchy phrase, not a real concept. And yes, it certainly is hot, but that doesn’t mean that you can’t come up with a useful definition that can guide research. Here is my definition – big data is data that has the following properties:
- Size: The data is “large” compared to the data normally used in social science. Surveys typically have data from only a few thousand people. The World Values Survey, probably the largest conventional data set used by social scientists, has about two hundred thousand people in it. “Big data” starts in the millions of observations.
- Source: The data is generated through the use of the Internet – email, social media, web sites, etc.
- Natural: The data is generated through routine daily activity (e.g., email or Facebook likes). It is not, primarily, created in the artificial environment of a survey or an experiment.
In other words, the data is bigger than normal social science data; it is “native” to the Internet; and it is not mainly concocted by the researcher. This is a definition meant for social scientists: it is useful because it marks a fairly intuitive boundary between big data and older data types like surveys. It also identifies the need for a skill set that combines social science research tools and computer science techniques.
This year, there are many great pre-conferences. In addition to the New Computational Sociology conference on August 15, there is also:
- Digitizing Demography – hosted by Facebook and our guest blogger Michael Corey.
- The Hackathon at UC Berkeley – hosted by Wisconsinite Alex Hanna. Get together and code all night long.
- Junior Theory Symposium – hang out with the cool kids!
Please put links to more ASA pre-conferences in the comments.
The new open access journal, Sociological Science, is now here. The goal is fast publication and open access. Review is “up or out.” On Monday, they published their first batch of articles. Among them:
- The Structure of Online Activism by Lewis, Gay, and Meierhenrich.
- Time as a Network Good by Young and Lim.
- Political Ideology and Preferences in Online Dating by Anderson et al.
Check it out, use the comments section, and submit your work. Let’s move sociological journals into the present.
This coming August 15, Dan McFarland of Stanford University and I will host a conference on the new computational sociology at the Stanford campus. The goal is to bring together social scientists, informatics researchers, and computer scientists who are interested in how modern computation can be brought to bear on issues that are of central importance to sociology and related disciplines. Interested people should go to the following web site for information on registration and presentation topics. I hope to see you there.
three visiting fellowships on innovation at the Technische Universität in Berlin – due Feb. 15, 2014
One of our orgtheory readers, Jan-Peter Ferdinand, forwarded a flier about a fellowship opportunity at the Technische Universität in Berlin, Germany. This sounds like a great opportunity for grad students and prospective post-docs who are studying innovation.
Here’s an overview:
The DFG graduate school “Innovation society today” at the Technische Universität Berlin, Germany, is pleased to advertise 3 visiting fellowships. The fellowships are available for a period of three months, either from April to June 2014 or October to December 2014.
The graduate school addresses the following key questions: How is novelty created reflexively; in which areas do we find reflexive innovation; and which actors are involved? Practices, orientations, and processes of innovations are studied in and between various fields, such as (a) science and technology, (b) the industrial and service sectors, (c) arts and culture, and (d) political governance, social planning of urban and regional spaces. More information about the graduate school can be found on our website: http://www.innovation.tu-berlin.de (click on the flag at the top of the page for an English version).
By following an extended notion of innovation, the graduate school strives to develop a sophisticated sociological view on innovation, which is more encompassing than conventional economic perspectives. Our doctoral students are currently undertaking a first series of case studies to promote a deeper and empirically founded understanding of the meaning of innovation in contemporary society and of the social processes it involves.
See this PDF (GW_Ausschreibung-2014) for more info, including deadline (Feb. 15, 2014) and application materials needed.
Administrative Science Quarterly now has a blog – aptly named The ASQ Blog. The purpose of the blog is a bit different from your typical rambling academic blog. Each post contains an interview with the author(s) of a recent article published in the journal. For example, there are interviews with Chad McPherson and Mike Sauder about their article on drug court deliberations, with Michael Dahl, Cristian Dezső, and David Ross on CEO fatherhood and its effect on employee wages, and with András Tilcsik and Chris Marquis about their research on natural disasters and corporate philanthropy. The interviews are informal, try to get at the research and thought process behind the article, and allow reader comments. I think it’s innovative of the ASQ editorial team to come up with this in an effort to make research more open and to draw more eyes to the cutting-edge research at ASQ.
A couple of years ago I served on an ASQ task force (with Marc-David Seidel and Jean Bartunek) to explore different ways that the journal could better use online media to engage readers. At the time, ASQ was way behind the curve. It was difficult to even find a permanent hyperlink to its articles. Since that time ASQ and most journals have greatly improved their online accessibility. The blog is just one example. ASQ’s editor, Jerry Davis, said in a recent email to the editorial board that they recognize that “younger scholars connect with the literature in ways that rarely involve visits to the library or print subscriptions.” To maintain relevance in today’s academic “attention economy” (for lack of a better term), journals have to be active on multiple platforms. ASQ gets it; Sociological Science’s (hyper)active tweeter (@SociologicalSci) gets it too. In the end, everyone hopes the best research will float to the top and get the attention it deserves, but if the best research is hard to find or is being out-hyped by other journals, it may never get noticed.
It made me wonder, how do people most commonly find out about new research? I know that orgtheory readers are not the most representative sample, but this seems to be the crowd that Jerry referred to in his email. So, below is a poll. You can choose up to three different methods for finding research. But please, beyond adding to the poll results, tell us in comments what your strategy is.
Since defending my dissertation (in 2003), I’ve worked both in academia and for the DMV. Prior to moving back to California, I was an assistant professor in the Department of Sociology and Anthropology at Texas Christian University (TCU). I’ve continued to teach as an adjunct in the Department of Sociology at CSU-Sacramento while working full-time for DMV.
There are a number of differences between the two types of organization. Some of these differences are obvious, some not. Some of these differences tend to be in favor of the academy, and others tend to be in favor of state service. First, and most obviously, there is the matter of pay and benefits. The salary range of our research program specialist series (the main job titles associated with research work in California state service) is roughly equivalent to the salary range of tenure-track faculty in the CSU system. In addition, we have a defined-benefit pension in retirement. On the other hand – and this point will surely hit home for this audience – working for state government involves a non-trivial step down in occupational prestige. Not a season goes by without my having to answer some version of the question “you have a PhD and you work where?!?” Usually these questions come from persons outside the academy, and probably reflect some inherent sense of the disjuncture between having a high-status degree and working for one of the most “common” government departments (as opposed to a more “rarefied” shop like the Demographic Research Unit at the state’s Department of Finance – an outfit which, I should note, does truly outstanding work). Academics may be puzzled by my position, but they are often quickly curious as to the kinds of data we have access to, the kinds of methodologies we use, the publication possibilities, etc. Those working in government are, if anything, impressed by the fact that California DMV has an R&D unit (most states don’t); they are also respectful of the fact that our agency (and the legislature) takes empirical research into account when setting policy.
In a less tangible manner, working for a government agency is stressful in different ways than teaching. While students expect professors to be available 24/7, this is not true of state service. Once the work day is done, I can go home and not worry about checking my e-mail until the next day. That said, the implications of a mistake are very different. In teaching, if we say something that isn’t quite right in lecture we can usually address it in a subsequent class session. If we make an error in a publication, we can issue an erratum. It’s embarrassing, but usually not grossly consequential. In state service, on the other hand, a mistaken statement – or worse, a faulty set of analyses and recommendations – can have real, dramatic, and long-lasting effects on policy and revenues, and ultimately on people’s lives. For that reason (among others), we have multiple layers of review for our studies and publications.
Finally, I would note that when considering a career in government service, it is useful to think about the implications of the grand logics of different types of organizations (cf. Weber 1922; Dobbin 1994; Meyer and Rowan 1977; Meyer and Scott 1983; DiMaggio and Powell 1991). We all know, for instance, that those working in for-profit businesses generally judge a given person’s performance through monetary means. It is only more rarely that we reflect on the fact that those working in government agencies tend to judge personal performance through metrics of power. In my experience, it is rarer yet that we admit the truth about academic institutions: that people are judged almost entirely in terms of reputation (and not just one’s own, but more broadly that of one’s advisor as well as one’s institutions, both past and present – see Etzkowitz, Kemelgor and Uzzi 2000). Switching from one field to another usually necessitates that one be prepared to operate under a different set of institutional rules and expectations. In the case of moving from the academy to the state, this means (among other things) caring much less about what people think about one’s work, and caring much more about making things happen.
First, I’d like to thank Katherine and Teppo for allowing me to guest-blog on this site. I’ve put together three pieces: (1) what does doing research for a state agency involve, (2) how does working in the public sector compare to working in the academy, and (3) are we hiring (yes) and what do we look for in candidates?
We have two units within our branch here at DMV Research and Development (R&D). I work in the Driver Competency and Safety Projects Unit; there is also the Alcohol and Impaired Driving Unit. The distinction between the units is not substantial – many projects involve collaboration between researchers, and in many cases we use very similar types of data and methods to conduct our projects.
In general, I’ve worked on projects that involve the screening, testing, and assessment of physical, visual, and mental functions that may affect driving. If you’ve ever read a newspaper article about some tragic incident where someone pressed on the accelerator instead of the brake, and drove into a fast-food restaurant, you may have wondered “gee, I wonder if anyone’s doing research on this problem?” The answer is “yes,” and I’m one of the people who work on that type of question (if you’re curious, such incidents often involve some element of cognitive impairment – such as occurs in early-stage dementia). The kinds of projects that I’ve worked on as a researcher include: (1) evaluating the results of a pilot project that used novel screening and education tools to identify drivers that may be at risk of unsafe driving due to a physical, visual, or cognitive impairment; (2) calculating projections about the number of cases DMV may see in the next few years of drivers who are referred for evaluation due to a medical problem of one type or another; (3) developing a method by which we can determine the reliability and validity of a drive test that we use (rarely) for persons who drive in extremely limited circumstances, on defined routes or in bounded areas.
In terms of publication opportunities, we mainly publish monographs ourselves (after a rigorous process of internal review). We also submit articles to peer-reviewed journals in the field of traffic safety. Finally, we present our findings at national and regional conferences (Transportation Research Board, LifeSavers, California Office of Traffic Safety Summit, etc.).
I was recently promoted, and my current duties include overseeing the research of others. Some of these projects include: (1) assessing the reliability of machines used to screen people for problems with visual acuity, (2) determining at a descriptive level the incidence of distracted driving incidents, particularly those that involve crashes where there is some indication that usage of a cell phone contributed to the crash, (3) calculating the effect of a particular novice driver training and education program on subsequent risk for crashes and violations. Finally, and perhaps most exciting, I’ve had the opportunity to participate in reviewing research (conducted by others) on automated vehicles; this review has been for the purpose of assisting our policy and legal staff in developing regulations that will govern the testing by manufacturers, and use by the general public, of automated vehicle technology on public roads.
Oh, and since I know at least one person might wonder: my dissertation had absolutely zippo to do with any of these topics. We will address this area (what do we actually look for in job candidates for our research shop) in post #3.
Orgtheorist and loyal orgtheory commenter Howard E. Aldrich is featured in a video about his intellectual trajectory and the history of organizational studies. Learn about Howard’s start in urban sociology and organizational studies, why he finds cross-sectional studies “abhorrent,” his years at Cornell where he overlapped with Bill Starbuck, and how he got started publishing in organizational ecology. He also explains how the variation, selection, and retention (VSR) approach was a “revelation” for him, and how various institutions (University of Michigan, Stanford, and others) have promoted his intellectual development via contact with various colleagues, collaborators, and graduate students. Towards the end of the interview, Aldrich describes his latest research on the Maker movement, including hacking and the rise of affordable 3-D printing and other hardware and software that may propel technological innovation.*
The video interview is courtesy of Victor Nee’s Center for Economy & Society at Cornell University. More videos, including a presentation on his work on entrepreneurship, are viewable here. Also, those looking for an organizational studies text should see his seminal Organizations Evolving with Martin Ruef here.
* The Maker movement has strong affinities with Burning Man. In fact, that’s partly how I started attending Maker Faire – check out my photos of past Maker Faires, which included performance artists from the now-defunct Deitch Art Parade.
Becker and Faulkner’s Thinking Together: An E-mail Exchange and All That Jazz now available in print
Today, I met with first-year grad students who wanted to know how sociologists develop research questions and studies while navigating grad school, academia, and other contexts. Although sociologists do give retrospective accounts in their publications and presentations, it’s not easy to fully convey the “back stage” behind the research. Rarely do readers get to see how a study unfolds. Luckily, Howie Becker and Bob Faulkner’s latest book is now available both as an ebook and print book (update: corrected link), for those of us who like to read old school-style. According to Franck Leibovici,
the paperback version produces a different experience [from the ebook]. for example, it has an index which allows you to visualise how many people, scholars, musicians, anonymous people, have been mobilized to produce this investigation.
For those who like the ebook format, see our earlier post, which includes a summary by Becker himself.
Here’s the official summary of Thinking Together: An E-mail Exchange and All That Jazz:
When Rob Faulkner and Howie Becker, two sociologists who were also experienced professionals in the music business, decided to write something about this other part of their lives, they lived at opposite ends of the North American continent: Faulkner in Massachusetts, Becker in San Francisco. They managed the cooperation writing a book requires through e-mail. Instead of sitting around talking, they wrote e-mails to each other.
And so every step of their thinking, the false steps as well as the ideas that worked, existed in written form. So, when Franck Leibovici asked them to contribute something which showed the “form of life” that supported their work, they (helped along by a timely tip from Dianne Hagaman) sent him the correspondence.
The result is one of the most complete and revealing records of scientific collaboration ever made public. And one of the most intimate pictures of the creative process in all its details that anyone interested in that topic could ask for. Investigative writing is not only about formulating chains of rational ideas (as the usual format of scientific articles would like us to believe), but also mixes plays on words, stories, and arguments in new arrangements.
this book is a contribution to the art project (forms of life)—an ecology of artistic practices, paris, 2011-2012, by franck leibovici.
curated by grégory castéra and edited by les laboratoires d’aubervilliers and questions théoriques, with the support of fnagp, la maison rouge, le fonds de dotation agnès b. see www.desformesdevie.org.
One of the songs that helped the two authors work on their cases of how musicians build their repertories:
Control Point Group, a political consultancy firm, asked my opinion on a recent Pew study of public opinion and Twitter. I’ll quote Politico reporter Dylan Byers, who summarized the Pew study:
Sixteen percent of U.S. adults use Twitter and just half that many use it as a news source, making it an unreliable proxy for public opinion, according to a new survey from the Pew Research Center and the John S. and James L. Knight Foundation.
Take last year’s Republican primary, for example: “During the 2012 presidential race, Republican candidate Ron Paul easily won the Twitter primary — 55% of the conversation about him was positive, with only 15% negative,” Pew writes. “Voters rendered a very different verdict.”
Or the Newtown school shooting: “After the Newtown tragedy, 64% of the Twitter conversation supported stricter gun controls, while 21% opposed them. A Pew Research Center survey in the same period produced a far more mixed verdict, with 49% saying it is more important to control gun ownership and 42% saying it is more important to protect gun rights.”
That’s worth keeping in mind next time you see the reaction-on-Twitter piece in the wake of any major national news event. However, Twitter may be a more reliable indicator of youth sentiment.
This is a subtle point. Pew is doing what computer scientists call sentiment analysis. Roughly speaking, you write a program that guesses whether some text, in this case a tweet, reflects a positive or negative sentiment. The literature (including the Pew study cited) shows very mixed results. The takeaway point for me is that either sentiment is tricky to measure properly or the emotional content of text doesn’t correlate well with behaviors that we care about.
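To make the mechanics concrete, here is a toy lexicon-based classifier in the spirit of the approach described above. The word lists and example tweets are invented for illustration; real systems (including Pew’s) use far larger lexicons or trained models, which is exactly where the measurement trouble comes from:

```python
# Toy lexicon-based sentiment classifier. The word lists below are
# illustrative only, not a real sentiment lexicon.
POSITIVE = {"great", "win", "support", "love", "best"}
NEGATIVE = {"bad", "lose", "oppose", "hate", "worst"}

def sentiment(tweet):
    """Label a tweet 'positive', 'negative', or 'neutral' by
    counting hits against the two word lists."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "Great debate performance, love this candidate",
    "Worst policy idea ever, I oppose it",
    "The candidate spoke at a rally today",
]
print([sentiment(t) for t in tweets])
# -> ['positive', 'negative', 'neutral']
```

Even this sketch makes the fragility obvious: sarcasm, negation (“not great”), and out-of-lexicon words all get misclassified, which is one plausible reason sentiment-based tallies diverge from survey results.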
In contrast, our research (and that done by others) shows that relative shares of mentions, regardless of sentiment, do show a positive correlation with some political behaviors, like voting. My hypothesis is that the relative volume of talk is simply a proxy for buzz, name recognition, popularity, or some other variable. Regardless, the correlation is there.
storytelling in organizations, the state of the field of organizations and values, and a freebie article
I’ve recently published two articles* that might be of interest to orgheads, and Emerald publisher has ungated one of my articles:
1. Chen, Katherine K. 2013. “Storytelling: An Informal Mechanism of Accountability for Voluntary Organizations.” Nonprofit and Voluntary Sector Quarterly 42(5): 902-922.**
Using observations, interviews, and archival research of an organization that coordinates the annual Burning Man event, I argue that storytelling is a mechanism by which stakeholders can demand accountability to their needs for recognition and voice. I identify particular frames, or perspectives and guides to action, articulated in members’ stories. Deploying a personalistic frame, storytellers recounted individuals’ contributions toward a collective endeavor. Such storytelling commemorated efforts overlooked by official accounts and fostered bonds among members. Other storytellers identified problems and organizing possibilities for consideration under the civic society or anarchist frames. By familiarizing organizations with members’ perspectives and interests, stories facilitate organizational learning that can better serve stakeholders’ interests. Additional research could explore whether (1) consistent face-to-face relations (2) within a bounded setting, such as an organization, and (3) practices that encourage participation in organizing decisions and activities are necessary conditions under which storytelling can enable accountability to members’ interests.
2. Chen, Katherine K., Howard Lune, and Edward L. Queen, II. 2013. “‘How Values Shape and are Shaped by Nonprofit and Voluntary Organizations:’ The Current State of the Field.” Nonprofit and Voluntary Sector Quarterly 42(5): 856-885.
To advance understanding of the relationship between values and organizations, this review synthesizes classic and recent organizational and sociological research, including this symposium’s articles on voluntary associations. We argue that all organizations reflect, enact, and propagate values. Organizations draw on culture, which offers a tool kit of possible actions supported by institutional logics that delineate appropriate activities and goals. Through institutional work, organizations can secure acceptance for unfamiliar practices and their associated values, often under the logic of democracy. Values may be discerned in any organization’s goals, practices, and forms, including “value-free” bureaucracies and collectivist organizations with participatory practices. We offer suggestions for enhancing understanding of how collectivities advance particular values within their groups or society.
3. In addition, one of my previously published articles received the “Outstanding Author Contribution Award Winner at the Literati Network Awards for Excellence 2013.” Because of the award, Emerald publisher has ungated this article (or, as Burners like to say, contributed a gift to the gift economy :) ) to download here (click on the HTML or PDF button to initiate the download):
Chen, Katherine K. 2012. “Laboring for the Man: Augmenting Authority in a Voluntary Association.” Research in the Sociology of Organizations 34: 135-164.
Drawing on Bourdieu’s field, habitus, and capital, I show how disparate experiences and “dispositions” shaped several departments’ development in the organization behind the annual Burning Man event. Observations and interviews with organizers and members indicated that in departments with hierarchical professional norms or total institution-like conditions, members privileged their capital over others’ capital to enhance their authority and departmental solidarity. For another department, the availability of multiple practices in their field fostered disagreement, forcing members to articulate stances. These comparisons uncover conditions that exacerbate conflicts over authority and show how members use different types of capital to augment their authority.
* If you don’t have access to these articles at your institution, please contact me for a PDF.
** Looking for more storytelling articles? Check out another one here.
Today, I’ll directly address Sam Lucas’ article, “Beyond the Existence Proof: Ontological conditions, epistemological implications, and in-depth interview research,” published in 2012 in Quality & Quantity. In it, he argues that there is no basis at all for generalizing conclusions from the types of unrepresentative samples that are used by interview researchers. The best you can do is use the sample to document some fact (“an existence proof”), not make any out-of-sample generalizations. You can read Andrew Perrin’s commentary here.
To illustrate his argument, let’s return to yesterday’s hypothetical about unrepresentative samples. I said: “What if Professor Lucas suddenly found out that his heart medication was tested with an unrepresentative sample of white people from Utah? Should he continue taking the medication?” I’ll outline two answers to this question.
Today and tomorrow, I’ll respond to Sam Lucas’ argument on unrepresentative samples that was published in the journal Quality & Quantity. You can read it, for free, here. It’s good, so I recommend that you all take a look.
Today, I start with a not-so-crazy hypothetical question about samples and statistical inference. It goes like this:
Professor Lucas is taking a drug, “Berkeleyflaxin,” (Bf) for a very serious heart condition. His doctor said that Bf prevents heart attacks in 75% of patients who use the drug and have that specific condition. Later, Professor Lucas finds out that the clinical trials for Bf were based on a non-representative sample. Specifically, like many drug trials, no African-Americans were included in the trial, which is an issue given Professor Lucas’ personal background. The research was based on a convenience sample of all-White volunteers in the mountains of Utah. Let’s further stipulate, like many real drugs, that Bf is expensive and has a serious side effect, like nausea, vomiting, or increased risk of stroke. That means that taking Bf and hoping for a placebo effect is a costly option. You really have to choose between taking it and not taking it.
Should Professor Lucas take the pill? Show your work! Tomorrow, I’ll summarize Professor Lucas’ article on non-representative samples and give you my opinion of what he should do in this hypothetical.
On Friday, Graduate Center faculty and affiliates got together to meet with sociology graduate students. In my group, which included Paul Attewell, Pam Stone, Ruth Milkman, Sophia Catsambis, and myself, we discussed what we thought might be hot topics in the areas of labor, organizations, and work. Not only was this an invigorating conversation, but it was also an opportunity to hear about research in the pipeline and upcoming and recent publications. I’m sharing some of these ideas here.
- Rise of precarious work (cf. Guy Standing’s The Precariat, Leah Vosko’s edited volume Precarious Employment: Understanding Labour Market Insecurity in Canada) and how contemporary labor movements can mobilize workers
- Impact of the Affordable Care Act (aka Obamacare) – ideal for a pre- and post- study!: whether it liberates employees who only stay with a particular workplace for the health insurance, how organizations that would have attracted members for health insurance (i.e., freelancers union) will now adjust
- How do people find jobs? Universities now aggressively push career-building and networking for students. Someone needs to update Granovetter’s research on networks.
- Employment and health: how does chronic illness impact career trajectories and employment?
- How do the so-called “Millennials” conceive of work – how do their parents’ experiences with work (i.e., downsizing, long hours, minimal or no rewards for worker loyalty) and governance (weakened state protections) inform adult children’s conceptions of ideal workplaces? For example, are some younger workers viewing workplaces as sites of self-actualization, manageable work hours, and contractual work?
- Transnationalization of work: worker flows via the H1B visa
- Inequality: How do organizations dampen, reinforce, or exacerbate inequalities? Interesting contexts include organizations that deliver healthcare.
- How to imagine alternatives to contemporary hierarchical organizations: the impact of Occupy and other contemporary democratic groups.
Of course, no discussion was complete without stories about dealing with the IRB.
If you’re working on one of the above ideas, or have other ideas for where the discipline can go, please do add them into the comments.
Last week, we had a fruitful discussion of graduate school and publishing. I think we all agreed that most graduate students should learn how to publish quickly. But we also raised some red flags. For example, we shouldn’t encourage people to publish “bad” articles. Others thought that we shouldn’t publish “too much.”
So let’s begin with a consensus: yes, if you are a graduate student, you should definitely learn the publishing process. Now let’s move on to lower-consensus issues. First, what counts as “bad” research? A few definitions:
- Research that is fraudulent.
- Research that is in a technical sense correct, but misleading.
- Research that is sloppy or poorly written.
- Research that is made in good faith, but in error.
- Research that is chopped up into lots of small chunks, in terms of article length/word or page counts.
- Research that makes extremely small or incremental arguments.
- Research that is in the “wrong” journal – low prestige, niche, online, or in a lower status discipline.
Now, when is it bad to publish work in any of these categories? There is overwhelming consensus that #1 is bad and should never be tolerated. In fact, academia has such a strong norm on #1 that fraudulent articles are almost always retracted and people might lose their jobs. I think we’d agree that #2 is also bad, though there is disagreement about what should be done with misleading articles, as we found out when discussing He-Who-Shall-Not-Be-Named-in-Texas.
Once we get past fraudulent and misleading research, it’s very unclear that any of the remaining categories can be claimed to be uniformly bad. For example, the garbage can paper (Cohen, March, and Olsen 1972) was shown to be very sloppy work (see the Bendor, Moe, and Shotts 2001 APSR article). No way around it. But, as olderwoman points out, powerful ideas are often presented in sloppy packages.
Then we get to #4: good faith papers with mistakes. In some cases, #4 is obviously bad. We find out that the answer is different when we correct our code – retraction. But in other cases, it’s ok. For example, among mathematicians, incorrect proofs are sometimes left in the record. The overall idea remains promising, but maybe some future scholar can read the mistake and fix it.
#5, #6 and #7 are clearly not universally bad. If you look around academia, you see that some fields hate, hate, hate small articles (history) while other fields exist primarily in tiny, tiny articles spread out in big and small journals. Even within one field, like sociology, you see huge variance. Demographers routinely “chop and spread it,” while ethnographers save it all for one big AJS/ASR article.
I’ll finish with how I think about my own publication strategy. My first allegiance is to knowledge. So I have never suppressed any article that I thought had a specific contribution to make, big or small. Also, in my own experience, I have benefited greatly from articles published in some obscure places. “Small” doesn’t mean dumb or useless. Just small – which might be very important to someone out there (including me).
What I have ended up with is a sort of triage: articles that are “big” in some sense are channeled to major journals, while “small” contributions are sent to niche journals. That results in an output stream where the modal publication is “small” but the stream is punctuated by a few “bigs.” Finally, one thing that I don’t do is rewrite the same article over and over. I make no claim that this is optimal, only that this is what you get if you believe that “small” contributions and niche journals have a place in the academic world.
David Courpasson is finishing his term as the editor of Organization Studies, the official publication of the European Group for Organizational Studies (EGOS). As a parting gift, he wrote an essay about what he feels is right and wrong (okay, mostly wrong) about the current state of organizational scholarship. The essay is provocative and a bit pessimistic, although not unfairly so. One of the major problems plaguing our field, Courpasson believes, is the development of a culture of productivity in social science, which seems to have most severely infected organizational and management research. In this culture of productivity, scholarship is not evaluated based on relevance or the quality of ideas but rather on the sheer volume of research that a scholar can produce. Professors are compelled to write lots of journal articles, and they push them out quickly in order to boost the length, but not necessarily the quality, of their CVs. Although he doesn’t mention it, this culture of productivity seems to have numerous institutional sources, including the practice of many departments that determine merit raises and tenure cases by “number counting” (i.e., deciding that someone deserves tenure based on the number of “A journal publications” the person has produced).
The consequence of this culture of productivity is an increase in the sheer volume of publications at the sacrifice of social relevance. The culture also has negative effects on the review and editing processes. Reviewers are worn out, editors are overwhelmed with new submissions, and there are simply too many journal articles to read and process. Here is an excerpt from Courpasson’s article:
[O]ur current system of scientific manufacturing creates more papers to review, with less committed and less timely reviewers, with a lower density of challenging ideas, as well as of ideas that are less significant for ‘the world’; in other words, for other worlds than the closest colleagues and networks. The culture of ideas is therefore vanishing: due to publishing pressures, people feel more and more pushed to submit any paper because rejection is not necessarily harmful: a new dynamic is created where work is routinely submitted anyway, sometimes in a real hurry (that is to say, even when clearly unfinished, including incomplete lists of references or variety of colours in the text), overburdening journals and editors. Here individual arbitrations surely play a role: authors’ visibility can indeed be maximized by small improvements enabled by journals’ insightful reviews; at the same time, thanks to this principle of productivity, potential papers to submit by a single author are multiplied, often in a logic of replication and repetition that also leads to ‘deviant’ behaviours such as self-plagiarism. But that adds some items in a resume and that is important because items are counted. Again, this is a counterproductive game: because volume does not always match quality and innovation, editors are more and more inclined to focus on flaws to purposively (although not willingly) narrow down the number of papers under review and obviously, in this ‘negativist’ cycle, innovative papers can be sacrificed by the necessity of correlating the ‘quality’ of a journal and a high (desk) rejection rate.
Let’s start with a question: Why should you believe that representative samples are good? The answer: representative samples produce estimates of the mean that are approximately normally distributed around the true value. In plain English, a nice, big random sample will produce an estimate that is probably close to the real answer.
Follow-up question: Does it logically follow that all unrepresentative samples produce systematically biased estimates? No, it doesn’t. To see why, you need a little Logic 101. In logic, it is well known that (A –> B) does not automatically entail ( not A –> not B), which is called the “inverse.” To see why ( not A –> not B) might be false, think about this conditional: “If it is a bat, it is a mammal.” Obviously, “if it is not a bat, then it is not a mammal” is not true. In general, you can’t make any inference about an inverse from a conditional. They are simply different animals.
Let’s return to social science research methods. Our original conditional is: ( the sample is random –> the estimates are normally distributed around the real mean). It doesn’t automatically follow that ( sample is not random –> the estimates are not normally distributed around the real mean). It *might* be true, but it’s not automatically true. It requires a separate argument.
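To make the logic concrete, here is a minimal sketch (the predicates and truth values are purely illustrative) that enumerates every truth assignment and shows that a conditional can hold while its inverse fails:

```python
def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Enumerate all four truth assignments for A and B,
# recording the conditional (A -> B) and its inverse (not A -> not B).
rows = []
for a in (True, False):
    for b in (True, False):
        rows.append((a, b, implies(a, b), implies(not a, not b)))

for a, b, cond, inverse in rows:
    print(f"A={a!s:5} B={b!s:5}  A->B={cond!s:5}  notA->notB={inverse!s:5}")

# The row A=False, B=True is the counterexample: the conditional is
# true, but the inverse is false ("not a bat" yet still "a mammal").
```

The counterexample row is exactly the bat case: something that is not a bat (A false) can still be a mammal (B true), so the conditional holds while the inverse does not.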
So far, I have not read a general argument showing that unrepresentative or biased samples in *all* cases lead to systematic biases in the estimated parameters. That’s probably because there isn’t such an argument, since samples can be biased for all kinds of reasons. In some cases, the bias may matter, but in other cases it may be irrelevant.
What’s the point? The point is that social scientists should abandon their knee-jerk rejection of unrepresentative samples. Instead, we should take a “case by case” approach to unrepresentative samples. We have to individually investigate each type of unrepresentative sample to determine if it can be used to estimate a parameter.
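A quick simulation illustrates the case-by-case point. In this sketch (the population, variable names, and numbers are all invented for illustration), a sample biased on a trait unrelated to the outcome still recovers the population mean, while a sample biased on the outcome itself does not:

```python
import random

random.seed(42)

# A toy population: each "person" has an outcome (income) and an
# unrelated trait (birth month) that may drive selection into the sample.
N = 100_000
population = [
    {"income": random.gauss(50_000, 10_000),
     "birth_month": random.randrange(12)}
    for _ in range(N)
]

true_mean = sum(p["income"] for p in population) / N

def sample_mean(people):
    return sum(p["income"] for p in people) / len(people)

# Biased sample 1: heavily over-represent people born January-March.
# Selection depends only on birth_month, which is independent of income.
biased_on_trait = [p for p in population
                   if p["birth_month"] < 3 or random.random() < 0.05]

# Biased sample 2: over-represent high earners.
# Here selection depends on the outcome itself.
biased_on_outcome = [p for p in population
                     if p["income"] > 55_000 or random.random() < 0.05]

print(f"true mean:         {true_mean:,.0f}")
print(f"biased on trait:   {sample_mean(biased_on_trait):,.0f}")
print(f"biased on outcome: {sample_mean(biased_on_outcome):,.0f}")
```

The first biased sample is wildly unrepresentative by birth month, yet its income estimate lands close to the truth; the second overshoots badly. Whether a sampling bias matters depends on whether the selection mechanism is related to the quantity being estimated.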
If you can accept that, then you open up a whole new world of data. For example, a lot of people can’t accept that futures markets are accurate forecasters of events. One of the arguments is that traders do not accurately resemble voters. True, but that doesn’t logically entail that it is impossible for trader behavior to mimic voter preferences. It *might* be true, but it is not automatically true based on the principle of random sampling. So what does the research say? Well, trading markets predict presidential election tallies better than random samples of voters (polls) 74% of the time. Not bad.
Bottom line: Yes, random samples are good, but they aren’t the last word. Social scientists should be on the lookout for data sources that perform well despite biases in sampling.
An alert about National Science Foundation (NSF) funding:
Earlier this year, restrictions were imposed on all National Science Foundation funding in political science, resulting in the cancellation of the Fall Grant cycle for political science. Legislation for the 2014 funding year is being considered now. Please contact your representatives in the House and Senate and tell them you want continued support for the Directorate on Social, Behavioral and Economic Sciences and oppose additional governmental restrictions.
A petition is available here.
For background on the political milieu leading to the cancellation of funding, see this past post.
Off-list, Howard Aldrich penned a heartfelt lament to Brayden and me about the one-sided exchange between sociology and economics. He described a recently published article in which an economist urges fellow economists to conduct research on how organizational identity motivates workers to work hard because (surprise!) monetary incentives aren’t sufficient.
With Aldrich’s permission (but without naming the offending article and author), I am excerpting his thoughts here:
“What is heartbreaking is that there’s no sign in this article that the author has any clue that sociology and management & organization theory have been concerned with such questions for decades, or that there is a rich and robust literature on organizational culture, social identity, and so forth. Although the author mentions the social psychology of identity at one point (Ed. Note: plus 2 mentions of March and Simon’s work as “seminal”), all but a handful of the 60+ references are to the literature in economics.
Several years ago, I had a similar experience when I read a special issue of an entrepreneurship journal that was devoted to entrepreneurial teams. It contained an economist’s algorithmically driven analysis of why and how entrepreneurial teams should form. Plenty of other economists were cited, but he seemed clueless to the fact that, five years previously, a couple of sociologists (namely, Martin Ruef and me, together with a business administration scholar) had written an empirical paper, based on a nationally representative sample, addressing precisely some of the idle speculation he’d written up in his paper. I was so irritated that I called up the special issue editor, who apologized profusely but offered no explanation.
So, for economics, all that matters is what other economists have done. I’m sure this simplifies the literature search process, but one can imagine that some insights might be sparked if economists were occasionally to dip into the literature of other fields. For example, what came to mind immediately upon reading the first article was Bill Ouchi’s rather famous (at least to me) book from 1981, Theory Z, which was one of the first books to ride the wave of the “organizational culture” phenomenon in organization and management studies.”
In a follow-up email, Aldrich expressed the desire for economists either to share or to return home:
“I just want them to either go back to their own village or else begin engaging in a more fair exchange….The problem is that I doubt very much whether we can ever create a truly equitable exchange with economists. I’ve seen the same pattern for years, and indeed Chick Perrow actually talked about something like “invasion of the body snatchers” in talking about when economists came into our field.”*
Since economists are supposedly prone to practicing what they preach, could it be that the discipline of economics is ill-suited to contributing to a knowledge commons?
Several sociologists (Matt Wray, Jon Stern, and myself) and an anthropologist (S. Megan Heller) have a round table discussion on Burning Man at the Society Pages. We’ve all done research at Burning Man, an annual temporary community in Nevada that has inspired events and organizations worldwide.
Have a peek at our discussion, which includes ideas for future studies. We discuss answers to questions such as:
“Why might the demographics of the Burning Man population be of interest to researchers? For instance, there is a cultural trope that people who go to Burning Man are often marginalized individuals—outsiders in some way. Could the festival’s annual Census be used to measure this rather subjective characteristic of the population? Is there a single “modal demographic” (that is, a specific Burner “type”) or are there many? What else does the Census Lab measure (or not measure)?”
“Burning Man sometimes gets portrayed as little more than a giant rave—a psychedelic party on the playa. It is like a party in many ways, but those of us who go know that the label doesn’t begin to capture the full experience. What larger phenomena does Burning Man represent in your research? In other words, how do you categorize the event and why should we take it seriously?“
The shoe has dropped for the political scientists. The NSF has suspended funding, probably out of fear of Congress.
My take away? Don’t be so dependent on one customer. Sociology doesn’t get that much from NSF anyway, but we should think about alternate sources.
Here’s a simple idea. Why not take all that sweet ASR subscription money and funnel it into an ASA-controlled foundation that supports sociological research? That way, we have independence.
The plaintiff: Andrew Gelman – fellow blogger and poli sci pugilist. The defendant: Nicholas Christakis – sociologist, physician, tweeter. The claim: Christakis wrote the following, which made Gelman, like, really mad:
The social sciences have stagnated. They offer essentially the same set of academic departments and disciplines that they have for nearly 100 years: sociology, economics, anthropology, psychology and political science. This is not only boring but also counterproductive, constraining engagement with the scientific cutting edge and stifling the creation of new and useful knowledge. . . .
I’m not suggesting that social scientists stop teaching and investigating classic topics like monopoly power, racial profiling and health inequality. But everyone knows that monopoly power is bad for markets, that people are racially biased and that illness is unequally distributed by social class. There are diminishing returns from the continuing study of many such topics. And repeatedly observing these phenomena does not help us fix them.
Gelman’s complaint? It’s a little hard for me to understand, but he doesn’t like the fact that Christakis said that we have really beat some topics into the ground and that maybe we should expand a little:
Regarding the question of illness being distributed by social class: Is it really true that “everybody knows,” for example, that Finland has higher suicide rates than Sweden, or that foreign-born Latinos have lower rates of psychiatric disorders? These findings are based on public data so everybody should know them, but in any case the goal of social science is not (just) to educate people on what should be known to them, but also to understand why. Why why why. And also to model the effects of potential interventions.
Christakis is making a point about the maturity of research topics, not public knowledge of specific results. For example, the “SES gradient” is one of the most well established results in all of health research. It appears in every single sociology of health class and it is not easy (though certainly not impossible) to find a health condition where SES (or income or status) doesn’t affect the likelihood of contracting the condition or recovering. In other words, if you know anything about sociology or health, you know this finding and it is very, very, very well established.
Of course, within any field, there are notable puzzles, like the finding that immigrants (in the US) tend to be healthier than second-generation people. I’m a bit puzzled by the importance of the suicide fact. Perhaps suicide is an exception, but I believe the SES gradient enough that I’d wager that for many important health conditions (a) SES within Finland (or Sweden) makes a big difference or (b) wealthy countries do better on the condition than poor countries (e.g., Finland v. Sweden is probably not as important as Finland v. Gambia or Guatemala).
Gelman raises the issue of causation, and once again, it seems like he’s missing the point. Christakis is not suggesting that people stop investigating causes. Rather, it’s about the relative amount of effort. Hundreds of papers have attempted to explain the SES gradient in one way or another. In fact, it’s come to the point that if I see a talk that is about SES and health, I can nearly always predict the tables and coefficients – and I’m not even a specialist on the topic. This suggests that the marginal benefit of yet another study on the SES gradient is likely to be small. Instead, maybe people should look into new areas of inquiry unless they have a really, really, really amazing way to get at causation.
Judgment: The Court of Orgtheory finds against the plaintiff and in favor of meeting some new people.
In a previous post’s discussion, Graham Peterson kindly shared a great link of videos with Howie Becker’s thoughts about conducting research, specifically at the graduate level. I found the last video clip of particular interest. In “10. Savoir finir : Comment achever une thèse alors que les données de terrain ne cessent d’affluer ? [Knowing how to end: how to finish a thesis when field data keeps arising?],” Becker discusses the issue of knowing when to end data collection. Some signals make this clear – the funding runs out, the return ticket’s date shows up, or less frequently, the field site closes. (For historians, the re-closure of an archive is the equivalent.)
I would add another possibility – sometimes the data collection doesn’t end, and a researcher continues to analyze, write, and publish along the way, particularly if the phenomena under study change and spark additional areas of inquiry. New research questions may arise, or the researcher may add other field sites for comparison. As I tell grad students who are deciding among research projects, it’s likely that a researcher will live with an ethnographic research project for years beyond the dissertation’s completion, particularly if s/he writes a book and needs to publicize it. Of course, out of expedience or boredom, some researchers will quickly move on to another research project. However, with ethnographic research, the researcher who can return to the same field site faces the dilemma of sunk costs – forming relations with informants and developing expertise and local knowledge all take time, and it may be difficult to give all that up, especially when the passage of time starts to reveal dynamics not readily apparent before.
I keep hearing about the coming big data revolution. Data scientists are now using huge data sets, many produced through online interactions and media, that shed light on basic social processes. Big data sets, from sources like Twitter, Facebook, or mobile phones, give social scientists ways to tap into interactions and cultural output at a scale that has never been seen before in social science. The way we analyze data in sociology and organizational theory is bound to change due to this influx of new data.
Unfortunately, the big data revolution has yet to happen. When I see job candidates or new scholars present their research, they are mostly using the same methods that their predecessors did, although with incremental improvements to study design. I see more field experiments for sure, and scholars seem more attuned to identification issues, but the data sources are fairly similar to what you would have seen in 2003. With a few notable exceptions, big data have yet to change the way we do our work. Why is that?
Last week Fabio had a really interesting post about brain drain in academia. One reason we might see less big data than we’d like is that the skills needed to handle this type of analysis are rare, and much of the talent in this area is finding that research jobs in the for-profit world are more lucrative and rewarding than what they’re being offered in academia. I believe that’s true, especially for the kinds of people who are attracted to data mining techniques. The other problem, I think, is that social scientists are having a hard time figuring out how to fit big data techniques into the traditional milieu of social science. Sociologists, for example, want studies to be framed in a theoretically compelling way. Organizational theorists would like scholars to use data that map on to the conceptual problems of the field. It’s not always clear in many of the studies that I’ve read and reviewed that big data analyses are doing anything new other than using big data. If big data studies are going to take over the field, they need to address pressing theoretical problems.
With that in mind, you should really read a new paper by Chris Bail (forthcoming in Theory and Society) about using big data in cultural sociology. Chris makes the case that cultural sociology, a subfield that is obsessed with understanding the origins of and practical uses of meaning, is prime for a big data surge. Cultural sociology has the theoretical questions, and big data research offers the methods.
More data were accumulated in 2002 than all previous years of human history combined. By 2011, the amount of data collected prior to 2002 was being collected every two days. This dramatic growth in data spans nearly every part of our lives from gene sequencing to consumer behavior. While much of these data are binary and quantitative, text-based data is also being accumulated on an unprecedented scale. In an era of social science research plagued by declining survey response rates and concerns about the generalizability of qualitative research, these data hold considerable potential. Yet social scientists – and cultural sociologists in particular – have ignored the promise of so-called ‘big data.’ Instead, cultural sociologists have left this wellspring of information about the arguments, worldviews, or values of hundreds of millions of people from internet sites and other digitized texts to computer scientists who possess the technological expertise to extract and manage such data but lack the theoretical direction to interpret their meaning in situ….[C]ultural sociologists have made very few ventures into the universe of big data. In this article, I argue inattention to big data among cultural sociologists is particularly surprising since it is naturally occurring – unlike survey research or cross-sectional qualitative interviews – and therefore critical to understanding the evolution of meaning structures in situ. That is, many archived texts are the product of conversations between individuals, groups, or organizations instead of responses to questions created by researchers who usually have only post-hoc intuition about the relevant factors in meaning-making – much less how culture evolves in ‘real time’ (note: footnotes and references removed).
Chris goes on to offer suggestions about how cultural sociology might use big data to address big theoretical questions. For example, he believes that scholars studying discursive fields would be wise to use big data methods to evaluate the content of such fields, the relationships between actors and ideas, and the relationships between different fields. Of course, much of the paper is about how to use big data analysis to enhance or replace traditional methods used in cultural sociology. He discusses how Twitter and Facebook data might supplement newspaper analysis, a fairly common method in cultural and political sociology. Although he doesn’t go into great detail about how you would do it, an implicit argument he makes is that big data analysis might replace some survey methods as ways to explore public opinion.
I continue to think there is enormous potential for using big data in the social sciences. The key for having it accepted more broadly is for data scientists to figure out how to use big data to address important theoretical questions. If you can do that, you’re gold.
Two uncomfortable, if not disconcerting, realizations of academic publishing are that (1) people don’t read, and (2) if they do read and write, they don’t cite relevant or appropriate work. A hard-working academic’s day can be quickly sent down the dark hole of despair or rage face when reading a manuscript or publication that doesn’t properly cite relevant work or (ahem) one’s own seminal work. Worse, if cited, the work may be cited wrongly, or the minor points may override the major takeaways in subsequent research and citations. In these situations, the imperative that one should contribute to standing on the shoulders of giants calls to mind the bleak image of a whale fruitlessly calling out for colleagues in an endless sea.
More whale-calling, channeling Andy Abbott, below the fold…
Read the rest of this entry »
For our budding orgheads, a new doctoral research award of possible interest:
“Emerald and the European Foundation for Management Development (EFMD) announce the launch of the 2013 Emerald/EFMD Outstanding Doctoral Research Awards.
These Awards reward the best doctoral research projects in 12 different categories, each sponsored by an Emerald journal. For a list of all categories and details of how to apply please visit the webpage below:
The Awards are open to those who have completed and satisfied examination requirements for a Doctoral award, or will do so, between 1 October 2010 and 1 October 2013, and have not applied previously for one of these Awards. The closing date for receipt of applications is 1 October 2013.
Award-winning entries will receive a cash prize of €1,500 (or currency equivalent), a certificate and a winners’ logo to attach to correspondence.”