Archive for the ‘brayden’ Category
Heads turn whenever accusations are made about academic impropriety, but it is especially provocative when one of the researchers involved in a study makes accusations about his/her own research. A forthcoming article in the Journal of Management Inquiry written by an anonymous senior scholar in organizational behavior does exactly this. The author, who remains anonymous to protect the identity of his/her coauthors, claims that research in organizational behavior routinely violates norms of scientific hypothesis testing.
I want to be clear: I never fudged data, but it did seem like I was fudging the framing of the work, by playing a little fast and loose with the rules of the game—as I thought I understood how the game should be played according to the rules of the scientific method. So, I must admit, it was not unusual for me to discover unforeseen results in the analysis phase, and I often would then create post hoc hypotheses in my mind to describe these unanticipated results. And yes, I would then write up the paper as if these hypotheses were truly a priori. In one way of thinking, the theory espoused (about the proper way of doing research) became the theory-in-practice (about how organizational research actually gets done).
I’m certain that some people reading this will say, “Big deal, people reformulate hypotheses all the time as they figure out what their analysis is telling them.” The author recognizes this is the case and, I believe, relates his/her experience as a warning of how the field’s standards for writing are evolving in detrimental ways. For example, the author says, “there is a commonly understood structure or ‘script’ for an article, and authors ask for trouble when they violate this structure. If you violate the script, you give the impression that you are inexperienced. Sometimes, even the editor requires an author to create false framing.” Sadly, this is true.
All too often reviewers feel that it is their role to tell the authors of a paper how to structure and rewrite hypotheses and how to explain their analysis results. Some reviewers, especially inexperienced ones, may do this because they feel they are doing the author(s) a favor – they’re helping make the paper cleaner and more understandable. But the unintended consequence of this highly structured way of writing and presenting results is that it forces authors into a form of mild academic dishonesty, in which they do not allow the exploratory part of the analytical process to be transparent.
Some journals have a much stronger ethos about hypothesis testing than others. AJS is looser and allows authors more freedom in this regard. But some social psych journals (like JPSP) have become extremely rigid in wanting to see hypotheses stated a priori and then tested systematically. I would love to see more journals encourage variability in reporting of results and allow for the possibility that many of our results were, in fact, unexpected. I would love it if more editors chastised reviewers who want to force authors into a highly formulaic style of hypothesizing and writing results. It simply doesn’t reflect how most of us do research.
Perhaps the anonymous author’s tale will ignite a much needed discussion about how we write about social scientific analysis.
At the ASA meetings last August I was lucky to participate in an authors-meet-critics session for John Padgett and Woody Powell’s new book, The Emergence of Organizations and Markets. That vibrant session has now been published as a book symposium in the political sociology section newsletter, which you can download here. My comments are a bit critical at points. I’m not convinced that the concept of “autocatalysis” is especially useful. John and Woody’s responses are definitely worth reading, though. As you’d expect, they rise to the occasion and give a convincing defense of their perspective.
Being a part of the symposium has got me thinking more about different modes of theorizing and making way for the role of humans and actor motivation in sociological theory. Stay tuned for more thoughts on this in the near future.
Usually when someone starts throwing citation impact data at me, my eyelids get heavy and I want to crawl into a corner for a nap. Like Teppo wrote a couple of years ago, “A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, is that one better than this, what are the “A” journals, etc. Boring.” I couldn’t agree more. Unfortunately, I’ve heard a lot about impact factors lately. The general weight of impact factors as a metric for assessing intellectual significance has seemed to skyrocket since the time I began training as a sociologist. Although my school is not one of them, I’ve heard of academic institutions using citation impact as a way to incentivize scholars to publish in certain journals and as a measure to assess quality in hiring and tenure cases. And yet it has never struck me as a very interesting or useful measure of scholarly worth. I can see the case for why it should be. Discussions about scholarly merit are inherently biased by people’s previous experiences, status, in-group solidarity, personal tastes, etc. It would be nice to have an objective indicator of a scholar’s or a journal’s intellectual significance, and impact factors pretend to be that. From a network perspective it makes sense. The more people who cite you, the more important your ideas should be.
My problem with impact factor is that I don’t trust the measure. I’m skeptical for a few reasons: gaming by editors and authors has made it less reliable, it lacks face validity, and it is unstable over time. Let me touch on the gaming issue first.
Some of you are attending the Academy of Management meetings this weekend in Philadelphia. As always, AOM is chock-full of parties, receptions, business meetings, and a few interesting panels as well. Here are a few of the panels that I think are worth seeing:
Habitus: Theoretical Foundations and Operationalization for Organization and Management Theory (including talks by John Mohr, Klaus Weber, & Marc Ventresca), Saturday at 11:45
Symbolic Management in the 21st Century (w/ Mike Pfarrer, Mae McDonnell, Jonathan Bundy, and myself), Monday at 9:45
Affinities of Language, Cultural Tool Kits, Institutional Logics: Advancing Strategies of Action (w/ Pat Thornton, Mary Ann Glynn, Steve Vaisey, Omar Lizardo, and Willie Ocasio), Monday at 11:30
The More the Merrier: Integrating Civil Society and the State in Innovation Research (including Huggy Rao, Bogdan Vasi, Sarah Soule, Jeff York, Chuck Eesley, and Shon Hiatt), Monday at 3
Where Do Capabilities Come From? (w/ Teppo Felin, Jay Barney, Michael Jacobides, and Todd Zenger), Monday at 4:45
The Manifestations of Social Class in Organizational Life (including a talk by my colleague Lauren Rivera), Tuesday at 9:45
And if you missed the OMT party last night, don’t worry, there’s another one Monday at 7:30 in room 204 of the Convention Center. There will be free drinks!
A couple of weeks ago I was at a workshop at Oxford about NGOs and reputations. The workshop was sponsored by the Centre for Corporate Reputation and gathered scholars from a number of disciplinary backgrounds to explore how NGOs create and maintain reputations. In addition, we were interested in examining the reputational consequences that result from their interactions with corporations. At the end of the workshop I shared some of my takeaways.
It occurred to me that a number of the papers in the workshop conceptualized NGO reputation in a similar way to how we think about corporate reputations. For example, we assume that reputations are shared perceptions that reflect how an organization (successfully or unsuccessfully) differentiates itself from competitors, or we learn that organizations strategically try to manage the impressions of their key audiences in order to create a positive reputation. But if NGO reputations are similar in most ways to corporate reputations, do we learn anything new by studying NGOs that we couldn’t learn by studying for-profit organizations? Do NGO reputations differ fundamentally from corporate reputations?
I think they are different in at least one really important way: NGOs are valued because we believe they are somehow more morally authentic than other kinds of organizations. Therefore, an NGO’s reputation is grounded in how well it meets its audience’s expectations for moral authenticity. Two questions might come to mind as I try to make the link between moral authenticity and reputation. The first is, what does it mean to be authentic anyway? It’s quite possible that the term is too fuzzy to be analytically useful, or perhaps we only ascribe authenticity to organizations in a post hoc way. And second, why should NGOs be expected to be any more morally authentic than other organizations?
I’ve spent the past few days at the EGOS meetings in Rotterdam. If you’re not an organizational scholar, EGOS is the acronym for the European Group for Organizational Studies – an interdisciplinary network of organizational scholars from both sides of the ocean. The theme of this year’s meeting was about reimagining and rethinking organizations during unsettled times. Naturally, they asked Jerry Davis – who has done more reimagining and rethinking of organizational theory than most – to be the keynote speaker.
Jerry’s keynote was, as expected, a witty, concise, empirically driven argument for why the corporation has ceased to be a major institution in society (the impromptu dancing was an unexpected delight). If you’re not familiar with his argument, you should really read his book, Managed by the Markets, a real page-turner that explains how the growth of financial markets accompanied the deterioration of the public corporation as a major employer and provider of public welfare in contemporary society. I’ve heard him give a version of this talk several times, and, as every time before, I left feeling uncomfortable with some of his conclusions. Feeling uncomfortable is an understatement. I disagree with his conclusions. But I still think that Jerry has done an excellent job of marshaling data that can lead to a scarier and even more cynical conclusion than the one he claims.