Archive for the ‘research’ Category
I’ve got some new material in the pipeline and I’m looking to workshop it. If your campus is a day’s drive or less from Bloomington and you need to fill up your seminar schedule, drop me an email. All I ask is that you buy me lunch. Topics: a new social theory manuscript; antiwar movement research; how organizations produce scientific knowledge.
A group at MIT has revealed a new drug that might revolutionize treatment of viral infections. The drug selectively causes infected cells to destroy themselves before the virus spreads, thus shutting down and eliminating viruses. Since it’s a generic strategy – cell suicide is triggered by certain chemicals appearing whenever *any* virus shows up – it might be able to cure anything from AIDS to influenza. Certainly, if it pans out, the biggest thing since antibiotics in the 1940s.
Here’s an orgtheory observation. This humongous finding was reported in PLoS ONE, the innovative online journal. Why is this a big deal? Well, PLoS ONE is an online, open access journal. And there’s another key difference. From the wiki:
PLoS ONE is built on several conceptually different ideas compared to traditional peer-reviewed scientific publishing in that it does not use the perceived importance of a paper as a criterion for acceptance or rejection. The idea is that, instead, PLoS ONE only verifies whether experiments and data analysis were conducted rigorously, and leaves it to the scientific community to ascertain importance, post publication, through debate and comment.
In other words, what might be one of the first major revolutionary discoveries in medical science was reported in a journal that is online, free, and focuses on technical skill while leaving “relevance” to the reader. They didn’t bother with the major basic science or clinical journals. More evidence that the journal system we now use is a dinosaur.
Bruce Western and Jake Rosenfeld published an important paper in the American Sociological Review that shows that deunionization has significantly contributed to increases in economic inequality. They make the case that the effect of deunionization on inequality growth is partly the result of a change in norms surrounding equity. Unions “contribute to a moral economy that institutionalizes norms for fair pay, even for nonunion workers.” When unions are less powerful and as those norms fade, even the wages of nonunion workers decline.
A variance decomposition analysis estimated the effect of union membership decline and the effect of declining industry-region unionization rates. When individual union membership is considered, union decline accounts for a fifth of the growth in men’s earnings inequality. Adding normative and threat effects of unions on nonunion pay increases the effect of union decline on wage inequality from a fifth to a third. By this measure, the decline of the U.S. labor movement has added as much to men’s wage inequality as has the relative increase in pay for college graduates. Among women, union decline and inequality are only related through the link between industry-region unionization and nonunion wage dispersion. Union decline contributes just half as much as education to the overall rise in women’s wage inequality. These results suggest unions are a normative presence that help sustain the labor market as a social institution, in which norms of equity shape the allocation of wages outside the union sector (532-33).
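For readers who haven’t seen a variance decomposition before, the between-group/within-group logic behind this kind of analysis can be sketched with a toy example. Everything below is synthetic (the group labels and hourly wages are invented for illustration) and is a sketch of the general technique, not Western and Rosenfeld’s actual model:

```python
# Toy between/within variance decomposition (synthetic data; an
# illustration of the general technique, not the paper's model).
wages = {
    "union": [20.0, 22.0, 21.0],     # invented hourly wages
    "nonunion": [15.0, 18.0, 16.0],
}

all_wages = [w for group in wages.values() for w in group]
n = len(all_wages)
grand_mean = sum(all_wages) / n

def variance(xs, mean):
    """Population variance of xs around a given mean."""
    return sum((x - mean) ** 2 for x in xs) / len(xs)

total = variance(all_wages, grand_mean)

# Within-group component: each group's own variance, weighted by size.
within = sum(
    len(g) * variance(g, sum(g) / len(g)) for g in wages.values()
) / n

# Between-group component: spread of group means around the grand mean,
# weighted by group size.
between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in wages.values()
) / n

# The two components add up to the total variance exactly.
assert abs(total - (within + between)) < 1e-9
```

Decompositions like the one in the paper then attribute changes in the between and within components over time to factors such as declining union membership; the identity total = between + within is the starting point.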
An interesting comparison is the paper by DiPrete et al. (2010), who showed that increases in CEO pay are the result of firms “leapfrogging” their compensation benchmarks. Firms first identify peer groups, and they then try to match or exceed what their peers pay their executives. Continual leapfrogging of peers leads to an escalation in CEO compensation. The same kind of benchmarking was happening with union wages, but the pattern was moving in the opposite direction. Nonunion firms essentially pegged their wages to those of union firms. As unions lost negotiating power, they no longer had the ability to set wage targets, and nonunion firms were not forced to raise their employees’ wages at the same pace they had in prior years. This lack of normative pressure to keep up with the Joneses gradually eroded notions of fair pay.
Last week, we had a great discussion about papers that were clear and insightful empirical pieces. “Sweet.” Now, let me ask a related question – what papers have you found to be inspirational? My own is probably the Padgett and Ansell (1993) paper about networks and Florentine politics. I had only begun to study networks and I hadn’t yet seen a paper that really revealed a social structure that was otherwise opaque to me. It made me want to do nitty gritty historical/quant work.
What paper have you found to be especially inspirational?
I was recently asked about research that addresses the issue of government agency coordination. What stands out on this topic?
The Federation of Associations in Behavioral and Brain Sciences (FABBS) has put together a way to “take action” to ensure that Congress does not defund the Social, Behavioral and Economic Sciences (SBE) directorate of the National Science Foundation.
In recent weeks, a number of U.S. House and Senate members have been critical of the National Science Foundation, especially the agency’s funding of research in the social sciences. One Senator specifically called for the elimination of the Social, Behavioral, and Economic Sciences Directorate at NSF.
Over the next several weeks, the U.S. House will vote on the appropriations bill that funds NSF. Amendments are expected to be offered during the full appropriations committee’s consideration of the bill scheduled for July 13 and again the first week in August when the bill is expected to reach the House floor.
Via Shamus Khan – David Brooks defends NSF funding for the social sciences.
Happy Independence day to all of our American readers! In honor of the holiday, I thought I’d post a paper that focuses on two aspects of our American legacy: baseball and racial bias. The paper, “Strike Three: Discrimination, Incentives, and Evaluation,” was published in the most recent issue of the American Economic Review. The authors show that Major League Baseball umpires are less likely to call strikes for pitchers who are of a different race. So, white umpires call fewer strikes when the pitcher is black and vice versa. Interestingly, the authors also show that racial bias dissipated after MLB instituted computer technology that allowed third parties to evaluate the accuracy of umpires’ strike calling, suggesting that close scrutiny may help alleviate this type of bias (at least in employment settings). Here’s the abstract:
Major League Baseball umpires express their racial/ethnic preferences when they evaluate pitchers. Strikes are called less often if the umpire and pitcher do not match race/ethnicity, but mainly where there is little scrutiny of umpires. Pitchers understand the incentives and throw pitches that allow umpires less subjective judgment (e.g., fastballs over home plate) when they anticipate bias. These direct and indirect effects bias performance measures of minorities downward. The results suggest how discrimination alters discriminated groups’ behavior generally. They imply that biases in measured productivity must be accounted for in generating measures of wage discrimination.
NSF funding for the social sciences is under threat. Hillary Anger Elfenbein, orgs scholar at WashU, testified before Congress on this matter, but she needs some help (orgheads might have answers to her questions). Here’s an email that she sent around on the OB listserv (posted with permission).
As many of you may know, there have been recent suggestions by Republican members of the U.S. House and Senate to eliminate NSF funding of the entire slate of Social, Behavioral, and Economic (“SBE”) sciences—which includes organizational behavior and related disciplinary research in psychology, sociology, and economics.
This is chilling for our field.
I recently testified before Congress on this topic as the House Committee on Science, Space, and Technology held a hearing with the goal to educate members about the value to the US taxpayer of the social and behavioral sciences. This kind of hearing sets a tone and provides Congressional members with talking points for later discussions of financial appropriations.
(In case anyone is interested, you can see a webcast of the hearing below. The transcript and written statements will be posted there in about two weeks:
My written and oral statements are online at:
The reason for this note is that, after the hearing, I was asked two more questions by the committee chair with a request for responses to be submitted in writing for the record. In preparing my responses, I have two requests for colleagues. The deadline is July 5th, and realistically it would be possible to incorporate anything received by the end of the week.
A. Any suggestions that you might have for me in answering these questions.
B. Brief notes from anyone interested, which I can append to my written responses (no more than a paragraph, please). This is a chance for your input to go into the Congressional record.
Okay, so here are their questions, and these are BIG questions:
1. NSF is essentially the only federal agency that historically does not receive earmarks. It prides itself on the merit-review process which, while not perfect, is currently the best we have. Given its imperfections and the reality that some less than stellar grants are funded in ALL scientific disciplines, how would you recommend that it be improved?
2. In your testimony, you state that “Agencies like the NSF are in the best position to prioritize federal funding for SBE research…” Besides highlighting “transformative” research, how else can NSF prioritize research? Are there other elements that you would suggest focusing on to guide prioritization?
Big picture thoughts and tactical details are equally welcome.
Hillary Anger Elfenbein
Say I have rank data over time in an Excel spreadsheet. E.g., the Hoosiers were ranked #1 in basketball in 1977, #6 in 1978, etc. How do I make a figure where the line tracks rank over time? In other words, how do I flip the Y axis so small numbers are at the top?
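In Excel itself, the standard fix is to open Format Axis for the vertical axis and check “Values in reverse order.” If you’d rather script the figure, here is a minimal sketch in Python with matplotlib (the years and ranks below are made-up sample data); `invert_yaxis()` does the flip so rank #1 sits at the top:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import matplotlib.pyplot as plt

# Made-up sample data: year -> national ranking (1 = best)
years = [1977, 1978, 1979, 1980, 1981]
ranks = [1, 6, 3, 10, 7]

fig, ax = plt.subplots()
ax.plot(years, ranks, marker="o")
ax.invert_yaxis()  # flip the y axis so rank #1 is at the top
ax.set_xlabel("Year")
ax.set_ylabel("Rank (1 = best)")
fig.savefig("rankings.png")
```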
I am delighted to serve as a guest blogger on orgtheory.net. I have been meaning to get into the blogging game for quite some time. I am an avid reader of various blogs, and I always wondered about people who had the pluck to release their thoughts to the internet world, without the benefit of editors, peer reviewers, and the scads of people that we rehearse our arguments in front of during academic conferences. So here I am, taking up the challenge and coming to you weekly for the next month.
I have to admit, I found myself with competing pulls today as my ‘to do’ list seemed to be growing instead of shrinking. In addition to writing this blog, I am also preparing to speak to an Illinois-based group of HIV/AIDS activists and service providers about my research. I am currently conducting a study of the economic and social survival strategies of a racially and socioeconomically diverse group of women living with HIV/AIDS in Chicago. With the 30th Anniversary of some of the first documented cases of HIV/AIDS upon us, it is definitely a time for reflection on the epidemic, the medical and social advances made in our understanding of the disease, and the long way we still have to go in eradicating the epidemic.
But in fact, I am frequently asked to give presentations on my HIV research to what scholars call “the community”: the group of people actually living with, speaking up about, fighting for, and fighting against the particular phenomenon that we study.
It raises the question: what are our responsibilities as scholars to the communities we study?
One of the most important things we do as members of an intellectual community is assist in peer review. As important as it is, reviewing papers is one of the tasks that receives the least amount of attention in graduate school training. We certainly learn how to critique in grad school, but, as you’ll see by the editors’ comments below, critiquing is not the same thing as reviewing. Most of us learn how to be reviewers simply by doing it. While there will never be a definitive how-to manual for reviewing, I thought it would be nice if our field could identify some of the best practices in reviewing. With that idea in mind, I asked a number of current and former editors at journals in organizational theory and sociology to comment about what they think makes a good review. This post includes their thoughts.
You’ll notice that the editors seem to agree on several important points (e.g., be constructive!), but there is some variation as well. Some of the editors make very specific and useful points about what reviewers should and should not be recommending in their reviews. Rather than summarize, I’ll just let you read it for yourself. I’ve put their responses in no particular order.
A few years ago, Tom DiPrete gave a talk at the ASA meetings about the history of the Columbia program. One very interesting section of the talk, in my view, was a description of the social research institute as it was run decades ago. What caught my ear was that the institute took on all kinds of projects from the private sector. That suggested a question: why don’t more sociologists do research based on projects paid for by the private sector?
A few possible answers:
- Soc programs actually do lots of private consulting (e.g., Ron Burt’s data is often drawn from research done for private interests), I just don’t know about it.
- It’s easier to stick to existing data sets like GSS or Census than take on private projects with uncertain outcomes.
- Private industry doesn’t care for what sociology can offer.
- Private industry doesn’t want to share its data for academic analysis.
- Sociologists are allergic to working with private groups for ideological and/or stylistic reasons.
- This concept simply hasn’t occurred to most sociologists.
- Private groups don’t realize that sociologists can do cool research for them.
- Just added: Conflict of interest.
Any other ideas? Does anyone with experience in consulting care to add something?
What makes a study interesting? Is it the problem (e.g., financialization; boycott success) or is it the theoretical question that the problem is meant to address? For many organizational theorists, I would expect the answer would be the theoretical question. Organizational research, and much of sociology for that matter, is driven by making contributions to theory, no matter how small or seemingly mundane the problem itself may be. I don’t think that’s necessarily a bad thing. Teppo and I called this blog orgtheory and not orgresearch for a reason. We like theory. I like the big abstract puzzles that seem pointless to outsiders but that keep many of us up late at night.
The emphasis on “big theory” poses some constraints for getting published in our field. Some, including Don Hambrick, have lamented this trend as the feeling is that it causes scholars to push beyond their data and to make claims that are unjustified. Scholars, the critics claim, have replaced theorizing with a “theory fetish.” Whether they intend to or not, scholars sometimes turn theory into neat packaging without any real attempt to integrate or replicate.
But not all areas of scholarship are equally focused on “big theory.” Some subfields pride themselves on doing little theory and instead examine the “big problems.” Take demography as an example. Nope, not much theory there, but the problems (e.g., changes in the divorce rate; immigration patterns) are big. Of course, it’s not as if demographers are just descriptive. They are interested in explaining problems and may use a little theory as they search for mechanisms that explain the big problems at the heart of their analysis. It’s just that the starting point for their analysis is more about the problem than the theory. Being able to shed light on a really big problem is given more weight in the review process than making a big theoretical contribution. Although demography may be on the extreme side of the continuum, I think that some branches of historical sociology and strategy research are like this as well.
Why are demography and strategy research more problem oriented than economic sociology or organizational theory? My sense is that it’s because there is more consensus in those subfields about what the big problems are. Demographers may not agree with one another about which theories matter, but they do have high consensus around what big problems they ought to be interested in explaining/solving. This is similarly true in strategy research. You can pinpoint strategy’s focus to one set of DVs – performance. If you can shed some light on explaining this problem – how to improve financial performance – I don’t think it really matters what theory you use or if you have much of a theory at all. But in organizational theory, there is much less consensus about what the big problems are, and so we instead focus on generating consensus around what the big theoretical questions are. It turns out that there are a handful of big theoretical questions (e.g., institutional change; network effects vs. cognition) but people who are in the subfield develop a sense for what these are and then figure out how to develop research projects to address those questions. Sometimes, and probably way more often than is useful, scholars start with the topical problem and then, after they’ve done some analysis or finished their ethnography, try to cram their research problem into a theoretical question. Sometimes it works and sometimes it doesn’t. When it doesn’t work, the resulting paper has all of the problems associated with “theory fetish” – e.g., superficial contribution, weak mechanisms.
Is there a place for more problem-oriented research in organizational theory? I’d like to think so. But to get there we need to start talking more about what the big problems are in the organizational world and to get over the false idea that “little theory” is bad. Little theory doesn’t mean no contribution at all; it just means that the contributions arrive as a consequence of a detailed examination of a big organizational/social problem. I take Jerry Davis’s and Greta Krippner’s books as great examples of problem-focused work that also happen to make nice theoretical contributions along the way. I think we need more research like this.
I’ve updated my website. Same format, but I’ve updated the following, for the first time since 2006 (!):
- Biography – updated description of my research.
- Research – every paper & working paper, plus links to selected media coverage.
- Teaching – same grad school advice, but I added my undergrad courses.
- Miscellaneous – fun list of music & art links.
Thanks for reading.
Your co-authors are probably just as busy as you are. So how do you get co-authors to focus on your joint project? There’s no manual on this. It’s probably highly idiosyncratic: it depends on the unique working relationship that you have with your co-author.
But what might be generic strategies for “motivating” co-authors? (This presumes that you yourself are motivated.) Here are some quick strategies that come to mind:
- Corner your co-author. Erdős famously showed up on co-authors’ doorsteps (even at 2am) — incidentally, he had LOTS of co-authors (511!) — and exclaimed “my mind is open.” Try something like that. More generally, physical proximity (despite the advantages of technology) tends to focus attention — so taking time to work on projects at conferences, etc. can pay off.
- Pester your co-author. In the digital era one can usually find co-authors lurking somewhere online. Skype, Facebook and other social media are good “control” devices.
- Bag the project. If your co-author doesn’t seem willing to work on the project, maybe the project is lame. Bag it — and work on something more interesting.
- Pretend your co-author doesn’t exist. Take charge and just work on the project yourself, as if you’re the sole author. The risk of course is that your co-author doesn’t agree with your arguments/work, but that might be a risk worth taking. More likely, your co-author will appreciate your work and it will push the project forwards.
- Pre-commit to intermediate deadlines. Pre-commit yourself to intermediate deadlines and do the same with your co-authors (I’ll finish “x” by next Wed). Co-authorship itself is sort of like a commitment device (well, among other things); it can keep us focused.
- Pick good co-authors in the first place. Probably the easiest way to manage co-author relationships is to have good ones in the first place. “Good” might have a lot to do with compatibility of work styles, similarity of perspectives, etc.
Drop any additional ideas into the comments.
Academia is one step closer to embracing open peer review (hat tip to David Kirsch). The Andrew W. Mellon foundation has given NYU and MediaCommons $50,000 to develop and test an open peer review system for academic journals. We’ve had a lot of debate here about how to improve the review process, which has included some modest and crazy proposals. Open peer review is one potential solution to the problems inherent in the review process – e.g., getting good reviewers, determining the quality of papers, etc. Open peer review would allow authors to post their papers online and anyone could step in and serve as a reviewer, offering public comments and suggestions through multiple iterations of paper revision. Editors use the feedback posted on the online system and monitor revisions to determine which papers should ultimately make it into their journal. It uses a crowdsourcing logic to move papers to publication. If you get more eyes on a paper, the author gets better feedback during the revision process and editors will be better at filtering out lower quality papers.
A couple of journals have already tried open peer review. In 2006 Nature‘s experiment was seen as a failure, but four years later the humanities journal, Shakespeare Quarterly, used it quite successfully, which has prompted the journal to try it again. I like the idea of making peer review more open and competitive. I’m not one of those who thinks the peer review system is currently broken (by and large I’m happy with the quality of articles published in our fields’ top journals), but I think we should embrace technological opportunities to improve the system. One upside of an open peer review system would be improvement in paper quality, but more importantly I think that open peer review could speed up the peer review process. If you allow more people to quickly gain access to a paper, you wouldn’t have to wait months and months to hear back from the editors’ assigned reviewers. Feedback and revision could occur simultaneously, which is really an ideal model for social science.
There are some serious downsides to consider. Some authors will resist having their work vetted openly. Public criticism can be hard to take, especially if you’re a junior person seeking tenure. The system might frustrate scholars of all rank and status who don’t want to let the public in to see their half-baked ideas and analysis. In some ways we’re all invested in the illusion that great scholarship just blossoms on its own – we’d rather not let everyone see how the sausage is made, especially when it’s of our own making. There is also the potential for a tragedy of the commons scenario. Currently, the direct incentive to peer review is to maintain one’s good standing with journals we’d like to publish in some day. If no one is calling on you to review, the system completely relies on professional norms and reviewers’ good will. Open peer review might work well for one or two special issues, but when the novelty wears off and the system is congested with hundreds of submissions, willingness to review might dissipate. Sadly, I also think it’s possible that the system could be overloaded quickly if everyone just starts posting their crap online. Open peer review would require that authors take responsibility in submitting papers selectively.
I think we could overcome these obstacles, but it would require some innovative solutions (e.g., editors could choose which papers get posted online for review and desk reject the rest). Someone will eventually have to take the risk and volunteer to be the journal to try the model out in our field. Given the risk involved, my guess is that the instigator will have to be one of the well established, high status journals if this is going to have any chance of success. Perhaps a special issue of ASQ or AJS is in order.
The most recent issue of the Journal of Economic Perspectives has an interesting, non-standard article analyzing the behavior of individuals during the Titanic disaster. Worth reading.
Frey, Bruno S., David A. Savage, and Benno Torgler. 2011. “Behavior under Extreme Conditions: The Titanic Disaster.” Journal of Economic Perspectives, 25(1): 209–22.
During the night of April 14, 1912, the RMS Titanic collided with an iceberg on her maiden voyage. Two hours and 40 minutes later she sank, resulting in the loss of 1,501 lives—more than two-thirds of her 2,207 passengers and crew. This remains one of the deadliest peacetime maritime disasters in history and by far the most famous. For social scientists, evidence about how people behaved as the Titanic sank offers a quasi-natural field experiment to explore behavior under extreme conditions of life and death. A common assumption is that in such situations, self-interested reactions will predominate and social cohesion is expected to disappear. However, empirical evidence on the extent to which people in the throes of a disaster react with self-regarding or with other-regarding behavior is scanty. The sinking of the Titanic posed a life-or-death situation for its passengers. The Titanic carried only 20 lifeboats, which could accommodate about half the people aboard, and deck officers exacerbated the shortage by launching lifeboats that were partially empty. Failure to secure a seat in a lifeboat virtually guaranteed death. We have collected individual-level data on the passengers and crew on the Titanic, which allow us to analyze some specific questions: Did physical strength (being male and in prime age) or social status (being a first- or second-class passenger) raise the survival chance? Was it favorable for survival to travel alone or in company? Does one’s role or function (being a crew member or a passenger) affect the probability of survival? Do social norms, such as “Women and children first!” have any effect? Does nationality affect the chance of survival? We also explore whether the time from impact to sinking might matter by comparing the sinking of the Titanic over nearly three hours to the sinking of the Lusitania in 1915, which took only 18 minutes from when the torpedo hit the ship.
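For illustration, the kind of individual-level comparison the abstract describes (survival rates across groups) can be sketched in a few lines of Python. The records below are entirely synthetic, invented for the example, not the dataset collected by the authors:

```python
# Synthetic passenger records (sex, passenger class, survived?) -- invented
# for illustration; not the dataset collected by Frey, Savage, and Torgler.
passengers = [
    ("female", 1, 1), ("female", 1, 1), ("female", 3, 1), ("female", 3, 0),
    ("male", 1, 1), ("male", 1, 0), ("male", 3, 0), ("male", 3, 0),
]

def survival_rate(rows):
    """Share of a group that survived."""
    return sum(survived for _, _, survived in rows) / len(rows)

by_sex = {
    sex: survival_rate([p for p in passengers if p[0] == sex])
    for sex in ("female", "male")
}
by_class = {
    cls: survival_rate([p for p in passengers if p[1] == cls])
    for cls in (1, 3)
}

print(by_sex)    # {'female': 0.75, 'male': 0.25}
print(by_class)  # {1: 0.75, 3: 0.25}
```

The paper’s actual analysis conditions on several characteristics at once (e.g., with regression models on the full passenger manifest); cross-tabs like these are just the descriptive first cut.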
I’ve written a few times before about how to choose the software you work with, and what you should and should not care about when making those choices. I maintain a page with various resources related to this, if you’re interested, most notably the Emacs Starter Kit for the Social Sciences. A revised version of an article of mine on this topic called “Choosing Your Workflow Applications”, which I’ve had online for a while, has now been published in The Political Methodologist, the newsletter of the Society for Political Methodology. (The source document for my article is also available, as I wanted the piece to walk its own talk.) There are also some great contributions from others along similar lines, covering different aspects of setting up and running your research so that you can collaborate easily, remember what you did, easily revisit work when needed, and do good, reproducible social science in a relatively hassle-free way. I think the issue as a whole is something that grad students in any social science program—especially those just starting out—could benefit from reading, and there’s a lot there for faculty to chew on, too.
So, the most provocative presentation (easily) at this year’s UTAH-BYU Winter Strategy Conference was given by Mike Ryall (University of Toronto). Mike argued that “verbal theorizing” has problems, serious problems. He reiterated the Ryall Challenge (originally issued at last year’s AOM session on the Strategy Research Initiative, SRI), for any scholar to submit a natural language paper that meets the following criteria:
- Unambiguous – meaning does not vary from scholar to scholar.
- Rigorously derived – conclusions logically consistent with premises.
- Measurable – empirically refutable.
- Plausible – consistent with researcher’s priors.
OK, so, it’s hard to disagree with the need for increased clarity, fine. Yes, jargon can be a problem.
But, ambiguity and grand theorizing also leave room for additional work – thankfully so. And jargon in fact can be a very efficient way to communicate. In short, I think the Ryall Challenge is sort of meaningless; it seems to unfairly use the criteria of formal modeling to assess natural language theorizing. Of course both types of work have their purposes. When pushed, Mike does not seem to deny this either (see his last slide).
In fact, I think we might live in the best of all possible worlds (which is always the alternative hypothesis) – we have people who do more “ambiguous,” natural language-type theorizing, others model, others do empirical work of various sort, mixed-methods, etc. These approaches complement each other. And the good stuff floats to the top, gets attention, cited, etc. Put differently, the market — over time — sorts the wheat from the chaff.
Now, it gets more interesting. Ezra submitted a paper to try to meet the Ryall Challenge, his RSO paper “Speaking with One Voice: A ‘Stanford School’ Approach to Organizational Hierarchy.”
In the presentation, Mike spent quite a bit of time unpacking (one might even say thrashing) the verbal argument in Ezra’s paper. Mike wrote a paper-length response to Ezra, Ezra is responding, and each is posting their respective responses on their websites. I told Mike that we’d be thrilled to have him and Ezra guest blog on this issue.
Here are Mike’s slides (the sanitized, public version).
UPDATE – here are some additional resources related to Mike’s presentation. The SRI Strategy Reader. A list of the two dozen+ top, mid-career strategy scholars involved in SRI. Upcoming, joint SRI and Administrative Science Quarterly paper development workshop.
One thing that organizational and economic sociology could use more of is experimental methods. While sociologists are not completely averse to experiments (see its prominent use in exchange theory), the method seems to occupy a small niche. Some sociologists express a real distaste for experiments. Our love of context and history seems to bias us against experiments, which emphasize internal validity over external validity and random assignment over sampling from real populations.
My sense though is that a number of theoretical areas could be more fully developed by using experiments. The real value of experiments comes from being able to more precisely identify theoretical mechanisms, especially at the cognitive level. (If you have any doubt about the utility of experiments, check out Correll, Benard, and Paik’s beautiful study of the motherhood penalty.) Given the calls to explore the micro/cognitive foundations of social theories, experiments could be very useful. Here are just a few conceptual areas that could benefit from experiments.
- Networks and relationship formation – what cognitive dynamics explain homophily? How does framing affect relationships (see, for example, this paper in Psych Science). What sorts of social cues trigger relationship formation? What is the role of emotion in choosing friends?
- Institutions and cultural persistence – Zucker (1977) broke ground in this area, but since then experimental methods have scarcely been used. What cognitive dynamics explain habituation? What role does social influence play in the transfer of cultural preferences? What situational dynamics lead to rule conformity?
- Collective action frames – why are some frames more resonant than others? How important is shared identity to frame resonance?
- Categories and legitimacy – to what extent does categorical contrast lead to perceptions about legitimacy? How different does something have to be from others in a category before individuals perceive a fit problem? What is the relationship between categorical fit and valuation?
- Status and power – why are individuals so biased by status? How sensitive are individuals to status differences? What are the cognitive dimensions of status deference?
What else would you add to the list?
I met with an undergraduate student who is exploring the possibility of launching a journal for undergraduates to publish their work (in management/orgs-related areas: OB, strategy, accounting, finance, marketing). I have to say, I am a bit skeptical about this, as there are dozens of journals, in every discipline, that an undergraduate could (of course) also publish in. For example, there are well over 100 management journals (of varying quality), and presumably these journals are eager to get more submissions and publish work, no matter who sends it in.
But while I am skeptical, I do see the potential value that student editors and authors might get from an undergraduate journal. And, some disciplines indeed seem to have undergraduate-specific journals like this (though, I don’t have any figures to back that up).
Anyways, the student is eager to get any feedback. Post any comments or thoughts you might have.
In terms of a model — I guess the Ross School of Business at the University of Michigan has an undergraduate business journal like this: The Michigan Journal of Business. And, here’s a longer list of undergraduate research journals.
The received wisdom is that there is an “Atlantic divide” between Europe and North America vis-a-vis organizational research. Joel Baum, using citation data from three compendia, finds that the “Atlantic divide” is essentially a myth.
Here’s the abstract:
It is customary among contemporary organization theorists to equate North American and European scholarship with objectivist and subjectivist metatheoretical positions (respectively), treat these positions as mutually exclusive alternatives, and debate which is best suited to understanding organizational phenomena. Fueled by this dispute, questions of bias and fears of colonization are readily apparent in academic reviews of three recent “handbooks” of organizations. Caught in the current of these tensions, I was prompted to assess the status of this “Atlantic divide.” To do so, I examined the three recent compendia in terms of the rhetoric academic reviewers employed to characterize them and the geographic locations, preferred journals, and university affiliations of scholars who refer to them. The results are striking. Despite the unanimous typecasting of the volumes as epitomizing either objectivist North American or subjectivist European traditions, the geographic distributions of researchers citing them are indistinguishable. Citations to each compendium are, however, clustered within particular journals and among authors with particular university affiliations—but neither the journals nor universities are neatly North American or European. Current associations of these traditions with North American and European scholarship thus seem driven more by academic rhetoric than authentic continental distinctions. I examine the roots of this rhetorical mapping and explore its implications for the field. I advocate abandonment of the myth of the Atlantic divide and exploitation of perspectives that do not privilege the subjectivist–objectivist dichotomy.
Key Words: organization and management theory; subjectivist versus objectivist perspectives
Here’s a previous post highlighting Joel’s work on journal- versus article-effects.
(Warning: shameless self-promotion). The Guardian just posted a short piece I wrote on accent neutralization in Indian call centers:
Some of the comments are rather funny. Speaking of which, does anyone know of a scholarly treatment of discussion boards? They’re a bizarre phenomenon…
Joel Baum has written a provocative article which argues and shows that, essentially, article effects are larger than journal effects.
In other words, impact factors and associated journal rankings give the impression of within-journal article homogeneity. But top journals of course have significant variance in article-level citations, and thus journals and less-cited articles essentially “free-ride” on the citations of a few highly cited pieces. A few articles get lots of citations, most get far fewer — and the former provide a halo for the latter. And “lesser” journals also publish articles that become hits (take Jay Barney’s 1991 Journal of Management article, with 17,000+ Google Scholar citations), hits that are more cited than average articles in “top” journals.
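The free-riding logic is easy to see with a toy simulation. The sketch below (illustrative only — the distribution and all numbers are assumptions, not Joel’s data) draws heavy-tailed, Pareto-like citation counts for a hypothetical journal and compares the mean (the quantity an impact factor averages over) with the median article, along with the share of all citations captured by the top decile of articles:

```python
import random

random.seed(0)

# Illustrative only: simulate per-article citation counts for a hypothetical
# journal using a heavy-tailed Pareto draw. The shape is the point, not the
# particular values.
n_articles = 1000
citations = sorted(
    (int(random.paretovariate(1.5)) - 1 for _ in range(n_articles)),
    reverse=True,
)

mean = sum(citations) / n_articles           # what an impact-factor-style average sees
median = sorted(citations)[n_articles // 2]  # the typical article
top_decile_share = sum(citations[: n_articles // 10]) / max(sum(citations), 1)

print(f"mean citations:   {mean:.1f}")
print(f"median citations: {median}")
print(f"share of citations held by top 10% of articles: {top_decile_share:.0%}")
```

In a distribution like this the mean sits well above the median, and a small fraction of articles accounts for most of the citations — which is exactly how the journal-level average can flatter the typical article.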
The whole journal-rankings obsession (and associated labels: the “A” journal, etc.) can be a little nutty, and I think Joel’s piece nicely reminds us that, well, article content matters. There’s a bit of a “count” culture in some places, where ideas and content get subverted by “how many A pubs” someone has published. Counts trump ideas. At times, high-stakes decisions — hiring, tenure, rewards — also get made based on counts, and thus Joel’s piece on article-effects is a welcome reminder.
Having said that, I do think that journal effects certainly remain important (a limited number of journals are analyzed in the paper), no question. And citations, of course, are not the only (nor a perfect) measure of impact.
But definitely a fascinating piece.
Here’s the abstract:
The simplicity and apparent objectivity of the Institute for Scientific Information’s Impact Factor has resulted in its widespread use to assess the quality of organization studies journals and by extension the impact of the articles they publish and the achievements of their authors. After describing how such uses of the Impact Factor can distort both researcher and editorial behavior to the detriment of the field, I show how extreme variability in article citedness permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles. I conclude that the Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time. The clear implication is that we need to cease our reliance on such a non-scientific, quantitative characterization to evaluate the quality of our work.
Here’s the paper (posted with permission from Joel): “Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies.” The paper is forthcoming in Organization.
A question from Steve Boivie (University of Arizona).
Anyone know if there is a wiki (of some sort) for management, organization theory and strategy research? I have seen various efforts, mostly by enterprising doctoral students, to post theoretical summaries etc online (for example, in preparation for qualifying exams). However, most of those have been taken down (I just checked several sites that we’ve linked to in the past). I don’t know of any ongoing efforts.
Here are a few sites, though they differ from ‘traditional’ wikis in some ways:
- the strategy research initiative wiki
- there’s the encyclopedia of organization theory (a bit old and outdated)
- this sorta relates: the theories used in IS research wiki
- Let us know of other links, I’m sure there are other wiki-like sites.
Of course wikipedia itself could be the forum for something like this. Though, I think a more specialized effort could be worth it. (While not a wiki – I love, for example, what the Stanford Encyclopedia of Philosophy has done. The Internet Encyclopedia of Philosophy is also good.)
I think with a core nucleus of aggressive folks, something like this could be pulled together. If you are interested, send Steve a note.
Lera Boroditsky on how language shapes thought (video from Long Now Foundation, FORA.tv).
Of all academic areas, you would think management would be the logical place for unrestrained economic imperialism. Yes, b-schools do employ many of the world’s leading economists, but that’s only half the game. They also employ leading sociologists, psychologists, statisticians, and even a historian here and there. Management PhD holders employ a wide range of approaches. Some are economists, but you also get people doing work that is essentially psychology or sociology. Leading management journals are not de facto economics journals. Why?
A few hypotheses:
- Supply: Economists shy away from applied topics. Maybe the study of firms is not attractive enough for some reason.
- Cultural: There’s a culture of pluralism in b-schools that’s very strong. If so, where did that culture come from?
- Relative strength: Maybe the organizational psychologists and economic sociologists are very strong scholars and are able to hold their own in turf wars.
- More mathy: B-schools house people other than economists who know math, like the operations research or the financial engineering folks, so economists don’t have an advantage.
- Selection effect: Non-economists who go to b-schools already like econ, so they get along and learn to co-exist.
- Path dependence: A lot of key management research precedes the economic imperialism of the 1980s, so those traditions are hard to dislodge.
- Demand: MBA students want more than economics in the curriculum.
- Demand, II: Funders want non-economists in the faculty.
Any evidence? Other hypotheses? Good question for a sociologist of science. Comments from management faculty and economists encouraged.
I’m interested in the nature of reality and particularly the boundaries and scope of the social construction of reality. I think social construction clearly plays an important role, but the question is, how “strong” is that role? For example, I think the performativity argument (and associated “strong programme”) pushes the social construction argument way too far.
But let’s get more specific: what role do categories, language and naming play in the construction of reality?
One empirical setting for actually studying this question is the case of color categories and color naming, an active area of research in linguistics, computer science and psychology. Scholars in this space have looked at whether the extant categories and names of colors of particular languages impact what individuals actually see and remember. The famous Sapir-Whorf thesis of course argued, broadly, that language, categories and culture strongly determine perception and reality. But, the color research shows otherwise. Languages with highly fine-grained distinctions for individual colors, as well as languages with relatively few (or even no!) distinctions and names for color, lead to the same perceptions and experiences of color. (Check out the citations below to see the clever way in which this is empirically tested.)
Well, almost. Recent work is making some important qualifications to the argument (articulating a middle ground, of sorts, between universality and strong construction), and there clearly is a very active debate in this space.
Here are some links to this literature:
- Berlin & Kay. 1991 (2nd edition). Basic color terms: Their universality and evolution. University of California Press.
- Lindsey & Brown. 2006. Universality of color names. Proceedings of the National Academy of Sciences, 103: 16608-16613.
- Terry Regier, Paul Kay, Aubrey Gilbert, and Richard Ivry. 2010. Language and thought: Which side are you on, anyway? In B. Malt and P. Wolff (Eds.), Words and the Mind: Perspectives on the Language-Thought Interface. New York: Oxford University Press.
- Ke Zhou, Lei Mo, Paul Kay, Veronica P.Y. Kwok, Tiffany N.M. Ip, & Li Hai Tan. 2010. Newly trained lexical categories produce lateralized categorical perception of color. Proceedings of the National Academy of Sciences, 107: 9974-9978.
- See Paul Kay’s web site.
- Also check out the World Color Survey @ Berkeley.
Now, I don’t, by any means, think that the color research is necessarily a knock-down argument against social construction. But I do think this research definitely questions the “strong” form of construction — I have opportunistically cited and referred to these and other findings to make that point. And another, perhaps unfair, knock-down argument is that no matter what linguistic categories a color-blind person has, they simply won’t matter in the perception of color.
There is of course much debate in the color literature as well and some of the work points toward a particular, softer form of construction. And, the color research of course is just one setting, and the findings may not generalize to other settings. But I do like the fact that the color research actually allows us to more rigorously say some things — with the usual qualifications and questions — about the specific role that language (as well as categories, culture etc) plays in the way we perceive the world.
I’ve noted that many journals have added or continue to use (well, at least one has recently discontinued) the “reject & resubmit” category for manuscripts. I’ve heard strong “for” and “against” statements for including or doing away with the category.
Journals of course traditionally “accept” a manuscript or offer it a “major” or “minor revision.” And most manuscripts submitted to top journals get the ol’ “reject.” My sense — though I haven’t seen any of this articulated in an editorial statement (some journal probably has explicitly said something about this) — is that the “reject and resubmit” option is essentially for manuscripts with some potential that nonetheless need a great deal of work, and that will thus essentially be treated as a completely new submission, perhaps getting a new set of reviewers, etc. The “reject and resubmit” category of course also gives the authors an opportunity to send the paper back to the journal (which, I guess, usually isn’t the case for rejected manuscripts, absent a petition).
Anyways, what do you think of the “reject and resubmit” category?
I like this.
A group of British schoolchildren may be the youngest scientists ever to have their work published in a peer-reviewed journal. In a new paper in Biology Letters, 25 eight-year-old children from Blackawton Primary School report that buff-tailed bumblebees can learn to recognize nourishing flowers based on colors and patterns.
The Schomburg Center for Research in Black Culture is a very important institution for historians and social scientists. It has a very deep collection of historical materials on many aspects of African American history. I even used it in writing my own book. Recently, my colleague Khalil Muhammad, an assistant professor of history at Indiana University, accepted a position as head of the Schomburg. Here’s an interview with the “Left of Black” internet show; check it out.