Archive for the ‘academia’ Category
Appetite for Innovation: Creativity & Change at elBulli (To be published by Columbia University Press on July 12, 2016)
How is it possible for an organization to systematically enact changes in the larger system of which it is part? Using Ferran Adrià’s iconic restaurant elBulli as an example of organizational creativity and radical innovation, Appetite for Innovation examines how Adrià’s organization was able to systematically produce breakthroughs of knowledge within its field and, ultimately, to stabilize a new genre or paradigm in cuisine – the so-called “experimental,” “molecular,” or “techno-emotional” culinary movement.
Recognized as the most influential restaurant in the world, elBulli has been at the forefront of the revolution that has inspired the gastronomic avant-garde worldwide. With a voracious appetite for innovation, year after year, Adrià and his team have broken through with new ingredients, combinations, culinary concepts and techniques that have transformed our way of understanding food and the development of creativity in haute cuisine.
Appetite for Innovation is an organizational study of the system of innovation behind Adrià’s successful organization. It reveals key mechanisms that explain the organization’s ability to continuously devise, implement, and legitimate innovative ideas within its field and beyond. Based on exclusive access to meetings, observations, and interviews with renowned professionals of the contemporary gastronomic field, the book reveals how a culture for change was developed within the organization; how new communities were attracted to the organization’s work and helped to perpetuate its practice; and how the organization’s and its leader’s charisma and reputation were built and maintained over time. The book draws on examples from other fields, including art, science, music, theatre, and literature to explore the research’s potential to inform practices of innovation and creativity in multiple kinds of organizations and industries.
The research for Appetite for Innovation was conducted when Adrià’s organization was undergoing its most profound transformation, from a restaurant to a research center for innovation, the “elBulli foundation.” The book, therefore, takes advantage of this unique moment in time to retrace the story of a restaurant that became a legend and to explore underlying factors that led to its reinvention in 2011 into a seemingly unparalleled organizational model.
Appetite for Innovation is primarily intended to reach and be used by academics and professionals in the fields of innovation and organization studies. It is also directed towards a non-specialist readership interested in the topics of innovation and creativity in general. In order to engage a wider audience and show the fascinating world of chefs and the inner workings of high-end restaurants, the book is filled with photographs of dishes, creative processes, and team dynamics within haute cuisine kitchens and culinary labs. It also includes numerous diagrams and graphs that illustrate the practices enacted by the elBulli organization to sustain innovation, and the networks of relationships that it developed over time. Each chapter opens with an iconic recipe created by elBulli as a way of illustrating the book’s central arguments and the key turning points that enabled the organization to gain a strategic position within its field and become successful.
To find a detailed description of the book please go to: http://cup.columbia.edu/book/appetite-for-innovation/9780231176781
Also, Forbes.com included Appetite for Innovation in its list of 17 books recommended for “creative leaders” to read this summer: http://www.forbes.com/sites/berlinschoolofcreativeleadership/2016/05/15/17-summer-books-creative-leaders-can-read-at-the-beach/#7ac430985cef
Don’t worry, I won’t give away state secrets.
In the 2000-01 and 2002-03 academic years, I worked at the American Journal of Sociology as a member of the manuscript intake group and later as an associate editor. I also worked for a while, roughly at the same time, as the managing editor of Sociological Methodology, which was then edited by my dissertation advisor, Rafe Stolzenberg. In this post, I want to tell you a little bit about how top academic journals work. This is important because academics reward people based on getting into highly selective journals. There should be a lot of discussion about how the institution works and what does and does not get accepted.
Background: The AJS is the oldest general interest journal in American sociology and has, during its entire existence, been based at the Department of Sociology at the University of Chicago. To my knowledge, it has never rotated to another program. In fact, the relationship between the Department and the journal is so strong that one of Chicago’s faculty, Andy Abbott, has written a very nice monograph just about the AJS called Department and Discipline. It’s a good book and you should read it if you want to understand either the evolution of journals or how Chicago fits into the broader discipline.
Some time ago, the AJS developed a system in which students were strongly involved in the operation of the journal. For example, the AJS usually is run by a full-time manager, the incredible Susan Allan, and a few students who run the office. These folks do budgets, office organization, crazy amounts of paperwork, and a whole lot more. But it goes beyond administration. Students are deeply involved in shaping the journal’s content.
At the time I was a doctoral student, the AJS was organized into three major committees: the editorial board, which is always headed by a senior professor; a manuscript intake group, which assigned reviewers to papers; and a book review board, also headed by faculty. The manuscript intake group and the book review board are mainly staffed by students. The editorial board usually has one or two students on it, who have a major voice.
In contrast, Sociological Methodology was run like many specialty journals. You had a manager (me) and the editor (Prof. Stolzenberg) who chose reviewers, read reviews, and made decisions. These two people did about 90% of the work of running the journal.
Lessons from working at AJS: In many ways, the AJS resembled other major journals that must process hundreds of papers per year. There is a basic intake/review/decision cycle. That process has upsides and downsides. One upside is that the journal review process is actually pretty decent at weeding out garbage. After a while, you can easily spot bad papers: unending rants, poor spelling, poor formatting, lack of data. Another upside is that many papers do actually improve once people respond to reviewers.
I also saw some of the downsides of the review process. For example, I discovered that only about a third of people consistently agree to review papers, making the workload highly unequal. Some of the patterns are obvious. A lot of people stop answering the mail post-tenure. People in some sub-fields are simply bad citizens and refuse to write reviews or write bad ones. Finally, like a lot of journals, we could let papers fall through the cracks and go without a decision.
Perhaps the biggest insight that I had was the power of editors and the randomness that goes into a “great paper.” Example: while I was on the editorial board, we had a paper with ok but not great reviews. I read it and disagreed strongly. Right as the chief editor was about to assign it to the reject pile, I interjected. It was published and was covered by the national media. This may sound like a great story, and it is, but it also shows the weakness of the journal system. If I had been absent that week, or if we had had another student editor, the paper would have been rejected. Conversely, I am sure that I overlooked some excellent work.
A related lesson is that the chief editor matters a great deal. An editor can doom a paper from a scholar they don’t like, or on a topic they hate, by simply assigning it to known mean reviewers. Editorial influence appears in other ways. While most papers are clear rejects, many are on the border. An interventionist editor can strongly affect what is accepted from these borderline cases. One editor I worked with would actually ask the authors for the data and rerun the analysis to see if reviewer 2’s criticism was right. Another was very comfortable adding a few suggestions and then just tossing the paper back to the authors. The power of editors, and of the Chicago department, also manifests itself in the fact that AJS is far more tolerant of longer, theory-driven papers than its peer social science journals.
A second lesson is that there are big structural factors that influence what gets published. The first factor is type of research. Simply put, ethnographers produce papers at a slower rate than demographers. So if you have a small number of papers, it doesn’t make sense to risk it all on the AJS. Instead, you move to the book or to more specialized journals. That’s one reason why ethnographic work is rare in top journals. A second factor is culture. There are some sub-fields where the reviewers seem to be really difficult. For example, during the late 1990s, there seemed to be a sort of feud in social psychology. Each side would tank the other’s papers in the review process. Ethnography is similar. When people did submit fieldwork papers, it was nearly impossible to get 2 or 3 reviewers to say “this is good enough.” Just endless demands.
The final lesson I took is that we are humans and we are biased. While 95% of decisions were really based on reviews, there were definitely times that our biases showed. There were one or two papers I promoted because I was excited about social movement research. At other times, decisions took into account touchy political situations and author prestige. As I said, this is not typical, but it does happen, and I include myself in this evaluation.
Lessons from Working at Sociological Methodology: This was a totally different experience. Instead of being embedded in a larger group, it was just literally me and a filing cabinet and my advisor. We had a weekly meeting to discuss submissions, I took notes, and he told me what to do.
Probably the biggest take-home point from working with Professor Stolzenberg was that editors make or break the journal. The dude was really on top of things, and few papers went past 2 or 3 months. Once, a paper couldn’t get a single review after six months, so the editor wrote a letter to the author explaining the situation, and they mutually agreed to release the paper from review.
Stolzenberg was also not afraid of people, a strong trait for an editor. He didn’t mind rejecting people and making the process speed up. Although he didn’t desk reject often, he was good about getting reviews and writing detailed rejection letters. That way, the journal didn’t get clogged with orphaned papers. The lesson is that there really is no excuse for slow reviews. Get reviews, reject the paper, or get the hell out of the editing business.
Final note – authors and reviewers are lame: I conclude with a brief discussion of reviewers and authors. First, authors are quite lame. They are slow at responding to editors. They fail to read reviewer comments or take them seriously. And even more puzzling, they fail to resubmit after the R&R. I was shocked to discover that a fairly large fraction of AJS and SM papers at the R&R stage were not resubmitted. Perhaps a third or so. Second, reviewers are lame. As Pamela Oliver put it so well in the recent American Sociologist, the review process is simply broken. Reviewers ask for endless revisions, focus on vague issues like framing, or simply write hostile and unhelpful reviews. So I thank the 1/3 of academics who write prompt and professional reviews, and I curse the 2/3 of shirkers and complainers to an eternity of reading late reviews that criticize the framing of the paper.
I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”
But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.
The knowledge I have in mind is the student-unit record. See? I told you it was boring.
The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.
There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.
There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.
But there’s currently no comprehensive federal student-unit record system. In fact at the moment creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.
Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.
In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.
But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.
So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.
First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.
But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.
Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”
Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.
Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.
Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.
Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, maybe it’s selection effects — or maybe it’s just because they’re richer — but college graduates are happier and healthier than nongraduates.
The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.
What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.
Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.
But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.
Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But using it as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (among the top five on Brookings’ list) provide the most value — versus that they have select students who are distinctive in ways that aren’t reflected by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.
So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.
If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.
At his web site, Kieran has a nice working paper called “Public Sociology in the Age of Social Media.” The paper, forthcoming in Perspectives on Politics, has some really nice commentary on how the world has changed since Michael Burawoy called for a public sociology. A few clips:
I shall argue that one of social media’s effects on social science has been to move us from a world where some people are trying to do “public sociology” to one where we are all, increasingly, doing “sociology in public”. This process has had three aspects. First, social media platforms have disintermediated communication between scholars and publics, as technologies of this sort are apt to do. This has not ushered in some sort of communicative utopia, but it has lowered the threshold for sharing one’s work with other people. Second, new social media platforms have made it easier to be seen. Sadly, I do not mean that it is now more likely that you or I will become famous. Rather, these technologies enable a distinctive field of public conversation, exchange, and engagement. They have some of the quality of informal correspondence, but they are not hidden in typed correspondence. They take place as real-time interaction, but do not depend on you showing up to a talk. Again, as is typically the case with communication technologies, exactly what gets enabled can vary. The field of public conversation encompasses everything from exciting forms of serendipitous collaboration to the worst sort of trolling and harassment. Thirdly, new social media platforms make it easier for these small-p public engagements to be measured. They create or extend opportunities to count visitors and downloads, to track followers and favorites, influencers and impacts. In this way they create the conditions for a new wave of administrative and market elaboration in the field of public conversation. New brokers and new evaluators arise as people take the opportunity to talk to one another. They also encourage new methods of monitoring, and new systems of punishment and reward for participation. Universities and professional associations, for example, become interested in promoting scholars who have “impact” in this sphere.
But they are also slightly nervous about associating what they have come to think of as their “brand” with potentially unpredictable employees, subscribers, and members.
In “Science as a Vocation”, Weber remarks that although we do not get our best ideas while sitting at our desks all day doing regular work, we wouldn’t get any good ideas unless we sat at our desks all day doing regular work. In a similar way, successfully engaging with the public means doing it somewhat unsuccessfully very regularly. This fact is closely connected to the value of doing your everyday work somewhat publicly. You cannot drop a lump of text onto the Internet and expect anyone to pay attention if you have not been engaging with them in some ongoing way. You cannot put your work up on your website, or “do a blog”, or manufacture interest in your research like that. There is a demand side as well as a supply side to “content”, and most of the time the demand side does not care about what you have to say. This is why, in my view, one’s public work ought to be continuous with the intellectual work you are intrinsically motivated to do. It is a mistake to think that there is a research phase and a publicity phase. Your employer might see it that way, but from a first-personal point of view it is much better—both intrinsically and in terms of any public engagement you might want—to think of yourself as routinely doing your work “slightly in public”. You write about it as you go, you are in regular conversation with other like-minded researchers or interested parties, and some of those people may have or be connected to larger audiences with a periodic interest in what you are up to.
Read the whole thing.
Rejection is never fun, but I don’t think it’s unfair. Vague and conflicting reviews aren’t fun either, but that’s life. What I do think is unfair is when editors are negligent and incompetent, which wastes enormous amounts of time and, in some cases, can end careers. In this post, I’d like to share a few of my personal horror stories with journals so you know what often happens in the review process. Here are a few of my “favorites”:
- The Journal of Mathematical Sociology (previous editor, not current) once waited 24 months to send me a rejection based on a one-sentence review.
- Social Networks (also a previous editor) lost a manuscript twice, resulting in a paper being tied up in review for almost two years before it got a review.*
- The American Journal of Public Health never reviewed a paper that was submitted. After a few months, I was asked to review my own paper! Then, after I complained, they never obtained reviews. About a year later, after my co-author and I complained again, the paper still had not been reviewed or even desk rejected. Technically, I suppose, it might still be under review!
On top of this, some folks are plain dishonest. For example, a previous editor of Sociology of Education rejected a paper of mine after the R&R (which is fair) and said that they don’t do 2nd R&Rs, but then asked me to review a paper on its 2nd R&R. A book editor, after rejecting my manuscript, told me face to face that they simply didn’t accept books by first-time authors. That press, and most others, actually publish first-time book authors, including friends of mine.
I am under no pretense that we can eliminate incompetence and dishonesty. But there are simple reforms that can lessen the cost of poor editing. For example, I am an advocate of multiple submission for journals – you should be able to submit to as many journals as you like at once. That way, if you get an incompetent editor, you can simply take your business elsewhere and not bother waiting for a response or dealing with chaotic and contradictory reviews. I also think Sam Lucas is onto something when he suggests that we should not allow reviewers to write open-ended and vague reviews. Bottom line: the journal system allows people to do all kinds of bad things, but simple reforms can reduce the risk to authors.
* The paper was rejected on “frustrating” grounds – a computer simulation was sent to experimentalists who wanted to see an experiment. Not unfair, but frustrating and a waste of time. If the journal doesn’t accept computational papers, it should have been desk rejected.
Samuel R. Lucas is professor of sociology at the University of California, Berkeley. He works on education, social mobility, and research methods. This guest post proposes a reform of the journal review process.
On-going discussion about the journal publication process is laudable. I support many of the changes that have been suggested, such as the proposal to move to triple-blind review, and implemented, such as the rise of new journals that reject “dictatorial revi”–oops, I mean “developmental review.” I suggest, however, that part of the problem is that reviewers are encouraged to weigh in on anything–literally anything! I’ve reviewed papers and later received others’ reviews only to find a reviewer ignored almost all of the paper, weighing in on such issues as punctuation and choice of abbreviations for some technical terms. Although such off-point reviews are rare, they indicate that reviewers perceive it as legitimate to weigh in on anything and everything. But a system allowing unlimited bases of review is part of the problem with peer review, for it shifts too much power to reviewers while at the same time providing insufficient guidance on what will be helpful in peer review. I contend that we need to dispense with our kitchen-sink reviewing system by removing from reviewer consideration two aspects of papers: framing and findings.
Framing is a matter of taste and, as there is no accounting for taste, framing offers fertile ground for years of delay. Framing is an easy way to hold a paper hostage, because most solid papers could be framed in any one of several ways, and often multiple frames are equally valid. Authors should be allowed to frame their work as they see fit, not be forced to alter the frame because a reviewer reads the paper differently than the author. A reviewer who feels a paper should be framed differently should wait for its publication and then submit a paper that notes that the paper addressed Z but missed its connection to Q. Such an approach would make any worthwhile debate on framing public while freeing authors to place their ideas into the dialogue as well.
As for findings, peer review should be built on the following premise: if you accept the methods, then you accept the findings enough for the paper to enter the peer-reviewed literature. Thus, reviewers should assess whether the paper’s (statistical, experimental, qualitative) research design can answer the paper’s research question, but not the findings produced by the solid research design. Allowing reviewers to evaluate findings allows reviewers to (perhaps inadvertently) scrutinize papers differently depending on the findings. To prevent such possibilities, journals should allow authors to request a findings-embargoed review, for which the journal would remove the findings section of the paper, as well as the findings from 1) the abstract and 2) the discussion/conclusion section of the paper, before delivering the paper for review. As some reviewers may regard reading soon-to-be-published work early as a benefit of reviewing, reviewers could be sent full manuscripts if the paper is accepted for publication.
A review system in which reviewers do not review framing and findings is a design-focused review system. Once a paper passes a design-focused review, editors can conduct an in-house assessment to assure findings are accurately conveyed and the framing is coherent. The editors, unlike reviewers, see the population of submissions, and thus, unlike reviewers, are well-placed to fairly and consistently assess any other issues. Editors will be even more enabled to make such calls if they can make them only for the papers reviewers have determined satisfy the basic criterion of having a design solid enough to answer the question the paper poses.
The current kitchen-sink review system has become increasingly time-consuming and perhaps capricious, hardly positive features for effective peer review. If findings were embargoed and reviewers were discouraged from treating their preferred frame as essential to a quality paper, review times could be chopped dramatically and revise and resubmit processes would be focused on solidifying design. As a result, design-focused review could lower our collective workload by reducing the number of taste-driven rounds of review we experience as authors and reviewers, while simultaneously reducing authors’ potentially paralyzing concern that mere matters of taste will block their research from timely publication. Design-focused review may thus make peer review work better for everyone.