Archive for the ‘productivity and performance’ Category
Do people know about social impact bonds? I hadn’t heard of them till recently. Since then, though, I’ve developed a train-wreck fascination. They have the potential to combine all the worst features of the public and private sectors. And they can be securitized, to boot!
Let’s take a step back. What is a social impact bond, anyway?
Well. Imagine you have a social problem you’d like to solve. Say that you want to reduce recidivism among young people in prison. That sounds good, right? The problem, of course, is that taxpayers don’t want to pay for rehabilitative programs, and there’s lots of disagreement about what kind of program would actually help solve the problem, anyway.
The government says, Wouldn’t it be nice if somebody would take care of this for us, and we’d only have to pay them if they actually succeeded?
Enter Goldman Sachs.
A recent article in the Journal of Economic Perspectives reports a recent attempt to curb grade inflation. High GPA departments at Wellesley College were required to cap high grades. The abstract:
Average grades in colleges and universities have risen markedly since the 1960s. Critics express concern that grade inflation erodes incentives for students to learn; gives students, employers, and graduate schools poor information on absolute and relative abilities; and reflects the quid pro quo of grades for better student evaluations of professors. This paper evaluates an anti-grade-inflation policy that capped most course averages at a B+. The cap was binding for high-grading departments (in the humanities and social sciences) and was not binding for low-grading departments (in economics and sciences), facilitating a difference-in-differences analysis. Professors complied with the policy by reducing compression at the top of the grade distribution. It had little effect on receipt of top honors, but affected receipt of magna cum laude. In departments affected by the cap, the policy expanded racial gaps in grades, reduced enrollments and majors, and lowered student ratings of professors.
My sense is that this shows that grade inflation, whatever its historical origins, acts as a competitive advantage for programs that have few other market advantages. If you don’t have a strong external job market or external funding, you can boost enrollments via grade inflation. It also lets programs off the hook by masking racial gaps in performance. The lesson for academic management is this: if you have inequality in funding, departments will compensate with weak grading. If you have inequality by race, departments will compensate with weak grading. Thus, academic leaders who care about either of these issues should implement policies under which departments don’t set their own grading standards and are held accountable for results.
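The difference-in-differences logic the abstract describes can be sketched with a few lines of arithmetic. All the grade numbers below are hypothetical, chosen only to illustrate the method, not taken from the Wellesley study:

```python
# Difference-in-differences sketch with made-up average grades.
# Capped = high-grading departments subject to the B+ cap;
# uncapped = low-grading departments, which serve as the control.

capped_before, capped_after = 3.55, 3.30      # humanities/social sciences
uncapped_before, uncapped_after = 3.15, 3.12  # economics/sciences

# Change in each group over time.
capped_change = capped_after - capped_before        # about -0.25
uncapped_change = uncapped_after - uncapped_before  # about -0.03

# The DiD estimate: the extra change in the treated group,
# net of the common time trend captured by the control group.
did_estimate = capped_change - uncapped_change

print(f"Estimated effect of the cap on average grades: {did_estimate:+.2f}")
```

The control group's change stands in for what would have happened to the capped departments absent the policy, which is why the design needs the cap to bind for one group and not the other.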
Over at Scatterplot, Jeremy’s been writing about his life gamification experiment, which involves giving himself points for various activities he’d like to be doing more of. I find this sort of thing totally compelling and have to admit I’m now giving myself all sorts of points in my head. (Finish unpacking one box — 5 points! Send an email I’ve been procrastinating on — 5 points!) Although not in 100 million years could I get my husband to play along with me, even for brunch, of which he is fond.
Anyway, the game brought to mind this post from Stephen Wolfram, in which Wolfram presents a bunch of data from the last 25 years of his life. Here, for example, are all the emails he’s sent since 1989. (Note the sharp time shift in 2002, when he stopped being completely nocturnal.) He’s also got keystroke data, times of calendar events, time on the phone, and physical activity.
Fascinating to read about, but perhaps not terribly healthy to pursue in practice. Although in Wolfram’s case, it sounds like he was mostly just collecting the data, not using it to guide his day-to-day decisions. Others become more obsessive. I don’t know if David Sedaris has really been spending nine hours a day walking the English countryside, a slave to his Fitbit, or if he’s taking poetic license, but it’s a heck of an image.
Clearly there are a lot of people into this sort of thing. In fact, there is a whole Quantified Self movement, complete with conferences and meet-up groups. One obvious take on this is that we’re all becoming perfect neoliberal subjects, rational, entrepreneurial and self-disciplined.
For me, though, what is fun and appealing as a choice — and I do think it’s a choice — becomes repellent and dehumanizing when someone pushes it on me. So while I’ll happily track my work hours and tally my steps just because I like to — and yes, I realize that’s kind of weird — I hate the idea of judging tenure cases based on points for various kinds of publications, and am uneasy with UPS’s use of data to ding drivers who back up too frequently.
It’s possible that I’m being inconsistent here. But really, I think it’s authority I have the problem with, not quantification.
Last week a judge struck down tenure for California teachers on civil rights grounds. (NYT story here, court decision here.) Judge Rolf Treu based his argument on two claims. First, effective teachers are critical to student success. Second, it is poor and minority students who are most likely to get ineffective teachers who are still around because they have tenure — but moved from school to school in what Treu calls, colorfully, the “dance of the lemons.”*
To be honest, I have mixed feelings about teacher tenure. I’d rather see teachers follow a professional model of the sort Jal Mehta advocates than a traditional union model. This has personal roots as much as anything: I’m the offspring of two teachers who were not exactly in love with their union. But at the same time, the attack on teacher tenure just further chips away at the idea that organizations have any obligation to their workers, or that employees deserve any level of security.
But I digress. The point I want to make is about evidence, and how it is used in policy making — here, in a court decision.
Org theorists know a thing or two about what happens when you rate things. People change their behavior. In this case, that’s the point — Arne Duncan et al. are hoping that the ratings will create incentives for colleges to graduate more students with less debt and higher post-graduation incomes.
Now, those are obviously not objectionable goals. There are some clear challenges in adjusting for the expected performance of different student bodies, and worries about disincentives to go into low-paying fields like teaching or social work, but who doesn’t want college to be more affordable, somehow?*
The big problem is the outcome that is missing in there: students who have learned things. If you create a system that measures access, completion, debt, and eventual income, and it has any teeth at all, you will get colleges that aim for those things. Unfortunately, those things have a limited relationship to actual learning. Where one conflicts with the other, learning will lose.
Of course, I’m kind of hesitant to say that, because heaven knows what would happen if we started trying to measure learning outcomes at the federal level. No Young Adult Left Behind, I guess. Coursera can sell us the curriculum.
* Another problem worth mentioning is that many adults without degrees don’t see graduation rates and average student debt levels as relevant to their college decision — they think it depends on them, not the school.
Ezra Klein interviews Kevin Roose, who has a new book about young Ivy League graduates who work on Wall Street. The take-home point is simple: people who graduate from competitive schools gravitate toward these jobs not because they love business, but because they want security. Wall Street jobs are highly paid, require little experience, and carry a bit of prestige. On the origins of the short-term Wall Street job:
Wall Street invented this new way of recruiting in the early 80s. Before that they hired like any other industry. If you wanted to be a banker you applied for a job at a bank and they hired you or they didn’t. But in the early 80s Goldman Sachs and others figured out they could broaden their net and get lots of really smart people if they made it a temporary position rather than a permanent one.
So they created the two-and-out program. The idea is you’re there for two years and then you move onto something else. That let them attract not just hardcore econ majors but people majoring in other subjects who had a passing interest in finance and didn’t know what else to do. People now think going to a bank for two years will help prepare them for the next thing and keep them from having to make these hard decisions about the rest of their life. It made it like an extension of college. And it was genius. It led to this huge explosion in recruitment and something like a third of Ivy League graduates going to Wall Street.
Of course, it’s a mixed bag for the grads:
EK: So after writing this book, what would you say to a college senior thinking of going to Wall Street?
KR: First I would ask them why they wanted to work in an investment bank. If the answer is “because I’m tremendously in debt and need to pay it off” or “I’ve been reading Barron’s since I was 12 years old and I desperately want to be an investment banker” then those are legitimate reasons. Go ahead. But if it’s just about taking risk off the table and doing the safe prestigious thing, I’d tell them first that it will make them truly miserable, the kind of miserable it could take years to recover from, and that it also no longer has that imprimatur. It can actually hinder you. I’ve spoken to tech recruiters who say they only hire bankers in their first year or two because after that banking ruins them.
EK: How does it ruin them?
KR: It makes them too risk conscious. It gets them used to a standard of lifestyle they may not be able to replicate in any other industry. And it has a deleterious effect on creativity. Of the eight people I followed, a few came out very damaged by the experience. And not in a way a vacation can cure. It’s not about having bags under your eyes. It destroys your ability to think in creative ways about what it means to build something of value. The people I followed would admit they got a lot out of being a banker but I don’t think they’re all that tuned into the ways the experience changed them.
Check it out.
Vanity Fair has a new article on the Samsung-Apple litigation. Kurt Eichenwald makes the following case about Samsung’s business strategy:
- Pick a cool area of electronics.
- Quickly reverse engineer lower quality, low cost versions of the innovators.
- When sued for copyright or patent infringement, fight non-stop legal battles that only end with last-minute settlements.
- You win by either (a) grabbing insurmountable market share during the legal battle or (b) punishing small firms with exhausting litigation and high legal fees (Samsung counter-sues almost all plaintiffs).
If this is an accurate account of Samsung’s strategy, it has interesting implications. First, it contradicts the resource-based view of the firm in that the firm doesn’t need a monopoly on anything – just the ability to quickly mimic and exploit the system. Second, it suggests that markets can be stable even in the absence of patents or enforceable intellectual property rights. Samsung has beaten up some other firms, but most competitors have survived. Third, it suggests an interesting use of slack resources – throw them at emerging markets. Fourth, it suggests that the patent system is simply an ineffective means of enforcing intellectual property rights when the defendant is sufficiently large.
Strategy scholars and intellectual property gurus – go nuts in the comments.
In baseball lore, the “Curse of the Balboni” holds that teams with sluggers (a player who hits 36 home runs per year or more) don’t win the World Series. It is also shorthand for the observation that slugging isn’t always correlated with playoff wins. I was reminded of this during this year’s Super Bowl, when the Denver Broncos lost after posting one of the most impressive offensive seasons in the history of the NFL. Not only did they lose, they lost very, very badly.
My hypothesis is that teams with extremely successful offenses tend to overlook the defense. Think of it as a “sensemaking” issue in athletic organizations. Defenses are often less glamorous and in many cases harder to measure (e.g., good fielding in baseball or blocking in football), so they get less attention than offense. Having a top-notch offense lets you off the hook defense-wise; it distracts you from problems in the organization. In a league or division with unbalanced teams, it can be easy to rack up wins. But when you meet more balanced teams in the playoffs, or even teams who are a little better at exploiting defensive mistakes, your success is limited.
A classic result in the social analysis of science is that most papers are poorly cited. For example, de Solla Price’s famous paper in Science (1965) found that the modal citation count in his sample was zero. Low mean and modal citation counts remain the standard in contemporary studies of scientific behavior. So, what gives?
Scientific research is a type of creative pursuit. By definition, journal articles are supposed to report on what is new or novel. Once you buy that, the low citation rates in science make sense. First, creativity (or importance) is a scarce commodity. Anyone trained in a psychology graduate program can do an experiment, but few can do a novel experiment. Second, new results are themselves scarce. Fields quickly get covered and only obscure points remain. Third, even if you have a creative scientist who found a genuinely important problem, they might not have an audience. Perhaps people are focused on other issues, or the scientist is low status or publishing in a low status journal.
In principle, we should expect that few articles will deserve more than token citation. But still, why can’t journals just stick to important stuff? The answer is imperfect knowledge. Once in a while we encounter obvious innovation, but usually we have a limited ability to predict what will be important. It is better to over publish and let history be the judge. Considering that the cost of journal publishing is low (but not the subscription!), we should be ok with a world of many uncited and lonely articles.
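The shape of the distribution described above — a modal count of zero alongside a nontrivial mean — is easy to reproduce with a toy simulation. The log-normal parameters below are arbitrary, picked only to generate a heavily right-skewed count distribution, not fitted to any real citation data:

```python
import random

random.seed(0)

# Simulate a skewed citation distribution: most papers get few or no
# citations, while a handful in the tail get many. Flooring a
# log-normal draw is one simple way to produce this shape.
papers = [int(random.lognormvariate(0.0, 1.5)) for _ in range(10_000)]

mean = sum(papers) / len(papers)
mode = max(set(papers), key=papers.count)
uncited = sum(c == 0 for c in papers) / len(papers)

print(f"mean citations: {mean:.1f}")
print(f"modal citation count: {mode}")
print(f"share of papers uncited: {uncited:.0%}")
```

Even though the mean is well above zero, roughly half the simulated papers are never cited at all — the mean is propped up by a small number of heavily cited outliers, which matches the pattern de Solla Price reported.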
The media covered a new book by Lance Dodes called The Sober Truth. In the book, Dodes surveys the evidence on rehab and finds that there is literally no evidence that rehab, AA or other popular methods for kicking drugs are effective. From a recent Alternet article:
Peer-reviewed studies peg the success rate of AA somewhere between 5 and 10 percent. That is, about one of every fifteen people who enter these programs is able to become and stay sober. In 2006, one of the most prestigious scientific research organizations in the world, the Cochrane Collaboration, conducted a review of the many studies conducted between 1966 and 2005 and reached a stunning conclusion: “No experimental studies unequivocally demonstrated the effectiveness of AA” in treating alcoholism. This group reached the same conclusion about professional AA-oriented treatment (12-step facilitation therapy, or TSF), which is the core of virtually every alcoholism-rehabilitation program in the country.
What I find interesting is that I was told this before by physicians and social workers. These programs work for very few people and this is common knowledge. But why didn’t I draw the logical conclusion? If it’s expensive ($200,000 for a stint in a fancy rehab center) and it doesn’t work, why not just stop doing it?
Two answers: The Robin Hanson answer is that it’s a signal of morality. We do it to show that we care, even if the evidence is dodgy. Another (not unrelated) answer is that charismatic orgs get less scrutiny. AA is trying to be nice to people and help them overcome serious problems, so I am less inclined to search for evidence that assesses their effectiveness. This is different than, say, a think tank that is pushing a policy that I don’t like. Then, I’ll search high and low for all the evidence I can find to fight them.
Bottom line: We should probably get tougher on organizations that claim to do good. We’re probably giving out too many free passes.
Once in a while, I am asked by students about contingency theory – the view in organization theory that there is no optimal firm structure and that it simply “depends.” In other words, it’s the pragmatism of the org theory world. Here’s the question I get asked: is contingency theory still an active research area? On the one hand, it is obviously alive – people (including myself) still talk about it in published articles. On the other hand, it seems to be a license to resort to contextual, ad hoc explanations or, more charitably, to add a needed extra dimension of variation. There aren’t native “contingency theory variables” that have been developed in the decades since the 1960s.
My own view is that it is now more of an argumentative move than a stand-alone theory. Even though it is an obvious point, it acts as a corrective to the very rigid theories of org environments often found in sociology (e.g., iron cage institutionalism or early population ecology). If you think there is a real advance in contingency theory, do use the comment section.
Mikaila Mariel Lemonik Arthur is an Associate Professor of Sociology at Rhode Island College and is the author of Student Activism and Curricular Change in Higher Education. Her current research explores network effects on curricular change in higher education. Her primary teaching responsibilities include social research methods and law and society courses, and this spring she is teaching a new interdisciplinary upper-level general education course on higher education.
One of the hallmarks of modernity is the focus on rationality and efficiency in organizational function: organizations of all types, from hospitals to Fortune 500 corporations, from universities to small not-for-profits, seek to improve their performance in terms of measurable outcomes. But, as the aphorism goes, “What gets measured gets done, what gets measured and fed back gets done well, what gets rewarded gets repeated” (variously attributed to any number of management scholars). For example, pharmaceutical companies’ focus on stock prices, sales figures, and the next blockbuster drug has led to a focus on treatments for common, chronic conditions, such as the umpteenth heartburn medication, and less focus on the development of new antibiotics, a trend that may soon prove to have devastating effects on our attempts to control infectious disease.
In higher education, a similar dynamic is occurring. In the past, colleges and universities were primarily measured (and funded) based on enrollments. This meant that encouraging more students to enroll, and keeping them enrolled in classes until after the third week (or whenever official enrollment statistics are due), was often the highest priority, and whether students ever graduated did not matter nearly as much. You get what you measure: students in seats.
More recently, the emphasis has shifted to retention and graduation as measurable outcomes. This change encouraged administrators to consider what was necessary to keep students in school and to improve time-to-degree, but it came with its own perverse incentives. For example, administrators turned to student evaluations as a way to increase student satisfaction; some colleges and universities discourage faculty from failing students because failures decrease graduation rates and increase dropout rates. This leads to colleges in which students can graduate with a 2.0, never having written a paper (a phenomenon discussed in recent books like Arum and Roksa’s Academically Adrift and Armstrong and Hamilton’s Paying for the Party). It also contributes to rampant grade inflation, including at elite institutions where over half of all grades awarded are As (happy students=repeat customers). You get what you measure: grads with high grades.
A variety of colleges and universities have thus sought ways to curb grade inflation, such as providing average class grades on transcripts and setting strict grading curves. By encouraging tougher grading standards, these methods may indeed reduce the average GPA of enrolled students, but tougher grading standards do not necessarily translate into better educated graduates—and in any case, most colleges and universities have not chosen to enact these sorts of reforms. Indeed, the ease with which average grades can be manipulated highlights the fact that grades themselves may not even be an adequate proxy measure of student learning, and thus the assessment movement was born.
Today, accrediting agencies require colleges and universities to demonstrate that students meet measurable learning outcomes, and projects like the Lumina Foundation’s Degree Qualifications Profile encourage institutions and departments to clearly state the intended outcomes of their programs in measurable language. Some colleges and universities have gone further, developing competency-based degrees in which students supposedly demonstrate skills, rather than accumulate seat time, in order to graduate. At first blush, these programs look to many critics like just another kind of teaching to the test. But teaching to the test is only a problem if the test is not actually able to test the desired learning outcomes—you get what you measure: results on the test.
It has already become clear to advocates of competency-based learning that competency is a pretty low floor, and instead they have begun to use the term “proficiency.” One goal of proficiency-based degree plans has been to shorten the time and cost of a degree, particularly by reducing Baumol’s cost disease by disrupting the relationship between seat time, faculty workload, and degree production. So far, competency- and proficiency-based programs are rare and likely appeal only to a particular self-selected group—but as Chambliss and Takacs point out in their forthcoming book How College Works, college only works if it works for all students, including the lazy, the unmotivated, and the perhaps not-so-smart.
So if we get what we measure and what gets rewarded gets repeated—and we measure proficiencies and reward completion—what do we get? Degrees as checklists? Students who cannot earn a college degree because, while they are excellent writers and have superb disciplinary knowledge, they cannot (in Lumina’s language) construct and define “a cultural, political, or technological alternative vision of either the natural or human world,” a key bachelor’s-level competency? An even more extreme bifurcation of the higher education field in which some colleges and universities develop rigorous proficiency measures and provide students with the supports necessary to excel while others assess writing, critical thinking, and speaking with machine- or peer-grading?
Or is it possible to build a system that measures proficiencies in a real, valuable way and which rewards completion without reducing the rigor of these proficiencies? In other words, can we find a way to measure what we want to get instead of getting what we happen to have measured?
The Uncluttered blog has a nice post about the way Eisenhower organized his work. It’s a 2×2 table, which means sociologists should love it:
He was highly organized and prioritized his tasks and responsibilities while serving as president, a five-star general, supreme commander of the Allied Forces in Europe, and supreme commander of NATO. Eisenhower devised an effective system that’s simple enough to be executed with a pencil and a piece of paper and effective enough to, well, run the free world. It’s called the Eisenhower Matrix.
Yes, the Matrix of (Eisenhower) Domination.
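Since the matrix really is just a 2×2 lookup, it can be sketched in a few lines. The quadrant labels below follow the usual presentation of the Eisenhower Matrix; the example tasks are invented for illustration:

```python
# The Eisenhower Matrix as a 2x2 lookup table: each task is sorted
# into a quadrant by whether it is urgent and whether it is important.
QUADRANTS = {
    (True, True): "Do it now",
    (False, True): "Schedule it",
    (True, False): "Delegate it",
    (False, False): "Drop it",
}

def triage(task, urgent, important):
    """Return the recommended action for a task."""
    return f"{task}: {QUADRANTS[(urgent, important)]}"

print(triage("Respond to reviewer comments", urgent=True, important=True))
print(triage("Draft next year's syllabus", urgent=False, important=True))
print(triage("Reply to routine paperwork", urgent=True, important=False))
```

The sociological punchline is in the off-diagonal cells: the not-urgent-but-important quadrant is where research and writing usually live, which is exactly why they get crowded out without deliberate scheduling.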
A recent Washington Post op-ed describes research showing that interviews are poor predictors of future job performance. The idea is old, but the results elaborate on it in new ways. From Daniel Willingham, a psychologist at the University of Virginia:
You do end up feeling as though you have a richer impression of the person than that gleaned from the stark facts on a resume. But there’s no evidence that interviews prompt better decisions (e.g., Huffcutt & Arthur, 1994).
A new study (Dana, Dawes, & Peterson, 2013) gives us some understanding of why.
The information on a resume is limited but mostly valuable: it reliably predicts future job performance. The information in an interview is abundant–too abundant actually. Some of it will have to be ignored. So the question is whether people ignore irrelevant information and pick out the useful. The hypothesis that they don’t is called dilution. The useful information is diluted by noise.
Dana and colleagues also examined a second possible mechanism. Given people’s general propensity for sense-making, they thought that interviewers might have a tendency to try to weave all information into a coherent story, rather than to discard what was quirky or incoherent.
Three experiments supported both hypothesized mechanisms.
In other words, interviews encourage people to see patterns in the data where none exist. They also distract us with irrelevant information. Toss this in the file of “we have evidence it don’t work, but people will do it anyway.”
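The dilution mechanism can be illustrated with a toy simulation. In the model below, job performance depends only on a resume signal, and interview impressions are pure noise; a judge who averages the two predicts worse than one who uses the resume alone. The setup is invented for illustration, not taken from the Dana et al. study:

```python
import random

random.seed(1)

# Toy "dilution" model: performance is driven by the resume signal,
# while the interview impression is uninformative noise.
n = 5000
resume = [random.gauss(0, 1) for _ in range(n)]
interview = [random.gauss(0, 1) for _ in range(n)]            # pure noise
performance = [r + random.gauss(0, 0.5) for r in resume]

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

resume_only = corr(resume, performance)
diluted = corr([(r + i) / 2 for r, i in zip(resume, interview)], performance)

print(f"resume alone:       r = {resume_only:.2f}")
print(f"resume + interview: r = {diluted:.2f}")
```

Mixing in the noisy interview impression lowers the correlation with performance even though no information was removed — the useful signal is simply diluted, which is the mechanism the post describes.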
Salary.com had one of those lists of majors that don’t pay very well. #8? You guessed it – sociology:
People who enter the field of sociology generally are interested in helping their fellow man. Unfortunately, that kind of benevolence doesn’t usually translate to wealth. Here are three jobs commonly held by sociology majors (click on job title and/or salary for more info):
… social worker
… corrections officer
… chemical dependency counselor
This is one of those cheesy magazine articles on careers, but it is consistent with prior research on college majors and income. Sociology is a feeder into service professions. That’s a good thing, though I do wonder how my sublime lectures on the differences between structuralism and post-structuralism help people get off of drugs.
Like most of us in the world of organization studies, I was saddened to hear of Michael Cohen’s passing. I only met him once and he was very gracious. In the spirit of his work, let me draw your attention to his last research project – an analysis of “handoffs.” The issue is that doctors can’t continuously watch patients. Whenever a doctor leaves to go home, a new doctor comes in and there is a “handoff.” Cohen wrote a nice summary for the Robert Wood Johnson Foundation website:
1. To be effective, a handoff has to happen.
It may seem incredibly commonplace, but all too often preventable injuries or even deaths trace back to handoffs that were abbreviated, conducted in awkward conditions, or downright skipped. The easy cases to identify are things like leaving before handoff is done, or rushing the handoff in order to get out the door.
Unfortunately, many other causes are also in play. Some major examples derive from schedule or workload incompatibilities. If patients are sent from the PACU (post-anesthesia care unit) to a floor unit during its nursing report, the nurses accepting the patients will necessarily miss out on the handoff of existing patients. If a patient is moved from the Emergency Department (ED) before her doctor or nurse has time to complete phone calls to the destination unit, the patient endures some period of having been transferred without benefit of handoff. If there is a shift change in the ED just before a patient moves, the handoff is conducted by a doctor or nurse who has only second-hand familiarity with the events. To improve handoffs, we may need to teach participants to think about the organizational structures that make it hard to do them well.
In this post, I want to follow up on my previous posts about conducting research by discussing the thorny issue of time management. One challenge of academia involves completing work under schedules that incorporate both structured and unstructured time, with both unclear ends (what does one want/need to accomplish?) and unclear means of reaching those ends (how does one achieve that goal?). People must learn to self-manage the processes of undertaking a dissertation or research project and preparing publications along with other responsibilities.
During the school year, class preparation and grading, committee work (i.e., admissions committee, curriculum committee, hiring committee, etc.), service to the profession (i.e., reviewing manuscripts or conference papers, committee work for professional associations, etc.), and other commitments structure schedules. For some, research, writing, and publishing all get squeezed into the remaining time. Thus, periods such as the summer, winter break, and sabbaticals usually start with a long list of best intentions of how to spend “unstructured” time, which can feel overwhelming.
What to do? This post is devoted to examining several Jedi tricks that increase the likelihood of accomplishing research projects during both structured and unstructured times.
As part of class requirements, I used to assign my students two journal entries, each two pages long, to help them understand the link between theory and phenomena (say, how routines help direct workers but may have undesired consequences). The deadlines for these assignments were effectively open-ended: students could pick whichever readings they wanted to use to analyze their organizational experience, and each entry was due the same day as its chosen reading.
Although a few students submitted their work on time, most students struggled with selecting their own deadlines and waited until the semester’s last week to turn in their journal entries. A few didn’t submit any entries at all. After a couple semesters, I tried another tactic: I made the first of the two journal entries due by the semester’s midpoint. Student turn-in rates improved during the first half, but students still had problems with turning in the second journal entry. After reading behavioral economist Dan Ariely’s description of his experiments with having MBA students set their own deadlines for turning in assignments versus setting deadlines for them (the MBA students at his elite institution apparently did no better than my undergrads in setting their own deadlines), I finally replaced this requirement with regular homework assignments with set deadlines.
Such experimentation shows how setting deadlines can be helpful, even if the deadlines are arbitrarily imposed.
How to set deadline prods:
- Understand prioritization
For scholars, juggling multiple responsibilities often means that projects that lack hard deadlines or immediate reinforcement can fall by the wayside. It’s too easy to prioritize obligatory service commitments or bureaucratic paperwork, however unimportant, simply because these have set deadlines while other, often more consequential responsibilities do not. Or, some may find that the instantaneous gratification of teaching or mentoring students can trump the often-lonely, seemingly thankless task of writing up research and responding to reviewers’ comments. These “pulls” can derail research productivity, particularly during long projects where deadlines are self-set – for example, submitting journal or book manuscripts for peer review.
For those of you who enjoy making 2 by 2 typologies, time management guru Stephen Covey suggests writing down projects and responsibilities in an important vs. not important and urgent vs. not urgent table to assess how you are allocating time.
- Routinize large projects into small incremental tasks
Based on his research comparing the writing and publication productivity of faculty who wrote in spurts with that of faculty who wrote regularly and steadily, Richard Boice recommends setting up small, incremental deadlines. Rather than “binging” on intermittent writing bouts, he suggests regularly writing small amounts. Some people assume that they have to be motivated first in order to be productive – instead, productivity is like a waterwheel: productivity elicits productivity.
- Follow a preset template
Wendy Belcher published an excellent guide to how to submit and publish journal articles according to a set schedule. Her chapters cover topics such as how to write letters of inquiry to editors and how to respond to reviewers’ comments.
- Use other external deadlines
Some scholars find that presenting at conferences helps with getting initial drafts done, with the possible bonus of getting useful feedback during the review process or presentation. If you’re writing journal manuscripts, check out calls for special issues, which usually have hard deadlines. These also have the added advantage that reviewers have to get back to submitters by a set date.
- Participate in a writing group or colloquiums
Writing groups or colloquiums where members regularly present drafts for feedback can be great prods for productivity. Depending on your needs, writing groups need not include only members from your own discipline – those from other disciplines can often offer writing feedback that extends beyond substantive content, or suggest alternative perspectives that are very helpful for cross-fertilizing with other fields. Your university might even have a program led by a trained facilitator who will set up guidelines for a group.
- Intermix different types of deadlines
Sometimes deadlines for smaller projects can feed larger project deadlines by supporting substantive knowledge. For example, if you are asked to write a dictionary entry or review a book related to your research topic, you now have the opportunity to distill your knowledge of the existing literature. Successful submission can set up conditions for entrainment – that is, meeting a small deadline might provide the impetus to pursue a larger deadline. The tricky balance is not taking on too many small projects at the expense of a larger one with a bigger impact.
- Use carrots and sticks
To meet deadlines, some colleagues have used carrots like a non-refundable vacation or moving to a new job. A stick might be running out of funding – a “natural” end to a project.
- Work with collaborators
If you’re the type of scholar who prefers company, you might find the stimulation of working interdependently with others more appealing than working independently. However, collaboration can be a double-edged sword if the collaborators (or you) are overly optimistic about how much time everyone can commit. Successful collaborations most likely involve frank conversations about authorship and responsibilities upfront, as well as adjustments along the way.
- Spend regular time with friends and family; participate in a hobby
Finally, some might feel tempted to eschew “distractions” until a big project is over. However, scheduling in hobbies and regular downtime with friends and family – even a deceptively mundane task such as a walk with a pet – can help motivate scholars over the productivity hump.
Add your recommendations to the comments…
Why is Facebook valuable? As we’ve discussed before, we know it’s valuable, but we don’t know how valuable. The issue is that we know advertisers are willing to pay, but we also know that estimated revenue per FB user is low (about $1.21 by one estimate). If FB is valued in the tens of billions, that’s way more than the couple of billion generated by users. Currently, FB is charging advertisers about $9.50 a user. Something ain’t right.
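The back-of-the-envelope arithmetic makes the mismatch concrete. The per-user figures below come from the post; the user count (roughly 900 million around the IPO) and the $100bn valuation are illustrative assumptions for “tens of billions”:

```python
# Rough check of the gap between user-generated revenue and valuation.
users = 900e6              # assumed user count (~900 million)
revenue_per_user = 1.21    # estimated annual revenue per user ($)
valuation = 100e9          # assumed valuation ("tens of billions")

implied_revenue = users * revenue_per_user        # roughly $1.1bn
valuation_multiple = valuation / implied_revenue  # ~90x revenue

print(f"implied revenue: ${implied_revenue / 1e9:.2f}bn")
print(f"valuation-to-revenue multiple: {valuation_multiple:.0f}x")
```

Under these assumptions, the market is pricing FB at something like ninety times the revenue its users actually generate, which is the gap the post is pointing at.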
There’s more. The big selling feature is that FB has data volunteered by users, but FB doesn’t seem very adept at using this data to target ads. For example, I get Indiana targeted ads, some hobbies (online games), and generic ads for online colleges. Most of this could have been obtained from other websites. The ads don’t seem to exploit my networks much.
The best I can figure is that social networking creates economic value for investors by the same mechanism that TV shows create value for networks. A social network is a form of entertainment: people just want to keep up with friends in an easy way, and FB does that. So we watch our FB page like we watch a TV show. Enter the advertisers.
As long as FB maintains an easy-to-use interface, it will likely remain the social network leader, at least among US users. But this also means that FB’s market value will drop until it matches the relatively more modest income stream generated by users. That doesn’t mean FB is doomed to failure. It’s got a lock on its niche, which is huge, but to justify its utterly gigantic IPO, it’ll have to innovate in ways that create more value beyond allowing Internet addicts to post the cat meme of the day.
First, I’d like to thank our July guests, Jenn Lena and Katherine Chen. We are blessed to have such accomplished friends. Second, I’m picking a fight with Jenn Lena, just because I can. Earlier this week, Jenn referred to an earlier discussion of college majors, where I argued that some students drift into social sciences and humanities because they are easier and that this means that these students have less academic ability. Jenn called this view bonkers.
I may be bonkers, but I’ve got some evidence. First, though, a few qualifiers. People may think I hate the humanities or that I think poets are dumb. Quite the opposite. I am impressed by the humanities. It requires enormous intellect to write great music or compose an insightful poem. Also, I freely admit that there are a lot of folks in the arts whose cognitive ability is on par with people in other fields. Doing great art is just as much of a challenge as solving a math problem.
But that still doesn’t change the fact that the *average* social science or humanities major simply has less academic skill than the *average* science major. For example, consider this 2004 study from the Journal of Econometrics, “Ability Sorting and the Returns to College Major” by Peter Arcidiacono. The paper analyzes labor market outcomes, SAT scores, and college major. The majors are sorted into natural science, social science/humanities, business, and education. If you look at Table 2, the results are clear. The natural science majors had higher mean scores in both SAT math and verbal (!), though the verbal difference is small. The social sciences/humanities do about the same as business in math, but better in verbal. Education is dead last in both categories. These results are not atypical and are common in the higher education literature.
There is also evidence about graduate students. Studies of GRE scores by major, once again, show that the sciences do better than the humanities/arts/social sciences in math, and many science fields do better than the humanities and arts in verbal GRE. Once again, education and some types of business don’t do well.
Bottom line: On the average, science students are the best in terms of math, reading, and vocabulary. On the average, education is rock bottom. The arts and social sciences are in the middle, but still consistently less than the sciences.
Brendan Nyhan wrote a post on how to improve the publication process. I’d like to focus on one recommendation: awarding credit to speedy or high-quality reviewers. Earn some points and your own submissions get priority treatment. This proposal, I think, should be fairly easy to implement.
- Probably tough for a system across all journals, but it could easily be done for a group of journals that all use the same submission system, like Manuscript Central. Or maybe all journals at Chicago or Cambridge.
- Each person who participates in Manuscript Central has a unique ID# attached to their email.
- You start with 10 credit points.
- You can add points by submitting paper reviews within 3 months. 1 point per review.
- You lose 1 point for a declined review, a review that never shows up, or a short, useless review. Your review will be judged by the editor.
- Half-credit for slow reviews.
- Manuscript Central can regulate the flow of requests, so people aren’t penalized for being popular (e.g., maybe 2 a month).
- No paper can be uploaded to Manuscript Central if any author has an email whose ID# has zero or less credit.
The system is simple and intuitive. I can’t imagine programming it would be hard. It also provides the right incentives. The system can be gamed a little (switching emails), but not much.
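To see just how simple, here is a minimal sketch of the credit ledger in Python. The rules (10 starting points, +1 for an on-time review, half credit for slow ones, −1 for declined/missing/useless reviews, no uploads at zero or below) come from the list above; the class and method names are my own hypothetical choices, not Manuscript Central’s API:

```python
# A toy reviewer-credit ledger keyed by each reviewer's unique ID.
class CreditLedger:
    START = 10  # everyone starts with 10 credit points

    def __init__(self):
        self.points = {}

    def balance(self, reviewer_id):
        # Unknown IDs get the starting balance on first contact.
        return self.points.setdefault(reviewer_id, self.START)

    def on_time_review(self, reviewer_id):
        # +1 point for a review submitted within 3 months.
        self.points[reviewer_id] = self.balance(reviewer_id) + 1

    def slow_review(self, reviewer_id):
        # Half credit for slow reviews.
        self.points[reviewer_id] = self.balance(reviewer_id) + 0.5

    def penalty(self, reviewer_id):
        # -1 for a declined, missing, or useless review (editor's call).
        self.points[reviewer_id] = self.balance(reviewer_id) - 1

    def may_submit(self, author_ids):
        # No upload if ANY author's ID has zero or less credit.
        return all(self.balance(a) > 0 for a in author_ids)

ledger = CreditLedger()
ledger.on_time_review("id-42")
print(ledger.balance("id-42"))       # 11
print(ledger.may_submit(["id-42"]))  # True
```

The gatekeeping lives entirely in `may_submit`, which is why the scheme is hard to game: switching emails only resets you to the starting balance, it never manufactures extra credit.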
Here’s a recent book chapter worth reading: “Why Behaviorism Isn’t Satanism.”
The history of comparative evolutionary psychology can be characterized, broadly speaking, as a series of reactions to Cartesian versus pragmatist views of the mind and behavior. Here, a brief history of these theoretical shifts is presented to illuminate how and why contemporary comparative evolutionary psychology takes the form that it does. This brings to the fore the strongly cognitivist research emphasis of current evolutionary comparative research, and the manner in which alternative accounts based on learning theory and other behaviorist principles generally receive short shrift. I attempt to show why many of these criticisms of alternative accounts are unjustified, that cognitivism does not constitute the radical lurch away from behaviorism that many imagine, and that an alternative “embodied and embedded” view of cognition—itself developing in reaction to the extremes of cognitivism—reaches back to a number of behaviorist philosophical principles, including the rejection of a separation between brain and body, and between the organism and environment.
Key Words: animal, cognition, behavior, cognitivism, behaviorism, evolution, learning, psychology
Organizing Entrepreneurial Judgment: A New Approach to the Firm is $99 at Amazon, and my library has not ordered a copy. I don’t own a Kindle since it won’t accept books, like the Grad Skool Rulz, from independent distributors. How might a man on a sociologist’s salary get a copy of this fine volume?
Bill Roy gave me permission to post this comment and illustration: ” Trajectories, of course, apply to individuals as well as genres. The comparison of musical trajectories to other artistic trajectories is very promising. I have played around with trajectories of musical careers in the 78 rpm era (before 1950). Career trajectories also include output—how many songs a performer records. If you examine the number of songs performers record relative to their first recording, the overall picture is one of decline. The great majority of musicians record only once. Year 1=2 (A and B sides of a record), and all subsequent years=0. Artists who record more than once peak early, then decline. What is especially interesting is that those who eventually record many songs (hundreds) look no different in their second or third year. They peak later, then decline at a slower rate. If you compare groups with different levels of life-time productivity, the initial curves are nearly identical. This is illustrated in this figure: The x axis is the number of years relative to an artist’s first record. The y axis is the average number of songs in that year (a different metric should be used because the distribution approximates a pareto distribution, but I’m just beginning the analysis). The different lines are different levels of life time productivity. Of course, there is right censoring.”
Adam Galinsky’s recent work and experiment on clothing and perceptions of cognition have been getting lots of attention. Here’s the New York Times piece – “Mind games: Sometimes a white coat isn’t just a white coat.” And, the ABC News story – “Clothes make the man and career.”
Here’s the paper (with Hajo Adam) in the Journal of Experimental Social Psychology, “Enclothed Cognition.”
(Sorry, Fabio, I don’t think the untucked shirt + Fanny Pack look gets you any extra cognition points. But I could be wrong.)
Here’s Joel West giving a primer (at Berkeley) on open and user innovation.
The European Fine Art Foundation released a report estimating the total volume of the global fine art trade. They surveyed auction houses, consultants, and dealers to get an estimate. It doesn’t sound like they focused on crafts and low-status art. The total? $60.8 billion. Roughly speaking, every person on earth chips in about $10 for fine art. Obviously, some chip in more than others.
- global art commerce ($60bn) is less than 10% of the total US defense budget ($739bn)
- there’s a ton of auctioneers dealing in the super hot Chinese art market
- the average high art item sells for about $1,200
- London and New York account for 60% of the total.
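The headline numbers check out with some quick division; the world population figure (about 7 billion at the time) is my assumption for illustration:

```python
# Quick arithmetic behind the bullet points above.
art_trade = 60.8e9      # global fine art trade ($)
defense_budget = 739e9  # total US defense budget ($)
population = 7e9        # assumed world population (~7 billion)

per_person = art_trade / population            # just under $9 each
share_of_defense = art_trade / defense_budget  # about 8%

print(f"per person: ${per_person:.2f}")
print(f"share of US defense budget: {share_of_defense:.1%}")
```

So “about $10 per person” is a slight round-up (closer to $9), and the defense-budget comparison comes out around 8%.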
Jenn Lena’s new book, Banding Together, takes on a major issue in the sociology of culture – how people organize so that they can make culture. In other words, music, painting, or poetry doesn’t just appear out of nowhere. There’s usually a community of people who create the music.
Of course, Jenn Lena isn’t the first to make this observation. Howard Becker has a well known book called “Art Worlds,” which describes the world of visual arts, with its gate keepers and taste makers. However, the sociology of cultural production wasn’t terribly well developed in the years after Becker’s work. During the 1980s and 1990s, “culture” took on a different meaning in sociology. It didn’t mean cultural artifacts, it meant the cognitive aspects of behavior, the shared understandings that guide action and provide meaning to the world.
Still, a number of sociologists did continue plugging away at the question of how people came together to actually make stuff that was artistic or “cultural.” Richard Peterson wrote a highly influential book on the social construction of country music. More recently, we have studies of how artistic organizations persist (see Victoria Johnson’s book on operas) and how networks facilitate artistic work (see Gabriel Rossman’s work).
A follower and co-author of Peterson, Jenn Lena brings this literature to a new point. She asks a very simple, yet surprisingly neglected, question: what are the different ways that people get together to make music?
Her answer is intuitive and important. Music communities tend to take one of four forms – traditional (think folk music), commercial, avant-garde, and scene-based. These forms can mutate into one another, and Jenn spends a lot of time describing how that happens. Each type of music community (“genre” in her words) has its own type of organizations and networks.
It’s a rich book that pushes the study of markets and culture in the right direction, and I think its models can be extended. Next week, we’ll get into the nitty-gritty of music production and talk about how the model might be applied to other examples of cultural production.
I’m a sucker for nutty futurist speculations. So bear with me on this one.
A few nights ago I was watching Neal Stephenson’s talk on “getting big stuff done,” where he bemoans the lack of aggressive technological progress in the past forty or so years. There’s obviously some debate about this, though he makes some good points. He raises the question of why, for example, we haven’t yet built a 20km tall building despite the fact that it appears to be technologically very feasible with extant materials. Nutty. But an interesting question. From a sci-fi writer.
Stephenson ends his talk on an organizational note and asks:
What is going on in the financial and management worlds that has caused us to narrow our scope and reduce our ambitions so drastically?
I like that question. Even if you think that ambitions have not been lowered, I think all of us would like to see the big problems of the world addressed more aggressively. (Unless one subscribes to the Leibnizian view that we live in the “best of all possible [organizational] worlds.”) Surely organization theory is central to this. This is particularly true in cases where the technologies and solutions for big problems seemingly already exist – but the social technologies and organizational solutions appear to be sub-optimal. So, how can more aggressive forms of collective action and organizational performance be realized? I don’t see org theorists really wrestling with these types of questions, at least not systematically. It would be great to see some more wide-eyed speculation about the organizational forms and theories that might facilitate more aggressive technological, social, and human progress.
I can see several reasons why organization theorists don’t engage with these kinds of “futurist” questions. First, theories of organization tend to lag practice. That is, organizational scholars describe and explain the world (in its current or past state), but they don’t often engage in speculative forecasting (about possible future states). Second, many of the organizational sub-fields suited for wide-eyed speculation are in a bit of a lull, or they represent small niches. For example, organization design isn’t a super “hot” area these days (certainly with exceptions) — despite its obvious importance. Institutional and environmental theories of organization have taken hold in many quarters, and agentic theories are often seen as overly naive. Environmental and institutional theories are of course valuable, but they delimit, are incremental, and are perhaps even self-fulfilling, and thus may not always be practically helpful for thinking about the future.
That’s my (very speculative) two cents.
Despite its many problems, I use Wikipedia. A lot. Too much. Sure enough, just now I tried to dig something up – and got the Wikipedia blackout page. Given the blackout, where will we quickly read up on SOPA (or whatever else)?
The SOPA thing is a complicated matter – a fascinating tension between protecting intellectual property and free speech. At the extreme – should online sites like Pirate Bay (free movies, music and books) be allowed to operate freely? Few people say “yes” to that one (including Jimmy Wales), so the questions emerge in the gray areas. But SOPA itself is a mess, no question.
The latest episode of This American Life is a breathtaking first-person account of a Mac aficionado’s visit to an electronics manufacturing plant in Shenzhen, China. Here he meets some of the workers who put iPhones together and discovers that the entire manufacturing process is done by hand! He learns of the incredible toll this process of constructing little electronics goods takes on their health and lives. The account, partly due to Mike Daisey’s engaging monologue style, is really unforgettable and disturbing. One of my favorite lines from Daisey’s account:
How often do we wish more things were hand-made? Oh, we talk about that all the time, don’t we? I wish it was like the old days. I wish things had that human touch. But that’s not true. There are more hand-made things now than there have ever been in the history of the world. Everything is hand-made. I know, I have been there. I have seen the workers laying in parts thinner than human hair, one after another after another. Everything is hand-made.
In typical TAL style, they try to get the other side of the story, and the last ten minutes of the episode really grapple with the effects of sweatshop labor on economic mobility. Still, the voices that will remain in your head after the podcast are those of the mistreated workers whose bodies and souls are slowly being sacrificed on the factory line.
We all have gripes about the publishing process. Scholastica is a cool initiative by a set of grad students at the University of Chicago to take the pain out of academic publishing. Specifically, “Scholastica makes it easy to create and manage peer reviewed journals online by streamlining administrative tasks and helping you find enthusiastic, qualified reviewers.”
Definitely a worthy cause! Be sure to check Scholastica out.
Hypothes.is is an ambitious project to peer-review the internet. The project was recently featured on Kickstarter – and it successfully raised $100,000+ for the effort. The site “will enable sentence-level critique of written words combined with a sophisticated yet easy-to-use model of community peer-review.” Cool. I think orgtheory already has a system for this type of peer review — readers and co-bloggers who aren’t afraid to challenge posts in the comments — but an open peer-review overlay of the Internet is ambitious.
The effort will inevitably run into lots of fascinating epistemological issues: what counts as truth, what counts as expertise (particularly when opinions diverge), how disputes are reconciled, the role of social consensus versus logic or proof, etc. The effort should be fun to follow.