Archive for the ‘fabio’ Category
When advising PhD students, I try to dispel a misleading idea: that all the “good” jobs go quickly and that you are a complete failure if you can’t find employment by the Fall of your final grad skool year. This is simply incorrect. The sociology job market actually has three distinct phases. Once you appreciate this, it will help you a lot:
- Round 1: The classic arts & sciences positions. In sociology, the research intensive programs usually advertise in summer, accept applications by October, interview in November, and extend offers by December (or earlier). The most competitive liberal arts colleges seem to recruit in round 1.
- Round 2: January-March – teaching intensive, professional schools, and post-docs. Winter break provides a nice cut point; many programs choose to go in the early Winter. In sociology, b-schools and ed schools will often interview in the Winter. A lot of high status, well funded post-docs, such as the recently deceased RWJ program, go at this time.
- Round 3: March-early summer. Pot luck – a diverse group of positions, including short term post-docs, very teaching intensive schools, private sector jobs, government, policy, and jobs at R1s that opened up due to last minute shifts in budgets. Some jobs may still be open if they were *really* slow in processing applications, or they had a long string of interviews that didn’t pan out. I’ve seen people get some very high quality jobs as late as April or May, because candidates 1-4 turned a department down.
I am not saying that there are a lot of jobs. It is still the case that academia is very competitive and some very good people won’t find jobs. What I am saying is that sociologists have a lot of options that are spread across the academic year. Don’t panic if things don’t immediately work out. It is in your interest to keep your eyes open and keep applying.
A guest post by Jerry Davis. He is the Wilbur K. Pierpont Collegiate Professor of Management at the Ross School of Business at the University of Michigan.
By this point everyone in the academy is familiar with the arguments of Nicholas Kristof and his many, many critics regarding the value of academics writing for the broader public. This weekend provided a crypto-quasi-experiment that illustrated why aiming to do research that is accessible to the public may not be a great use of our time. It also showed how the “open access” model can create bad incentives for social science to write articles that are the nutritional equivalent of Cheetos.
Balazs Kovacs and Amanda Sharkey have a really nice article in the March issue of ASQ called “The Paradox of Publicity: How Awards Can Negatively Affect the Evaluation of Quality.” (You can read it here: http://asq.sagepub.com/content/59/1/1.abstract) The paper starts with the intriguing observation that when books win awards, their sales go up but their evaluations go down on average. One can think of lots of reasons why this should not be true, and several reasons why it should, all implying different mechanisms at work. The authors do an extremely sophisticated and meticulous job of figuring out which mechanism was ultimately responsible. (Matched sample of winning and non-winning books on the short list; difference-in-difference regression; model predicting reviewers’ ratings based on their prior reviews; several smart robustness checks; and transparency about the sample to enhance replicability.) As is traditional at ASQ, the authors faced smart and skeptical reviewers who put them through the wringer, and a harsh and generally negative editor (me). This is a really good paper, and you should read it immediately to find out whodunit.
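For intuition, the core of that matched-pair difference-in-differences design can be sketched with toy numbers. Everything below is synthetic and invented for illustration – it is not the paper’s data, sample, or code; it only shows the comparison being made: each award winner’s change in average rating is benchmarked against the change for its matched shortlisted non-winner.

```python
# A toy sketch (NOT the authors' code or data) of the logic behind a
# matched-pair difference-in-differences estimate: for each pair of books,
# one award winner and one shortlisted non-winner, compare the change in
# mean ratings from before to after the award announcement.
import random

random.seed(0)

def pair_estimate():
    # Hypothetical ratings on a 1-5 scale. By construction, the winner's
    # post-award mean drops, mimicking the paper's headline finding.
    winner_pre  = [random.gauss(4.0, 0.5) for _ in range(50)]
    winner_post = [random.gauss(3.7, 0.5) for _ in range(50)]
    rival_pre   = [random.gauss(4.0, 0.5) for _ in range(50)]
    rival_post  = [random.gauss(4.0, 0.5) for _ in range(50)]

    def mean(xs):
        return sum(xs) / len(xs)

    # Diff-in-diff: (winner's change) minus (matched non-winner's change)
    return (mean(winner_post) - mean(winner_pre)) - (mean(rival_post) - mean(rival_pre))

estimates = [pair_estimate() for _ in range(32)]  # 32 matched pairs, as in the study
did = sum(estimates) / len(estimates)
print(f"average diff-in-diff across pairs: {did:.2f}")  # negative by construction here
```

The actual paper, of course, does this with regression models, controls, and robustness checks rather than raw means, but the contrast being estimated – winners’ rating change relative to their matched shortlisted rivals – is the same.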
The paper has gotten a fair bit of press, including write-ups in the New York Times and The Guardian (http://www.theguardian.com/books/2014/feb/21/literary-prizes-make-books-less-popular-booker). And what one discovers in the comments section of these write-ups is that (1) there is no reading comprehension test to get on the Internet, and (2) everyone is a methodologist. Wrote one Guardian reader:
The methodology of this research sounds really flawed. Are people who post on Goodreads representative of the general reading public and/or book market? Did they control for other factors when ‘pairing’ books of winners with non-winners? Did they take into account conditioning factors such as cultural bias (UK readers are surely different from US, and so on). How big was their sample? Unless they can answer these questions convincingly, I would say this article is based on fluff.
Actually, answers to some of these questions are in The Guardian’s write-up: the authors had “compared 38,817 reader reviews on GoodReads.com of 32 pairs of books. One book in each pair had won an award, such as the Man Booker prize, or America’s National Book Award. The other had been shortlisted for the same prize in the same year, but had not gone on to win.” And the authors DID answer these questions convincingly, through multiple rounds of rigorous review; that’s why it was published in ASQ. The Guardian included a link to the original study, where the budding methodologist-wannabe could read through tables of difference-in-difference regressions, robustness checks, data appendices, and more. But that would require two clicks of a functioning mouse, and an attention span greater than that of a 12-year-old.
This is a non story based on very iffy research. Like is not compared with like. A positive review in the New York Times is compared with a less complimentary reader review on GoodReads…I’ll wait to fully read the actual research in case it’s been badly reported or incorrectly written up
Evidently this person could not even be troubled to read The Guardian’s brief story, much less the original article, and I’m a bit skeptical that she will “wait to fully read the actual research” (where her detailed knowledge of Heckman selection models might come in handy). After this kind of response, one can understand why academics might prefer to write for colleagues with training and a background in the literature.
Now, on to the “experimental” condition of our crypto-quasi-experiment. The Times reported another study this weekend, this one published in PLoS One (of course), which found that people who walked down a hallway while texting on their phone walked slower, in a more stilted fashion, with shorter steps, and less straight than those who were not texting (http://well.blogs.nytimes.com/2014/02/20/the-difficult-balancing-act-of-texting-while-walking/). Shockingly, this study did not attract wannabe methodologists, but a flood of comments about how pedestrians who text are stupid and deserve what they get. Evidently the meticulousness of the research shone through the Times write-up.
One lesson from this weekend is that when it comes to research, the public prefers Cheetos to a healthy salad. A simple bite-sized chunk of topical knowledge goes down easy with the general public. (Recent findings that are frequently downloaded on PLoS One: racist white people love guns; time spent on Facebook makes young adults unhappy; personality and sex influence the words people use; and a tiny cabal of banks controls the global economy.)
A second lesson is that there are great potential downsides to the field embracing open access journals like PLoS One, no matter how enthusiastic Fabio is. Students enjoy seeing their professors cited in the news media, and deans like to see happy students and faculty who “translate their research.” This favors the simple over the meticulous, the insta-publication over work that emerges from engagement with skeptical experts in the field (a.k.a. reviewers). It will not be a good thing if the field starts gravitating toward media-friendly Cheeto-style work.
People often complain, justifiably, that “big data” is a catchy phrase, not a real concept. And yes, it certainly is hot, but that doesn’t mean that you can’t come up with a useful definition that can guide research. Here is my definition – big data is data that has the following properties:
- Size: The data is “large” when compared to the data normally used in social science. Normally, surveys only have data from a few thousand people. The World Values Survey, probably the largest conventional data set used by social scientists, has about two hundred thousand people in it. “Big data” starts in the millions of observations.
- Source: The data is generated through the use of the Internet – email, social media, web sites, etc.
- Natural: It is generated through routine daily activity (e.g., email or Facebook likes). It is not, primarily, created in the artificial environment of a survey or an experiment.
In other words, the data is bigger than normal social science data; it is “native” to the Internet; and it is not mainly concocted by the researcher. This is a definition meant for social scientists – it is useful because it marks a fairly intuitive boundary between big data and older data types like surveys. It also identifies the need for a skill set that combines social science research tools and computer science techniques.
This year, there are many great pre-conferences. In addition to the New Computational Sociology conference on August 15, there is also:
- Digitizing Demography – hosted by Facebook and our guest blogger Michael Corey.
- The Hackathon at UC Berkeley – hosted by Wisconsinite Alex Hanna. Get together and code all night long.
- Junior Theory Symposium – hang out with the cool kids!
Please put links to more ASA pre-conferences in the comments.
Because I advocate open access, public access, and other new forms of scholarly publishing, some people think I am against traditional journals. That’s not quite right. I am always against ineffective or incompetent journal practices – like dragging papers through 3 or 4 rounds of revision. But my larger point is this: journal pluralism – scholarship comes in many forms, and there can be many forms of distributing it.
- Standard model: High rejection rate, often “developmental” – multi-year revisions standard. Criteria are particular and vague.
- Up or out: Sociological Science is a new model. Maybe not quite as selective, but they take papers “as is” or with modest revision. Still, there is a strong editorial influence.
- Agnostic: PLoS One – the main criterion is scientific rigor; the journal is completely agnostic with respect to “importance.” The reader decides.
It is not too hard to see the value of each model. The Standard model allows people to engage in a lengthy and complex revision process. It is also good for identifying papers that fit disciplinary norms well. Up or out is well designed for papers that may not fit disciplinary standards, but have an obvious and strong result. Agnostic publishing is exactly that. The journal certifies adherence to scientific standards but shifts decisions about importance to external audiences.
Some people see the new models as illegitimate, but I say the competition is good.
The new open access journal, Sociological Science, is now here. The goal is fast publication and open access. Review is “up or out.” On Monday, they published their first batch of articles. Among them:
- The Structure of Online Activism by Lewis, Gay, and Meierhenrich.
- Time as a Network Good by Young and Lim.
- Political Ideology and Preferences in Online Dating by Anderson et al.
Check it out, use the comments section, and submit your work. Let’s move sociological journals into the present.
Once in a while, I am asked by students about contingency theory – the view in organization theory that there is no optimal firm structure and that it simply “depends.” In other words, it’s the pragmatism of the org theory world. Here’s the question I get asked: is contingency theory still an active research area? On the one hand, it is obviously alive – people (including myself) still talk about it in published articles. On the other hand, it seems to be permission to resort to contextual, ad hoc explanations, or, more charitably, to add a needed extra dimension of variation. There aren’t native “contingency theory variables” that have been developed in the decades since the 1960s.
My own view is that it is now more of an argumentative move rather than a stand alone theory. Even though it is an obvious point, it acts as a corrective to the very rigid theories of org environments often found in sociology (e.g., iron cage institutionalism or early population ecology). If you think there is a real advance in contingency theory, do use the comment section.
One of the more serious anti-immigration arguments is that immigration is correlated with welfare state expansion. The argument hinges on a normative evaluation of social services, but, at the least, it is a coherent argument. The issue then is empirical evidence – does immigration actually precede welfare state expansion? An op-ed in the Investor’s Business Daily, written by Alex Nowrasteh and Zachary Gochenour, summarizes research claiming that there simply isn’t any association:
… we show that, historically, immigrants and their descendants have not increased the size of individual welfare benefits or welfare budgets and are unlikely to do so going forward. The amount of welfare benefits is unaffected by the foreign origin or diversity of the population.
Since 1970, no pattern can be seen between the size of benefits a family of three gets under welfare programs like Temporary Assistance for Needy Families (TANF) and the level of immigration or ethnic and racial diversity.
We compared individual states because they largely decide the benefit levels for many welfare programs, and states’ levels of ethnic diversity vary tremendously along racial, ethnic and immigrant lines. For instance, in 2010 only 1.2% of West Virginia’s population was foreign-born while 27% of California’s was.
Furthermore, the amount of TANF benefits also varied by states with similar demographics. For instance, in 2010 a California family of three received $694 a month in TANF benefits. But in Texas, an identical family received only $260. The size of the Hispanic population in each state is the same: 39%.
For every California with many immigrants, considerably diverse, and a vast welfare state, there is a Florida or a Texas with similar demographics but a smaller welfare state.
In other words, there is no actual link between welfare state generosity and a state’s immigrant population. So: economic research shows small or no effects on wages, and this research shows no effect on political outcomes. The arguments against immigration are extremely flimsy.
By Antonio Carlos Jobim & Newton Mendonca
This is just a little samba
Built upon a single note
Other notes are sure to follow
But the root is still that note
Now this new note is the consequence of the one we’ve just been through
As I’m bound to be the unavoidable consequence of you
There’s so many people who can talk and talk and talk
And just say nothing or nearly nothing
I have used up all the scale I know and at the end I’ve come
To nothing, I mean nothing
So I come back to my first note as I must come back to you
I will pour into that one note all the love i feel for you
Anyone who wants the whole show show do-re-mi-fa-so-la-ti-do
He will find himself with no show better play the note you know
Having spent a lot of time doing higher education research, I get asked about college all the time. Here are my major talking points.
For high school students:
- Even though social scientists disagree on why college is correlated with income, the evidence does show that there is a very large difference in life outcomes between college degree earners and everyone else.
- Don’t worry about which college to go to. Find one that you enjoy and that is affordable. In general, most people will move into jobs where pedigree does not matter.
- Getting into college: With the exception of the top 40 schools in America – out of 4,000! – most colleges have high acceptance rates, including a lot of good ones.
- The exception: There are a few careers where pedigree matters a lot – the law, politics, some of the performing and visual arts, and academia. Not a guarantee of success, but specific colleges do substantially boost your chance of success.
- Major: The big secret of higher education is the difference between STEM majors, business majors, and everyone else. In general I urge people to study what they enjoy because you will get the college degree income boost in any case. But if income is important, focus on STEM or business/econ.
For college students:
- People sort early into “tracks.” By the first year of college, most people will fall into a “party track” or a serious track. If you have any concern with professional school or completing your degree in a timely fashion, don’t fall into the party track. It is hard to get out of.
- Performance: The easiest way to do well in college is not to master the lecture notes, it is to study previous tests and papers.
- Graduate school: Some jobs (e.g., medicine) require a post-graduate credential. Most jobs don’t. If you are wondering if you should go to graduate school, make sure you need the degree first.
- 529s, people. 529s.
- Cost: With the exception of a few career tracks (e.g., academia), where you go to school doesn’t seem to have a big effect, although people do enjoy college more at small liberal arts institutions. So unless you have a lot of discretionary income, encourage children to go to public university. You’ll save the price of a house.
- Students typically enroll in academically comparable schools that are close by. So, yes, a few ambitious kids will move cross country for school, but most won’t.
For policy makers:
- Costs are out of control due to administrative growth and student services. The solution is to relieve colleges of administrative burdens and cut services and administrators.
- Faculty salaries have been flat.
- There is a massive increase in part time labor/adjuncts.
- State support for higher education will never come back. Alternative income sources must be found.
Siri Ann Terjesen is an assistant professor of management and international business at Indiana University. She is an entrepreneurship researcher and she also does work on supply chains and related issues. This guest post addresses gender and management.
I am hoping that orgtheory readers can offer some new theoretical angles for a relatively new phenomenon: national legislation to set gender quotas (usually of 33%-40%) for boards of directors, usually with a short time horizon (3-5 years) and targeted to publicly-traded but also state-owned enterprises. The first country to adopt a gender board quota was Norway, in December 2003, setting a 40% quota for state-owned firms by 2006 and for publicly-traded firms by 2008. Since then, ten countries have implemented quotas (Spain, Finland, Quebec in Canada for SOEs, Israel, Iceland, Kenya for SOEs, France, Italy for SOEs, and Belgium) and another 16 have softer ‘comply or explain’ legislation. The mandatory quotas have potentially tremendous impact at multiple levels: from individuals’ careers and ambitions, to creating new boardroom composition and dynamics, to challenging targeted firms to establish greater levels of female leadership at the board level, to providing an example for other countries. I recently surveyed the fast-growing academic literature on gender board quotas (about 80 articles, book chapters, working papers, and conference papers, all in the last 7 years, most in the last 2 years) and it is generally atheoretical, with the exception of some work on institutional theory and path dependency (as antecedents and inputs to the process of legislation) and a little bit on tokenism (back to Kanter’s 15% in 1977). Dear readers, any thoughts on promising theoretical perspectives?
When people discuss Obama’s impact on racial inequality, they quickly sort into a few camps. In the middle, and among Democratic partisans, the view is that Obama has done well. He believes in affirmative action and avoids race baiting. On the hard left, he’s slammed for not taking a more direct approach. They suggest that Obama openly discuss the legacy of slavery and consider more redistribution. On the right … well, let’s just say that they can’t quite accept the fact that Obama isn’t an atheist Muslim who hates America. I think these views all miss something important about race and the US presidency. They all ask: What do I wish the president could magically do? Instead, you have to start by asking: What are the biggest racial issues in America? Which of these can the president actually solve?
In my view, the biggest drivers of racial inequality are:
- The mass incarceration of Blacks for non-violent drug related offenses. This is hugely important because prison massively disrupts the economic and social lives of people in nearly irreversible ways.
- The de facto criminalization of undocumented migration, which is designed to marginalize non-whites on a massive scale.
- The college completion gap between Whites and Asians, and everyone else. This is hugely important because college completion is the crucial difference between having a middle-class lifestyle and not having one.
Notice that I didn’t say white privilege or white distrust/hatred of other groups. I certainly believe they are important, but honestly, if one had to choose, most rational people would probably end mass incarceration before eliminating white privilege.
Let’s talk about Obama specifically. What can he do about #1? No president can magically undo the maze of Federal and state drug law, or single-handedly reform the nation’s prosecutors. However, he could do some fairly simple things, like remaining silent on drug issues or downplaying excessive drug enforcement. I’ve seen little evidence that Obama is especially interested in reforming drug laws, and the President has scoffed, in the past, at drug legalization. On #2, Obama’s record ranges from marginal improvement (like promoting the DREAM act) to atrocious (overseeing mass deportation). On #3, there is little that the President can do directly to affect education. The power to improve schools lies mainly in the hands of the states and local school boards. My summary judgment on Obama is that he has done little to directly affect the mass incarceration of Blacks, and what positive he is doing on immigration is outweighed by doing nothing to prevent (or actively encouraging?) deportation. On schooling, I’ll give him a pass.
This is a post for people in R1 programs who have dissertation students. I am only writing this after I’ve gone through the job market with two students. One got a great job – but I had nothing to do with it! It was a good administrative job that fit the student’s career goals, found without my assistance. The other student landed a very competitive academic position – so the playbook worked. I am also writing this after seeing 10 years of IU job searches, and others, including my own.
Here it goes. Proper PhD advising boils down to three major issues – Patience, Professionalism, and Publication. In detail:
- Patience – the training of PhDs is, literally, a multi-year process. During this time, people switch topics, advisers, jobs, career goals, and a whole lot more. They also grow as people. Given the great change that people experience, you need patience to help people with where they are going. You need patience with half-baked topics. You need patience with mistaken regression models. You need patience, patience, patience.
- Professionalism – Nuts and bolts matter as well. You need to be present (physically and mentally). You need to get the paperwork done – letters written, forms signed. You need ways students can contact you. You need to calmly explain your standards and then show people how to meet them.
- Publication – In most fields, you either need publication for jobs or for promotion. Good advisers can help students develop a strategy for converting research into finished products in a reasonable amount of time. Good advisers will pull no punches. Unless you are an elite student at an elite school, you will need publications for an academic job and your adviser needs to clearly communicate that.
Sure, a few people will finish their PhDs and get jobs with nutty mentors or absent super stars. But most successful PhDs have a decent prof who, in some form, practices the three P’s.
The Economist has a map showing that the density of the Mexican American population mimics pre-1848 borders.
I strongly believe that graduate education in America is exploitative and structurally flawed. The system requires cheap teaching labor and lab assistants, but provides no incentives for quality training or professorial accountability. Still, that doesn’t mean that students should abdicate responsibility for their careers. Here are some simple (though not easy) things that can help you make sure you aren’t screwing up:
- Show up. Even if you feel horrible, show up. No matter what. Period. Unless someone died in your family, show up.
- Do your job. Grade the papers. Do the lab work. Unless the work is extreme, take it in stride.
- Be completely realistic about how you will be evaluated from day #1 – acquire a teaching record and a record of publication. Don’t have the fantasy that you will magically get the job of your dreams sans publications. Time spent on other issues is “out of pocket” – do it because you care, not because it will help you.
- Hang out with winners. These people are actually pretty easy to identify – they do well in teaching and publication and they have a track record of placement. Also, ask around to see if people are nice. Where there is smoke, there’s fire.
- Be constructive. It is easy to criticize people, but it really doesn’t accomplish much. Instead, if you actually offer to help and present a solution, then you’ll make a difference and people will appreciate it.
- Say yes (unless it is a crazy person). In other words, join teams and accept projects, and say yes to grad school buddies. Once you get a few projects going, then you can say no.
- No excuses: the only thing that matters is task completion. The task may be long or short, but every day should involve a core task.
- Submit, submit, submit. Got rejected? No problem – just resubmit tomorrow. If you thought the reviewers were right, take a week and then resubmit. The key to success isn’t submission – it’s resubmission.
Some problems in academia are truly hard, but, on the other hand, there are a lot of simple things you can do from day #1 to increase the chance that you get through the program promptly and you get the career outcome that you like.
This coming August 15, Dan McFarland of Stanford University and I will host a conference on the new computational sociology at the Stanford campus. The goal is to bring together social scientists, informatics researchers, and computer scientists who are interested in how modern computation can be brought to bear on issues that are of central importance to sociology and related disciplines. Interested people should go to the following web site for information on registration and presentation topics. I hope to see you there.
Scatter has a great post by Nathan Palmer on why we need to treat the Introduction to Sociology course with great importance:
The 101 class is the public face of our discipline. Every year there are roughly a million students in the United States who take Soc 101, that is, if my publisher friends’ estimates are to be believed. For the overwhelming majority of Americans, 101 will be their only exposure to our discipline. Sure, they might hear about our research findings in the media, but chances are they’ll have no idea that it was a sociologist who produced the research.
…How do the faculty in your department think about 101? Is it something to be avoided like the plague? Is it a hazing ritual that you put newbs through so that senior faculty can get to teach their “real classes” (i.e. their upper division classes within their area of interest)?
Undergraduates are significantly more likely to major in a field if they have an inspiring and caring faculty member in their introduction to the field. And they are equally likely to write off a field based on a single negative experience with a professor.
Second, it matters because of Krulak’s law, which posits: “The closer you get to the front, the more power you have over the brand.” Put simply, if the 101 class is the frontline of sociology, then the 101 teacher is the ambassador for us all.
Read the whole thing.
Mikaila Mariel Lemonik Arthur is an Associate Professor of Sociology at Rhode Island College and is the author of Student Activism and Curricular Change in Higher Education. Her current research explores network effects on curricular change in higher education. Her primary teaching responsibilities include social research methods and law and society courses, and this spring she is teaching a new interdisciplinary upper-level general education course on higher education.
One of the hallmarks of modernity is the focus on rationality and efficiency in organizational function: organizations of all types, from hospitals to Fortune 500 corporations, from universities to small not-for-profits, seek to improve their performance in terms of measurable outcomes. But, as the aphorism goes, “What gets measured gets done, what gets measured and fed back gets done well, what gets rewarded gets repeated” (variously attributed to any number of management scholars). For example, pharmaceutical companies’ focus on stock prices, sales figures, and the next blockbuster drug has led to a focus on treatments for common, chronic conditions, such as the umpteenth heartburn medication, and less focus on the development of new antibiotics, a trend that may soon prove to have devastating effects on our attempts to control infectious disease.
In higher education, a similar dynamic is occurring. In the past, colleges and universities were primarily measured (and funded) based on enrollments. This meant that encouraging more students to enroll, and keeping them enrolled in classes until after the third week (or whenever official enrollment statistics are due), was often the highest priority, and whether students ever graduated did not matter nearly as much. You get what you measure: students in seats.
More recently, the emphasis has shifted to retention and graduation as measurable outcomes. This change encouraged administrators to consider what was necessary to keep students in school and to improve time-to-degree, but it came with its own perverse incentives. For example, administrators turned to student evaluations as a way to increase student satisfaction; some colleges and universities discourage faculty from failing students because failures decrease graduation rates and increase dropout rates. This leads to colleges in which students can graduate with a 2.0, never having written a paper (a phenomenon discussed in recent books like Arum and Roksa’s Academically Adrift and Armstrong and Hamilton’s Paying for the Party). It also contributes to rampant grade inflation, including at elite institutions where over half of all grades awarded are As (happy students = repeat customers). You get what you measure: grads with high grades.
A variety of colleges and universities have thus sought ways to curb grade inflation, such as providing average class grades on transcripts and setting strict grading curves. By encouraging tougher grading standards, these methods may indeed reduce the average GPA of enrolled students, but tougher grading standards do not necessarily translate into better educated graduates—and in any case, most colleges and universities have not chosen to enact these sorts of reforms. Indeed, the ease with which average grades can be manipulated highlights the fact that grades themselves may not even be an adequate proxy measure of student learning, and thus the assessment movement was born.
Today, accrediting agencies require colleges and universities to demonstrate that students meet measurable learning outcomes, and projects like the Lumina Foundation’s Degree Qualifications Profile encourage institutions and departments to clearly state the intended outcomes of their programs in measurable language. Some colleges and universities have gone further, developing competency-based degrees in which students supposedly graduate by demonstrating their skills rather than by accumulating seat time. Many critics have argued that these programs are just another kind of teaching to the test. But teaching to the test is only a problem if the test is not actually able to test the desired learning outcomes—you get what you measure: results on the test.
It has already become clear to advocates of competency-based learning that competency is a pretty low floor, and instead they have begun to use the term “proficiency.” One goal of proficiency-based degree plans has been to shorten the time and cost of a degree, particularly by reducing Baumol’s cost disease by disrupting the relationship between seat time, faculty workload, and degree production. So far, competency- and proficiency-based programs are rare and likely appeal only to a particular self-selected group—but as Chambliss and Takacs point out in their forthcoming book How College Works, college only works if it works for all students, including the lazy, the unmotivated, and the perhaps not-so-smart.
So if we get what we measure and what gets rewarded gets repeated—and we measure proficiencies and reward completion—what do we get? Degrees as checklists? Students who cannot earn a college degree because, while they are excellent writers and have superb disciplinary knowledge, they cannot (in Lumina’s language) construct and define “a cultural, political, or technological alternative vision of either the natural or human world,” a key bachelor’s-level competency? An even more extreme bifurcation of the higher education field in which some colleges and universities develop rigorous proficiency measures and provide students with the supports necessary to excel while others assess writing, critical thinking, and speaking with machine- or peer-grading?
Or is it possible to build a system that measures proficiencies in a real, valuable way and that rewards completion without reducing the rigor of these proficiencies? In other words, can we find a way to measure what we want to get instead of getting what we happen to have measured?
On Wednesday, we discussed the ASA’s opposition to the federal Open Access policy. What do you think?
Federal grant agencies have asked people who receive grants to make the results of their work “public access.” In other words, if the public pays for it, the public should get to read it. Turns out that the ASA is against this policy. In a letter dated January 9, 2012 (about two years ago), Sally Hillsman, executive officer of the ASA, makes a strong argument against public access. Here is the letter and some key clips. Please read the letter yourself (open_access_hillsman):
It remains unclear why the federal government should spend scarce taxpayer dollars appropriated for scientific research to add to existing dissemination avenues. This is what scientific societies such as the ASA and our private sector publishing partners have done for over a century, and continue to do extremely well today. The national and international marketplace demonstrates that non-profit and profit-making scientific publishers in collaboration with scholarly societies have responded vigorously and competitively to expand access to scientific knowledge as new demands for content and sophisticated communication technologies have emerged. This success suggests that federal science agencies should invest taxpayer dollars in the research itself, especially as federal dollars that support scientific innovation fail to keep up with the pace of research.
There are no empirical studies that I know of which support the notion that free access to the scientific research literature will increase research productivity or economic growth in the United States.
ASA spends nearly $600,000 annually on journal editorial office expenses alone (which does not include administrative costs, printing and mailing expenses, editor honoraria, legal or overhead costs). ASA does not pay peer reviewers, but in return we sacrifice some revenue by a long-standing policy of keeping our university library subscription prices low (averaging well under $300 in 2011) in explicit recognition of the contribution university faculty make as peer reviewers, editors, and editorial board members.
Comments: First, it seems that the main issue in Dr. Hillsman’s response is that they are concerned about the income stream. I think this is a legitimate concern. But it should lead to a few sensible questions. For example, in an age of electronic publishing, why does one need $600,000 for a journal office? At the AJS, where I was an editor, we had (1) a full time manager (call it $50k), (2) some part time staff ($50k), (3) office space (say $5k a month, or $60k per year), and toss in $50k for postage, computers, etc. That totals about $210k per year. If we give Andy a nice fat bonus for running the joint ($50k), you get up to $260k. I am not sure why we need to rack up hundreds of thousands more in administrative costs.
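The back-of-the-envelope budget above can be checked in a couple of lines (the figures are this post’s own rough guesses, not actual AJS numbers):

```python
# Rough journal-office budget, using the post's own estimates (USD/year).
budget = {
    "full-time manager": 50_000,
    "part-time staff": 50_000,
    "office space ($5k/month)": 60_000,
    "postage, computers, etc.": 50_000,
}

base = sum(budget.values())
print(base)           # 210000
print(base + 50_000)  # 260000, with the editor's bonus
```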
But there are deeper questions. What is preventing the ASR from going all electronic and printing paper versions on demand for a few readers? Or going free access, but having advertisements or the “freemium” model? In other words, this argument seems to be a rear guard defense of an older publishing model, not an attempt to creatively think about how the ASR can be read by the widest audience possible.
Second, I don’t think Dr. Hillsman’s letter gets at the main point – the Federal government, sensibly, doesn’t want the results of funded research to be hidden behind paywalls. The paywall for ASR may not be a barrier to social scientists who have university accounts, but $300 is a barrier for many other readers. But the Federal government’s argument isn’t directed at the ASA. It’s directed at other publishers who charge thousands of dollars for a journal subscription. If you are a lay person, a poor person, or someone from another country, this is a real barrier.
We are now living in an exciting era of journal publishing. We have traditional models, the egalitarian PLoS One model, and the “up or out” Sociological Science model. I say let us experiment, not drift into rent seeking defenses of a 19th century approach to science.
About once a semester, I get into an argument about how my students shouldn’t go into law unless they get into a top ranked program. The argument is fairly simple – most lawyers make a modest salary. In the current environment, where one can easily acquire about $200,000 in law school debt, the salary simply doesn’t justify the loan, especially for students from low ranked schools who have very limited job prospects. This semester, one aspiring lawyer said that the average salary was $78k, and then another student raised her hand and said, “sure, but it’s a bimodal distribution!” Indeed, it is. Lesson: Don’t trust law school stats and only go if you get into a top program or you get a free ride.
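The bimodal point is easy to see with made-up numbers. In the sketch below (the salary figures are illustrative, not actual law school data), a large cluster of modest salaries plus a small “big law” cluster produces a mean that almost nobody actually earns:

```python
import statistics

# Illustrative (made-up) starting salaries: most graduates in a modest
# cluster, a minority in a high-paying "big law" cluster.
modest = [50_000] * 7
big_law = [160_000] * 3
salaries = modest + big_law

print(statistics.mean(salaries))    # 83000 -- close to nobody's actual salary
print(statistics.median(salaries))  # 50000 -- what a typical graduate earns
```

The mean sits in the empty valley between the two humps, which is exactly why the headline “average salary” is misleading.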
What do the Cooper Union and the University of California have in common? They both promised no tuition and have abandoned that. This leads to an interesting idea about higher education. My hypothesis is that free tuition is an “unstable equilibrium.” Once you get it going, it can be sustainable, since people exert great pressure to keep it that way. But once you charge tuition, it’s impossible to go back. For the University of California, it was the freedom to charge a “registration fee.” Originally meant to cover bureaucratic costs, it very quickly became de facto tuition. It was even litigated, and the courts openly admitted it was de facto tuition but deemed it necessary. The same happened at Cooper Union, which is now just another private school with a hefty tuition, after nearly a century of being tuition free.
In a world where college is a certification for the labor market, and entry is restricted, you invite monopoly pricing by producers. In that world, any excuse you can provide that allows you to start charging tuition is the first step in extracting huge amounts of money from students and parents. And that is very hard to resist.
Around 2004 or so, I felt that we were “done” with institutionalism as it was developed from Stinchcombe (1965) to Fligstein (2000). My view was that once you focused on the organizational environment and produced a zillion diffusion studies, there were only so many extra topics to deal with. For example, you could propose a strong coupling argument (DiMaggio and Powell 1983) or weak coupling argument (Weick 1976 or Meyer and Rowan 1977). Then maybe you could do innovation within a field (DiMaggio’s institutional entrepreneur argument) or how people exploited fields (Fligstein’s social skill argument). Finally, starting with Clemens (1999), and then Armstrong (2004), then Bartley (2006), and then the work of the orgs/movement crowd (including Brayden and myself) you got into contention. So what else was left?
Well, it turns out there are two major moves that force you in a new direction. One might be called the “aspects of fields” program – which means that you study some element of an organizational field in depth and really analyze the living daylights out of it. The Ocasio/Thornton/Lounsbury work on institutional logics is one example. Another is the new work by Suddaby and Lawrence on institutional work, which includes some of my own work on power building in organizations.
The other program is the Fligstein/McAdam Theory of Fields, which essentially marries “social skill” era Fligstein with the incumbent-challenger dynamics that were highlighted in The Dynamics of Contention book. In other words, you rub the Orange Bible and DoC together and hope the child is attractive.
The purpose of this post is not to evaluate these programs. That’ll come later, and there will be some special summer action concerning ToF. But here, I am just mapping out institutional theory as it stands these days. The “aspects of institutionalism” program is clearly a deepening and refinement of the theory that emerged in early post-Parsons sociology. On Facebook, I asserted that ToF was our “new new institutionalism,” and there was push back. I think my position is that, as far as genealogy and content are concerned, ToF is a merger of two separate ideas.
As far as the discipline is concerned, management scholars like the aspects program because it is relatively easy to stick to firm level dynamics. Studies of executives, or regulations, or what have you can be pegged to “institutional X” theory. In contrast, sociologists like conflict and movements a bit more, so ToF will prove popular. If nothing else, it provides a simple and intuitive vocabulary for the types of social processes that contemporary macro-sociologists like to talk about.
Michael Corey is a PhD candidate in sociology at the University of Chicago. This guest post explains his experiences working for Facebook, the world’s leading social networking website (as if you didn’t know that!).
Another Dispatch from Industry
Last summer I moved from Chicago to the Bay Area to work as a quantitative researcher at Facebook. I’d done six years in the PhD program at Chicago and left with drafts of all my dissertation papers but without a cohesive dissertation to turn in (three-paper dissertations aren’t exactly allowed). Six months at Facebook has been eye-opening and weird. Below I’ll try to give readers a feel for what it is like to go from an academic track to an industry job.
The FB Culture:
The culture at Facebook is really fun. I work at the main campus in Menlo Park, where a few thousand people work on the various FB platforms and the associated companies (Parse, Onavo, Instagram, etc.). My mother-in-law describes it as an Oxford college designed by Willy Wonka, which is pretty fair. The campus houses everything you need to reduce any external friction that would take you off-campus during the day [http://cnettv.cnet.com/barber-candy-shop-bank-among-deluxe-perks-facebook/9742-1_53-50153870.html]. It is pretty easy to drink the Kool-Aid about how great FB is, and I would imagine that it is hard to work here if you don’t. I wasn’t the biggest FB user when I started here, but having been off the site for a long time, I came to recognize how much I had missed by not being on it. For so many of my peers it is the only medium for communicating news, baby pictures, or cat memes to weak ties. Risk taking is encouraged and speed is considered a virtue.
university of chicago visit – everything you wanted to know about tweets and votes, but were afraid to ask
I will be a guest of the computational social science workshop at the University of Chicago this coming Friday. I will present a very detailed talk on the more tweets/more votes phenomenon called “Everything You Wanted to Know About the Tweets-Votes Correlation, but Were Afraid to Ask.” If you want to chat or hang out, please email me.
Refreshments will be served.
During Festivus, a commenter complained about the gender inequality on this blog. This comes up from time to time. Trust me, I’ve tried to remedy the situation. In the past, I’ve made a conscious effort to invite comparable numbers of guests of all genders. And we’ve had excellent female bloggers: our permanent crew member Katherine Chen, Hilary Levey Friedman, Jenn Lena, Leslie Hinkson, Mito Akiyoshi, Brandy Aven, Rhacel Parrenas, Karissa McKelvey, and others. But usually, men are much more likely to accept invitations and post, which is why the imbalance remains. In Spring 2013, I even put out an open call and I posted *everything* that was sent to me. The result? Two men and one woman.
But that doesn’t mean we can’t try even harder. So here’s the deal: send me something to post. You have a commitment from me. If you send me a post that is social science/management or related to the academic profession (orgtheory’s two main topics), I will post it contingent on light editing and meeting our admittedly low intellectual standards. This helps me by bringing fresh ideas to the blog and it will bring new voices to the soc blogosphere. So if there’s a book you want to comment on, or an article you hate, or a theoretical point that needs to get out there, send it in!!
Loyal orgtheorista Monica Prasad sent me the link to an ASA report written by her, David Brunsma, and Ezra Zuckerman. They interviewed 26 of the “best” manuscript reviewers identified by editors at a variety of journals. Common themes from those who write good, fast reviews:
- Reviewing is not a drag. You can learn a lot.
- Immediately focus on “big issues” – major questions, research design, etc.
- Don’t do “death by a thousand cuts.”
- Set out time in the schedule. It’s a normal part of academic life.
You can read the responses from all 26 respondents in the full length report.