orgtheory.net

Archive for the ‘productivity and performance’ Category

driverless cars vs. police departments

In my view, driverless cars are revolutionary. At the very least, they will eliminate a major health problem – auto injuries and fatalities. No system will be accident-free, but driverless cars will be better at driving than most humans: they don’t get drunk and they won’t drive recklessly.

There is another social consequence of driverless cars that needs discussion. Driverless cars will seriously disrupt police departments. Why? A lot of police department revenue comes from moving vehicle violations and parking tickets. In a recent news item, one judge admitted that many small towns fund their police departments entirely through speeding tickets. Even a big city police department enjoys the income from tickets: New York City collects tens of millions of dollars in moving violation fines. This income stream will evaporate.

Another way that driverless cars will disrupt police departments is that they will massively reduce police stops. If a driverless car has insurance and registration (which can be transmitted electronically) and drives according to the rules of the road, then police literally have no grounds to pull over a car that has not already been identified as connected to a specific crime. Hopefully, this means that police will no longer use moving violations as an excuse to pull over racial minorities.

Even if only a fraction of the hype about driverless cars turns out to be true, it would be a massive improvement for humanity. Three cheers for technology.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

November 3, 2016 at 12:15 am

genres in popular music

There is a new paper in PLoS One by Daniel Silver, Monica Lee, and C. Clayton Childress about the structure of genres. They use MySpace co-mention data to see which genres are mentioned together, which maps out the space of pop music in the mid-2000s. From the abstract of “Genre Complexes in Popular Music”:

Recent work in the sociology of music suggests a declining importance of genre categories. Yet other work in this research stream and in the sociology of classification argues for the continued prevalence of genres as a meaningful tool through which creators, critics and consumers focus their attention in the topology of available works. Building from work in the study of categories and categorization we examine how boundary strength and internal differentiation structure the genre pairings of some 3 million musicians and groups. Using a range of network-based and statistical techniques, we uncover three musical “complexes,” which are collectively constituted by 16 smaller genre communities. Our analysis shows that the musical universe is not monolithically organized but rather composed of multiple worlds that are differently structured—i.e., uncentered, single-centered, and multi-centered.

For Chicago-ites, this is a “hollow core” finding about musical social worlds. Recommended.
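For readers who want a feel for the mechanics, here is a minimal sketch of the general co-mention-network idea, assuming a toy dataset of artists and their self-listed genres. It only illustrates building a weighted co-mention network and detecting communities in it; it is not the authors’ actual MySpace data or analysis pipeline.

```python
# Toy illustration of a genre co-mention analysis (not the paper's method).
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical artist profiles: each artist lists the genres it identifies with.
artists = {
    "artist_a": ["rock", "indie", "punk"],
    "artist_b": ["indie", "folk"],
    "artist_c": ["hip hop", "r&b"],
    "artist_d": ["r&b", "soul", "hip hop"],
    "artist_e": ["rock", "metal"],
}

# Build a weighted co-mention network: genres are nodes, and an edge's weight
# counts how many artists mention both genres together.
G = nx.Graph()
for genres in artists.values():
    for g1, g2 in combinations(sorted(set(genres)), 2):
        current = G.get_edge_data(g1, g2, default={"weight": 0})["weight"]
        G.add_edge(g1, g2, weight=current + 1)

# Group genres into communities (here via modularity maximization; the paper
# uses its own suite of network-based and statistical techniques).
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"genre community {i}: {sorted(community)}")
```

On real data, the interesting question is the one the paper asks: whether those communities cluster into larger complexes, and whether each complex is uncentered, single-centered, or multi-centered.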

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

June 7, 2016 at 12:01 am

i am so sorry, the GRE is still a valid tool in graduate admissions

A recent Atlantic article by Victoria Clayton makes the case that the GRE should be ditched, based on some new research. My case for keeping the GRE rests on the following:

  1. The GRE does actually predict early graduate school grades, if modestly, and you need to do well in courses to get the degree.
  2. Many other methods of evaluating graduate school applications are garbage. For example, nearly all research on letters of recommendation shows that they do not predict performance.

To reiterate: nobody says GRE scores are a perfect predictor. I also believe that their predictive ability is lower for some groups. But the point is not perfection. The point is that the GRE sorta, kinda works and the alternatives do not work.

So what is the new evidence? Actually, the evidence is lame in some cases. For example, Clayton cites a 1997 Cornell study claiming that GREs don’t correlate with success. True, but if you actually read the research on GREs, you find meta-analyses that compile data from multiple studies and show that the GRE does predict performance. This study compiles data from over 1,700 samples and shows that, yes, the GRE does predict performance. Sorry, it just does, test haters.
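For those wondering what “compiling data from multiple studies” looks like in practice, here is a minimal sketch of fixed-effect meta-analytic pooling of correlations using Fisher’s z. The sample sizes and correlations below are invented for illustration; they are not figures from the meta-analysis linked above.

```python
# Illustrative pooling of validity correlations across studies (fixed-effect,
# Fisher's z). All study numbers below are made up for the example.
import math

# (sample size, observed GRE-to-grades correlation) for hypothetical studies
studies = [(120, 0.28), (340, 0.31), (85, 0.19), (500, 0.35)]

weighted_sum, total_weight = 0.0, 0.0
for n, r in studies:
    z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher's z transform
    w = n - 3                              # inverse-variance weight for z
    weighted_sum += w * z
    total_weight += w

z_bar = weighted_sum / total_weight
# Back-transform the pooled z to a correlation
r_bar = (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)
print(f"pooled validity estimate: r = {r_bar:.2f}")
```

The point of pooling is that sampling error in any single small study (like the 1997 Cornell one) washes out when many samples are combined.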

Clayton also cites a Nature column by Miller and Stassun that correctly laments that standardized tests sometimes miss good students, especially minorities. As I pointed out above, no one claims the GRE makes perfect predictions – only that the correlation is there, which is better than alternatives that simply don’t predict performance. But at least Miller and Stassun offer a new alternative: in-depth interviews. They cite a study of 67 graduate students at Fisk and Vanderbilt selected via this method and note that their projected (not actual) completion rate is 80% – much better than the typical 50% of most grad programs.

Two comments:

  1. I am intrigued. If the results can be replicated in other places, I would be thrilled. But so far, we have one (promising) study of a single program. Let’s see more.
  2. I am still not about to ditch the GRE, because I am not persuaded that academia is ready to implement a very intensive in-depth interview admissions system as its primary selection mechanism. The Miller and Stassun column refers to a study of physics graduate students – small numbers. What is realistic for grad programs with many applicants is that you need to screen people for interviews, and that screen will include, you guessed it, standardized tests.

Bottom line: The GRE is far from perfect, but it is usable, and there is no evidence that systematically undermines that claim. Some alternatives simply don’t work, and the newly proposed method, in-depth interviews, will probably need to be coupled with the GRE.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

March 16, 2016 at 12:01 am

my deep burning hatred of letters of recommendation

Econjeff mentions my long-standing critique of letters of recommendation (LoRs). Here, I describe my personal experience with them and then review the massive empirical research showing that LoRs are worthless.

Personal experience: In graduate school, I had enormous difficulty extracting three letters from faculty. For example, during my first year, when I was unfunded, I asked an instructor, who was very well known in sociology, for a letter. He flat out refused and told me that he didn’t think I’d succeed in this profession. In the middle of graduate school, I applied for an external fellowship and was informed by the institution that my third letter was missing. Repeatedly, I was told, “I will do it.” Never happened. Even on the job market, I had to go with only two letters. A third professor (different from the professors in the first two cases) simply refused to do it. Luckily, a sympathetic professor in another program wrote my third letter so I could be employed. Then, oddly, that recalcitrant professor submitted a letter after I had gotten my job.

At that point, I assumed that I was some sort of defective graduate student. Maybe I was just making people upset, so they refused to write letters. When I was on the job, though, I realized that lots and lots of faculty never submit letters. During job searches at Indiana, I saw lots of files with missing letters; perhaps a third were missing at least one letter. Some were missing all letters. It was clear to me that I was not alone. Lots of faculty simply failed to complete their task of evaluating students, due to incompetence, malice, or cowardice.

Research: As I grew older, I slowly realized that there are researchers in psychology, education, and management dedicated to studying employment practices. Surely, if we demanded all these letters and tolerated all these poor LoR practices, there must be research showing the system works.

Wrong. With a few exceptions, LoRs are poor instruments for measuring future performance. Details are here, but here’s the summary: As early as 1962, researchers realized LoRs don’t predict performance. Then, in 1993, Aamodt, Bryan, and Whitcomb showed that LoRs work – but only if they are written in specific ways. The more recent literature refines this: medical school letters don’t predict performance unless the writer mentions very specific things; letter writers aren’t even reliable – their evaluations are all over the place; and even in educational settings, letters seem to have a very small correlation with a *few* outcomes. Also, recent research suggests that LoRs are biased against women, in that writers are less likely to use “standout language” for women.

The summary from one researcher in the field: “Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999).”

The bottom line is this: Letters are unreliable (they vary too much in their measurements). They draw attention to the wrong things (people judge the status of the letter writer). They rarely focus on the few items that do predict performance (like explicit comparisons). They have low correlations with performance, and they use coded language that biases against women.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

October 6, 2015 at 12:01 am

megaprojects are a rip-off

I’ve always known that some city projects are simply bad deals, like sports stadiums. What I didn’t know is that there is new research showing that mega-projects of all types are a giant rip-off. Bent Flyvbjerg of Oxford discusses this finding in a new EconTalk podcast. What Flyvbjerg did was collect data on “mega-projects” – construction efforts that cost at least a billion dollars and affect a million people. He found that 90% of the time, mega-projects are over budget, not completed on time, or do not attract the customers that were predicted (i.e., demand was wildly overestimated). This applies to both private and public sector projects. Flyvbjerg also reports that smaller projects tend to do much better, for a variety of reasons.

A few comments here:

  • This is pretty strong evidence that states should completely avoid the expensive mega-sports projects like stadiums above a certain size. The Olympics, for example, should only be hosted in nations that have preexisting facilities.
  • Flyvbjerg points out that mega-projects can even destroy the engineers and other professionals who build them. The architect of the Sydney Opera House did only one building in his life. The cost overruns and delays ruined his reputation.
  • It is mainly state actors, contractors, and land owners who receive benefits.
  • I recently saw the Church of the Sagrada Familia in Barcelona. Big project, but built little by little over 100 years. A better model?
  • There are some cases of success, but they seem hard to predict ex-ante.

Bottom line: The next time they tell you that we need this multi-billion dollar road, just say no.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

July 28, 2015 at 12:01 am

let’s just burn 20% of our research dollars

Plummeting grant funding rates are back in the news, this time in the U.K., where success rates in the Economic and Social Research Council—a rough equivalent to NSF’s SBE division—have dropped to 13%. In sociology, it’s even lower—only 8% of applications were funded in 2014-15.

I’ve written before about the waste of resources associated with low funding rates. But this latest round prompted me to do some back-of-the-envelope calculations. Disclaimer: these numbers are total guesses based on my experience in the U.S. system. I think they are pretty conservative. But I would love to see more formal estimates.
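To give a sense of the shape such a back-of-the-envelope calculation might take, here is a minimal sketch. Every parameter below except the 13% success rate is a placeholder assumption made up for illustration, not a figure from this post.

```python
# Rough sketch of effort "burned" by a low funding rate. All parameters other
# than the success rate are placeholder assumptions, not the post's figures.
applications = 1000        # proposals submitted in a round (assumed)
success_rate = 0.13        # ESRC-style success rate mentioned above
hours_per_proposal = 150   # PI, co-PI, and admin hours per proposal (assumed)
cost_per_hour = 75         # fully loaded cost of an hour of that time, in $ (assumed)

funded = applications * success_rate
rejected = applications - funded

wasted_hours = rejected * hours_per_proposal
wasted_dollars = wasted_hours * cost_per_hour

print(f"funded proposals:            {funded:.0f}")
print(f"rejected proposals:          {rejected:.0f}")
print(f"hours spent on rejections:   {wasted_hours:,.0f}")
print(f"cost of rejected effort:     ${wasted_dollars:,.0f}")
```

With these made-up inputs, roughly 870 rejected proposals absorb about 130,000 hours of effort, or nearly $10 million, in a single round.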

Read the rest of this entry »

Written by epopp

July 16, 2015 at 12:01 pm

what does success mean in higher education? – a guest post by mikaila mariel lemonik arthur

Mikaila Mariel Lemonik Arthur is an associate professor of sociology at Rhode Island College. She is the author of Student Activism and Curricular Change in Higher Education.

My state, Rhode Island, is beginning an experiment with performance funding for public higher education. Because of our small size, we have only three institutions of public higher education: The University (URI), The College (RIC—my institution), and The Community College (CCRI), and thus our performance funding initiative cannot involve comparative metrics or metrics based on what “the top” institution in the state is doing. The legislature instead decided to craft a performance funding formula based on its own goals for higher education outcomes. The current version of the bill—considerably improved from prior versions, due in large part to the concerted efforts of my colleagues who testified before the relevant state House and Senate committees—includes among its metrics graduation rates at 100% and 150% of normative time; the production of degrees tied to “high demand, high wage” employment opportunities in our (very small) state; and an additional measure to be decided by each of the three institutions in consultation with internal constituencies, with the potential to adjust the weights of these measures to reflect institutional missions, student body demographics, and “the economic needs of the state.”
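To make the mechanics concrete, here is a toy sketch of how a weighted formula of this general shape could combine such metrics. The metric names, scores, and weights are invented for illustration; they are not Rhode Island’s actual formula.

```python
# Toy weighted performance-funding score. All metric values and weights are
# invented placeholders, not Rhode Island's formula.

# Hypothetical normalized metric scores for one institution (0 to 1)
metrics = {
    "grad_rate_100pct_normative_time": 0.45,
    "grad_rate_150pct_normative_time": 0.62,
    "high_demand_high_wage_degrees":   0.30,
    "institution_chosen_measure":      0.70,
}

# Weights that could, in principle, be adjusted per institution to reflect
# mission, student body demographics, and state economic needs
weights = {
    "grad_rate_100pct_normative_time": 0.30,
    "grad_rate_150pct_normative_time": 0.30,
    "high_demand_high_wage_degrees":   0.25,
    "institution_chosen_measure":      0.15,
}

composite = sum(metrics[name] * weights[name] for name in metrics)
print(f"composite performance score: {composite:.3f}")  # would drive the allocation
```

The substantive debate, of course, is over which metrics and weights belong in such a formula in the first place.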

But this is not a post about performance funding, at least not really. Rather, it is a post about what “success” means for colleges and universities.

Read the rest of this entry »

Written by fabiorojas

June 15, 2015 at 12:01 am