what does success mean in higher education? – a guest post by mikaila mariel lemonik arthur

Mikaila Mariel Lemonik Arthur is an associate professor of sociology at Rhode Island College. She is the author of Student Activism and Curricular Change in Higher Education.

My state, Rhode Island, is in the process of beginning an experiment with performance funding for public higher education. Because of our small size, we have only three institutions of public higher education: The University (URI), The College (RIC—my institution), and The Community College (CCRI), and thus our performance funding initiative cannot involve comparative metrics or those based on what “the top” institution in the state is doing. Therefore, the legislature instead decided to craft a performance funding formula based on their own goals for higher education outcomes. The current version of the bill—considerably improved from prior versions, due in large part to the concerted efforts of my colleagues who testified before the relevant state House and Senate committees—includes among its metrics the 100% and 150% of normative time graduation rates; the production of degrees tied to “high demand, high wage” employment opportunities in our (very small) state; and an additional measure to be decided by each of the three institutions in consultation with internal constituencies; with the potential to adjust the weights of these measures to reflect institutional missions, student body demographics, and “the economic needs of the state.”
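As a rough illustration of how such a formula might work, here is a minimal sketch of a weighted performance-funding score. The metric names, values, and weights are all invented for illustration; the bill specifies neither exact weights nor a normalization scheme, only that weights may be adjusted to reflect institutional missions.

```python
# Hypothetical sketch of a weighted performance-funding score.
# Metric names, values, and weights are illustrative, not from the bill.

def performance_score(metrics, weights):
    """Weighted average of normalized metrics (each assumed in [0, 1])."""
    if set(metrics) != set(weights):
        raise ValueError("metrics and weights must cover the same keys")
    total_weight = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total_weight

# Example: a college that weights "high demand" degree production
# more heavily, per a hypothetical mission-based adjustment.
metrics = {
    "grad_rate_100pct": 0.18,     # share graduating in normative time
    "grad_rate_150pct": 0.45,     # share graduating in 150% of normative time
    "high_demand_degrees": 0.30,  # share of degrees in "high demand" fields
    "institutional_metric": 0.60, # each institution's self-chosen measure
}
weights = {
    "grad_rate_100pct": 1.0,
    "grad_rate_150pct": 1.0,
    "high_demand_degrees": 2.0,
    "institutional_metric": 1.0,
}
print(round(performance_score(metrics, weights), 3))  # -> 0.366
```

The design question the legislature left open is exactly the one the weights encode: how much each goal counts relative to the others.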

But this is not a post about performance funding, at least not really. Rather, it is a post about what “success” means for colleges and universities.

The RI state legislature has identified several potential definitions of success, which I’ll disambiguate and comment on:

  1. Graduation rates: Typically, and the performance funding bill in my state is no exception, graduation rates are produced in line with federal law, which requires 4-year institutions to report the number of students who began at their institution as first-time, full-time freshmen and graduated within 4, 6, or—since 2008—8 years of beginning (a similar metric, with shorter time frames, is used for 2-year institutions). Estimates of the percentage of students nationwide who are excluded by official graduation rates range from just under 30% to as high as 75%. Colleges like my own, which serve many transfer, part-time, and nontraditional students, have particularly large portions of the student body excluded from official graduation rates, reducing their utility substantially.
  1. Connection to high-demand, high-wage employment: Political figures would like to be able to tell parents and constituents that college education produces desirable employment and economic outcomes, and given state disinvestment in higher education funding, graduates hope to earn higher salaries to pay back the loans they have had to take. However, while research has made clear that students with 4-year degrees do better in the labor market in general, our understanding of the connection between degree attainment and specific occupational and economic outcomes is not as clear. National standards seem to be developing around the notion that outcomes should be measured at the 6-month-post-graduation mark, which I am sure many of us would agree is far too early to tell what is really going on. Of course, the idea that graduates should be employed in “high demand, high wage” occupations downplays the central focus of many comprehensive colleges on preparing students for careers in public and social service—social work may be high demand, but it is not high wage; education programs are seeing the demand for their graduates drop; etc.
  1. Graduating alumni who stay in the state: In my very small state, the legislature is extremely concerned that individuals who are educated here be retained as residents of and employees in the state. Anecdotally, I tend to believe this “problem,” such as it exists, is primarily related to the large number of students from outside our borders who we educate here in RI, most of whom never intended to reside here beyond their college years. Furthermore, our state is almost entirely part of the Boston Metropolitan Statistical Area, yet working and living in Boston—the same MSA as where students may have grown up and been educated—counts as outmigration from our state.
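To make the first metric concrete, here is a hedged sketch of how a federal-style graduation rate is computed over a first-time, full-time cohort, and of why transfer and part-time students fall out of it. The student records are invented for illustration, and the logic is simplified relative to actual federal reporting rules.

```python
# Simplified, illustrative model of the federal graduation-rate cohort.
# Invented student records; real reporting rules have many more conditions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    first_time_full_time: bool        # in the tracked cohort?
    years_to_degree: Optional[float]  # None if never graduated here

def grad_rate(students, window_years):
    """Share of the first-time, full-time cohort graduating within the window."""
    cohort = [s for s in students if s.first_time_full_time]
    grads = [s for s in cohort
             if s.years_to_degree is not None
             and s.years_to_degree <= window_years]
    return len(grads) / len(cohort) if cohort else 0.0

# Six students, but only four count toward the official rate.
students = [
    Student(True, 4),      # graduates in normative time
    Student(True, 6),      # graduates in 150% of normative time
    Student(True, None),   # never graduates here (may have transferred out)
    Student(True, 7),      # graduates, but outside both windows
    Student(False, 5),     # transfer-in: excluded from the official rate
    Student(False, 4.5),   # part-time starter: also excluded
]
print(grad_rate(students, 4))  # 100% of normative time -> 0.25
print(grad_rate(students, 6))  # 150% of normative time -> 0.5
```

Note that the two excluded students both graduated, yet they raise neither rate; at an institution where such students are a third of the student body, the official numbers can badly misstate what the college actually does.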

As educators and organizational scholars, we can easily see some of the tradeoffs these metrics suggest. For instance, it is easy to produce higher graduation rates by reducing the academic demands of college, but that is also likely to reduce the degree to which students are prepared for high-demand, high-wage jobs (see Arum and Roksa 2014). Of course, we don’t actually know how to measure what students are learning or the degree to which they are prepared for careers, so that type of metric is not a serious contender. If we begin with a focus on alumni in high-demand, high-wage occupations, we lose sight of some of the other valuable contributions our students can make as social service or social justice workers, as well as the fact that students may have different goals than legislators (see the conclusion to Armstrong and Hamilton 2013, which presents a case study of a young woman who chose a career which did not require a college degree, was fulfilled by her choice, and explained how college was necessary to achieving her future). It is also not clear at what point post-graduation we should be measuring outcomes: 6 months? 9 months? A year? A decade? At mid-career, once students have completed their education? If we measure too early, we underestimate what our colleges can accomplish and ignore the many students still figuring things out. If we measure too late, it takes decades to get the data, and how can we tell whether it really was our institution that made the difference? Furthermore, if students really go where the demand is, that may mean crossing state borders—and the legislature does not want to lose graduates to other states after investing state dollars in them, no matter how small that investment has become.

A colleague and I are beginning a project in which we hope to explore some of these tradeoffs in the context of our own unique location, but there are broader issues here as well. How do we understand what it means for our colleges and universities to be successful? How should these metrics relate to institutional missions—in other words, even if we limit our focus to the undergraduate education function, does success look different for a community college, a public comprehensive college, a public research university, a private college, a private research university, etc.? And if we determine what success means for a given type of institution or even an individual college or university, how do we best communicate that to our constituents and our elected officials? Thoughts, ideas, and provocations welcome—we are hoping to do something useful with this project.


Written by fabiorojas

June 15, 2015 at 12:01 am

4 Responses


  1. Reblogged this on grouchosis and commented:
    This is a project to keep track of. Very TARGET relevant.



    June 15, 2015 at 8:32 am

  2. This is a great post, and a worthy project, but I am pretty skeptical about our ability to measure anything about individual colleges’ effects on student outcomes that can be meaningfully used to improve colleges. One reason is that the universe of things we can conceivably measure (like salary, or having a job that requires a college degree, or performance on some kind of test) is pretty thin, and as the post sort of implies, it’s hard to even think about what better measures would look like.

    The other, equally important, is that it’s not really possible to fully adjust for student differences coming in and isolate college effects. This is not to say that we can’t learn valuable things from studying college attendance and various sorts of outcomes. I totally think we can. It’s that I don’t think we can learn enough to make them genuinely useful (as opposed to, say, politically useful) for organizational improvement.

    Now, one response might be that the metrics are coming whether we like it or not, and so we should try to make them as not-bad as possible because we’re going to have to live with them. And that is certainly a reasonable argument, and clearly some metrics are worse than others. Salary at six months after graduation is probably a particularly bad measure, even if income is the effect you’re interested in.

    So within the constraints of “the legislature is going to make us do this,” what might I want to see? Let’s assume that other people will demand measures of salary and STEM employment. Maybe I’d focus on measuring some things we value that are likely to be overlooked otherwise. For example, what fraction of classes require the 40 pp a week of reading and 20 pp a semester of writing that Arum & Roksa suggest promotes learning? How much debt do students have upon graduation, and what fraction of students’ incomes go to debt repayment five years out? What fraction of students self-report, five years out, that their college decision was a good one, and that they would make the same decision again? None of those are perfect, but maybe they would get at some different things, at least, beyond salary.



    June 15, 2015 at 5:17 pm

  3. I would also suggest simply asking graduates if they learned anything.

    An alternative would be to put in place a tracking system that uses some form of assessment/poll starting with high school graduation. Keep coming back every 5 years or whatever. In that time some will have gone through the RI university system. Subsequent assessments do not, or maybe even should not, ask the same questions, because it’s about measuring adaptability. Did university, or not going to university, teach skills to adapt and excel? It can be mobile, answered on a phone, and offer the participants a free course in exchange for participating. Maybe more credits with time, such as a paid sabbatical of 6 months to study again with a stipend for living expenses after answering 3 times. Perhaps offer the bank of assessments to local employers, if respondents agree. Quite often employers struggle to find matching applicants, and as mentioned, qualified applicants can be discounted simply because they don’t have the degree. It doesn’t work if the same people don’t keep answering, but it seems increasingly that lifelong learning is not a luxury but a necessity, and if there is a demonstrated commitment to accompanying citizens on this journey they might stay engaged and give the needed feedback. Just an idea to throw out there.


    chris eberhardt

    June 16, 2015 at 4:42 am

  4. […] Recent orgtheory posts excepted, we pay way too much attention to a tiny handful of higher education institutions in the U.S. (Not to mention too much attention to the U.S. relative to the rest of the world.) […]


