Archive for the ‘education’ Category
A recent meta-analysis of studies of critical thinking (e.g., seeing if students can formulate criticisms of arguments) shows that, on average, college education is associated with gains in critical thinking. From “Does College Teach Critical Thinking? A Meta-Analysis” by Christopher Huber and Nathan Kuncel in Review of Educational Research:
This meta-analysis synthesizes research on gains in critical thinking skills and attitudinal dispositions over various time frames in college. The results suggest that both critical thinking skills and dispositions improve substantially over a normal college experience.
Now, my beef with the whole critical-thinking stream is the idea that there is a special domain of teaching called “critical thinking.” The authors looked at studies where students received special critical-thinking instruction:
Although college education may lag in other ways, it is not clear that more time and resources should be invested in teaching domain-general critical thinking.
Students are learning critical-thinking skills, but adding instruction focused on critical thinking specifically doesn’t work. Students in programs that stress critical thinking still saw their critical-thinking skills improve, but the improvements did not surpass those of students in other programs.
Bottom line: Take regular courses on regular topics and pay close attention to how people in specific areas figure out problems. Skip the critical-thinking stuff; it’s fluff.
Last month, Howard Aldrich made—as he often does—a good point in the comments:
There’s been an interesting subtle shift in the rhetoric regarding whose responsibility it is to pay for an individual’s post-secondary education. My impression is that there was a strong consensus across the nation 50 years ago, and certainly into the late 1960s, that governments had a responsibility to educate their students that extended up through college. However, I perceive that consensus has been under attack from both the left and the right….Liberals argue that much of the public subsidy goes to the wealthier high income students whose parents don’t really deserve the subsidy. Conservatives argue that as students benefit substantially from their college education, they should pay most of the cost.
This month, I’ve been writing about the history of cost-benefit analysis. (Why yes, I do know how to have a good time.) On the surface, it has nothing to do with universities. But there are important links to be made.
One of the arguments I’m playing with is that economic thinking—here just meaning a rational, cost-benefit, systematic-weighing-of-alternative-choices sort of thinking—has been particularly constraining for the political left. On the right, when people’s values disagree with economic reasoning, they ignore the economics and forge ahead. On the left, while some will do the same, the “reasonable” position tends to be much more technocratic. Think Brookings versus Heritage. Over time, one thing that has pulled “the left” to the right has been the influence of a technocratic, cost-benefit strain of thought.
Yes, I know these are sweeping generalizations. But stay with me for a minute.
There are a couple of big economic arguments for asking individuals, not the public, to pay for higher education. Howard’s comment gets at both of them.
One is that while there is some public benefit in educating people, individuals capture most of the returns to higher education. If that is the case, it makes sense that they should pay for it, with the state perhaps making financing available for those who lack the means. Milton Friedman made this argument sixty years ago, and since then, it has become ever more popular.
The other is that providing free higher education is basically regressive. The wealthier you are, the more likely you are to attend college (check out this NYT interactive chart), and relatively few who are poor benefit. Milton Friedman made this argument, too, but it is particularly associated with a 1969 paper by Lee Hansen and Burton Weisbrod, and continues to be made by commentators across the political spectrum.
Both of these arguments have become economic common sense (even though support for the latter is actually pretty weak). Of course it’s fair for individuals to have to pay for the education that they benefit so much from. And of course it doesn’t make sense to pay for the education of the upper-middle class while the working poor who never make it to college get nothing.
Indeed, these arguments have been potent enough that it has become hard to argue for free higher education without sounding extreme and maybe economically illiterate. Really, it kind of amazes me that free college is even being talked about seriously these days by President Obama and Bernie Sanders.
But even the argument for free college now depends heavily on claims about economic payoff. The Obama proposal headlines “Return on Investment,” arguing that “every dollar invested in community college by federal, state and local governments means more than $25 [ed: !] in return.” The Sanders statement starts, “In a highly competitive global economy, we need the best-educated workforce in the world.” The candidate who is a self-described socialist relies on a utilitarian, economic argument to justify free higher education.
So what’s the problem with thinking about college in terms of economic costs and benefits? After all, it’s an expensive enterprise, and getting more so. Surely it doesn’t make sense to just wantonly spend without giving any thought to what you’re getting in return.
The problem is, if the argument you really want to make is that college is a government responsibility—that is, a right—starting with cost-benefit framing leads you down a slippery slope. Benefits are harder to measure than costs, and some benefits can’t be measured at all. All sorts of public spending becomes much harder to justify.
Now, this might be fine if you generally think that small government is good, or that the economic benefits of college are pretty much the ones that matter. But if you think it’s worth promoting college because it might help people become better citizens, or increases their quality of life in some difficult-to-measure way, or you just want to live in a society that provides broad access to education, well, too bad. You’ve already written that out of the equation.
If you really believe there are social benefits to making public higher education freely available, then cost-benefit arguments will always betray you. Rights, on the other hand, aren’t subject to cost-benefit tests. Only a moral argument that defends higher education as a right—as something to value because it improves the social fabric in literally immeasurable ways—can really work to defend real public higher education.
Seem too unrealistic? Think about high school. There’s no real reason that free college should be subject to a cost-benefit test when free high school is not. Individuals reap economic benefits—lots of them—from attending high school, too. And high school is at least as regressive as college: the well-off kids who attend the good public schools reap many more benefits than the low-income kids who attend the crummy ones. It only makes sense, then, that families should pay for high school themselves, right? Perhaps with government loans, if you’re too poor to afford it.
And yet no one is making this argument. Because we all still agree—at least for now—that children have the right to a free primary and secondary education. We may argue about how much to spend on it, or how to make it better, but the basic premise—governments have a responsibility to educate students, in Howard’s words—still holds.
So I support the free college movement. But I’d like to see its champions stop saying it’s because we need to be globally competitive, or because it’s got a huge ROI.
Instead, say it’s because our society will be stronger when more of us are better educated. Say that knowing higher education is an option, and an option you don’t have to mortgage your future for, will improve our quality of life. Say that colleges themselves will be better when they return to seeing students as students, and not as revenue streams.
Say it’s because it’s the right thing to do.
The New York Times has run an op-ed by Molly Worthen, a professor of history, who argues against active learning in college classes and wants to retain the lecture format:
Good lecturers communicate the emotional vitality of the intellectual endeavor (“the way she lectured always made you make connections to your own life,” wrote one of Ms. Severson’s students in an online review). But we also must persuade students to value that aspect of a lecture course often regarded as drudgery: note-taking. Note-taking is important partly for the record it creates, but let’s be honest. Students forget most of the facts we teach them not long after the final exam, if not sooner. The real power of good notes lies in how they shape the mind.
“Note-taking should be just as eloquent as speaking,” said Medora Ahern, a recent graduate of New Saint Andrews College in Idaho. I tracked her down after a visit there persuaded me that this tiny Christian college has preserved some of the best features of a traditional liberal arts education. She told me how learning to take attentive, analytical notes helped her succeed in debates with her classmates. “Debate is really all about note-taking, dissecting your opponent’s idea, reducing it into a single sentence. There’s something about the brevity of notes, putting an idea into a smaller space, that allows you psychologically to overcome that idea.”
As we noted on this blog, there is actually a massive amount of research comparing lecturing to other forms of classroom instruction, and lectures do very poorly:
To weigh the evidence, Freeman and a group of colleagues analyzed 225 studies of undergraduate STEM teaching methods. The meta-analysis, published online today in the Proceedings of the National Academy of Sciences, concluded that teaching approaches that turned students into active participants rather than passive listeners reduced failure rates and boosted scores on exams by almost one-half a standard deviation. “The change in the failure rates is whopping,” Freeman says. And the exam improvement—about 6%—could, for example, “bump [a student’s] grades from a B– to a B.”
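As a rough check on the quoted numbers, here is how a standardized effect size maps onto exam points. The 0.47 SD figure is from the Freeman et al. meta-analysis quoted above; the ~13-point exam standard deviation is my assumption, chosen only to illustrate how a gain of that size lands near the 6% figure.

```python
# Back-of-the-envelope: converting a standardized effect size to exam points.
# effect_size_sd comes from the quoted meta-analysis; exam_sd_points is a
# hypothetical typical spread of exam scores (in percentage points).
effect_size_sd = 0.47
exam_sd_points = 13.0

gain = effect_size_sd * exam_sd_points
print(round(gain, 1))  # → 6.1, i.e., roughly the 6% improvement quoted
```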
If you’d like your students to master the art of eloquent note-taking, continue lecturing. If you’d like them to learn things, adopt active learning.
The Washington Post just ran an article about research showing that homework isn’t particularly effective. A clip from the article:
Let’s start by reviewing what we know from earlier investigations. First, no research has ever found a benefit to assigning homework (of any kind or in any amount) in elementary school. In fact, there isn’t even a positive correlation between, on the one hand, having younger children do some homework (vs. none), or more (vs. less), and, on the other hand, any measure of achievement. If we’re making 12-year-olds, much less five-year-olds, do homework, it’s either because we’re misinformed about what the evidence says or because we think kids ought to have to do homework despite what the evidence says.
Second, even at the high school level, the research supporting homework hasn’t been particularly persuasive. There does seem to be a correlation between homework and standardized test scores, but (a) it isn’t strong, meaning that homework doesn’t explain much of the variance in scores, (b) one prominent researcher, Timothy Keith, who did find a solid correlation, returned to the topic a decade later to enter more variables into the equation simultaneously, only to discover that the improved study showed that homework had no effect after all, and (c) at best we’re only talking about a correlation — things that go together — without having proved that doing more homework causes test scores to go up. (Take 10 seconds to see if you can come up with other variables that might be driving both of these things.)
Third, when homework is related to test scores, the connection tends to be strongest — or, actually, least tenuous — with math. If homework turns out to be unnecessary for students to succeed in that subject, it’s probably unnecessary everywhere.
Along comes a new study, then, that focuses on the neighborhood where you’d be most likely to find a positive effect if one was there to be found: math and science homework in high school. Like most recent studies, this one by Adam Maltese and his colleagues doesn’t provide rich descriptive analyses of what students and teachers are doing. Rather, it offers an aerial view, the kind preferred by economists, relying on two large datasets (from the National Education Longitudinal Study [NELS] and the Education Longitudinal Study [ELS]). Thousands of students are asked one question — How much time do you spend on homework? — and statistical tests are then performed to discover if there’s a relationship between that number and how they fared in their classes and on standardized tests.
Was there a correlation between the amount of homework that high school students reported doing and their scores on standardized math and science tests? Yes, and it was statistically significant but “very modest”: Even assuming the existence of a causal relationship, which is by no means clear, one or two hours’ worth of homework every day buys you two or three points on a test. Is that really worth the frustration, exhaustion, family conflict, loss of time for other activities, and potential diminution of interest in learning? And how meaningful a measure were those tests in the first place, since, as the authors concede, they’re timed measures of mostly mechanical skills? (Thus, a headline that reads “Study finds homework boosts achievement” can be translated as “A relentless regimen of after-school drill-and-skill can raise scores a wee bit on tests of rote learning.”)
Education researchers have long known that homework doesn’t lead to improved learning. Back in 2006, I blogged about The Battle over Homework, which lays out the case. Hey, teacher, leave us kids alone!
Econjeff mentions my long-standing critique of letters of recommendation (LoRs). Here, I describe my personal experience with them and then restate the massive empirical research showing that LoRs are worthless.
Personal experience: In graduate school, I had enormous difficulty extracting three letters from faculty. For example, during my first year, when I was unfunded, I asked an instructor, who was very well known in sociology, for a letter. He flat out refused and told me that he didn’t think I’d succeed in this profession. In the middle of graduate school, I applied for an external fellowship and was informed by the institution that my third letter was missing. Repeatedly, I was told, “I will do it.” It never happened. Even on the job market, I had to go with only two letters. A third professor (different from the first two) simply refused to do it. Luckily, a sympathetic professor in another program wrote my third letter so I could be employed. Then, oddly, that recalcitrant faculty member submitted a letter after I had gotten my job.
At that point, I assumed that I was some sort of defective graduate student. Maybe I was just making people upset, so they refused to write letters. Once I was on the job, I realized that lots and lots of faculty never submit letters. During job searches at Indiana, I saw many files with missing letters—perhaps a third were missing at least one letter. Some were missing all of them. It was clear to me that I was not alone. Lots of faculty simply failed to complete their task of evaluating students, due to incompetence, malice, or cowardice.
Research: As I grew older, I slowly realized that there are researchers in psychology, education, and management dedicated to studying employment practices. Surely, if we demanded all these letters and tolerated all these poor LoR practices, there must be research showing the system works.
Wrong. With a few exceptions, LoRs are poor instruments for measuring future performance. Details are here, but here’s the summary: As early as 1962, researchers realized LoRs don’t predict performance. Then, in 1993, Aamodt, Bryan, and Whitcomb showed that LoRs work – but only if they are written in specific ways. The more recent literature refines this – medical school letters don’t predict performance unless the writer mentions very specific things; letter writers aren’t even reliable – their evaluations are all over the place; and even in educational settings, letters seem to have a very small correlation with a *few* outcomes. Also, recent research suggests that LoRs are biased against women in that writers are less likely to use “standout language” for them.
The summary from one researcher in the field: “Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999).”
The bottom line is this: Letters are unreliable (they vary too much in their measurements). They draw attention to the wrong things (people judge the status of the letter writer). They rarely focus on the few items that do predict performance (like explicit comparison). They have low correlations with performance, and the language they use is biased against women.
A long, long time ago, I used to teach math. One of the central questions in mathematical education at the college level is how to teach mathematical proofs. Sometimes, the conversations were pessimistic: people either had “mathematical maturity” or they didn’t, and there wasn’t much you could do about it. There is some truth to this – some people simply can’t grasp what a proof would entail.
Beyond this simple observation, there was remarkably little thinking about how to teach proofs. Of course, there are occasional books that try to break down the process of creating and writing proofs, such as How to Prove It. Still, I felt there was something missing in the conversation about proof teaching. This blog post is my modest contribution to the topic.
My hypothesis: An important barrier to teaching math proofs is that they combine two very, very hard skills, and most math teachers only focus on one of those skills. Specifically, proofs entail (a) symbolic manipulation and (b) recipes that get you from A to B. Math teachers and books are actually pretty good at (a). For example, almost every text will teach you about the symbols – set theory, formal logic, deltas and epsilons, etc. What is almost completely overlooked is that students find it hard to glean the “recipes” that make up proofs, and there is no theory, or set of instructional strategies, for helping students intuitively understand recipes. In practice, you simply take courses on various topics (numerical analysis or matrix theory) and mimic the proofs that people give you. Not great, but better than nothing.
The old Dolciani high school text books had an interesting response to this issue. In the geometry text, the proofs would always have two parts: “analysis” (outline of the idea) and “proof” (traditional proof with all details). You also see this in advanced texts and journal articles. When a long, hard proof is coming up, the author will present an outline.
Here is my modest suggestion: When teaching proofs, always outline the proof as a flow chart. In other words, take the old notion of the proof outline (or “analysis,” in Dolciani’s terms), make it visual, and then put it in front of all proofs that require more than a few sentences. By repeatedly visualizing proofs as chains, teachers will be forced to extract the recipe from the text in a way that more students can understand. They will also more easily identify common themes that appear across multiple visualizations of proofs. Also, pictures are easier to remember than dense, equation-filled masses of text.
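As an illustration (my own example, not from any particular text), here is how the outline-then-proof pattern might look in LaTeX for the classic proof that √2 is irrational, with the “analysis” written as a chain of implications:

```latex
\paragraph{Analysis (outline).}
\[
\sqrt{2}=\tfrac{p}{q}\ \text{in lowest terms}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p \text{ is even}
\;\Rightarrow\; q \text{ is even}
\;\Rightarrow\; \text{contradiction.}
\]

\paragraph{Proof.}
Suppose $\sqrt{2}$ were rational, say $\sqrt{2}=p/q$ with $p,q$ integers
sharing no common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even and
hence $p$ is even; write $p=2r$. Then $4r^2 = 2q^2$, so $q^2 = 2r^2$ and $q$
is even as well, contradicting the assumption that $p/q$ was in lowest
terms. \qed
```

The chain at the top is exactly the recipe; the prose below fills in the symbolic details.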
Psychology Today has an article on a new analysis of play and the mental health of young people. The gist is that (a) in recent decades, we have let kids have less unstructured play, and (b) unstructured play increases the belief that one has direct control over one’s life, which in turn has a positive effect on mental health and various measures of well-being. From the article:
The standard measure of sense of control is a questionnaire developed by Julian Rotter in the late 1950s called the Internal-External Locus of Control Scale. The questionnaire consists of 23 pairs of statements. One statement in each pair represents belief in an Internal locus of control (control by the person) and the other represents belief in an External locus of control (control by circumstances outside of the person). The person taking the test must decide which statement in each pair is more true. One pair, for example, is the following:
- (a) I have found that what is going to happen will happen.
- (b) Trusting to fate has never turned out as well for me as making a decision to take a definite course of action.
In this case, choice (a) represents an External locus of control and (b) represents an Internal locus of control.
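To make the scoring mechanics concrete, here is a minimal sketch in Python. Only the first pair below is the one quoted in the article; the data layout and function name are my own illustration, and the full instrument would list all 23 scored pairs.

```python
# Sketch of scoring a forced-choice locus-of-control questionnaire.
# For each pair, map the choice letter to the orientation it represents.
PAIRS = [
    {"a": "external", "b": "internal"},  # fate vs. deciding on a course of action
    # ... the remaining pairs would go here ...
]

def external_score(choices):
    """Count External choices: a higher score = a more external locus of control."""
    return sum(1 for pair, choice in zip(PAIRS, choices)
               if pair[choice] == "external")

print(external_score(["a"]))  # → 1 (one External choice)
print(external_score(["b"]))  # → 0 (one Internal choice)
```

The scale score is just the count of External choices, which is why the shift Twenge documents can be summarized as a rise in average Externality.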
Many studies over the years have shown that people who score toward the Internal end of Rotter’s scale fare better in life than do those who score toward the External end. They are more likely to get good jobs that they enjoy, take care of their health, and play active roles in their communities—and they are less likely to become anxious or depressed.
In a research study published a few years ago, Twenge and her colleagues analyzed the results of many previous studies that used Rotter’s Scale with young people from 1960 through 2002. They found that over this period average scores shifted dramatically—for children aged 9 to 14 as well as for college students—away from the Internal toward the External end of the scale. In fact, the shift was so great that the average young person in 2002 was more External than were 80% of young people in the 1960s. The rise in Externality on Rotter’s scale over the 42-year period showed the same linear trend as did the rise in depression and anxiety.
[Correction: The locus of control data used by Twenge and her colleagues for children age 9 to 14 came from the Nowicki-Strickland Scale, developed by Bonnie Strickland and Steve Nowicki, not from the Rotter Scale. Their scale is similar to Rotter’s, but modified for use with children.]
It is reasonable to suggest that the rise of Externality (and decline of Internality) is causally related to the rise in anxiety and depression. When people believe that they have little or no control over their fate they become anxious: “Something terrible can happen to me at any time and I will be unable to do anything about it.” When the anxiety and sense of helplessness become too great people become depressed: “There is no use trying; I’m doomed.”
Wow. Later in the article, they talk about how this shift correlates with mental health outcomes. So don’t over-schedule kids or boss them around. Let them play.