Babcock uses data on course evaluations for "6,753 classes from 338,414 students taught by 1,568 instructors across all departments, offered in 12 quarters between Fall 2003 and Spring 2007" at the University of California, San Diego. Importantly, the dataset includes the number of hours per week that students reported studying for each of their courses, as well as their expected grade. The average expected grade for each course as a whole is a measure of how the course is graded (he didn't have data on actual grades, but student decision-making is really about perceptions and expectations, so expected grades are the right data to use). The simplest analysis is just to graph the data - here is a plot of study time against the average expected grade:
Note the downward trend line, which nicely illustrates that students in classes with higher average expected grades study for fewer hours per week, on average. Babcock notes that this is a general result across all departments. For instance, here is the same graph for economics:
However, he doesn't stop there. He applies regression models that control for characteristics of the course, as well as instructor-specific effects, course-specific effects, and any time trends (a bare-bones sketch of what that kind of specification might look like follows the quoted results below). So, the regression results can be thought of as answering the question: if the same instructor, teaching the same course, had students expecting higher grades, what would be the effect on study time? Here is what he finds:
Holding fixed instructor and course, classes in which students expect higher grades are found to be classes in which students study significantly less. Results indicate that average study time would be about 50% lower in a class in which the average expected grade was an "A" than in the same course taught by the same instructor in which students expected a "C".

Obviously, that's quite a significant effect. And as I see it, it has two negative implications. First, students who study less will learn less. It really is that simple. Second, as Babcock notes in his paper, this isn't so much a story of 'grade inflation' as 'grade compression'. Since the top grade is fixed at an A, if an A is easier to get, that compresses the distribution of grades. The problem here is that grades become a less valuable signal for employers. If employers can no longer use grades to tell the top students from the not-quite-but-nearly-top students, then the value of the signal for top students is reduced (I've written about signalling in education before here). If grade compression continues, the problem gets even worse.
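To make the fixed-effects approach a little more concrete, here is a bare-bones sketch of what a specification along those lines might look like. To be clear, this is my own illustration and not Babcock's actual code: the column names, file name, control variable, and clustering choice are all assumptions.

```python
# Illustrative sketch only (not Babcock's code or data): a class-level
# regression of average study hours on the average expected grade, with
# course, instructor and quarter fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per class, with average weekly study hours,
# average expected grade (on a 4-point scale), class size, and identifiers
# for the course, instructor and quarter.
classes = pd.read_csv("class_level_data.csv")

model = smf.ols(
    "avg_study_hours ~ avg_expected_grade + class_size"
    " + C(course_id) + C(instructor_id) + C(quarter)",
    data=classes,
)
# Cluster standard errors by instructor (my assumption, not necessarily the paper's choice)
results = model.fit(cov_type="cluster",
                    cov_kwds={"groups": classes["instructor_id"]})
print(results.summary())
```

The coefficient on avg_expected_grade is then read as the within-course, within-instructor association between expected grades and study time, which is exactly the comparison the question above is asking about.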
Of course, the incentives for teachers are all wrong, and Babcock demonstrates this as well. Students give better teaching evaluations to teachers in courses where the average expected grade is higher. So, if good teaching evaluations are valuable to the teacher (for promotion, tenure, salary increments, etc.), then there is an incentive to inflate grades to make students happy and more likely to rate the teaching highly. On a related note, Babcock also finds that:
...even though lower grades are associated with large-enrollment courses, when the same course is taught by a more lenient instructor, significantly more students enrol.

My only gripe with the paper is pretty minor, and it is something that Babcock himself addresses (albeit briefly, in a footnote): the lack of consideration of general equilibrium effects. I've had a number of conversations with other lecturers (and students) about making additional resources available for students (past exam papers, worked examples, extra readings, this blog, etc.). The idea is that we put in this additional effort to help our students pass our courses. However, the kicker is that if we as teachers put more effort in, then students might re-direct their own effort away from our courses and towards other courses where more of their own effort is required to achieve their desired grades. I have only anecdotal evidence (from talking with students over a number of years) that this unintended consequence occurs, but it is worrying.
In the case of grade inflation, making it easier to get an A in our course may make it easier for students to do better in other courses, since they re-direct their effort towards the courses they perceive as more difficult. That spillover would magnify the impact of grade inflation (since average grades would also increase in courses where grades are not being inflated), and I don't think you could argue that is a good outcome at all.
Overall, while this is based on a single study at a single institution, it is reasonably convincing (or maybe that is just confirmation bias?). Grade inflation is not good for students, even if it might be good for teachers.