Regular readers of this blog will know that I am highly sceptical of online learning, blended learning, flipped classrooms, and the like. That scepticism comes from a close reading of the research literature, and especially from a concern about heterogeneity: students respond differently to learning in the online environment, and in ways that I believe are unhelpful. Students who are motivated and engaged, and/or have a high level of 'self-regulation', perform at least as well in online learning as they do in a traditional face-to-face setting, and sometimes perform better. Students who lack motivation, are disengaged, and/or have a low level of self-regulation flounder in online learning, and perform much worse.
The problem with much of the research literature, though, is a lack of randomisation. Even when a particular study employs randomisation, students are randomised into online learning at the level of the individual student, not at the level of the section or course. That means that particular lecturers opt in to being part of the study (often, they are the researchers undertaking the study), so instructors effectively self-select into the research.
An alternative to a pure, randomised experiment is a natural experiment - where some unexpected change in a real-world setting provides a way of comparing those in online and traditional face-to-face learning. That's where the pandemic comes in. Before lockdowns and stay-at-home orders prevented in-person teaching, some students were already studying online. Other students were studying in person, but were forced into online learning. Comparing the two groups can give us some idea of the effect of online learning on student performance, and a number of studies that do just that are starting to appear. I'm going to focus this post on four such studies.
The first study is this NBER working paper (ungated) by Duha Altindag (Auburn University), Elif Filiz (University of Southern Mississippi), and Erdal Tekin (American University). Altindag was one of the co-authors on the article I discussed on Sunday. Their data come from a "medium-sized, public R1 university" (probably Auburn University), and include a sample of over 18,000 students and over 1,000 instructors. They essentially compare student performance in classes in Spring and Fall 2019 with the same students' performance in classes in Spring 2020, when pandemic restrictions shut the campus down partway through the semester, forcing all in-person teaching to move online. Importantly:
This shift occurred after the midterm grades were assigned. Therefore, students obtained a set of midterm grades with F2F [face-to-face] instruction and another set (Final Grades) after the switch to online instruction.
Altindag et al. find that, once they account for heterogeneity across instructors:
...students in F2F instruction are 2.4 percentage points (69% of the mean of the online classes) less likely to withdraw from a course than those in online instruction in Fall 2019... Moreover, students in F2F courses are 4.1 percentage points (4 percent) more likely to receive a passing grade, i.e., A, B, C, or D, than their counterparts in online courses.
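To make concrete what 'account for heterogeneity across instructors' means in practice, here is a minimal sketch of an instructor fixed-effects regression of the kind involved. It uses simulated data and invented variable names, and is my own stylised illustration, not the authors' actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated course enrolments: each row is one student-course outcome.
n = 5000
df = pd.DataFrame({
    "instructor": rng.integers(0, 50, n),  # 50 hypothetical instructors
    "online": rng.integers(0, 2, n),       # 1 = online, 0 = face-to-face
})

# Instructors differ in 'toughness'; online instruction lowers the pass rate.
instructor_effect = rng.normal(0, 0.05, 50)
p_pass = (0.85 + instructor_effect[df["instructor"].to_numpy()]
          - 0.04 * df["online"].to_numpy())
df["passed"] = rng.binomial(1, np.clip(p_pass, 0, 1))

# Linear probability model with instructor fixed effects: the 'online'
# coefficient is identified from variation within each instructor's classes.
model = smf.ols("passed ~ online + C(instructor)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["instructor"]}
)
print(round(model.params["online"], 3))  # should recover roughly -0.04
```

As a side note on the magnitudes in the quote above, by my arithmetic, if 2.4 percentage points is 69 percent of the mean of the online classes, then the mean withdrawal rate in online classes must be around 3.5 percent (2.4/0.69) - a rare outcome, but one that is meaningfully affected.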
However, importantly, Altindag et al. go on to look at heterogeneous effects for different student groups, and find that:
Strikingly, for honor students, there seems to be no difference between online and F2F instruction... Students in the Honors program perform equally well regardless of whether the course is offered online or in person... When we turn to students in regular courses, however, the results are very different and resembles the earlier pattern that we discussed in the previous results...
So, the negative impacts of online learning were concentrated among non-honours students, as I suggested at the start of this post. Better students are not advantaged by online learning in an absolute sense, but they are advantaged relatively, because less-able students do much worse in an online setting. Interestingly, in this study there were no statistically significant differences in the impact of online learning by gender or race. The authors also show some suggestive evidence that having access to better broadband internet reduces the negative impact of online learning (which should not be surprising), but doesn't eliminate it.
Altindag et al. also show that the negative impact of online learning was concentrated in courses where instructors were more vigilant about academic integrity and cheating, which suggests that we should be cautious about taking for granted that grades in an online setting are always a good measure of student learning.
The second study is this working paper by Kelli Bird, Benjamin Castleman, and Gabrielle Lohner (all University of Virginia). They used data from over 295,000 students enrolled in the Virginia Community College System over the five Spring terms from 2016 to 2020 (with the last one being affected by the pandemic). As this is a community college sample, it is older than the sample in the first study, more likely to be working and studying part-time, and has weaker high school academic performance. However, the results are eerily similar:
The move from in-person to virtual instruction resulted in a 6.7 percentage point decrease in course completion. This translates to a 8.5 percent decrease when compared to the pre-COVID course completion rate for in-person students of 79.4 percent. This decrease in course completion was due to a relative increase in both course withdrawal (5.2 pp) and course failure (1.4 pp). We find very similar point estimates when we estimate models separately for instructors teaching both modalities versus only one modality, suggesting that faculty experience teaching a given course online does not mitigate the negative effects of students abruptly switching to online instruction. The negative impacts are largest for students with lower GPAs or no prior credit accumulation.
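The relative magnitude quoted there is easy to verify with a back-of-the-envelope check (mine, not the authors'):

```python
baseline = 79.4  # pre-COVID course completion rate for in-person students (%)
drop_pp = 6.7    # estimated decrease in course completion (percentage points)
print(f"{drop_pp / baseline:.1%}")  # ~8.4%, matching the reported 8.5% up to rounding

# The decomposition also adds up: 5.2 pp more withdrawals plus 1.4 pp more
# course failures is 6.6 pp, consistent with the 6.7 pp total once rounding
# in the published estimates is allowed for.
```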
Notice that not only are the effects negative, they are more negative for students with lower GPAs. However, Bird et al. note an important caveat:
One caveat is that VCCS implemented an emergency grading policy during Spring 2020 designed to minimize the negative impact of COVID on student grades; instructors may have been more lenient with their grading. As such, we view these estimates as a lower-bound of the negative impact of the shift to virtual instruction.
The third study is this IZA Discussion Paper by Michael Kofoed (United States Military Academy) and co-authors. The setting for this study is again different, being based on students at the US Military Academy at West Point. This provides some advantages, though. As Kofoed et al. explain:
Generally, West Point students have little control over their daily academic schedules. This policy did not change during the COVID-19 pandemic. We received permission to use this already existing random assignment to assign students to either an in-person or online class section. In addition, to allow for in-person instruction, each instructor agreed to teach half of their four section teaching load... online and half in-person.
This provides a 'cleaner' experiment for the effect of online learning, because students were randomised to either online or in-person instruction, and almost all instructors taught in both formats, which allows Kofoed et al. to avoid any problems of instructors self-selecting into one mode or the other. However, their sample is much smaller, being limited to the 551 students enrolled in introductory microeconomics. Based on this sample, they find that:
...online instruction reduced a students final grade by 0.236 standard deviations or around 1.650 percentage points (out of 100). This result corresponds to about one half of a +/- grade. Next to control for differences in instructor talent, attentiveness, or experience, we add instructor fixed effects to our model. This addition reduces the estimated treatment effect to -0.220 standard deviations; a slight decrease in magnitude....
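One way to interpret those effect sizes is to back out the implied standard deviation of final grades, since the estimate is reported in both standard-deviation and percentage-point terms. This is my own arithmetic, not a figure reported in the paper:

```python
effect_sd = 0.236  # estimated effect, in standard deviations
effect_pp = 1.650  # the same effect, in percentage points (out of 100)
print(f"Implied SD of final grades: {effect_pp / effect_sd:.1f} points")  # ~7.0
```

A standard deviation of only about 7 points out of 100 suggests a fairly compressed grade distribution, which is how a modest 1.65-point drop can amount to about half of a +/- grade.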
Importantly, the results when disaggregated by student ability are similar to the other studies:
...learning gaps are greater for those students whose high school academic preparation was in the bottom quarter of the distribution. Here, we find that being in an online class section reduced their final grades by 0.267 standard deviations, translating to around 1.869 percentage points of the student’s final grade.
Unlike Altindag et al., Kofoed et al. find that online learning is worse for male students, but there are no significant differences by race. Kofoed et al. also ran a post-term survey to investigate the mechanisms underlying their results. The survey showed that:
...students felt less connected to their instructors and peers and claimed that their instructors cared less about them.
This highlights the importance of social connections within the learning context, regardless of whether learning is online or in-person. Online, those opportunities can easily be lost (which relates back to this post from earlier this month), and it appears that not only does online education reduce the value of the broader education experience, it may reduce the quality of the learning as well.
Kofoed et al. were clearly very concerned about their results, as they note:
From an ethical perspective, we should note that while it is Academy-wide policy to randomly assign students to classes, we did adjust the final grade of students in online sections according to our findings and prioritized lower [College Entrance Examination Rank] score students for in-person classes during Spring Semester 2021.
Finally, the fourth study is this recent article by Erik Merkus and Felix Schafmeister (both Stockholm School of Economics), published in the journal Economics Letters (open access). The setting for this study is again different, being students enrolled in an international trade course at a Swedish university. The focus is also different - it compares in-person and online tutorials. That is, rather than the entire class being online, each student experienced some of the tutorials online and others in person over the course of the semester. As Merkus and Schafmeister explain:
...due to capacity constraints of available lecture rooms, in any given week only two thirds of students were allowed to attend in person, while the remaining third was assigned to follow online. To ensure fair treatment, students could attend the in-class sessions on a rolling basis, with each student attending some tutorials in person and others online. The allocation was done on a first-name basis to limit self-selection of students into online or in-person teaching in specific weeks.
They then link the performance of the 258 students in their sample on each final examination question to whether the student was assigned to an in-person tutorial for that particular week. Note that they compare based on assignment, not on whether students actually attended - this is an 'intent-to-treat' analysis (there is a sketch of the idea below). Unlike the other three studies, Merkus and Schafmeister find that:
...having the tutorial online is associated with a reduction in test scores of around 4% of a standard deviation, but this effect does not reach statistical significance.
That may suggest that it is not all bad news for online learning, but notice that they compare online and in-person tutorials only, while the rest of the course was conducted online. There is no comparison group of students who studied the entire course in person. These results are also difficult to reconcile with Kofoed et al.: tutorials should be the most socially interactive component of classroom learning, so if students feel the loss of the social element keenly (per Kofoed et al.), why would the effect of online tutorials be negligible (per Merkus and Schafmeister)? The setting clearly matters, and perhaps that is enough to explain these differences. However, Merkus and Schafmeister didn't look at heterogeneity by student ability, which, as I have noted many times before, is a problem.
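For readers unfamiliar with the intent-to-treat idea mentioned above, here is a minimal sketch with simulated data and invented variable names (my illustration, not the authors' code). The key point is that the regressor is the tutorial format a student was assigned to, not the format they actually attended, which preserves the randomisation even when compliance is imperfect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated student-week records: random assignment to an in-person tutorial,
# imperfect attendance, and a score on the matching exam question.
n = 2500
df = pd.DataFrame({"assigned_inperson": rng.integers(0, 2, n)})
df["attended_inperson"] = df["assigned_inperson"] * rng.binomial(1, 0.85, n)
df["score"] = 60 + 2.0 * df["attended_inperson"] + rng.normal(0, 10, n)

# Intent-to-treat: regress the outcome on assignment, not attendance.
itt = smf.ols("score ~ assigned_inperson", data=df).fit()
print(round(itt.params["assigned_inperson"], 2))  # below 2.0 due to non-compliance
```

Because non-compliance dilutes the intent-to-treat estimate toward zero, estimates like these are usually read as conservative.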
Many universities (including my own) are seizing the opportunity presented by the pandemic to push forward plans to move a much greater share of teaching into online settings. I strongly believe that we need to pause and evaluate before we move too far ahead with those plans. To me, the research is continuing to suggest that, by adopting online learning modes, we create a learning environment that is hostile to disengaged, less-motivated students. You might argue that those are the students we should care least about. However, the real problem is that the online learning environment itself might create or exacerbate feelings of disengagement (as the Kofoed et al. survey results show). If universities really care about the learning outcomes of their students, then we are not yet at the point where they should be going 'all in' on online education.
Read more:
- Online vs. blended vs. traditional classes
- Flipped classrooms work well for top students in economics
- Flipped classrooms are still best only for the top students
- Online classes lower student grades and completion
- Online classes may also make better students worse off
- Meta-analytic results support some positive effects of using videos on student learning
- New review evidence on recorded lectures in mathematics and student achievement
- Meta-analytic results may provide some support for flipping the classroom
- The value of an in-person university education