The pandemic forced education online, which should afford plenty of opportunities to understand the impact of online teaching on student engagement, student achievement, and student learning (and yes, those are all different things). Most of the literature on online learning relates to the university context, but university students tend to be a little more self-directed than students at high school or below. What works at university doesn't necessarily translate to those other contexts, so we need more research on how online teaching affects high school and primary school students. I discussed one such paper back in May, which looked at students in Grades 3 through 8 in the US.
In a new article published in the journal China Economic Review (sorry, I don't see an ungated version online), Andrew Clark (Paris School of Economics), Huifu Nong (Guangdong University of Finance), Hongjia Zhu (Jinan University), and Rong Zhu (Flinders University) look at the effects of online learning for students at three urban middle schools in Guangxi Province in China. These three schools (A, B, and C) each took a different approach to the government-enforced lockdown from February to April 2020:
School A did not provide any online educational support to its students. School B used an online learning platform provided by the local government, which offered a centralized portal for video content, communication between students and teachers, and systems for setting, receiving, and marking student assignments. The students’ online lessons were provided by School B’s own teachers. School C used the same online platform as School B over the same period, and distance learning was managed by the school in the same fashion as in School B. The only difference between Schools B and C is that, instead of using recorded online lessons from the school’s own teachers, School C obtained recorded lessons from the highest-quality teachers in Baise City (these lessons were organized by the Education Board of Baise City).
Clark et al. argue that comparing the final exam performance of students in Schools B and C with that of students in School A, controlling for their earlier exam performance, provides a test of the effect of online teaching and learning for these students. Comparing the effects for School B with those for School C then provides a test of whether the quality of online resources matters. There is a problem with this comparison, which I'll come back to later.
Clark et al. have data from:
...20,185 observations on exam results for the 1835 students who took all of the first 11 exams in the five compulsory subjects.
The five compulsory subjects are Chinese, Maths, English, Politics, and History. Clark et al. combine the results of all the exams, and using a difference-in-differences approach (comparing the 'treatment group' of students from Schools B and C with the 'control group' of students from School A), they find that:
...online learning during the pandemic led to 0.22 of a standard deviation higher exam grades in the treatment group than in the control group...
And there were statistically significant differences between Schools B and C:
The online learning in School B during lockdown improved student performance by 0.20 of a standard deviation... as compared to students who did not receive any learning support in School A. But the quality of the lessons also made a difference: students in School C, who had access to online lessons from external best-quality teachers, recorded an additional 0.06 standard-deviation rise in exam results... over those whose lessons were recorded by their own teachers in School B.
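To make the comparison concrete, here is a minimal sketch of the kind of difference-in-differences specification that could produce estimates like these, assuming a long-format panel with one row per student per exam. The data file, column names, and exact specification are illustrative assumptions, not the authors' own.

```python
# A minimal difference-in-differences sketch on a hypothetical student-exam panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_panel.csv")  # hypothetical data file
# Assumed columns:
#   score      - standardized exam score
#   treated    - 1 for students in Schools B or C, 0 for School A
#   post       - 1 for exams taken after the lockdown began, 0 for earlier exams
#   student_id - student identifier, used to cluster standard errors

# The coefficient on treated:post is the difference-in-differences estimate:
# the extra change in scores for Schools B/C relative to School A.
model = smf.ols("score ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]}
)
print(model.params["treated:post"])
```

Clustering the standard errors at the student level is one reasonable choice here, since each student contributes multiple exam observations to the panel.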
Clark et al. then go on to show that the effects were similar for rural and urban students, that they were better for girls (but only for School C, and not for School B), and that they were better for students with computers (rather than smartphones) in both treatment schools. But most importantly, when looking across the performance distribution, they find that:
The estimated coefficients... at the lower end of distribution are much larger than those at the top. For example, the positive academic impact of School B’s online education at the 20th percentile is over three times as large as that at the 80th percentile. Low performers thus benefited the most from online learning programs. We also find that the top academic performers at the 90th percentile were not affected by online education: these students did well independently of the educational practices their schools employed during lockdown. Outside of these top academic performers, the online learning programs in Schools B and C improved student exam performance.
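Estimating effects at different percentiles like this is naturally done with quantile regression. Here is a sketch using the same hypothetical panel as above; again, the specification is an assumption for illustration, not necessarily the authors' exact method.

```python
# Quantile regression sketch: estimate the treatment effect at different
# points of the exam score distribution (same hypothetical panel as above).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_panel.csv")  # hypothetical student-exam panel

for q in [0.2, 0.5, 0.8, 0.9]:
    fit = smf.quantreg("score ~ treated * post", data=df).fit(q=q)
    print(f"effect at the {int(q * 100)}th percentile: {fit.params['treated:post']:.3f}")
```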
Clark et al. go to great lengths to demonstrate that their results are likely causal (rather than merely correlational), and that there aren't unobservable differences between the schools (or students) that might muddy their results. However, I think there is a more fundamental problem with this research. I don't believe that it shows the effect of online teaching at all, despite what the authors argue. That's because:
For students in the Ninth Grade, all Middle Schools in the county had finished teaching them all of the material for all subjects during the first five semesters of Middle School (from September 2017 to January 2020). Schools B and C then used online education during the COVID-19 lockdown (from mid-February to early April 2020 in the final (sixth) semester) for the revision of the material that had already been taught, to help Ninth Graders prepare for the city-level High School entrance exam at the end of the last semester in Middle School.
The students had all finished the in-person instruction part of ninth grade by the time the lockdown started, and the remaining semester would have only been devoted to revision for the final exams. What Clark et al. are actually investigating is the effect of online revision resources, not the effect of online teaching. They are comparing students in Schools B and C, who had already had in-person lessons but were also given online revision resources, with students in School A, who had already had in-person lessons but were not given online revision resources. That is quite a different research question from the one they are proposing to address.
So, I'm not even sure that we can take anything at all away from this study about the effect of online teaching and learning. It's not clear what revision resources (if any) were provided to students in School A. If those students received no further support from their school at all, then the results of Clark et al. might represent the difference between students who have revision resources and those who don't, or the difference between those who have online revision resources and those who have other (but not online) revision resources. We simply don't know. All we do know is that this is not the effect of online teaching, because what was delivered was online revision, not online teaching.
That makes the results of the heterogeneity analysis, which showed that the effects are largest for students at the bottom of the performance distribution, and zero for students at the top of the performance distribution, perfectly sensible. Any time spent on revision (online or otherwise) is likely to have a bigger effect on students at the bottom of the distribution, because they have the greatest potential to improve in performance, and because there are likely to be some 'easy wins' for performance for those who didn't understand at the time of the original in-person lesson. Students at the top of the distribution might gain from revision, but the scope to do so is much less. [*]
We do need more research on the impacts of online teaching and learning. However, that research needs to consist of studies of actual online teaching and learning, not studies of online revision. This study has a contribution to make. I just don't think it is the contribution that the authors think it is.
*****
[*] This is the irony of providing revision sessions for students: it is the top students, who have the least to gain, who show up to those sessions.