Wednesday 2 September 2020

Meta-analytic results support some positive effects of using videos on student learning

The coronavirus pandemic and associated lockdowns forced teaching online. Teachers at all levels had to adapt their teaching to online delivery basically overnight. For most, that included asynchronous recorded video 'lectures' that students watch in their own time, synchronous video classes or workshops using videoconferencing tools like Zoom or Teams, or both. Since the lockdowns have been relaxed, many teachers have continued to use videos in their teaching (in many cases, including at my institution, this has been forced on teachers). A reasonable question, then, is what impact the use of videos has on student learning.

A new working paper by Michael Noetel (Australian Catholic University) and co-authors provides a fairly thorough answer, in terms of the impact of asynchronous, pre-recorded video content (there is also a non-technical summary of the research available on The Conversation). I say it is a thorough answer because the authors conducted a meta-analysis of 105 different studies, with a combined 7,776 student research participants. Meta-analysis involves combining the results of many studies quantitatively, in order to estimate any underlying relationship more precisely, and if it is executed well it can take account of publication bias (which arises when statistically significant results are more likely to be published than statistically insignificant results). An added mark of quality in this particular meta-analysis is that the authors limited the included studies to randomised trials, which are better placed to establish causal effects than observational or quasi-experimental studies.
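To give a flavour of how meta-analytic pooling works, here is a minimal sketch of inverse-variance random-effects pooling (the DerSimonian-Laird approach), run on made-up effect sizes. It illustrates the general technique only - it is not the authors' actual method or data.

```python
import numpy as np

def random_effects_meta(d, se):
    """DerSimonian-Laird random-effects pooling of effect sizes d
    (standardised mean differences) with standard errors se."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                          # inverse-variance weights
    d_fixed = np.sum(w * d) / np.sum(w)      # fixed-effect estimate
    # Cochran's Q, then the DL estimate of between-study variance tau^2
    Q = np.sum(w * (d - d_fixed)**2)
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / C)
    # re-weight, adding tau^2, to get the random-effects pooled estimate
    w_re = 1.0 / (se**2 + tau2)
    d_pooled = np.sum(w_re * d) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return d_pooled, se_pooled, tau2

# five hypothetical studies (invented numbers, for illustration only)
d_pooled, se_pooled, tau2 = random_effects_meta(
    d=[0.45, 0.10, 0.30, -0.05, 0.55],
    se=[0.15, 0.20, 0.10, 0.25, 0.18])
print(f"pooled d = {d_pooled:.2f} (SE {se_pooled:.2f}), tau^2 = {tau2:.3f}")
```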

The 105 studies included in the meta-analysis all compared the impact of either replacing face-to-face classes with video, or adding video to face-to-face classes, on student learning, measured as differences in grades or examination results or some other measure of academic performance (not differences in subjective measures, such as student evaluations of their own learning). All were conducted in higher education settings.

There are a lot of important results to unpack in this paper. First, I'll focus on the results that relate to replacing content with videos. The headline result is that:

...replacing other teaching with video had a significant positive effect on student learning...

I guess, in spite of my scepticism about the impacts of online learning (see this post, and the links at the bottom of this post), maybe there are advantages to it. In terms of the size of the effect, it was reasonably large, with students exposed to video performing about 0.28 standard deviations better. To provide some additional context, in my ECONS101 class in A Trimester last year, a 0.28 standard deviation increase in grade would be about 6.4 percentage points, or a bit more than one grade point.
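For anyone wanting to replicate that back-of-the-envelope calculation: a standardised effect is just the raw difference divided by the standard deviation of the outcome. The grade standard deviation of about 23 percentage points below is implied by the figures above (6.4 / 0.28), not a number from the paper:

```python
# Converting a standardised effect size into grade percentage points.
# 0.28 SD is the pooled effect from the paper; the ~23 percentage point
# grade SD is implied by the 6.4-point figure above and will differ
# across classes and grading scales.
d = 0.28           # pooled effect size, in standard deviations
grade_sd = 23.0    # assumed SD of class grades (percentage points)
gain = d * grade_sd
print(f"expected gain: {gain:.1f} percentage points")  # ~6.4
```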

In the conclusion, Noetel et al. provide some theoretical foundation for their positive result:

The finding that video was superior to even face-to-face classes may be explained in a few ways. It may be due to the increased ability for students to manage their own cognitive load (e.g., by pausing and rewinding) or because teachers can better optimise cognitive load through editing.

Essentially, giving students more control over their own pace of learning is a good thing, as is editing the video content to focus more on the key points. Noetel et al. also note that videos perform better even when the face-to-face classes have more time:

...because teachers are prioritising relevant content, by editing out discussions and details that are not important for the learning objectives.

I'm not sure that all teachers would see that as a good outcome. Also, it appears that not all video content is created equal. Extending their analysis a bit further, they find that:

...half the implementations of exchanging other learning for video will be helpful (50% of true effects, 95% CI [40%, 59%]). A small proportion of implementations may be unhelpful for student learning (19% of true effects, 95% CI [13%, 25%]) with the rest having negligible influences.
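Those percentages come from the random-effects model: true effects across implementations are treated as normally distributed around the pooled mean, with some between-study standard deviation tau. Here is a minimal sketch of that calculation, where the 'helpful'/'unhelpful' thresholds and the value of tau are my illustrative assumptions (chosen so the output roughly matches the quoted figures), not values from the paper:

```python
from scipy.stats import norm

# Under a random-effects model, true effects across implementations
# are assumed to be normal with mean mu (the pooled effect) and
# between-study SD tau. The thresholds and tau here are illustrative
# assumptions, not values reported by Noetel et al.
mu, tau = 0.28, 0.6
helpful = 1 - norm.cdf(0.2, loc=mu, scale=tau)    # true effect > 0.2 SD
unhelpful = norm.cdf(-0.2, loc=mu, scale=tau)     # true effect < -0.2 SD
negligible = 1 - helpful - unhelpful
print(f"helpful: {helpful:.0%}, unhelpful: {unhelpful:.0%}, "
      f"negligible: {negligible:.0%}")
```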

Either way, that suggests there is a large amount of heterogeneity in the impact of video. So, it is worth considering what conditions make video content more effective. On that question, they find that:

...effects were not significantly different between studies that used videos in lectures, tutorials, or homework... In contrast, the comparison condition was a significant moderator... when video replaced static media (e.g., text) it was significantly more effective... than when video replaced a teacher...

So, the comparison really matters here. Replacing the textbook with video led to a 0.51 standard deviation increase in performance, but replacing a teacher (presumably, a face-to-face lecture or tutorial) with video led to a barely statistically significant 0.18 standard deviation increase in performance. The type of content also appears to matter:

Video was more effective when students were assessed on skill acquisition... compared with assessments of their knowledge...

In other words, a 'how-to' style video helping students develop skills was more effective than a video delivering knowledge, with skills-development videos increasing performance by 0.44 standard deviations, while knowledge videos increased performance by a barely statistically significant 0.18 standard deviations. It may be that videos help students to develop particular skills (Noetel et al. use the example of learning how to calculate a t-statistic - see the sketch below), but don't really help in terms of developing a broader knowledge base.
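To make that example concrete, here is the sort of narrow, procedural skill a 'how-to' video might walk students through (the sample data are invented for illustration):

```python
import numpy as np

# One-sample t-statistic: t = (x_bar - mu_0) / (s / sqrt(n)),
# the skill Noetel et al. use as their example. Data are made up.
x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])   # hypothetical sample
mu0 = 4.5                                       # hypothesised mean
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
print(f"t = {t:.2f} with {len(x) - 1} degrees of freedom")
```

That probably also explains two other results from the paper. First: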

The number of minutes of the educational intervention did not moderate effects... and there were no significant differences in effects when the video intervention was applied to a single topic or a whole course... In other words, there was no significant dose-response effect.

If one video is good for learning, more videos should be better. However, that is not what they find. Perhaps there are substantial diminishing marginal returns to the use of video in teaching and learning, and all of the benefits of using video are exhausted after the first topic's worth of videos is made available to students? That seems unlikely. More likely is that each intervention replaced all of the skills-development content with video first (since those are the easiest videos to create), quickly exhausting the gains from the transition to video content. However, that doesn't quite explain why there would be no difference between using video in a single topic or a whole course. This is definitely something that needs further exploration. Second:

The relative interactivity was a significant moderator of effects... There was no benefit to video when the control condition was afforded more interactivity... Videos were effective when both conditions were given equivalent opportunities for interactivity... Effects were particularly large when videos were presented in an interactive context (e.g., co-viewing with a peer) that was not available to the control condition...

Interactivity really matters. Simply replacing face-to-face classes (or a textbook) with video content that lacks interactive elements does not afford students the same learning opportunities - the effect of non-interactive video content was small and statistically insignificant. Since interactivity is more difficult to pull off in 'knowledge videos' than in skills-development videos (where students can be encouraged to follow along and practise their skills), that may help explain which type of video content works best.

Finally, when video doesn't replace the traditional content but is instead added as supplementary material, the effect is large - a 0.88 standard deviation increase in performance. Moreover, almost all implementations of adding video increased student performance, so there is much less heterogeneity than was observed for replacing content with video. These results seemed to hold equally for both knowledge and skills-development videos, although again there appeared to be no dose-response relationship. The takeaway is that, at a minimum, once face-to-face teaching returns we should be routinely recording our existing lectures and making those recordings available to students. Teachers need to get over the fear that making recorded lectures available somehow makes students worse off, because that is clearly not the case.

Noetel et al. note in the Discussion section of their paper:

As universities move toward online learning through multimedia, some academics may fear that students will be receiving an inferior learning experience compared with traditional methods. Our review suggests those intuitions are unfounded.

I don't think I would go so far as to say that the intuitions are unfounded. Video clearly adds value in some circumstances, but it is not clear that it dominates face-to-face learning. If the focus of a class is on developing particular skills, video is a good tool. Otherwise, unless the teacher has a particular tool or method for engaging interactively with students during their video watching (and I'm still waiting to see robust evaluations of such methods), it isn't clear that video offers much advantage over face-to-face teaching. Certainly, this paper doesn't suggest that we should be abandoning face-to-face teaching in favour of video.

Read more:

