Sunday, 8 April 2018

Flipped classrooms are still best only for the top students

Back in 2016, I wrote a couple of posts about blended learning, specifically about the evidence on blended or online learning and on flipped classroom models (see here and here). My takeaway from this literature has been, and continues to be, that it works very well for highly engaged, high-achieving students, but it is rubbish for less engaged, low-achieving students. So, I was interested to see this new paper by Nathan Wozny (USAF Academy), Cary Balser (Notre Dame), and Drew Ives (USAF Academy), published in the Journal of Economic Education (sorry, I don't see an ungated version online).

This article is interesting because it applies a randomised controlled trial (RCT) approach. If implemented well, RCTs are the gold standard for evaluating the impacts of interventions, because the randomisation should lead the treatment and control groups to be similar across the whole range of observed and unobserved characteristics that might affect the outcome. That means that if you observe a difference between the two groups after the intervention, it is highly likely to be due to the intervention, and not due to pre-existing differences between the groups. There's a bit more to it than that, but in short that's why RCTs are usually the best approach.
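To make that intuition concrete, here is a minimal simulation sketch (my own illustration, not from the paper; the class size of 137 is borrowed from the study, but the variables and their distributions are just assumptions) showing how random assignment tends to balance both observed and unobserved characteristics across the two groups:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical class of 137 students, each with an observed characteristic
# (GPA) and an unobserved one (motivation). Both are made-up for illustration.
n = 137
gpa = rng.normal(3.0, 0.4, size=n)          # observed
motivation = rng.normal(0.0, 1.0, size=n)   # unobserved

# Randomly assign roughly half of the students to the treatment group
treated = rng.permutation(n) < n // 2

for name, x in [("GPA", gpa), ("Motivation", motivation)]:
    diff = x[treated].mean() - x[~treated].mean()
    print(f"{name}: treatment-control difference in means = {diff:.3f}")

# With random assignment, both differences should be close to zero, which is
# why a post-intervention gap in outcomes can be attributed to the treatment.
```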

Anyway, Wozny et al. randomised their third-year econometrics class at the USAF Academy (137 students) into sections that received a mix of traditional lectures and flipped classroom lessons. Each section received five flipped classroom lessons and five traditional lectures within the ten experimental lessons, but which particular lessons were flipped differed across sections. They explain:
For each flipped lesson, the instructor reviewed comprehension questions, using student responses as a basis for discussion. Next, the instructor facilitated independent or small group work on exercises and provided mini-lectures as appropriate for topic and student needs. Students did not have any assignment in advance of each lesson selected as a traditional lecture but listened to the instructor lecture, during class, on the same material covered in the video for the flipped lesson. Students in lecture lessons had access to the same exercises offered in the flipped classes, but the lecture group generally did not have available class time to complete the exercises. The key difference in the two conditions is therefore timing rather than the primary learning resources provided: both groups received a lecture (before class for the flipped group and during class for the lecture group) and exercises (during class for the flipped group and after class for the lecture group).
They then evaluated the impact on student learning at three points in time:
Six classes ended with an online, unannounced, ungraded formative assessment testing comprehension of content covered in approximately the three lessons preceding the assessment. Four announced, written graded exams administered throughout the semester measured medium-term comprehension on content covered in approximately the eight lessons preceding the exam... A comprehensive written final exam administered at the end of the semester measured long-term comprehension.
Their key results are nicely summarised in Figure 1:

There is only a statistically significant difference for the medium-term assessments (the four written, graded exams held throughout the semester), and not for the online assessments or, tellingly, for the final exam. Wozny et al. then looked at sub-groups, specifically separating their sample into students above and below the median GPA. Using this sub-group analysis, they find a statistically significant and positive effect of flipped classrooms for above-median-GPA students, and a statistically insignificant effect for below-median-GPA students. They then conclude from this that:
Impacts are slightly larger and also persist to long-term assessments for high-performing students, but the relatively similar effects for above-median and below-median students support the generalizability of the results to a wide spectrum of students.
I disagree. The effect for below-median students may be statistically insignificant, but the point estimate is negative and relatively large. Here's the key table (you may need to enlarge it to see clearly):


The size of the negative effect on below-median-GPA students is a bit more than half the size of the positive effect on above-median-GPA students (and the overall effect for both groups combined is statistically insignificant, as shown in the figure above). The problem may be that their sample is not large enough to provide the statistical power needed to distinguish the negative effect on below-median-GPA students from zero, rather than the effect actually being zero.
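To illustrate the power problem, here is a rough back-of-the-envelope simulation (again my own, not from the paper; the effect size of -0.25 standard deviations, the group sizes of 68, and the simplification to a two-group comparison are all assumptions for illustration, not the study's actual design or estimates):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed numbers, purely for illustration: a true flipped-classroom effect of
# -0.25 standard deviations on exam scores for below-median-GPA students, and
# a simple comparison of two independent groups of 68 observations each,
# tested at the conventional 5 percent significance level.
n_per_group, true_effect, n_sims = 68, -0.25, 10_000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, size=n_per_group)
    treated = rng.normal(true_effect, 1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    rejections += p_value < 0.05

print(f"Power to detect a {true_effect} SD effect: {rejections / n_sims:.2f}")
# Under these assumptions, power comes out well below the conventional 0.8
# benchmark, so a genuinely negative effect of this size would often show up
# as 'statistically insignificant' in a sample of this scale.
```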

As far as I'm concerned, this study is not enough to convince me. As I concluded in 2016:
I'm still waiting for the research that will convince me that the flipped classroom model will have positive outcomes for the marginal (or even the median) student that I teach.