Saturday 22 August 2015

Your workload impacts your grade more than you think it does

One of the most well-established cognitive biases is that people are over-confident in their own abilities. We believe that we are better than we are. This holds for students' expectations about grades as well - on average, at the beginning of a course students expect a higher grade than they end up receiving.

A recent paper by Belayet Hossain and Panagiotis Tsigaris (Thompson Rivers University) takes a closer look at students' grade expectations (sorry, I don't see an ungated version). The authors collected data on grade expectations from students across six semesters of a second-year statistics for business and economics course. Their overall test is whether students' expectations are rational (unbiased, and incorporating all available information relevant to grades), which is a pretty high standard to meet. Not surprisingly, they find that expectations are not rational. However, some of the other results are the most interesting:
Like other studies in the field, we found that students are mostly overconfident. The unbiasedness hypothesis is rejected for most students. Expectations change sluggishly during the semester. The study suggests that students' expectations improve as more information regarding actual performance becomes available. Hence, expectations carry valuable information, and adjustments are made... Course load has a negative impact on a student's final grade after controlling for grade expectations.
That last point is important. Because grade expectations are already controlled for, it means students with a heavier course load do not revise their expectations down enough to account for the extra work - in other words, they are more over-confident about their grades than students with a lighter course load. That has been my experience too - even top students over-estimate their ability to maintain top grades when over-loading their studies. All of which suggests that you should be more realistic about the degree to which your study workload is going to drag down your grades.
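To make the idea concrete, here is a minimal sketch of the kind of regression involved - this is not the paper's actual specification. The data file and variable names (actual_grade, expected_grade, course_load) are hypothetical, and I'm using Python's statsmodels purely for illustration:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per student, with their final grade, the
    # grade they expected at the start of the course, and their course load.
    df = pd.read_csv("grades.csv")

    # Regress the actual grade on the expected grade and course load.
    model = smf.ols("actual_grade ~ expected_grade + course_load", data=df).fit()
    print(model.summary())

    # Unbiased (rational) expectations imply an intercept of 0 and a
    # coefficient of 1 on expected_grade; a joint Wald test checks both
    # restrictions at once.
    print(model.wald_test("Intercept = 0, expected_grade = 1"))

    # A negative coefficient on course_load means heavier loads drag grades
    # down even after grade expectations are controlled for.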

Finally, a couple of gripes about the analysis in the paper. First, there are six semesters of students, so observations of students within each semester are not independent of each other. So, when pooling the sample the standard errors should probably be clustered at the level of the semester (Jeremy Miles gives a good, though technical, explanation of why here). Not clustering the standard errors makes the results more likely to show up as statistically significant than they 'should' be. Second, there are likely to be unobserved differences between semesters, so semester fixed effects should probably have been included too. Omitting these fixed effects may lead to biased coefficient estimates in the models. The combination of these two problems should lead us to doubt the robustness of the results, even though they are mostly similar to those in other studies.
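For what it's worth, here is how those two fixes might look in the same hypothetical setup as the sketch above - semester fixed effects via dummy variables, and standard errors clustered at the semester level. Again, the variable names and the 'semester' column are my assumptions, not the authors' code:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Same hypothetical data as above, now assuming a 'semester' identifier.
    df = pd.read_csv("grades.csv")

    # C(semester) adds semester fixed effects (dummy variables), and
    # cov_type="cluster" clusters the standard errors at the semester level.
    model = smf.ols(
        "actual_grade ~ expected_grade + course_load + C(semester)", data=df
    ).fit(cov_type="cluster", cov_kwds={"groups": df["semester"]})
    print(model.summary())

One caveat on the sketch itself: with only six semesters there are very few clusters, so even clustered standard errors would only be a rough correction.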
