Saturday 10 October 2015

Are teaching and research substitutes or complements?

A couple of weeks ago I wrote a post on adjuncts being better teachers than tenured or tenure-track professors. My argument there was that the results were not particularly surprising:
Teaching and research both require investment in terms of time and effort. While some may argue that teaching and research are complementary, I'm not convinced. I think they're substitutes for most (but not all) faculty (I'll cover more on that in a post in the next week). Faculty who do teaching but little or no research can be expected to put more effort into teaching than faculty who do a lot of (particularly high quality, time intensive) research. So, contingent teaching faculty should do a better job of teaching on average. These results seem to support that.
However, whether teaching and research are substitutes or complements remains a fairly open question. The theoretical framework that underlies a lot of the work in this area dates back to this excellent 1975 paper by William Becker (gated).

Which brings me to this paper in Applied Economics (ungated earlier version here) by Aurora Garcia-Gallego (Universitat Jaume I), Nikolaos Georgantizis (Universidad de Granada and Universitat Jaume I), Joan Martin-Montaner (Universitat Jaume I), and Teodosio Perez-Amaral (Universidad Complutense). In the paper, the authors use yearly panel data from Universitat Jaume I in Spain over the period 2002-2006 to investigate whether better researchers are also better teachers. They look at administrative work as well, but I want to focus on the teaching/research data and results.

I've been in contact with the authors, as I was sorely tempted to write a comment on their paper for publication in the journal [*]. The authors were kind enough to provide me with some additional analysis that doesn't appear in the paper, which I will share with you below.

Anyway, I'll first explain what my issue is with the paper. Here's what the authors found:
Summarizing our results, we find that professors with a typical research output are somewhat better teachers than professors with less research. Moreover, nonresearchers are 5 times more likely than researchers to be poor teachers. In general, the quality of university-level teaching is positively related with published research across most levels of research output.
Those seem like fairly strong results, but of course that depends on how things are measured. The authors' measure of research is based on an internal measure of research quality, while teaching quality "is obtained from students' responses to an overall satisfaction survey question using a 0-9 Likert scale", and the teaching quality measure is calculated for each professor as the average evaluation across all of their courses. I know what you're thinking, but no, my issue isn't with conflating teaching quality with popularity. However, I do think the teaching measure is a problem. Here's a density plot of the measure of teaching quality (provided to me by the authors):

[Figure: density plot of average teaching quality]
So, what you have there is a reasonably normal distribution, centred on five. Which is what you would expect from Likert scale data, especially if you are averaging the scores across many courses. But wait! What's that huge bar at zero? Are there really a large number of teachers with zero teaching quality? That seems very implausible. I highly suspect that missing data has been treated as zeroes - not necessarily by the authors themselves, but probably in the administrative database they are using.
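To see why that spike matters, here is a minimal simulation of the mechanism I have in mind (entirely my own sketch - the sample size and the share of missing evaluations are made-up numbers, not anything from the paper). If evaluations are simply unavailable for some staff and the database records those cases as zeros, an otherwise bell-shaped distribution of averaged scores acquires exactly this kind of spike at zero:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Hypothetical 'true' average teaching evaluations on a 0-9 scale,
# roughly bell-shaped and centred on five
true_scores = np.clip(rng.normal(loc=5, scale=1.5, size=2000), 0, 9)

# Suppose ~10% of staff have no evaluation data at all (e.g. fixed-term
# or part-time staff), and the database records these as zero
missing = rng.random(2000) < 0.10
recorded_scores = np.where(missing, 0, true_scores)

# The recorded distribution now shows the tell-tale spike at zero
plt.hist(recorded_scores, bins=45, density=True)
plt.xlabel("Average teaching evaluation (0-9)")
plt.ylabel("Density")
plt.title("Missing evaluations coded as zero")
plt.show()
```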

When the authors separate their data into researchers and non-researchers, this is what the histograms look like:

[Figure: histograms of teaching quality for the researcher and non-researcher sub-samples]
So, that large spike at zero is a feature of the non-researchers sub-sample, but not so much in the researchers sub-sample. Which supports my argument that this is a missing data problem - it's more likely that you would have missing data for fixed-term or part-time (adjunct) staff, who are also more likely to be non-researchers. Unfortunately, that probably drives the results that the authors get.

In fact, the authors were kind enough to re-run their analysis for me, excluding the 85 observations where teaching quality was less than one. Here are their results (the first regression shows the original results, and the second regression excludes the observations with teaching quality less than one):

[Table: regression results for the full sample and for the sample excluding teaching quality below one]
For those of you who lack a keen eye for econometrics, I'll summarise. The key result from the first regression is that the variable research1 has a large, positive and statistically significant relationship with the dependent variable (teaching quality). This is the result suggesting that better researchers are also better teachers. However, when you remove the hokey data (in the second regression), the coefficient on research1 halves in size and becomes statistically insignificant. Which doesn't necessarily mean there is no effect - the regression might simply be underpowered to identify what could be a very small effect (although with nearly 2,000 observations and lots of degrees of freedom you'd expect it to have reasonable statistical power).
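For anyone who wants to run this kind of robustness check on their own data, here's a rough sketch of the comparison. This is my own illustration, not the authors' code: the data frame, the 'controls' variable and the function are hypothetical placeholders, with only the research1 and teaching quality names taken from the paper's tables.

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_samples(df: pd.DataFrame) -> None:
    # df is assumed to hold professor-year observations with a
    # 'teaching_quality' column (0-9 average evaluation), a 'research1'
    # research-output measure, and 'controls' standing in for whatever
    # covariates the full specification includes
    full = smf.ols("teaching_quality ~ research1 + controls", data=df).fit()

    # Drop the suspect observations: average teaching quality below one,
    # which are plausibly missing evaluations coded as zero
    trimmed = smf.ols(
        "teaching_quality ~ research1 + controls",
        data=df[df["teaching_quality"] >= 1],
    ).fit()

    # Compare the research1 coefficient and its p-value across the two fits
    print(full.params["research1"], full.pvalues["research1"])
    print(trimmed.params["research1"], trimmed.pvalues["research1"])
```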

All of which means that this paper doesn't provide strong evidence for research and teaching being complements. Neither does it provide evidence for research and teaching being substitutes (for which there would have had to be a significant negative relationship between research and teaching in the regression model).

I've invited the authors to respond - I'll keep you posted.

*****

[*] The editors of the journal didn't respond to my query as to whether they would consider publishing a comment, so instead I summarised much of what I would have written in this post.
