Sunday, 4 January 2026

The Guardian Cap mandate in the NFL and the 'Peltzman effect'

Yes, this is another NFL post (after yesterday's post). And one preliminary before we get to it: yes, I will be cheering for the Atlanta Falcons tomorrow, since a Falcons win is the only way for my Panthers to make the playoffs after they lost to the Buccaneers today. With that out of the way, on to business: player safety in the NFL.

Concussions have become big news in contact sports over the last decade or so. Long-time fans will surely have noticed that more players are being substituted out of games, or missing games entirely, due to concussions than was the case in the past. The concern about concussions stems from their implication in Chronic Traumatic Encephalopathy (CTE), a condition linked to behavioural problems, mood swings, and cognitive issues that closely resemble dementia.

The NFL has been the target of a lot of attention, and has taken action. One change was the introduction of 'unaffiliated neurotrauma consultants' to make in-game decisions about whether players should be kept out of a game after an on-field concussion (although that change has not been without controversy). Another was the introduction of the Guardian Cap, a soft shell that attaches over an existing helmet and is supposed to reduce the force of impacts to the head. Do Guardian Caps work, though? Guardian, the maker of the cap, is itself cautious on that point:

Researchers have not reached an agreement on how the results of impact absorption tests relate to concussions. No conclusions about a reduction of risk or severity of concussive injury should be drawn from impact absorption tests. Guardian has always stood by the fact that Guardian Caps reduce the impact of hits and that its use should be one piece of the puzzle to an overall safety strategy.

That shouldn't dissuade researchers from looking into the impact on concussions, and it hasn't. This new article by Kerianne Lawson Rubenstein (Syracuse University) and Todd Nesbit (Ball State University), published in the Southern Economic Journal (open access), looks into whether the introduction of the Guardian Caps was associated with a reduction in concussions in the NFL.

Before you conclude that it is self-evident that a padded helmet that reduces head impacts must reduce concussions, we need to discuss the 'Peltzman effect'. In a famous paper in the 1970s, Sam Peltzman (University of Chicago) showed that mandatory safety devices on cars, such as seat belts, did not reduce traffic deaths, and actually increased the number of non-fatal car accidents. This 'Peltzman effect' is otherwise known as 'offsetting behaviour': a policy makes an activity less risky, and people respond with more of the now-less-risky behaviour. One sporting example: when new driver safety devices were introduced into NASCAR race cars, drivers responded by driving more recklessly, increasing the number of crashes.

Rubenstein and Nesbit test for a Peltzman effect of the Guardian Caps. They hypothesise that:

...wearing the Guardian Caps incentivizes riskier tackling due to the perceived safety from wearing the Cap. Players may not accurately calculate the risk of a helmet-to-helmet hit if they place a lot of faith in the Cap's ability to absorb the shock. And if the players only wear the Caps in practice and not during games, they may actually end up hitting harder and protecting themselves less when falling to the ground in games without the Caps because of their practiced behavior.

In other words, even if the Guardian Caps work as intended and make each impact less dangerous, players may respond by taking more risks, leading to more hits, harder hits, or less care in avoiding hits. This would be the unintended consequence of using the Guardian Caps.

The NFL mandated the use of the Guardian Cap in contact practice sessions through to the second preseason game in 2022, for all linemen, linebackers, and tight ends. This was extended to all contact practice through the entire season in 2023, as well as including running backs and fullbacks. In 2024, this was extended again, by including wide receivers and defensive backs, and by allowing (but not mandating) players to wear the Guardian Cap during games (although, from my viewing, very few players play while wearing a Guardian Cap).

Rubenstein and Nesbit use data from the 2021/22 to 2023/24 NFL seasons, and a difference-in-differences research design. This basically involves calculating: (1) the difference in concussion prevalence between players in positions with the mandate and players in positions without the mandate, before the mandate was introduced; (2) the same difference after the mandate was introduced; and then testing whether the difference between those two differences is statistically significant. Rubenstein and Nesbit rely on data from weekly injury reports, and their unit of observation is the injured player. Essentially, this analysis asks: for a given injured player, was the injury more likely to be a concussion when the Guardian Cap was mandated for their position than when it was not?
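
To make the design concrete, here is a minimal sketch, in Python, of the kind of difference-in-differences regression described above. This is my own illustration, not the authors' actual specification: the synthetic data, the variable names, and the linear probability form are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the weekly injury-report data (one row per injured
# player); the real analysis uses NFL injury reports from 2021/22 to 2023/24.
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "mandated_position": rng.integers(0, 2, n),  # 1 = position group covered by the mandate
    "post_mandate": rng.integers(0, 2, n),       # 1 = after the mandate took effect
})
# Build a small positive interaction effect into the data, for illustration.
p_concussion = 0.10 + 0.026 * df["mandated_position"] * df["post_mandate"]
df["concussion"] = rng.binomial(1, p_concussion)  # 1 = the injury was a concussion

# Linear probability model: the coefficient on the interaction term is the
# difference-in-differences estimate, i.e. the post-mandate change in the
# probability that a reported injury is a concussion for mandated positions,
# over and above the change for non-mandated positions.
model = smf.ols("concussion ~ mandated_position * post_mandate", data=df).fit()
print(model.params["mandated_position:post_mandate"])  # should be close to 0.026
```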

In their main analysis, Rubenstein and Nesbit find:

... a consistently positive and significant relationship between players that were mandated to wear Guardian Caps after the mandate took place and concussions. This suggests that relative to before the mandate, concussions after the mandate were more likely for players in the position groups affected by the mandate when looking at the total number of injuries across the NFL.

The size of the effect is small but nevertheless meaningful: a given injured player is about 2.6 percentage points more likely to have been injured by a concussion with the Guardian Cap mandate in place than without it. However, there is a problem here. The question they are answering isn't quite the question we really want answered. An injury report might be more likely to be a concussion if there are more concussions, or if there are fewer injuries of other types. In other words, the share of injuries that are concussions can go up either because concussions are more common, or because other injuries are less common, even if concussions themselves haven't changed much (see the toy example below). Rubenstein and Nesbit partially allay this concern by showing that there are no effects on either knee injuries or ankle injuries. However, despite being statistically insignificant, the point estimate on knee injuries is negative and of the same magnitude as the positive effect on concussion injuries. So, the effect they observe in their main analysis could be driven by there being fewer knee injuries, rather than more concussions.
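
To see why the concussion share can move without the concussion count moving, consider a toy example (my numbers, purely illustrative):

```python
# Hypothetical injury counts: concussions are unchanged, but a drop in knee
# injuries shrinks the total, so the concussion *share* of reported injuries
# rises anyway.
before = {"concussions": 20, "knee": 50, "other": 130}  # total = 200
after = {"concussions": 20, "knee": 40, "other": 130}   # total = 190
for label, counts in (("before", before), ("after", after)):
    share = counts["concussions"] / sum(counts.values())
    print(f"{label}: concussion share = {share:.1%}")
# before: 10.0%, after: 10.5% -- a higher share with no extra concussions
```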

Fortunately, Rubenstein and Nesbit then go on to look at the count of concussions in each game, comparing players in positions mandated to wear the Guardian Cap and players in positions who were not mandated. In this analysis, they find that:

The coefficient on our variable of interest is consistently positive and statistically significant, suggesting that the prevalence of concussions among players that were mandated to wear the Guardian Caps increased compared to other players after the mandate went into effect. The magnitude of our coefficients suggests about 0.07 more concussions per game for the treated group of positions on a team relative to the untreated group of positions on the team. This may seem small, but considering the observation is per team and per week, our estimates suggest there were about 36 more concussions per NFL season across all linemen, linebackers, tight ends, full backs, and running backs required to wear Guardian Caps than in the season before the mandate.
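
The arithmetic behind that season total appears to be roughly as follows (the 32-team and 16-week figures are my own assumptions about how the authors scaled up the per-team, per-week estimate, not something stated in the quote):

```python
# Scaling the per-team, per-week estimate up to a season total.
effect_per_team_week = 0.07
teams = 32
weeks_per_season = 16
print(effect_per_team_week * teams * weeks_per_season)  # 35.84, i.e. 'about 36'
```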

So, it does appear from these analyses that there was an unintended consequence of mandating the Guardian Caps: the number of reported concussion injuries increased. But not so fast! Remember that the mandate was introduced during a period of increasing scrutiny of head injuries in the NFL, and it was not the only change during this period. The NFL changed its concussion protocols during the 2022 season (see also this note by the NFL Players Association). So, an increase in concussions noted on injury reports might reflect a genuine increase in concussion injuries, or simply an increase in the reporting of concussions. Because the protocols changed over the same period, we can't know what concussion reports would have looked like in the absence of the Guardian Cap mandate, and it is very hard to separate a true increase in concussions from increased reporting.

So, we can't conclude from this research that there was a Peltzman effect of mandating the Guardian Caps. It remains a possibility, but we would need better research in order to identify any such effect.

[HT: Marginal Revolution]


Saturday, 3 January 2026

What the COVID public health mandates taught us about home advantage in the NFL

The final weekend of the NFL regular season is upon us, and my favourite Carolina Panthers will play for the NFC South division title against the Tampa Bay Buccaneers tomorrow. The winner wins the division and goes to the playoffs. The loser goes home disappointed [*]. The game is being played in Tampa, which conveys some home advantage to the Bucs. How much home advantage, and why?

Those are essentially the questions answered by this new article by Adam Cook (State University of New York at Fredonia), published in the Journal of Sports Economics (sorry, I don't see an ungated version online). There have been lots of studies of home advantage across many sports, including the NFL. The problem is that, while it is obvious that there is home advantage (home teams do win more often), it isn't clear why home advantage exists. Cook notes that:

Various mechanisms have been proposed to explain the persistent advantage enjoyed by the home team: direct crowd effects, home crowd influence upon referee decisions, travel hardships for the visiting athletes, unexpected temperature, wind and precipitation shocks may all explain portions of the persistent difference in success between home and visiting competitors.

Cook focuses his attention on the home crowd. However, untangling the effect of attendance on home advantage is difficult, because:

...better home team performance will positively affect demand for stadium attendance, but greater stadium attendance may positively affect home team success at the same time; the two quantities are likely simultaneously determined...

To overcome this problem, Cook leverages the COVID public health mandates, which limited attendance at some stadiums to 31,700 fans, and at others to zero, during the 2020 NFL season [**]. The good thing about this approach is that the COVID mandates created sharp constraints on attendance that weren't driven by team quality or local demand. However, the COVID mandates were not the only disruption in 2020, and if any of those other disruptions affected game outcomes directly, the instrument could partly pick those up as well. That said, it's a plausible instrument, and one that others have used (although in less comprehensive analyses than Cook's).

Cook uses data from the 2009 to 2022 seasons, with a binary variable for the 2020 season as an instrument for stadium attendance (and, in separate analyses, as an instrument for how full the stadium was, in terms of percentage of capacity). Cook looks at the impact on a number of variables, including the probability of a home win, home advantage (measured in points differential), total points scored (by the home team, the away team, and both combined), and various measures of penalties (to pick up differences in referee decisions). To deal with travel hardships, Cook includes travel distance (and the number of time zones crossed) in the model, while to deal with weather, he initially includes temperature, wind, and precipitation as variables in the model, then estimates models separately for games played indoors (where weather cannot be a factor) and outdoors (where it can).
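
As a concrete illustration of the instrumental variables logic, here is a minimal two-stage least squares sketch in Python. This is my own simplification, not Cook's actual specification: the synthetic data, the variable names, and the stripped-down set of controls are all assumptions.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Synthetic stand-in for game-level data. The binary 'covid_season' indicator
# (1 for games in 2020) is the instrument: it shifts attendance sharply, for
# reasons unrelated to team quality or local demand.
rng = np.random.default_rng(1)
n = 3000
covid_season = rng.integers(0, 2, n)
# First stage: COVID mandates slash attendance (measured in 10,000s of fans).
attendance = 6.3 - 5.0 * covid_season + rng.normal(0, 0.8, n)
travel_distance = rng.normal(1000, 400, n)  # a control variable, in miles
# 'True' structural effect of attendance on the home points margin, set to
# 0.33 points per 10,000 fans for this illustration.
home_margin = 0.33 * attendance + rng.normal(0, 10, n)

df = pd.DataFrame({
    "home_margin": home_margin,
    "attendance": attendance,
    "travel_distance": travel_distance,
    "covid_season": covid_season,
    "const": 1.0,
})

# Two-stage least squares: home margin regressed on instrumented attendance.
res = IV2SLS(
    dependent=df["home_margin"],
    exog=df[["const", "travel_distance"]],
    endog=df["attendance"],
    instruments=df["covid_season"],
).fit()
print(res.params["attendance"])  # should recover roughly 0.33 points per 10,000 fans
```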

Cook finds initially that:

The stadium attendance effect on home winning percentage, home field advantage, total points scored and visiting team points scored are significant at the 5% level...

...for every 10000 fans who attend an NFL game, the probability of a home team victory rises by 1.10% and the home field advantage grows by 0.3323 points. When evaluated at the average NFL stadium attendance, 63407 fans, home attendance accounts for 2.11 points (or 97%) of the mean 2.17 point home field advantage observed in the full sample.

The total number of points scored falls by 0.6542 per 10000 fans, but the reduction in aggregate scoring is not shared between home and away teams– instead it is the visiting team who suffers more, scoring 0.4933 fewer points per 10000 fans.

Summing up, home advantage is related to crowd size, and operates primarily through the away team scoring fewer points. Turning to the effect on penalties (and thereby, influence on refereeing crews), Cook finds that:

For every additional 10000 fans in attendance, the total number of penalties rises by 0.2499 total penalties per game. This increase in total penalties is shared equally between the home and visiting teams, however, with home teams receiving 0.1077 extra accepted penalties and visitors 0.1052 extra accepted penalties per 10000 fans in attendance. The effect of attendance on penalty yardage is also comparable, with home teams receiving 0.7461 extra penalty yards and visitors receiving 0.7754 extra yards per 10000 fans in attendance.

There is no home advantage in terms of penalties, so NFL refereeing crews do not appear to be biased towards the home team. At least, not through a mechanism of crowd influence on penalties.

What about travel effects? In an earlier OLS regression, Cook finds that the correlations between game outcomes and both the distance the visiting team travelled and the number of west-to-east time zone changes the visiting team experienced are small and statistically insignificant.

Turning to weather, Cook's separate analyses between games played indoors, and those played outdoors, reveal that:

When compared to the full sample results, rising attendance in outdoor games no longer has any measurable effect on the probability of the home team winning, nor on home team points scored and the effects on home field point advantage, total points scored and visiting team points scored are all diminished compared to the full sample estimates...

...playing indoors is associated with a larger home field advantage– a much larger advantage. For every 10000 fans in attendance at an indoor game, the probability of a home team win rises by 3.15% and home advantage rises by 0.6227 points– almost double the effect found using the full sample and 243% larger than the attendance effect on home advantage at outdoor games. Evaluated at the mean indoor attendance, 64678 fans, the average home field advantage rises to 4.03 points, or 186% of the average home field advantage observed in the full data sample.

Total points scored falls by 1.412 points, and this decrease is accounted for by a 0.3947 point decrease in home team scoring, but a 1.017 point decrease in away team scoring per 10000 in attendance, suggesting that greater indoor crowd size negatively affects both teams’ scoring output compared to the full sample and outdoor sample results, but affects the visiting side to a much greater degree.

Penalties were similar between indoor and outdoor games. What we learn from these results is that home advantage is not driven by the weather, because it is bigger when weather is not a factor (in indoor stadiums). So, going back to the list of explanations for home advantage that Cook begins with, he has eliminated home crowd influence upon referee decisions (no differences between home and away teams); travel hardships for the visiting athletes (not statistically significant); and unexpected temperature, wind, and precipitation shocks (the home advantage is bigger when games are played indoors). That only leaves direct crowd effects, unless we are missing something. Cook concludes that:

In the absence of any detectable NFL referee bias, these results suggest that it is the NFL home stadium crowd itself that is directly affecting the on-field performance of the home and visiting athletes. Despite a lack of data tracking in-match noise intensity, given the acoustic differences between indoor and outdoor stadiums, this effect is likely related to crowd noise levels.

That conclusion will certainly please many fans attending NFL games, who really believe that they have a direct impact on team performance. The 12th Man is real!

Should my Panthers be worried? Based on these results, they should be somewhat worried about the crowd, but not as much as for some other opponents. Raymond James Stadium is outdoors, and although Cook found no significant effect on the probability of the home team winning, there was still an (albeit smaller) effect on home advantage measured in points. The betting odds have the Panthers as 2.5-point underdogs. With a 70,000-capacity stadium, Cook's results for outdoor games imply that 1.8 points [***] of that spread come from home advantage.

Let's go Panthers! Keep pounding!

*****

[*] Although, if the Bucs beat the Panthers and then the Atlanta Falcons beat the New Orleans Saints the next day, the Bucs, Panthers, and Falcons will all finish with a record of 8-9. Due to round-robin results between those three teams, the Panthers win the division. Hopefully, a Falcons win won't be necessary!

[**] It was eerie to watch that season, with 'simulated' crowd noise for the stadiums that had no fans present.

[***] The coefficient of 0.2563 points differential is for each 10,000-person increase in attendance. The difference between 0 and 70,000 attendance is 7 times the coefficient, or 1.7941.

Friday, 2 January 2026

This week in research #107

This post actually covers two weeks, but here's what caught my eye in research over that time (which was fairly quiet!):

  • Voigtländer and Voth (with ungated earlier version here) find that the building of the Autobahn network in Nazi Germany boosted popular support for Adolf Hitler, helping to entrench the Nazi dictatorship
  • Chetty et al. (with ungated earlier version here) present the latest output from their research on opportunity and intergenerational mobility, a public atlas of mean outcomes in adulthood by childhood census tract in the US

Also new from the Waikato working papers series:

  • Buckle, Ryan, and Song examine how firm price-setting behaviour has evolved across episodes of high inflation, including the recent COVID-19 inflation episode, finding that the proportion of firms changing prices has become more highly correlated with inflation since the high inflation episodes of the 1970s and 1980s, meaning that the Phillips Curve has become more nonlinear at higher levels of inflation and that the inflation accelerator operates more strongly than in the past

Thursday, 1 January 2026

Employers strongly prefer applicants who complete in-person rather than online qualifications

As I've noted before (see this post and the links at the end of it), on average online and blended learning don't appear to make students any better off, or any worse off, in terms of learning (although that conclusion hides important heterogeneity, with more engaged students doing better with online learning, and less engaged students doing worse). So, on average, the human capital or skills gained appear similar for online and in-person learning. If employers cared only about skills, they shouldn't care whether those skills were acquired online or in person.

However, human capital development is only part of the benefits of higher education. In his book The Case Against Education (which I reviewed here), Bryan Caplan presented an estimate that the education premium is 20% human capital and 80% signalling. And as I have noted several times (most recently in this post), the signal from online education is much weaker than the signal from in-person education. Putting that all together, we should expect employers to be more sceptical of online qualifications, and less willing to hire graduates with online qualifications than those who have studied in person.

This 2021 article by Conor Lennon (University of Louisville), published in the journal ILR Review (ungated version here), uses a correspondence experiment to demonstrate exactly that. A correspondence experiment involves the researcher making job applications with CVs (and sometimes cover letters) that differ in known characteristics. The researcher then counts how often CVs with different characteristics receive callbacks (a positive phone message or email, or an invitation to an interview). A very simple regression model can then be used to estimate the effect of each characteristic on the probability of receiving a callback (a minimal sketch of such a regression follows the quote below). That is what Lennon did in this experiment, with the key characteristic being whether the applicant studied online or in person. As he explains:

...I examine employer responses to 1,891 job applications using 100 unique fictitious applicant profiles. The fictitious profiles are based on real rĂ©sumĂ©s, gathered from a major online jobs website, and represent recent college graduates in four broad areas: business, engineering, nursing, and accounting. For each real rĂ©sumĂ©, names, dates, contact information, addresses, and previous employer and education details were anonymized. At random, for 50 of these rĂ©sumĂ©s, the researcher added the word ‘‘online’’ in parentheses next to the name of the listed college or university. The researcher then used these rĂ©sumĂ©s to apply for suitable job openings... Because employers typically left voicemail messages without specifically offering an interview time, any positive personalized contact is considered a ‘‘callback.’’
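
A minimal sketch of that kind of callback regression might look like the following. Again, this is my own illustration, not Lennon's code: the synthetic data, the variable names, and the linear probability form are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 1,891 applications: 'callback' is 1 for any
# positive personalised contact, and 'online' is 1 if the résumé listed the
# degree as online.
rng = np.random.default_rng(2)
n = 1891
online = rng.integers(0, 2, n)
gpa = rng.uniform(2.5, 4.0, n)
# Build in the paper's headline pattern, for illustration: a large callback
# penalty for online degrees, and a GPA payoff only for in-person degrees.
p_callback = 0.083 + (1 - online) * (0.073 + 0.03 * (gpa - 3.0))
df = pd.DataFrame({
    "callback": rng.binomial(1, np.clip(p_callback, 0, 1)),
    "online": online,
    "gpa": gpa,
})

# Linear probability model: the coefficient on 'online' estimates the
# percentage-point callback penalty for an online degree, and the interaction
# tests whether GPA pays off differently for online applicants.
model = smf.ols("callback ~ online * gpa", data=df).fit()
print(model.params[["online", "online:gpa"]])
```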

To avoid the employers detecting that they were being subjected to research, each job opening received only one randomised application. However, this 'unmatched' design is still appropriate, because randomisation and a large sample size mean that, on average, the only systematic difference between the two groups of résumés is whether the degree is listed as online or in-person. Lennon finds that:

The effect of having an online degree is large and negative in all specifications. Specifically, the estimates... suggest a 7.3 percentage-point difference in callback rates between traditional and online degree holders, all else being equal... Given that the mean callback rate for online degree holders is 8.3%, a 7.3 percentage-point difference suggests that a rĂ©sumĂ© reflecting a traditional degree will receive almost twice as many callbacks for interviews as a rĂ©sumĂ© reporting an online degree, all else being equal.

That is a huge effect and, because 'online' versus 'in-person' was randomly assigned across otherwise similar résumés, we can interpret the 7.3 percentage-point difference in callbacks as a causal effect of completing an online qualification (adding 7.3 percentage points to the 8.3 percent online callback rate gives 15.6 percent, which is where the 'almost twice as many callbacks' comes from). Lennon then tests whether the effect is larger depending on the gender or race of the (fictitious) applicant, or by profession. The results show some differences, but I wouldn't read too much into them, because they're driven by a small proportion of the sample. On the other hand, online education does make a difference to the effect of GPA on the probability of getting a callback. Specifically:

...GPA matters significantly but only for in-person degree holders. Put another way, if you earn an online degree, even a 4.0 GPA will not help all that much. This estimate is a confirmation of the main takeaway of this article: Employers currently do not appear to trust online education.

This is a clear indication of the difference in signalling value between an online qualification and an in-person qualification (note that the qualifications that Lennon chose were those that could be completed online or in person, and were otherwise identical). GPA is also a signal of quality. If GPA makes no difference to callback rates for an online qualification, then employers aren't distinguishing between high-GPA and low-GPA graduates of online qualifications: they don't seem to value GPA as a signal of applicant quality when the applicant completed an online qualification. In contrast, GPA makes a large and statistically significant difference for in-person qualifications, showing that GPA remains a strong signal for employers when students complete an in-person qualification. Lennon concludes that:

Because learning outcomes appear to vary little between in-person and online instruction... fewer callbacks for those with online degrees would support the idea that employers view having a traditional degree as a better signal of employability... Alternatively, employers may be inferring some socioeconomic characteristics, or they may believe that human capital formation is diminished in online programs relative to traditional degrees (even if it is not), that the individual will be less socially adept, or that a traditional college education gives students something more than just grades written on a piece of paper.

Lennon rightly notes that his results apply to new graduates, and may not apply to second-chance learners, who often have more real-world experience before beginning (or returning to) higher education. The study was also conducted in 2015-2017, and some things have definitely changed since then. Large language models may make online qualifications even less of a quality signal than they were when this research was conducted.

The new graduate market is important to universities. We need to understand how employers view our graduates. Based on this study, we should be very cautious about encouraging students into online-only qualifications, lest we hamper their chances of employment when they graduate.