Saturday, 21 December 2024

This week in research #54

Here's what caught my eye in research over the past week (which, it appears, was a very quiet week in the lead-up to Christmas):

  • Bietenbeck (open access) finds that exposure to academically motivated classmates causes an increase in student achievement among elementary school students, using data from the Project STAR randomised controlled trial in Tennessee

Wednesday, 18 December 2024

Onshore windfarms vs. birds

Several times recently, I've had conversations with others about environmental objections to windfarms, and specifically about their impacts on birds. I've expressed surprise that anyone could believe that large, slow-moving wind turbines would be a threat to birds. It turns out that there is research documenting negative impacts of wind turbines on birds (see here or here), but that research doesn't actually demonstrate that wind turbines cause a decrease in bird populations. The problem, of course, is that it isn't feasible to run a randomised controlled trial - placing wind turbines at random in some areas and not others, and comparing bird populations between them - because the cost of such an experiment would be enormous.

Fortunately, there are statistical methods that we can use to try to estimate the causal effects. And that is what this new article by Meng et al., published in the Journal of Development Economics (ungated earlier version here), attempts to do. Specifically, they look at the effect of onshore windfarms on bird biodiversity at the county level in China. They have two measures of biodiversity:

Bird abundance is the average number of birds of a given species per checklist observed at a county-month-year-species level. Species richness is the total number of unique species observed in a given county at the month-year level, which better reflects the diversity of the bird populations.
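
To make those two measures concrete, here is a minimal sketch of how they could be computed from checklist-level data. To be clear, the column names and toy data here are my own assumptions, not the actual format of the CBR dataset:

```python
import pandas as pd

# Hypothetical checklist-level data: one row per species observed on a
# single checklist (columns are illustrative, not the CBR's format)
checklists = pd.DataFrame({
    "county":    ["A", "A", "A", "B", "B"],
    "month":     [1, 1, 1, 1, 1],
    "year":      [2020, 2020, 2020, 2020, 2020],
    "checklist": [1, 1, 2, 3, 3],
    "species":   ["sparrow", "crane", "sparrow", "crane", "egret"],
    "count":     [4, 2, 3, 1, 5],
})

# Bird abundance: total birds of each species divided by the number of
# checklists, at the county-month-year-species level
totals = checklists.groupby(
    ["county", "month", "year", "species"], as_index=False
)["count"].sum()
n_lists = checklists.groupby(
    ["county", "month", "year"], as_index=False
)["checklist"].nunique().rename(columns={"checklist": "n_checklists"})
abundance = totals.merge(n_lists, on=["county", "month", "year"])
abundance["birds_per_checklist"] = abundance["count"] / abundance["n_checklists"]

# Species richness: the number of unique species per county-month-year
richness = checklists.groupby(
    ["county", "month", "year"], as_index=False
)["species"].nunique()

print(abundance)
print(richness)
```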

To establish causality, they use a difference-in-differences (two-way fixed effects) model, which essentially compares the change in bird biodiversity before and after a windfarm is installed, between counties with and without windfarms. However, Meng et al. go a step further, using an instrumental variables approach, instrumenting for the location of windfarms with the interaction between national-level growth in windfarms and county-level average windspeed at 100 metres. That instrumental variables approach should mitigate bias arising from windfarm locations being correlated with unobserved determinants of bird biodiversity.
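
To fix ideas, here is a minimal sketch of the two-way fixed effects estimation with a Bartik-style instrument, using simulated data. This is my own illustration of the general approach, not the authors' code: all variable names, functional forms, and parameter values are assumptions.

```python
# A minimal sketch of two-way fixed effects (TWFE) estimation with a
# Bartik-style instrument, on simulated data. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated panel: counties observed over periods (e.g. month-years)
n_counties, n_periods = 50, 24
df = pd.DataFrame(
    [(c, t) for c in range(n_counties) for t in range(n_periods)],
    columns=["county", "period"],
)

# Bartik-like instrument: time-invariant county windspeed interacted
# with county-invariant national growth in windfarms
windspeed = rng.uniform(2.0, 10.0, n_counties)
national_growth = np.linspace(0.0, 1.0, n_periods)
df["instrument"] = windspeed[df["county"]] * national_growth[df["period"]]

# Turbine numbers respond to the instrument (first-stage relevance)
df["turbines"] = 5.0 * df["instrument"] + rng.normal(0.0, 1.0, len(df))

# Bird abundance falls with turbines (true effect = -0.1), plus county
# and period effects and noise
county_fe = rng.normal(0.0, 1.0, n_counties)
period_fe = rng.normal(0.0, 1.0, n_periods)
df["abundance"] = (
    10.0
    - 0.1 * df["turbines"]
    + county_fe[df["county"]]
    + period_fe[df["period"]]
    + rng.normal(0.0, 0.5, len(df))
)

# First stage: turbines on the instrument, with county and period
# fixed effects (the 'two-way' fixed effects)
first = smf.ols("turbines ~ instrument + C(county) + C(period)", data=df).fit()
df["turbines_hat"] = first.fittedvalues

# Second stage: abundance on predicted turbines, again with two-way
# fixed effects; the coefficient should be close to the true -0.1
second = smf.ols("abundance ~ turbines_hat + C(county) + C(period)", data=df).fit()
print(second.params["turbines_hat"])
```

(One caveat on the sketch: doing two-stage least squares manually like this recovers the right point estimate, but the second-stage standard errors are wrong; a dedicated IV routine would be used in practice.)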

The novelty of this paper is not just in the methods, but in the data that Meng et al. employ. To measure bird biodiversity, they make use of data:

...from the China Birdwatching Report (CBR, similar to the eBird Reference Dataset), a citizen science dataset consisting of reports from users, including information on individual bird trips and associated characteristics, such as the specific date and time, location of a specific trip, as well as species and quality of birds encountered...

Their dataset covers the period from 2015 to 2022, and includes data collated from over 33,000 checklists. They also control for a variety of other variables:

...including average bird observed duration, average temperature, average visibility, average wind speed, total precipitation, average ozone, percentage of natural park areas in the county, average population density, and average night light value.

Using this data and the two-way fixed effects approach, Meng et al. find that:

A one standard-deviation increase in wind turbines (approximately 84 turbines)... in a given county leads to a 9.75% decrease in bird abundance per checklist from the mean value of 5.38...

...while a one standard-deviation increase in wind turbines (approximately 84 turbines) in a given county decreases the number of unique bird species by 17.67% from the mean value of 66...
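
In absolute terms, those estimates imply roughly 0.5 fewer birds per checklist (9.75% of 5.38) and nearly 12 fewer unique species (17.67% of 66) for each standard-deviation increase in wind turbines.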

Meng et al. also find evidence that the impacts are greater on migratory birds than on resident birds (important given that China is a major pathway for migratory birds), and that the effects are larger in forested and urban/farmland areas than in grassland areas. There is also evidence that the impact is greatest for the largest bird species.

Finally, Meng et al. show that there are effects of windfarms on neighbouring counties (as well as the counties in which the windfarms are located), and that those effects are somewhat smaller in size. That made me wonder why those analyses were not the primary results in the paper, since it seems obvious that birds may move across county borders.

So, it does appear that windfarms might cause a decrease in bird biodiversity. Meng et al. even address a bunch of concerns that jumped out at me as I was reading the paper, especially the concern that birdwatchers, anticipating fewer birds near windfarms, might do less birdwatching in those locations. On that point, Meng et al. note that:

We do not find a significant impact of wind turbine installations on birdwatcher behaviors regarding the submitted number of checklists...

And they further support that with detailed mobile phone GPS data, showing that there were not fewer trips made to areas with windfarms, relative to areas further away. However, a couple of rather technical concerns do remain. First, I wondered why Meng et al. used the interacted variable (national growth in windfarms interacted with windspeed), rather than just windspeed alone. One likely reason is that county windspeed is time-invariant, so on its own it would be absorbed by the county fixed effects; interacting it with national growth provides the time variation the model needs. They describe this as a "Bartik-like variable", but we should be cautious about whether Bartik instruments are appropriate (see here). Second, two-way fixed effects models have come in for criticism recently (see here and here). I'm not going to drag you into the technical details (read the links if you're interested). But suffice it to say, this will not be the last word on whether windfarms negatively impact bird biodiversity. However, the best quality study we have so far seems to suggest that they do.

Tuesday, 17 December 2024

Licensing of economists, and other fortune tellers

There are certain examples I use in my classes whose origins are shrouded in mystery. They likely come from some obscure note I wrote to myself after reading something online. One of those examples is that there are some states in the US that require fortune tellers to be licensed. [*] I use this as an example of the ridiculousness of occupational licensing, which in many circumstances serves no real purpose other than creating a barrier to entry into the market. After all, what harm could befall consumers from receiving the services of an unlicensed fortune teller that licensing would help to prevent?

It turns out that the fortune teller example is true. Here's the relevant website with links to the law, as well as this hilarious article, which asks the most relevant questions:

How cool would that be to have a fortune tellers license? But then I started to wonder how the licensing process would work. Is there a written examination? Do they hand you a blank piece of paper and expect you to divine the questions and then answer them? Is the test multiple choice or essay? Who grades the essays? Other fortune tellers – kind of like bar exam? Is there a road test? Is reading tea leaves or your palm akin to parallel parking?

I was in New Orleans last month. Walking along Bourbon Street, you see a lot of fortune tellers. I could tell the phony ones. They were the ones that beckoned me over. If they could tell the future, then surely they would have known that I wasn't going to walk over to them, no matter how enthusiastically they waved at me.

Anyway, if fortune tellers are licensed in Massachusetts, does that mean economists should need a licence? After all, economists are regularly asked to tell the future - what is going to happen to GDP, unemployment, interest rates, exchange rates, and so on? Whether economists should be licensed isn't a crazy question - there have been calls for that in the past (see here and here). And the consequences of bad fortune telling are likely to be as bad, or worse, when an economist gets it wrong as when a palm reader does. A real risk of harm is the reason that governments license doctors, dentists, and nurses. If there is a real risk of harm from people making poor financial decisions on the advice of economists (or other fortune tellers), maybe economists should be licensed after all?

[HT: Marginal Revolution]

*****

[*] Although, as it turns out, I have referred to licensing of fortune tellers before, with a relevant link (see here).

Monday, 16 December 2024

How university students and staff used and thought about generative AI in early 2023

Somehow, this report languished in my to-be-read pile for over a year. By Natasha Ziebell and Jemma Skeat (both University of Melbourne), it explores the relatively early use of generative AI by university students and staff, based on a small survey (110 research participants: 78 students and 32 academic staff) conducted in April-May 2023.

While the results are somewhat dated now, given the pace of change in generative AI and the ways that university students and staff are engaging with it, some aspects are still of interest. For example, Ziebell and Skeat found that while over 78 percent of academic staff had used generative AI to that point, only 52 percent of students had done so. I think many of us would be surprised that students were not more experimental in their early use of generative AI. On the other hand, perhaps they were simply reluctant to admit to having used it, given that this was a study undertaken by a university that may sanction students for the use of generative AI in assessment?

The other aspect of the paper that still warrants attention is the set of opportunities and challenges identified by the research participants, which still seems very current. In terms of opportunities:

There were a range of opportunities identified for using generative AI as a tool for teaching and learning:

• to generate study materials (e.g. revision materials, quiz/practice questions)

• to generate resources (e.g. as a teacher)

• to summarise material (e.g. coursework material, research papers)

• to generate information (e.g. similar to Wikipedia)

• to provide writing assistance (e.g. develop plans and outlines, rewording and refining text, editing)

• for learning support (e.g. explaining questions and difficult content, as an additional resource, ‘using it like a tutor’)

• as a research tool (e.g. potential for integrating generative AI with library search tools)

• as a high efficiency time-saving tool (e.g. to sort data, gather information, create materials)

• to encourage creative thinking, critical thinking and critical analysis (e.g. students generate work in an AI program and critique it)

I don't think we've moved on substantially from that list of opportunities, and if a similar survey were conducted now, many of the same opportunities would still be apparent. In terms of challenges:

The key challenges identified by respondents can be summarised according to the following categories:

- Reliability of generative AI (inaccurate information and references, difficulty fact checking, misinformation)

- Impact on learning (e.g. misusing generative AI, not understanding limitations of the technology)

- Impact on assessment (e.g. cheating, difficulty detecting plagiarism, assessment design)

- Academic integrity and authenticity (e.g. risk of plagiarism, collusion, academic misconduct)

- Trust and control (reliance on technology rather than human thinking, concerns about future advancements)

- Ethical concerns (e.g. copyright breaches, equitable access, impact on humanity)

Unfortunately, just as the opportunities remain very similar, we are still faced with many of the same challenges. In particular, universities have been fairly poor at addressing the impact on learning and assessment, and in my view there is a distinct 'head-in-the-sand' approach to issues of academic integrity and authenticity. Many universities seem unwilling to step back and reconsider whether the wholesale move to online and hybrid learning and assessment remains appropriate in an age of generative AI. The support available to academic staff who are on the frontline dealing with these issues is superficial.

However, academic integrity and authenticity of assessment are only issues if students are using generative AI tools in assessment. This report suggests that, in early 2023, only a minority of students were doing so. I don't think we can rely on that being the case anymore. One example from my ECONS101 class in B Trimester serves as an illustration.

This year (and for many prior years, going back to at least 2005), we've had weekly quizzes in ECONS101 (and before that, ECON100). This year's quizzes had 12 questions, generally consisting of ten multiple choice questions and two (often challenging) calculation questions. Each quiz is worth about one percent of students' grades in the paper, so they are fairly low stakes. Students have generally taken them seriously, and the median time to complete a quiz had been fairly stable at 15-20 minutes over the last few years - until B Trimester this year, when the median time to complete started at over 17 minutes but was down to 7 minutes by the end of the trimester. It isn't clear to me that it is possible to genuinely complete the 12 questions in 7 minutes. Around 16 percent of students completed the last Moodle quiz in four minutes or less. And it wasn't that those students were rushing the quiz and performing badly: the average score for students completing it in four minutes or less was 86 percent (only slightly below the 92 percent average for students who took longer than four minutes). I'm almost certain that the culprit was one of the (now several) browser extensions that will automatically answer Moodle quizzes using generative AI. Needless to say, this year sees the end of Moodle quizzes that contribute to grades in ECONS101.
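
For what it's worth, the kind of check described above is easy to replicate from a gradebook export. Here is a minimal sketch with toy data; the column names and values are my own assumptions, not Moodle's actual export format:

```python
import pandas as pd

# Toy quiz-attempt data (hypothetical columns, not Moodle's actual export)
attempts = pd.DataFrame({
    "student":   ["s1", "s2", "s3", "s4", "s5", "s6"],
    "minutes":   [18.0, 3.5, 12.0, 2.8, 16.5, 9.0],
    "score_pct": [90, 88, 95, 84, 92, 75],
})

# Flag attempts completed in four minutes or less
attempts["fast"] = attempts["minutes"] <= 4

# Compare average scores and counts between fast and other attempts; if
# the fast group scores nearly as well, speed is not costing accuracy,
# which is consistent with automated answering
summary = attempts.groupby("fast")["score_pct"].agg(["mean", "count"])
print(summary)
```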

Anyway, I digress. It would be really interesting to see this sort of study replicated in 2025, and with a decent sample size - it is hard to say much with a sample of 110, split across students and staff. I imagine that many of the same opportunities and challenges would be salient, but that the uses of generative AI have changed in the meantime, and that students would now be at least as prolific users of generative AI as staff.

[HT: The Conversation, last year]