You have probably seen the news that ChatGPT has passed law exams in Minnesota, MBA exams in Pennsylvania, or the US medical licensing exam. Business Insider even provided a list back in March of tests and exams that ChatGPT had been able to pass. High-school-level economics (or, rather, AP microeconomics and AP macroeconomics) was on the list. Up to now, though, university-level economics hadn't made the list.
Based on this article by Wayne Geerling, Dirk Mateer (both University of Texas at Austin), Jadrian Wooten (Virginia Polytechnic Institute), and Nikhil Damodaran (OP Jindal Global University), forthcoming in the journal The American Economist (open access), that is about to change. They tested ChatGPT using version 4 of the Test of Understanding of College Economics, a widely used multiple-choice test that covers both microeconomics and macroeconomics (technically, they are separate tests, but they can be completed together).
Geerling et al. inputted the multiple-choice questions into ChatGPT and coded whether its response was correct or not (if ChatGPT gave more than one answer, it was marked as incorrect). And ChatGPT did incredibly well, in line with the results from the other tests and exams noted above:
In our trial, ChatGPT answered 19 of 30 microeconomics questions correctly and 26 of 30 macroeconomics questions correctly, ranking in the 91st and 99th percentile, respectively.
Geerling et al. title their article: "ChatGPT has Aced the Test of Understanding in College Economics: Now What?". I'm not sure I would go so far as to say that ChatGPT 'aced' the test, as it did get several questions wrong (especially in microeconomics). However, no doubt ChatGPT will improve, and it would be interesting to see how GPT-4 would go, not least because it can handle visual inputs.
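It's worth noting that the percentile rankings are not the same thing as the share of questions answered correctly: the percentiles come from the test's norming data, while the raw scores translate into much more modest percentages. A minimal sketch of that arithmetic (the function name is mine, not from the article):

```python
def percent_correct(correct, total):
    """Share of questions answered correctly, as a percentage."""
    return 100 * correct / total

micro = percent_correct(19, 30)  # microeconomics: 19 of 30 correct
macro = percent_correct(26, 30)  # macroeconomics: 26 of 30 correct

print(f"Micro: {micro:.1f}%")  # 63.3% correct, yet 91st percentile
print(f"Macro: {macro:.1f}%")  # 86.7% correct, yet 99th percentile
```

So answering barely two-thirds of the microeconomics questions correctly was still enough to beat roughly nine out of ten students who have taken the test, which says as much about typical student performance as it does about ChatGPT.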
The question, "now what?" is important for teachers and lecturers everywhere. What do we do when ChatGPT can answer multiple-choice economics questions better than the average student? Geerling et al. offer only a few suggestions:
The emergence of ChatGPT has raised fears about widespread cheating on unproctored exams and other assignments. The short-term solution for many educators involves returning to in-person, proctored assessments...
Beyond this back to the future approach, there are other techniques that can be utilized in an online environment. Assessments that are time-constrained reward students who know the material, while others who do not know the material as well search their notes, ask their classmates, and seek answers through any means (including ChatGPT). The time spent searching means that they cannot complete as many questions, even if they are successful in obtaining the information...
One popular recommendation among the teaching community so far has been to produce ChatGPT responses with errors and have students work in small groups to identify and correct those errors. In essence, students are asked to “fact check” the system to ensure that the responses are accurate...
I think we're only just beginning to understand what is possible in this space, and how teachers and students interact with large language models is naturally going to evolve over time. The best uses of AI for teaching and learning probably haven't even been discovered yet. Moreover, as Geerling et al. note in their conclusion:
It is important to note that ChatGPT is not the only disruptive technology in education. The advent of artificial intelligence in education is a reality that cannot be ignored, and it is time to embrace the new era with innovative and effective assessment strategies.
Indeed.
[HT: Mark Johnston at the Econfix blog]