Tuesday, 29 April 2025

Walstad and Bosshardt on undergraduate GPAs in economics (and other subjects)

I've had this short 2019 article by William Walstad (University of Nebraska-Lincoln) and William Bosshardt (Florida Atlantic University), published in the journal American Economic Review Papers and Proceedings (sorry, I don't see an ungated version online), sitting in my 'to-be-read' pile for far too long (especially given how short it is!). Walstad and Bosshardt (incidentally, two of the top researchers in economics education) look at how GPAs differ across undergraduate majors, using data from the Baccalaureate and Beyond (B&B) project of the National Center for Education Statistics in the US. Their sample covers nearly 16,000 students who graduated in the 2007-08 academic year.

The results make for fascinating reading (albeit, as a snapshot of GPAs that is now over 15 years old). For starters:

The overall undergraduate GPA for all majors is 3.24, or between a B and B+ letter grade. The GPA for economics majors is only slightly below the average at 3.16.

...we also calculated an economics GPA for college graduates who completed a course or courses in economics. This economics GPA average is 2.9, or a B to B− grade.

It's not too surprising to learn that economics has a slightly lower GPA than other subjects, or that students who take one or more economics courses but don't major in economics have a lower GPA in economics than economics majors do. I'm sure that the results would be similar for other subjects (with students taking a few courses in a subject having a lower GPA in that subject than students majoring in that subject).

Walstad and Bosshardt then look at the factors associated with GPA:

The most striking finding is that prior achievement or measured ability in high school is highly associated with success in the undergraduate coursework...

The only other variable that appears to be a fairly consistent predictor of GPAs is age. The age effect is nonlinear, with the youngest college graduates having the highest GPA, but it declines with age and then eventually increases.

The correlation of GPA with prior academic achievement is not surprising. Students who do well at high school tend to do better in university as well, on average. The better performing students tend to be highly engaged and motivated, both at high school and university. However, the effect of age is more interesting. The youngest students (those aged under 22 years at graduation) have the highest GPA (of 3.36), and GPA declines with age (to 3.19 for those aged 22 or 23, and 3.02 for those aged 24 or 25), until the oldest group (those aged 26 years or older at graduation), where GPA jumps back up (to 3.29). It is likely that this reflects that students who take longer to get to graduation have lower grades, having failed one or more courses along the way. However, the oldest group will include many 'mature' students, who tend to be more focused on their studies and do better on average. The other variable that stands out as associated with GPAs is gender, with female students receiving a GPA that is 0.14 points higher, on average.

Next, Walstad and Bosshardt look at the factors associated with GPA by subject. Focusing on economics, the factors that are statistically significantly associated with GPA in economics are being aged 24 or 25 at graduation (which is associated with a GPA that is 0.22 points lower, on average), having a high school GPA of 3.5 or more (which is associated with a GPA that is 0.24 points higher, on average), verbal and math SAT scores (which are both associated with higher GPAs), and graduating from a baccalaureate or Masters granting institution, rather than a doctoral degree granting institution. The latter is consistent across all subjects, which suggests that grades are simply lower on average at doctoral degree granting institutions.

Walstad and Bosshardt, though, focus on the differences by gender, noting that:

Females earned significantly higher overall GPAs than males and in four subjects (biology, calculus, foreign languages, and psychology), but no significant difference is evident in three subjects (economics, business, engineering).

However, female students are less likely than male students to earn an A grade in their first economics course (and their first engineering course), which is not the case for any of the other subjects. Male students in economics get an A grade 3.1 percentage points more often than female students, so the effect is not large.

Given the known issues with grade inflation over time (see here and here), it would be interesting to know how things have changed since 2007-08, and especially whether the gender gap in economics achievement is still apparent. The B&B project does apparently have data for a cohort that graduated in 2015-16, so perhaps a follow-up project is forthcoming?

Sunday, 27 April 2025

Mexico's agave farmers learn the lessons of dynamic supply and demand

The Financial Times reported earlier this year (paywalled):

But in 2018, the tequila boom in the US presented Antonio, who requested we not use his real name, with an opportunity to get back into the fields and connect with his father. With the price of agave, the key ingredient in tequila, reaching record heights, everyone with a patch of land was rushing to plant the crop, or to sell their land to others keen to do so. As it peaked at some 30 pesos ($1.45) per kilogramme, doctors, dentists, and many others piled into the business. The number of registered agave growers rocketed from 3,180 in 2014 to 41,000 in 2023. For several years, the region was abuzz with a sense of possibility, even among those without land to grow on. Opportunistic investment companies set up crypto-esque trading websites encouraging Tapatíos, people local to the area, to place bets that the price of agave would keep rising...

A couple of years after he planted his crops, Antonio secured a contract with a tequila producer promising to buy his plants. The deal gave him the confidence to plant more, but did not include any kind of price protection. In 2022, when his first crops were still a couple of years from maturity, he started to hear about falling prices. Within two years the spot price had plummeted to between 1 and 3 pesos per kg. “We started to plant all excited, making the investment when things were good without really knowing that it’s all cyclical,” he says.

Stories like Antonio’s are now crystallised into tequila industry lore: the hapless middle-class professionals who helped fuel the agave oversupply crisis that is now rocking Jalisco.

In my ECONS101 class, we teach a model of dynamic supply and demand that explains fluctuations in market prices such as those that the Mexican agave farmers have been experiencing. It isn't all bad news. As you will see, the farmers who can ride out the low prices and profits will likely find themselves in a period of higher prices and profits before too long.

Consider the market for agave, and assume that it is perfectly competitive - most importantly, there are no barriers to entry into the market or barriers to exit from the market. The market for agave is shown in the diagram on the left below. The diagram on the right will track changes in agave farmers' profits over time. Initially (at Time 0) the market is at equilibrium (where demand D0 meets supply S0) with price P0, and agave farmers are making profits π0. Now say there is a permanent increase in demand at Time 1, to D1. This increase in demand may be because of an increase in the production of tequila (as I noted in this post earlier this month). Prices increase to P1, and agave farmer profits also increase (to π1). There are no barriers to entry (this is a perfectly competitive market), so the higher profits encourage new farmers to enter this market (like Antonio). Supply increases to S2 (more producers) at Time 2. Price falls to P2, and agave farmer profits also fall (to π2). This is the situation that the Financial Times article describes.

What happens next? At Time 2 profits are low and some agave farmers will choose to exit the market (no barriers to exit because this is a perfectly competitive market). Supply will decrease to S3 (fewer producers) at Time 3. Price will increase to P3, and farmer profits will increase to π3. So, as I noted above, provided the agave farmers can ride out the low prices and profits, the market will recover as other farmers drop out of the market.
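For readers who want to see the mechanics, here is a minimal sketch in Python (my own illustration, not anything from the Financial Times article or my ECONS101 materials). It assumes a made-up linear demand curve, one unit of agave per farmer, and an entry rule in which the number of farmers responds strongly to current profits, standing in for the long lag between planting decisions and harvests:

    # A toy simulation of the entry/exit cycle described above.
    # All numbers are made up; demand is linear and each farmer supplies one unit.

    def price(quantity, intercept, slope=1.0):
        """Inverse demand: the price falls as the total quantity supplied rises."""
        return max(intercept - slope * quantity, 0.0)

    def simulate(periods=6, cost=40.0, entry_rate=1.5):
        farmers = 60.0                     # Time 0: the market starts at equilibrium
        intercept = 100.0                  # demand curve D0
        for t in range(periods):
            if t == 1:
                intercept = 140.0          # Time 1: permanent increase in demand (D0 to D1)
            p = price(farmers, intercept)
            profit = p - cost              # profit per farmer (one unit of agave each)
            print(f"Time {t}: price = {p:5.1f}, farmers = {farmers:5.1f}, profit = {profit:6.1f}")
            # Free entry and exit: profits attract new farmers, losses drive them out.
            # A strong response (large entry_rate) produces the overshooting boom and bust.
            farmers += entry_rate * profit

    simulate()

Running it, the price jumps when demand increases, collapses once the wave of new farmers arrives, and then partially recovers as some of them exit, mirroring the P0 to P3 sequence described above.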

The problem for agave farmers like Antonio is that this was foreseeable. When prices and profits are high, and lots of farmers are moving into the market, that is not a good time to invest in an agave farm. The increase in supply is going to lead to lower prices and profits in the future. This is made even worse in this case because, as the FT article notes:

Although tequila remains the world’s fastest-growing spirit, the peak growth is over, and drinkers have been cutting back on boozing. That was already particularly true in the US, tequila’s largest export market, before President Donald Trump proposed launching a trade war. While large producers with long-held relationships with the tequila houses are able to ride out the cycle, farmers without solid contracts are now desperately trying to offload their agave in a saturated market. 

Mexico's tequila lake is compounding the cycle of low prices and profits for Mexican agave farmers. As I noted in the tequila lake post, the price of tequila will fall, and less tequila will be produced. That means that the prospects for agave farmers are even worse than portrayed in the market diagram above, because the demand for agave isn't going to stay high at D1, but will be decreasing back towards D0. That means even lower prices and profits for agave farmers.

The Financial Times wants us to feel sorry for the agave farmers like Antonio. But honestly, they should have done some due diligence. The clever business strategy, when faced with a market that is heading into a cycle like this, is the 'hit and run' strategy. It is counterintuitive, but it says that when prices and profits are high, that is a good time to get out of the market. Forget selling agave; agave farms themselves can be sold for a high price at that point in the cycle. The time to get into the market is when prices and profits are low, because the price to buy an agave farm will be much lower. Recognising that this market is perfectly competitive is important here, as is recognising what is happening in the market around you. If Antonio had looked around and realised that lots of other farmers were getting into agave farming, that should have curbed his excitement. Hopefully, the farmers (and others) have now learned this lesson of dynamic supply and demand.

Read more:

Saturday, 26 April 2025

The increase in methamphetamine use in New Zealand has been driven more by supply than demand

The New Zealand Herald reported last month:

Prime Minister Christopher Luxon has asked his justice and police ministers to look at what more can be done to tackle methamphetamine use in New Zealand, which has nearly doubled in two years.

Police data shows an “unprecedented 96% increase in meth consumption when compared to 2023, with consumption increasing across all sites”...

The police report said the spike in methamphetamine use likely resulted from an increase in both supply and demand, along with a decrease in street-level pricing.

The changes in the market for methamphetamine described in the Police report are illustrated in the diagram below. In 2023, the market operated in equilibrium where the demand curve D0 intersects the supply curve S0. The equilibrium price (the street price of methamphetamine) was P0 and the equilibrium quantity of methamphetamine traded (and consumed) was Q0. Between 2023 and 2025, there was an increase in supply of methamphetamine (from S0 to S1) and an increase in demand for methamphetamine (from D0 to D1). The equilibrium quantity of methamphetamine consumed increased from Q0 to Q1 (an "unprecedented 96% increase" according to the article). The equilibrium price of methamphetamine decreased from P0 to P1 (a "decrease in street-level pricing" according to the article).

Ordinarily, when we see an increase in both supply and demand in a market, the increase in the equilibrium quantity is certain, but the change in equilibrium price is ambiguous. That's because an increase in demand causes an increase in the equilibrium price (ceteris paribus), while an increase in supply causes a decrease in the equilibrium price (ceteris paribus). In this case, the decrease in the street-level (equilibrium) price of methamphetamine tells us that the increase in supply of methamphetamine must have been larger than the increase in the demand for methamphetamine. So, the increase in methamphetamine use has been caused by both increases, but the supply side of the market is having a larger effect than the demand side.
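A quick numerical illustration of that reasoning, using made-up linear demand and supply curves in Python (nothing here is estimated from the Police data): shifting both curves to the right always raises the equilibrium quantity, but the price only falls if the supply shift is the larger of the two.

    # Illustrative only: made-up linear curves, not actual market data.
    # Demand: Qd = a - b*P, Supply: Qs = c + d*P. Shifting a curve right raises a or c.

    def equilibrium(a, b, c, d):
        """Solve a - b*P = c + d*P for the market-clearing price and quantity."""
        p = (a - c) / (b + d)
        q = a - b * p
        return p, q

    a, b, c, d = 100.0, 1.0, 10.0, 1.0

    p0, q0 = equilibrium(a, b, c, d)              # original equilibrium (2023)
    p1, q1 = equilibrium(a + 10, b, c + 40, d)    # demand +10, supply +40 (supply shift larger)
    p2, q2 = equilibrium(a + 40, b, c + 10, d)    # demand +40, supply +10 (demand shift larger)

    print(f"Initial:             P = {p0:.1f}, Q = {q0:.1f}")
    print(f"Supply shift larger: P = {p1:.1f}, Q = {q1:.1f} (price falls, quantity rises)")
    print(f"Demand shift larger: P = {p2:.1f}, Q = {q2:.1f} (price rises, quantity rises)")

The observed combination of a higher quantity and a lower street price is only consistent with the first case, which is the point made above.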

Now, that doesn't mean that police should be targeting the supply side of the market. As I noted in this 2016 post, in the long run it is likely to be more effective to focus on the demand side, rather than the supply side, to reduce drug use. And that's where the focus should be now.

Read more:

Friday, 25 April 2025

This week in research #72

Here's what caught my eye in research over the past week:

  • Bauer, Lakdawalla, and Reif develop a theoretical model for valuing health and longevity improvements, and show in calibrated simulation results that sick adults are willing to pay nearly twice as much per quality-adjusted life-year (QALY) to reduce mortality risk as healthy adults, and that reducing the risk of serious illness is valued similarly to reducing the risk of mild illness

In some exciting news, the latest issue of Australasian Journal of Regional Studies (AJRS) has just been published (although it is backdated to December 2024, as for the second year in a row we faced some unexpected issues with completing the issue). This issue has four papers, as well as the editorial:

  • Shakir uses a microsimulation model to analyse the impact of two housing programmes that aim to help low- and moderate-income families into homeownership in Australia, finding that the “First home guarantee scheme” (FHGS) increases rates of home ownership by more than the “Help to buy scheme” (HTBS), and attributes the difference to the focus of the HTBS on younger and lower-income households (this paper won the John Dickinson Memorial Award for the best paper published in AJRS in 2024)
  • Mangioni et al. look at how residential land values and housing prices across regional Australia have changed over the past five years, and thematically analyse submissions to the NSW Regional Housing Taskforce 2022, identifying that a shift towards lifestyle living and second dwelling ownership, and a change in workforce demand have increased the demand for regional housing
  • Sarkar and Tigga use bootstrap data envelopment analysis to evaluate the efficiency of health expenditures in improving child mortality outcomes across 127 low- and middle-income countries (LMICs), and find that 45 percent of LMICs exhibit decreasing returns to scale, meaning that increases in health inputs will generate less than proportionate reductions in child mortality
  • Nguyen looks at the impact of the COVID-19 pandemic on the financial services sector across four regions of the US, and finds that before the pandemic, labour determined the revenue differences between regions, while during the pandemic, local, state, and federal taxes played a greater role

Tuesday, 22 April 2025

The gender gaps in academia may not arise entirely from gender biases

I've written a lot of posts about the gender gap in academia, in economics and in other (mostly STEM) disciplines (see the list at the end of this post). However, this 2023 review article by Stephen Ceci (Cornell University), Shulamit Kahn (Boston University), and Wendy Williams (Cornell University), published in the journal Psychological Science in the Public Interest (open access), suggests that the gender gap may not be as substantial as previously believed (including by me). This article is quite credible, having arisen as an 'adversarial collaboration', meaning a collaboration between researchers who previously disagreed on the key conclusions from the literature. As Ceci et al. explain:

This article represents more than 4.5 years of effort by its three authors. By the time readers finish it, some may assume that the authors were in agreement about the nature and prevalence of gender bias from the start. However, this is definitely not the case. Rather, we are collegial adversaries who, during the 4.5 years that we worked on this article, continually challenged each other, modified or deleted text that we disagreed with, and often pushed the article in different directions...

Kahn has a long history of revealing gender inequities in her field of economics, and her work runs counter to Ceci and Williams’s claims of gender fairness.

Ceci et al. focus on seven questions of relevance to understanding the gender gap in academia:

In this article, we comprehensively examine evidence in six key evaluation contexts: (a) Are similarly accomplished women and men treated differently by academic hiring committees? (b) Are grant reviewers biased against female PIs? (c) Are journal reviewers biased against female authors? (d) Are recommendation-letter writers biased against female applicants for tenure-track positions? (e) Are faculty salaries biased against women? And, (f) are student teaching evaluations biased against female instructors? Claims of gender bias are omnipresent in all six of these domains... (We also review the literature in a seventh context, gender differences in publication rates, because publishing productivity can moderate evaluation in most of these six contexts.)

Ceci et al. focus on research published since 2000, which is more likely to represent the 'current' state of academia (although, arguably, they should weight more recent studies more heavily, which they don't). They also distinguish between:

...the most mathematically intensive fields—geosciences, engineering, economics, mathematics/computer science, and physical science (GEMP)—and less math-intensive fields—life sciences, psychology, and social sciences (LPS).

It is generally claimed that gender gaps are more prevalent in the GEMP fields than in the LPS fields (and a quick read through the links at the end of this post would suggest that is definitely true of economics, as one example).

The article is very thorough and has a lot of detail related to each of those contexts, so I'm just going to hit the headlines in relation to each of the seven research questions. If you are interested in any particular finding, the article is open access, so you can easily look at their review and unpack the details there. In relation to the first research question (are similarly accomplished women and men treated differently by academic hiring committees?), Ceci et al. conclude that:

The vast majority of findings—from (a) synthetic cohort analysis, (b) institutional hiring records, and (c) experiments—indicate that women are less likely than men to apply for tenure-track jobs, but when they do apply, they receive offers at an equal or higher rate than men do.

So, the news is both good and bad. On the positive side, there is little evidence of bias. However, where a gender gap in hiring persists, and is not because of bias in hiring, the gender gap must arise from differences in the rates of applying for academic positions between men and women. Indeed, Ceci et al. note that:

...women are more likely than men to give up their initial aspirations to become tenure-track professors while in graduate school, a finding primarily true of women with children or contemplating children. Undoubtedly, broad systemic factors are partly responsible, along with biological factors, for these women not applying for tenure-track positions.

That still suggests that there is further work to do (both in terms of research and in terms of addressing the problem), but this particular article skirts around that issue because it focuses on the gender biases for academics, and not the graduate student experience. In relation to the second research question (are grant reviewers biased against female principal investigators (PIs)?), Ceci et al. conclude that:

...pre-2006 evidence suggests that although some agencies evaluated men and women differently, on average they did not.

Ceci et al. then conduct their own meta-analysis of the literature since 2000 (including 39 studies), and conclude that:

Taken together, both the analytic dissection and our meta-analyses appear not to support the claim that the grant peer-review process has been rigged against women PIs during the past 20 years in the United States. This is particularly true when analyses controlled for PIs’ research productivity...

On the third research question (are journal reviewers biased against female authors?), Ceci et al. conduct both a review and meta-analysis and conclude that:

...overall, our meta-analyses and our dissection of key studies revealed no evidence of systematic bias against female authors, notwithstanding claims to the contrary.

For the fourth research question (are recommendation-letter writers biased against female applicants for tenure-track positions?), Ceci et al. conclude that:

On the basis of our analysis of the nine studies in this domain, we conclude that no persuasive evidence exists for the claim of antifemale bias in academic letters of recommendation.

In relation to the fifth research question (are faculty salaries biased against women?), Ceci et al. conclude that:

...the evidence supports the claim that women are paid less than men in tenure-track academia, although the magnitude of the gap is much smaller (60%–80% smaller) than often claimed in executive summaries and headlines, and in some situations has disappeared.

Ceci et al. also dig a bit deeper on the salary differences, noting that:

Some of the unexplained gender salary gap may be due to implicit bias (although this seems unlikely in biology, where starting salaries are higher for women), and some of it may be due to differences in willingness to negotiate and solicit outside offers... some of the remaining pay gap may be due to women’s work discontinuities for family leave... or to a desire to keep jobs flexible... Finally, some of the relatively small remaining pay gap may be due to women’s lower likelihood of negotiating higher salaries or their lower likelihood of pursuing more lucrative job offers. The lower likelihood of negotiating higher salaries may itself be due to bias... Without specific data on family leaves, past employment, and job pursuit, it is impossible to know how much, if any, of the less than 4% unexplained pay gap is attributable to bias.

The salary gap of 4% is small, but it is not zero. The fact that much of the gap can be explained by the factors outlined above would accord with research by 2023 Nobel Prize winner Claudia Goldin (whose work they cite, among others). However, that leaves open the question of how large the salary gap would be after accounting for work discontinuities, preferences for flexibility, and negotiation. Again, a research question to be addressed in the future.

On the sixth research question (are student teaching evaluations biased against female instructors?), Ceci et al. conclude that:

...the evidence supports the claim that female instructors are penalized for being women, independent of the content and delivery of their lectures and independent of students’ actual learning. The effect sizes we calculated indicate penalties for women that ranged between small and moderately large (ds = 0.10–0.50). So, unlike the domains in which we were able to unequivocally reject claims of widespread gender bias, in this domain, we conclude that there is gender bias.

This is consistent with many research findings on student evaluations of teaching, including studies I have blogged about before (most recently here, and see the other links at the end of that post for more). Unfortunately, this seems to be a pervasive finding across all teaching contexts, and it doesn't appear to be getting any better.

Finally, in relation to the additional research context (gender differences in research productivity), Ceci et al. conclude that male researchers do have higher research productivity (more publications), and that:

...gender productivity differences are smallest in GEMP fields (with the exception of economics) and are largest (and possibly growing) in biology, psychology, and economics.

The overall takeaway from this work is that there are still gender gaps in academia, but that many of the gaps, or claimed gaps, don't seem to arise from gender bias. At least, that's what we should conclude from the research to date. That is true of all of the domains except salaries and teaching evaluations. None of this means that we should conclude that all is rosy for female academics (unlike the title of this post), and that is certainly not the case across all fields. Indeed, some of the changes in recent years might actually make things worse for female academics before they get better (as noted in yesterday's post). We still have some way to go, especially in the more technical fields, including STEM and economics.

[HT: Marginal Revolution, back in 2023]

Read more:

Monday, 21 April 2025

Co-authorship in economics in the aftermath of #MeToo

The #MeToo movement was a necessary corrective action recognising decades of toxic behaviour across many occupations. Economics was not immune (for example, see here or here). However, could the #MeToo movement have had an unintended consequence on the careers of female economists? If having a female co-author increases the chances of a male economist being called out for even minor indiscretions, does this meaningfully raise the cost of having female co-authors? And if the cost of having female co-authors meaningfully increases, we would expect to see fewer male-female collaborations (especially where the male economist is more senior).

That is the topic addressed in this new article by Noriko Amano-Patiño, Elisa Faraglia, and Chryssi Giannitsarou (all Cambridge University), published in the journal European Economic Review (open access). They use data on co-authorships in the nearly 27,000 working papers published in the NBER and CEPR working paper series between January 2004 and December 2020. They first note that:

The MeToo movement’s impact on the economics profession may have fostered a more respectful research environment, increased scrutiny of existing practices, and promoted greater diversity and inclusivity within the community. Conversely, it could have induced a chilling effect on collaborations, potentially causing researchers to become more hesitant in forming partnerships outside their established networks due to heightened concerns about trust and reputational risk.

Although the #MeToo movement started in 2017, Amano-Patiño et al. use the second quarter of 2018 as the effective date for the onset within the economics profession (that dates to the fallout arising from a series of studies, including a particularly notable study by Alice Wu, which I blogged about here). However, Amano-Patiño et al. vary the effective start date and find little difference in their results. So, comparing papers written before and after 2018, and controlling for a variety of author characteristics, Amano-Patiño et al. find that there was:

...a rise in the proportion of women coauthors for men, both overall and within junior and senior subgroups. Conversely, we find a decrease in the proportion of women coauthors for women, both overall and within corresponding seniority levels. Using a back-of-the-envelope calculation, these increases in mixed-gender collaborations, translate to an estimated 12.3% increase in women coauthors per 100 men-authored papers.

That seems to go against what we might expect, if the cost of having female co-authors has increased for male economists after #MeToo. However:

...we estimate decreases in the proportion of senior coauthors (especially senior women) for juniors, and symmetrically, in the proportion of junior coauthors (particularly junior women) for seniors. The decreases in collaborations between senior and junior economists we quantify, suggest a 3.0% decrease in the share of senior authors collaborating with junior coauthors.

 Amano-Patiño et al. interpret this as showing that:

...post-MeToo, authors have increasingly sorted their collaborations by seniority rather than by gender.

What might explain these findings? Researchers who are worried that they might get called out by female co-authors might respond by reducing their collaborations with female co-authors generally, as I noted at the start of this post. Or, they might reduce their collaborations with new co-authors, who they have not developed trust with, while continuing to collaborate with more senior authors that they trust. This is also consistent with Amano-Patiño et al.'s further results, where they note that:

...we find evidence of a general chilling effect on the expansion of economists’ professional networks. We estimate decreases in the share of new coauthors across all seniorities and genders, the share of new senior coauthors for juniors, and the share of new junior coauthors for seniors. Our estimates translate into 5.4% fewer new coauthorships per 100 papers. This trend is primarily driven by a substantial decrease in new coauthorships between senior and junior authors: for seniors, the share of new junior coauthors has dropped by 18.4%, with a particularly sharp 48% decrease in their share of new junior women coauthors.

Amano-Patiño et al. interpret their results as bad news, noting that if the results can be interpreted as causal:

First, authors may have prioritised increasing gender diversity in their collaborations. Second, senior authors have increasingly relied on their existing collaboration networks rather than forming new coauthorships. The latter trend, if persistent, could have long-lasting consequences for the career development of women economists and potentially exacerbate the already ‘leaky’ pipeline in the profession.

Amano-Patiño et al. stop short of noting that this is a substantial negative unintended consequence of the #MeToo movement in economics. Although the environment for female economists may be improving, at least one aspect, the opportunities for collaboration with and mentoring from senior economists, appears to be declining. And that will be a difficult problem to address. Indeed, Amano-Patiño et al. aren't able to offer any concrete steps that could be implemented to solve this issue, concluding with some more general statements:

These results underscore the urgent need for sustained efforts to cultivate a supportive ecosystem and dismantle systemic barriers hindering the advancement of women and junior economists in the field. The economics profession must proactively continue to foster a safe, inclusive environment by evaluating, monitoring, and educating on relevant issues.

Sadly, I also can't offer anything concrete, and only hope that the current desire for change within the profession will ultimately lead to greater opportunities for female economists overall.

Sunday, 20 April 2025

Grumpy young economists

Academic writing has changed over time, as I noted in this post back in 2022. The research I referred to in that post identified an increase in the use of adjectives and adverbs, noting that as a result, research was becoming less readable over time. That research speculated on the reasons why writing had become less readable, but one explanation that the authors didn't consider was that different generations of academics might express themselves in different ways.

And that is essentially what this 2024 article by Lea-Rachel Kosnik (University of Missouri-St. Louis) and Daniel Hamermesh (University of Texas at Austin), published in the Southern Economic Journal (ungated earlier version here), sets out to look at. Kosnik and Hamermesh look at a sample of all 15,138 articles published in the 'Top 5' economics journals between 1969 and 2018 (the 'Top 5' journals are American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies). Once they restrict the sample to authors with at least five articles, the sample reduces to 1389 researchers (and 12,812 articles).

Kosnik and Hamermesh then apply sentiment analysis to the articles in the sample, resulting in three scores:

...a positive/negative score (POSN), a certain/tentative score (CERT), and a contemporaneity/past score (CONP).

The POSN score reflects the (positive or negative) emotive tone of the writing, CERT measures how certain or tentative the writing is, and CONP measures whether the writing is contemporary or focused on the past. The measures are normalised by subtracting the average score for all articles in the same field of economics (the same JEL group). Kosnik and Hamermesh look at how these normalised measures vary systematically across the sample of authors and over time, paying particular attention to how the measures are related to the number of years since each researcher completed their PhD. They find that:

Based on the fixed-effects estimates for the entire sample (the 1970s cohort), a one standard-deviation increase in age leads to changes of 0.07 (0.02), -0.03 (-0.01), and -0.05 (-0.03) standard deviations in POSN, CERT, and CONP, respectively.

In other words, older economists write in a more positive emotive tone. However, the effects for CERT and CONP are not statistically significant. Kosnik and Hamermesh then pivot to looking at the square of each normalised measure, contending that it represents the deviation from the norms. It isn't clear to me why they consider that 'more negative' and 'more positive' deviations from the norm should be treated identically, and so I don't find that analysis particularly illuminating. It seems like an arbitrary approach, and they note in a footnote that using the absolute value of the normalised measure rather than its square makes the results less statistically significant. That should also give us pause.
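To make the normalisation (and the squared versus absolute-value deviation measures) concrete, here is a small Python sketch with made-up sentiment scores; none of this is the authors' actual data or code:

    import pandas as pd

    # Made-up example scores, not from Kosnik and Hamermesh's sample.
    articles = pd.DataFrame({
        "jel_group": ["C", "C", "C", "J", "J", "J"],
        "posn":      [0.2, -0.1, 0.4, 0.5, 0.3, 0.1],   # raw positive/negative tone scores
    })

    # Normalise by subtracting the average score for all articles in the same field
    field_mean = articles.groupby("jel_group")["posn"].transform("mean")
    articles["posn_norm"] = articles["posn"] - field_mean

    # The two 'deviation from the norm' measures discussed above
    articles["posn_sq"] = articles["posn_norm"] ** 2       # squared deviation (their main measure)
    articles["posn_abs"] = articles["posn_norm"].abs()     # absolute deviation (robustness check)

    print(articles)

Both of the last two columns treat positive and negative departures from the field norm symmetrically, which is precisely why it is hard to know what a coefficient on them is really telling us.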

However, the basic analysis does provide some other points of interest, including:

Natives write less positively, with less certainty, and with less present/future orientation than do leading economists whose mother tongue is not English. This is true, however, only for those native English-speakers who grew up in North America (57% of authors) or the United Kingdom (5% of authors), whose styles of writing economics are almost identical along the three measures we examine. The styles of the 2% of authors whose native English comes from elsewhere (Ireland, South Africa, Australia, or New Zealand), however, do not differ from those of non-native speakers.

And:

There are also significant differences across the five journals, with all of them being more positive and more contemporary-oriented than the AER, and all but the QJE being written in a more certain voice than the AER...

Clearly, the most dismal scientists get published in the American Economic Review. And:

Additional coauthors, however, do make writing styles more positive, more certain, and less present-oriented, both in the full sample and in the 1970s cohort.

Are sole authors more negative because they have to do all of the work themselves, I wonder? Anyway, Kosnik and Hamermesh then turn to looking at citations, finding that:

While positive deviations of all three measures of sentiment reduce citations significantly or nearly so, the more important question is how large these reductions are. Taking simultaneous one-standard deviation increases in sentiment scores... these increases reduce citations by 5 (2.5)%, or 0.015 (0.01) standard deviations. Writing in a more positive, more certain, or more present-oriented way than others publishing at the same time and in the same sub-field reduces the scholarly impact of one's articles, although the effects are quite small.

Decomposing the change in citations as authors get older, Kosnik and Hamermesh find that:

Scholarly recognition decreases with author's age, but only a small part of the decrease is due to changes in writing style with age.

Finally, Kosnik and Hamermesh look at the subset of Nobel Prize winners, and find that:

Nobelists' style exhibits significantly less certainty than that of other star authors. This example suggests that writing in a more tentative style distinguishes one's scholarship and might provide the scope for subsequent researchers to accord it the attention that helps to generate the distinction of a Nobel Prize.

What do we take away from all this? There is a lot of depth in the analysis, but when we put aside the analyses that rely on the squared measure (which, as I noted above, I don't have as much faith in), it seems that the only remaining result (in terms of age) is that older economists write in a more positive tone than younger economists. Fortunately, the impact of tone on research impact (as measured by citations) is fairly small, so I guess younger economists can afford to be grumpy.

Coming back to where I started this post, what does that imply for changing writing styles over time? If younger economists write in a more negative tone, then as the population (including the population of economists) ages, we might see more positively minded economics writing! Now, the question arises, do younger researchers in other disciplines also write in a more negative tone than older researchers?

[HT: Marginal Revolution, back in 2023]

Saturday, 19 April 2025

The 'rising stars' of economics education (and yes, it includes me!)

I've been meaning to post about this working paper by Wayne Geerling, Dirk Mateer (both University of Texas at Austin), and Jadrian Wooten (Virginia Polytechnic Institute and State University) for a while (it was highlighted in a 'this week in research' post back in November, and I read it then). The paper analyses citation data from articles on economics education published in economics journals between 2019 and 2023. It then ranks authors based on citation counts and i10-index (the number of published articles with ten or more citations).

The top 50 authors are shown in Table 1 in the paper. The top four will not come as a surprise to anyone familiar with the literature: William Walstad, Sam Allgood, William Becker, and KimMarie McGoldrick. However, if you scroll a little further down the list, there I am, ranked at #27, with 161 citations and an i10-index of three. And if you look carefully at the list, you'll see that almost all of the names above me are from US institutions. So, I am ranked #5 among non-US-based economics education authors! Sadly, I don't make it onto the list of the top authors based on i10-index, which they cut off at four. However, this is still welcome recognition of my research on economics education.

However, it is worth noting that my performance is almost entirely driven by this one article (ungated earlier version here). That article on financial literacy among high school students was co-authored with Richard Calderwood, Ashleigh Cox, Steven Lim, and Michio Yamaoka, all of whom also make it onto the authors list in joint 30th place, with the majority of their 145 citations coming from that one article, which has 118 (the rest of their citations will be from this other article (ungated) co-authored with me, from the same project). The article on financial literacy among high school students is actually the fifth most cited in the whole sample used by Geerling et al.

I have a number of economics education projects on the go. Most of them involve data drawn from my ECONS101 classes, where I am always trying (and evaluating) something new. This trimester, we've been trialling the use of a generative AI tutor, Harriet, which I posted about at the start of the year. We've also given students the opportunity to do practice multiple choice questions every day ('Question of the Day') on Moodle. Both of those initiatives will be evaluated in terms of their contribution to student performance in assessment. However, even having done the evaluation, that doesn't guarantee that I find the time to write the analysis up as a paper. But if I want to retain my ranking in Geerling et al.'s list, then I will have to make a more concerted effort to get my economics education research published!

[HT: Wayne Geerling]

Friday, 18 April 2025

This week in research #71

Here's what caught my eye in research over the past week:

  • Bertola and Lo Prete (open access) find large propensities to guess in Italian data from a survey on financial literacy and resilience, and show that truly financially literate respondents are more likely than those who guessed and randomly picked the correct answers in the financial literacy test to make ends meet at the end of the month and to cope with unexpected expenses
  • Feyzollahi and Rafizadeh investigate the likely use of large language models in the writing of papers published in 25 top economics journals, and find a 4.76 percentage point increase in LLM-associated terms during 2023–2024, and that the effect more than doubles from 2.85 percentage points in 2023 to 6.67 percentage points in 2024, suggesting rapid integration of language models in economics research (are you also wondering if their paper was partly written by an LLM?)
  • Hatton (open access) presents new data on the voyage times and travel costs for emigrants from the UK traveling to the US and to Australia from 1850 to 1913, showing that the voyage time from Liverpool to New York fell from 38 days to just 8 days (or 79 per cent), and the voyage time to Sydney fell from 105 days to 46 days (or 56 per cent)
  • Brakman, Kohl, and van Marrewijk (open access) use a gravity model to link long-run changes of the demographic dividend to geographical changes in world trade for the 21st century, and show that, compared to the current situation, North America and Europe will no longer be the centre of global trade in 2100 due to their aging populations, while South Asia and Sub-Saharan Africa will experience a substantial increase in their share of world trade, and China will experience a substantial decrease
  • De Haro investigates whether declining drug revenues in Mexico incentivised cartels to target the avocado sector, and finds that the decline in the demand for heroin increased homicide rates, including those of agricultural workers, as well as truckload thefts in avocado-growing municipalities
  • Chenarides et al. find that dollar store presence in a county corresponds with higher employment levels within the general merchandise retail sector, and decreases in average weekly earnings (although these reverse in urban areas over time)

Thursday, 17 April 2025

What's new in regional and urban economics?

The journal that I edit, the Australasian Journal of Regional Studies, is about to release its latest issue (more on that in a future post). That makes it timely to think about what's new in regional and urban economics. Actually, it's probably always a good time to think about what's new, but this is a particularly useful time because we can rely on this new NBER Working Paper (ungated here) by Ran Abramitzky (Stanford University), Leah Boustan (Princeton University), and Adam Storeygard (Tufts University).

The paper mainly covers new data sources that have come into more regular use in recent years, and provides a good survey of the literature that has developed using each source. Abramitzky et al. also identify some new use cases for some of the data sources, which points to new research directions or extensions of existing work. For new (or experienced) researchers looking for inspiration for their next research project in regional and urban economics, this paper is a good one to read.

To save you a little bit of time though, here are some of the key data sources that Abramitzky et al. discuss (some of which I have grouped together differently than they do). The first is historical (US) Census records:

The US Census is far from a “new” data source, having provided the backbone of empirical research in urban economics and other applied fields for decades. Yet advances in record linkage have allowed researchers to convert (historical) census data into large panel datasets that follow individuals over time. This longitudinal data opens up a set of new research questions on spatial topics, including the determinants of geographic mobility, the long-run effect of childhood exposure to environmental conditions or economic shocks, and the causes and consequences of neighborhood change within cities.

Complete census records, including an individual’s name and detailed location information, becomes available to the public 72 years after the Census is taken; the 1950 Census was just released in 2022.

Sadly, this is not a data source that is available for many countries (including New Zealand, where Census unit records prior to 1966 were destroyed). However, the ability to link people over long periods of time (including between generations) has opened up a wealth of new research questions. Second, Abramitzky et al. discuss digitised historical maps and directories:

Digital spatial data in Geographic Information Systems (GIS) is indispensable for a variety of modern urban applications but, until recently, historical maps were not compatible with this tool. In recent years, economic historians and other social scientists have digitized a wide range of historical maps, including census geography, and environmental and land management maps. These efforts have opened up study of historical neighborhoods and the effects of proximity to relevant geographic features like administrative boundaries, industrial sites, religious and cultural institutions and the epicenters of natural disasters.

I have a project in progress (which is, unfortunately, somewhat stalled due to non-map-related data issues) that has made use of digitised boundary maps for the electoral boundaries from past New Zealand elections (more on that in a future post, if that project ever gets re-started). However, the key point is that there is a wealth of information stored in historical maps and archives that are currently underutilised. On a related note, Abramitzky et al. note that:

Beyond mapping the location of households or firms, GIS is also useful for reconstructing historical transportation infrastructure via waterways, roads or railroads.

Given that past infrastructure patterns, including transportation and other networks, affect the patterns observed today, these seem like important sources. Third, Abramitzky et al. talk about a range of remote sensing data, including night lights (from satellite imagery), and physical attributes like air quality, weather variables, and building footprints and heights. These data are often available at small spatial resolutions, allowing analysis at very fine-grained spatial levels. However, it is worth reading the paper (and the references) carefully, as they also identify issues to be aware of with remote sensing data.

Fourth, Abramitzky et al. very briefly discuss picture and video data, including Google Streetview and CCTV camera data. There are definitely some interesting use cases for these atypical data sources, and you can expect to see a growing use of them in future research. Fifth, Abramitzky et al. discuss mobile phone or smartphone data, including data derived from particular smartphone apps:

Mobile phones provide information about the location of the people who use them, and sometimes the vehicles they drive. Broadly, there are two kinds of cell phone data. Call data records (CDRs), provided by network operators, report the location of the phone at the time a call was made or received, as triangulated from the network of cell towers. In some cases, the counterparty to the call can also be identified...

For research purposes, CDRs have been mostly replaced by data from smartphones, whose apps collect more accurate GPS-based locations at all times (regardless of whether a call is placed)...

Researchers have used location data from individual apps with which they have developed relationships. Most prominently, Uber has provided data on its trips to several groups of researchers...

Similarly, they discuss transportation data derived from vehicle location trackers, transit cards, electronic tolling stations, or electronic payment systems for transit riders. All of these sources are useful for identifying transportation and commuter flows, which have high policy relevance.

Finally, Abramitzky et al. give a rapid-fire selection of other data sources that are only beginning to be used, including e-commerce and payment card transactions data, posted prices and listings (often scraped from websites), routinely collected administrative data (which in my experience will generally require a lot more data cleaning), and text as data.

Clearly there are lots of new and emerging data sources in use in regional and urban economics. However, Abramitzky et al. are clear that developing skills with these data sources and the appropriate methods for dealing with them is not feasible for everyone. They do, however, suggest a solution:

We encourage urban economists, both young and old, to familiarize themselves with these data sources and to become conversant in some of the methods needed to build new data from textual corpora, digital traces, and images and video of the world around us, including large language models and deep learning more broadly. We emphasize the word “conversant” because we do not think that all of us need to become experts in these techniques. Rather, we anticipate and encourage interdisciplinary collaboration with scholars around the university in data science, computational linguistics, computer science, geography and the natural sciences who know these methods well and can thus complement the research focus and conceptual framework specific to urban economics.

So, we should definitely expect the current trend for larger, more interdisciplinary, research teams to continue into the future.

[HT: Marginal Revolution]

Tuesday, 15 April 2025

Mexico's lake of tequila

What happens when prices fail to adjust to a new equilibrium? Mexico found out the hard way at the end of last year, as the Financial Times reported in December (paywalled):

Mexico is sitting on more than half a billion litres of tequila in inventory, almost as much as its annual production, as the fast-growing industry reckons with slowing demand and the prospect of tariffs on exports to the US under Donald Trump.

By the end of 2023, the industry had 525mn litres of tequila in inventory, either ageing in barrels or waiting to be bottled, according to data shared with the Financial Times by the Tequila Regulatory Council. Of the 599mn litres of tequila produced last year, about one-sixth remained in inventory, according to the figures.

“Much more new spirit is being distilled than is being sold, and inventories are starting to accumulate,” said Bernstein analyst Trevor Stirling, attributing the build-up to falling demand and new distillery capacity that has recently begun operating in Mexico. “The tequila industry is set for a very turbulent 2025.”

Consider the market for tequila, as shown in the diagram below. The market was originally in equilibrium, where the supply curve S0 meets the demand curve D0, with an equilibrium price of P0 and Q0 units of tequila being traded. Then demand decreased to D1, and supply increased to S1. The market should move to the new equilibrium, where the supply curve S1 meets the demand curve D1. However, say that the price remained at the original price P0 for a little while. What would happen?

If the price remained P0, the quantity of tequila supplied would increase to QS (because with the supply curve S1, the quantity supplied at the price P0 is equal to QS). The quantity of tequila demanded would decrease to QD (because with the demand curve D1, the quantity demanded at the price P0 is equal to QD). The difference between QS and QD is the quantity of tequila that remains unsold - a surplus, or excess supply. Or, as the FT article refers to it, a 'tequila lake'.

What happens next? In a market with excess supply, we would expect the price to adjust. Since tequila distilleries can't sell all of their tequila inventory, and it is costly to store it, they would start to lower the price. This 'bidding down of the price' by sellers would continue until the excess supply is eliminated. On the diagram above, that happens when the market gets to the new equilibrium, at the lower price P1, where Q1 tequila is traded.
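As a minimal sketch of that adjustment in Python (again with made-up numbers, rather than anything from the Tequila Regulatory Council), suppose the post-shift curves are linear:

    # Illustrative numbers only. After the shifts, take demand D1 as Qd = 80 - P
    # and supply S1 as Qs = 20 + P, with the price initially stuck at the old P0.

    def qd(p):
        return 80.0 - p        # demand curve D1

    def qs(p):
        return 20.0 + p        # supply curve S1

    p_old = 40.0                           # the old equilibrium price P0
    surplus = qs(p_old) - qd(p_old)        # unsold tequila: the 'tequila lake'
    print(f"At P0 = {p_old}: Qs = {qs(p_old)}, Qd = {qd(p_old)}, surplus = {surplus}")

    # Sellers bid the price down until quantity supplied equals quantity demanded
    p_new = (80.0 - 20.0) / 2.0            # solve 80 - P = 20 + P
    print(f"New equilibrium: P1 = {p_new}, Q1 = {qd(p_new)}")

With these numbers, the surplus at the old price disappears once the price falls to the new equilibrium, which is exactly the bidding-down process described above.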

And the FT article even notes some evidence that this adjustment is happening:

Two of the largest tequila brands, Bacardi-owned Patrón and Casamigos, which is now owned by London-listed Diageo, have been cutting prices for more than a year in response to weaker consumer demand, according to research by Bernstein.

Eventually, there would be no more tequila lake. Which is sad, because it calls to mind some interesting imagery. Here's what ChatGPT thinks a tequila lake looks like:

That's one way to get rid of the tequila lake, I guess!

Sunday, 13 April 2025

Anthropic on how university students are using generative AI

This week Anthropic released a fascinating and important report on university students' use of generative AI (specifically, how they are using Anthropic's Claude AI). The report is based on anonymised data from over 570,000 conversations (by people with a university-affiliated email address, who the AI judged to be students, not staff) over an 18-day period. 

The report has a number of important insights, but I want to focus on two in particular. First, it tells us how students are interacting with Claude. Anthropic summarises this with the following taxonomy:

Anthropic then note that:

These four interaction styles were represented at similar rates (each between 23% and 29% of conversations), showing the range of uses students have for AI.

Now, most people are probably interested in how students are interacting with Claude in order to determine the extent of cheating in assessment that is going on. In terms of the four categories, I'd suggest that there is a clear hierarchy. Direct output creation is the most likely to be cheating, since asking AI to write an essay or project report is likely to fit in there. Next is direct problem solving, since asking AI to provide answers to take-home tests and multiple-choice quizzes is likely to be in that category. However, students asking direct questions, using generative AI in place of a search engine, would also be captured, and that likely isn't cheating and is likely to contribute to student learning (indeed, that is how we encourage students to use Harriet, our ECONS101 AI tutor). Third is collaborative output creation, since that may involve rewriting an essay to thwart plagiarism tools, or using AI to provide critiques of other output, or debugging code. However, there is no doubt a lot of genuine collaborative effort that is allowed as part of assessment guidelines that will fit into that category. Finally, collaborative problem solving is likely to be the least problematic. A 'Socratic tutor' or guided learning approach would fit in here, for example.

In relation to cheating, Anthropic notes that:

...nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:

  • Provide answers to machine learning multiple-choice questions
  • Provide direct answers to English language test questions
  • Rewrite marketing and business texts to avoid plagiarism detection

These raise important questions about academic integrity, the development of critical thinking skills, and how to best assess student learning. Even Collaborative conversations can have questionable learning outcomes. For example, “solve probability and statistics homework problems with explanations,” might involve multiple conversational turns between AI and student, but still offloads significant thinking to the AI.

They are absolutely right that how to best assess student learning is an important question. And as Justin Wolfers notes, any high-stakes at-home assessment is essentially a non-starter in terms of credibility. That certainly limits the options available to lecturers and teachers.

The second important insight from the report is the cognitive level at which students are engaging with Claude. This is the really worrying aspect (and the modes of interaction are worrying enough already), because:

We saw an inverted pattern of Bloom's Taxonomy domains exhibited by the AI:

  • Claude was primarily completing higher-order cognitive functions, with Creating (39.8%) and Analyzing (30.2%) being the most common operations from Bloom’s Taxonomy.
  • Lower-order cognitive tasks were less prevalent: Applying (10.9%), Understanding (10.0%), and Remembering (1.8%).

In other words, students were outsourcing tasks that were higher on Bloom's taxonomy. As I noted in this post last year:

Teachers might hope that generative AI is better at the lower levels - things like definitions, classification, understanding and application of simple theories, models, and techniques. And indeed, it is. Teachers might also hope that generative AI is less good at the higher levels - things like synthesising papers, evaluating arguments, and presenting its own arguments. Unfortunately, it also appears that generative AI is also good at those skills. However, context does matter. In my experience, and this is subject to change because generative AI models are improving rapidly, generative AI can mimic the ability of even good students at tasks at low levels of Bloom's taxonomy, which means that tasks at that end lack any robustness to generative AI. However, at tasks higher on Bloom's taxonomy, generative AI can mimic the ability of failing and not-so-good students, but is still outperformed by good students. So, many assessments like essays or assignments that require higher-level skills may still be a robust way of identifying the top students, but will be much less useful for distinguishing between students who are failing and students who are not-so-good.

It seems that either I was wrong in my assessment of the strengths of generative AI at different levels of Bloom's taxonomy, or that, despite the weaknesses of generative AI at the higher levels, students still prefer to use it that way. That might reflect comparative advantage. Perhaps generative AI is better at both lower-level and higher-level tasks (it has absolute advantage in both), but has a comparative advantage in the higher-level tasks? In that case, students may find it more useful to have the generative AI work on the higher-level tasks, while they complete the lower-level tasks themselves (I sketch a simple numerical example of this after the quote below, and it feels like a useful topic to explore in a future post). Anyway, Anthropic points to the worries, noting that:

...AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking.
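To make the comparative advantage idea above concrete, here is a minimal sketch with purely invented numbers (the 'hours per task' figures are assumptions for illustration only, not estimates of anyone's actual productivity):

```python
# A minimal, purely hypothetical illustration of comparative advantage
# between a student and a generative AI. The "hours per task" numbers
# are invented for illustration only.

hours = {
    "student": {"lower_level": 1.0, "higher_level": 4.0},
    "ai":      {"lower_level": 0.1, "higher_level": 0.2},
}

for who, h in hours.items():
    # Opportunity cost of one higher-level task, measured in forgone
    # lower-level tasks (and vice versa)
    oc_higher = h["higher_level"] / h["lower_level"]
    oc_lower = h["lower_level"] / h["higher_level"]
    print(f"{who}: one higher-level task costs {oc_higher:.1f} lower-level tasks; "
          f"one lower-level task costs {oc_lower:.2f} higher-level tasks")

# With these made-up numbers, the AI is faster at everything (absolute
# advantage in both), but it gives up only 2.0 lower-level tasks per
# higher-level task, versus 4.0 for the student, so the AI has the
# comparative advantage in higher-level tasks, and the student in
# lower-level tasks.
```

With these invented numbers, specialisation points in exactly the direction the report describes: the higher-level work goes to the AI, while the student keeps the lower-level work.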

This is definitely an interesting report, and it provides some genuine insights into university students' use of generative AI. I feel like there is much more to learn here, including how to steer students towards more collaborative modes of engagement with the AI, which are likely to lead to more learning. It also highlights (yet again, as if we need further reminders) the vulnerability of much assessment to students' use of AI.

[HT: Marginal Revolution]


Saturday, 12 April 2025

This week in research #70

Here's what caught my eye in research over the past week:

  • Burton (with ungated earlier version here) finds that smoking bans in bars in the US result in a 1-drink-per-month (5 percent) increase in alcohol consumption and no economically meaningful effects on smoking
  • Perron and Hu (open access) find that, in the NHL, each additional locally born player completing a full season is associated with an increase in home game attendance by approximately 12,000 spectators and $4.8 million in additional revenue
  • Dalal and Raju develop a theoretical model of the illegal drugs market, and show that, under risk aversion, increasing punishment costs (i.e., severity) is more effective than increased enforcement (i.e., certainty) and demand reduction is more effective than interdiction
  • Faerber-Ovaska et al. (open access) test the accuracy of ChatGPT’s answers to multiple-choice and short essay questions from a widely used economics textbook, and find that ChatGPT scored a high D in multiple-choice questions and a low A in essay questions (I feel like the world has already moved on from this though, as I noted in this post)
  • Cheah and Qi develop a theoretical model of the effect of broadcast revenue on competitive balance in sports, and their simulations with the model show that broadcast revenue allows teams with smaller home fan bases to narrow their performance gap against stronger teams (it must depend on revenue sharing rules though, surely?)
  • Reimão et al. find that expert judges favour the first dish tasted in a blind test in the Great British Bakeoff (and other English-speaking versions of the show)

Wednesday, 9 April 2025

Book review: The Economists' Hour

Once upon a time, economists were backroom advisers, crunching numbers and developing theories, but rarely in the limelight and certainly not the central actors in political decision-making. However, as Binyamin Appelbaum outlines in his 2019 book The Economists' Hour, that all changed in the late 1960s. The title of the book refers to the period from 1969 to 2008, an era of unprecedented policy change (in the US and in other countries), during which economists had the ear of key governmental decision-makers. As Appelbaum notes in the introduction to the book:

This book is a biography of the revolution. Some leading figures are relatively well-known, like Milton Friedman, who had a greater influence on American life than any other economist of his era, and Arthur Laffer, who sketched a curve on a cocktail napkin in 1974 that helped to make tax cuts a staple of Republican economic policy. Others may be less familiar, like Walter Oi, a blind economist who dictated to his wife and assistants some of the calculations that persuaded Nixon to end military conscription; Alfred Kahn, who deregulated air travel and rejoiced in the cramped and crowded cabins on commercial flights as the proof of his success; and Thomas Schelling, a game theorist who persuaded the Kennedy administration to install a hotline to the Kremlin - and who figured out a way to put a dollar value on human life.

That paragraph neatly sums up the book. Each chapter is devoted to one particular aspect of policy that changed as a result of the influence of economists. Before reading the book, I had no idea of the important role that economists played in ending military conscription in favour of volunteer armed forces. I was, however, well aware of economists' role in the deregulation of airlines, interstate trucking in the US, and financial markets, and in the development of monetary policy and the independence of central banks. Some parts are particularly surprising, such as the relatively late impact of economists on antitrust regulation (only from the 1960s). However, as in the other areas covered in the book, economists drove a radical change in policy in that space:

The rise of economics transformed the role of antitrust law in American life. During the second half of the twentieth century, economists gradually persuaded the federal judiciary - and, to a lesser extent, the Justice Department - to set aside the original goals of antitrust law and to substitute the single objective of providing goods and services to consumers at the lowest possible prices.

Appelbaum describes in some detail the contributions of the key players in each case, including economists as well as political decision-makers and their other advisers. Some figures, such as Friedman and various US presidents, make many appearances, and similar ideas often come up across multiple chapters. This repetition might turn some readers off. However, it is difficult to see how the book could have been structured in any other way, because the thread of each case would easily be lost if all the material were presented chronologically.

The book is incredibly well researched, with nearly 90 pages of footnotes. As is sometimes the case in books like this, particularly for readers who are familiar with the general story, the footnotes present details that are of more interest than the text itself. For example, consider this footnote on Milton Friedman, and real and nominal interest rates:

This is another example of a battle Friedman won so completely that his victory is largely forgotten. He insisted during the 1950s and the 1960s that there was a significant difference between real and nominal rates. Conventional economists disagreed... Today the distinction between real and nominal rates is universally understood to be significant.

Indeed, we teach the difference between real and nominal interest rates (and the relationship between them known as the 'Fisher equation'), but Friedman's battle to have this recognised is largely forgotten.
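For readers who haven't met it, the Fisher equation links the nominal interest rate, the real interest rate, and inflation. Here's a minimal sketch, with made-up rates purely for illustration:

```python
# A quick sketch of the Fisher equation, using invented numbers:
# a 7% nominal interest rate and 3% inflation.

nominal = 0.07    # nominal interest rate (i)
inflation = 0.03  # inflation rate (pi)

real_exact = (1 + nominal) / (1 + inflation) - 1  # exact Fisher relation
real_approx = nominal - inflation                 # common approximation

print(f"Exact real rate:       {real_exact:.4f}")   # ~0.0388
print(f"Approximate real rate: {real_approx:.4f}")  # 0.0400
```

The approximation (real rate roughly equals nominal rate minus inflation) is the form usually taught at the introductory level; the exact version matters more when inflation is high.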

I really enjoyed that Appelbaum didn't limit the book to only considering the US case. Economists had important roles in reshaping the economies of Chile and Taiwan, and in deregulating markets across the developed world. Appelbaum writes a lot about deregulation in Iceland. If there is one missing element to the book, it would be the relative lack of attention paid to economists' roles in the transitional economies of former Communist countries such as Poland, Hungary, or the Soviet Union. However, New Zealand does make an appearance a couple of times, including this bit:

In December 1989, New Zealand passed a law making price stability the sole responsibility of its central bank, sweeping away a 1964 law that, characteristically for its time, had instructed the central bank to pursue a laundry list of goals including economic growth, employment, social welfare, and trade promotion. The man picked to lead New Zealand's experiment was an economist named Don Brash, who ran one of the nation's largest banks and then one of its largest trade groups, the Kiwifruit Authority...

Appelbaum is careful not to provide an overly rosy view of the role of economists, and the impacts of these changes. Indeed, in the introduction he warns that:

This book is also a reckoning of the consequences...

Markets make it easier for people to get what they want when they want different things, a virtue that is particularly important in pluralistic societies which value diversity and freedom of choice. And economists have used markets to provide elegant solutions to salient problems, like closing a hole in the ozone layer and increasing the supply of kidneys available for transplant.

But the market revolution went too far. In the United States and in other developed nations, it has come at the expense of economic equality, of the health of liberal democracy, and of future generations.

And almost as quickly as it began, perhaps, the economists' hour was over:

The Economists' Hour did not survive the Great Recession. Perhaps it ended at 3:00 p.m. on Monday, October 13, 2008, when the chief executives of America's nine largest banks were escorted into a gilded room at the Treasury. The government had tried to support the banks by purchasing bonds in the open market, but the market had collapsed, so the government decided to save the financial system by taking ownership stakes in the largest financial firms.

Or perhaps it was one of a dozen other moments during the financial crisis; it doesn't really matter which. In the depths of the Great Recession, only the most foolhardy purists continued to insist that markets should be left to their own devices...

However, it would be fair to note that economists continue to have a strong influence in policy, in other countries if not in the US (as the current furore over tariffs attests).

I really enjoyed this book, and if you have an interest in understanding how economics (and economists) came to have such an important influence on policy, I am sure that you will enjoy it too. Highly recommended!

Tuesday, 8 April 2025

Supply curves slope upwards... Nigerian cocoa edition

The New Zealand Herald reported last month:

Booming cocoa prices are stirring interest in turning Nigeria into a bigger player in the sector, with hopes of challenging top producers Ivory Coast and Ghana, where crops have been ravaged by climate change and disease.

Nigeria has struggled to diversify its oil-dependent economy but investors have taken another look at cocoa beans after global prices soared to a record US$12,000 ($21,000) per tonne in December.

“The farmers have never had it so good,” Patrick Adebola, executive director at the Cocoa Research Institute of Nigeria, told AFP.

More than a dozen local firms have expressed interest in investing in or expanding their production this year, while the British Government’s development finance arm recently poured US$40.5 million into Nigerian agribusiness company Johnvents.

When the price of a good increases, sellers become willing and able to supply more of the good. In general, sellers want to increase their profits. When the price of a good increases, it becomes more profitable to sell it, and so sellers want to sell more of it. [*] This intuition is embedded in the supply curve, as shown in the diagram below. When the price of cocoa is P0, sellers want to sell Q0 tonnes of cocoa. But when the price increases to P1, sellers want to sell Q1 tonnes of cocoa.
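A minimal numerical sketch of that movement along the supply curve, using an invented linear supply curve (the intercept, slope, and prices are assumptions for illustration, not estimates for the cocoa market):

```python
# A sketch of movement along an upward-sloping supply curve, using an
# invented linear supply curve Qs = a + b*P (not estimated from any
# real cocoa data).

a, b = 100.0, 0.05   # hypothetical intercept and slope

def quantity_supplied(price):
    """Quantity sellers are willing to supply at a given price."""
    return a + b * price

p0, p1 = 4_000, 12_000   # illustrative prices per tonne (P0 < P1)

print(quantity_supplied(p0))  # Q0 = 300.0 tonnes at the lower price
print(quantity_supplied(p1))  # Q1 = 700.0 tonnes at the higher price
```

The higher price is associated with a larger quantity supplied, which is all the upward-sloping supply curve says.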

What might have caused the increase in the global price of cocoa? The New Zealand Herald article explains that:

Ivory Coast is by far the world’s top grower, producing more than two million tonnes of cocoa beans in 2023, followed by Ghana at 650,000 tonnes.

But the two countries had poor harvests last year as crops were hit by bad weather and disease, causing a supply shortage that sent global prices to all-time highs.

I'll refrain from drawing the global market for cocoa, but suffice to say that the high global price of cocoa is attracting Nigerian farmers to produce more, illustrating that the supply curve for Nigerian cocoa is upward sloping.

*****

[*] There are at least two other explanations for why the supply curve is upward sloping, and both relate to opportunity costs. First, as sellers produce more of the good, the factors of production (raw materials, labour, capital, etc.) become more scarce and so more expensive, and less well-suited (and so more costly) inputs begin to be used to produce the good. So, the opportunity costs of production increase, and as sellers produce more, the minimum price they are willing to accept rises, because their marginal cost is increasing. Second, when the price is low the opportunity cost of not selling is low, but as the price rises the opportunity cost of not selling rises, encouraging sellers to offer more for sale. In other words, as the price increases, sellers do less of not selling (yes, that is a double negative, and it was intentional). As the price increases, sellers want to sell more.
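That first, marginal-cost, explanation can also be made concrete with a small sketch. Assume (purely for illustration) a convex cost function C(q) = c·q², so marginal cost rises with output; a price-taking seller then supplies up to the point where price equals marginal cost, and the quantity supplied rises with the price:

```python
# A sketch of the marginal cost story, using an invented convex cost
# function C(q) = c*q**2, so MC = 2*c*q rises with output. A price-taking
# seller supplies up to the point where P = MC, which gives a quantity
# supplied that increases with the price.

c = 10.0  # hypothetical cost parameter

def marginal_cost(q):
    return 2 * c * q

def quantity_supplied(price):
    # Supply where price equals marginal cost: P = 2*c*q  =>  q = P / (2*c)
    return price / (2 * c)

for price in (100, 200, 400):
    q = quantity_supplied(price)
    print(f"P = {price}: supply {q:.0f} units "
          f"(MC of the last unit = {marginal_cost(q):.0f})")
```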

Sunday, 6 April 2025

Do economists act like the self-interested decision-makers from our models, and if so, why?

Economics models typically assume that decision-makers are self-interested, trying to maximise their own 'economic rent'. Does exposure to these models, and the assumption of self-interest, lead people who have studied economics to make more self-interested decisions? Or, are people who make more self-interested decisions more likely to study economics (perhaps because it accords with their already-established world view)?

These are questions that many studies have tried to grapple with (and which I have written about before, most recently in this 2023 post). What is needed is a good systematic review of the literature. We don't have that, but this 2019 article by Simon Hellmich (Bielefeld University), published in the journal The American Economist (sorry, I don't see an ungated version online), provides a review of the literature (up to 2019, of course).

Hellmich prefers the term "people trained in economics" rather than "economists", noting that much of the literature focuses on undergraduate students who have only taken one or a few courses in economics, and can hardly be considered "economists". Hellmich reviews the empirical literature from both lab experiments and field experiments, although it is worth noting that most of the literature makes use of lab experiments. He draws three broad conclusions from the literature:

• People trained in economics behave more in accordance with the standard paradigms of their discipline in situations that are typically described in economic categories. They tend to prioritize their self-interest in games... but this is at least in part an outcome of their expectations about other peoples’ behavior and social interaction can strengthen their cooperativeness.

• Most of the experiments reviewed here involve economic decisions (i.e., involve the allocation of money); in most of the less obviously economic decisions, people trained in economics do not seem to be much less concerned with other people’s welfare and no more likely than other people to expect opportunism from other individuals. All in all... there is not much unambiguous support for the view that training in economics affects the fundamental preferences of people by making them more “selfish” or opportunistic.

• Most empirical evidence seems to be consistent with the self-selection assumption and more than half of the relevant studies—some of them providing high-quality evidence— seem to suggest that there are training effects... Probably both forces play a role.

In other words, the review doesn't really tell us much more than we already knew. People trained in economics behave in a more self-interested way, and part (or perhaps most) of the reason for that is the types of people who choose to train in economics. What Hellmich adds to this research question, though, is a concern about the way that previous research has tried to identify the effects, and in particular, the way that the research is framed (from the perspective of the research participants). He notes that:

...most of the experiments reviewed here lack sufficient consideration of the fact that human subjects in experiments do not mechanistically and passively respond to selected stimuli consciously created and controlled by the experimenter, and in so doing reflect their fundamental preferences. Instead, human subjects tend to interpret cues given to them—perhaps unconsciously— by the experimenter or the environment and what they might know about the theories underlying the experiment... In social dilemmas that involve decisions that are clearly identifiable as being of an economic nature (e.g., because they involve the allocation of money), people compete more than if this trait is less clear... In market-like contexts, there is broad acceptance of self-interest. It may even constitute the social norm to follow...

In other words, perhaps people trained in economics act differently in these experiments because the lab environment and the wording of the decisions induce them to apply their economics training. This would explain why, in field experiments conducted in more naturalistic settings, the behaviour of people trained in economics differs much less from that of other people than it does in lab experiments. Hellmich is essentially arguing for more investigation of real-world decisions, and how they differ between people trained in economics and people who are not. That seems like a sensible suggestion.

However, the overwhelming result from Hellmich's review is that people trained in economics are "different" in meaningful ways (including higher levels of self-interest), and that difference should be recognised. He concludes that:

...as provisional steps, we should perhaps try to make students more aware of the fact that most economists understand key elements of neoclassical theory—like the homo economicus—as an instrument to explain macrophenomena rather than as a normative model of micro-behavior and how other elements of the “culture” of the discipline might make their judgments deviate from that of other groups.

In other words, our students (and other people) need to understand that self-interested behaviour is an assumption that we make in economic models, and not an ideal to strive for.
