Friday, 31 May 2024

This week in research #25

 Here's what caught my eye in research over the past week:

  • Yang and Zhang look at the short- and medium-term impacts of online teaching during COVID-19 on the academic performance of high school freshmen in Zhejiang Province, China, and find that online teaching had short-term negative effects on Chinese performance in the high-performing school and on math performance in the mid-performing school, but no significant effects on other tests or in the low-performing school (a bit of a mixed bag, really)
  • Ferguson reports a new meta-analysis of the effect of social media on mental health, finding that the meta-analytic evidence for causal effects was statistically no different from zero (not good news for Jonathan Haidt, see here and here)
  • Claudia Goldin's Nobel lecture is now available, published in the American Economic Review

Thursday, 30 May 2024

The economics of the falling total fertility rate in New Zealand

Earlier this week, I was interviewed by Paul Brennan on Reality Check Radio, on New Zealand's declining birth rate. You can listen to the interview here. We didn't have time to go through all of the questions I was given beforehand, so I thought I would add some points here, along with some links to some of the underlying data and research.

First, we need to understand what the numbers mean. The age-specific fertility rate is the number of births per woman at each year of age (often it is reported in five-year age groups). The completed fertility rate is the number of births per woman over her entire childbearing years (typically assumed to finish at age 50, since so few women older than 50 give birth). Ideally, we want to know the completed fertility rate for each cohort, but we have to wait decades to find that out. So instead, national statistical agencies like StatsNZ measure the total fertility rate, which is the number of babies a woman would be expected to have, on average, if she experienced each of the age-specific fertility rates observed in that year.
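
To make this concrete, here is a minimal sketch in Python of how a total fertility rate is built up from age-specific rates. The age-specific rates below are purely illustrative, not StatsNZ figures:

```python
# A minimal sketch of how a total fertility rate (TFR) is calculated.
# The age-specific rates are purely illustrative, not StatsNZ data.

# births per 1,000 women in each five-year age group (hypothetical values)
asfr_per_1000 = {
    "15-19": 10, "20-24": 50, "25-29": 90, "30-34": 100,
    "35-39": 55, "40-44": 12, "45-49": 1,
}

# Each five-year rate applies for five years of a hypothetical woman's life,
# so the TFR sums the rates (per woman) multiplied by the width of each group.
tfr = sum((rate / 1000) * 5 for rate in asfr_per_1000.values())
print(f"Total fertility rate: {tfr:.2f} births per woman")  # about 1.59
```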

To maintain a stable population (ignoring the impact of migration), a population needs to maintain a completed fertility rate of 2.1. This is more than two (which is theoretically all you would need to replace each couple), because not all women live to or through their childbearing years. So, each woman who lives to childbearing age needs to have slightly more than two children, on average, to maintain a stable population. Now, it is worth noting that the total fertility rate is not the same as the completed fertility rate, because age-specific fertility rates change over time. In fact, when birth rates are falling, the total fertility rate probably over-estimates the completed fertility rate, but that's a story for another post.
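
A quick back-of-the-envelope calculation shows why the replacement level sits a little above two; the survival proportion below is a hypothetical figure, chosen only to illustrate the arithmetic:

```python
# A rough sketch of why replacement-level fertility sits above 2.0.
# The survival share is illustrative, not a measured value.

couple_replacement = 2.0    # two births to replace two parents
share_surviving = 0.95      # hypothetical share of girls who survive through
                            # their childbearing years

replacement_tfr = couple_replacement / share_surviving
print(f"Approximate replacement-level TFR: {replacement_tfr:.2f}")  # about 2.1
```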

Here's the total fertility rate for New Zealand since 1921 (source here):

The baby boom is easy to see, with the total fertility rate peaking at 4.31 in 1961. It then declined to approximately replacement level by the late 1970s, and until about 2012 the total fertility rate remained at or around replacement level. In fact, since 1978 the total fertility rate has only been above the replacement level in 1989-90 and 2007-2010. After 2012, the total fertility rate has been falling, down to 1.56 for the year ended December 2023. It is this recent decline that has many people freaking out.

What has contributed to the decline? In my view, there are two factors. First, children are really expensive. As noted here, JUNO estimated in 2021 that the cost to raise a child to age 18 in New Zealand was $265,680. That's pretty expensive, especially when you consider that the cost of housing has grown a lot in the last decade or so. According to Infometrics, the ratio of house price to income has grown from 4.9 in 2010 to 7.0 in 2024 (the period when the total fertility rate has been declining). Before 2010, the ratio was relatively stable (back to 2005, based on that dataset). As housing costs go up, that squeezes families' ability to afford to raise children.

Second, there are two long-term social changes at play (and I only mentioned one of them in the interview). The first is that women have been delaying fertility. As shown here, the median age of mothers giving birth to their first child was 27.4 years in 1998, but had increased to 29 years by 2018. Women delay fertility for a number of reasons, but two contributors are a longer period spent in education (more women going on to and completing tertiary study), and a greater focus on career development before having a family. One consequence of women delaying the start of fertility is that it leaves fewer years of childbearing age in which to have children (and so women have fewer children), and a greater likelihood of fertility problems (and so more women remain childless).

The second social change is higher labour force participation of women. Even though the gender wage gap remains persistent, women do earn more than in earlier decades, which means that the opportunity cost of time spent out of the workforce has increased. This increases the 'implicit cost' of having children.

Of course, New Zealand is not alone in facing a declining total fertility rate. It has been a trend across many (but not all) OECD countries (and most other countries as well). Here are the trends since 2012 (New Zealand is the red line, and the OECD average is the bold black line):

Can countries turn the trend in declining fertility around? The extreme example here is Hungary, which has offered some very large incentives to increase fertility, including a lifetime exemption from paying taxes for women with four or more children. Estimates vary, but Hungary may spend as much as 6 percent of GDP on families. How much extra fertility has that spending 'bought'? Hungary's total fertility rate increased from 1.25 in 2010 to 1.59 in 2021, a 27 percent increase (although some have claimed that the increase in total fertility rate is just an artefact of the data).

Closer to home, Australia introduced a baby bonus in 2004, worth AU$2500 per baby (now AU$5000 per baby). The baby bonus has been estimated to have increased the Australian total fertility rate by about 7 percent, which is hardly a huge change. In fact, Australia's total fertility rate has been indistinguishable from New Zealand's in recent years, and was just 1.63 in 2022.

Taken together, it seems unlikely that countries can have a large enough impact on the total fertility rate to fight the tide that is driven by costs and long-term social changes. At least, it can't be done at the current levels of spending (and now South Korea is talking about introducing an incredibly generous baby bonus worth US$70,000).

The main demographic consequence of a falling total fertility rate is an ageing population. The median age will increase over time, and the proportion of the population in older age groups will grow. The main economic consequences relate to a need to recalibrate the infrastructure and social services that the population will need in the future. We may need fewer early childcare centres and schools, more elder daycare and rest homes, and greater healthcare capacity for people who are living longer (albeit also possibly healthier for longer). We will also need more age-friendly policies. Whether there will be a negative fiscal impact is less certain.

A declining total fertility rate is not something for us to fear. However, it is something that we need to take account of.

Tuesday, 28 May 2024

Have we reached peak online dating?

On the surface, the persistence of online dating is somewhat of a mystery. As I explained in this post, online dating should have serious adverse selection problems:

Adverse selection arises when one of the parties to an agreement (the informed party) has private information that is relevant to the agreement, and they use that private information to their own advantage at the expense of the uninformed party. In the case of online dating, the private information is the quality of each potential date. They know if they are high quality or not, but no one else knows. That could lead to a pooling equilibrium, where every online dating subscriber assumes that everyone they are matched with is low quality (because they can't tell the high quality and low quality dates apart, and assuming that every match is high quality is a recipe for disaster). High quality dates don't want to be treated as if they are low quality, so they drop out of the online dating market. Eventually, the online dating market only has low quality dates. The 'market' for high quality dates fails.

The online dating market deals with this through screening (an action taken by the uninformed party to uncover the informed party's private information). Screening is achieved by allowing subscribers to chat with the people they are matched with, which they can use to work out who is high quality and who is low quality. However, screening is imperfect, and costly. And the costs tend to be higher when the average quality is low. So, unless the online dating services can keep the quality of their users high, eventually things will turn out badly for them. And, as I noted earlier, that may be increasingly difficult.
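
To see how a pooling equilibrium can unravel a market, here is a stylised toy simulation. The exit rule and all of the numbers are my own assumptions for illustration, not anything taken from the quoted post:

```python
# A toy model of adverse selection unravelling a dating pool.
# The exit rule and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
quality = rng.uniform(0, 1, 10_000)          # each dater's private quality
in_pool = np.ones_like(quality, dtype=bool)  # everyone starts in the market

for round_number in range(10):
    pool_average = quality[in_pool].mean()   # how any match gets treated (pooling)
    print(f"round {round_number}: pool average quality = {pool_average:.2f}, "
          f"{in_pool.mean():.0%} of daters remain")
    # Daters exit if being treated as 'average' undervalues them by more
    # than 20 percent (an arbitrary exit rule for illustration).
    stay = quality <= pool_average / 0.8
    if np.array_equal(in_pool & stay, in_pool):
        break                                # no one else wants to leave
    in_pool &= stay

# Each round, the highest-quality daters leave, dragging the average down
# further - the market steadily unravels towards only low-quality daters.
```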

So, it was perhaps inevitable that we would get to a point where even screening is not enough, as reported by the Financial Times this week:

“Trying to engage young women is the biggest struggle for dating apps,” said Rebecca McGrath, associate director for media and technology at Mintel. “Significant gender skew means it is harder for men to find matches and, subsequently, women often become bombarded, making the experience worse for all.”...

The number of paid subscribers on Tinder fell to under 10mn in the three months to March, a sixth consecutive quarterly decline. Monthly active users, the majority of whom use the app’s free services, have fallen steadily since 2021, according to figures from Sensor Tower. Bumble also showed a first-quarter drop in the number of active users, data from the app-tracking service showed, even though paid subscribers remained steady.

The declines in users come as reports increase of so-called dating app fatigue. A survey by Bumble, for example, found that 70 per cent of women using the app had experienced “burnout”...

Match Group’s [chief executive Bernard] Kim said Tinder’s rebrand in 2023 was expected “to have some positive impact on users, particularly women and Gen Z”. He noted however that paid subscriber growth would come primarily from “product innovation”, which includes improving profile quality, moderation and the accuracy of its algorithmic matchmaking.

'Burnout' is one consequence of an increasing cost of screening in online dating. It's becoming more difficult for women to find high-quality matches, and they are starting to switch off. The FT article discusses various ways that the apps are trying to keep women engaged. However, to solve the problem, they need to reduce the cost of screening. The option that may come closest is this one:

Tinder said in February it was expanding its own identity verification programme, which compares a video selfie with a passport or driving licence, as well as the images on a user’s profile. Bumble, which already offered a similar feature, said it had enhanced its “computer vision model for likeness comparison” in the first quarter in order to improve verification.

At the least, that will reduce the impact of fake accounts, bots, and catfishing. However, it still doesn't address the challenge that large language models pose for online dating. I'm still expecting a non-trivial amount of the traffic on online dating apps to become ChatGPT (or Claude or whatever) talking to ChatGPT.

Read more:

Sunday, 26 May 2024

Local crime rates and mental health in South Africa

It seems somewhat obvious that being a victim of crime would have a negative impact on mental health. However, does simply living in an area with a higher crime rate also negatively affect mental health? That is the research question addressed in this new article by Magda Tsaneva and Lauren-Kate LaPlante (both Clark University), published in the journal Review of Development Economics (open access).

Tsaneva and LaPlante use South African data from the National Income Dynamics Study (NIDS) between 2008 and 2014, matched with district-level data on crime from the South African Police Service. The NIDS data include the Center for Epidemiologic Studies Depression (CES-D) scale, which Tsaneva and LaPlante use as a measure of mental health. Their sample includes over 13,000 individuals who responded to the survey at least twice (of the four survey waves between 2008 and 2014), which allows them to control for individual fixed effects. Essentially, their analysis looks at how individual depression changes as local crime rates change (measuring crime rates for the previous year), while controlling for the person's age and for province-time fixed effects (which should pick up things like common changes in unemployment and economic activity within a province).
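
For readers interested in what that kind of specification looks like in practice, here is a minimal sketch (not the authors' code). The file, variable, and clustering choices (nids_with_crime.csv, depressed, crime_lag, district, and so on) are assumptions for illustration:

```python
# A sketch of a two-way fixed effects regression of depression on lagged
# local crime rates, with individual and province-by-wave fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nids_with_crime.csv")   # hypothetical merged panel dataset

# Dummy-variable version of the fixed effects specification. With thousands
# of individuals, a dedicated estimator (e.g. linearmodels' PanelOLS) would
# absorb these dummies far more efficiently, but the logic is the same.
model = smf.ols(
    "depressed ~ crime_lag + age + C(person_id) + C(province):C(wave)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(result.params["crime_lag"])         # effect of a change in local crime
```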

Tsaneva and LaPlante find that:

...both property crimes and violent crimes are associated with higher mental distress - an increase of 1SD in property (violent) crime is associated with a 7.2 (8.7) percentage point increase in the probability of depression, significant at the 5% level... Given a baseline proportion of people with depression symptoms of 0.34, these estimates translate to 21.2% and 25.6% change in depression symptoms for a 1SD change in property and violent crime, respectively.

Those are quite sizeable effects. However, they also find that:

...only crime in the most recent year has an effect on mental health....

So, the effect of a change in crime rates is relatively short-lived. And:

...while both men and women experience a rise in depression symptoms associated with a rise in property crime, women experience a much larger effect of violent crime relative to men (1SD increase in violent crime is associated with a 10.2 percentage point increase in depression symptoms for women but only 6.1 percentage point increase for men with effect significant at the 10% level)...

In terms of race, property crime has a small and insignificant effect on the mental health of individuals of African race but a significant and large effect on other racial groups. This is reversed for violent crime, where White and Asian/Indian individuals do not experience a significant deterioration of mental health as violent crime rates rise, while other racial groups do. 

I suspect that the differences probably relate to how likely a person is to feel at risk of crime. Women may feel more at risk of violence (even if, ultimately, men are more likely to be victims of violence). People in higher socioeconomic status groups may have greater fear of property crime, as they likely have more valuable property (and therefore have more to lose, as well as being more likely to be targeted). None of this is terribly surprising, but interesting nonetheless.

However, all of this is based on a self-reported depression scale. I suspect that if there were data on antidepressant prescriptions (and controlling for differential geographical access to prescriptions), you could corroborate this with a more objective measure. In fact, this analysis could easily be done in New Zealand, given data that are available in the IDI. That is perhaps an opportunity for a suitably motivated graduate student in the future.

Friday, 24 May 2024

This week in research #24

Here's what caught my eye in research over the past week:

  • Christensen, Dinesen, and Sønderskov (open access) find using a panel survey linked to Danish registry data that increased exposure to poor individuals is associated with lower support for redistribution among wealthy individuals (which contradicts the current consensus on this topic)
  • Sabia et al. find using US state-level data that recreational marijuana laws increase the use of marijuana by adults and reduce marijuana-related arrests, but there is little evidence that the laws increase the use of harder drugs, admissions to substance use treatment facilities, or property and violent crime (which contrasts with some earlier research)
  • Spencer (with ungated earlier version here) finds that the AIDS epidemic in the US increased the birth rate by 0.55 percent and the abortion rate by 1.77 percent, as women opted for more monogamous partnerships and/or switched from prescription contraceptive methods to condoms

The American Economic Review published its annual papers and proceedings issue this week, which included the following papers (and yes, gender seems to have been a big theme at the AEA Conference this year):

  • Iyigun, Mueller, and Qian (with ungated earlier version here) find that consecutive and prolonged drops in temperature are positively associated with conflict between 1400 and 1900 CE
  • Francis, de Oliveira, and Dimmitt find that even after controlling for how prepared a candidate seems, White males are more likely to be recommended for Advanced Placement (AP) Calculus, and that name-blind review has no effect
  • Antman et al. look at economics dissertations over the period from 1991 to 2021, and find a remarkable rise in gender-related research in economics over time, and that women economists are significantly more likely to pursue gender-related dissertation topics
  • Gualavisi, Kleemans, and Thornton (with ungated earlier version here) document a number of gender differences in the impacts of academic migration, including that women are significantly less likely to move for promotion than men, women are also more likely to move to lower-ranked institutions, and that men who work in departments receiving a new faculty member see their publication output increase by more than twice as much as that of women at those departments
  • Bansak et al. (with ungated earlier version here) look at the AER Papers and Proceedings from 1998 to 2017, and find that Papers and Proceedings evolved to become more inclusive, and that did not come at a cost in terms of quality
  • Koffi, Pongou, and Wantchekon look at more than 200 economics journals over the period from 1990 to 2019 and find disproportionate racial disparities in authorship distribution
  • Nguyen, Ost, and Qureshi find that Millennial teachers are more effective at reducing absences compared to Baby Boomer teachers in K-12 schools in North Carolina, and that this gap is larger for Black students
  • Also for North Carolina, Zhu (with ungated version here) finds that teachers are significantly more negative in their assessments of English Learner students compared to non-English-Learner peers in the same classes with the same standardised test scores
  • Agarwal et al. (with ungated version here) show that self-supervised algorithms have surpassed human radiologists in the long tail of diseases (those that are particularly uncommon) in terms of predictive ability

Wednesday, 22 May 2024

Try this: Long-term data series for New Zealand at Data1850

NZIER (New Zealand Institute of Economic Research) has a Public Good Fund that supports awards and scholarships, as well as the Data1850 project. Data1850 collates data for New Zealand going back as far as records allow, which is 1850 for some data.

The Data1850 site allows you to create some cool graphics of the data, like this graph of the unemployment rate from 1956 to 2023 (I've suppressed the separate female and male rates, although it is a little annoying that they still show on the legend):

Jason Shoebridge posted on the Asymmetric Information substack, offering some other examples. I should point out that you can't interpret the graph of life expectancy and household income as showing anything important, because two variables with underlying time trends will always look like they are correlated (this is spurious correlation). However, that post was useful in that it reminded me that this excellent resource exists. [*]

Importantly, you can download the underlying data series (in the Resources tab), which are organised into five datasets: (1) Economic activity; (2) People; (3) Prices; (4) International linkages; and (5) Government. Each dataset has data for several different variables.

Try it out for yourself!

*****

[*] Although it would have been good to remember this before my BUSAN205 students began their group research projects. Having said that, projects using time series data are trickier for students at that level to do well.

Tuesday, 21 May 2024

Good news for broccoli lovers

The New Zealand Herald reported yesterday:

New Zealand broccoli lovers are in for a treat, as a “phenomenal” season has resulted in great prices for consumers.

According to the latest Stats NZ Food Price Index, the price of broccoli dropped 32.3 per cent in April compared to the same month last year.

Foodstuffs North Island’s head of butchery and produce Brigit Corson said this time a year ago, fresh produce was at the mercy of extreme weather events which wreaked havoc for many growers, but good weather had since turned this around.

“Right now, we’re seeing great supply for produce like broccoli because we’ve had months of fantastic weather, making for near-perfect growing and planting conditions.”

Corson said the price of fresh produce depended on a few different factors, including if it was in season, the growing conditions and whether it was in abundance.

“If there’s been a bumper crop and great supply, that’s when the prices go down.”

It is easy to see why the price of broccoli has decreased, using the model of supply and demand, as shown in the diagram below. Last year, when the conditions for growing broccoli were not good, the supply was S0, and demand was D0. The equilibrium price of broccoli was P0, and the equilibrium quantity of broccoli traded was Q0. This year, with better growing conditions, the supply of broccoli has increased to S1. Another way of thinking about this is that, at each and every price, more broccoli would be supplied, shifting the supply curve out to the right (to S1). The result is that the equilibrium price of broccoli decreases to P1, and the equilibrium quantity of broccoli traded increases to Q1.
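
The same story can be told numerically. Here is a small sketch that solves for the equilibrium before and after the supply shift, using made-up linear demand and supply curves:

```python
# A numerical illustration of the supply shift described above.
# All curve parameters are invented for illustration.

def equilibrium(a, b, c, d):
    """Solve demand Qd = a - b*P and supply Qs = c + d*P for P and Q."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Last year: poor growing conditions (supply S0)
p0, q0 = equilibrium(a=100, b=5, c=10, d=10)
# This year: better growing conditions shift supply out to S1
p1, q1 = equilibrium(a=100, b=5, c=40, d=10)

print(f"P0 = {p0:.2f}, Q0 = {q0:.0f}  ->  P1 = {p1:.2f}, Q1 = {q1:.0f}")
# The equilibrium price falls (6 -> 4) and the quantity traded rises (70 -> 80).
```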

Overall, good news for broccoli lovers, and easily anticipated using the model of supply and demand.

Monday, 20 May 2024

Flapjacks, jaffa cakes, and the quiet simplicity of GST

Tim Harford wrote in the Financial Times back in March (and re-posted to his blog last month):

Earlier this year, two distinguished gentlemen, Judge Hyde and his adviser Julian Stafford, sampled a mineral-enriched flapjack — alas, a year past its sell-by date — and pondered its qualities. (Flapjacks are slabs of oats stuck together with a glue made of butter, sugar and syrup.) The question: was this unconventional flapjack, designed as a pre-exercise snack, “of a standard to be served to guests as a treat with afternoon tea”?

Much turns on the answer, since the enriched flapjack hovers in the liminal space between a muesli bar, which, in the UK, attracts value added tax at 20 per cent, and an ordinary flapjack, which, by long-hallowed British tradition, is a cake and, therefore, zero rated for VAT purposes.

I am serious about the long-hallowed tradition. His Majesty’s Revenue & Customs notes that “at the inception of VAT, traditional flapjacks were widely accepted as cakes of common perception”. When HMRC drew the line between cake and confectionery, it nodded through the idea of flapjacks-as-cakes because to insist otherwise would be to incite a revolution. Is it absurd that a British judge found himself pondering the qualities of a flapjack and the “slightly unpleasant mouth feel” of the protein-enriched brownie with which it was packaged? Of course, it is absurd. But it is an inevitable consequence of the way the UK’s VAT rules try to draw distinctions that cannot sensibly be sustained.

FT Alphaville rightly lavished 5,000 words on the flapjack tribunal, which we can add to the infamous Jaffa Cake controversy — in which what is self-evidently a fancy chocolate biscuit was ruled to be a cake for tax purposes, and to the more recent case of the giant marshmallows, which were ruled to be an ingredient for toasted-marshmallow-and-cookie sandwiches (zero rated) rather than a standalone sweet (20 per cent rated).

That post, and various other writings (including my own), should be required reading for any politician looking to carve out exceptions to GST. It should be required reading for journalists writing on the subject, and for academics writing 'expert opinion' for media, such as this one:

While economists have argued that removing GST from foods is an expensive and complex exercise in terms of administration, and public health experts have argued that the approach is inequitable because it is not targeted to lower-income households (both arguments raised by parties opposing the bill), we need to start somewhere and focus on the changes we can make now to relieve families of the burden of high food costs.

New Zealand’s approach to taxing food differs from that of comparable countries including Australia, Canada, and the UK, where most basic foods purchased at the supermarket are exempt from GST (or VAT, as it is known in the UK). In these countries, basic foods are viewed as essentials and are therefore not subject to a consumption tax, to keep the foods more affordable for consumers.

Yes, New Zealand's approach to GST is different to those other countries. Our approach is better. Do we really want resources being tied up in court cases deciding whether a flapjack is a muesli bar, a jaffa cake is a biscuit, or a giant marshmallow is a sweet? That's what we would get as soon as we start to carve out exceptions to the comprehensive GST that we currently have.

New Zealand's GST has a quiet simplicity. There are no arguments to be had about whether a good or service attracts GST or not. The exceptions (financial services, residential rents) are for the most part clear and obvious. If government is concerned about the cost of food for low-income families, they should provide targeted assistance to low-income families, and leave GST alone. On the other hand, I am very much looking forward to starting up SpudCars (see here).

Read more:

Sunday, 19 May 2024

Mike Masnick on the social media-mental health debate

There's an ongoing debate about whether social media has a causal negative impact on mental health. The latest iteration of this debate was triggered by the release of Jonathan Haidt's book The Anxious Generation. I wrote briefly about the debate between Haidt and Candice Odgers last month. Around the same time, Mike Masnick wrote a long article on the Daily Beast clearly against Haidt's perspective. Here's one important part of the article:

Reading Haidt’s book, you might think the evidence supports his viewpoint, as he presents a lot of it. The problem is that he’s cherry-picking his evidence and often relying on flawed studies. Many other studies by those who have studied this field for many years (unlike Haidt), find little to no support for Haidt’s analysis. The American Psychological Association, which is often quick to blame new technologies for harms (it did this with video games), admitted recently that in a review of all the research, social media could not be deemed as “inherently beneficial or harmful to young people.”

Two recent studies from the Internet Institute at Oxford used access it had obtained to huge amounts of data that showed no direct connection between screen time and mental health or social media and mental health. The latter study there involved data on nearly 1 million people across 72 countries, comparing the introduction of Facebook with widely collected data on mental health, finding little to support a claim that social media diminishes mental health.

To get around this unfortunate situation, Haidt seems to carefully pick which data he uses to support his argument. For example, Haidt mentions the increase in depression and suicide among teen girls from 2000 to the present. The numbers started rising around 2010, though they are still relatively low.

What’s left out if you start in 2000 is what happened earlier. Prior to 2000, the numbers were on par with what they were today in the late 1980s and early 1990s, when no social media existed. Across the decades, we see that the late ’90s and early 2000s were a time when depression and suicide rates significantly dipped from previous highs, before returning recently to similar levels from the ’80s and ’90s.

It’s worth studying why it dropped and then why it went up again, but by starting the data in 2000, Haidt ignores that story, focusing only on the increase, and leading readers to the false conclusion that we are in a unique and therefore alarming period that can only be blamed on social media.

Masnick also highlights that suicide rates (which are indicative of extreme negative mental health) have not seen an uptick in all countries since 2010, or even in all Western countries, pointing to these data:

I felt a bit obliged to include the figure, since it shows the overall downward trend in youth suicide rates in New Zealand. It doesn't break the data down by gender, and part of Haidt's argument is that the negative effects are concentrated among young women. However, if you look at data for young women for those same countries, you would have to squint really hard to see any uptick in suicide rates starting around 2010:

Masnick concludes that:

In the end, neither the data nor reality support his position, and neither should you. Kids and mental health is a very complex issue, and Haidt’s solution appears to be, in the words of H.L. Mencken: clear, simple, and wrong.

Clearly, there is more to come in this debate. I remain agnostic, but very cautious about claims on both sides that are not supported by clearly causal evidence.

[HT: Marginal Revolution]

Read more:

Saturday, 18 May 2024

More impatient people are more likely to commit crime

Gary Becker's famous model of rational crime suggests that criminals weigh up the costs and benefits of crime (and engage in a criminal act if the benefits outweigh the costs). Time preferences matter in this model, because the benefits of a criminal act are usually realised immediately, whereas the greatest costs (including the penalties if the criminal is caught) occur in the future. So, someone with a higher discount rate (a greater preference for the present over the future) will be more likely to commit crime, because the costs will be more heavily discounted. In other words, people who are more impatient (and therefore have a greater preference for the present) will be more likely to commit crime.
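
A simple worked example shows how the discount rate tips the decision. All of the numbers (the benefit, the penalty, the probability of being caught, and the delay before punishment) are invented for illustration:

```python
# A back-of-the-envelope illustration of Becker's logic with discounting.
# All numbers are invented for illustration.

def net_gain_from_crime(benefit, penalty, p_caught, discount_rate, years_to_punishment):
    """Immediate benefit minus the discounted expected cost of punishment."""
    expected_cost = p_caught * penalty / (1 + discount_rate) ** years_to_punishment
    return benefit - expected_cost

for r in (0.05, 0.50):   # a patient person vs a very impatient person
    gain = net_gain_from_crime(benefit=1_000, penalty=5_000, p_caught=0.3,
                               discount_rate=r, years_to_punishment=2)
    print(f"discount rate {r:.0%}: net gain from crime = {gain:,.0f}")

# The patient person faces a negative net gain and does not offend;
# the impatient person faces a positive net gain and does.
```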

Is there evidence to support this idea that more impatient people are more likely to commit crime? This new article by Stefania Basiglio (Università degli Studi di Bari), Alessandra Foresta (University of Southampton), and Gilberto Turati (Università Cattolica del Sacro Cuore), published in the Journal of Economic Psychology (ungated earlier version here), provides some supporting evidence. They make use of data from the National Longitudinal Survey of Youth 1997 (NLSY97), using data from the 2008-2011 survey waves (with a sample of nearly 6000 observations), when the cohort was aged 24 to 31 years old. Their dependent variables are self-reported measures of whether the survey respondent engaged in property crimes, violent crimes, or drug crimes, in the previous twelve months.

One of the interesting aspects of this study is how Basiglio et al. chose to measure impatience. Because the NLSY doesn't include a survey measure of time preference, they instead use a variety of variables that are expected to be correlated with impatience. As they explain:

...we consider several similar observed variables 𝑋, available in the NLSY97, which represent individual behaviors for which impatience plays a role: the saving rate 𝑋1 is defined as the ratio between total savings and income; smoking 𝑋2 and drinking 𝑋3 are measured by the average number of cigarettes smoked in the past month and the average number of drinks consumed in the past month, respectively; obesity 𝑋4 is defined by a dummy which is equal to one if the Body Mass Index is equal or higher than 30; risky sexual behavior 𝑋5 is measured by the number of sexual partners that the individual had in the previous 12 months; 𝑋6 is a dummy for using hard drugs (like cocaine or meth) in the previous 12 months; 𝑋7 is a dummy for participating to a worship service at least once a month; finally, we define a dummy about marital status 𝑋8, which takes value one if the individual is married.

Basiglio et al. then use factor analysis to extract a single factor (a single variable) that best summarises the information contained in all eight of those variables. This is a useful approach to reducing the dimensionality of data, but it is also a handy way to proxy for a latent variable like impatience. The proxy variable seems to pick up the right correlations, with each variable showing the relationship that we would expect with impatience:

Factor loadings take up the expected signs: we find a negative correlation between being married, obesity, and having attended a worship service and the extracted common factor 𝐹1, while we find a positive association for all the other proxies. Consistent with our expectations, the strongest positive linkages are with drinking (0.260), smoking (0.204), and hard drug use (0.203); the strongest negative linkages are with attending a worship service (−0.267) and being married (−0.210).
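
For readers curious about the mechanics, here is a sketch of the general approach (not the authors' code) using off-the-shelf factor analysis. The file and column names are assumptions based on the variables listed in the quote above:

```python
# Extract a single common factor from several behaviours thought to
# correlate with impatience, as a proxy for the latent trait.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("nlsy97_behaviours.csv")   # hypothetical extract
proxies = ["saving_rate", "cigarettes", "drinks", "obese",
           "sexual_partners", "hard_drugs", "worship", "married"]

X = StandardScaler().fit_transform(df[proxies])
fa = FactorAnalysis(n_components=1, random_state=0)
impatience = fa.fit_transform(X)[:, 0]      # one factor score per respondent

# The loadings show how each behaviour correlates with the extracted factor
# (the paper reports negative loadings for married, obese, and worship).
print(pd.Series(fa.components_[0], index=proxies))
```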

Using this proxy variable as their measure of impatience, and controlling for age, ethnicity, education, occupation, and whether the individual had been jailed in the previous year, they find that:

The marginal effect for impatience is positive and significant for all types of crimes... The result suggests that being more patient is associated with a lower probability of committing crimes. The correlation of our proxy for impatience is stronger for drug crimes.

One of the main problems with the proxy variable is that it doesn't have a natural interpretation in terms of the size of the coefficients. However, taking the results as given in Table 4 of the article, a one-standard-deviation increase in the impatience measure is associated with a 5.4 percentage point higher probability of having committed any crime, a 1.8 percentage point higher probability of having committed a violent crime, a 3.1 percentage point higher probability of having committed a property crime, and a 6.4 percentage point higher probability of having committed a drug crime. Those are substantial effects, given that the baseline probabilities of committing those crimes are 6 percent for any crime, 2 percent for violent crime, 4 percent for property crime, and 6 percent for drug crime.

Basiglio et al. then look at differences by demographic group, and find no differences between men and women, or between people whose parents have college education compared to those whose parents have no college education. They do find some evidence that the effects of impatience are larger for non-Black/non-Hispanic men than for other men, for total crimes and drug crimes only. It is difficult to see what we can take away from the demographic analyses though - we would need some theory as to why impatience would affect different groups' crime decision-making differently.

Basiglio et al. also find that the results remain after controlling for risk preferences, which is an important robustness check, since people who are more impatient may also be those who are willing to take on more risk. Now, the results are not causal, but they do suggest that impatient people are more likely to commit crime.

If we accept these results, what are the policy implications? Basiglio et al. suggest that education may be a solution to reducing crime, to the extent that it both increases the opportunity costs of crime and makes people more patient. However, I think that there is a more immediate solution, which is to make the punishment of crime more immediate and more certain. If people who heavily discount the future are more likely to commit crimes, then the costs of committing crime (and being caught) have to be more severe, or (and this may be more effective overall) the punishment needs to come more quickly after the crime is committed. Either way, that probably means more resources devoted to policing and the criminal justice system.

Friday, 17 May 2024

This week in research #23

Here's what caught my eye in research over the past week (another quiet week, it seems):

  • Kenny (open access working paper) outlines the good and the bad of the World Bank's extreme poverty line (could this finally signal the end of the debates about this measure?)
  • Bloem and Rahman (open access) show that how statements are framed has a substantial effect on the measurement of attitudes
  • Krauss (open access) looks at the Nobel Prize winners in economics, and finds that major advances in the field of economics are mainly brought about by methodological innovation, that is, by developing new and improved research methods

Thursday, 16 May 2024

New York restaurants find a new way to respond to the minimum wage

The New York Times (paywalled, but also available here) reported last month:

At Sansan Chicken in Long Island City, Queens, the cashier beamed a wide smile and recommended the fried chicken sandwich.

Or maybe she suggested the tonkatsu — it was hard to tell, because the internet connection from her home in the Philippines was spotty.

Romy, who declined to give her last name, is one of 12 virtual assistants greeting customers at a handful of restaurants in New York City, from halfway across the world.

The virtual hosts could be the vanguard of a rapidly changing restaurant industry, as small-business owners seek relief from rising commercial rents and high inflation. Others see a model rife for abuse: The remote workers are paid $3 an hour, according to their management company, while the minimum wage in the city is $16.

The workers, all based in the Philippines and projected onto flat-screen monitors via Zoom, are summoned when an often unwitting customer approaches. Despite a 12-hour time difference with the New York lunch crowd, they offer warm greetings, explain the menu and beckon guests inside.

The jobs that are most often used to explore the disemployment effects of the minimum wage include fast food workers and retail workers. If those jobs can increasingly be off-shored to foreign workers not subject to the minimum wage, then it seems to me that the disemployment effects of the minimum wage (which are still contested in the literature - see the links at the end of this post) are likely to become more substantial.

To see why, consider the price (wage) elasticity of demand for labour in fast food restaurants (or retail). One of the factors that affects the price elasticity of demand is the availability of substitutes. Foreign workers in the Philippines connecting virtually to the restaurant are substitutes for in-person workers in New York. When new substitutes become available, demand becomes more elastic - the buyers become more sensitive to the price. In this case, the availability of cheaper foreign workers makes employers' demand for local workers more elastic.

To see why the disemployment effect would be larger with more elastic demand for labour, consider the diagram of the labour market below. The equilibrium wage is W0, and the equilibrium quantity of labour is Q0. The minimum wage is shown by WMIN. At that wage, the quantity of labour supplied (the number of workers wanting a job) is QS, and with the original (more inelastic, or steeper) demand for labour (DL0), the quantity of labour demanded (the number of jobs available) is QD0. The minimum wage creates unemployment equal to the difference between the quantity of labour supplied (QS) and the quantity of labour demanded (QD0). The disemployment effect of the minimum wage is the number of jobs lost, which is the difference between Q0 (the number of jobs without the minimum wage) and QD0 (the number of jobs with the minimum wage). Now, if the demand curve is DL1 (more elastic, or flatter), then the quantity of labour demanded is QD1. The amount of unemployment is the difference between QS and QD1, which is larger than when the demand curve is less elastic. The disemployment effect of the minimum wage is the difference between Q0 and QD1, which is also larger than when the demand curve is less elastic.
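
Here is a small numerical version of that diagram. The curve parameters are invented, but both demand curves are calibrated to pass through the same original equilibrium so that the comparison is clean:

```python
# A numerical illustration of the diagram described above; all parameters
# are invented. Labour demand: Qd = a - b*W, labour supply: Qs = c + d*W.

def minimum_wage_outcomes(a, b, c, d, w_min):
    w_eq = (a - c) / (b + d)              # equilibrium wage without a minimum
    q_eq = a - b * w_eq                   # equilibrium employment
    q_demanded = a - b * w_min            # jobs offered at the minimum wage
    q_supplied = c + d * w_min            # workers wanting a job at that wage
    return {"job losses": q_eq - q_demanded,
            "unemployment": q_supplied - q_demanded}

supply = dict(c=-20, d=10)                # the same labour supply in both cases
print("Less elastic demand (DL0):",
      minimum_wage_outcomes(a=160, b=5, **supply, w_min=16))
print("More elastic demand (DL1):",
      minimum_wage_outcomes(a=280, b=15, **supply, w_min=16))
# Both demand curves pass through the same original equilibrium (W=12, Q=100),
# but the flatter (more elastic) curve produces larger job losses (60 vs 20)
# and more unemployment (100 vs 60) at the same minimum wage.
```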

So, the takeaway message from our model is that we can expect that the transition to more remote workers in fast food and retail will increase the disemployment effects of the minimum wage.

[HT: Marginal Revolution]

Read more:

Wednesday, 15 May 2024

Homebrewing as the gateway to craft brewing

Despite some fluctuations, and concerns that the market may have reached its peak, one of the key trends in the brewing industry (both in New Zealand and in most Western countries) has been the rise of craft brewing. However, craft brewing remains quite concentrated in some areas rather than others. What might explain the regional concentration of craft brewing?

That is essentially the question addressed in this 2019 article by Michael McCullough (California Polytechnic State University), Joshua Berning (Colorado State University), and Jason Hanson (History Colorado), published in the journal Contemporary Economic Policy (sorry, I don't see an ungated version online). Specifically, they look at the effect of legalising homebrewing on the commercial brewing industry across states in the U.S. As they explain:

Amendment XXI, ratified in 1933, repealed Prohibition and made the commercial production of beer and other alcoholic beverages legal again in the United States, although it left it to the states to allow and regulate brewing, vinting, and distilling within their borders. Importantly, the amendment solely omitted homebrewing, the brewing of beer at home for personal consumption, from the list of legal activities...

From 1933 to 1978, 13 states affirmed the right to homebrew in spite of the federal ruling... In 1978, President Carter signed H.R. 1337 which legalized homebrewing, although federal law deferred to state statutes. At that time, only an additional nine states opted in to legalize homebrewing. The remaining 28 states gradually legalized homebrewing over the next 35 years, with Alabama and Mississippi being the last in 2013.

McCullough et al. look at how the date that a state legalised homebrewing affected the commercial brewing industry, hypothesising that:

...states that legally restricted homebrewing may have hindered the development of future brewmasters and therefore the expansion of their own brewing industry...

However, there are some challenges here, because states that legalised homebrewing earlier may have done so because of high demand for beer, so any relationship between brewing and legalisation of homebrewing would arise because both are driven by beer demand (a common cause, or confounding). Or, large breweries might lobby for less restrictive laws on all brewing, thereby cultivating a homebrewing culture that would also lead to more demand for their products (reverse causation). So, a simple model that looks at the relationship between legalisation of homebrewing and brewing would not demonstrate a causal relationship, just correlation.

McCullough et al. address this problem using an instrumental variables model, which involves finding an instrument that is correlated with homebrewing legalisation, but which affects commercial brewing only through its effect on homebrewing legalisation. They argue that the number of years since each state repealed its antimiscegenation laws (laws prohibiting marriage between different races) is such an instrument, because it measures "a state’s willingness to pass legislation in favor of individual rights", and because those laws were pure-and-simple racism, they aren't related to brewing [*].
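
For those unfamiliar with the method, here is a sketch of the two-stage least squares logic behind an instrumental variables model. This is not the authors' specification, and the file and variable names are assumptions for illustration:

```python
# A sketch of two-stage least squares (2SLS) for the homebrewing question.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_brewing_panel.csv")   # hypothetical state-year panel

# First stage: homebrewing legalisation explained by the instrument.
first = smf.ols(
    "homebrew_legal ~ years_since_antimiscegenation_repeal + C(year)", data=df
).fit()
df["homebrew_legal_hat"] = first.fittedvalues

# Second stage: breweries per capita on the *predicted* legalisation status.
# (Doing 2SLS manually understates the standard errors; a package such as
# linearmodels would apply the correct adjustment.)
second = smf.ols(
    "breweries_per_million ~ homebrew_legal_hat + C(year)", data=df
).fit()
print(second.params["homebrew_legal_hat"])
```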

Using data from 1970 to 2012, McCullough et al. find that:

...the legalization of homebrewing has a positive effect on the average number of breweries per capita. The estimate suggests roughly 7.1 breweries per 1 million people.

McCullough et al. also show that there is an increase in the growth rate of brewing after homebrewing is legalised. Moreover, when comparing how legalisation of homebrewing affected breweries of different sizes, they find that:

...the change in homebrewing laws had a significant effect on the number of small breweries. There were roughly 5.6 more breweries per 1 million people. Furthermore, the number of breweries is growing over time... Looking at medium-sized breweries... we find that the effect of legalization is smaller and not growing significantly over time...

There is no significant change in the number of larger breweries per million people following changes in homebrewing laws...

These results are consistent with their hypothesis, because if homebrewing leads to the development of brewmasters, you would expect a greater number of small breweries to develop, since that's what the brewmasters would create first (to become a middle-sized or big brewery, you probably have to start out as a small brewery first). McCullough et al. also show that there is a statistically significant effect on craft beer production, where:

...craft production increases significantly following the legalization of homebrewing. The estimated impact is roughly 85,000 barrels per million people.

So, it seems clear that homebrewing is the gateway to craft brewing. As McCullough et al. conclude:

While one cannot draw the conclusion that the mere legalization of homebrewing was the main driver for the existence of the beer brewing industry as it is today, one can say that it would not exist in its current fashion without such political action.

*****

[*] However, as McCullough et al. partially note in the paper, states that are more religious and conservative may be more likely to maintain antimiscegenation laws, and also more likely to be in favour of temperance. McCullough et al. wave this away by saying that they control for alcohol laws like Sunday sales bans, as well as state fixed effects, but including those variables is only likely to partially allay concerns about the instrument.

Tuesday, 14 May 2024

More results on role models and the gender gap in economics

The gender gap in economics remains difficult to shift, and lots of economists are working on ways to make a difference at the margin. Back in 2018, I wrote about an intervention that brought female role models into class that seemed to have a positive effect on female students continuing their studies in economics beyond the first-year paper (based on this paper by Porter and Serra). It seemed promising, so I tried something similar in my classes later that year, bringing in female and Māori alumni to give some short guest lectures. It didn't appear to have a great effect (and isn't something I could test experimentally, as there is no obvious control group), squeezed the rest of the curriculum, and was challenging to coordinate. So, the trial was short-lived.

Perhaps I should have persisted? This 2023 NBER Working Paper (sorry, I don't see an ungated version online) by Arpita Patnaik (Charles River Associates), Gwyn Pauley (University of Wisconsin-Madison), Joanna Venator (Boston College), and Matthew Wiswall (University of Wisconsin-Madison) broadens the case for having role models come into class for short guest lectures. In their experiment undertaken at the University of Wisconsin-Madison:

...we invited alumni who graduated from the department to give a fifteen minute presentation to the Econ 101 courses in the sixth week of classes. The study took place during the 2018-19 academic year, encompassing the Fall 2018 and Spring 2019 semesters...

The alumni speakers were given a series of questions as prompts, including questions about their first jobs out of college, their experiences in economics course work, the skills they think they gained in the economics courses, and how an economics degree helps them in the work force.

Like the Porter and Serra experiment, this one is fairly low-key. Patnaik et al. evaluate it by comparing students who were in lecture groups where the guest speakers presented with students who were in lecture groups where no guest speaker presented. They also look at the results separately by gender, and the results for different gender combinations of speakers and students. They find that:

...the alumni intervention increases the likelihood that students continue in economics by taking intermediate microeconomics by 2.1 percentage points or 11% more than the baseline level. We find that these effects are much larger when we look at the effects separately by gender of the speaker and gender of the student. A male speaker increases the likelihood that male students take intermediate microeconomics by 8.1 percentage points or 36% (from the base rate of 22.5%) and has no significant effect on women. Female speakers increase the likelihood that female students take intermediate microeconomics by 5.0 percentage points or 40% (from the base rate of 12.4%) and have no significant effect on men.

They also find that:

...students are affected by speakers similar to themselves. Our speakers were all White and two worked for a Wisconsin-based company... We find that the effects of the intervention on course take-up are larger for White students and state residents.

They show that the additional economics majors are mainly shifting from business majors. Finally, they show that there are negligible impacts (if any) on student grades, either in the current course or in following courses in economics.

Taken all together, these results suggest that we probably should bring in alumni speakers to talk to students, if we want to encourage greater enrolment in the economics major. The effects are surprisingly large (an increase from 12.4 to 17.4 percent of female students going further in economics), which suggests that they are probably worth what is likely to be a fairly modest cost. And we need to be willing to pay the cost - as I noted above, we stopped the trial in my classes in part because it became difficult to coordinate the guest speakers.

However, we need to be mindful about who we bring in as guest speakers. If we also want to close the gender gap, we should be inviting more female speakers than male speakers. And, if we want to close the gap for other underrepresented groups, we should be inviting speakers from those groups.

None of the results from this paper are terribly surprising, but it is important that we keep these things in mind. Low-touch interventions can sometimes have important positive impacts.

[HT: Marginal Revolution, last year]

Read more:

Monday, 13 May 2024

If a salary seems too good to be true, it's probably a compensating differential

The New Zealand Herald reported last week:

It’s not often you get a chance to relocate to a coastal town with a beautiful white sand beach along with a whopping pay rise.

But that’s exactly what is on offer, and it’s right on Kiwis’ doorstep - sort of.

Western Australian beachside community Bremer Bay is offering one lucky person an eye-watering $300,000 to $450,000 wage, a rent-free five-bedroom home and a car if they relocate to their remote town.

So what does the job entail?

Bremer Bay, located on the south coast between Albany and Esperance, is in desperate need of a new general practitioner (GP).

According to Indeed, the average Australian GP earns an annual salary of $233,304.

As I've noted before, when a job offers a pay package that is far higher than elsewhere, you ought to be thinking: what's wrong with this job? There must be something about the job that means the employer has to pay a much higher salary in order to attract people to work there.

Wages differ for the same job in different firms or locations. Consider the same job in two different locations. If the job in the first location has attractive non-monetary characteristics (e.g. it is in an area that has high amenity value, where people like to live), then more people will be willing to do that job. This leads to a higher supply of labour for that job, which leads to lower equilibrium wages. In contrast, if the job in the second area has negative non-monetary characteristics (e.g. it is in an area with lower amenity value, where fewer people like to live), then fewer people will be willing to do that job. This leads to a lower supply of labour for that job, which leads to higher equilibrium wages. The difference in wages between the attractive job that lots of people want to do and the unattractive job that fewer people want to do is called a compensating differential.
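
A compensating differential is easy to illustrate with a little arithmetic. The dollar value that a GP places on the downsides of the remote job below is entirely hypothetical; the city salary is the Indeed average quoted above:

```python
# A stylised illustration of a compensating differential.
# The amenity valuation is a hypothetical number, not from the article.

average_gp_salary = 233_304        # Indeed's average Australian GP salary
amenity_value_remote = -100_000    # hypothetical dollar value a GP places on
                                   # the isolation and the summer workload

# The remote job must pay enough that total compensation (wage plus the
# value of the job's non-monetary characteristics) is at least as good:
wage_needed = average_gp_salary - amenity_value_remote
print(f"Wage needed to attract a GP to Bremer Bay: ${wage_needed:,.0f}")
# About $333,000 - within the advertised $300,000 to $450,000 range.
```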

Is the high salary on offer for a GP in Bremer Bay a compensating differential? Consider this:

Bremer Bay is a five-and-a-half hour drive from Perth and as of 2021 had a population of just over 200.

Over the past 10 years, the beachside town’s population has swelled as high as 6500 people during the Christmas and holiday periods.

The successful applicant will be the only GP in a rural community that's a long way from urban amenities (that might be a positive aspect for some people, but negative for many). However, being the only GP for 6500 people during summer has to be a strong downside. A beachside job sounds great, but if you spend your summer working flat out instead of enjoying the sun, that kind of destroys the positive aspect of it. And that is probably why the town is having trouble attracting anyone for this position, and why they have to offer such a high salary.

Read more:

Saturday, 11 May 2024

This week in research #22

The blog has been pretty quiet this week, while I've dealt with a number of other things. However, research marches on, and this is what caught my eye over the past week:

  • Ng and Riehl (with ungated earlier version here) find that less-prepared students have higher earnings returns to selective STEM programs than more-prepared students in Colombia, even though they are less likely to complete these programs
  • Kunaschk (open access) finds negligible overall employment effects of the minimum wage on hairdressers in Germany (a group of workers that haven't really been looked at in detail in this regard)
  • Saez (open access) gives us the lowdown on last year's John Bates Clark medal winner, Gabriel Zucman
  • Ruggles (open access) discusses how the 2020 U.S. Census Confidentiality Program has not solved the privacy issues from earlier Census data, and has reduced the quality of the available data

And the latest paper from my own research (or, more accurately, from the thesis research of my successful PhD student Mohana Mondal, on which I am a co-author along with Jacques Poot):

  • Our new article (open access) in the journal New Zealand Population Review describes the development, calibration and validation of a dynamic spatial microsimulation model for projecting small area (area unit) ethnic populations in Auckland - this was a very ambitious undertaking as part of Mohana's PhD research, and demonstrates that microsimulation can be a useful tool for small-area population and ethnic projections, over the short term

Monday, 6 May 2024

Study abroad in the Erasmus programme doesn't make students worse off

One of my ECONS101 tutors is off on study abroad later this year, to the University of California at Berkeley. What an incredible opportunity for her! I am constantly surprised, though, at how few of my students take the opportunity for a trimester abroad. In many cases, it is possible to do the period of study abroad while only paying New Zealand university fees (plus the costs of travel and accommodation, of course).

If a rational student is weighing up whether to study abroad for a trimester (or a whole year), they should be weighing up the costs and the benefits of that decision. The costs are the incremental (extra) costs of studying abroad, while the benefits are the value of the incremental learning (if any), as well as the international experience, cultural competency, and so on, or at least what those 'transferable skills' are worth in the labour market after graduation.

Sadly though, study abroad has a reputation for slowing down student progression, meaning that they take longer to graduate. That may be because not all credits studied while overseas will cross-credit back to the student's original university, or the student doesn't put in as much effort while overseas, when faced with the competing priorities of study and sightseeing.

However, that reputation for slowing progress may not be deserved, as shown in this new article by Silvia Granato (European Commission, Joint Research Centre) and co-authors, published in the journal Economics of Education Review (open access). They look at the outcomes for students at the University of Bologna who applied to participate in the Erasmus programme, which allows students to study abroad for up to a year in another European Union country.

The University of Bologna is the oldest university in Europe, and the second largest public university in Italy, so the results may have broader applicability. Granato et al. take advantage of the fact that students who apply to the programme are assigned a score based on a combination of their GPA, their motivations for studying abroad, and their language proficiency. By comparing students just above the cutoff with students just below it, Granato et al. can estimate the impact of the Erasmus programme on student outcomes (this is what we refer to as a regression discontinuity design [*]). The impact measured in this way is a good estimate of the causal impact of the programme for students who are close to the cutoff score. For students further away from the cutoff, the effects could be different.
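To see the intuition, here is a minimal sketch of a (sharp) regression discontinuity estimate using simulated data. The jump size, bandwidth, and variable names are all made up, and this is not the specification that Granato et al. use:

```python
# Minimal sharp regression discontinuity sketch with simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
score = rng.uniform(-50, 50, n)          # application score, centred on the cutoff (0)
treated = (score >= 0).astype(float)     # above the cutoff => Erasmus place
# Hypothetical outcome: final grade with a 2-point jump at the cutoff
grade = 100 + 0.05 * score + 2.0 * treated + rng.normal(0, 5, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing the slope of the score to differ on each side
bandwidth = 20
mask = np.abs(score) <= bandwidth
X = sm.add_constant(np.column_stack([treated, score, treated * score])[mask])
fit = sm.OLS(grade[mask], X).fit()
print(f"Estimated jump at the cutoff: {fit.params[1]:.2f} points")
```

In practice, researchers also check that applicants cannot manipulate their score around the cutoff, and test how sensitive the estimate is to the choice of bandwidth.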

Anyway, Granato et al. use data from students who made an application between the 2013/14 and 2018/19 academic years. They look at Bachelor's and Master's degree students separately, and find that:

...spending a portion of university studies abroad does not have an impact on the probability of graduating on time for either group. Moreover, it has a positive effect on the final graduation mark of bachelor’s students only. The estimates show that the latter obtain a 2-point premium in their final grade, which is approximately a third of one standard deviation of the final grade in the estimation sample.

Then, looking at the heterogeneity of the effects, they find that:

...the effect on the final graduation mark is remarkably stronger for graduates in science, technology, engineering and mathematics (STEM) and students who apply for the Erasmus programme earlier in their studies, suggesting that the observed impact might be related to the content of exams taken during the study period abroad.

So, while bachelor's students do achieve a slightly higher final grade overall, that appears to be driven by their performance while overseas. Exploring this further, Granato et al. find that:

...the positive effect on the final graduation mark appears to be driven by programmes in host institutions of relatively lower quality – and thus arguably with a relatively lower quality of learning inputs – and, in particular, when the duration of the period abroad is longer.

Taken together, this doesn't suggest that students really benefit from their study abroad. However, it isn't making them any worse off either. But what about after graduation? On that point:

We merge student administrative data with survey data on student choices and outcomes one year after graduation and investigate the potential impact of study abroad on the probability of continuing studies and of being employed. We do not find significant effects on these outcomes, although our analysis is likely hampered by the small sample size.

Although the estimates are statistically insignificant, Bachelor's degree students are 10.2 percentage points more likely to enrol in a Master's degree programme abroad after participating in the Erasmus programme, while Master's degree students are 7.7 percentage points more likely to be employed and 4.2 percentage points more likely to be employed abroad. That is suggestive evidence of positive outcomes from the study abroad period.

Overall though, the best we can see is that the programme doesn't make students any worse off. Again, it is worth reiterating that these results apply to students who are close to the cutoff score. Students who are well above, or well below, the cutoff score may experience different impacts (and we know nothing about those impacts from these results).

*****

[*] Actually, because not all students who are offered a place accept the offer, the assignment to the Erasmus programme is not perfect. This doesn't undermine identification of the causal effects of the programme, because Granato et al. can use the offer as an instrument for completing a period of study abroad, in what we refer to as a fuzzy regression discontinuity design (it's fuzzy because of the imperfect assignment to treatment).
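For completeness, here is an equally minimal sketch of the fuzzy version, again with simulated data and hypothetical names (for simplicity the outcome has no trend in the score; a real analysis would also control flexibly for the running variable on each side of the cutoff):

```python
# Toy fuzzy regression discontinuity: the offer (crossing the cutoff) is the instrument,
# and actual participation is only imperfectly determined by it. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
score = rng.uniform(-50, 50, n)
offered = score >= 0                                    # instrument: offer of a place
participated = offered & (rng.uniform(size=n) < 0.7)    # ~70% of offers are taken up
grade = 100 + 2.0 * participated + rng.normal(0, 5, n)  # true effect of going abroad: 2 points

near = np.abs(score) <= 20                               # bandwidth around the cutoff
itt = grade[near & offered].mean() - grade[near & ~offered].mean()              # jump in outcomes
take_up = participated[near & offered].mean() - participated[near & ~offered].mean()
print(f"Fuzzy RD (Wald) estimate: {itt / take_up:.2f} points")
```

Dividing the jump in outcomes by the jump in take-up scales the intention-to-treat effect up to the effect for students who actually take up the offer (the 'compliers').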

Sunday, 5 May 2024

Investor sentiment and stock prices

Last week in my ECONS101 class, we covered (among other things) the economics of financial markets. In particular, we looked at explanations for why the value of financial assets rarely equals the discounted expected value of future cash flows or profits (the asset's fundamental value). As part of this, we discussed the formation (and the bursting) of asset bubbles.

Asset bubbles form because of self-fulfilling prophecies: If investors believe that the price of a financial asset is going to go up, many will buy the asset. This raises the demand for the asset, and the price of the asset goes up, which is exactly what investors expected to happen. Because the price is increasing and this increase is due only to investors’ expectations about the future, the price of assets gets pushed beyond the asset’s fundamental value – this is an asset bubble.
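A toy feedback loop makes the mechanism clearer. This is just a sketch of the self-fulfilling dynamic, not a model of any actual market, and the amplification parameter is made up:

```python
# Toy simulation of an expectations-driven price bubble (illustrative only).
# Investors extrapolate recent price growth, and that expectation feeds back
# into demand, pushing the price further and further above its fundamental value.
fundamental_value = 100.0
prices = [100.0, 101.0]            # a small initial rise sparks optimism

for _ in range(20):
    expected_growth = prices[-1] / prices[-2] - 1        # extrapolate last period's growth
    # Buying pressure is stronger the more optimistic investors are (made-up amplification of 1.2)
    new_price = prices[-1] * (1 + 1.2 * expected_growth)
    prices.append(round(new_price, 2))

print(f"Fundamental value: {fundamental_value}")
print(f"Price path: {prices}")
```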

Notice the key role of expectations in the formation of asset bubbles. Investors' expectations can be affected by any number of different things, one of which is their general mood (as shown in this research). In other words, investor sentiment is an important component of investors' expectations, and is therefore an important factor in asset prices. As John Maynard Keynes wrote in The General Theory of Employment, Interest and Money:

...the market will be subject to waves of optimistic and pessimistic sentiment, which are un-reasoning and yet in a sense legitimate where no solid base exists for a reasonable calculation...

Knowing that I would be covering this topic, and the important role of sentiment, I was interested to read this article in The Conversation by Jedrzej Bialkowski and Moritz Wagner (both University of Canterbury), back in May. They have developed an index of investor sentiment for New Zealand. As they explain:

Market sentiment refers to the overall attitude of investors. It is commonly summarised as bullish (expecting increasing prices), bearish (expecting decreasing prices), or neutral (expecting no or only little changes in price). Such sentiment is not always based on fundamentals such as revenue, profitability and growth opportunities...

Every week since January 2020, we asked registered members of the NZSA whether they expected the stockmarket to increase (bullish), decrease (bearish) or stay the same (neutral) over the next six months. The NZSA has about 1,200 members, a quarter of whom receive email invitations to participate in the survey.
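As a rough illustration of how such weekly shares might be tallied, here is a small sketch; the response counts are invented, chosen only so that the shares roughly match the figures quoted below:

```python
# Hypothetical tally of one week's survey responses (labels and counts are invented).
from collections import Counter

responses = ["bullish"] * 48 + ["bearish"] * 19 + ["neutral"] * 53
counts = Counter(responses)
total = sum(counts.values())

shares = {view: counts[view] / total for view in ("bullish", "bearish", "neutral")}
bull_bear_spread = shares["bullish"] - shares["bearish"]   # a common summary of net sentiment

print({view: f"{share:.0%}" for view, share in shares.items()})
print(f"Bull-bear spread: {bull_bear_spread:+.0%}")
```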

Bialkowski and Wagner then use their index of sentiment to explore the New Zealand equity market:

During the first four weeks of this year, expectations that stock prices will rise over the next six months remained elevated at 40%. In other words, 40% of the surveyed investors believe the NZ equity market will increase in the first six months of 2024. At the same time, bearish sentiment, expectations that stock prices will fall over the next six months, fluctuated around 16%.

So, despite the mounting global and local uncertainties, retail investors are optimistic about the equity market. Bullish sentiment is stronger and bearish sentiment weaker than the historical average levels of 28% and 36%, respectively.

On the back of last year’s strong market performance and a better-than-expected economy, investor optimism carries forward.

However, since sentiment is known to be a contrarian indicator, informed investors should be cautious going further into the new year.

There is definitely some hedging of bets in that last sentence. It will be interesting to see how this measure of investor sentiment performs over time, and whether it has any predictive value.

Saturday, 4 May 2024

This week in research #21

Here's what caught my eye in research over the past week:

  • Cakmakli, Demiralp, and Günes find that political leaders' commentary affects the level and the volatility of exchange rates, bond yields, and the risk premium in Türkiye, while for other countries (Brazil, Colombia, Hungary, New Zealand, and the US) only exchange rate volatility is affected
  • Atay, Asik, and Tumen (open access) find that graduating with an honours degree increases the entry wages of males from non-elite universities (but not elite universities) in Türkiye by about 4% on average, with no effect on female graduates (at either elite or non-elite universities)
  • Taralashvili (open access) finds that interstate soft conflicts (such as diplomatic restraints, renegotiation of relations, protests, or boycotts) have a sustained negative impact on bilateral trade flows, using a gravity model of trade

Wednesday, 1 May 2024

Book review: Doing Economics

Last year, I reviewed The Economist's Craft by Michael Weisbach and noted that:

The book does an excellent job of exposing the tacit knowledge of academia - the things that academic economists otherwise learn 'on the job', from the PhD through to the end of their academic career.

Weisbach's book was published in 2021, and barely a year later Marc Bellemare's 2022 book Doing Economics was released. The overlap between the two books is extensive, as both aim to reveal and explain the tacit knowledge of academic work as an economist. Bellemare's book is subtitled "what you should have learned in grad school - but didn't".

While I was highly impressed with Weisbach's book, I actually think that Bellemare does an even better job of collating the important advice. The book is separated into several chapters, each devoted to one aspect of work as an academic economist: writing papers, giving talks, navigating peer review, finding funding, doing service, and advising students. Each chapter outlines the key things that academic economists need to know. Importantly, the intended audience is not students in top US PhD programmes, and this may be one aspect that sets this book apart from Weisbach's earlier book. Bellemare was unsuccessful in getting tenure at a top university, and is a Professor at the University of Minnesota (which, to be fair, is still a good university - it's just not Cornell, where he did his PhD, or Duke, where he was an assistant professor), and that may explain the different focus.

I must admit a certain amount of bias in preferring this book, though. Bellemare's advice is scarily similar to advice that I already give to students. So, recommending this book is a little like recommending my own advice. For example, Bellemare recommends developing skills in writing and understanding the literature by engaging in what the American philosopher Mortimer Adler referred to as 'inspectional reading':

...reading the introduction, looking at the methods and results, and (maybe) reading the conclusion before moving on to the next item on one's reading list.

While I don't recommend this approach to PhD students, it is one that I frequently recommend for undergraduate students, such as those in the Waikato Economics Discussion Group. On the question of how much reviewing of journal articles to do, Bellemare asks us to suppose that:

...the first journal you ever send a paper to sends out your manuscript to two reviewers, and they both recommend that your paper be accepted "as is." This means that for your one submission, you have benefited from two reviewers giving your manuscript a thorough read. In this hypothetical scenario, for the system to work, you should perform at least two reviews.

This 'net zero' approach to reviewing is one that I have applied for many years, noting of course that the exact number of reviews to undertake depends on whether the papers you submit have co-authors, how many different journals you submit to that provide reviewer comments, and so on (I sketch this arithmetic below). Bellemare moderates this advice, though, by noting that early on, emerging researchers should review as often as possible, as that gives them exposure to bad papers as well as good papers, and is an excellent learning tool. In my career, I did far more reviewing when I was a new and emerging academic, almost never refusing a request to complete a review (I'm a lot more selective now). As Bellemare notes:

Many economists see refereeing as an unfortunate tax they need to pay to get their own papers reviewed and published. Unlike a tax, however, there is almost always something to be learned from reviewing - and from reviewing widely.
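To make the 'net zero' arithmetic concrete, here is a rough sketch of my reading of the rule; the function and its parameters are my own construction, not a formula from the book:

```python
# Back-of-the-envelope sketch of the 'net zero' reviewing rule (my own reading of it):
# aim to do at least as many reviews as your submissions consume, adjusted for
# co-authors and for submissions to additional journals.
def reviews_owed(journals_submitted_to: int, reviewers_per_submission: int,
                 num_authors: int) -> float:
    """Approximate number of reviews 'consumed' per author for one paper."""
    return journals_submitted_to * reviewers_per_submission / num_authors

# A sole-authored paper reviewed by two referees at a single journal: owe ~2 reviews
print(reviews_owed(journals_submitted_to=1, reviewers_per_submission=2, num_authors=1))
# A three-author paper that went to two journals, two referees each: owe ~1.3 reviews each
print(round(reviews_owed(journals_submitted_to=2, reviewers_per_submission=2, num_authors=3), 1))
```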

And on responding to reviewers, Bellemare suggests an exhaustive approach that is very similar to my own practice: copying the new text from the revised paper into the response to reviewers, so that reviewers don't need to refer back to the paper to confirm that you have addressed their comments. It's an approach that works wonders in getting papers accepted from the revise-and-resubmit stage without further rounds of review.

The book has so much good advice that it is difficult to do it justice. However, not all of the advice is good. When discussing poster presentations, Bellemare starts by noting that "I must confess to having never prepared a poster for presentation at a conference". He probably should have stopped there, as the advice on posters is not great. There are resources that he could have drawn on to provide some value in this section, but instead the advice is banal and largely unhelpful.

Nevertheless, that is a small gripe about a book that is otherwise excellent. The outline of Keith Head's formula for the introduction of a paper (which you can read here) is a great reminder of how to structure that section of a paper. There are also great sections on what to put in an abstract, and why separate literature review sections are generally unnecessary for most journal articles.

Current or future PhD students and early career researchers should definitely read this book, and supervisors of PhD students should recommend it to their students. While I previously considered giving my PhD students a copy of Weisbach's book when they reach confirmed enrolment (after the first six months of their PhD journey, when their full research proposal is complete and has been approved), I'm going to switch to this book instead. Or perhaps I will give them one, and encourage them to read the other. Highly recommended!