Tuesday 31 December 2019

Marijuana legalisation and the displacement of drug dealers

What do marijuana dealers do when their illegal product becomes legal? Perhaps they become legitimate business people and open a store. However, they have a particular set of skills that is more aligned with criminal enterprise than with legitimate retail, so perhaps they move into selling other illegal goods, like harder drugs. That is the question that Heyu Xiong (now at Case Western Reserve University) addresses in his 2018 job market paper, which, as far as I can tell, has not yet been published.

The research question, of how drug dealers respond to legalisation, is important, particularly in countries that have yet to legalise, such as New Zealand. The idea that legalisation simply eliminates criminal activity is overly simplistic, and Xiong's results show exactly why. He uses data on prison admissions and releases from three U.S. states that have legalised marijuana, covering 2000-2016 for Colorado and Washington and 2007-2017 for Oregon. He looks at the recidivism rate for those convicted of marijuana-related crime before and after legalisation, using several different methods and several comparison populations. He finds that:
...state adoption of marijuana legalization is associated with a significant increase in the risk of recidivism for marijuana dealers. Following legalization, marijuana offenders become 4 to 5 percentage points more likely to re-enter prison within 9 months of release. The effect is sizable, corresponding to a near 50% increase from a baseline rate of 10 percent. When decomposed by crime categories, I find the overall increase masks two countervailing effects. One, marijuana offenders became less likely to commit future marijuana offenses. Two, this reduction is offset by the transition to the trafficking of other drugs. As a result, the observed criminality of former marijuana traffickers increased. Because participation in other type of crimes did not vary significantly, the revealed patterns are consistent with the importance of drug-industry specific human capital in explaining the persistence of criminal choices.
So, marijuana legalisation doesn't reduce crime by marijuana dealers, but increases it. There were significant increases in offences related to other (harder) drugs, and a smaller (but still significant) increase in property crime. Why did that happen? Xiong notes that:
...a 1% increase in marijuana crime within the county corresponds to a .2% increase in number of establishments...
The spatial pattern of entry reveal that legal dispensaries entered precisely in locations where illegal dealers operated. Given their close proximity, this implies that legal entrants directly competed with the incumbent illegal dealers. Hence, illegal retailers was supplanted by legitimate trade.
It should be no surprise that the legitimate marijuana sellers would want to locate in the places where demand is highest, which just happens to be where the drug dealers were previously located. Xiong also finds that:
...the retail prices of marijuana dropped significantly following legalization. Additionally, owing to the large-scale legal entry and lower search frictions, much of the within-state price dispersion disappeared.
The combination of lower (and more stable) prices suggests that competition in the market was high. That competition, especially being in the neighbourhood of the drug dealers, pushed the drug dealers out of the market, and they had to find something else to do. Xiong also finds that they don't shift into legal work. Looking at whether the offenders use employment services after they are released, he finds only a:
...small effect on utilization of these employment services, even amongst sub-populations consisting only of people who are eligible and have not returned to incarceration. With the NLSY data, I fail to detect any increase in weeks worked or income from wages. Altogether, the transition to legitimate employment resulting from criminal displacement is evidently low.
Overall, this paper is helpful in thinking about the issues, and the costs and benefits, associated with marijuana legalisation. While legalisation has many evident benefits, not least a reduction in the amount of resources devoted to policing the illegal market, that must be weighed up against the costs. Those costs may include the diversion of marijuana dealers into the selling of harder drugs that come with higher social costs.

[HT: Marginal Revolution, back in November 2018]

Monday 30 December 2019

Erotic capital and happiness

If you have a research paper that investigates the correlation between self-reported attractiveness and subjective wellbeing (or happiness), how do you ensure that it gets read by more people? If you're the author of this 2017 article published in the journal Research in Social Stratification and Mobility (sorry, I don't see an ungated version online), you refer to attractiveness as 'erotic capital'. For the record, the author was Felix Requena (University of Malaga).

Referring to attractiveness as a form of capital isn't a crazy idea. In economics, capital is a factor of production that is used to produce other goods and services. Requena takes a Bourdieuian view of capital, but extends it:
Like social capital, erotic capital focuses on the benefits received by individuals who participate in groups and in the deliberate construction of sociability in order to create resources that generate relational and other benefits, such as power or influence...
People constantly use the different types of power available to them to obtain the resources they desire... According to their circumstances and particular assets, some use economic power, others use acquired skills and abilities (human capital), others their social relations (social capital), and others use erotic power (beauty or appeal). Like other types of capital, erotic capital serves as another way to influence the social environment.
In that case, attractiveness is a form of capital because it can be used to produce things of value to the person through the power to influence the social environment. Others may quibble as to whether this is simply an extension of human or social capital, or whether it sits between those two types of capital, but such definitional issues are not important. The real question that the article addresses is the relationship of erotic capital to happiness.

Requena uses data from the Spanish General Social Survey in 2013, which had a sample size of a bit more than 5000 people. Erotic capital is self-reported attractiveness on a 0-10 scale. Happiness is measured on a 0-6 scale. Controlling for gender, income (economic capital), education (human capital), and the size of a person's social network (social capital), he finds that:
For men and women, erotic capital ranked first (men: 0.104, women: 0.105), followed by social capital (men: 0.017, women: 0.018), human capital (men: 0.014, women: 0.014) and economic capital (men: 0.001, women: 0.0001). Contrary to what we predicted in H2, the analysis showed no significant differences in the importance of erotic capital for men and women.
In other words, erotic capital had the strongest correlation with happiness of the four capitals included in the regression analysis. Of course, aside from the correlation vs. causation issue, there is a problem of multicollinearity in this analysis. It has previously been established that more attractive people earn more (see for example here, or my book review of Daniel Hamermesh's excellent book Beauty Pays), and that attractiveness is related to educational choices (see here). So, it isn't clear that Requena's measure of erotic capital isn't also picking up some of the variation in happiness that is actually due to differences in economic capital or human capital. This article is far from the last word on attractiveness and happiness.
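
To see why that matters, here is a minimal simulation sketch in Python, with entirely made-up numbers (an illustration of the general problem, not a re-analysis of Requena's data). In it, attractiveness has no direct effect on happiness at all, but because it is correlated with income, and income is only observed through a noisy self-report, the regression hands attractiveness a healthy positive coefficient anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # roughly the Spanish GSS sample size

# Hypothetical data-generating process: happiness depends on income and
# education only; attractiveness has NO direct effect, but is correlated
# with income (better-looking people earn more, as in Hamermesh's work).
income = rng.normal(size=n)
education = 0.4 * income + rng.normal(size=n)
attractiveness = 0.5 * income + rng.normal(size=n)
happiness = 0.5 * income + 0.3 * education + rng.normal(size=n)

# The survey only observes a coarse, noisy self-report of income.
income_reported = income + rng.normal(size=n)

# OLS of happiness on attractiveness, reported income, and education.
X = np.column_stack([np.ones(n), attractiveness, income_reported, education])
beta, *_ = np.linalg.lstsq(X, happiness, rcond=None)
print("attractiveness:", round(beta[1], 3),
      "reported income:", round(beta[2], 3),
      "education:", round(beta[3], 3))
# Attractiveness gets a clearly positive coefficient even though it has no
# causal effect here, because it proxies for the poorly measured income.
```

The point is not that this is what happened in the Spanish data, only that a regression like Requena's cannot rule it out.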

Finally, if you're reading the article, I would recommend ignoring all of the analysis related to self-reported importance of the different types of capital to success in life, and their relationship to happiness. I don't think it tells us what the author thinks it does, and it certainly doesn't contribute anything to our understanding of the relationship between attractiveness and happiness.

Sunday 29 December 2019

E-bike commuting, happiness, and survivorship bias

If I told you that e-bike commuters are happier than drivers are, would you conclude that travelling to work by e-bike made people happier? Perhaps you would, but if you did, you would be confusing correlation with causation. Perhaps happier people are more likely to commute using e-bikes? Or perhaps only higher-income people can afford an e-bike, and higher-income people are happier? It is difficult to say. However, jumping straight from the observed correlation into studying why e-bike commuters are happier shouldn't be your next step, especially not if you are going to base your study on talking to only 24 e-bike commuters.

However, that's exactly what this study published in the Journal of Transport & Health (sorry, I don't see an ungated version online), by Kirsty Wild and Alistair Woodward (both University of Auckland), did. It was covered in the New Zealand Herald earlier this year, but I held off on writing about it until I had a chance to read the research myself.

The problem isn't so much the research itself - interviewing e-bike commuters about what they like about e-bike commuting is fine. However, extrapolating that to answer the question about what should be put in place to encourage e-bike commuting, as this study does, is fraught. The reason is survivorship bias.

Almost by definition, if you interview current e-bike commuters, then you're interviewing people who tried e-bike commuting, and liked it. However, there are a bunch of people who tried e-bike commuting and hated it - they don't commute by e-bike any more, and they didn't get interviewed. In other words, the current e-bike commuters are the survivors from a larger group of people that have tried e-bike commuting at some time.

The problem in this case is that those two groups (survivors and non-survivors) are different. At the very least, the survivors like e-bike commuting, and the non-survivors don't (or, at least, they don't like it enough to continue commuting by e-bike). Interviewing the survivors tells you nothing about what the non-survivors liked or didn't like about e-bike commuting. It could be that the things that the survivors like about commuting by e-bike are exactly the things that the non-survivors hated about it. And you can't tell from this research, because former e-bike commuters (the non-survivors) were not interviewed.
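
Here is a minimal sketch of how badly that can mislead, using made-up numbers (an illustration of survivorship bias in general, not a claim about Wild and Woodward's participants). Suppose people keep commuting by e-bike only if their enjoyment of the ride outweighs their concern about sharing the road with cars:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical pool of people who have tried e-bike commuting

# Each person has a latent concern about sharing the road with cars
# (higher = more bothered) and an enjoyment of the ride itself.
safety_concern = rng.uniform(0, 10, n)
ride_enjoyment = rng.uniform(0, 10, n)

# Hypothetical rule: people keep commuting only if enjoyment outweighs concern.
still_commuting = ride_enjoyment > safety_concern

print("Share still commuting:", round(still_commuting.mean(), 2))
print("Mean safety concern, survivors:    ",
      round(safety_concern[still_commuting].mean(), 2))
print("Mean safety concern, non-survivors:",
      round(safety_concern[~still_commuting].mean(), 2))
# Surveying only the survivors badly understates how much road-safety
# concerns matter, because the people most bothered by them already quit.
```

In this toy example, the people still commuting are precisely the ones least bothered by sharing the road, so a survey of survivors would understate how much that issue matters to everyone else.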

So, if you decide to base decisions about cycling infrastructure on what current e-bike commuters like about it, there is no guarantee that e-bike commuting would increase as a result. The people who want to commute by e-bike with the current infrastructure are already commuting by e-bike. What you really want to know is, what do the people who don't currently commute by e-bike want?

I'd find research like this a lot more plausible if the researchers had interviewed people who gave up on commuting by e-bike, or better still, both current and former e-bike commuters. Or, even better, if they ran an experiment where e-bikes were randomly distributed to some people who were asked to use them for commuting, and then interviewed that experimental group about what they liked and did not like.

Otherwise, you simply get more of what the survivors already like, and don't necessarily create the right environment to increase e-bike commuting at all.

Wednesday 18 December 2019

Randomising research publications

If you've ever had the misfortune of being drawn into a conversation with me about research funding, then you will have heard my view that, after an initial cull of low-quality research funding applications, all remaining applications should be assigned ping-pong balls, which are then drawn randomly from a bin to allocate the available funding. You could even make a big event of it - the researchers' equivalent of a live lotto draw.

In fact, as this Nature article notes, this approach has begun to be adopted, including in New Zealand:
Albert Einstein famously insisted that God does not play dice. But the Health Research Council of New Zealand does. The agency is one of a growing number of funders that award grants partly through random selection. Earlier this year, for example, David Ackerley, a biologist at Victoria University of Wellington, received NZ$150,000 (US$96,000) to develop new ways to eliminate cells — after his number came up in the council’s annual lottery.
“We didn’t think the traditional process was appropriate,” says Lucy Pomeroy, the senior research investment manager for the fund, which began its lottery in 2015. The council was launching a new type of grant, she says, which aimed to fund transformative research, so wanted to try something new to encourage fresh ideas...
...supporters of the approach argued that blind chance should have a greater role in the scientific system. And they have more than just grant applications in their sights. They say lotteries could be used to help select which papers to publish — and even which candidates to appoint to academic jobs.
The latest issue of the journal Research Policy has an interesting article by Margit Osterloh (University of Zurich) and Bruno Frey (University of Basel), which argues for randomising the selection of which papers to publish (it seems to be open access, but just in case here is an ungated version). Their argument relies on the fact that Journal Impact Factors (JIFs) are a poor measure of the quality of individual research, and yet a lot of research is evaluated in terms of the impact factor of the journal in which it is published. Moreover, they note that:
...many articles whose frequency of citation is high were published in less well-ranked journals, and vice versa... Therefore, it is highly problematic to equate publication in “good” academic journals with “good” research and to consider publication in low-ranked journals automatically as signifying less good research.
Despite this problem with impact factors, they continue to be used. Osterloh and Frey argue that this is because the incentives are wrong. Researchers who publish in a high impact factor journal are benefiting from 'borrowed plumes', because the journal impact factor is largely driven by a small number of highly cited papers:
It is exactly the skewed distribution of citations that is beneficial for many authors. As argued, the quality of two thirds to three quarters of all articles is overestimated if they are evaluated according to the impact factor of the journal in which they were published. Thus, a majority of authors in a good journal can claim to have published well even if their work has been cited little. They are able to adorn themselves with borrowed plumes...
Osterloh and Frey present three alternatives to the current journal publication system, before presenting their own fourth alternative:
When reviewers agree on the excellent quality of a paper, it should be accepted, preferably on an “as is” basis (Tsang and Frey, 2007). Papers perceived unanimously as valueless are rejected immediately. Papers that are evaluated differently by the referees are randomized. Empirical research has found reviewers' evaluations to be more congruent with poor contributions (Cicchetti, 1991; Bornmann, 2011; Moed, 2007; Siler et al., 2015) and fairly effective in identifying extremely strong contributions (Li and Agha, 2015). However, reviewers' ability to predict the future impact of contributions has been shown to be particularly limited in the middle range in which reviewers' judgements conform to a low degree (Fang et al., 2016). Such papers could undergo a random draw.
In other words, the best papers are accepted immediately, the worst papers are rejected immediately, and the papers where the reviewers disagree are accepted (or rejected) at random. Notice the similarity to my proposal for research grant funding at the start of this post.
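
In code, the proposed rule is very simple. Here is a minimal sketch (the 1-5 rating scale, the thresholds, and the 50/50 draw are my own illustrative assumptions, not details from Osterloh and Frey):

```python
import random

def triage_decision(scores, accept_at=4, reject_at=2, rng=None):
    """Toy version of the Osterloh and Frey proposal: accept papers that
    all referees agree are excellent, reject papers they all agree are
    valueless, and settle the contested middle by lottery.

    `scores` are referee ratings on a 1-5 scale; the thresholds and the
    50/50 draw probability are illustrative assumptions only."""
    rng = rng or random.Random()
    if all(s >= accept_at for s in scores):
        return "accept"
    if all(s <= reject_at for s in scores):
        return "reject"
    return "accept (by lot)" if rng.random() < 0.5 else "reject (by lot)"

print(triage_decision([5, 4]))  # unanimous praise: accept
print(triage_decision([1, 2]))  # unanimous pan: reject
print(triage_decision([5, 2]))  # referees disagree: random draw
```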

The journal issue that the Osterloh and Frey article was published in also has three comments on the article. The first comment is by Ohid Yaqub (University of Sussex), who notes a number of unresolved questions about the proposal, and essentially argues for more research before any radical proposal to shake up the journal publication system is implemented:
Randomisation in the face of JIF may carry unintended consequences and may not succeed in dislodging the desire for journal rankings by some other measure(s). It should wait until we have more widely appreciated theory on peer review and citation, more inclusive governing bodies that can wield some influence over rankings and their users, and a stronger appetite for investing in evaluation.
The second comment is by Steven Wooding (University of Cambridge), who also argues that impact factors are a heuristic (a rule of thumb) used for judging research quality. Like Yaqub, he argues for more evidence, but in his case evidence on why people use journal impact factors, and on testing and evaluating the alternatives:
If you want people to stop using a heuristic, you need to ask what they are using it for, why they are using it, and to understand what their other options are. We agree that JIF use needs to be curbed. Our difference of opinions about trends in JIF use and the best way to reduce it should be settled by good evidence on whether its use is increasing or falling; where and why JIF is still used; and by testing and evaluating different approaches to curb JIF use.
The third comment is by Andrew Oswald (University of Warwick), who presents a mathematical case in favour of randomisation. Oswald shows that, if the value of research papers is convex (as would be the case if there are a few blockbuster papers, and many papers of marginal research significance), then randomisation is preferable:
Consider unconventional hard-to-evaluate papers that will eventually turn out to be either (i) intellectual breakthroughs or (ii) valueless. If the path-breaking papers are many times more valuable than the poor papers are valueless, then averaging across them will lead to a net gain for society. The plusses can be so large that the losses do not matter. This is a kind of convexity (of scientific influence). Averaging across the two kinds of papers, by drawing them randomly from a journal editor's statistical urn, can then be optimal.
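To put some illustrative numbers on that argument (my own, not Oswald's), here is a minimal sketch:

```python
# Illustrative numbers only: suppose that among the hard-to-evaluate papers
# that split the referees, 1 in 20 turns out to be a breakthrough worth 100
# units of scientific value, and the rest are duds that each cost 1 unit of
# readers' and editors' time.
p_breakthrough = 0.05
value_breakthrough = 100
cost_dud = 1

expected_value = p_breakthrough * value_breakthrough - (1 - p_breakthrough) * cost_dud
print(expected_value)  # 4.05 > 0: publishing these contested papers at random pays off

# The payoff distribution is convex: the upside of the rare breakthrough
# dwarfs the downside of the many duds, which is Oswald's point about
# drawing contested papers from the editor's "statistical urn".
```
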
Finally, Osterloh and Frey offer a response to the comments of the other three authors (at least for now, all of the comments and the response are open access).

A huge amount of under-recognised time and effort is spent on the research quality system. Journal reviewers are typically unpaid, as often are reviewers of research grants and those on appointments committees. However, there is a large opportunity cost of their time spent in reviewing manuscripts or applications. It isn't clear that the benefits of this effort outweigh the costs. As Osterloh and Frey note, there seems little correlation between the comments and ratings of different reviewers. Every academic researcher can relate stories about occasions where they have clearly lost in the 'reviewer lottery'. In the face of these issues, it is time to reconsider key aspects of the way that research is funded and published, and the way that academic appointments are made. If executed well, an approach based on randomisation would save significantly on cost, while potentially increasing the benefits of ensuring that high quality research is funded and published, and that high quality applicants are not overlooked.

[HT: Marginal Revolution for the Nature article]

Monday 16 December 2019

Book review: Poor Economics

When Abhijit Banerjee and Esther Duflo (along with Michael Kremer) won the Nobel Prize a couple of months ago, I resolved to push Banerjee and Duflo's 2011 book Poor Economics a lot closer to the top of my pile of books-to-be-read. I finally finished reading it last week and I have to say, it might be one of the most thoroughly researched and well-written summaries of a research area that I have ever read. Every page is packed with details, which makes it incredibly difficult to excerpt. However, consider this bit on the work of their future Nobel partner Kremer:
In the early 1990s, Michael Kremer was looking for a simple test case to perform one of the first randomized evaluations of a policy intervention in a developing country. For this first attempt, he wanted a noncontroversial example in which the intervention was likely to have a large effect. Textbooks seemed to be perfect: Schools in western Kenya (where the study was to be conducted) had very few of them, and the near-universal consensus was that the books were essential inputs. The results were disappointing. There was no difference in the average test scores of students who received textbooks and those who did not. However, Kremer and his colleagues did discover that the children who were initially doing very well (those who had scores near the top in the test given before the study began) made marked improvements in the schools where textbooks were given out. The story started to make sense. Kenya's language of education is English, and the textbooks were, naturally, in English. But for most children, English is only the third language (after their local language and Swahili, Kenya's national language), and they speak it very poorly. Textbooks in English were never going to be very useful for the majority of children... This experience has been repeated in many places with other inputs (from flip charts to improved teacher ratios). As long as they're not accompanied by a change in pedagogy or in incentives, new inputs don't help very much.
The book covers a wide range of randomised controlled trials (RCTs) in developing countries, which is exactly the work for which Banerjee and Duflo (and Kremer) won the Nobel Prize. They don't just limit themselves to discussing their own research though. The reader will receive a thorough grounding in the state of knowledge (or at least, the state of knowledge in 2011, as this is a field that has been moving quickly). If you want to gain an understanding of why the authors won the Nobel Prize and what their contributions have been, this is a good place to start. All of the examples are linked through Banerjee and Duflo's philosophy, which can be summarised as:
...attend to the details, understand how people decide, and be willing to experiment...
Banerjee and Duflo don't shy away from the criticisms of RCTs in development economics either. They engage with critics like William Easterly and Angus Deaton, and the criticisms that small experiments don't necessarily scale up well, or are not useful unless they identify or test some underlying theory that can be useful in a wider context. Unlike many authors, Banerjee and Duflo are quite realistic about their ambitions, but at the same time they have a strong counter-argument:
We may not have much to say about macroeconomic policies or institutional reform, but don't let the apparent modesty of the enterprise fool you: Small changes can have big effects. Intestinal worms might be the last subject you want to bring up on a hot date, but kids in Kenya who were treated for their worms at school for two years, rather than one (at the cost of $1.36 USD PPP per child and per year, all included), earned 20 percent more as adults every year, meaning $3,269 USD PPP over a lifetime... But to scale this number, note that Kenya's highest sustained per capita growth rate in modern memory was about 4.5 percent in 2006-2008. If we could press a macroeconomic policy lever that could make that kind of unprecedented growth happen again, it would still take four years to raise average incomes by the same 20 percent. And, as it turns out, no one has such a lever.
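As a quick check on the growth comparison in that passage (a back-of-the-envelope calculation using only the 4.5 percent figure quoted above):

```python
# Compounding Kenya's peak sustained growth rate of about 4.5 percent per
# year over four years gives roughly the same 20 percent rise in incomes
# that the extra year of deworming delivered.
growth = 0.045
print(round((1 + growth) ** 4 - 1, 3))  # about 0.19, i.e. close to 20 percent
```
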
I learned a lot in reading this book. This included why microinsurance doesn't seem to work even when microfinance does (there is an entire chapter on this, but it boils down to adverse selection and moral hazard, which is exactly the problem of insurance in the rest of the world), and why there are so many unfinished buildings in developing countries (the poor use them as a way of committing to saving, because subtracting bricks that have been partially built into a house is much more difficult than raiding a savings account). I even learned about a real-world example of a Giffen Good (my ECONS101 tutors will know that I have been a Giffen Good skeptic for a long time), which comes from this 2008 paper (ungated version here).

The book doesn't strike me as a textbook, although I know some who are using it as such. It is very empirical, and that is a good thing. This is an excellent place to start, if you want to understand the economics of the poor, and the state of research as it was in 2011. Highly recommended, and I look forward to also reading their new book, Good Economics for Hard Times.

Sunday 15 December 2019

Alcohol consumption worldwide

Our World in Data recently updated their excellent article on alcohol consumption, with interactive graphics showing cross-sectional differences between countries, and trends over time. The data are pretty clear - New Zealand is neither the least, nor the most, negatively affected by alcohol consumption. Here are a few of the graphs that stuck out for me (though I encourage you to browse through the whole article and have a play with the graphs yourself). First, total alcohol consumption per capita over time:
[Figure: total alcohol consumption per capita for a group of Western countries, 1890 to 2014]

Notice that New Zealand is towards the bottom of this group of Western countries all the way through the series from 1890 to 2014. Next, consumption by type of beverage for New Zealand, over time:
[Figure: alcohol consumption by type of beverage, New Zealand, over time]

There are only a few data points in this one, but the massive increase in wine consumption since the 1970s is apparent. Finally, an illustration of why alcohol is such a concern for many public health researchers:
[Figure: annual deaths by mortality risk factor, New Zealand]

The big four mortality risk factors are high blood pressure, smoking, high blood sugar, and obesity. Then there's a big gap, but alcohol leads the rest of the risk factors in terms of the number of deaths, contributing to over 1200 deaths per year. Once you factor in the other costs of alcohol-related harms, it's easy to see why this is a focus.

Anyway, Our World in Data is an excellent site, and this is just one of many articles on topics as diverse as income inequality and renewable energy. It's a great place to visit to visualise some data, and the best part is that the graphs are (somewhat) customisable and the data are freely accessible as well. Enjoy!

Wednesday 11 December 2019

Kittyconomics

What does an economics professor do when they have three cats and too much time on their hands? Create a website called Kittyconomics, of course. People love cat videos. So, what better way to help people love economics, than using cat videos to do so?

You can also view the videos on the Kittyconomics YouTube channel. At the moment, there are only four short (about 90 seconds long) videos, on: (1) Preferences; (2) Value; (3) Opportunity cost; and (4) Scarcity. They don't quite explain things in the way I would, and I'd question the order of the videos (for instance, in ECONS101 and ECONS102, I talk about scarcity before opportunity cost).

However, they are fun and useful, and a good way to make some lazy cats earn their keep. Enjoy!

[HT: Marginal Revolution]

Thursday 5 December 2019

Summer reading list for the Prime Minister

Earlier this week, NZIER released their first Summer Reading List for the Prime Minister, which they "hope the Prime Minister, her advisors, and anyone interested in economics and public policy, will find both informative and enjoyable to read". I had a small (and uncredited) role in the list, recommending one of the books (2014 Nobel Prize winner Jean Tirole's Economics for the Common Good) that made it onto the final list.

The reading list includes:

You can read a brief synopsis of each book on the reading list PDF file. Here was my short list of recommendations (with some brief notes):
  • Factfulness, by Hans Rosling. Rosling developed Gapminder, and was a strong proponent of using data rather than intuition to answer questions (I reviewed it here);
  • Reinventing Capitalism in the Age of Big Data, by Viktor Mayer-Schonberger and Thomas Ramge. This one is on my soon-to-read list, but seems important for understanding the period of creative destruction we are going through right now;
  • Economics Rules: The Rights and Wrongs of the Dismal Science, by Dani Rodrik. This one is a more sensible version of Doughnut Economics, by an author who is less in love with their own metaphors. Rodrik himself notes that "this book both celebrates and critiques economics" (I reviewed it here);
  • Scarcity: The New Science of Having Less and How It Defines Our Lives, by Sendhil Mullainathan and Eldar Shafir. I haven't read this one yet, but it looks like it strikes a good balance between empirical research and real-world application, while telling an important story about how people respond to having less than they need;
  • Economics for the Common Good, by Jean Tirole. This one is probably a bit dense and long (560+ pp.), but I had to include something from a recent Nobel laureate. It presents Tirole's reflections on the most important contributions to society that economics can make in the future.
My list is clearly more contemporary, with all the books released in the last few years. However, if you're looking for some summer reading for yourself, the books on either list are a good place to start. Enjoy!

Monday 2 December 2019

Sunday opening hours and alcohol consumption

According to the research literature (nicely summarised in the book Alcohol: No Ordinary Commodity), one of the most effective policies for reducing alcohol consumption and related harm is to reduce the opening hours of licensed outlets. However, that policy advice seems to apply to on-licence outlets (bars, nightclubs, etc.). If bars are open more hours, people drink more alcohol at bars - that seems pretty uncontroversial. However, the literature is much less clear on the effects of off-licence outlets' (bottle stores, supermarkets, etc.) opening hours on consumption. If off-licence outlets are open longer, do people purchase and consume more alcohol?

The reason for the lack of clarity may be because people purchase alcohol from off-licence outlets to consume at home, and they can also store alcohol at home for future consumption. So a forward-planning consumer can purchase alcohol for consumption at home when they find themselves close to an off-licence outlet, and then store it for later consumption.

However, do consumers really plan ahead in this way? I have a report coming out soon that looks at the purchasing behaviour of pre-drinkers (people who had been drinking before going out to the CBD). I can't release the details of that report just yet, but in the meantime, consider this 2016 article by Douglas Bernheim (Stanford), Jonathan Meer (Texas A&M), and Neva Novarro (Pomona College), published in the American Economic Journal: Economic Policy (ungated earlier version here).

Bernheim et al. are actually investigating (somewhat indirectly) whether consumers regulate the amount of alcohol they store at home, in response to the availability of retail alcohol. If consumers restrict the availability of alcohol to themselves by not storing quantities of alcohol at home, then restricting alcohol sales on Sundays would reduce alcohol consumption (and liberalising Sunday sales would increase consumption).

Bernheim et al. investigate this by looking at regulations across U.S. states that either restricted or liberalised the number of hours that alcohol could be sold on Sundays, over the period from 1970 to 2007, and how those changes related to changes in alcohol consumption. They found that:
Widening the allowable on-premise Sunday sales window by 1 hour is associated with a statistically significant 0.94 percent (standard error = 0.29 percent) increase in sales. Taking the linear specification literally, the point estimate implies that allowing 12 hours of Sunday on-premises sales increases total liquor consumption by roughly 11 percent...
In contrast, expanding the allowable off-premise Sunday sales window by 1 hour is associated with a small and statistically insignificant 0.08 percent (standard error = 0.24 percent) increase in sales. Formally, we cannot reject the hypothesis that the effect of off-premises sales hours is zero.
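To see where the 'roughly 11 percent' figure comes from under the linear reading (a quick back-of-the-envelope check, not the paper's own calculation):

```python
# The point estimate is a 0.94 percent increase in liquor sales per extra
# hour in the Sunday on-premise sales window; scaling that linearly up to
# a 12-hour Sunday window gives the paper's "roughly 11 percent".
effect_per_hour = 0.94  # percent, from Bernheim et al.
hours = 12
print(effect_per_hour * hours)  # 11.28
```
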
In other words, fewer Sunday trading hours for on-licences (bars, etc.) reduce state-level consumption, but fewer Sunday hours for off-licences have no effect. Bernheim et al. conclude that:
...the observed pattern coincides with predictions for time-consistent or naïvely time-inconsistent consumers who have reasonably good memories and low costs of carrying inventories.
In other words, consumers do seem to stock up on alcohol, whether purposefully or accidentally (this research is unable to distinguish between those two situations). So, regulating opening hours for off-licences as a way of reducing consumption is likely to be frustrated by consumer behaviour.