Sunday, 31 December 2017

Forget the America's Cup, we should get Lebron James instead!

The economic impact analysis for the America's Cup has taken a beating (for posts from my blog, see here and here and here), and the best I could conclude was that:
...the benefits might outweigh the costs of hosting the America's Cup, but that relies on the costs being kept under control and the number of challenger syndicates increasing by nearly half over the previous Cup. As with most of these events, you could argue that the spending is good value for a big party, but arguing that it has an overall economic gain for the country (or for Auckland) is probably too strong a claim.
Sam Warburton's calculations show that even my conclusion was overly generous, as the costs most likely outweigh the benefits even before you consider any cost over-runs.

Most economic impact studies are junk, but some at least make you think. [*] To illustrate, let's consider an alternative - bringing Lebron James to play for the New Zealand Breakers. Superstar players can probably generate economic impacts, in the same way that high-profile events can generate economic impacts.

In fact, back in May, Daniel Shoag (Harvard Kennedy School) and Stan Veuger (American Enterprise Institute) released a working paper on the economic impact of Lebron James. Shoag and Veuger looked at data on the number of restaurants (and eating and drinking establishments more generally) close to stadiums in Cleveland (where Lebron played from 2003 to 2010, and again since 2014) and Miami (where he played from 2010 to 2014). The areas in Cleveland did better with Lebron James than without, and so did the areas in Miami. Specifically, Shoag and Veuger found that:
Within 1 mile, Mr. James’ presence raises the number of workers employed by eating and drinking establishments by 23.5%, but the effect disappears once again beyond the 7 mile radius.
Lebron's salary this NBA season is about US$33.3 million, or about NZ$47 million. A one-mile radius around Spark Arena (where the New Zealand Breakers play their home games) includes almost all of downtown Auckland. In the 2013 Census, there were about 4500 people working in the "Accommodation and Food Services" industry in the Waitemata Local Board area (which includes downtown Auckland). Let's assume all of those workers are employed in eating and drinking establishments within one mile of Spark Arena (a generous assumption, but the Shoag and Veuger study shows some smaller impacts beyond one mile too, so while there are not as many as 4500 of these workers within one mile of Spark Arena, there are many within seven miles).

If we got Lebron James to come play for the Breakers, based on the Shoag and Veuger study that would increase employment in Auckland by around 1050 workers (23.5% of 4500). At the median wage of around $46,000, that equates to about $48 million in additional wages being paid as a result of signing Lebron James to play for the Breakers. Clearly, the benefits ($48 million) outweigh the costs ($47 million).
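For the arithmetic-inclined, the back-of-the-envelope calculation can be sketched in a few lines of Python (all of the figures are the assumptions above, not new data):

```python
# All figures below are the post's assumptions, not new data
workers = 4500              # 2013 Census: Accommodation and Food Services workers
employment_effect = 0.235   # Shoag and Veuger's estimated employment effect
median_wage = 46_000        # approximate NZ median wage (NZ$)
lebron_salary_nzd = 47_000_000  # Lebron's NBA salary, converted to NZ$

extra_workers = workers * employment_effect    # about 1050 extra workers
extra_wages = extra_workers * median_wage      # about NZ$48 million

print(f"Extra workers: {extra_workers:.0f}")
print(f"Extra wages: NZ${extra_wages / 1e6:.1f} million")
print(f"'Benefits' exceed 'costs': {extra_wages > lebron_salary_nzd}")
```

Of course, as the footnote below makes clear, this calculation is just as junk as the analyses it parodies; the point is only that the arithmetic checks out on its own (dubious) terms.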

Forget the America's Cup, we should be bringing Lebron James to Auckland instead!

[HT for the Shoag and Veuger study]: Marginal Revolution, back in June

*****

[*] The economic impact analysis reported here is junk. But, it's not much worse than about 99% of economic impact analyses that are prepared for local authorities in New Zealand, and for which economic and other consultants are paid thousands or tens of thousands of dollars. For comparison, this analysis took me around half an hour, and I'm providing it to you for free. You're welcome!

Saturday, 30 December 2017

The economics of ticket scalping

In yesterday's post about the increasing role of economics in sports, I mentioned this 2009 article by Ross Booth (Monash University), published in the Australian Economic Review. In the article, Booth discusses some of the key topics he teaches in sports economics, one of which is the economics of ticket scalping, and how ticket scalping increases economic welfare. For example, this article from September published in The Conversation, by Paul Crosby and Jordi McKenzie (both Macquarie University) notes that:
...there is an argument that ticket scalping actually enhances the total welfare of concert goers and sports fans. Scalpers act to distribute tickets to those who value them the most, or, as economists’ would say, they increase the allocative efficiency of the market.
Secondary markets for tickets allow potential buyers to indicate how much they want to go to the event – their “willingness to pay”. If tickets can only be bought at a single price on a first come first serve basis, then some people who really want to go will be left out. Secondary markets permit these mutually beneficial exchanges to take place.
Online platforms for buying and selling tickets actually increase this allocative efficiency
However, this traditional view, repeated by many economics teachers, is not correct, as I have explained before (see here and here). The point bears repeating.

Consider the diagram below. The supply of tickets to some event S0 is fixed at Q0 - if the price rises, more tickets cannot suddenly be made available because the capacity of the venue is fixed (note the diagram assumes that the marginal cost of providing tickets up to Q0 is zero).


Demand for tickets is high (D0), leading to a relatively high equilibrium price (P0). However, tickets are priced at P1, below the equilibrium (and market-clearing) price. At this lower price, there is excess demand for tickets (a shortage) - the quantity of tickets demanded is Qd, while the quantity of tickets supplied remains at Q0.

With the low ticket price P1, the consumer surplus (the difference between the price consumers are willing to pay and the price they actually pay) is the area ABCP1. Producer surplus (essentially the profits for the venue or the seller) is the area P1CDO. Total welfare (the sum of producer and consumer surplus) is the area ABCDO. When scalpers buy at P1 and sell at the higher price P0, the consumer surplus decreases to the area ABP0, while producer surplus remains unchanged. The scalpers gain a surplus (or profit) equal to the area P0BCP1, and total welfare (now the sum of producer, consumer, and scalper surplus) remains ABCDO. So ticket scalpers don't increase total welfare - their actions don't affect total welfare at all, just its distribution between the parties.

However, the above analysis lets us think about what would be required for ticket scalping to increase total welfare. That would happen if the higher price induced more events (since each event has a fixed number of tickets, the only way to increase total welfare is to have more events). But that would only happen if the sellers (not the ticket scalpers) received a higher profit from each event, incentivising them to schedule more. As shown above, the actions of the scalpers don't affect producer surplus (it stays P1CDO with and without scalping), so those incentives for the sellers never materialise.
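To make the diagram concrete, here is a quick numerical sketch (the numbers are purely illustrative, and like the diagram it assumes the tickets sold at the face price go to the buyers who value them most):

```python
# Illustrative numbers only: linear demand P = 200 - 0.01*Q,
# fixed capacity Q0 = 10,000 tickets, zero marginal cost up to capacity
Q0 = 10_000
choke = 200.0   # price at which demand falls to zero (point A in the diagram)
slope = 0.01

P0 = choke - slope * Q0   # market-clearing price (100)
P1 = 40.0                 # face price, set below equilibrium

# Surpluses when tickets sell at the face price P1 (areas in the diagram):
cs_no_scalping = (choke - P1 + P0 - P1) / 2 * Q0  # trapezoid ABCP1 = 1,100,000
ps = P1 * Q0                                      # rectangle P1CDO = 400,000

# Surpluses when scalpers buy at P1 and resell at P0:
cs_scalping = (choke - P0) * Q0 / 2   # triangle ABP0 = 500,000
scalper_surplus = (P0 - P1) * Q0      # rectangle P0BCP1 = 600,000

total_without = cs_no_scalping + ps
total_with = cs_scalping + ps + scalper_surplus
# Scalping only redistributes welfare; the total is unchanged
assert total_without == total_with == 1_500_000
```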

Crosby and McKenzie do correctly note that online platforms for re-selling tickets:
...arm buyers and sellers with ever increasing amounts of information, and the time and expenses associated with the purchase of each resold ticket (known as “transaction costs”) are greatly reduced.
The main downside of ticket scalping is that some (previously very lucky) consumers, who could have earned a huge consumer surplus by buying their tickets at a much lower price than they were willing to pay, must now pay a higher price. You might think this unfair, but as I pointed out in an earlier post:
...you probably also think that the high price of milk is unfair, the high price of petrol is unfair, the high price of electricity is unfair, etc.
Despite the actions of ticket scalpers having little effect on economic welfare (they just re-distribute it), governments do seek to clamp down on it. Crosby and McKenzie briefly discuss technology as a means of reducing scalping, but a more recent Conversation article by Keith Parry, Aila Khan, and Blair Hughes (all Western Sydney University) lays out the future of ticketing in more detail:
Internationally, there have been some interesting developments amongst teams, venues and ticketing companies that may eliminate scalping and improve the ticketing experience.
It may not be long before all fans can take advantage of innovations such as mobile-only tickets, biometric access, and even microchipped tickets...
Looking to the future, it may not be long until tickets are physically linked to individuals and our iconic sporting venues are accessed with the swipe of an appendage.
In such a world, paper tickets will become a thing of the past.
One cannot help but feel that something is lost without the physical memento of a sporting event provided by a ticket stub. The scalpers and bots have much to answer for.
It's not the scalpers or bots that have much to answer for, it's the ticket sellers who consistently under-price event tickets. If they didn't do so, then the scalpers and bots would have no opportunity to profit.

Friday, 29 December 2017

The increasing role of economics in sports, and teaching sports economics

The Conversation has been running an interesting series on the increasing role of economics in sports this week. It's not clear to me how you can access the whole series (which is in the Australian Business + Economy section), but here's the first of the articles, by Tim Harcourt (University of New South Wales). He writes:
If you look closely at your favourite sport nowadays, it’s hard to miss the influence of economics. It’s evident from the way players are drafted or how much they are paid, through to individual coaching decisions, and even strategic shifts across entire leagues.
This has been particularly driven by the rise of game theory in economics. Game theory uses mathematical models to figure out optimal strategies, such as what pitches a baseball pitcher should throw, or whether American Football teams should pass more.
Sport lends itself to economics and game theory because players, coaches and agents act similar to the hypothetical rational decision-makers in economic models.
I love sports economics, and made a failed bid to start teaching sports economics at Waikato some years back. A 2009 article in the journal Australian Economic Review (sorry I don't see an ungated version online) by Ross Booth (Monash University) does a really good job of explaining why teaching sports economics is a good thing:
There are basically two schools of thought on the value of teaching sports economics. The first is that, as many students like sport more than economics, the combination of the two is likely to be appealing and you will learn some economics ‘almost as an aside’, as it were: it is a way of engaging you with economics, of reinforcing and applying some key economic concepts. The other school of thought is that it is an industry worth studying on its own merits and, as it is data-rich, it also can provide a useful test of economic theory. Of course, both views are not mutually exclusive.
If you have access to the article, Booth provides a good (though somewhat dated now) list of helpful resources for teachers wanting to make use of sports economics insights in their teaching. I use a few sports economics examples in my teaching, and have blogged on the subject many times (see this search for a current list of blog posts with the tag 'Sports'). Sports economics is most useful for teaching applied microeconomics, and Booth notes that includes:
...the principle of comparative advantage in team selection, the economic welfare implications of several methods of ticket distribution and some sources of market power of sports teams and the ways in which they are used.
In particular, many sports economists are interested in competitive balance, since more balanced sports competitions have been shown to attract more fan interest (and more revenues and profits). However, sports economics is also having a direct impact on the sports themselves and the way they are played, as Harcourt notes:
In basketball, Robert D. Tollison is largely behind the explosion of three point shooting in the National Basketball Association. Tollison’s research identified that even though three pointers are less accurate than other shots, over the course of a game and season it makes sense to take more three pointers.
Data analytics and economics are definitely gaining an increased following among sports administrators and managers. Given the increasing role of economics in sports, it makes sense for us to make more use of sports in the teaching of economics.

Wednesday, 27 December 2017

Book Review: The Economics of Just About Everything

It's always going to be difficult for a book to live up to a title like The Economics of Just About Everything, but Andrew Leigh's 2014 book with that title does about as good a job as any. The book is probably best characterised as a newer, Australian-focused version of Freakonomics, by Steven Levitt and Stephen Dubner. Like Levitt and Dubner in their original book (but less so in the subsequent books such as Think Like a Freak, which I reviewed here), Leigh draws on a mixture of international research and his own research to highlight some interesting analyses undertaken by economists in recent years.

Leigh also avoids the temptation simply to repeat topics or themes that have been well covered in other pop economics titles in recent years. That leads to some genuinely unique and interesting chapters. I especially enjoyed Chapter 5, which applies David Galenson's analysis of creative innovation to Australian painters, rock musicians, and novelists. Galenson claims that there are two types of innovators:
...'conceptual' innovators, whose work implements a single theory, and 'experimental' innovators, whose work evolves from real-life experience and empirical observation.
Galenson's theory suggests that 'conceptual' innovators should peak earlier in their careers (that is, at younger ages) than 'experimental' innovators, and his research has demonstrated that this holds true in a variety of cases (including economists). Leigh extends this to a number of Australian cases and shows that it also holds true. However, at points it left me wondering whether there was a risk of endogeneity. For example, maybe we consider someone a conceptual innovator because they peaked early, in which case the causality runs the other way? Leigh never addresses this point, but I think it warranted at least some comment.

Chapter 7 covers global poverty and development, and like most pop economics books, it couldn't really do justice to the topic (to be fair, it's not a topic that would be easy to do justice to in a single chapter). However, I found it interesting that one of the articles that Leigh mentioned was one that I also covered on my blog a couple of months ago.

Chapter 8 (entitled "Smashing the crystal ball") covers forecasting, and does a good job of explaining why you shouldn't depend too heavily on economic (or other) forecasts. However, it would have been good if Leigh had noted more explicitly that the uncertainty in these forecasts can often be measured, and that measured uncertainty is genuinely useful to decision-makers (a point that I have made before).

Overall, I found this book to be an interesting read, in the style of Freakonomics. Leigh makes the point in the introduction that:
If there are two themes to this book, they are that economics is everywhere, and economics is fun.
I think there is a strong case to be made for those themes, and Leigh does a good job of it. Add it to your holiday reading list, if it's not too late!

Saturday, 23 December 2017

Even less reason to believe in the economic impact of the America's Cup

The New Zealand Herald reported on Thursday:
The Ministry of Business, Innovation and Employment (MBIE) today admitted errors in its report on the economic benefits of hosting the cup.
Its initial cost-to-benefit ratio estimate was between 1.8 and 1.2, meaning benefits would outweigh costs by between 80 and 20 per cent. However, it today revised this to a high of 1.14 and a low of 0.997; the latter scenario would mean the costs would outweigh the benefits.
"In simple terms, the cost benefit ratio is normally the total benefits divided by the total costs," MBIE said in a statement.
"However, Market Economics had erroneously divided net benefits (new spending less the costs to deliver the goods and services) by total construction costs."
The error was brought to the Government's attention by policy think tank the New Zealand Initiative.
When doing cost-benefit analysis, it pays to compare the right costs and benefits, even if the numbers rely on some heroic assumptions. I didn't pick up the mistake in the costs when reading the report. Luckily, Sam Warburton has a keener eye for detail than I do. I thought the cost-benefit analysis was the most believable of the three analyses, and I concluded that:
...my takeaway from the report is that the benefits might outweigh the costs of hosting the America's Cup, but that relies on the costs being kept under control and the number of challenger syndicates increasing by nearly half over the previous Cup. As with most of these events, you could argue that the spending is good value for a big party, but arguing that it has an overall economic gain for the country (or for Auckland) is probably too strong a claim. 
It now seems even less likely that the benefits of hosting the America's Cup outweigh the costs, even if you make the generous assumption that costs could be kept under control.
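To see how much difference the erroneous formula makes, consider a quick sketch with made-up numbers (these are not the actual MBIE or Market Economics figures):

```python
# Illustrative figures only (not the actual MBIE/Market Economics numbers), in $m
total_benefits = 600.0      # new spending attracted by the event
delivery_costs = 400.0      # costs of delivering the goods and services bought
construction_costs = 120.0  # infrastructure and construction costs

# Correct benefit-cost ratio: total benefits over total costs
correct_ratio = total_benefits / (delivery_costs + construction_costs)

# The erroneous calculation: net benefits over construction costs alone
erroneous_ratio = (total_benefits - delivery_costs) / construction_costs

print(round(correct_ratio, 2))    # benefits only barely exceed costs
print(round(erroneous_ratio, 2))  # looks much more favourable
```

Dividing net benefits by only part of the costs will flatter the ratio whenever delivery costs are a large share of total costs, which is exactly why the revision moved the range so sharply downwards.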

[Update]: In the comments, Sam Warburton points to his calculations. Under Sam's final calculations, the costs outweigh the benefits!

Friday, 22 December 2017

The unintended consequences of China's coal ban

Internationally, pollution is a problem, but is more of a problem in some countries than others. When it comes to dealing with the problem of pollution, policy-makers have two options: (1) a command-and-control policy, where pollution is heavily restricted (and backed by enforcement mechanisms such as fines for firms that fail to comply); or (2) a market-based solution, such as an environmental (Pigovian) tax or tradeable pollution permits (see my earlier post discussing these two options).

It is well-known that China has a severe pollution problem, and it is also well-known that when China has a problem, the solution is often heavy-handed. In this case, that means a command-and-control policy restricting pollution, implemented by local officials who banned the use of coal (I might add, this is a policy favoured by some in New Zealand as well, and already in place in some areas and under some conditions). The main problem with command-and-control policies is that they are very inflexible to market conditions. However, they can also come with other unintended consequences.

So, it came as little surprise to me when South China Morning Post reported this week:
Clearly [Chinese president Xi Jinping] is highly sensitive to the anger of ordinary people at China’s sky-high levels of pollution. And clearly his message struck home with party officials. After local government officials either restricted or simply banned coal use across much of Northern China, the residents of Beijing enjoyed unseasonably blue skies and fresh air through November.
But the centrally dictated clean-up came at a heavy price. Coal is not only the main source of fuel for power stations and industry, accounting for about two-thirds of China’s electricity generation, it is also burnt to heat millions of households through Northern China.
So when officials imposed their ban on coal use in line with Xi’s concern for the environment, they triggered a price surge and supply squeeze in substitute clean energy, notably natural gas. And millions of poorer households and many towns and villages unconnected to the gas supply grid were left out in the cold.
Over the past couple of months, natural gas prices have jumped by 70 per cent, hurting energy-intensive businesses, many of which were already operating on razor thin margins. Some have been forced to shut down. That’s bad enough, but even more embarrassing as far as government officials are concerned, are the tales of personal hardship that have spread across the internet and through the media, complete with stories about schoolchildren suffering frostbite because of the lack of heating in their classrooms.
The sad thing is that none of this should have been particularly surprising. When you ban the use of coal, how are households where coal is the only heating option supposed to respond? In terms of industry and electricity generation, coal and natural gas are substitutes. When you ban the use of coal, the demand for natural gas will rise, and so will its price.
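The price effect of banning a substitute can be illustrated with a toy supply-and-demand model (the numbers and functional forms are purely illustrative):

```python
# Toy linear model of the natural gas market (illustrative numbers only).
# Demand: Q = a - b*P, where a shifts out when a substitute (coal) is banned
# Supply: Q = c + d*P
b, c, d = 2.0, 10.0, 1.0

def equilibrium_price(a):
    # Solve a - b*P = c + d*P for the market-clearing price P
    return (a - c) / (b + d)

price_before = equilibrium_price(a=40.0)  # gas demand before the coal ban
price_after = equilibrium_price(a=70.0)   # the ban shifts gas demand outward

# Banning the substitute raises the equilibrium price of natural gas
assert price_after > price_before
print(price_before, price_after)
```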

You can't just legislate away the trade-offs in decisions like these. A market-based solution, such as a tax on coal, would reduce (but not eliminate) coal use, but would also create incentives for firms and households to switch to cleaner-burning fuels. However, those changes take time. If the government wants to push through change more quickly by imposing a ban, then there are clear human costs that will have to be borne. There is no overnight fix for China's pollution problems that avoids these costs, and it appears that the government has backtracked:
Now the government is using the same mechanisms of central control to reverse its policy. That should help to solve its immediate troubles. But in the longer run the underlying problem will remain in place. As long as Beijing continues to govern by diktat, attempting to manage the supply side of the economy in order to hit arbitrary and often impractical targets, it will continue to encounter similar difficulties.
Indeed. The longer term problem is best solved by creating the right incentives.

[HT: New Zealand Herald]

Thursday, 21 December 2017

Locked in lecturers and out-of-control textbook prices

Yesterday I wrote a post about the upcoming change to our first-year economics papers at Waikato, where we will be adopting the CORE curriculum. One of the material considerations in our choice was the cost of textbooks, which has clearly gotten out of control. A recent Quartz article notes:
Seven Rhode Island universities, including Brown and Rhode Island College, will move to open-license textbooks in a bid to save students $5 million over the next five years, the governor announced Tuesday (Sept. 27).
The initiative is meant to put a dent in the exorbitant cost of college and, more specifically, college textbooks. Mark Perry, a professor of economics and finance at the University of Michigan Flint, and a writer at the American Enterprise Institute, estimated last year that college textbook prices rose 945% between 1978 and 2014, compared to an overall inflation rate of 262% and a 604% rise in the cost of medical care.
That is not the result of a general trend of higher costs in publishing, he notes: the consumer price index for recreational books has been falling relative to overall inflation since 1998.
“It begs the question, are we getting 1,000% more value?” asked Richard Culatta, the chief innovation officer for Rhode Island who was the director for the office of educational technology in the Obama administration.
On that last question, the textbook sellers would argue that yes, we are getting more value. However, it isn't clear to me that the 'we' that is getting the extra value is the students. Textbooks have barely changed in the last twenty years. The 'extra value' that the textbook providers believe they are generating is not through meaningful improvements in the quality of the textbook material. Instead, they have increasingly concentrated their efforts on creating supplementary online offerings, tying the textbook to additional online resources for students, but selling the value of those additional resources to lecturers. For instance, textbooks from the big publishers will often include online testing systems that automatically link to university learning management systems (not to mention those that work like learning management systems themselves), extra exclusive multimedia and online content that can be used in lectures or made available to students or used as part of assessment tasks, online data visualisation tools, and so on.
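As a quick aside, Perry's percentages can be converted into real (inflation-adjusted) terms using only the figures quoted above. Even after inflation, textbook prices roughly tripled:

```python
# Perry's figures from the quote above: nominal increases, 1978 to 2014
textbook_rise = 9.45   # +945% for college textbooks
cpi_rise = 2.62        # +262% overall inflation
medical_rise = 6.04    # +604% for medical care

def real_increase(nominal_rise, inflation_rise):
    # Convert a nominal percentage rise into a real (inflation-adjusted) one
    return (1 + nominal_rise) / (1 + inflation_rise) - 1

print(round(real_increase(textbook_rise, cpi_rise) * 100))  # ~189% real rise
print(round(real_increase(medical_rise, cpi_rise) * 100))   # ~94% real rise
```

So in real terms textbook prices nearly tripled over the period, roughly double the real increase in the cost of medical care.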

Call me cynical, but I see all those online extras as simple attempts to create and capture value through customer lock-in - exactly one of the things we discuss in ECON100 (and soon ECONS101). Customer lock-in occurs when customers (in this case the lecturers) find it difficult (costly) to change once they have started purchasing a particular good or service. In this case, the lecturers aren't purchasing anything, but the principle remains valid. Once a lecturer starts using these online 'extras' that are tied to a particular textbook, it is difficult for them to change, because that would mean losing all of the work they've put into designing their course around those extras, and starting afresh with the extras available with the new textbook. That time and effort constitutes a switching cost, which lecturers would obviously try to avoid, thereby locking them into their current choice of textbook. Having lecturers locked in to a particular textbook allows the publishers to raise prices, which has led us to where we are today: the Australia-New Zealand edition of the Mankiw textbook Principles of Economics now costs about NZ$185.

Obviously, I'm keenly aware of the switching costs and try to avoid being locked in. So, I've always been reluctant to rely too heavily on the bonus material available from the textbook suppliers. I don't even use their default Powerpoint slides, but that's more because I see the textbook as just one resource among many that I use in my teaching, and not the centrepiece of each course.

Despite my efforts, we were somewhat locked into using the Mankiw textbook in ECON100, since part of the test bank we used was from that book (albeit an earlier edition) and our resources pointed to its chapters. Changing to a new textbook would mean developing a new test bank (although we could keep our own questions from the previous one), and re-writing the paper outline and other resources to link to the new textbook.

When we decided to renew and redevelop ECON100 into ECONS101, we were going to face a cost anyway, so the switching cost of adopting a new textbook at the same time was relatively lower. Our ECONS101 students will be the beneficiaries, since as I mentioned yesterday it means that we can adopt the new CORE curriculum. However, it's worth noting that the CORE textbook is free for students, but if we as lecturers want to change textbook, we will face another switching cost.

Wednesday, 20 December 2017

Goodbye to ECON100, hello to ECONS101

It's the time of year to be a little nostalgic, and that feeling was further prompted this week by the conclusion of the final semester of our ECON100 paper at Waikato. So it seemed fitting that I was the tutor for the final ECON100 tutorial a couple of weeks ago, given that I was also the tutor for the very first ECON100 tutorial, way back in the second week of A Semester 2003.

We introduced ECON100 in 2003 as part of a revamped Bachelor of Management Studies, and at that time the paper was two-thirds microeconomics, and one-third macroeconomics. The first iteration was taught by Steven Lim and Dimitri Margaritis. Many others have contributed as lecturers over the years including Bridget Daldy, Dan Marsh, Mark Holmes, Pam Kaval, Steven Tucker, Susan Olivia, Matthew Roskruge and myself (plus some guest lecturers and visiting lecturers, and of course there have been many, many tutors).

I remember Steven and Bridget as being the real drivers of the development (or re-development) of the paper in the beginning (Steven especially), and ECON100 has had an identity as an introductory paper in business economics that has made it unique within New Zealand. While other universities have continued to teach principles of economics in first year (of various flavours), ECON100 has always had a much greater focus on business applications like game theory, strategy, and pricing.

I first lectured ECON100 in A Semester of 2005, teaching the macroeconomics section (yes, that is surprising given that I exclusively teach microeconomics now). After that, I taught it in the Summer School period in 2006 and 2007, then in A Semester from 2008 to 2017, as well as B Semester in 2011 and 2016-17. The paper format changed in 2008 to be all microeconomics (no macroeconomics) - a change that I agitated strongly for, in order to balance the way that ECON200 was taught (as all macroeconomics). This allowed us to introduce a wider range of interesting business content, and especially to extend our coverage of market failures.

Steven was kind enough (or tolerant enough) to allow me space to innovate in the paper (and everyone else just went along with my suggestions). We began using the Aplia online testing platform in 2009 (after a successful trial run by Dan Marsh that year in summer school), and have used it every semester for the last nine years. We ran quite a successful group video project assessment from 2011 to 2015 (although we eventually dropped it when students stopped putting sufficient effort into the videos and the quality dropped significantly). In 2016, we introduced extra credit marks that could only be earned by completing spot quizzes in lectures, which has substantially improved lecture attendance, particularly late in the semester (this follows my standard practice in ECON110 for many years to offer extra credit marks in lectures). These innovative assessments have drawn attention across the university and internationally, and as I have mentioned before our assessment and grading practices compare well with others.

So, what comes next? It might be the end of ECON100, but it is just the beginning for our new ECONS101 paper, which will be introduced from A Semester next year. The ECONS101 paper will look totally different, but at the same time it will retain some of the key features that make it distinct from offerings at other universities. The biggest change will be the adoption of the CORE curriculum, which has just received further support from the Royal Economic Society:
CORE, which stands for Curriculum Open-access Resources in Economics (www.core-econ.org), is an international collaboration of researchers that has created an online, free, introductory economics ebook called The Economy. It is already the basis of the first year of economics degrees offered by more than a dozen universities in the UK, including University College London, King’s College London, and The University of Bristol.
CORE is probably the most exciting new development in economics teaching in recent years, and we are excited to be offering it. And as you can see from the list above, we're going to be following some high quality international university programmes in doing so. However, ECONS101 will retain a focus on business economics as well, and so future students can expect to have to deal with applications of pricing strategy alongside their learning from the CORE textbook (which I will write more about later this week).

The new ECONS101 paper will return to the two-thirds microeconomics, one-third macroeconomics format that the original ECON100 paper had. I'm thrilled that Les Oxley will be sharing the paper with me in both semesters next year, and we hope that the next generation of Waikato management students will get at least as good an experience with ECONS101 as past students have had with ECON100.

I'm looking forward to it. I can't wait!

*****

In case you're wondering, ECON110 will have a new life too, although only the paper code is changing (to ECONS102) and most of the content will remain similar to previous years (although that paper is a bit of a chameleon, having undergone constant redevelopment over the 13 years I've taught it).

Monday, 18 December 2017

The living wage may need an urgent look, but it needs to be a balanced one

In a story entitled "NZ living wage needs urgent look, Massey University and AUT researchers say", the New Zealand Herald reported today:
What could a New Zealand living wage look like?
A team of researchers have begun investigating the concept, which they say could help struggling, low-paid workers and tackle mounting challenges regarding poverty and productivity.
Massey University psychologist Professor Stuart Carr, who is co-leading the new three-year study, said living wages usually refer to higher minimum wage rates, derived from calculations of the material cost-of-living needs of a hypothetical household unit.
"However, the broader concept of living wages goes much further," he said...
The research team saw an urgent case to examine the area.
They said working poverty had "soared" due to low pay, insecure work that provided interrupted or insufficient hours of paid employment and rising housing, energy and food costs – all of which disproportionately affected women, younger and older people, and Maori and Pacific people in particular.
Researchers say that while a national minimum wage is a legal floor intended both to provide protection for workers and encourage fair competition among employers, minimum wages were now widely recognised as failing to provide sufficient cost-of-living income.
"This is due not only to the growth of informal work, poor awareness and weak enforcement of wage laws, but mainly to minimum wage rates not matching increasing living costs and the realities of precarious work," said Professor Jim Arrowsmith, of Massey's School of Management.
Investigating the living wage is important, but it's difficult to see what canvassing four employers ("a city council; a public-sector Maori organisation; a Pacific social enterprise; and a local small or medium-sized enterprise") will tell us, especially when there is already a wealth of research on the effect of higher minimum wages.

Much of the theoretical background (and some of the evidence) was summarised by Jim Rose (of the Taxpayers' Union) in an interesting report on the living wage earlier this year (full report here; summary here). While much of the report is a rebuttal of points made by the living wage movement (their report is available here), there are some general points that need to receive a bit more air, starting with:
The economics of a unilateral living wage policy by an individual employer is different to that of a minimum wage increase.
This is a point I have made before. The living wage may be good for an individual employer, but not if all employers pay it. That would effectively increase the minimum wage, which is probably not a good idea.

The main reason that most people use to support imposing a living wage is to help reduce poverty, or especially child poverty. However, if you hold that view then you need to confront the fact that:
The Treasury (2013) estimated that 79% of households earning pay below the living wage rate have no children; 6% are sole parents; the remaining 15% of households are couples with children (see graphic below). Almost all teenagers and majority of adults in their twenties earn below the living wage; 29% of low income workers live in families whose income exceeds $60,000...
This is a point that Eric Crampton has made before too (see for example here). So, a living wage would not be well targeted, and as Eric has also pointed out, increasing Working for Families would be a better option than increasing minimum wages. The reason is that a lot of the increase in the living wage would be lost to tax. According to Rose:
The living wage increase has a much smaller effect on the take-home pay of employees with families because of a reduced Working for Families tax credit. In its 2015 Minimum Wage Review, the Ministry of Business, Innovation and Employment (2015) calculated that a couple working 60 hours between them on the minimum wage lose over 40% of a living wage increase to reduced Working for Families and to tax...
Ok, so let's leave the higher minimum wage aside, and consider individual employers (rather than all employers) paying a living wage:
Any employer who unilaterally introduces a living wage is simply raising their hiring standards. The workers who previously won the jobs covered by the living wage increase will not be shortlisted because the quality of the recruitment pool will increase. The Council must by law hire on merit so only those who currently earn $18- $20 in other jobs will be shortlisted for living wage vacancies. These recruits are on about the living wage now so they do not benefit from the living wage policy...
Workers who would not have previously applied for council jobs because they can earn more elsewhere will now apply because of the higher pay. These better paid applicants will crowd out the applicants of the minimum wage workers who currently win these jobs. Living wage advocates do not discuss what becomes of these low-paid workers who are no longer shortlisted. They should.
This is a point that we don't see raised nearly enough. A rational employer will employ labour up to the point where an additional hour of labour costs the same in wages as the revenue it generates. So, if you pay a higher wage, then workers need to be more productive (see also this post). Rose's report addresses this point in some detail, providing a range of evidence (including New Zealand evidence) that suggests a living wage raises hiring standards. The key point is that employers want to be sure that the higher wages will be justified by higher worker productivity (as measured by higher revenues to offset the higher wages).
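To see the hiring rule in action, here is a minimal sketch with entirely hypothetical numbers (the wage rates and the revenue function are my own invention, not from Rose's report):

```python
def marginal_revenue_product(hours):
    """Hypothetical revenue from one more hour of labour (diminishing)."""
    return 40.0 - 0.5 * hours  # starts at $40/hour and falls

def hours_hired(wage):
    """Hire every hour whose marginal revenue product covers the wage."""
    hours = 0
    while marginal_revenue_product(hours) >= wage:
        hours += 1
    return hours

print(hours_hired(18.0))  # 45 hours hired at a lower (minimum-style) wage
print(hours_hired(21.0))  # 39 hours hired at a higher (living-style) wage
```

At the higher wage, only the hours (or workers) productive enough to pay for themselves are still hired, which is exactly the hiring-standards effect described above.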

But what about public sector employers, where revenue is (arguably) less of a consideration? Rose writes about an Auckland Council proposal for a living wage:
Mayor Goff said he could pay for the living wage increase by cutting costs elsewhere... If these expenditures such as on better fleet management and group procurement are of low enough value to be reprioritised to fund a living wage policy for no loss of service, ratepayers are entitled to ask why the expenses were incurred in the first place.
Indeed, if there are cost savings that can be made (with no loss of service) in order to afford a living wage, then why are those cost savings not already being made? Was Auckland Council simply wasting ratepayers' money previously? In reality though, most 'cost savings' are mythical so I'm not sure we can really buy the argument that a living wage would be paid from cost savings anyway. Nevertheless, productivity is still a consideration for public sector services, and the New Zealand evidence in the report does seem to demonstrate that hiring standards increased when Wellington City Council became a living wage employer.

There's a lot of interesting points made in Rose's report, based on a range of theory and research in labour economics. If you're not familiar with the literature, it's well worth a read for that alone.

Coming back to the future research by Carr et al. that led to this post, I'll be interested to see what they find. However, I'm not holding my breath that it will be a particularly balanced view, given how it has been reported so far.

Friday, 15 December 2017

Uber drivers taking advantage of their riders (again)

Earlier this week, I wrote a review of Brad Stone's book, The Upstarts, about Airbnb and Uber. I've blogged about Uber several times before, including this post about Uber drivers gaming the system by logging off in order to induce surge pricing. It turns out that is not the only way that Uber drivers can game the system, as Quartz reported last month:
Some Uber drivers in Lagos have been using a fake GPS itinerary app to illicitly bump up fares for local riders.
Initially created for developers to “test geofencing-based apps,” Lockito, an Android app that lets your phone follow a fake GPS itinerary, is being used by Uber drivers in Lagos to inflate the cost of their trips.
The drivers claim that they use the Lockito app in order to make up for Uber slashing fares earlier in the year:
Williams*, an Uber driver who asked his real name not to be used, says he heard about Lockito a while ago but initially had no interest in using it. “Uber was sweet, until they slashed the price,” he says. “They did not bring back their price up, so the work started getting tough and tougher.”
“When the thing was just getting tougher, I had no choice but to go on Lockito.”...
The funny thing is that Uber is clearly aware of Lockito, but allows drivers to continue using it:
Perhaps most surprisingly, drivers accuse Uber of not only knowing about app, but purposely not doing anything about it because they still want to maximize their profits.
“If you’re using Lockito [with] Uber [it] will tell you “fake location detected”…they will tell you [the driver],” says Williams. “Sometimes when I run it [Lockito], Uber will tell me, “your map of your location…is fake,” you’ll now click OK…and still yet, I take my money…”
I guess that way, Uber can claim that their fares are low and that it is the actions of the drivers, not Uber, that result in high fares for passengers. If Uber raised their fares, it seems unlikely that drivers would stop using Lockito. They've discovered a way to raise their incomes at essentially no cost to themselves, in a similar way to drivers in London and New York who were gaming the surge pricing algorithm.

As we note in the very first topic of ECON110, no individual or government will ever be as smart as all the people out there scheming to take advantage of an incentive plan [*]. This is just another example.

*****

[*] I've borrowed this point from the Steven Levitt and Stephen Dubner book, Think Like a Freak, which I reviewed here.

Wednesday, 13 December 2017

Running the gravity model in reverse to find lost ancient cities

The gravity model of trade or migration (which I have written about before here) must be one of the consistently best-performing empirical regression models (in terms of in-sample prediction, at least). The model is really simple too. In its simplest form, a gravity model suggests that the migration (or trade) flow between two regions is positively related to the 'economic mass' (proxied by population in the case of migration, or by GDP in the case of trade) of the origin and the economic mass of the destination, and negatively related to the distance between the two places. So really, you don't need a whole lot of data in order to run a gravity model (though you do need data on trade or migration flows).

The standard gravity model is based on known data such as the distances between countries (or regions, or cities). But what if you didn't know where the cities were (as might be the case for lost ancient cities), but you did know the size of the trade flows? Could you use the gravity model to triangulate the likely location of those lost cities, by estimating the distance from their trade partners? It turns out that yes, you can.

In what might be the most ingenious use of the gravity model I've ever seen, a new NBER Working Paper (ungated version here) by Gojko Barjamovic (Harvard), Thomas Chaney (Sciences Po), Kerem A. Coşar (University of Virginia), and Ali Hortaçsu (University of Chicago) does almost exactly that. The authors use a dataset of over 12,000 Assyrian tablets from 1930-1775 BCE, 2,806 of which contain mentions of multiple cities in Anatolia (modern-day Turkey). Of those tablets, 198 contain merchants' itineraries (227 itineraries in total) relating to travel between 26 cities (15 of which are known, and 11 of which are 'lost'). The authors explain the difference between known and lost cities:
‘Known’ cities are either cities for which a place name has been unambiguously associated with an archaeological site, or cities for which a strong consensus among historians exists, such that different historians agree on a likely set of locations that are very close to one another. ‘Lost’ cities on the other hand are identified in the corpus of texts, but their location remains uncertain, with no definitive answer from archaeological evidence. From the analysis of textual evidence and the topography of the region, historians have developed competing hypotheses for the potential location of some of those.
So, the authors use the data from the itineraries to construct a dataset of trade between known cities, and between known and lost cities. Using that dataset, they then estimate a gravity model of trade, which provides an estimate of the distance elasticity of trade of about 3.8. That means that each 1 percent increase in the distance between two cities reduced trade by about 3.8 percent. This is much higher than in modern models of trade, where the elasticity is usually about one, but given that ancient trade was mostly by road (or by coastal shipping) and the roads were not high quality, that doesn't seem too unusual.
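The estimation step can be sketched as a log-linear regression. Here is a minimal illustration on synthetic data (this is my own sketch, not the authors' code; the city sizes, distances, and noise level are all made up):

```python
# Estimate a distance elasticity of trade by OLS on synthetic data:
# log(trade) = a + b*log(size_i) + c*log(size_j) - e*log(dist)
import numpy as np

rng = np.random.default_rng(0)
n = 200
size_i = rng.uniform(1, 10, n)   # 'economic mass' of origin cities (made up)
size_j = rng.uniform(1, 10, n)   # 'economic mass' of destination cities
dist = rng.uniform(1, 100, n)    # distances between city pairs

true_elasticity = 3.8  # roughly the paper's estimate
log_trade = (np.log(size_i) + np.log(size_j)
             - true_elasticity * np.log(dist)
             + rng.normal(0, 0.1, n))  # small amount of noise

# Regress log trade on log sizes and log distance
X = np.column_stack([np.ones(n), np.log(size_i), np.log(size_j), np.log(dist)])
coef, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(round(-coef[3], 1))  # recovers a distance elasticity near 3.8
```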

Next comes the really cool bit. They then use the distance elasticity measure to 'back out' estimates of the location of the lost cities. Their method even gives confidence bounds around the estimated point location of each lost city. They conclude that:
...[f]or a majority of the lost cities, our quantitative estimates come remarkably close to the qualitative conjectures produced by historians, corroborating both such historical models and our purely quantitative method. Moreover, in some cases where historians disagree on the likely location of a lost city, our quantitative method supports the conjecture of some historians and rejects that of others.
Eyeballing the results from the maps though, the estimated locations of the lost cities don't appear (to me) to be particularly close to the historians' qualitative estimates. Even so, this is a very cool paper that uses the gravity model in a very novel way. Hopefully we see more of this in the future.
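To see how the 'backing out' works mechanically, here is a toy version with made-up coordinates and trade flows generated straight from the gravity equation (the real paper estimates everything jointly and handles noise properly; this is just the geometric intuition):

```python
import math

known = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (0.0, 10.0)}  # known cities
elasticity = 3.8          # distance elasticity, roughly the paper's estimate
true_lost = (4.0, 3.0)    # where the 'lost' city really is (to be recovered)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Trade generated exactly from the gravity equation (city sizes set to 1)
trade = {name: dist(true_lost, xy) ** -elasticity for name, xy in known.items()}

def fit(candidate):
    """Sum of squared log-errors between observed and predicted trade."""
    return sum((math.log(t) + elasticity * math.log(dist(candidate, known[n]))) ** 2
               for n, t in trade.items())

# Coarse grid search over candidate locations (0.1 spacing)
best = min(((x / 10, y / 10) for x in range(1, 100) for y in range(1, 100)),
           key=fit)
print(best)  # recovers (4.0, 3.0)
```

With three or more trade partners the location is over-determined, which is also what lets the authors put confidence bounds around each point estimate.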

[HT: Marginal Revolution]

Tuesday, 12 December 2017

How not to measure sexual risk aversion

Risk aversion seems like such a simple concept - it is how much people want to avoid risk. Conventionally, economists measure the degree of risk aversion of a person by how much of an expected payoff they are willing to give up for a payoff that is more certain (or entirely certain). If you're willing to give up a lot, you are very risk averse, and if you are not willing to give up a lot, you are not very risk averse. But notice that the measure of risk aversion is all about behaviour, either as a stated preference (what you say you would do when faced with a choice between a more certain outcome, and a less certain outcome that has a higher payoff on average) or a revealed preference (what you actually do when faced with that same choice).
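As a concrete illustration of that behavioural measure, here is a minimal sketch (the gamble, the utility functions, and the dollar amounts are all hypothetical) that finds the certain payment a person would accept in place of a 50/50 gamble; the gap between that and the gamble's expected value is the standard yardstick of risk aversion:

```python
import math

def certainty_equivalent(utility, low=0.0, high=100.0, p=0.5):
    """Sure amount with the same utility as a gamble paying `high` with
    probability p and `low` otherwise (found by bisection)."""
    gamble_utility = p * utility(high) + (1 - p) * utility(low)
    lo, hi = low, high
    for _ in range(60):
        mid = (lo + hi) / 2
        if utility(mid) < gamble_utility:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def risk_neutral(x):
    return x

def risk_averse(x):
    return math.sqrt(x)  # concave utility implies risk aversion

# The gamble's expected value is $50 in both cases
print(round(certainty_equivalent(risk_neutral)))  # 50: gives up nothing
print(round(certainty_equivalent(risk_averse)))   # 25: gives up $25 for certainty
```

A more risk-averse person has a lower certainty equivalent. Note that the survey in the paper discussed below never asks a question of this trade-off form.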

So, I was interested to read this recent paper by Stephen Whyte, Esther Lau, Lisa Nissen, and Benno Torgler (all from Queensland University of Technology), published in the journal Applied Economics Letters (sorry, I don't see an ungated version). In the paper, the authors claim to be comparing "risk attitudes towards unplanned pregnancy and sexually transmitted diseases (STDs)" between health students and other students. It is an interesting research question, since you might expect health students to be better informed about the actual risks of sexual behaviour.

However, when you look at the measure they used for risk attitudes, it becomes immediately clear that there is a problem:
To assess participant perceptions of the safety of different forms of contraception and sexual contact in relation to unplanned pregnancy and STDs, they were asked to rate, on a seven-point scale from 0% safe to 100% safe, the level of safety of each of six options. The six responses were then summed and divided by the number of responses to create a measure of average individual attitudes towards the specific risk.
The six options for risk of unplanned pregnancy were condoms; contraceptive pill; sex during menstruation; intrauterine devices; withdrawal method; and contraceptive implant; and the six options for risk of STDs were oral sex; physical contact; kissing; digital penetration; anal penetration; and vaginal penetration. At least, I think that's the case, as it was a little unclear from the paper.

However, their measure is clearly not a measure of risk attitudes (or risk aversion) at all. It is a measure of 'perceptions of safety'. Notice that the measure doesn't ask about students' sexual behaviour at all, and doesn't ask about a trade-off decision. So, it won't tell you much at all about risk aversion. In order to turn it into a measure of (sexual) risk aversion, you would at the very least need to ask the students to choose between two (or more) of the options, with different levels of risk and different levels of either 'beneficial payoff' or (more likely) cost.

Perceptions of safety of the different options is one component of the decision of which option to engage in (or to engage in none of them), but alone it does not tell you about risk aversion. A student might report that they believe the options convey a low degree of safety, but that doesn't mean that the student is risk averse. It just means that they believe that the options presented to them are high risk (low safety). Similarly, a student who reports that the options convey a high degree of safety is not necessarily less risk averse than a student who reports that the options convey a low degree of safety.

How would we expect health students to be different from other students? You might expect health students to be better informed about the actual safety associated with the different options (at least, you'd hope that they would learn this in their health studies!). In other words, you might expect other (non-health) students to over- or under-estimate the degree of safety of the different options to a greater extent than health students. Let's say that non-health students are more likely to over-estimate safety. They are more likely to take risks with their sexual health and in terms of unplanned pregnancy than are health students, because the health students are better informed about the real levels of safety of each option. This would manifest in higher measures of 'perception of safety' among non-health students than among health students. And these authors would interpret this as greater risk aversion among health students, when in fact it is entirely driven by the non-health students being misinformed relative to the health students.
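The confound can be shown in a few lines (the safety numbers are invented purely for illustration): two students with identical risk preferences, one well informed and one over-optimistic, produce different scores on the paper's measure, which is just an average of beliefs:

```python
# The paper's measure: average perceived safety across the six options
def survey_score(beliefs):
    return sum(beliefs) / len(beliefs)

true_safety = [0.98, 0.92, 0.75, 0.99, 0.73, 0.99]  # hypothetical values

well_informed = true_safety                                 # e.g. a health student
over_optimistic = [min(1.0, b + 0.1) for b in true_safety]  # misinformed peer

# Different scores, despite (by construction) identical risk preferences
print(round(survey_score(well_informed), 2))    # 0.89
print(round(survey_score(over_optimistic), 2))  # 0.95
```

The score separates the two students on beliefs alone, so differences between health and non-health students could reflect information rather than risk attitudes.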

Notice also that the measure of 'perceptions of safety' increases if students believe that oral sex is safer (in terms of avoiding risk of STDs), or if kissing is safer, or if vaginal sex is safer, with no consideration of the actual level of risk associated with each option. It would have been better to evaluate some of the options separately, rather than all together, since evaluating them all together really turns their measure into a general measure of 'perceptions of safety of sexual activity'.

That latter problem aside, the results of the paper are still interesting, provided you interpret them (correctly) in terms of 'perceptions of safety of sexual activity'. Students who reported as virgins had lower perceptions of safety (which might explain in part why they are still virgins). Older students had lower perceptions of safety (I guess, you learn from your mistakes, or your friends' mistakes?). Male students had higher perceptions of safety in terms of STDs, but not in terms of unplanned pregnancy (this one was a bit of a surprise, as I would have expected the opposite). Non-religious students (which the authors label as atheists) had lower perceptions of safety in terms of STDs, but higher perceptions of safety in terms of unplanned pregnancy (I guess the religious students are more worried about pregnancy, which can't easily be hidden from their peers and family, than they are about STDs, which can?).

Anyway, even though the results are interesting, it doesn't change the fact that this is not the way to measure risk aversion.

Monday, 11 December 2017

Book Review: The Upstarts

I've been following the development of Uber for a long time, including writing a few posts (see here, and here, and here, and here, and here, and here, and here). I haven't followed the development of Airbnb nearly as closely (and only mentioned them once on the blog, and then only in passing), but I have used the service. So, I was interested to read The Upstarts, by Brad Stone. The subtitle is "How Uber, Airbnb, and the killer companies of the new Silicon Valley are changing the world".

In the book, Stone does an excellent job of chronicling the history of Uber, Airbnb, and to a lesser extent Lyft and the other minor players in those industries. The book covers their origin stories (both real and imagined), their growth and fights with both their competitors and regulators (especially in the U.S. and Europe), their missteps (like 'ransackgate', and Uber's failure in China), and the story so far up to about the end of 2016 (which is to be expected, from a book published in 2017). Stone seems to mostly get the inside story from the key players involved, but overall the book is more of a highlights (and/or lowlights, depending on your perspective) package than a deep dive into each company and their history. Stone himself is pretty upfront about this in the introduction:
It is not a comprehensive account of either company, since their extraordinary stories are still unfolding. It is instead a book about a pivotal moment in the century-long emergence of a technological society. It's about a crucial era during which old regimes fell, new leaders emerged, new social contracts were forged between strangers, the topography of cities changed, and the upstarts roamed the earth.
I guess we'll have to wait and see if the old regimes actually fall, or whether they just stumble a little bit and continue to be propped up by regulators. This is exemplified in the final section, where Stone asks Travis Kalanick (the CEO of Uber):
When would Uber get to profitability?
Kalanick's response was pretty evasive (teaser - you'll have to read the book to find out!).

The book has a lot to offer the student of economics, with interesting discussions about Uber's introduction of surge pricing (which I hadn't realised was introduced relatively late, in 2012), Uber's pricing experiments in Boston, the price elasticity of demand for Uber (which I've written about before here), and the impact of Uber on the value of a taxi medallion in New York (which fell by more than half between 2013 and 2016, from a starting value of US$1.2 million). I realise that all of those examples relate to Uber and I'd say that is either a fair reflection of the book or my prejudices. The chapters alternate between the stories of Uber and Airbnb, but I found the Uber story much more engaging. The Airbnb story is interesting, but it doesn't have quite the same drama or the same depth of content.

The Boston pricing experiments in particular are interesting, since they directly led to the rollout of surge pricing everywhere:
...[Uber's general manager in Boston, Michael] Pao started running experiments. For a week he held fares steady for passengers but increased payments to drivers at night. In response, more drivers held their noses and stuck around for closing hours. It turned out drivers were in fact highly elastic and motivated by fare increases. Pao then spent a second week confirming the thesis by breaking Boston's Uber drivers into two groups. Some saw surging rates at night while others did not. Again, drivers with higher fares during peak times stayed in the system longer and completed more trips.
Pao now had something that the previous unfocused tests of surge pricing hadn't yielded - conclusive math... Kalanick was convinced, and surge pricing became orthodoxy inside Uber...
Overall, if you're looking for an easy read and to understand how Uber and Airbnb became the monsters they are today, this book is a good place to start. I enjoyed it, and I'm sure you would too.

Saturday, 9 December 2017

STEM-readiness and the gender gap in STEM at university

Consider the proportion of university enrolments that are in STEM (Science, Technology, Engineering, and Mathematics). In contrast with enrolment in the social sciences (for example), the pre-requisite learning at high school is higher for STEM subjects at university, since there are minimum levels of prior mathematics and (often) basic sciences. So, we can think about the proportion of university enrolments in STEM as represented by the following equation:
STEM/ENR = (STEM/READY) × (READY/ENR)
That is, the proportion of all enrolled university students (ENR) that are STEM students is equal to the proportion of students who meet the pre-requisites (READY) who enrol in STEM, multiplied by the proportion of all enrolled students who meet the pre-requisites. This is an over-simplification, but it helps us to understand the difference in the gender enrolment rate in STEM, where typically a higher proportion of male university enrolments are in STEM. This gender difference might arise because more male students who meet the pre-requisites enrol in STEM (compared with female students who meet the pre-requisites), or because more male students who enrol in university meet the pre-requisites (compared with female students who enrol). Knowing the source of this gender gap has important policy implications if you want a similar proportion of enrolments in STEM for each gender, since it tells you whether you need to increase female participation in the high school pre-requisite courses (the second term in the equation), or if you need to encourage female enrolment in STEM for those that meet the pre-requisites (the first term in the equation).
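To put hypothetical numbers on that decomposition (these shares are invented for illustration, not taken from the paper):

```python
# STEM/ENR = (STEM/READY) x (READY/ENR), with invented shares for each gender
def stem_share(group):
    return group["stem_given_ready"] * group["ready_share"]

male = {"stem_given_ready": 0.50, "ready_share": 0.60}
female = {"stem_given_ready": 0.48, "ready_share": 0.45}

gap = stem_share(male) - stem_share(female)
print(round(stem_share(male), 2))    # 0.30
print(round(stem_share(female), 3))  # 0.216
print(round(gap, 3))                 # 0.084, mostly driven by readiness here
```

In this made-up example the genders choose STEM at similar rates once ready, so almost all of the gap comes from the readiness term, which is the kind of pattern the paper below finds.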

A recent NBER working paper (alternative ungated version here) by David Card (University of California, Berkeley) and Abigail Payne (University of Melbourne) performs a similar disaggregation to the one above, using data on 413,656 university entrants in Ontario over the period 2004 to 2012. First, there is a clear gender gap in the data in terms of STEM enrolment:
Overall, 30.3% of females (a total of 72,033 women) and 42.5% of males (a total of 74,763 men) enrolled in a STEM program. Note that the gap in the proportion of students within each gender group who register in STEM is large (12 percentage points) despite the fact that nearly half (49%) of STEM registrants are female. This reflects the much larger fraction of females than males who enter university in the province.
They find that the difference in the second term in the equation above dominates. Their preferred decomposition results show a:
...13.2 percentage point gender gap in the fraction of newly entering university students who enroll in a STEM program. Overall, 2.1 percentage points are attributable to a lower rate of entering a STEM major by STEM ready females than males...; 1.7 percentage points are attributable to the slightly lower fraction of females than males who are STEM ready at the end of high school and the slightly lower fraction of STEM ready females who enter university...; and 9.4 percentage points are attributable to the higher fraction of non‐STEM ready females who finish high school with enough qualifying classes to be university ready.
That last bit is not surprising in one sense, that female university students are less likely to be STEM-ready (that is, to have the pre-requisites for enrolling in STEM). However, it is surprising in the sense that the reason for that difference is the much greater numbers of female students enrolling who are not STEM-ready, compared with male students who are not STEM-ready. So, the gender gap could be reduced by encouraging more female students to take the pre-requisite courses in high school, or by encouraging more male students who are not STEM-ready into university courses in non-STEM disciplines.

The one potential negative about the research is that the STEM subjects include:
engineering, physical sciences, natural sciences, math, computer science, nursing, environmental science, architecture, and agriculture.
They do make a good case for why nursing is included, but I also wonder about agriculture, and whether excluding those two subjects would make much of a difference to the results.

However, overall the results are interesting and help us to better understand the gender gap. I would be really interested to see if something similar holds for New Zealand.

[HT: Alex Tabarrok at Marginal Revolution, back in September]

Friday, 8 December 2017

The overstated business case for having more female managers

Westpac released a report this week on gender representation in business leadership positions. It's an important topic, and rightly received a lot of media attention (see here and here, for example). However, the media focused attention on the one part of the report that is not worth the (virtual) paper it was written on:
New Zealand's economy has a nearly $900 million annual economic hole because of low numbers of women in management roles, new research suggests.
Let me break that $900 million down, and if at the end of this post you still believe it, I know some Nigerian princes with millions of dollars that they can't get out of their country that you can help.

First, the $900 million is actually $881 million, and it comes from two sources. From the report, (emphasis is theirs):
First, having more women in leadership positions can change perceptions about female competency and skills, and this effect can increase female labour force participation. We estimate that increased female participation at manager level and above would be worth an additional $196 million (or 0.07% of GDP).
Second, women in leadership roles tend to be more supportive of flexible working policies, which in turn also increases labour force participation.
We estimate that if all New Zealand businesses were to achieve gender parity in leadership, it is likely to lead to an increase in the number of businesses offering flexible work policies. The associated benefit resulting from more businesses offering flexible working policies is an additional $685 million (or 0.26% of GDP).
Let's look at where they get each of those two numbers from, starting with the $196 million. They first estimate a model that relates female labour force participation to the proportion of employees that are female managers. They use OECD data for four years only (2011-2014). The results of that analysis suggest that:
...a 1% increase in the share of female employees who are managers is associated with a 0.09% uplift in the female labour force participation rate.
They then extrapolate that number (based on Australian data showing that 13.3% of male employees are managers, but only 8.9% of female employees are) and claim that the female labour force participation rate would increase by 0.39% if gender parity in management were attained. The problems with this analysis are many. First, the model only shows a correlation, not cause-and-effect. Nothing in their analysis proves that changing the number of female managers would cause any change in the female labour force participation rate. Second, it's based on only four years of data, during which most countries' percentages of female managers and female labour force participation rates will not have changed by much. So the extrapolation will likely be well outside the observed data. Third, the data on the proportion of managers is for Australia, not New Zealand. While our two countries are similar, they are not the same, as the report even notes:
Relative to Australia, New Zealand performs equal to or better in nearly every respect, including pay gaps.
So it seems unlikely that that first number can be believed.
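As far as I can tell, the 0.39% figure is simple linear extrapolation from the quoted numbers. A quick sketch of the arithmetic (my reconstruction, not the report's actual calculation):

```python
# Reconstructing the report's extrapolation from the figures quoted
# above (illustrative arithmetic only, not the report's own workings).
male_manager_share = 13.3     # % of male employees who are managers (Australia)
female_manager_share = 8.9    # % of female employees who are managers (Australia)
uplift_per_point = 0.09       # % rise in female LFP per 1-point rise in female managers

gap_to_parity = male_manager_share - female_manager_share   # 4.4 points
implied_lfp_uplift = gap_to_parity * uplift_per_point
print(round(implied_lfp_uplift, 2))  # → 0.4 (the report quotes 0.39)
```

Note that the whole result rests on pushing the estimated coefficient 4.4 points beyond data in which those shares barely moved.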

The second number ($685 million) is even less plausible. They first run a model that relates the number of flexible working policies to the proportion of female managers. They find a:
...marginal effect of increasing the availability of flexible working policies by 13% if the average share of female management increases from the status quo calculated in Australia (22.4%) to parity (50%).
They then use that result in an additional calculation that is summarised in the following table:


The first problem here is the model, which again shows correlation, not cause-and-effect. This model is even more problematic than the previous one, though, because the causation clearly runs in both directions - having more flexible working policies will attract more female managers, and more female managers will push for flexible working policies to be implemented. That means you can have very little confidence in the coefficients from the model, because they are biased. Second, they are again using Australian data in part of their calculation.

Third, look at the top row of the table: "Percentage of people not in the labour force citing flexible working policies as a very important incentive to join the labour force". They then assume that 13% of those 23% of people would join the labour force if flexible working policies were increased by 13%. There is no consideration that these people have only been asked about incentives, not about actually joining the labour force. On top of that, rolling out flexible working policies in 13% more businesses doesn't mean that the availability of jobs with flexible working policies increases by 13% (unless you also assume that the firms newly offering flexible working policies are, on average, the same size as all firms collectively). Maybe it will be larger firms that do this, or maybe smaller firms? That hasn't been considered.

Finally, both numbers ($196 million and $685 million) assume that the increase in labour force equates to an increase in employment. That is by no means a given. If more people enter the labour force, but there are no additional jobs for them, that increase in the labour force becomes additional unemployment, with no increase in GDP at all. Alternatively, the increase in labour supply might reduce average wages, either directly or through an increase in part-time (compared with full-time) work. So, even though GDP might increase, other workers in the economy are made worse off.

There is another bit of analysis in the report that associates each one-percentage-point increase in female managers with an increase in return on assets of 0.07%, and then argues that raising female management to parity would increase return on assets by 1.5%. However, if you believe that, why would you stop at parity? If you went all the way to 100% female management, you could increase return on assets by nearly 5%!
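The report doesn't state its baseline, but you can back it out from its own parity figure and then push the same linear model to the absurd endpoint (my arithmetic, not the report's):

```python
# Rough check of the extrapolation logic: infer the implied baseline
# share of female managers from the report's own parity figure, then
# extend the same linear model to 100% female management.
roa_per_point = 0.07    # % gain in return on assets per 1-point rise in female managers
parity_gain = 1.5       # % gain in return on assets claimed at parity (50%)

implied_baseline = 50 - parity_gain / roa_per_point     # implied starting share
gain_at_100 = (100 - implied_baseline) * roa_per_point  # "all the way to 100%"
print(round(implied_baseline, 1), round(gain_at_100, 1))  # → 28.6 5.0
```

A linear relationship that keeps paying off all the way to 100% female management should make you suspicious of the coefficient it started from.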

Overall, the Westpac New Zealand Diversity Dividend Report does make some good points. However, the economic impact and business case are extraordinarily oversold as they are clearly flawed, and they do no credit to the overall argument that a more equal representation of women in management would be a desirable result.

Thursday, 7 December 2017

Female representation in economics

The RePEc (Research Papers in Economics) blog reported last month on female representation:
Thanks to the ranking of female authors in RePEc, we have long known the share of women in the RePEc sample of more than 50K authors: 19%. We now know also the shares of women economists by country, US state, field of study and PhD cohort...
European countries are doing better than the world average, especially Latin and Eastern European countries, while Anglo-Saxons are the most masculine (is it that relatively higher salaries for the profession in Anglo-Saxon universities attract the most competitive men?). Latin America is generally below average (except for Colombia and Argentina) while Asia has very low shaes [sic] of female economists, with less than 6% in Japan, China and India, and 9% in Pakistan (you can sort by column in the link).
Where does New Zealand rank? Just ahead of the United States and Australia, in 31st (out of 61 countries in the ranking), with female participation of 16.2% (compared with 16.1% for the U.S., 15.9% for Australia, and 18.2% for the U.K.). However, things are not nearly so good at the top. The top 25% ranking for New Zealand economists can be found here. Of the 66 economists in that list, only five (that's 7.6% for those of you keeping score) are women (#26 Suzi Kerr; #44 Trinh Le; #58 Anna Strutt; #59 Rhema Vaithianathan; and #61 Hatice Ozer-Balli).

One other thing that's interesting about the RePEc blog post, in light of my post about the Wu and Card research earlier this week, is the representation of female economists on Twitter:
While women represent 19% of the RePEc authors, they are only 14% in the Twitter subsample. Looking at the Top 25% of this list of RePEc/Twitter economists by number of followers (3rd row), the proportion of women falls to less than 13%. In fact, the total audience of these women among the top 25% is a little over 3%.
That's not at all surprising, if the online world is as hostile for female economists as the Wu and Card research seems to suggest.

Wednesday, 6 December 2017

No, bitcoin is not bigger than New Zealand

One of my pet peeves is people (especially the media) who directly compare stocks and flows. The latest example is from this Bloomberg article about bitcoin (reproduced in the New Zealand Herald yesterday, but wrongly attributed to the Washington Post):
Bitcoin's extraordinary price surge means its market capitalisation now exceeds the annual output of whole economies, and the estimated worth of some of the world's top billionaires...
Here are five things that have been eclipsed by bitcoin in terms of market capitalisation:
• New Zealand's GDP: The nation's farm-and-tourism-led economy is valued at US$185 billion (NZ$269b), according to World Bank data as of July, putting it some US$5 billion below bitcoin. The cryptocurrency's market cap is also bigger than the likes of Qatar, Kuwait and Hungary.
Comparing the market capitalisation of bitcoin (a measure of the entire stock of bitcoin) with the GDP of a country (a measure of one year's worth of output) is pointless. It doesn't really tell you anything.

If all investors are rational [*], then the market capitalisation (of a company, or of bitcoin) is equal to the discounted cash flow (of the company, or of bitcoin) for all time. How that compares with one year's economic output isn't a meaningful comparison. You would need to compare the market capitalisation of bitcoin with the discounted value of all future years of New Zealand's GDP.
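To see just how lopsided a like-for-like comparison would be, here's a hypothetical back-of-the-envelope calculation (the discount and growth rates are my illustrative assumptions, not forecasts):

```python
# Hypothetical comparison of bitcoin's market cap with the present
# value of ALL future NZ GDP, using a Gordon-growth-style formula.
# The discount and growth rates are illustrative assumptions only.
nz_gdp = 185e9          # US$, one year's output (World Bank figure quoted above)
bitcoin_mcap = 190e9    # US$, roughly NZ GDP plus the US$5 billion gap quoted above
r = 0.05                # assumed discount rate
g = 0.02                # assumed long-run GDP growth rate

pv_all_future_gdp = nz_gdp * (1 + g) / (r - g)
print(pv_all_future_gdp / bitcoin_mcap)   # roughly 33 times bitcoin's market cap
```

On any plausible set of rates, the stock that corresponds to New Zealand's GDP flow dwarfs bitcoin's market capitalisation many times over.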

So, saying that the discounted cashflow of bitcoin for all time is greater than one year's economic output of New Zealand doesn't tell you much of anything. It definitely doesn't tell you that "Bitcoin is now bigger than... New Zealand".

*****

[*] Of course, it isn't at all clear that investors in bitcoin are rational. From what I can see, all of the 'bitcoin bulls' have significant investments in bitcoin, so it is hard to conclude anything other than there is a large amount of nest feathering going on. When you see a case where the market value of something is entirely based on the expectation that other investors will be willing to pay more for it in the future (because those investors, in turn, believe that other investors will be willing to pay yet more for it further in the future), then it's pretty likely that you're observing a bubble. And in that case, the people who yell "bitcoin is not a bubble" the loudest are those most likely to lose their shirts.

[Update]: Liam Dann makes a similar point.

Monday, 4 December 2017

The toxic environment for women in econjobrumors.com

Back in August, Justin Wolfers argued yes in the New York Times:
A pathbreaking new study of online conversations among economists describes and quantifies a workplace culture that appears to amount to outright hostility toward women in parts of the economics profession.
Alice H. Wu, who will start her doctoral studies at Harvard next year, completed the research in an award-winning senior thesis at the University of California, Berkeley. Her paper has been making the rounds among leading economists this summer, and prompting urgent conversations...
Ms. Wu mined more than a million posts from an anonymous online message board frequented by many economists. The site, commonly known as econjobrumors.com (its full name is Economics Job Market Rumors), began as a place for economists to exchange gossip about who is hiring and being hired in the profession. Over time, it evolved into a virtual water cooler frequented by economics faculty members, graduate students and others...
Ms. Wu set up her computer to identify whether the subject of each post is a man or a woman. The simplest version involves looking for references to “she,” “her,” “herself” or “he,” “him,” “his” or “himself.”
She then adapted machine-learning techniques to ferret out the terms most uniquely associated with posts about men and about women.
The 30 words most uniquely associated with discussions of women make for uncomfortable reading.
In order, that list is: hotter, lesbian, bb (internet speak for “baby”), sexism, tits, anal, marrying, feminazi, slut, hot, vagina, boobs, pregnant, pregnancy, cute, marry, levy, gorgeous, horny, crush, beautiful, secretary, dump, shopping, date, nonprofit, intentions, sexy, dated and prostitute.
The parallel list of words associated with discussions about men reveals no similarly singular or hostile theme.
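The pronoun-counting step Wolfers describes is simple enough to sketch (this is my illustration of the idea, not Wu's actual code):

```python
import re

# Pronoun sets as described in the quoted passage above
FEMALE = {"she", "her", "herself"}
MALE = {"he", "him", "his", "himself"}

def classify_post(text):
    """Crude sketch of the classification step: label a post by
    which set of gendered pronouns dominates."""
    words = re.findall(r"[a-z']+", text.lower())
    f = sum(w in FEMALE for w in words)
    m = sum(w in MALE for w in words)
    if f > m:
        return "female"
    if m > f:
        return "male"
    return "unclassified"

print(classify_post("She defended her thesis and he congratulated her."))  # → female
```

Wu's machine-learning step then works from these labelled posts to find the words most predictive of each label.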
I finally read the paper (ungated) this week (by Alice Wu and David Card), and it is every bit as disturbing as advertised. For instance, here's Table 1, which shows the ten words most associated with 'female' posts (those most likely to be about a woman because they contain a preponderance of terms like "her" and "she"), and 'male' posts:


That's only the beginning, though. As Wolfers notes, Wu and Card then go on to show that there are differences in the way that females and males are discussed on the forum. For example:
...on average there are 4.07 academic or job related words in each post associated with male, but 1.76 less (a significant 43.2% decrease) when it is asscoiated (sic) with female. In terms of probability, 70.6% of the "male" posts include at least one academic/work term, while 57.4% of "female" posts do.
And:
...a "female" post on average include 1.341 terms related to personal info or physical attributes, almost three times of what occurs in an average "male" post.
In other words, posts related to females are much more likely to focus on physical appearance or personal characteristics, while posts related to males are much more likely to maintain an academic focus. When they look at threads rather than posts, they find similar results, but also that a thread becomes more personal following a 'female' post.

Finally, Wu and Card go on to show that top female economists receive more attention on the forum than male economists do. However, based on their other results, it almost goes without saying that this attention is not as focused on their academic output.

Of course, it is difficult to argue that Econjobrumors is representative of the profession as a whole, or even of young economists. I lasted all of a day or two on the site when I was a PhD student before I saw it as generally a waste of time. Hopefully, young female economists are giving it a wide berth too, because it doesn't paint the best picture of the economics profession. However, Wolfers' article does end on a positive note about Wu:
She is also tenacious, and when I asked Ms. Wu whether the sexism she documented had led her to reconsider pursuing a career in economics, she said that it had not. “You see those bad things happen and you want to prove yourself,” she said.
Indeed, she told me that her research suggests “that more women should be in this field changing the landscape.”
I agree.

[HT]: Marginal Revolution, back in August.

Sunday, 3 December 2017

Lobbyists, rent seeking and deadweight losses

The rise of lobbying in New Zealand has been in the news recently, as Bryce Edwards explained in his regular Political Roundup column in the New Zealand Herald a couple of weeks ago:
Political lobbying is a growth industry in New Zealand. And lobbyists are going to be particularly busy over the next year.
Edwards charts the rise of 'hyper-partisan' lobbying firms like Hawker Britton and its right-wing counterpart Barton Deakin. It's an interesting read, along with the many links to other articles embedded within it.

Of course, lobbyists are ultimately employed by firms that are seeking favourable policy settings. Perhaps they are looking for lighter-handed regulation for themselves, or more regulation of their competitors. Economists refer to this sort of activity as rent seeking, and in ECON100 and ECON110 I discuss it as one of the key reasons that we might consider monopolies (or firms with market power more generally) to be unfavourable for society. Those firms make large profits, and therefore have a large incentive to use some of those profits to protect their market position through lobbying. If the government is seeking to regulate their industry or to open it to more competition (or the firms are worried that the government might contemplate doing so), then those firms will employ lobbyists to dissuade the government from policies that don't favour them.

When I was an undergraduate student, I struggled to see how rent seeking was negative for society. Obviously, it seems ethically problematic. But in a general equilibrium framework, if the firm spends some of its profits on lobbyists, that spending simply becomes income for the lobbyists, and total welfare remains effectively the same (or maybe it even increases, due to the producer surplus in the labour market for lobbyists).

However, that position forgets that the market operates across multiple periods. The firm with market power generates a deadweight loss (for an explanation of why, see the first part of this earlier post). That deadweight loss arises because the firm with market power is able to price above marginal cost. If the government were to open the market to more competition, or to regulate prices, that would force the price down and increase total welfare in the market. Therefore, if the actions of the lobbyists prevent the regulation or the competition, they have a cost to society that can be measured by the future deadweight losses that continue to accrue. So lobbying does potentially have real negative consequences for society, and as a society we should care about the actions of lobbyists and their interactions with our politicians.
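The mechanism can be made concrete with a standard textbook example (hypothetical numbers, linear demand, constant marginal cost):

```python
# Illustrative deadweight loss from market power, with linear demand
# P = a - b*Q and constant marginal cost c (hypothetical numbers,
# purely to make the mechanism concrete).
a, b, c = 100.0, 1.0, 20.0

q_comp = (a - c) / b          # competitive output, where P = MC
q_mono = (a - c) / (2 * b)    # monopoly output, where MR = MC
p_mono = a - b * q_mono       # monopoly price, above marginal cost

# The deadweight loss is the triangle between demand and MC over the lost output
dwl_per_period = 0.5 * (p_mono - c) * (q_comp - q_mono)

# If lobbying preserves the monopoly indefinitely, the cost to society
# is the present value of that recurring loss (5% discount rate assumed)
r = 0.05
pv_dwl = dwl_per_period / r
print(dwl_per_period, pv_dwl)   # prints 800.0 16000.0
```

The key point is the last step: the lobbying doesn't just shuffle one period's profits around, it locks in a deadweight loss in every future period.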