I have written a number of posts about debates on the contingent valuation method (most recently here, but see the links at the end of this post for more). A 2016 debate that I blogged about here was picked up again in 2020 (but I didn't blog about it then, because I was kind of busy trying to manage the COVID lockdown-online teaching debacle). So, what happened? The first of the two 2020 articles, published in the journal Ecological Economics (sorry, I don't see an ungated version online), is by John Whitehead (Appalachian State University), a serial participant in contingent valuation debates.
This part of the debate centres on 'adding up tests', which essentially test for scope problems. To reiterate (from this post):
Scope problems arise when you think about a good that is made up of component parts. If you ask people how much they are willing to pay for Good A and how much they are willing to pay for Good B, the sum of those two WTP values often turns out to be much more than what people would tell you they are willing to pay for Good A and Good B together. This issue is one I encountered early in my research career, in joint work with Ian Bateman and Andreas Tsoumas (ungated earlier version here).
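To make that concrete, here is a minimal sketch with invented numbers (nothing below comes from any of the actual studies):

```python
# Hypothetical stated WTP values (all numbers invented for illustration).
wtp_a = 40.0      # WTP for Good A alone
wtp_b = 35.0      # WTP for Good B alone
wtp_whole = 45.0  # WTP for Good A and Good B together, asked as one package

sum_of_parts = wtp_a + wtp_b  # 75.0
# The scope problem: the parts sum to far more than the whole package,
# which is exactly the pattern an adding up test is designed to detect.
print(sum_of_parts, wtp_whole)  # 75.0 45.0
```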
An 'adding up test' tests whether the willingness to pay for the global good (Good A and Good B together) is equal to the sum of the willingness to pay for Good A alone and the willingness to pay for Good B as an increment (that is, conditional on already having Good A), so that under standard theory the values should add up exactly. In relation to this particular debate, Whitehead summarises where we are up to:
Desvousges et al. (2012) reinterpret the two-scenario scope test in Chapman et al. (2009) as a three-scenario adding-up test. They then assert that the implicit third willingness-to-pay estimate is not of adequate size. Whitehead (2016) critiques the notion of the adding-up test as an adequacy test and proposes a measure to assess the economic significance of the scope test: scope elasticity. Chapman et al. (2016) argue that Desvousges et al. (2012) misinterpret their scope test. Desvousges et al. (2016) reply that they did not misinterpret the Chapman et al. (2009) scope test and assert that their adding-up test in Desvousges et al. (2015) demonstrates one of their points.
Desvousges et al. (2015) field the Chapman et al. (2009) survey with new sample data collected with a different survey sample mode than that used by Chapman et al. (2009) and three additional scenarios. Desvousges et al. (2015) conduct an adding-up test and argue that willingness-to-pay (WTP) for the whole should be equal to willingness-to-pay for the sum of four parts (the first, second, third and fourth increment scenarios). Desvousges et al. (2015) find that “The sum of the four increments … is about three times as large as the value of the whole” (p. 566).
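As an aside, the scope elasticity measure that Whitehead proposes is, roughly, the percentage change in WTP divided by the percentage change in the size of the good. Here's a minimal sketch of one common arc-elasticity formulation (the exact construction in Whitehead (2016) may differ, and the numbers are invented):

```python
def arc_scope_elasticity(wtp_small, wtp_large, q_small, q_large):
    """Arc elasticity of WTP with respect to the scope (size) of the good:
    percentage change in WTP over percentage change in quantity,
    using midpoints as the base."""
    pct_change_wtp = (wtp_large - wtp_small) / ((wtp_small + wtp_large) / 2)
    pct_change_q = (q_large - q_small) / ((q_small + q_large) / 2)
    return pct_change_wtp / pct_change_q

# Invented example: doubling the size of the good raises WTP from 40 to 45.
print(arc_scope_elasticity(40.0, 45.0, 1.0, 2.0))  # ~0.18, i.e. weak scope sensitivity
```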
Whitehead joins the debate on the side of Chapman et al., defending them by re-examining Desvousges et al.'s analysis and arguing that it actually does pass an 'adding up test', and therefore that there are no scope problems in the original Chapman et al. paper. Whitehead concludes that there are a number of problems in the Desvousges et al. analysis:
First, they do not elicit WTP estimates explicitly consistent with the theory of the adding-up test. Their survey design suggests that a one-tailed test be conducted where the sum of the WTP parts is expected to be greater than the WTP whole. Second, there are several data quality problems: non-monotonicity, flat portions over wide ranges of the bid function and fat tails. Each of these data problems leads to high variability in mean WTP across estimation approach and larger standard errors than those associated with nonparametric estimators that rely on smoothed data.
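The three data problems Whitehead points to are straightforward to check in dichotomous-choice CV data, where each respondent faces a single cost prompt (bid) and answers yes or no. A sketch with made-up bids and yes-shares (not the actual data from any of these papers):

```python
# Hypothetical dichotomous-choice CV data: bid levels and the share of
# respondents saying "yes" at each bid (all numbers invented).
bids       = [5, 10, 25, 50, 100, 200]
yes_shares = [0.80, 0.65, 0.70, 0.70, 0.45, 0.40]

# Non-monotonicity: the yes-share should fall as the bid rises.
non_monotonic = any(yes_shares[i + 1] > yes_shares[i]
                    for i in range(len(yes_shares) - 1))

# Flat portions: adjacent bids with (near-)identical yes-shares, meaning
# the bid function carries little information over that range.
flat_pairs = sum(abs(yes_shares[i + 1] - yes_shares[i]) < 0.01
                 for i in range(len(yes_shares) - 1))

# Fat tail: a high yes-share at the highest bid, so mean WTP depends
# heavily on how the curve is extrapolated beyond the bid range.
fat_tail = yes_shares[-1] > 0.15

print(non_monotonic, flat_pairs, fat_tail)  # True 1 True
```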
I'm not going to get into the weeds here, because what I want to highlight is the response by William Desvousges, Kristy Mathews (both independent consultants), and Kenneth Train (University of California, Berkeley), also published in the journal Ecological Economics (and again with no ungated version available). The response is only two pages long, but it is a very effective takedown of Whitehead. Along the way, Desvousges et al. note that Whitehead:
...made numerous mistakes in his calculations... When these errors are corrected, adding-up fails for each theoretically valid parametric model that Whitehead used.
One example of Whitehead's errors is:
He used medians for the tests instead of means, assuming – incorrectly – that the sum of medians is the median of the sum.
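Medians, unlike means, do not pass through sums, which is easy to verify with arbitrary numbers:

```python
import statistics

# Two arbitrary paired samples (invented numbers).
x = [1, 2, 10]
y = [10, 2, 1]

# Medians do not add up across variables...
print(statistics.median(x) + statistics.median(y))       # 2 + 2 = 4
print(statistics.median([a + b for a, b in zip(x, y)]))  # median([11, 4, 11]) = 11

# ...whereas means do, which is why the comparison needs means, not medians.
print(statistics.mean(x) + statistics.mean(y))           # 8.666...
print(statistics.mean([a + b for a, b in zip(x, y)]))    # 8.666...
```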
That's a fair criticism. However, Desvousges et al. are not satisfied leaving it at that. Instead, they go on the attack:
Also, we examined the papers authored or co-authored by Whitehead that are cited in the recent reviews... These papers provide 15 CV datasets. Each of the three problems that Whitehead identified for our paper is evidenced in these datasets:
- Non-monotonicity: 12 of the 15 datasets exhibit non-monotonicity.
- Flat portions of the response curve: All 15 datasets have flat areas for at least half of the possible adjacent prompts, and 4 datasets have flat areas for all adjacent prompts.
- Fat tails: In our data, the yes-share at the highest cost prompt ranged from 15 to 45%, depending on the program increment. In Whitehead's studies, the share ranged from 14 to 53%.
If Whitehead's data are no worse than typical CV studies, then his papers indicate the pervasiveness of these problems in CV studies.
Ouch! That seems to have ended that particular debate. My takeaway (apart from not messing with Desvousges et al.) is that the contingent valuation method is far from perfect. In particular, it is vulnerable to scope problems, which my own research with Ian Bateman and Andreas Tsoumas (ungated earlier version here) showed some years ago. Ironically, that contingent valuation has particular problems is a message that John Whitehead himself has argued before (see here).
Read more: