One non-market valuation technique that we discuss in ECON110 is the contingent valuation method (CVM) - a survey-based stated preference approach. We call it stated preference because we essentially ask people the maximum amount they would be willing to pay for the good or service (so they state their preference), or the minimum amount they would be willing to accept in exchange for foregoing the good or service. This differs from a revealed preference approach, where you look at the actual behaviour of people to derive their implied willingness-to-pay or willingness-to-accept.
As I've noted before, I've used CVM in a number of my past research projects, including this one on landmine clearance in Thailand (ungated earlier version here), this one on landmines in Cambodia (ungated earlier version here), and a still incomplete paper on estimating demand for a hypothetical HIV vaccine in China (which I presented as a poster at this conference).

One of the issues highlighted in that earlier debate had to do with scope problems:
Scope problems arise when you think about a good that is made up of component parts. If you ask people how much they are willing to pay for Good A and how much they are willing to pay for Good B, the sum of those two WTP values often turns out to be much more than what people would tell you they are willing to pay for Good A and Good B together. This issue is one I encountered early in my research career, in joint work with Ian Bateman and Andreas Tsoumas (ungated earlier version here).

Which brings me to two new papers published in the journal Ecological Economics. But first, let's back up a little bit. Back in 2009, David Chapman (Stratus Consulting, and lately the US Forest Service) and co-authors wrote this report estimating people's willingness-to-pay to clean up Oklahoma's Illinois River System and Tenkiller Lake. In 2012, William Desvousges and Kristy Mathews (both consultants), and Kenneth Train (University of California, Berkeley) wrote a pretty scathing review of scope tests in contingent valuation studies (published in Ecological Economics, ungated here), and the Chapman et al. report was one that was singled out. You may remember Desvousges, Mathews, and Train from the CVM debate I discussed in my earlier post.
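To make the scope problem described above concrete, here's a minimal numeric sketch. The WTP figures are invented purely for illustration; they aren't taken from any of the studies discussed here.

```python
# Hypothetical illustration of a scope (adding-up) problem in
# stated-preference survey data. All dollar values are invented.

wtp_a = 40.0         # mean stated WTP for Good A alone ($)
wtp_b = 35.0         # mean stated WTP for Good B alone ($)
wtp_combined = 50.0  # mean stated WTP for Goods A and B together ($)

# If responses were internally consistent (setting aside substitution
# and income effects), the WTP for the combined good should be close
# to the sum of the WTP values for its component parts.
sum_of_parts = wtp_a + wtp_b
discrepancy = sum_of_parts - wtp_combined

print(f"Sum of parts:            ${sum_of_parts:.2f}")
print(f"Stated WTP for the whole: ${wtp_combined:.2f}")
print(f"Scope discrepancy:        ${discrepancy:.2f}")
```

With these made-up numbers, the parts sum to $75 while the whole is valued at only $50 — the kind of gap that a scope test is designed to detect.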
Four years later, Chapman et al. responded to the Desvousges et al. paper (sorry, I don't see an ungated version online). In their reply, Chapman et al. offer a quite different interpretation of their methods that, on the surface, appears to validate their results. Here's their conclusion:
In summary, DMT argue that Chapman et al.'s scope difference is inadequate because it fails to satisfy theoretical tests related to discounting and to diminishing marginal utility and substitution. Also, according to them, our scope difference is too small. Once the fundamental flaws in their interpretation of the scenarios are corrected, none of these arguments hold. The upshot is that Chapman et al. must be assigned to the long list of studies cited by DMT where their tests of adequacy cannot be applied.

However, Desvousges et al. respond (also no ungated version available). The response is only two pages, but it leaves the matter pretty much resolved (I think). The new interpretation of the methods employed by Chapman et al. has raised some serious questions about the overall validity of the study. Here's what Desvousges et al. say:
...this statement indicates that respondents were given insufficient information to evaluate the benefits of the program relative to the bid amount (the cost). The authors argue that the value of the program depends on how the environmental services changed over time, and yet the survey did not provide this information to the respondent. So the authors violated a fundamental requirement of CV studies, namely, that the program must be described in sufficient detail to allow the respondent to evaluate its benefits relative to costs. The authors have jumped – to put it colloquially – from the frying pan into the fire: the argument that they use to deflect our criticism about inadequate response to scope creates an even larger problem for their study, that respondents were not given the information needed to evaluate the program.

All of which suggests that, when you are drawn into defending the quality of your work, you should be very careful that you don't end up simply digging a bigger hole for your research to be buried in.