One non-market valuation technique that we discuss in ECON110 is the contingent valuation method (CVM) - a survey-based stated preference approach. We call it stated preference because we essentially ask people the maximum amount they would be willing to pay for the good or service (so they state their preference), or the minimum amount they would be willing to accept in exchange for foregoing the good or service. This differs from a revealed preference approach, where you look at the actual behaviour of people to derive their implied willingness-to-pay or willingness-to-accept.
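As an aside for the more quantitatively minded: a common way to implement CVM in practice is the single-bounded dichotomous-choice format, where each respondent sees a single randomly assigned bid amount and simply answers yes or no. Here is a minimal sketch in Python of how mean WTP can then be estimated; the bid amounts, sample size, and lognormal latent WTP are purely my assumptions for illustration, and are not taken from any of the studies discussed below.

```python
import numpy as np
import statsmodels.api as sm

# Simulated single-bounded dichotomous-choice CVM data: each respondent
# sees one randomly assigned bid and answers yes/no to paying that amount.
rng = np.random.default_rng(42)
n = 500
bids = rng.choice([5.0, 10.0, 20.0, 40.0, 80.0], size=n)
latent_wtp = rng.lognormal(mean=3.0, sigma=0.8, size=n)  # assumed, for illustration
yes = (latent_wtp >= bids).astype(int)

# Logit of P(yes) on the bid amount
X = sm.add_constant(bids)
fit = sm.Logit(yes, X).fit(disp=False)
alpha, beta = fit.params

# For the linear-in-bid logit model, mean (= median) WTP is -alpha/beta
print(f"Estimated mean WTP: {-alpha / beta:.2f}")
```

The -alpha/beta formula is the standard mean (and median) WTP for the linear-in-bid logit model, following Hanemann (1984).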
As I've noted before, I've used CVM in a number of my past research projects, including this one on landmine clearance in Thailand (ungated earlier version here), this one on landmines in Cambodia (ungated earlier version here), and a still incomplete paper on estimating demand for a hypothetical HIV vaccine in China (which I presented as a poster at this conference).
The CVM has faced a number of critics over the years. The criticisms essentially come down to whether the estimates the method provides are fit for purpose. That is, does the CVM actually measure the values that it sets out to measure?
The most recent contributions to the CVM debate were published in the latest issue of Applied Economic Perspectives and Policy, with the concluding paper titled "Interesting questions worthy of further study: Our reply to Desvousges, Mathews, and Train’s (2015) comment on our thoughts (2013) on Hausman’s (2012) update of Diamond and Hausman’s (1994) critique of contingent valuation". Before I get to that paper, though, it's worth backtracking a bit to earlier stages of the debate.
Jerry Hausman (MIT) has been one of the staunchest critics of the CVM, and re-sparked the debate with this 2012 article in the Journal of Economic Perspectives (ungated version here). In the article, Hausman reiterates a number of his earlier critiques, noting three long-standing problems with the CVM:
1) hypothetical response bias that leads contingent valuation to overstatements of value; 2) large differences between willingness to pay and willingness to accept; and 3) the embedding problem which encompasses scope problems.

He further argues that:
respondents to contingent valuation surveys are often not responding out of stable or well-defined preferences, but are essentially inventing their answers on the fly, in a way which makes the resulting data useless for serious analysis.

The hypothetical response bias arises because survey respondents are being asked hypothetical questions - usually about their willingness-to-pay for goods that are not traded in markets (and would perhaps be funded through a small increase in taxes), or for goods that do not yet exist. Because the questions are hypothetical, respondents have little incentive to answer in a way that is consistent with what they would do if actually faced with the choice.
The differences between willingness to pay (WTP) and willingness to accept (WTA) are common in CVM studies (e.g. my co-authors and I found this difference in a study of landmine clearance in Thailand (ungated earlier version here)). For rational decision-makers, the willingness-to-pay to receive a benefit and the willingness-to-accept a payment in exchange for forgoing that same benefit should be the same. But it turns out that WTP is generally lower than WTA. Many argue that this is consistent with quasi-rational decision-making: loss aversion leads us to be willing to pay less for something we don't have than we would be willing to accept to give up that same item - an endowment effect.
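To see how loss aversion can generate the gap, here is a stylised numerical sketch. It assumes a piecewise-linear value function (gains valued at face value, losses scaled up by a loss-aversion coefficient), and that both the good and the money change hands relative to the status-quo reference point - the numbers are illustrative only:

```python
lam = 2.25   # loss-aversion coefficient (Tversky and Kahneman's classic estimate)
g = 100.0    # the (reference-independent) value of the good

# Buying: the good is a gain, the payment p is a loss: g - lam * p = 0
wtp = g / lam          # 44.44
# Selling: the compensation a is a gain, the good is a loss: a - lam * g = 0
wta = lam * g          # 225.00

print(wtp, wta, wta / wtp)  # WTA/WTP = lam**2, about 5 here
```

Under these (strong) reference-point assumptions the WTA/WTP ratio is the loss-aversion coefficient squared; under milder assumptions the predicted gap is smaller, but the direction is the same.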
Scope problems arise when you think about a good that is made up of component parts. If you ask people how much they are willing to pay for Good A and how much they are willing to pay for Good B, the sum of those two WTP values often turns out to be much more than what people would tell you they are willing to pay for Good A and Good B together. This issue is one I encountered early in my research career, in joint work with Ian Bateman and Andreas Tsoumas (ungated earlier version here).
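A small worked example shows why this pattern is consistent with diminishing marginal utility (a point Haab et al. make below). Assume, purely for illustration, quasi-linear utility U = m + v(q) with a concave valuation function v over environmental quality q, where Goods A and B each raise q by one unit:

```python
import numpy as np

v = np.sqrt   # any concave v will do; sqrt is an assumption for illustration
q0 = 1.0      # baseline quality

wtp_a = v(q0 + 1) - v(q0)    # WTP for Good A alone: ~0.41
wtp_b = v(q0 + 1) - v(q0)    # WTP for Good B alone (symmetric): ~0.41
wtp_ab = v(q0 + 2) - v(q0)   # WTP for A and B together: ~0.73

print(wtp_a + wtp_b)  # ~0.83: the parts sum to more than the whole
print(wtp_ab)         # ~0.73
```

Because v is concave, the second good is worth less once the first is already provided, so the separate WTP values necessarily add up to more than the joint WTP.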
Fast-forward to 2013, and this article in Applied Economic Perspectives and Policy, by Timothy Haab (Ohio State), Matthew Interis and Daniel Petrolia (both Mississippi State), and John Whitehead (Appalachian State). Haab et al. respond to each of Hausman's critiques. On the first, they note that current approaches reduce hypothetical bias using a range of methods, including asking respondents to sign an 'oath' before responding to the survey. On the WTP-WTA difference, they note (as I did above) that the existence of endowment effects makes these differences consistent with behavioural economic theory. And on scope problems, they note that scope effects are consistent with diminishing marginal utility (the WTP for Good A depends on whether Good B is already provided or not) and with substitution between market and non-market goods. Haab et al. conclude:
in direct response to Hausman’s selective interpretation of the literature, we believe that the overwhelming amount of evidence shows: (1) the existence (or nonexistence) of hypothetical bias continues to raise important research questions about the incentives guiding survey responses and preference revelation in real as well as hypothetical settings, and contingent valuation can help answer these questions; (2) the WTP-WTA gap debate is far from settled and raises important research questions about the future design and use of benefit cost analyses in which contingent valuation will undoubtedly play a part; and (3) CVM studies do, in fact, tend to pass a scope test and there is little support for the argument that the adding up test is the definitive test of CVM validity.

And so to the latest contributions to the debate. This paper (sorry, I don't see an ungated version) by William Desvousges and Kristy Mathews (both consultants) and Kenneth Train (University of California, Berkeley) responds to Haab et al., arguing against a number of specific statements in the Haab et al. paper. Desvousges et al. argue that they are highlighting "the limitations of current approaches to guide future research".
Finally, Haab et al. respond (in the paper with the beautifully long title cited above, no ungated version available that I can see), noting that their responses in the earlier piece were based on 'best' practice, not current practice. That is a bit of an indictment of current CVM practice - if 'best' practice is known but not currently followed, then questions would rightly be raised about the reliability of the results of CVM studies.
However, I'm not convinced that in all cases 'best' practice has yet been identified. As Haab et al. note, these issues (especially hypothetical bias and scope) are interesting, and worthy of further study - especially since at least one of my PhD students will be using CVM in their current research.