Some years ago, when I was relatively new to the area of discrete choice experiments (DCEs), I had an interesting discussion with my colleague (and DCE guru) Ric Scarpa. In DCEs, research participants are presented with a series of choices between (usually hypothetical) products (for an example from my own research, with Ric Scarpa and others, see here). Each product is made up of a number of characteristics (e.g. price, quality, or colour) that are referred to as attributes.
Anyway, my conversation with Ric was about what is referred to as 'attribute non-attendance'. That is where, when presented with a choice between two (or more) alternatives, a research participant ignores some of the attributes when making their choice. Attribute non-attendance is problematic, because the conventional models for analysing DCEs assume that research participants are paying attention to all of the attributes. Methods had evolved to deal with this issue, but they often relied on debriefing each research participant and simply asking them whether there were any attributes they didn't pay attention to. Of course, research participants often don't know or can't accurately recall which attributes they ignored, and their attendance may vary across choices: they may attend to some attributes in one choice task and to different attributes in another.
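To see why non-attendance matters for the conventional models, here is a minimal sketch in Python (the attribute values and coefficients are all invented for illustration). In a standard conditional logit model, each attribute carries a weight (coefficient), and a respondent ignoring an attribute is equivalent to that weight being zero, which shifts the implied choice probabilities:

```python
import numpy as np

# Hypothetical illustration: conditional logit choice probabilities for one
# choice task with two alternatives and three attributes (price, quality, colour).
# A standard model assumes the respondent weighs every attribute (beta_full);
# non-attendance to price amounts to a zero weight on that attribute (beta_na).

X = np.array([[3.0, 4.0, 1.0],   # alternative A: price, quality, colour dummy
              [2.0, 2.0, 0.0]])  # alternative B

beta_full = np.array([-0.5, 0.8, 0.3])  # full attendance
beta_na   = np.array([ 0.0, 0.8, 0.3])  # price ignored (non-attendance)

def choice_probs(X, beta):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j), with V = X @ beta."""
    v = X @ beta
    ev = np.exp(v - v.max())  # subtract max for numerical stability
    return ev / ev.sum()

print(choice_probs(X, beta_full))  # probabilities if all attributes are used
print(choice_probs(X, beta_na))    # probabilities if price is ignored
```

Fitting the full-attendance model to choices actually generated by the non-attendance weights would bias the estimated coefficients, which is why the issue can't simply be ignored.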
In my conversation with Ric, I was trying to argue that we could use eye-tracking technology to more systematically uncover research participants' attribute non-attendance behaviour. My theory was that, if a research participant spends more time looking at a particular attribute, then they are giving it more careful thought and attention, and that would indicate that there is no non-attendance towards that attribute. Anyway, it turns out that, as with many of my cool research ideas over the years, someone else had already thought of it. Sure enough, research papers using eye-tracking to control for attribute non-attendance started to appear in the research literature soon after that.
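For the record, a crude version of that dwell-time idea is easy to write down. This sketch (in Python, with hypothetical dwell times and an arbitrary threshold, not anyone's published method) flags an attribute as non-attended when a respondent's share of total dwell time on it falls below a cut-off:

```python
import numpy as np

# Rough sketch of the dwell-time idea (thresholds and data are invented):
# flag an attribute as 'non-attended' for a respondent if their share of
# total dwell time on that attribute falls below some cut-off.

attributes = ["price", "quality", "colour", "origin"]

# Hypothetical total dwell times (seconds) per attribute for one respondent,
# summed over all of their choice tasks
dwell_seconds = np.array([14.2, 9.8, 0.6, 7.1])

dwell_share = dwell_seconds / dwell_seconds.sum()
THRESHOLD = 0.05  # an arbitrary 5% cut-off; a real study would calibrate this

for name, share in zip(attributes, dwell_share):
    status = "non-attended" if share < THRESHOLD else "attended"
    print(f"{name:8s} dwell share = {share:.2f} -> {status}")
```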
However, there is another aspect to eye-tracking which I didn't consider, and which is covered in this 2017 article by Kelvin Balcombe (University of Reading) and co-authors, published in the Journal of Economic Behavior and Organization (ungated earlier version here). Balcombe et al. report on a DCE designed to identify people's preferences for country-of-origin labelling of meat in the U.K. (and the particular context they used was labelling of pepperoni pizzas). However, their research questions also made use of eye-tracking:
Question 1: Do ‘higher value’ (more attractive) attributes attract more visual attention (ceteris paribus)?
Question 2: If individuals or groups have higher visual attention (relative to other attributes) can we infer that those individuals or groups value that attribute more highly than other attributes?
Question 3: If one individual has higher visual attention towards a particular attribute (relative to other individuals) can we infer that this individual values that attribute more highly than other individuals?
They measured visual attention in terms of 'dwell time' - that is, how long each research participant spent looking at each attribute. Based on a sample of 100 research participants (which seems small, but remember that each participant is making lots of choices - 24 each in this case, for 2,400 choice observations in total), they find that:
...there is, overall, a reasonable correspondence between ET data and other measures of attribute use such as the frequently employed debriefing questions that have become widely reported in the literature. But, our results confirm once again, that at the individual level stated attendance is a very weak signal in relation to visual attendance and vice versa... we find evidence of longer engagement with high value attributes, as measured by total dwell time as well as total number of fixations. This relationship exists, but it is quite weak. This result bolsters existing work that suggests that ET data does reveal something about how respondents value the attributes used in a specific DCE.
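That 'quite weak' relationship is the kind of thing that can be checked by correlating each respondent's relative dwell time on an attribute with their estimated relative valuation of it. Here is a hypothetical sketch of such a test in Python (everything is simulated, with a deliberately weak built-in link, purely to show the mechanics rather than to reproduce Balcombe et al.'s analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration of Questions 2/3: do respondents who look longer
# at an attribute also value it more? Both quantities are simulated here.

n_respondents, n_attributes = 100, 4

# Simulated per-respondent attribute importances (e.g. |part-worth| estimates)
importance = rng.gamma(shape=2.0, scale=1.0, size=(n_respondents, n_attributes))

# Dwell times: weakly related to importance, plus plenty of noise
dwell = importance + rng.gamma(shape=4.0, scale=1.0, size=importance.shape)

# Convert both to within-respondent shares before correlating
dwell_share = dwell / dwell.sum(axis=1, keepdims=True)
importance_share = importance / importance.sum(axis=1, keepdims=True)

r = np.corrcoef(dwell_share.ravel(), importance_share.ravel())[0, 1]
print(f"correlation between dwell share and importance share: {r:.2f}")
```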
So, it appears that eye-tracking data doesn't only reveal information about attribute non-attendance, but may also be used to extract additional information about how much the research participants value each of the attributes (or, at least, to provide more precise estimates of the value of each attribute). However, as Balcombe et al. note, using eye-tracking is cumbersome and expensive. Is the extra cost and trouble worthwhile? They conclude no:
...if the purpose of generating ET data is to improve the efficiency of estimation then we would recommend increasing sample size as a better strategy to pursue.
In other words, you'd be better off putting your scarce research funds towards recruiting more research participants, rather than adding eye-tracking technology to the battery of tools you employ. Anyway, it was interesting to see where this research idea has gone (the paper gives a good update on the state of the literature, at least as it stood in 2017). It's a pity I wasn't five or ten years earlier in thinking of it!