It’s a typical afternoon, which means that you’re on Facebook instead of doing whatever it is you’re supposed to be doing. You notice an ad from a company you follow—say, Chevrolet—offering you a chance to win a car if you sign up for a newsletter. Meanwhile, in a parallel universe—where, yes, you’re also on Facebook—the same promotion from Chevrolet pops up offering a chance to win not only a car, but also a handful of smaller prizes, including iPads, gift certificates, T-shirts, and hats. Since all else is equal, it makes sense that the second promotion would be the more enticing one: There are more prizes to win, so objectively it should be a more valuable prospect for you.
And yet, according to new research by Stanford associate professor of marketing Uzma Khan and graduate student Daniella Kupor, it turns out that precisely the opposite is true. The promotion offering only the car will seem more valuable, and as such is the one you’re more likely to enter. So, what’s going on here?
Across a series of experiments, Khan and Kupor found that the addition of smaller prospects (winning those iPads and T-shirts) actually makes the larger prospect (winning the car) seem less likely. And since it seems less likely that you’ll win that car, the whole promotion appears less valuable to you. In short, the common marketing practice of throwing the kitchen sink into a promotion in order to make it seem more valuable is counterproductive.
The “value atrophy” phenomenon
This effect, which Khan calls “value atrophy,” is rooted in the complex interplay between our perceptions of size and likelihood. Previous research has shown that we are good at contrasting the size of outcomes in a given context—in our promotion example, knowing that the car is a larger win than an iPad—but rather poor at instinctively understanding probabilities. Khan’s research systematically documents a link between the two: we believe that larger outcomes, such as winning a car, are less likely than smaller outcomes, such as winning a T-shirt. That belief makes sense, and is perhaps even accurate in most cases.
“It certainly seems true that smaller things are more likely,” Khan says. “Think about a lottery: There are a hundred hats and a hundred mugs, but only one Ferrari. I don’t know any jackpot winners, but I know plenty of people who win the smaller things.” Yet her findings illustrate that this belief that larger outcomes are less likely becomes a cognitive shortcut that we overapply when we assess the value of risk-reward outcomes.
There are two key elements to understanding value atrophy. The first is that the effect arises only when smaller prospects are added to a larger one. The second is that the phenomenon occurs only in probabilistic contexts, where the outcome is uncertain. Essentially, the contrast created by placing the smaller outcomes next to a larger one not only makes the larger outcome seem even larger than it would on its own, but also makes it seem even more unlikely. As a result, our overall impression of the value of the whole proposition gets skewed.
Wide-ranging implications
What’s even more interesting, and suggests that this finding has potentially wide-ranging implications, is that value atrophy occurs in both positive and negative scenarios. That is, it makes objectively more dangerous outcomes appear less dangerous, just as it makes more beneficial outcomes appear less beneficial.
In one study, people were asked to imagine that they were headed on a trip to Guatemala and were thinking about buying travel insurance. They then had to decide how much they would pay for two different types of insurance: one that covered the cost of treating serious injury while abroad, and one that offered the same level of coverage plus the costs of treating minor colds and flu. The results showed that people were willing to pay more for the insurance that covered only serious injury. In other words, the mere addition of the smaller prospects to the larger one actually reduced people’s willingness to pay for what is objectively a better product.
In another experiment, participants were told about a new drug that could help treat hypertension, albeit with some side effects. Half of the participants were told that the drug may increase the likelihood of cancer, whereas the other half were told that the drug may increase the likelihood of cancer, dizziness, cold hands and feet, asthma, tremors, and/or insomnia. Rationally, the drug with many possible side effects is more dangerous than the drug with the single possible side effect, yet participants judged the drug with multiple side effects to be less threatening, a belief that works against their own interests.
The health-care realm, Khan believes, might be the one area in which her findings have the most impact, yet it’s easy to spot the double-edged sword here. Laws designed to help consumers by informing them of every possible side effect of a potentially harmful drug can actually play into the hands of pharmaceutical companies, ultimately distorting the overall picture of the risks involved. Or consider the insurance industry, built on highly sophisticated pricing of risk, where people could unwittingly expose themselves to substantial danger: a homeowner may be less likely to purchase home insurance if providers describe the minor incidents that can happen alongside the serious damage that might occur.
Still, Khan sees a world of good that can come from this finding. Think of public-health awareness campaigns, for instance. A targeted antismoking message of “Smoking causes lung cancer” would be much more effective than one that provides a detailed rundown of every negative consequence of lighting up.
It is at the policy level that Khan hopes her findings will be best understood and implemented. “If you’re a health agency and you want people to make informed decisions by giving them all this information about all the risks involved,” Khan says, “you’re actually reducing how risky people think that drug is going to be. If policy makers [understood] this, they would mandate disclosure of information in a way that helps consumers rather than hurts them.”
This piece was originally published by Stanford Business and is republished with permission. Follow them @StanfordBiz.