Consider the following situations:
A surgeon creates an online profile of her qualifications and experience for potential patients, but she does not include her patient mortality rates. After one of her patients dies on the operating table, a lawyer for the patient’s family discovers that the surgeon deliberately withheld the mortality data.
You are eating at a restaurant and see a rat scurry across the floor. As you are leaving the restaurant, you realize that the restaurant did not visibly post information about its health code compliance and violations.
As consumers and citizens in the internet era, we have access to more information than ever when making purchases and other choices that affect our health, safety, and well-being. On websites like Yelp, TripAdvisor, and Amazon, we can sift through hundreds of reviews of a single restaurant, hotel, or item for sale; and organizations and individuals now take it upon themselves to provide a bevy of data about their services and products online.
Theoretically, marketers could take advantage of this information glut to withhold facts and figures they’d prefer we didn’t see. The surgeon could be motivated to hide higher-than-average mortality rates that could drive away patients. The restaurant might conceal information on hygiene ratings that could repulse customers.
As these examples show, sometimes what is not said is at least as important as what is said. But how do consumers react when marketers withhold information that would be relevant to their decisions?
We suspected that consumers would be deaf to marketers’ silence. Indeed, in a series of experiments, we found that, overall, the 1,700 people in our samples were much too forgiving of deliberately undisclosed information. This runs counter to the norms of rationality established by a large body of economic research over recent decades.
How Information Can Unravel
Game theorists have long studied how the “game” of communication and disclosure should play out between consumers and marketers, and how consumers should interpret missing information (i.e., information that is withheld by marketers). They have theorized that, through a process known as information unraveling, consumers will catch on when information is being withheld from them and make smart inferences about what that means.
Here’s how information unraveling might work: Suppose that doctors’ patient mortality rates are classified on a five-point scale ranging from the very worst (one star) to the best (five stars) in the country, and these are displayed on their online profile. If a surgeon withholds her mortality rate, what should the patient infer? Logically, that the surgeon has less than a five-star rating; otherwise, she would have disclosed it. Might the surgeon have earned four stars? According to game theorists, no. If everyone with five stars will disclose, then anyone who has not disclosed must have between one and four stars. Those with four stars therefore will disclose, since they are the best of that range and want consumers to know this. By the same argument, those with three stars will also disclose. The unraveling is now clear: surgeons with five stars to two stars will disclose. So, the theory goes, every patient will know that a surgeon who does not disclose has one star.
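For readers who want to see the logic spelled out, here is a minimal sketch (ours, not part of the original research) that simulates the unraveling step by step. The five-star scale comes from the example above; the assumption that consumers infer the average rating of whoever remains silent, and that a provider discloses whenever her true rating beats that inference, are illustrative simplifications.

```python
# Minimal sketch of the unraveling argument (illustrative assumptions, not the study's method):
# ratings run from 1 star (worst) to 5 stars (best); consumers guess that a non-discloser
# has the average rating of everyone still silent; a provider discloses if her true rating
# beats that guess.

def unravel(ratings=(1, 2, 3, 4, 5)):
    nondisclosers = set(ratings)  # start with everyone staying silent
    while True:
        # consumer's inference about any provider who stays silent
        inferred = sum(nondisclosers) / len(nondisclosers)
        # providers whose true rating beats the inference prefer to disclose
        defectors = {r for r in nondisclosers if r > inferred}
        if not defectors:
            return nondisclosers  # equilibrium: only these ratings stay silent
        nondisclosers -= defectors

print(unravel())  # {1}: in equilibrium, only the one-star provider withholds her rating
```

Running the loop, the five-star providers disclose first, then the four-star providers, and so on, until only the one-star provider has any reason to stay silent.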
This principle predicts that consumers will be maximally suspicious of missing information. If a smartphone manufacturer doesn’t disclose a phone’s battery life, then you probably don’t want that phone (if you care about battery life). If a restaurant doesn’t disclose its hygiene rating, you might not want to eat there (if you don’t want to get sick). And if a surgeon doesn’t disclose her patient mortality rates, you might want to consider other surgeons (if you care about reducing the risk of death).
A Surplus of Trust
These predictions, however, didn’t hold up in our study. When we put the unraveling principle to the test, we found consumers to be naïve and trusting of service providers who were clearly withholding information that was critical to their decisions.
In one experiment conducted online, we asked 493 U.S. citizens to imagine they were patients choosing a doctor based on the doctors’ online profiles, which included patient ratings of five important aspects of care: quality of care, trustworthiness, availability, bedside manner, and value. Respondents saw the average ratings for all doctors in the city on each of these aspects. Next, respondents saw an individual profile that contained only four of the five ratings. One-third of the respondents were told the missing rating had been removed randomly. (This baseline condition was designed to measure the effects of information that was merely missing and not withheld by the provider.) Another third were told the doctor had not provided the missing rating. The final third were told the doctor had refused to provide the missing rating.
When the doctor had simply not provided the information, respondents judged that doctor as being just as good on the missing rating as in the baseline condition (where a rating was randomly removed). When the doctor refused to provide a missing rating, respondents were less likely to choose the doctor, but still did not rate the doctor as poorly as they should have from a rational decision-making perspective. Overall, the respondents didn’t seem to notice or care much when critical information was withheld from them.
Next, we examined how much value consumers place on disclosure of data that would be important to their purchasing decisions. We wanted to gauge the point at which it would make sense for a seller or service provider to withhold critical information — which, in turn, would tell us whether consumers need to be protected from unscrupulous sellers.
In another online experiment, we asked 1,104 U.S. citizens how likely they would be to choose a doctor (Dr. Green) based on ratings displayed in the doctor’s profile on the same five aspects of care used in our previous experiment. For participants in the disclosure conditions, ratings on all five dimensions were available. The doctor’s trustworthiness — arguably, the most important of the five — was displayed as low (52), moderate (75), or high (99) on a scale ranging from 51 to 99. Not surprisingly, the higher the trustworthiness rating, the more likely participants were to choose the doctor.
For those in the non-disclosure conditions, the doctor’s trustworthiness rating was missing. We displayed this in different ways: either as (1) a blank space in the same row where trustworthiness ratings were indicated for other doctors, (2) the label “trustworthiness” displayed but a blank space where the accompanying rating should be, (3) the “trustworthiness” label accompanied by the phrase “doctor did not provide,” or (4) the “trustworthiness” label with the phrase “doctor refused to provide.”
Interestingly, participants in the non-disclosure conditions, who saw no trustworthiness rating, were as likely to choose Dr. Green as people in the disclosure conditions who saw that the doctor had a moderate trustworthiness score. In other words, nondisclosure was treated more like a moderate rating than a poor one. Although the type of nondisclosure influenced people’s decisions (for example, they were less likely to choose Dr. Green when they saw that the “doctor refused to provide” the trustworthiness score), even those in the “refused to provide” condition were significantly more likely to choose the doctor than those in the disclosure condition who saw that Dr. Green had a low trustworthiness score of 52.
Simply put, it was much better for Dr. Green to refuse to disclose her trustworthiness rating than to reveal a low one. This suggests that, under some circumstances, concealing even mediocre information may be the best policy for service providers — and that consumers are vulnerable to being harmed when sellers act on that incentive.
Drawing Attention to Noncompliance
In 2015, North Carolina Senator Thom Tillis suggested during a talk that a regulation requiring employees to wash their hands after using the bathroom was unnecessary. “I don’t have any problem with Starbucks if they choose to opt out of this policy,” he told the Bipartisan Policy Center, “as long as they post a sign that says we don’t require our employees to wash their hands after leaving the restroom. The market will take care of that.”
Late-night comedians had a field day with Tillis’s suggestion. However, it was somewhat in line with game-theory predictions regarding disclosure. His reasoning suggests that as long as restaurants and cafés are required to publish their hand-washing policy, there is no need for the government to mandate hand-washing. If restaurants wanted to opt out, they would just have to disclose that they opted out. And then consumers who care about hygiene would only patronize restaurants that posted clean-hands policies.
But despite the elegant and logical predictions of game theory, we can’t count on people to notice or make the correct inferences about missing information. Even if they do notice nondisclosure, it might not occur to them that the salesperson or marketer may have something sinister to hide unless the nondisclosure is strongly flagged.
Consequently, we recommend that sellers of goods and services be required to disclose any information that is relevant and valuable to consumers’ decision making, unless deliberate nondisclosure can be flagged in a simple, prominent way. Thus, a Starbucks that opts out of employee hand-washing should be required to post a prominent sign to this effect, and a surgeon should be required to disclose performance data that is critical to patient health.
Source: Harvard Business Review, “Research: Missing Product Information Doesn’t Bother Consumers as Much as It Should”