A traveler planning an overseas trip hasn't yet purchased her plane ticket, so she visits several websites that predict whether the cost of her ticket will rise or fall.
Two sites put the likelihood of an imminent price increase at 60% and 50%, respectively. Our traveler does the math and averages the two estimates to 55%.
She then tries two other websites, both of which give their predictions in verbal form: an increase, the sites agree, is "likely." From this, the traveler concludes that a price hike is "very likely."
This curious difference in how people combine numeric and verbal predictions is documented in new research, forthcoming in the journal Management Science, by Johns Hopkins University marketing professor Robert Mislavsky. In a series of eight experiments with over 7,000 participants, Mislavsky and co-author Celia Gaertig of the University of Chicago found that people shown more than one numeric probability forecast averaged the figures, settling on a number lower than the highest individual forecast, as in the 55% example above.
When participants looked at multiple forecasts in verbal form, however, they reached a conclusion more certain than any of the individual forecasts, as in the second example above, where two predictions of "likely" led to a judgment of "very likely."
Apparently, this behavior wasn't caused by any belief among the participants that an additional verbal forecast provided more new information or better guidance than an additional numeric forecast would.
Mislavsky says the researchers examined various possible explanations—such as how the participants might have perceived the forecasters' confidence and thoroughness, and whether participants relied more on intuition or reason in reaching their conclusions.
"Ultimately, we didn't find strong evidence for one particular explanation, but it's possible that the participants used some combination from among these mechanisms when they responded to the forecasts," adds Mislavsky, an assistant professor at the Johns Hopkins Carey Business School.
The two co-authors note that little research to date has examined how people form judgments by combining predictions from external sources, and that the research that does exist has focused on numeric forecasts. No previous work, they say, has studied how people combine multiple verbal forecasts.
They observe that while numeric predictions convey a sense of precision, such statements can lack "direction."
"For example, if there are two candidates in a political race, a 40% chance of winning doesn't sound promising. But if there are several candidates, the one with 40% will probably win. With a verbal forecast, if you said one candidate is 'likely' to win, that's much more clear-cut than trying to figure how a numeric forecast might apply to the situation. The verbal forecast may lack the precision of a numeric statement, but it provides clearer direction," says Mislavsky.
He adds that the new paper can benefit researchers who study advice-taking and decision-making under uncertainty, as well as anyone seeking or providing advice drawn from multiple sources.
"A growing number of platforms offer predictions, such as Kayak, Hopper, Fuelcaster, and FiveThirtyEight," says Mislavsky. "When those platforms are trying to determine how to use or present multiple forecasts, they'd all be well served to consider the different ways people combine verbal vs. numeric probabilities, as we aim to show in this study."