Joe Chidley: Seeing through the prediction addiction

These days, research has become our Nostradamus, and we don't think he's a crank.

New algorithms and big-brained supercomputers are making what was once the chaotic warp and woof of our daily lives into a predictable commodity. Online, they discover awesome new insights from our Instagram habits, combing through terabytes of data to tell us what we want for dinner and which colour of cat we'd really like to see hanging from a tree limb.

It's not just for kicks and giggles, either. Armed with unprecedented computing power and costly advanced degrees, today's medical researchers are making breakthrough after breakthrough — predicting how long we will live if we do or do not do yoga, figuring out how bad almost everything we eat is for us, and developing new treatments that might keep us alive, or at least free from toe fungus and erectile dysfunction, for a little while longer.

In short, we no longer worry about what will happen tomorrow or when we'll die: data science and an ever-growing armada of experts are there to let us know.

Investors are familiar with the awesome power of prediction. Markets are, after all, mechanisms of forethought, not afterthought. We're constantly looking for new opportunities, latent trends and developing storylines we can put our dollars behind, either to get in before others or to ride a wave on its way up.

But we should also recognize this simple fact about predictions: they are very often wrong.

Recent history is rife with examples, none more painful than Brexit. Polls (including online polls) going into the referendum showed a pretty clear Remain victory, though not by a wide margin. Bookmakers (who had been more accurate than pollsters during the 2015 UK election) were even more wrong, some giving Leave a less than 10 per cent chance of winning the day before the results came in.

Both wrong. Apparently, the models pollsters used expected a bigger voter turnout — it was 72 per cent, which is decent by Canadian standards, but disappointing when compared to the Scottish referendum, which had a turnout of more than 80 per cent. Also, the weather on June 23 was bad in the south of England, where it was presumed more Remain voters live, so there's that.

Amid the Brexit fervour, you might have missed another story of predictive error: the “new old” Diet Pepsi saga in late June. Last year, parent company Pepsico launched its other “new” Diet Pepsi, one that was made with sucralose instead of aspartame, which some medical researchers say might cause cancer in lab rats. That launch followed two years of intense consumer research, according to which a solid majority of original Diet Pepsi drinkers said they liked the new version just fine. So Pepsico took the old DP off the market and launched the new one last August.

Guess what? Sales plummeted by more than 10 per cent in the first quarter of 2016. Now, the company is relaunching the old aspartame-y pop.

Lest you think the hallowed halls of academe are free from error, they are not. Stanford University's John Ioannidis, who is a researcher of researchers of sorts, claims that not just some, but most published research papers are just wrong. (One of the reasons is what he calls publication bias – the tendency for scientific journals to review and publish articles that make the most radical claims.)

To be fair, in politics as in medicine, we tend to notice more when the forecasters get it wrong than when they get it right, which they often do. On the other hand, we only know they got it right after the fact. Before the fact, the public, the media, politicians and investors tend to put stock in the predictions, especially when the predictions are in line with their own expectations.

The media plays a role in this believability, too. They dutifully report poll results as indicators of whether a politician is on the rise or decline, sometimes on a daily basis, as if the data mean something more than noise within the poll's margin of error, which itself is often misread. (Do you know what it means that a poll will be accurate to plus or minus three percentage points, 19 times out of 20?)
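For the curious, that "plus or minus three points, 19 times out of 20" claim comes from the standard 95 per cent confidence interval for a sampled proportion. A minimal sketch, assuming a simple random sample (the function name and the sample size of 1,067 are illustrative, not from any particular poll):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion p
    estimated from a simple random sample of n respondents.
    z = 1.96 corresponds to 95% confidence, i.e. "19 times out of 20"."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of roughly 1,000 respondents splitting 50/50 yields the
# familiar "plus or minus three percentage points":
moe = margin_of_error(0.5, 1067)
print(round(moe * 100, 1))  # about 3.0 points
```

Note that the interval says nothing about non-sampling errors – bad turnout models, unrepresentative samples, or respondents telling pollsters what they want to hear – which is exactly where the Brexit polls went astray.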

We saw this with Brexit. In the run-up to the referendum, global and U.K. stocks surged along with the polls, some of which showed a six- or seven-point lead for Remain. Given the generally Remain-heavy bias of the financial community – from central bankers to heads of investment firms — it's perhaps not surprising that they would put so much faith in a result they themselves wished for.

Respondents in surveys do the same thing: if they answer at all (more and more don't, in our survey-saturated culture), many will give the answer they think the questioner wants to hear or that they themselves want to believe. But agreeing that a sucralose-sweetened pop tastes good in a focus group is one thing; actually buying and drinking the stuff is another.

For investors, what the polls say, what the latest research suggests, and what that dynamic chief exec sees for the future are all important and potentially valuable bits of information. But despite new technologies and our general faith that we're getting better at knowing where the world is headed, they are not infallible.

Whenever there's a big event that will have an impact on markets, or that company you own shares in is about to launch a new product, the inherent risk is that things will not play out the way everyone expects — and the smart move may be to hedge your bets.
