
Book interview: Paul Goodwin, ‘Forewarned’ – when guesswork isn’t enough

Image credit: Getty Images

Whether it’s the result of a US presidential election, the referendum on Brexit, the launch of a new product or simply the weather, we could all do with being better forecasters. Author Paul Goodwin tells us how to do that.

“Forecasters have ended up with a bad name in recent times,” says Paul Goodwin, author of ‘Forewarned’, a book that, as its title suggests, is all about prediction. But it’s nothing to do with astrology or futurology. The University of Bath academic has produced a serious investigation into how we can come up with “well calibrated probabilities based on what we know”.

For Goodwin, who has written academic books on the topic as well as countless peer-reviewed papers, this is a step into the territory of ‘popular science’. And a fascinating read it is too, not least because there’s a sense of discovery about prediction that takes us away from reading tea leaves and into the world of big-data analysis, which, as we quickly discover, has statistical strengths and weaknesses. As indeed does our intuition.

One reason forecasters currently have a bad name is that they have been getting big events, the ones central to the public perception of forecasting, dramatically wrong. “There have been various shock election results and there have been weather forecasts that have gone wrong,” says Goodwin, “and so my intention with this book was to put the record straight.”

He goes on to explain that ‘Forewarned’ is essentially a consumer’s guide to forecasting, “because, although we may not realise it, we all make predictions. What I’m trying to do is separate the good from the bad in forecasting, so that people can have an idea of when they can trust a forecast, and when they should ignore it completely.” Readers can rest assured that in this they are in good hands. Goodwin is also a Fellow of the International Institute of Forecasters, where two of his colleagues have gone on to bag the Nobel Prize for Economics.

For engineering managers and technologists, a firm grasp of how forecasting works – in our professional lives at least – is indisputably desirable, especially in the fields of business performance and innovation. If we can confidently predict which products will sell into which markets, we can increase our contribution to the financial health of our business while gaining a creative advantage over the competition.

According to Goodwin, one starting point we might choose to adopt – and there are around 150 main principles in forecasting – is identifying when it is useful to rely on intuition, and when it is more profitable to turn to hard data.

“Intuition performs well in two opposed situations and that’s because the human brain has limited processing ability. Give me a complex calculation and I can’t do it in my head.” As a result, we have evolved simple problem-solving rules of thumb in the way we think, known as heuristics, which, while not necessarily very logical, seem to serve our short-term goals well. “These rules are very simple and so intuition can play a role when the environment favours the use of simple rules.” By way of example, Goodwin points to an experiment in his book in which you predict the outcome of a tennis match simply by choosing the player whose name you recognise. “I’m no tennis expert, but I’ve heard of Andy Murray and so I’ll predict that he’ll beat Fred Bloggs. And that works because better known players will be better known because they’ve won more games.”
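As a rough sketch, this is what that recognition rule might look like in Python. The set of recognised names here is invented for illustration; it simply stands in for whatever names a non-expert happens to have heard of.

```python
# Sketch of the recognition heuristic: pick the player you recognise.
# 'known_players' is an invented stand-in for personal name recognition.
import random

known_players = {"Andy Murray", "Serena Williams"}

def predict_winner(player_a: str, player_b: str) -> str:
    """Predict a tennis match using recognition alone."""
    a_known = player_a in known_players
    b_known = player_b in known_players
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    # Recognition gives no signal (both or neither known): coin flip.
    return random.choice([player_a, player_b])

print(predict_winner("Andy Murray", "Fred Bloggs"))  # -> Andy Murray
```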

There are, of course, outliers and wild cards. We all know who Eddie the Eagle is, but very few of us would be daft enough to place a bet on him actually winning anything. In general, though, using our own brains to predict something works best when there is one key piece of information “that’s much more important than anything else around. In something like this computers will get bogged down in the detail.”

The opposite environment in which intuition works well is one where a person has had practice, feedback and experience of past cases, which is essentially a description of pre-digital weather forecasting.

For making predictions in the huge middle ground between human ignorance and expertise, there are computers. This is the area in which Goodwin thinks that big data really has a role to play. “Computers are better at establishing correlations when there is a mass of data.” Of course, he says, we need to sanity-check some of the correlations (see ‘Correlations and causality’). “The current downside of big data, of course, is that it is humans who decide what data is collected, and it is humans who interpret the results suggested by the data.”

Which brings us neatly on to why the big-data analysis ahead of the recent US presidential election failed to predict victory for Donald Trump. “A lot of the analysts confessed that the way they were interpreting the data was influenced by the prevailing view that Trump wasn’t going to win. But big data is a new phenomenon, and I’m sure we will learn to use it better as time goes on.”

Even if we accept that all kinds of variables have the potential to contaminate our ability to forecast, Goodwin still believes that there are areas of forecasting where we are “pretty good at it” – anticipating human behaviour being one of them. “There is a lot of evidence to suggest that very quick, snap intuitive judgements about people have the potential to be accurate.” So does that mean that we can conduct recruitment interviews based on our ability to sum people up in a moment? Goodwin doesn’t see why not. “There’s evidence to support the idea that formal interview processes don’t work because we don’t have much practice at them and there’s no feedback. So it’s likely that more informal processes would work best.”

But surely, when it comes to things that are more objective, such as financial performance, we can make a killing with computer algorithms. “If you’re talking about the stock market, then that’s inherently unpredictable. Some tipsters say that you might as well throw darts at a printout of the Financial Times shares index page. Then, of course, what happens is someone will get on a lucky run, and everyone will think they’ve got a magical formula for predicting the market.”
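That ‘lucky run’ effect is easy to demonstrate with a simulation: give enough tipsters random guesses and a few will look prophetic. Here is a minimal sketch, with all the numbers chosen arbitrarily:

```python
# Sketch: with enough random tipsters, someone compiles a perfect streak.
# 1,000 tipsters each call the market (up/down) for 10 periods at random;
# the market itself also moves at random.
import random

random.seed(42)
n_tipsters, n_periods = 1000, 10
market = [random.choice([+1, -1]) for _ in range(n_periods)]

perfect = sum(
    1 for _ in range(n_tipsters)
    if [random.choice([+1, -1]) for _ in range(n_periods)] == market
)

# Roughly n_tipsters / 2**n_periods (about one) flawless records are
# expected by chance alone, with no skill involved anywhere.
print(f"{perfect} of {n_tipsters} tipsters called all {n_periods} moves")
```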

Making proper predictions and forecasts has nothing to do with magic or luck. As Goodwin says, it’s about well-calibrated probabilities.
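Goodwin doesn’t spell out the arithmetic in the interview, but ‘well calibrated’ has a concrete meaning: across all the occasions a forecaster says ‘70 per cent’, the event should occur roughly 70 per cent of the time. A minimal Python sketch of that check, using invented forecast data:

```python
# Sketch of a calibration check: bucket forecasts by stated probability
# and compare each bucket with the observed frequency of the event.
# The (stated probability, outcome) pairs are invented illustration data.
from collections import defaultdict

forecasts = [(0.7, 1), (0.7, 1), (0.7, 0), (0.3, 0), (0.3, 1),
             (0.9, 1), (0.9, 1), (0.1, 0), (0.1, 0), (0.3, 0)]

buckets = defaultdict(list)
for stated, outcome in forecasts:
    buckets[stated].append(outcome)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"said {stated:.0%}: happened {observed:.0%} of {len(outcomes)}")
# For a well-calibrated forecaster the two percentages roughly agree.
```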

‘Forewarned: A Sceptic’s Guide to Prediction’ by Paul Goodwin is published by Biteback Publishing, £12.99

We read it for you: ‘Forewarned’

Paul Goodwin’s ‘Forewarned’ is a highly entertaining consumer’s guide to prediction, based on the latest scientific research. But given the apparent frequency of unforeseen financial crashes, shock election results and weather events, are we really justified in putting our faith in professional forecasters? Do they tell you all they know, and do they believe what they tell you?

Goodwin addresses these questions as well as presenting a case for when your gut instinct could be more accurate than computer modelling (and when it isn’t). He also writes on the nature of forecasting itself, and on why we can make reasonable predictions based on data and knowledge.

‘Forewarned’ is one of those books that will leave you far better informed than you were before you picked it up. A perfect Christmas present for the engineer in your household.

Extract: Correlations and causality...

Inferring reliable predictions from the juggernauts of data now commonly available is far beyond the processing capacity of human judgement. When confronted with such volumes of data we have to resort to examining only tiny sub-samples of it. Even then our inherent biases would be likely to distort our perception of these small chunks of information. This is the territory where computers can win. Their ultrafast processors can zip through huge datasets uncritically looking for correlations between variables. They are always consistent and they never get tired, bored or emotional. But their lack of knowledge of the world can be both their strength and their weakness.

It can be a strength because their blindness can throw up correlations that we would never have expected or even thought of investigating. Data analysed by a San Francisco company suggested that orange used cars are more reliable than used cars in other colours. A US online lender found that people default on loans more often when they complete their loan application forms using only capital letters. Such relationships are potentially useful when making predictions.

But there’s a danger – just because two things are correlated it doesn’t necessarily follow that one is causing the other. There’s a strong correlation between Brazil’s population in each year since 1945 and the average cost of a train journey in Britain in these years. But that doesn’t mean that I can blame Brazilians when I get to the station to discover fares have risen again. Neither would I be justified in campaigning to ban ice cream because a spike in sales correlated with an increase in deaths by drowning.

Edited extract from ‘Forewarned: A Sceptic’s Guide to Prediction’ by Paul Goodwin, reproduced with permission
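The Brazil/train-fares example is easy to reproduce in spirit: any two series that both trend upwards over time will correlate strongly, causal link or not. Here is a short sketch with synthetic data standing in for the two series (it needs Python 3.10+ for statistics.correlation):

```python
# Sketch: two unrelated upward-trending series correlate strongly.
# Synthetic stand-ins for Brazil's population and UK train fares, 1945 on.
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(1)
n_years = 71  # 1945-2015
population = [50 + 2.0 * t + random.gauss(0, 3) for t in range(n_years)]
train_fare = [1 + 0.05 * t + random.gauss(0, 0.5) for t in range(n_years)]

r = statistics.correlation(population, train_fare)
print(f"Pearson r = {r:.2f}")  # close to 1.0, yet neither causes the other
```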
