Here we have another book that drew my attention at Gardermoen (I was actually out to get a copy of “Getting To Yes” for my son, but alas, that one was sold out at the time) and one that turned out to be very worthwhile, and more relevant to the safety profession than some might think at first glance.

Taking all the praise on the back cover with a bucket of salt, I started reading with healthy scepticism. I have to admit that the style of writing manages to drag one along easily (personally, I was already hooked by the quite interesting Introduction, which tells how the invention of the printing press led into the information age and thereby created the possibility to mass-produce errors), and it turns out that statistician and political forecaster Silver is a lot more humble than one might expect from the praise heaped upon him by the people mentioned on the back cover.

As the subtitle says, this book is about “The Art And Science Of Prediction”, which sounds closely related to the topic of risk intelligence (or, for those who don’t like the term: “dealing successfully with risk”). Even better, the first paragraph of the Introduction tells us that the book is about “the things that make us smarter than any computer” and “how we learn, one step at a time”. Silver also cautions about Big Data. There is promise, but there are also major pitfalls - the signal and the noise from the book’s title. One problem is that these days the noise is increasing faster than the signal. And even though people love to predict things, they are not very good at it. One thing to keep in mind is that we can never make perfectly objective predictions; we will always bring in some kind of subjective point of view. So we must think about our ideas and how to test them.

The book is more or less split in two. The first half deals with various prediction problems and their underlying causes, while the second half explores Silver’s proposed solution: a Bayesian approach.

Assumptions

The first chapter reviews the financial crisis and how prediction errors facilitated the crash. Assumptions in models can have MAJOR impacts. People often have a naïve trust in models and fail to realize how sensitive they are to the choice of assumptions - with disastrous results. One common cause is that people focus on the signals that tell a story about the world as we would like it to be, not as it really is. Decisions and assessments are based on models that are cruder than we realize, and then we ignore the uncertainty even though it is an essential part of the problem we are trying to solve.
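To make that fragility concrete, here is a minimal sketch in the spirit of the mortgage-pool example Silver discusses. All the numbers are my own illustrative assumptions, not Silver’s: a security fails only if all five underlying mortgages default, and the only thing that changes between the two models is the independence assumption.

```python
# Toy sketch (illustrative numbers, not Silver's): how one assumption --
# that mortgage defaults are independent -- changes the rated risk of a
# security that fails only if all five pooled mortgages default.

p_default = 0.05   # assumed marginal default probability per mortgage
n = 5              # mortgages pooled into the security

# Model A: defaults treated as independent.
p_fail_independent = p_default ** n

# Model B: same 5% marginal probability, but partly driven by a shared
# housing-market shock, so defaults are correlated.
p_crash = 0.04             # assumed probability of a market-wide crash
p_default_in_crash = 0.50  # assumed default rate during a crash
# Calibrate the calm-market default rate so the marginal stays at 5%:
p_default_calm = (p_default - p_crash * p_default_in_crash) / (1 - p_crash)

p_fail_correlated = (p_crash * p_default_in_crash ** n
                     + (1 - p_crash) * p_default_calm ** n)

print(f"independent model: {p_fail_independent:.2e}")
print(f"correlated model : {p_fail_correlated:.2e}")
print(f"risk understated by a factor of {p_fail_correlated / p_fail_independent:,.0f}")
```

With these made-up numbers the independence assumption understates the risk by a factor of about 4,000 - exactly the kind of fragility the chapter is about.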

The second chapter deals with political prediction (in the USA, of course). There is a good discussion of foxes and hedgehogs here, with a nice table on page 54 (of my paperback version) that lists the attitudes of foxes and hedgehogs. Finally, the start of a list of what makes some people more successful in dealing with risk (think like a fox). Silver is a statistician, so it’s no surprise that he is fond of quantitative data, but he acknowledges the importance of qualitative data just as much. A great quote from page 72: “Whether information comes in quantitative or qualitative form is not as important as how you use it”. And of course: know your limitations. Other lessons to take away from this chapter: don’t be afraid to change your opinion and predictions if the facts call for it, and you will profit from multiple views about a problem.

Just a thought on the side… I wonder how these political predictions would fare in a ‘proper’ democracy with more than two parties to choose from (even worse if some of them weren’t around last time)…

Baseball and weather

Chapter 3 is about baseball, and while I find that a fun game to play (and sometimes watch), I must confess that reading about baseball statistics is even more boring than reading about politics. The main thing to take from this chapter is probably that you need both statistics AND humans to do a better job, but we basically concluded that already in the previous chapter, and the next chapter, about weather forecasting, tells us much the same. But hey, it appears to be an American disease that they have to write about one of their obscure sports in almost every book. What I really hated about “The Black Swan” were the (seemingly) endless references to Yogi Berra. Just saying.

Actually, Silver slam-dunks his point about the combination of statistics and humans in chapter 4, about weather forecasting. It’s rather interesting to read about computing power and the need for it, and a bit of chaos theory. Still, humans manage to add a significant improvement to forecasts (up to 25%), simply because we can see and computers can’t. At the end of the day, good old-fashioned eyesight and the ability to recognize patterns trump awesome computing power (an everyday example of this phenomenon is the CAPTCHA technology used to tell humans and spam bots apart).

Quite interesting, and to a certain degree funny, is the revelation of why it’s sometimes more profitable or wise for weather forecasters to give inaccurate or more pessimistic predictions (an unexpectedly fine day is no problem, while an unexpected rain shower is perceived much worse…). Incentives other than accuracy also play a role in this field!

Overfitting

While chapter 4 dealt with what Silver perceives as one of the success stories of prediction, weather forecasting (something the everyday citizen may have a hard time agreeing with), chapter 5 discusses one of the greater failures: earthquake prediction. There are some good examples here about finding false signals in the noise, and a discussion of overfitting a model to existing data (which looks good on paper but can cause extreme harm in the real world). Another lesson to take from the chapter is the difference between prediction and forecasting. While it’s (still) impossible to predict where, when and how strongly an earthquake will strike, it is possible to make a probabilistic statement about the chance of an earthquake of a certain strength occurring in an area in a given period of time.
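For readers who prefer to see overfitting rather than read about it, here is a minimal sketch (my own illustration, nothing from the book): a wiggly high-degree polynomial reproduces the noisy training data almost perfectly, yet does far worse than a simple straight line on fresh data.

```python
# Overfitting in a dozen lines: the degree-9 polynomial chases the noise
# in the training data and fails on new data; the straight line ignores
# the noise and generalizes.
import numpy as np

rng = np.random.default_rng(0)
signal = lambda x: 2.0 * x                    # the underlying "signal"
x_train = np.linspace(0, 1, 12)
y_train = signal(x_train) + rng.normal(0, 0.3, x_train.size)  # add noise
x_test = np.linspace(0, 1, 200)
y_test = signal(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The in-sample fit of the degree-9 model “looks good on paper”; the out-of-sample error is where the harm shows up.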

The next chapter deals with uncertainty and the pretence of accuracy in many (economic and other) predictions. Uncertainty is seen as ‘the enemy’, as something that threatens the reputation of the experts making the prediction and so one ignores or hides the uncertainties - with disastrous consequences when problems arise. This is perfectly illustrated by the Grand Forks flooding where communication of the margin of error might have helped to prevent worse from happening.
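As I recall the chapter, the crest forecast was 49 feet against levees built to about 51 feet, with a historical margin of error of roughly plus or minus 9 feet; the error spread I use below is my own stand-in for that, so treat this as a sketch of the reasoning, not Silver’s calculation.

```python
# What communicating the margin of error buys you: a point forecast two
# feet below the levee top still leaves a sizeable chance of flooding
# once the historical forecast error is taken into account.
from math import erf, sqrt

forecast_crest = 49.0  # predicted crest in feet (as I recall the book)
levee_height = 51.0    # levee protection level in feet (ditto)
error_std = 4.5        # assumed std dev of forecast error (my stand-in)

# P(actual crest > levee) under a normal model for the forecast error
z = (levee_height - forecast_crest) / error_std
p_overtop = 0.5 * (1.0 - erf(z / sqrt(2.0)))
print(f"chance of the river overtopping the levee: {p_overtop:.0%}")
```

That comes out to roughly one chance in three - very different from the false certainty a bare “49 feet” conveys.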

Overconfidence is a major problem in predictions and decision making, and getting feedback is essential to getting better. Another challenge in making predictions is that it’s often difficult to determine cause and effect, and to see the difference between causation and correlation. Economic prediction in particular is notorious, because the relationships between different economic variables can change over the course of time.

Silver also cautions against ignoring data, especially when studying rare events. Ignoring data is often a sign of a forecaster being overconfident or trying to overfit the model. Confidence is often inversely correlated with the accuracy of a prediction. Uncertainty should be a part of any prediction.

Extrapolation and models

Chapter 7 deals with the dangers of extrapolation and overly simplistic assumptions, using misfiring flu predictions as an example. Also discussed here are self-fulfilling and self-cancelling predictions. Often the very act of prediction can alter the way people behave (an observation that John Adams also makes with regard to risk - a form of prediction, of course - where he says that a risk perceived is a risk acted upon). Sometimes we shouldn’t pretend that we can predict; in some cases it’s much better to prepare oneself for various scenarios.

But remember also: models are basically always wrong, and still they have value, because they can help us understand things. Understanding how a model is wrong, and what to do when it is wrong, helps us deal with problems in a better way.

Bayes!

Chapter 8 is titled “Less And Less And Less Wrong”. It opens the second part of the book, which deals with the way to make better predictions, a little bit at a time - hence the title. This chapter discusses a successful sports bettor and how he combines knowledge of statistics with knowledge of baseball in order to find meaningful relationships in the data (and earn a lot of money along the way). Helpful in determining whether a pattern represents noise or signal is Bayesian reasoning, which deals with how we update our probabilistic beliefs about the world when we come across new data. In a way this approach leads us closer and closer to the truth as we gather more evidence. A lot of the chapter is devoted to explaining Bayes’ theorem, with conditional probabilities and prior probability (which can even be based on a gut feeling!), by way of some examples.
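The mechanics fit in a few lines of code. The numbers below are close to the mammogram-style example Silver uses, as I recall it (a prior of 1.4%, a test that catches 75% of cancers, a 10% false-alarm rate), but take them as illustrative rather than quoted:

```python
# Bayes' theorem as a one-step update: combine a prior belief with the
# likelihood of the evidence to get a posterior belief.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence)."""
    numerator = prior * p_evidence_if_true
    marginal = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / marginal

# One positive mammogram: the posterior is only about 10%, because the
# prior (the base rate) is so low.
p1 = bayes_update(prior=0.014, p_evidence_if_true=0.75, p_evidence_if_false=0.10)
print(f"after one positive test : {p1:.1%}")

# "Less and less wrong": feed the posterior back in as the new prior.
# A second positive test pushes the belief to roughly 44%.
p2 = bayes_update(prior=p1, p_evidence_if_true=0.75, p_evidence_if_false=0.10)
print(f"after two positive tests: {p2:.1%}")
```

That feedback loop - yesterday’s posterior becomes today’s prior - is the whole point of the chapter title.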

Drawing on John Ioannidis’ famous 2005 paper “Why Most Published Research Findings Are False”, Silver then proceeds to discuss the problem of false positives and takes a critical stance towards the prevailing ‘Fisherian’ frequentist method of statistical testing in most of today’s research (which in turn led to critical reactions to the book from people who see fit to defend their statistical practices, like this article from The New Yorker). A similar point is also found in chapter 11 of Gigerenzer’s “Rationality For Mortals”, by the way. One major pitfall of this ‘traditional’ way of approaching data is that it doesn’t look at the context and does not encourage one to think about which correlations imply causation and which ones don’t.
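The arithmetic behind the Ioannidis point is simple enough to sketch. The shares below are my own assumptions for illustration, not figures from the paper or the book:

```python
# Why 'significant' findings can be mostly false: when true hypotheses
# are rare, the 5% false-positive rate applies to a much larger pool of
# hypotheses than the detection rate does.
n_hypotheses = 1000
share_true = 0.10  # assume only 10% of tested hypotheses are really true
power = 0.80       # chance that a real effect is detected
alpha = 0.05       # conventional significance threshold

true_positives = n_hypotheses * share_true * power         # 80 findings
false_positives = n_hypotheses * (1 - share_true) * alpha  # 45 findings
share_false = false_positives / (true_positives + false_positives)
print(f"{share_false:.0%} of 'significant' findings are false")  # ~36%
```

Lower the share of true hypotheses to 1% and the false findings take the majority - without anyone committing fraud.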

Technology

“Rage Against The Machines” is the title of the ninth chapter, which starts with the ‘Mechanical Turk’ and Edgar Allan Poe’s essay on the subject. Even today there is a major reverence for machines. Silver argues that we should develop a healthier attitude towards computers and what they might accomplish for us. Technology is beneficial as a labour-saving device, but we shouldn’t expect machines to do our thinking for us. Using the example of chess computers, Silver discusses heuristics (mental shortcuts) and their role in problem solving. They can both help (saving time and resources) AND hurt (producing biases and blind spots).

Like the third chapter, about baseball, the tenth chapter, on poker, is one of the ones I found not all that useful. The most interesting observation is possibly found in the second-to-last paragraph: “The irony is that by being less focused on your results, you may achieve better ones”. The next chapter (dealing with economic predictions and stock market ‘gambling’) doesn’t bring too much news either. Most interesting are probably some biases like the “winner’s curse”, herding and other cognitive illusions.

I mentioned John Adams earlier in this review; chapter 12 provides another parallel to Adams’ book, as it deals with climate change and how the evidence should be seen and thought of. Of course we have moved twenty years on since Adams’ book, and by now the scientific consensus has moved seriously in one direction. Silver advocates a healthy scepticism (which he basically does for everything - a great attitude) and discusses some of the agendas behind climate scepticism, including self-interest and contrarianism. One funny quote from page 380: “There is no reason to allege a conspiracy when an explanation based on rational self-interest will suffice”. And then there is scientific scepticism, which is dealt with in the second half of the chapter, along with the importance of uncertainty in forecasts (e.g. initial condition uncertainty, structural uncertainty and scenario uncertainty). According to the author, no theory is ever perfect: theories are always works in progress, subject to further refinement and testing.

Ignorance and knowledge

The final chapter is titled “What You Don’t Know Can Hurt You” and deals with terrorism and the well-known Pearl Harbor attack, which seemed quite predictable after the fact. Not only was the USA unprepared, they had also mistaken ignorance for knowledge and made themselves more vulnerable as a result, by taking the wrong scenario to be the most likely way of attack. But, as said before, separating signal from noise can be difficult, especially with regard to intelligence.

One underlying problem is that we easily mistake the unfamiliar for the improbable. When a possibility is unfamiliar to us, we often don’t think about it; we sort of turn a blind eye to it and mentally block it out. Interestingly, Silver then compares earthquakes and acts of terrorism and finds that they follow the same kind of power law, so 9/11 wasn’t an outlier at all - and with this in mind, the attack shouldn’t have been as unimaginable as it’s often made out to be. Our lack of familiarity with such events is a poor guide to their likelihood. (And speaking of actions against events like these, there is a nice critical passage against the “security theatre” that we have to experience whenever we take a plane.)
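To show what that power law means in practice, here is a toy version of the scaling logic. Both the exponent and the anchor frequency are my own placeholders, not values fitted in the book:

```python
# Power-law scaling of attack sizes: if attacks killing at least N people
# occur with a frequency proportional to N**(-alpha), then very large
# attacks are rare - but they sit on the same curve as the small ones,
# which is why they should not be treated as unimaginable.
alpha = 1.0            # hypothetical scaling exponent
per_year_at_100 = 1.0  # suppose attacks with >= 100 deaths happen ~once a year

def attacks_per_year(deaths):
    """Expected yearly frequency of attacks at least this deadly."""
    return per_year_at_100 * (deaths / 100) ** (-alpha)

for n in (100, 1_000, 10_000):
    print(f">= {n:>6} deaths: roughly once every {1 / attacks_per_year(n):,.0f} years")
```

On this (made-up) curve a 9/11-scale attack is an expected event every few decades, not a bolt from the blue - which is the point Silver is making.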

The final passage: “The more eagerly we commit to scrutinizing and testing our theories, the more readily we accept that our knowledge of the world is uncertain, the less we live in fear of our failures, and the more liberty we will have to let our minds flow freely. By knowing what we don’t know, we may get a few more predictions right”.

Finally, the book has an eight-page Conclusion that draws together many of the lessons discussed before.

I read the Penguin pocket edition (ISBN 978-0-141-97565-8).