I think I heard about this book in some forum and found the title highly intriguing, so I went online and acquired a copy. Alas, it was not quite what I had hoped for. For starters, the book does not live up to either part of its subtitle. There is no good and convincing explanation of why (or even that, or what) risk management is broken (if anything, I am inclined to say that this book only contributes to something breaking), nor do we get a solution for how to fix it. Let me correct that: yes, we do get a solution (namely Hubbard’s favourite way of risk assessment), but the way it is presented, I do not think it is going to solve much.
Early on in the book it became clear that the writer and I have different outlooks on risk assessment and risk management, which is just fine. Once in a while I even seek out different opinions and frameworks, just to see if I can learn something from them, and/or to reflect and improve on my approaches.
I prefer qualitative approaches in most cases because they often give you more robust results than quantitative assessments. They generally lead to better decisions with less effort, because you can draw on a greater variety of factors - especially those that are very hard (or impossible) to express in a number. That does not make me blind to the drawbacks of many qualitative methods, and neither do I deny that one sometimes should use good quantitative methods.
Hubbard appears to be more absolute, and he is clearly a big fan of quantitative and mathematical methods. Those are ‘sophisticated’, while all other approaches are ‘soft’ (which is one of the kinder descriptions he uses throughout the book). He recommends Monte Carlo simulation as the approach to risk analysis, and while I agree that Monte Carlo can be a terrific tool, I do not think it is the only way forward. A more varied toolbox would be wiser, in my opinion, especially for more operational applications, or for rough prioritization where building a full model is not the smartest way forward.
There are definitely useful parts in this book, and even some funny ones. The ‘How to Sell Snake Oil’ section on page 71 is almost worth buying the book for. The problem is that you have to sift through the material and filter out Hubbard’s biases and cherry-picking to find them. After a while I found myself skipping parts and hopping to the next section. Partly this was caused by Hubbard’s style of writing, which is not very appealing; besides, he appears to regard himself very highly. Self-confidence is good, of course, but you can push things too far. And when you spend several pages ranting about Taleb (partly on matters entirely unrelated to risk, like whether Taleb has an MBA or not), that does not improve things. By the way, Frank Knight gets the same treatment, with Hubbard pointing out what an idiot Knight was to define the words ‘risk’ and ‘uncertainty’ the way he did - instead of acknowledging the value of distinguishing between quantifiable risks and that which is non-quantifiable (even though the choice of words may have been unlucky).
But, some may say, isn’t this a book that has received mostly positive reviews on Amazon?! Firstly, I’d like to point once more to that LP titled “50.000.000 Elvis Fans Can’t Be Wrong”. You still will not find it in my collection, and never will. Secondly, I presume that these readers may have a different background (profession, competence, whatever) than I do. Maybe it actually does work for them. But allow me to point out a couple of flaws (an incomplete list, just to illustrate some points); after that you are free to get the book and make up your own mind (after all, do not believe me, judge things critically for yourself, and please come back afterwards and feel free to discuss).
Exhibit 1: As far as I can judge, Hubbard screws up already in the first example, the DC-10 crash (United Airlines 232) in July 1989. Not immediately: he is completely right that there was a common-mode failure that took out three independent hydraulic systems in one stroke, and that this probably had a larger probability of happening than three independent failures. Then, however, he fails to explain how this example demonstrates the failure of risk management (if anything, it might have been an argument against numerical assessments). Worse, he goes completely overboard, declaring that human error (the failure to detect the hairline cracks in the fan disk that destroyed the hydraulic systems) is an even more common common-mode failure, and that the use of wrong risk analysis methods is the most common common-mode failure of all. We are only on page 6 and warning signals are blaring all over the place…
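To see why a common-mode failure can dominate redundant systems, a back-of-the-envelope comparison is enough. The numbers below are entirely made up for illustration - they are neither Hubbard’s nor from the accident investigation:

```python
# All probabilities here are hypothetical, chosen only to illustrate
# why a single common-mode failure can dominate three 'independent' systems.
p_single = 1e-4                # assumed failure probability of one hydraulic system
p_independent = p_single ** 3  # all three failing independently
p_common_mode = 1e-6           # assumed probability of one event disabling all three

print(f"three independent failures: {p_independent:.0e}")
print(f"single common-mode failure: {p_common_mode:.0e}")

# The common-mode path is a million times more likely with these numbers,
# which is why 'three independent systems' can give a false sense of safety.
assert p_common_mode > p_independent
```

With these invented figures the common-mode path dominates by six orders of magnitude, and the qualitative lesson - redundancy does not protect against shared causes - survives any reasonable choice of numbers.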
Exhibit 2: Hubbard states repeatedly that the use of scoring methods and matrices is worse than doing nothing (and similar phrasings). While I am sceptical myself of most matrices, and very, very aware of their flaws and pitfalls, I must say that the book does not do a good job of explaining why they make things worse, especially when everyday experience suggests otherwise. And an anecdote about someone claiming that 5x5 matrices are useless just because they failed to include the Shuttle crashes? Seriously??
Exhibit 3: The book’s title announces that it is about risk management. Hubbard even spends time defining risk management (a good definition, too). But look closely: although his definition says otherwise, he basically equates management with analysis. Throughout the book he focuses entirely on risk analysis and says extremely little about management - that which comes after the assessment (and which, for that matter, can also be done without major assessment or analysis) to actually do something about the risk. Instead of dealing with that, he just goes on and on about analysis, as if a tool alone will save us. And, by the way, a tool in itself is just that - it is very much about how you use it. Besides, a more precise ‘measure’ of a risk does not mean better risk management.
Taleb said that it is easier (and often more useful, I might add) to know whether you are fragile than to try to predict exactly when a black swan will happen. I am therefore afraid that Hubbard’s very first example is probably the one that renders most of this book obsolete. Can a common-mode failure of three independent hydraulic systems happen? You do not need a Monte Carlo or any other analysis to calculate whether your risk is 1E-4 or 1E-7; you can just get to work after having assessed your fragility. Better analysis is NOT (or at least not always) necessary for better risk management.
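For what it is worth, the kind of calculation Hubbard advocates is not hard to sketch. The toy Monte Carlo simulation below is my own illustration with invented per-flight probabilities, not a model from the book; it estimates the chance of losing all hydraulics on a flight - and whichever digit comes out, the sensible response is still to address the fragility itself.

```python
import random

# Toy Monte Carlo sketch; all probabilities are invented for illustration.
P_SINGLE = 1e-3  # assumed independent failure probability per hydraulic system
P_COMMON = 1e-4  # assumed probability of a common-mode event (e.g. a disk burst)

def flight_loses_hydraulics(rng: random.Random) -> bool:
    """Simulate one flight: a common-mode event, or three independent failures."""
    if rng.random() < P_COMMON:
        return True
    return all(rng.random() < P_SINGLE for _ in range(3))

rng = random.Random(42)          # fixed seed for reproducibility
trials = 1_000_000
losses = sum(flight_loses_hydraulics(rng) for _ in range(trials))
print(f"estimated loss probability: {losses / trials:.1e}")
```

The estimate lands near the assumed common-mode rate of 1E-4, because the independent-failure path (about 1E-9) contributes almost nothing - the same point the back-of-the-envelope version makes without any simulation.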
Exhibit 4: As indicated above, Hubbard has a very binary approach. If you don’t do it his way (with regard to both definitions and methods), you are simply wrong in his view. There is no middle way. Interestingly, reality shows us otherwise, and for most everyday actions and decisions Hubbard himself will probably use simple qualitative assessments too. It is not a question of ONE superior tool; it should be about the right tool for the right application.
Exhibit 5: Finally, I would like to address the most basic premise of the book - and of Hubbard’s argument. Hubbard criticizes most risk assessment methods for their failure to measure objectively. But dear Douglas… risk cannot be measured objectively. Risk is a subjective thing, and there are various ways to look at it. Besides, what risk are you talking about, and risk for whom? This is probably the book’s greatest flaw, because the entire argument hinges on this premise. It creates an illusion of precision and certainty - which is exactly where Knight’s distinction can help us.
An additional thought: John Adams says that “A risk perceived is a risk acted upon”. As soon as we become aware of a risk, we instinctively react to it, for example by becoming more observant of it. This means that risk by definition cannot be objective: by the sheer act of noticing it, we already alter it. Risk will therefore be different for everybody, and is thereby inherently subjective.
In the end I am left thinking that Hubbard should have decided what kind of book to write. When he is constructive, it suits him much better than pages and pages of ranting. I think he could have written a wonderful book about how to do good Monte Carlo simulations (and who knows, maybe he intended to, but then the financial crisis came along and it was too good a chance to pass up - for him and/or Wiley?). The pretence of having written a book on risk management, what’s wrong with it and how to fix it, is just far from the truth, and the result is a book that I find hard to recommend. There are many, many better books about risk, risk management and risk assessment. Start with those, not this one.
There is a companion website with downloads and examples if you are interested in exploring:
Wiley & Sons, 2009
A more flattering review and summary can be found here: