
Here’s a book that I had considered checking out for a long time (first published in 1995, this book is almost two decades old!) but somehow never got around to reading. Well, that omission has finally been set right, thanks in large part to a discussion about risk and uncertainty that I had with Walter Zwaard a while ago.

To warn/inform any potential readers right away: I do think this is a mandatory piece of literature for anyone who wants to deal with risk in a more advanced way. At the same time it’s also one of the most philosophical books on the subject that I’ve read so far, and I’m aware that this will not necessarily please people who are looking for a “how to”-type of book. There is none of that to be found here; please move on. One valuable consequence of reading this book, however, may be that it keeps you humble with regard to what your advice about risk reduction might actually accomplish in the big picture…

Risk Compensation and Cultural Theory

As Adams tells us already in the preface, there are two central elements that run through this book: risk compensation and cultural theory (applied to risk). To be honest, and consider this warning no. 2, Adams discusses cultural theory almost ad nauseam. Sometimes he does this in such detail (over and over again) that one may get a somewhat impatient feeling of “yeah, yeah, I have understood the principle by now, please get on with it”.

What are Risk Compensation and Cultural Theory? Risk Compensation means that safety interventions are likely to be frustrated by behavioural responses that reset the overall level of risk to its original level (Adams regularly refers to human “risk thermostats”). Two related phenomena (or sub-types?) are the redistribution of risk from one group of people to another (which may even mean that the total risk for society as a whole increases) and the conversion of safety gains into performance improvements, which again restores the original level of risk.

Cultural Theory is one way to illuminate a world of plural rationalities. Where scientific facts fall short of certainty (and they most often do), humans are guided by assumptions, inference and beliefs. That means that risk is culturally constructed. With the help of some stereotypes, Cultural Theory can illustrate the various beliefs that underpin opinions and attitudes towards risk, and it may help to bridge the gaps between these different premises.

Let’s take a look at the various chapters:

Intro

The short first chapter is an introduction to risk in which Adams argues (justly) that everybody handles risk all the time and that it’s a balancing act between the perceived rewards of success and the perceived costs of failure. Risk decisions are rarely made with quantifiable probabilities, yet decisions get made nevertheless. ‘Real people’ do not aim for zero risk, but there is a huge industry (formal and informal) trying to reduce risk and help balance the various risks.

Kelvinist approach, or not?

Chapter 2 starts by discussing reports from the Royal Society about risk. In 1983 they distinguished between objective risk (what experts find) and perceived risk (what a lay person thinks). The Society’s 1993 report even expands on the non-objective view by exploring the perspective that risk is culturally constructed and that therefore both the adverse nature of some events and their probability are subjective. The gap between these views still existed when Adams wrote his book, and it still exists today. Some people take a Kelvinist approach to risk and think it is something that can be measured. But our perception of risk alters the risk: as soon as we see a risk we react to it and thereby alter the conditions, and thus the risk itself.

This is illustrated by Gerald Wilde’s model (modified by Adams) that shows how a variety of factors influence each other within the ‘risk thermostat’: Propensity to take risk, Balancing behaviour, Perceived danger, Rewards and Accidents all influence each other in one way or another. (You can find a picture and description of the risk thermostat in this article by Adams: http://john-adams.co.uk/wp-content/uploads/2006/risk,%20freedom%20&%20responsibility.pdf).
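To make the feedback idea a bit more concrete, here is a minimal sketch of my own (it is not Adams’ or Wilde’s actual model; the function, parameters and numbers are purely illustrative assumptions). Behaviour keeps adjusting until perceived danger matches the individual’s propensity to take risk, so an externally imposed safety gain gets “spent” on extra risk-taking:

```python
# Illustrative sketch of the 'risk thermostat' feedback loop (my own toy model,
# not taken from the book). Behaviour adjusts until perceived danger matches the
# propensity to take risk (the 'set point' of the thermostat).

def thermostat(propensity, baseline_danger=1.0, safety_gain=0.0, steps=100, k=0.2):
    """Iterate the balancing behaviour: take more risk when perceived danger is
    below the set point, less when it is above. Returns (behaviour, danger)."""
    behaviour = 1.0  # arbitrary starting level of risk-taking
    for _ in range(steps):
        danger = baseline_danger * (1.0 - safety_gain) * behaviour
        behaviour += k * (propensity - danger)  # balancing behaviour
    return behaviour, baseline_danger * (1.0 - safety_gain) * behaviour

# Compare no intervention with a measure that cuts danger per unit of behaviour by 30%:
b0, d0 = thermostat(propensity=0.5)
b1, d1 = thermostat(propensity=0.5, safety_gain=0.3)
print(f"no measure:  behaviour={b0:.2f}, resulting danger={d0:.2f}")
print(f"30% 'safer': behaviour={b1:.2f}, resulting danger={d1:.2f}")
# Both runs settle at the same danger level (the set point, 0.5): the safety gain
# has been converted into extra risk-taking, which is risk compensation in a nutshell.
```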

An interesting conclusion is that because of the risk thermostat no-one (apart from the odd regulator, safety advisor and the like) wants zero risk. Risk is inescapable and often also desirable, so everybody willingly takes risks. This, however, is not necessarily the starting point of most literature on risk. Instead the ‘safety profession’ seeks to reform the thinking of people - to no avail, because there appears to be a Dostoevskian irrational drive for freedom and risk. Much of the safety literature also ignores (or at least ignored at the time the book was written) the potential loss of rewards when things get too safe and Risk Compensation kicks in. Adams then introduces the ‘dance of the risk thermostats’, showing how various risks interact and how people react to them. All of this makes risk something that cannot be measured.

The chapter concludes with a discussion of risk and uncertainty, going back to Frank Knight’s work (find his classic book “Risk, Uncertainty and Profit” here), while acknowledging that we commonly use the two words interchangeably. Adams, too, uses the concept of risk in a sense that is not found in the realm of calculations but tends more towards uncertainty, because many things simply cannot be quantified precisely.

Myths of human nature

Chapter 3 explores “Patterns In Uncertainty”, starting with a little rehash: risk perceived is risk acted upon. The future is uncertain and inescapably subjective; it does not exist except in the minds of people attempting to anticipate it. Anticipations are formed by projecting past experience into the future, but when we anticipate harm, we take avoiding action. This is one reason why accident rates are a very weak measure of risk: it is the very determination of the measures to change risk that frustrates their ability to measure it. Or, as Douglas and Wildavsky put it: “Can we know the risks we face now and in the future? No, we cannot; but yes, we must act as if we do.”

Adams then explains four myths of nature (i.e. how nature behaves - benign, ephemeral, capricious or perverse/tolerant) and four myths of human nature according to Cultural Theory (fatalist, individualist, egalitarian or hierarchist) and combines these. These world views are different ways in which people rationalize the world around them and that help them survive in the face of uncertainty. They are also powerful filters through which people see the world, and they are reinforced by the company one keeps.

These cultural filters are also one explanation of why people react differently to the same evidence, or more often to the lack of it, thereby constructing different risks culturally. These differing risks are not proof of the rationality or irrationality of one or several of the views in the argument (although participants in a discussion may often classify them as such), but they are a clear sign that the participants are arguing rationally from different premises. Cultural filters also affect how people perceive the positive and negative effects of risk and safety measures. This explains (partly) why people behave differently in response to the same objective reality. Debate is usually characterized not by irrationalities, but by plural rationalities.

When evidence is inconclusive, the scientific vacuum is filled by the assertion of contradictory certitudes. For the foreseeable future scientific certainty is likely to remain a rare commodity, and issues of health and safety will continue to be decided on the basis of scientific knowledge that is not conclusive. This is exemplified by our ignorance about the hazards of chemical substances, depicted on the black cover of the book, where a few tiny white spots represent our knowledge in a sea of ignorance. While many see more science and research as the answer to this lack of knowledge, Adams argues that “even more urgent than the need for more science is the need for a better understanding of the bridge of inference and belief”.

Error, Chance, Culture

“Error, Chance and Culture” is the title of the next chapter. When coming up with safety measures, people are often seen as the problem, and the positive reasons people have for taking risks (the rewards in the risk thermostat) are often forgotten. No one wants an accident, but everyone wants to be free to take certain risks (by the way, in response to the number lovers Adams comments: “rarely outside the casino would an individual be able to attach a number to his intended level of risk”). Excessive prudence is a problem rarely contemplated in the risk and safety literature, but definitely a reality one should be concerned about.

Adams then discusses the balancing behaviours of the four stereotypes from Cultural Theory and their attitudes towards risk. Egalitarians strive towards zero risk and try to reduce variance. Fatalists fear the worst and hope for the best, but think that things are predestined. Hierarchists try to affect risk through regulation, engineering, training and the like. Individualists also try to reduce risk, but with a greater focus on the positive rewards of risk taking. The acceptability of risk differs between these stereotypes because of their differing cultural filters. Do keep in mind that the categories from Cultural Theory are caricatures - real people are more complex. And of course: context matters!

Scale and voluntariness are important factors in taking risk and can drive people to behave differently towards different risks, like the Greenpeace campaigner who takes a large personal risk in his tiny boat in order to minimize the risk to the planet or to an endangered species.

Dake concluded in 1991 that “The perception of risk is set in a continuing cultural conflict in which the organization of rights and obligations, and of community loyalties and economic interests, matter at least as much as scientific evidence”.

The final part of Chapter 4 discusses the efficacy of interventions (somewhat preluding Chapters 7 and 8). Because people compensate for externally imposed safety measures, risk regulators and safety engineers are chronically disappointed in the impact they make on the accident toll. Looking at overall fatality statistics, it seems that some risks have been suppressed in certain activities only to pop up in others.

Measuring risk...

Chapter 5 sees us return to “Measuring Risk” and the problems connected to that. Adams mentioned earlier the problem that risks change as soon as they are spotted, and how weak accident statistics are as a measure of risk. But, given their widespread use, Adams elaborates on the problems connected to accident statistics. The first problem one runs into is the unreliability of historic accident records (“not only is it an untrustworthy guide to the future, it is an untrustworthy guide to the past”). The second is that there is no agreed scale for the measurement of the magnitude of adverse events.

Fatalities are relatively well recorded, but they are rather rare events with a huge variety of causes and therefore an unreliable guide to remedial actions. Injuries are typically under-recorded and suffer from systematic problems (Adams has worked a lot in road safety, where the police assign the magnitude of an injury rather arbitrarily and without any medical knowledge - if they record it at all), and this problem gets worse the further one descends into the iceberg… Uncertainty increases as severity decreases.

Implementing safety measures based on accident statistics also runs into the problem that patterns may be perceived where none are present, and, as one frequently sees, that a lower accident level after the measures have been implemented is celebrated as evidence of a successful intervention, while it may simply reflect mechanisms like regression to the mean or accident migration (accidents disappear in one place and pop up in another). Adams says that the available studies show convincingly that claims for the successful treatment of safety problems that rest on simple before-and-after accident counts will inevitably exaggerate the effect of the safety treatment.
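A small simulation may make the regression-to-the-mean point tangible. This is my own toy example, not from the book; the site counts and thresholds are invented. All sites share the same underlying accident rate, yet the ‘blackspots’ selected for treatment improve the following year with no treatment at all:

```python
# Toy simulation (not from the book) of regression to the mean in
# before-and-after accident counts. Every site has the same true accident
# rate; "blackspots" are simply the sites that were unlucky in year 1.

import math
import random

random.seed(1)
N_SITES = 1000
TRUE_RATE = 3.0  # identical underlying accident rate at every site

def poisson(lam):
    """Draw a Poisson-distributed count (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

year1 = [poisson(TRUE_RATE) for _ in range(N_SITES)]
year2 = [poisson(TRUE_RATE) for _ in range(N_SITES)]  # nothing was changed

# Select the worst sites of year 1 (as a blackspot programme would) and
# compare their accident counts before and after the "treatment":
blackspots = [i for i, count in enumerate(year1) if count >= 6]
before = sum(year1[i] for i in blackspots) / len(blackspots)
after = sum(year2[i] for i in blackspots) / len(blackspots)
print(f"'blackspots' selected: {len(blackspots)} of {N_SITES}")
print(f"mean accidents before: {before:.2f}, after: {after:.2f}")
# The counts fall back towards the true rate of 3 purely by chance, yet a naive
# before-and-after comparison would credit the drop to the safety treatment.
```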

Cultural filters and bias play a big role in this. Cultural filters select and construe evidence with regard to costs and rewards to support established biases. They are particularly effective in cases where the available evidence is contested, ambiguous or inconclusive - something that fits most controversies in the safety world. The less clear the evidence, the greater influence belief and assumption will have. These have roots in our previously filtered experience of the world, which is formed by input through our senses and through the information that we gather all day long through media, colleagues, books and the like.

As a side-line remark: I don’t necessarily agree with how Adams uses/defines some terms like near miss and hazard (check e.g. page 89), but this doesn’t do any harm to his message.

Monetizing risk?!

The sixth chapter raises a further problem: whether and how risk can be translated into cash. Behaviour is assumed to seek an “optimal” trade-off between the benefits of risk and its costs. Enter cost-benefit analysis (CBA), presented as THE method in some of the literature and sometimes even mandated by law or regulation. But measuring the costs and benefits of a varied bunch of multifaceted and incommensurable factors is not an easy thing to do. Impossible, probably.

Adams uses some UK government guidance documents as examples to explain his point(s). Again the basis is not a disagreement about the nature of rational thought, but the differences in the premises on which the contending parties have built their rational arguments. Reason, for many economists, is reduced to calculation. However, as some of the guidance documents acknowledge (but then choose to ignore): not everything can be (directly) expressed in money.

The underlying principle of CBA is that a measure should produce a Pareto improvement: will the change make at least one person better off and no-one worse off? Since such changes are very rare, the principle is usually diluted to “will the project produce a potential Pareto improvement?”. One very critical element is that the only acceptable judges of the value of losses arising from a project are the people suffering them, yet willingness-to-accept is rarely used or usable in CBA, because a single statement of “at no cost is this acceptable” blows up the entire calculation (and acts as a quasi ‘veto’ of the project). Because of this, most projects use willingness-to-pay (sometimes taken from a standard) as the criterion instead, but this may be just a fraction of a willingness-to-accept criterion, and it is often not quite fair because it is usually based on some arbitrary number decided by the analyst or a regulator. To complicate things further, Adams explains how the definitions of costs and benefits are very much dependent on the legal or moral context of the problem.
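To see why the willingness-to-accept route breaks down in practice, here is a deliberately simple toy calculation of my own (the figures are invented, not from the book): a single respondent who answers “at no cost is this acceptable” makes the cost side infinite and vetoes the project, which is why analysts retreat to willingness-to-pay.

```python
# Toy CBA illustration (my own invented figures, not from the book): contrast
# willingness-to-pay and willingness-to-accept as measures of the same losses.

willingness_to_pay = [40, 120, 0, 75, 15]                 # what each affected person would pay to avoid the loss
willingness_to_accept = [60, 300, float("inf"), 90, 25]   # what each would demand in compensation ('inf' = "at no cost")

project_benefit = 400

cost_wtp = sum(willingness_to_pay)     # 250 -> benefit exceeds cost: a "potential Pareto improvement"
cost_wta = sum(willingness_to_accept)  # infinite -> the losers can never be compensated

print(f"benefit {project_benefit} vs cost (WTP) {cost_wtp}")
print(f"benefit {project_benefit} vs cost (WTA) {cost_wta}")
```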

The final part of the chapter looks into CBA and fatalities. Here the moral question of what one is willing to accept is even more important. In these cases CBA turns the question into the acceptability of an increased probability of a fatality. This may work for small changes, but as soon as the likelihood of a fatality increases, one encounters the infinity problem (“at no cost…”) once more…

When the risk of death is spread over a greater population, Adams observes somewhat cynically (one is reminded of Stalin’s quote about statistics) that the greater the ignorance about the identities of the lives at risk, the lower will be their value; and the more ignorant people are about the risks of a project, the more favourable its benefit-cost ratio will be.

Things get more complicated still when one is to value the lives of others: “Allowing people a monetizable interest in the fate of others creates a problem that is difficult to manage because of the indeterminate numbers of people who might be concerned about other people.” This problem leads Adams to speculate about the motives of people who claim to value other people, and about the problem of double counting.

Interesting is the rejection of a lower cut-off in ALARP. Small risk changes (e.g. one in a million) should not be accepted automatically, because if large numbers of people are affected by the small change, the aggregate value of this risk increase may be very significant after all.
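A hypothetical back-of-the-envelope illustration of that aggregation point (my numbers, not Adams’): an extra annual fatality risk of one in a million per person, spread over a population of 50 million, amounts to 50,000,000 × 0.000001 = 50 additional expected deaths per year - hardly a negligible total.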

Especially the example from Chapter 9 with the 300-year prognosis puts CBA nicely in perspective: one is expected to say something about costs 300 years ahead in time. But looking 300 years back, the USA did not even exist - how can one possibly say anything sensible about something so far in the future?

On a final note, one major problem that I personally see in many CBAs is that they reduce acceptability to a single number, and that is very poor decision making indeed, as it neglects the richness and multitude of factors that should be part of a good decision. Or, as Adams says: CBA builds on the belief that “agonizing judgment can be replaced by mechanical calculation”. Additionally it only gives an appearance of serious deliberation, and “in practice such methods are useful only for preaching to the converted; CBA is almost always used not to make decisions, but to justify decisions that have already been made”.

Seatbelts, helmets and other misunderstandings (?)

Chapters 7 and 8 then take some concrete examples from road safety, discussing risk compensation, how effective safety measures really are, and whether their effect can be documented at all (or whether “evidence” is rather used to promote a policy or belief).

Chapter 7 deals entirely with seat belts. While everybody will understand the benefits of wearing a seat belt if you are IN a crash, the benefits for road safety in general are more dubious, not least because of behavioural changes after seat belts became mandatory in many countries, with increases in fatalities among other road users (especially the weaker ones, like pedestrians). While at times technical, the chapter is a great read with regard to understanding statistics properly and looking at side-effects, as well as how certain political processes may run despite the evidence. Drawing on the fable of the emperor’s new clothes, Adams writes: “Once an idea, however preposterous, becomes accepted by, as espoused by, established authority, it can become very difficult to dislodge. The idea becomes self-reinforcing”, and: “The fact that large numbers of others believe the idea can become sufficient reason for believing. After a while evidence is no longer required”.

Chapter 8 takes some more examples from road safety (safer roads and vehicles, bike helmets and alcohol use), more or less making the same arguments as the previous chapter. Road accident deaths and injuries increase and decline over time in a way that is remarkably independent of safety measures. Whenever deaths and injuries do decline, this is seized upon as evidence that strengthens the initial belief in the latest measure. If they go up, this is ignored or explained away by pointing at confounding variables (which can always be found in complex systems like traffic).

Since safety measures do affect behaviour, Adams proposes instead to place a big spike on the steering wheel, since this will likely lead to extremely careful (but slow) driving and thus enhanced safety. But, since in most countries road safety and traffic efficiency are regulated by the same department of government, safety measures are often judged by their efficiency gains. After all, the rewards and gains perceived by politicians and civil servants (and therefore also their incentives) are very different from those experienced by the people on the road.

Global warming?

After the relatively manageable road safety problems (well, one may actually doubt that after reading), Chapter 9 takes on a really BIG risk issue, namely the greenhouse effect. Now, a lot has been said on the issue in the past 20 years and I have to admit that I’m not the most knowledgeable person on this lofty subject, so I’ll keep this short and recommend that everyone reads it for themselves. I do, however, suspect that many of Adams’ critical observations with regard to models, assumptions and beliefs are still valid (even though the scientific consensus by now probably is that there is a greenhouse effect).

Risk Society

Chapter 10 is called “The Risk Society” and deals in large part with Ulrich Beck’s work and his seminal book “Risikogesellschaft”, in which Beck argues that science and technology have created new risks that did not exist before and that we live in a society where the production of wealth has been overtaken by the production of risk. Risk is a defining characteristic of the age we live in. As you may know, I’ve previously read Beck’s follow-up book “Weltrisikogesellschaft” (see review/summary here), which is one of the hardest books I’ve ever read. Chapter 10, too, is very much on the philosophical side of things, at least the first half, discussing Beck and Cultural Theory and then comparing Beck and Wildavsky, who both agree on the importance of culture and society in forming perceptions of risk, but come to different conclusions because of their different premises. Adams critically discusses the solutions that Beck and Wildavsky propose. This includes a discussion of how professionals (e.g. engineers) ought to be able to act (and be protected) in the public interest in safety matters. This makes for an interesting end of the chapter where we enter the realms of ethics, taking responsibility and possibly whistle blowing, by way of an example from the Fellowship of Engineering. It offers some interesting perspectives on covering one’s behind, acting in the interest of the client versus acting in the interest of the public, and also on why in government circles wrong decisions are often taken without any consequences for the decision makers.

Picking up again the dilemma from earlier in the book - “Can we know the risks we face now and in the future? No, we cannot; but yes, we must act as if we do” - Adams comments: “‘As if’ is ambiguous. Some act knowing that their knowledge is partial and conditional. Others, of strong belief and conviction, manage to conjure certainty out of ignorance. Yet others, those advocating a scientific approach, act as if uncertainty is a temporal condition that can be overcome by dint of more research. They divert attention away from the question of how to act in the face of uncertainty by focusing their energies on the impossible task of removing uncertainty”. But… the scientific approach to risk, assuming that uncertainty is a problem that can be cracked by science, will not be the solution, since everything suggests that science can but scratch at the margins of the problem.

Can we manage risk better?

The final chapter asks whether we can manage risk better. Beck proposed democratic control of risk, for which the competence to make our own judgements is necessary - illustrative of the “if only we had more information” ideal. This poses a challenge: if scientists cannot manage to agree on risks, how is the scientifically untutored rest of the population supposed to engage in a rational debate on risk? Interestingly, in genuine cases of uncertainty, hazards can be much clearer to non-scientists than to scientists.

The philosopher Alfred North Whitehead called the mistake of confusing reality with one’s abstractions the ‘fallacy of misplaced concreteness’. Abstractions are necessary in order to handle problems, but one must be aware that abstractions are also culturally constructed and culturally maintained. Therefore it’s important to be vigilant about those parts of reality that do not conform to one’s model. If abstraction and reality lead to different results, it is not reality that was wrong.

Adams then complicates Cultural Theory a bit, first by adding a fifth stereotype, the observing philosopher who does not engage but only observes, and somehow floats above the other four stereotypes. A second complication is demonstrating how the various stereotypes can react differently to the earlier presented myths of nature. Somehow this flows into a couple of paragraphs that criticize the deregulation of safety for actually creating greater bureaucracy and less effect.

Finally the author returns to the question he posed above this final chapter and offers some viewpoints from a number of authors, picked from the various ‘cultural camps’. Each possesses only a small window on the truth and helps to curb the excesses of the other three. None of the quotes offered really answers Adams’ question, of course, but personally I do like Wildavsky’s statement:

“Safety results from a process of discovery. Attempting to short-circuit this competitive, evolutionary, trial and error process by wishing the end - safety - without providing the means - decentralized search - is bound to be self-defeating. Conceiving of safety without risk is like seeking love without courting the danger of rejection”.

Summing up

Adams finishes with a half-page summary of his own relativistic insights.

As I said at the start of this review/summary, this is a highly philosophical book. Occasionally - no, I should say often - it’s also a very humorous, and at times truly sarcastic, book that brought a smile to my face on many an occasion. Just one quote from page 57 (where he discusses balancing behaviour): “…it is sometimes acknowledged that zero risk is an unattainable ideal but, nevertheless, one towards which we should all continually strive. Those who believe it is actually attainable are clearly deluded.”

Even better is the middle of page 96 in the discussion of CBA: “The Department of the Environment appears to be saying that the use of a money standard may not be sensible or moral, but it is convenient; therefore a money standard can be employed so long as you are only using it to measure preferences or relative values and not actual values. Perhaps this makes sense to an economist, but other readers are likely to need help”.

And there are many more - read, enjoy, reflect, philosophize, but don’t forget to act too.

I’ve read the 5th impression from 2000 (I don’t know if this differs from earlier versions), ISBN 1-85728-068-7 (paperback version).