Day Four

Where Day 3 introduced (more or less) Course 1 of the Master, Day 4 was to tackle Course 2 ‘The Sociology of Accidents’. An ambitious task. JB presented us with five different views on what risk is and why accidents occur. A nice summary of these is found in the SINTEF report that we had received as recommended reading beforehand.

Risk as energy to be contained

This is a notion of risk that makes immediate sense. Risk is seen as something that can be ‘measured’: barriers have a certain reliability, as do humans. Many of the traditional risk management tools and applications have come about because of this view. Here we find the basic energy-barrier model, the well-known hierarchy of control, as well as the Swiss Cheese Model and most of the models from the First Cognitive Revolution.

Of course, the view can be challenged. Energy is not necessarily the enemy, but often something that is desired (just think of a nice hot cup of coffee). In complex systems you don’t always know where the hazards come from. And barriers can bring problems of their own: they are not always independent, they have side-effects and they can make the system more complex and thereby riskier.
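
To make the ‘measurable’ part concrete, here is a minimal sketch in Python (all probabilities are hypothetical, purely for illustration): with independent barriers the chance that every barrier fails is just the product of their failure probabilities, but a single common cause degrading all barriers at once can dominate the real risk, which is exactly the independence problem mentioned above.

```python
# Minimal sketch of the energy-barrier view of risk.
# All probabilities below are hypothetical, purely for illustration.

def p_all_barriers_fail(failure_probs):
    """Probability that all (assumed independent) barriers fail."""
    p = 1.0
    for pf in failure_probs:
        p *= pf
    return p

barriers = [0.1, 0.05, 0.01]          # hypothetical per-barrier failure probabilities
print(p_all_barriers_fail(barriers))  # 5e-05 -> looks comfortably safe

# The catch from the paragraph above: barriers are rarely independent.
# If one common cause (say, poor maintenance) degrades all of them at once,
# that single cause can dominate the real risk.
p_common_cause = 0.01        # hypothetical probability of the common cause
p_fail_given_cause = 0.5**3  # all three barriers badly degraded together
p_real = (p_common_cause * p_fail_given_cause
          + (1 - p_common_cause) * p_all_barriers_fail(barriers))
print(p_real)  # ~0.0013: roughly 25x the 'independent' estimate
```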

Risk as a structural problem of complexity

This last element in particular gave rise to Perrow’s Normal Accidents Theory. Systems are characterized by their degree of coupling (loose to tight) and their interactions (from linear to complex). According to Perrow, tight coupling must be controlled by centralization, while complexity should be handled by decentralization. Since Perrow found that no organisation can be both at the same time, he posited that complex, tightly coupled systems have to be avoided at all times. While most people in Safety have a relatively optimistic view (risks can be handled), Perrow (a sociologist, by the way) is one of the few pessimists.
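
Laid out as the two-by-two it implies, the argument looks like this (a toy rendering of my own, not Perrow’s notation):

```python
# Toy rendering of Perrow's 2x2 (my own labels, not Perrow's notation):
# tight coupling demands centralisation, complex interactions demand
# decentralisation, and the combination demands both at once.

def perrow_control_mode(coupling: str, interactions: str) -> str:
    if coupling == "tight" and interactions == "complex":
        return "centralise AND decentralise at once -> impossible, avoid such systems"
    if coupling == "tight":
        return "centralise"
    if interactions == "complex":
        return "decentralise"
    return "either mode works"

for coupling in ("loose", "tight"):
    for interactions in ("linear", "complex"):
        print(f"{coupling:5} / {interactions:7}: {perrow_control_mode(coupling, interactions)}")
```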

Note that Perrow did not really define ‘complex’. Also, he sees complexity as a structural property of a system, while Rasmussen sees it as a functional one.

One problem with the NAT model is that it is rather static. What if something goes from loosely to tightly coupled in an instant (see Snook)? Additionally, it was argued that some organisations actually were centralized and decentralized at the same time. Enter the next view.

Complex Systems can be Highly Reliable

People like Weick and LaPorte (all organisational scientists) argued that some organisations (quickly labelled HRO), like aircraft carriers or nuclear power plants, managed to move continuously between centralised and decentralised behaviour/management, depending upon the task at hand. Traits of HRO were thought to be the five principles later summarised in “Managing The Unexpected”: preoccupation with failure, reluctance to simplify, sensitivity to operations, commitment to resilience, and deference to expertise.

In the end, HRO has become more of a business model than a real field of study, although there are clear links to resilience engineering, especially because HRO doesn’t see safety as the absence of a negative, but as the presence of positives.

The book to read, obviously: “Managing The Unexpected”.

Risk as a gradual acceptance

This view sees Risk as a social construct. Path Dependency is important; accidents have a history! Key concepts are the Incubation Period, Practical Drift (Snook once more - the behaviour of the whole is not reducible to the behaviour of the actors), Fine-tuning, information deficiencies and the Normalization of Deviance. Accidents happen when normal people do normal jobs. Accidents are seen as societal rather than technical phenomena.

Barry Turner’s “Man-Made Disasters” (a book to buy when you have ample funding - check Amazon! But get the general idea in this paper by Pidgeon and O’Leary) is one central piece of work (by the way, I love the multi-faceted kaleidoscope metaphor in the Weick article). Another important piece of work is the Challenger investigation, which said a lot about normal organisational processes: “fine-tuning until something breaks”. Diane Vaughan (book: “The Challenger Launch Decision”) argues that safety lies in organisational culture, not in barriers.

Note the difference (nuance) between Reason’s latent failure (which indicates an error) and the notion of drift (not an error, but fine-tuning). Success can become the enemy of safety when we get too good at Faster, Better, Cheaper.
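
As a thought experiment, here is a toy model of the Normalization of Deviance (entirely my own construction, loosely inspired by Vaughan’s O-ring account - the numbers are made up, not actual Challenger data): every anomaly that is survived gets reclassified as ‘in-family’, so the accepted envelope quietly ratchets outward until it meets the real breaking point.

```python
import random

# Toy model of the Normalization of Deviance (my own construction; all
# numbers are hypothetical). Each anomaly that is survived is reclassified
# as acceptable, so the accepted envelope ratchets outward.

random.seed(7)
accepted_limit = 1.0   # hypothetical design limit for some deviation
breaking_point = 3.0   # where things actually break (unknown to the actors)

for mission in range(1, 101):
    # Observed deviations cluster around what is currently considered normal
    observed = random.uniform(0.0, accepted_limit * 1.4)
    if observed >= breaking_point:
        print(f"Mission {mission}: broke at deviation {observed:.2f}")
        break
    if observed > accepted_limit:
        # Survived outside the envelope: the anomaly is declared 'in-family'
        # and the accepted limit is fine-tuned outward.
        accepted_limit = observed
        print(f"Mission {mission}: deviation {observed:.2f} survived and is now 'normal'")
```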

Risk as a Control Problem

This is actually a good transition to the final, closely related view. Central to this is Jens Rasmussen’s famous 1997 ‘farewell’ paper “Risk Management in a Dynamic Society: A Modelling Problem” and the figure that pictures the ‘drift’ (as a consequence of fine-tuning, ETTO-ing and the like) of activities between three boundaries (economic failure, unacceptable workload and ‘acceptable performance’).

The irony is that you often only find out where the boundary is by going right through it! The model is mainly a basis for discussion: a way to look at your challenges and how to deal with them. Sometimes a more cybernetic approach with control networks is taken (e.g. Nancy Leveson’s STAMP).

The enemies in this view are goal conflicts and distributed decision making, as well as success (again).
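
For intuition, a small toy simulation (entirely my own construction, loosely inspired by Rasmussen’s figure rather than anything in the paper itself): an operating point is pushed away from the economic boundary by cost pressure and therefore drifts toward the boundary of acceptable performance, which, true to the irony above, is only discovered when it is crossed.

```python
import random

# Toy simulation of Rasmussen-style drift (my own construction, not from the
# 1997 paper). The operating point sits between the boundary of economic
# failure (x = 1) and the boundary of acceptable performance (x = 0).

random.seed(1)
x = 0.8                # start with a comfortable safety margin
cost_gradient = 0.01   # hypothetical steady push from cost/efficiency pressure

for day in range(1, 1001):
    x -= cost_gradient            # fine-tuning: each single step looks harmless
    x += random.gauss(0.0, 0.02)  # normal day-to-day variability
    x = min(x, 1.0)               # the economic boundary pushes back hard
    if x <= 0.0:
        print(f"Boundary of acceptable performance crossed on day {day}")
        break
else:
    print("No accident in this run, but the margin eroded invisibly all the same")
```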

Dissecting accident reports

The day was closed with an interesting exercise in which a number of public accident reports (by the CSB, RAIB and others) from various sectors were put on display. Students were asked to pick one and then, in groups, work out what models, biases and possible traces of ‘old view’ safety had influenced the results.

During the feedback session many interesting reflections were made and experiences were shared.

>>> To Day 5