For many years, I have been working with and for safety on level crossings. The other week, however, I found myself in the local newspaper (front page and all…) arguing against a recently implemented safety measure. Paradoxical, or not? First, some necessary background information…
Level crossings are places on the railways that generally carry a rather high risk. Here, ordinary road users (cars, buses, cyclists and pedestrians) may meet a train. Trains carry a lot of energy (mass and speed!), cannot steer away (being ‘locked’ to the rails) and take a long distance to stop. In many cases, the meeting of a train and ordinary road users is fatal, or at least highly damaging, for the latter.
For this reason, safety measures have often been implemented, like automatic barriers, warning lights, signs and the like. Safety measures vary, depending upon the situation. In Norway, for example, all level crossings with public roads have to be secured, while the smaller, infrequently used level crossings in the woods or rural areas will have significantly fewer safety measures. Locations with the highest risk, or locations that pose special problems, are preferably removed altogether, by building a fly-over or culvert and turning it into a crossing at different levels. Separating various modes of traffic in space is better (but more expensive) than building barriers.
When you leave the station of my hometown and travel north, after about a mile you will meet four unprotected level crossings connected to some farms beside the train line. If you try Google Maps and put in the coordinates 59.565506, 11.297634 you get an idea of the situation.
The terrain is relatively flat, but slopes on one side and there are some curves. Vegetation can be another challenge that disturbs the line of sight. Most of these level crossings are used very infrequently and are even closed with a fence that the user can open if necessary (for example, when a farmer has to get to the fields on the other side of the tracks). One of the level crossings is clearly in daily use for farm traffic and people who live in the woods a few hundred meters from the tracks. All level crossings have “Stop, Look, and Listen” signs.
It is safe to say that the situation has been like this for many years, decades probably, and the people who live there and use the level crossings are very familiar with the situation (I actually spoke to one of the farmers). Save for the odd visitor, I dare say that only people with experience and knowledge of the situation use these level crossings. The chance of an unfamiliar, random passer-by using them is remote, because the roads literally go nowhere.
Another safety measure
During a recent upgrade/maintenance on the line, someone noticed that none of the unprotected level crossings on the line had a sign ordering the train driver to blow the horn to signal “Train coming”. The infrastructure manager decided to correct this non-compliance. From that moment on, train drivers did as they were ordered and created a decent amount of noise in a wide area that had previously been quiet.
Blowing horns is a safety measure with a long historic tradition. From their earliest days, locomotives have been equipped with loud horns or bells to warn vehicles or pedestrians that they were coming. Steam locomotives had steam whistles; the later diesel and electric locomotives got specifically designed train air horns. Train air horns are significantly louder than their counterparts on cars and trucks.
To reduce the impact on people living nearby, train drivers have to blow the horn only between 06:00 and 22:00 - apart from emergencies (like people or animals on the tracks), of course. The problem is that even with the honking restricted to these hours, there is still noise at least 4 times per hour. There are people living within 100 metres of the place where the trains blow the horn - with NO sound-reducing barriers in-between. There is also a nursing home close by.
Additionally, even though rules try to stipulate how the horn signal is to be given (and I assume train drivers are trained in this), not all train drivers do it the same way. Some blow the horn significantly longer, earlier, or louder than others, creating more nuisance.
Some critical questions
I have not been part of, or seen, the assessments behind the decision to put up the signs that are causing the noise. When I filed a complaint, the answer I received referred to compliance with some rule. And I presume (but again, I may be mistaken) that this is simply how the process went: a non-compliance was identified and, without much thought, a process was started to fix the problem. Undoubtedly with the best of intentions and the assumption that it would improve things, but I doubt that real risk-based thinking was part of the process.
And while I wasn’t part of the process, I can ask a couple of critical questions:
1: Management of Change?
I wonder if someone has assessed the effects of the change. It looks so simple and straightforward, but is it really? One may think that the more safety measures the better, but believing that implementing safety measures is only positive or neutral is a fallacy. Nor do more safety measures automatically mean lower risk or greater safety.
Just some points for consideration:
- More often than not, new measures create new problems. In this case, among other things, noise.
- The frequent use of sound signals may cause desensitisation. After a while, people may get used to it and no longer react to it. The warning signal thereby loses its function because it is overused. Kind of a “cry wolf” effect. Experience shows that people do not react (or react poorly) when an alarm goes off in a shopping centre or office (“Oh, it’s probably another false alarm”), and the “Don’t leave your luggage unattended” messages at airports have become part of the background noise, nothing more.
- It can lead to changed behaviour, for example that people start relying on the horn signals and stop looking out for themselves, leading to risky situations when a train driver forgets to give the signal, or outside the period of 06:00 - 22:00.
2: Suitable and effective measure?
While the measure has a long tradition and is implemented as a standard solution, I wonder if it is a solution that suits our modern time at all?
These days, people walk around with their iPods or mobile phones and headphones all the time, or play loud music in their cars. At the locations we are talking about here, there are probably tractors that are pretty noisy themselves, and some farmers use hearing protection while operating them. So it is doubtful that the warning signal reaches its intended ‘audience’.
We should also consider how honking is perceived in normal life. While we cannot fully compare road and rail traffic (for example, the meaning of a ‘red signal’ does not have the same impact), the function of a horn is generally as a warning signal in extraordinary situations (although I know that drivers in some ‘southern’ or ‘eastern’ cities appear to use it as a means to signal that they are participating in traffic). So shouldn’t the use of the train horn also be reserved for extraordinary situations, like when there are people on the rails as the train approaches, or possibly when there is fog and visibility is bad?
Using a danger/warning signal as a constant signal means that its effect wears off very quickly, leading to the above-mentioned desensitisation.
3: Gain or pain?
One may wonder what the cost/benefit looks like in this case.
Typically, these cost/benefit assessments only look at safety and money. I presume that the process (if there was any, because usually they are done very implicitly, especially when triggered by non-compliances) went something like this: there is an additional safety measure, so we assume safety improves. The expenses are relatively limited for the infrastructure manager, who has to put up a couple of signs (although you may be surprised at the total costs if you factor in the design, planning, updating of maps, etc.), while the cost for the railway undertaking is nil (train drivers have to do their job, and that they do already).
However, this view of cost/benefit is too limited. Besides safety and money, other factors should also be included in the assessment, like efficacy (the assumed safety gain may not be there at all) and effects on areas other than safety (like environment and welfare in this case). One will then also notice that we are dealing with various, non-overlapping groups of stakeholders that all have different costs and benefits. Safety gain (or loss) concerns passengers and personnel on the train and road traffic using the level crossing (the gains for these two groups are different, mind you!), cost and compliance concern the infrastructure manager and railway undertaking, while environmental and welfare aspects concern the people living in the area.
Dealing with risk means making trade-offs between various aspects. Even if safety were to improve thanks to the signalling with the horn, someone else pays the price. What is more important? Quite importantly also, who gets to decide? Were other stakeholders, like the neighbours or the municipality, even involved in the decision? Or were they only confronted with the results? (As far as I know, the answers to the last two questions are No and Yes, respectively.)
I am not necessarily a fanatical adherent of utilitarianism, but I am rather sure that Bentham would not approve of this. To me it does not sound like the solution with the greatest gain for the greatest number.
What can we learn?
Let me first say very clearly that I do not know if the implementation of this ‘safety measure’ was wise. Answering the questions raised above might help determine that. And there are surely other important factors not mentioned here that should be considered too. For now, I am very sceptical.
Some may wonder what the fuss is all about. Why make safety so complicated? Simple answer: because it already is. I do not make it so. In fact, I think this case illustrates very well that safety is often way too complex to dumb it down into compliance with a rule and the implementation of a simplistic standard measure. Other lessons might be:
- One size usually fits no one really well.
- More is not always better. It is not always a matter of the old Dutch saying of “baat het niet dan schaadt het niet” (“if it doesn’t help, it doesn’t do harm”), because it just might do harm.
- Implementing safety measures changes the system, which can (and will) lead to changed behaviour, which may actually lead to a reduced level of safety.
- Compliance is not the same as safety.
- Talking to various stakeholders will give you a richer picture of the problem, and lead to better (more robust) decisions.
Also published on Linkedin.
The Frequency Illusion bias can be a funny thing. And I’m not only saying this because, for some strange reason, it is also called the Baader-Meinhof Phenomenon. The Frequency Illusion is this: when people learn or notice something new, they start seeing it everywhere. It is why pregnant women see pregnant women everywhere. Or take myself. Last winter we bought another car we are extremely satisfied with. Afterwards, I started noticing the same model and make everywhere. Same colour, even. There appear to be zillions of them in our little part of the world.
It can also happen in a professional sense, like when you learn something new and then recognise it in many events and situations that you come across. You actually have to watch out for this, so that you don’t overdo it and attribute every event and every situation to that new thing, be it culture, drift, complexity or whatever.
Still, once you start studying human factors, you will recognise a lot of it in everyday life and in the media. Sometimes you even come across some rather funny examples. Let’s look at one.
Upon reading that headline, some thoughts immediately popped into my head. Vandalism? Not likely (but not impossible), her being an old lady. Dementia? That’s more of a possibility… But wait, we are so quick with our judgements and biases. Another thing to watch out for, maybe even more than seeing things we learned recently. Let’s first get some information.
What had happened was that the lady in question was on a day out to an art museum with the nursing home where she lives. One of the pieces of art, made by avant-garde artist Arthur Köpcke in 1965, looks like a giant unfinished crossword puzzle.
What the old lady (aged 90) did was fill in some of the missing parts of the puzzle. Which got me laughing out loud, but that is just my sense of humour.
How on Earth…
At first one may wonder: “How could she do that?!” She was in a museum, after all; isn’t it ‘common sense’ that you cannot and shall not write on pieces of art in a museum? She must be demented!
But, as the interviews in the media show, while she may be living in a nursing home, this old lady had her wits very much about her. And while the Mirror calls it an error, the lady even admits that she did it on purpose! Vandalism after all, then?
No! A perfect example of local rationality. If you look at the context, the actions of the old lady make perfect sense.
Perfect sense? Come on, some may say, as mentioned above, this was a museum! Right, but:
- This museum (of modern art) contains a lot of interactive art where visitors are encouraged to interact with the pieces of art.
- This piece was not interactive, but that was not made explicitly clear. Instead, there was a clear instruction, or at least an encouragement, on the piece of art itself, saying "insert words".
- The lady therefore believed that she would be acting in the artist’s interest and intention if she did insert words.
- Because she was not carrying a pen herself, she borrowed one from the museum staff, making her intentions clear. Nobody stopped her.
Looking at this context the old lady’s actions make a lot of sense. People do things because it makes sense to them at that moment and place, given their current knowledge, resources and goals. That is what local rationality means in a nutshell: it makes sense for that person, at that moment, at that place.
Interestingly, the old lady may actually have done the artist and the owner of the piece a favour. The piece was rather unknown before she ‘reworked’ it. After it was in the news all over the world, its value went up significantly.
By the way, this was not the first time that people unknowingly ‘destroyed’ art by framing it in their own context and local rationality; the most famous example is probably Joseph Beuys’ Fettecke (‘grease corner’) being ‘cleaned up’. Maybe this should be a hint to some artists that they should make their art recognisable as such. Environment determines behaviour in a major way, after all.
p.s. The museum has now put up a sign telling visitors that the crossword is NOT one of their interactive pieces of art…
Also published on Linkedin.
A couple of weeks ago, the IOSH raised the alarm. “IOSH Says More Action Needed on Preventable Deaths” read the header; “The emphasis comes after an annual rise in work-related deaths in Britain”, the press release continued. Now that sounds serious, but it also triggers some questions.
Lies, damn lies and …
First question: “an annual rise in work-related deaths in Britain”?
Let’s look at what that means. Ah, well, the numbers went from 142 last year to 144 this year. In absolute numbers this is indeed slightly higher. But looking at trends (which we shouldn’t do from year to year, of course), this is not a rise; it’s a fairly stable level. And the slightly higher number is most likely explained by random fluctuation. As you may recall from a few weeks back, Norwegian road traffic fatalities went down and up again by 30, and I suppose that Norwegian road users are a smaller population than the working population of Britain!
Doing just a superficial check (Google: “fatalities Britain”), the first hit brought me to the most recent statistics on fatalities in the workplace in Great Britain (2016). You can download the report or view it online. What do we see? There has been a steady decline in fatalities from the mid-1990s on (with ups and downs, as expected - a pity that the HSE didn’t provide a rolling average), which has been levelling out over the past few years. So: a rise? No. Has something dramatic happened? No.
Of course every fatality is one too many. It’s a tragedy for the people involved, especially those left behind. But if you want to convey that message, then you should just say so and not wrap it in some nonsensical (non-existent) trend.
Another reason to be cautious about these ‘cry wolf’ press releases is that they may trigger simplistic interventionism, which may be entirely counterproductive.
Second question: What does the adjective 'preventable' add?
It is an expression of hindsight, for sure, and as such it leaves a bad taste. It says that ‘they’ (employers, employees, others?) should have known better. Some of these deaths could have been prevented, if only… And yes, some, maybe even many of them could have been prevented, but how does this conclusion help us? On the positive side, it tells us what we can do better next time. On the negative side, it’s an expression of blame that gives us an adversarial start. Not a good starting point for improvement, I’d say.
But that is in retrospect. It’s also possible to read the IOSH’s press release as forward-looking: “More action needed on preventable deaths”. But that of course raises another problem. Because what is a ‘preventable death’, or more generally, a ‘preventable accident’? How on earth would you know in advance?
Some may offer the ‘All Accidents Are Preventable’ slogan as an answer. I’m not sure if Shelley Frost, executive director of policy at IOSH, meant this when she stated that “All deaths are avoidable”, but still, don’t both statements make the term ‘preventable’ entirely redundant?
But are they really? All accidents are only preventable if we have full control, full foresight and unlimited resources. Since we have none of these (after all, we live in a messy, uncertain world and have to make do with limited knowledge, time and resources), not all accidents are preventable.
That doesn’t mean that we shouldn’t try to do our utmost, and so I applaud that IOSH is committed “to supporting professionals in building capability within organisations, enabling them to deliver an effective health and safety agenda for their workforce”. Which then (hopefully) will prevent many accidents, and much harm. But I still wonder about the clumsy, binary and unrealistic use of language. Proper use of language is extremely important for doing effective health and safety work, after all!
I understand the sentiment and I appreciate the engagement, because every fatality is a tragedy for those involved. I also understand that organisations like IOSH use each and every opportunity to reach the media in order to raise awareness and get attention for safety. Still… I find it rather unprofessional to seek sensation and beat a drum about a trend that isn’t there. Now I am only left with the feeling that the IOSH (an organisation that has to stand for quality in the profession!) apparently cannot tell the signal from the noise.
There are ways to address the issue without spinning the information this way. Why not frame the message in line with the facts? For example:
“We see no improvement…”
“There are still high levels…”
“Every fatality is one too many, and therefore…”
And please drop that ‘preventable’ nonsense. Talking from hindsight is not a good idea.
Also published on Linkedin.
Heroes are an important element of mythology. As a ‘Safety Mythologist’, I am more than just a bit interested in all kinds of elements of mythology. Let’s take a look at heroes in relation to safety. Not an exhaustive review, but just some musings and reflections.
Believe it or not, heroes are to some degree a problematic thing in Norway. This may come as a surprise, what with all the Viking sagas and their quest for fame through heroic deeds. Well, it seems they left that behind when they were done raiding most of the civilised (and part of the then not-yet civilised) world. Sometime in the course of the past 800 years, Viking heroism was replaced by something called janteloven, which goes against a focus on individual achievement and (most likely) against heroism. Yet you will surely find heroes in the workplace.
When you look at safety literature, or management literature in general, you may notice that there is also some promotion of heroes or champions. Examples are (top) managers who are to front and promote an implementation, or people who have delivered an exceptional performance (e.g. reported many incidents, came up with a great solution, or ‘saved the day’).
Recently, I read a book on behavioural change. This book encouraged managers to compliment the performance of a task where the employee is in a kind of ‘stretch’ situation: a task where he either physically or mentally acts at the boundaries of his capacities. Sounds like a typical trait of a hero.
I understand the message, and I even agree under certain conditions. If we want to develop, we often have to go beyond boundaries and stretch ourselves to achieve something. Role models can (and should) be used in a positive way, as an inspiration and an example to follow. There is, however, also an important pitfall hidden in here. The question is what these people are labelled heroes for. Are we talking about stretching in personal development, or is the hero fixing problems with regard to competing objectives? And, as always, there is the question of what side-effects there will be.
This reminded me of the following anecdote, which I was told by my friend and colleague Beate Karlsen. I’m sure that similar incidents have happened in other places, and you can add your own experiences.
Our story takes place at an organisation that manages infrastructure. It is responsible for both the development of new infrastructure and the maintenance of existing infrastructure. There were particular challenges with regard to certain critical technical competence that was necessary for both maintenance and construction projects. Only a few people had this competence.
In order to deal with these challenges and do maintenance and personnel planning in a good manner, the organisation has a Masterplan. This Masterplan looks at planned maintenance 24 months in advance. A more detailed plan is made with a 12-month perspective, and then there is a 4-month plan where things are detailed even further. Fourteen days in advance, the actual jobs are put into a timetable, and after that there is no major planning; the work orders are distributed. Everything that comes within that 14-day timeframe will be disruptive to the plan and lead to adjustments or cancellations.
Parallel to the day-to-day business of maintenance, there was a highly profiled and prestigious construction project. As with most major projects, it was managed on budget, time and progress. Funding was the least of its problems. Time and progress, on the other hand, were more problematic, because the opening date for the new infrastructure had been decided at a high political level, and celebrities and media had been booked for the official opening.
Despite the established planning regime described above, with a planning department working full-time on the Masterplan, the more detailed plans and the work orders, the project seemed to have ignored or not noticed this regime. The project was in the habit of requesting scarce resources really late, often on the same day they were needed.
Usually the technical department would agree to the request from the highly profiled project. The consequences were that it became harder and harder to make the shift schedules work, maintenance had to be postponed or rushed through, and people became tired and stretched thin. Obviously, one solution would have been to decline all last-minute requests from the project, but this would at the same time mean missing the income from well-paid project hours. Besides, there was a lot of perceived and real pressure to deliver, not least because the same top manager was responsible for both the project and maintenance.
Therefore the technical department would stretch itself to extremes to meet all demands. It would also try to solve the situation in a ‘soft’ way: by discussing and explaining the challenges to the project, taking up issues with regard to task risk assessments (which never had a proper level of quality when done at the last minute), the availability of assets, and so forth. This seemed to create some understanding on the project’s side, but no lasting improvement.
As a next step, the technical department tried to use incentives. It was communicated to the project that all requests for support had to be made within a reasonable timeframe; otherwise the cost of the support would be many times the ordinary price. But, as said, money was not a problem for the project, and they happily paid whatever was necessary to get the job done. The project would also show its gratitude for the help by rewarding the ‘heroes’ who fixed its problems, sending cake to the department to celebrate when a job was done.
Celebrations are great, but in this case they were yet another factor that over time contributed to a drift in an unwanted direction, with negative consequences. These included signs of burn-out among the employees with critical competence, several serious near-misses because of the very superficial risk assessments, and not getting all necessary maintenance done for safe production on the existing infrastructure.
There is a kind of happy end: the project was finished on time, and hoorays all around.
What Can We Learn From This?
Firstly, you should be very conscious about what heroes you want to cultivate. It may lead to problems in the long run, because it can send signals about the preferred behaviour (‘sexy’ projects above ‘boring’ maintenance, production above occupational health, fixing problems above structured and systematic work, etc.). In this case it more or less rewarded bad planning, with serious disruptions to other tasks, even endangering safety and health.
This case also contains some great lessons about how incentives and competing objectives work. Research has actually shown that paying a fine for ‘breaking a rule’ kind of legitimises the behaviour (e.g. a study on fining parents for picking up their children late at a day care centre in Israel), so simply making the hours more expensive is not a solution.
Concluding, I am all in favour of flexibility and a strong orientation towards solving problems in a practical way. I am also very much in favour of praising people who have delivered a great job. But watch out for side-effects. Celebrating heroes is a strong expression of the logic that lives in your system and organisation, and it is amazingly enduring. It creates expectations - maybe even obligations - for the future. And, ironically, sometimes when a hero fixes a problem, he actually contributes to prolonging it!
Also published on Linkedin.
Compliance is often the most basic place to start working on safety. Rules are often needed to set some basic standards of what society or the organisation sees as an acceptable standard. Rules are also needed to weed out the really bad organisations; by fining them, or even shutting down their business.
Lots of things to say about compliance, as you can see from this introduction (if you want more, it is discussed among others in Myth 41 of my book), but right here, right now I’d like to focus on just one thing: many people seem to live with a clear misunderstanding that compliance is the same as safety. It is not.
What few people seem to realise is that rules are almost always compromises between different agendas. Also keep in mind that no rule is applicable in each and every situation. Rules often deal with common situations, specifying one best way of dealing with them. The real world is much messier than that, alas, and will throw at you specific situations where the rule does not fit so well.
History has taught us that even if you comply with the rules, an accident may still occur. Sometimes the combination of several acceptable factors (especially if they border the threshold of acceptance) can cause an accident. Erik Hollnagel calls this functional resonance: even though all factors are within acceptable limits, their unfortunate combination can cause an accident.
The other day, my former colleague John Awater told me a fabulous way to illustrate the issue. The example below was inspired by this:
Say your system consists of four different elements that work together. Each of them is to uphold high standards: the norm says that they are to operate at a minimum of 75%. The system as a whole is clearly unsafe when it drops below 50%.
Say that the individual components are functioning really well, at 80%, 80%, 85% and 85%, so not even close to the threshold. The total effectiveness of the system, however, is no better than 0,8 * 0,8 * 0,85 * 0,85. That comes to about 0,46 - below our safety threshold of 0,5!
This is of course not the REAL way things work (if only because it will be hard to put simple numbers on something so complex) and just a gross approximation, but it is a nice little model to show you why compliance can fail.
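For those who like to tinker, the back-of-the-envelope model above can be sketched in a few lines of code. This is just my own illustration of the example's numbers (the component scores, the 75% norm and the 50% threshold come from the text; nothing here is real data):

```python
# Sketch of the "all parts compliant, yet system unsafe" illustration.
# All numbers are the example's illustrative figures, not real data.
from math import prod

components = [0.80, 0.80, 0.85, 0.85]  # individual effectiveness scores
COMPONENT_NORM = 0.75                  # each part must operate at >= 75%
SYSTEM_THRESHOLD = 0.50                # the system is clearly unsafe below 50%

# Every component comfortably meets its own norm...
all_compliant = all(c >= COMPONENT_NORM for c in components)

# ...but overall effectiveness is the product of the parts working together.
overall = prod(components)

print(all_compliant)               # True
print(round(overall, 4))           # 0.4624
print(overall < SYSTEM_THRESHOLD)  # True: compliant parts, unsafe whole
```

The point of the multiplication is that each part only delivers its share of what the previous parts passed on, so four individually acceptable scores combine into an unacceptable whole.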
It is also a good illustration of why reductionism is often not helpful. In general, it is not the separate parts that create safety, but the parts in interaction. So (back to last week’s post) why so much focus on behaviour?
Also published on Linkedin.