The May issue of NVVK Info brings my review of the book by Andrew Hopkins and Sarah Maslen on the role of incentive schemes in (process) safety.
Find the original (Dutch) piece here.
The English translation is found below:
Risky Rewards: How Company Bonuses Affect Safety, by Andrew Hopkins and Sarah Maslen
This book caught my interest while glancing through the latest Ashgate newsletter. Andrew Hopkins is a known name, of course. He is an emeritus professor at the Australian National University and the author of books like “Failure To Learn”. Like that book, this new one also takes inspiration from the Texas City disaster, and from Deepwater Horizon. After Texas City, Hopkins became interested in the role that incentive schemes play in the process. He was helped in his work by Sarah Maslen, a research fellow at the School of Sociology at the Australian National University.
This relatively compact book (ca. 160 pages) presents the results of their study into the effect of bonuses on safety, especially with regard to major accident hazards. Their aim is to answer three questions through case studies:
- Do incentive schemes work as intended? Are people primarily driven by financial incentives, or rather motivated by other rewards?
- Do financial rewards motivate the intended behaviour or do they have unintended consequences?
- Are the right indicators being used?
Interestingly, one conclusion is that - contrary to what some may assume - not all bonuses are evil or ineffective.
Mismatch and manipulation
The book starts with a general discussion of human motivation and incentive schemes. Modern criticism from writers like Daniel Pink is the point of departure. Pink argues that people are more motivated by intrinsic rewards (job satisfaction, autonomy) than by extrinsic ones (money). Still, companies rely mainly on financial rewards and incentives. As Pink puts it, there is “a mismatch between what science knows and what business does”. Financial rewards work well in some settings, but not in modern situations, e.g. where creativity is required.
The authors are less negative about bonus systems than Pink, however, because these systems serve other functions as well, such as meeting the need to be recognized and to receive feedback, which is provided through the performance evaluations that are part of the incentive schemes.
Bonuses can generate effects that make them counterproductive. One main risk is that goals expressed in numerical terms often lead to ‘gaming the system’: the numbers themselves are manipulated rather than the underlying problem they are meant to indicate being addressed. Consequences may include non-reporting, adjusting estimates of releases so that they fall into lower categories, rescheduling actions to avoid overdues, using light duty to reduce the number of LTIs, and sacrificing quality in favour of quantity. Bonus systems must therefore be carefully designed to guard against these counterproductive unintended consequences. Alas, this proves to be extremely difficult.
Long and short term
The main focus of the book is on bonuses in relation to catastrophic risk. One problem (and a good thing, of course) is that disasters are rare, which may lead people to prioritize short-term gain (a bonus) over long-term effects (major accidents). The third chapter mentions some possible ways to deal with this: 1) delay payment of the bonus for some years, 2) identify indicators of current safety management, 3) identify management actions to reduce catastrophic risk, and 4) reward initiatives to reduce catastrophic risk.
Chapters 4 and 5 describe how various long and short term bonus systems work (or rather don’t). It’s striking that discussions about bonuses pay rather little attention to the long term variety, probably because only a small group of people is eligible for this kind of bonus (typically CEOs). Given their potential size (we’re talking about significantly more money than short term bonuses), long term bonuses can be expected to have a major influence. This is especially so considering how these long term incentive systems are constructed: they look solely at financial results (shareholder value) and at the same time require that the company does better than the average of a chosen group of companies. This creates a major incentive to postpone large investments in safety that would affect the bottom line negatively. Granted, a disastrous accident will also affect financial results negatively (see what happened to BP after Texas City and DWH), but the chance of this is smaller than the guaranteed effect of extensive spending on safety, so the choice between the two appears to be clear cut.
Performance evaluation systems for short term bonuses usually have a collective and an individual part. Because individual persons cannot influence the collective part (for example the number of accidents or the financial result of the whole concern), this part cannot effectively influence behaviour. The authors therefore regard the value of the collective part rather as a means to create shared concerns (as opposed to having units compete against each other within an organisation, possibly leading to sub-optimisation through getting good results at the cost of others) and/or to share in the profit. Safety (and especially process safety) plays only a rather small role in these collective indicators. Which, in a way, is defensible: companies don’t exist in order to be safe; they exist to produce, provide services and make a profit.
Individual performance targets are in principle a good means to affect and motivate behaviour, especially because of the personal feedback that is part of the process. Alas, this positive effect is ruined at many companies by the way the bonus is assigned, namely through a normal distribution. The logical consequence is that the majority of employees will be rated as ‘average’ or ‘satisfactory’. Because people long for recognition and praise, this will feel to many like sugar-coated criticism, and it is demotivating rather than motivating.
There are clear developments towards better indicators that capture not only occupational safety but also process safety (a point of improvement identified after both Texas City and DWH). The book stresses on several occasions the importance of suitable indicators (like the number of ‘precursor events’ for specific major accident risks), but one problem is that many companies just copy uncritically from government guidance or from others in the sector, or use whatever information happens to be available. One consequence is that organisations may measure things that do not necessarily say anything about the actual risk the organisation is facing.
Throughout the book several incentive schemes are discussed in more or less detail, with their pros and cons. The most striking is the very innovative example at the end of chapter 5, where one company created a scheme that deals with ‘fatality risk’ and achieved improvement over a number of years in a systematic and evolving way. The tragic part is that the shareholders thought the scheme was too innovative and effectively sabotaged it by incentivizing the CEO in a traditional manner.
An interesting book that gives fine insight into the practice of both incentive schemes and process safety indicators. Both areas have much room for improvement, and I don’t doubt that other safety domains have even more to gain in these areas.