
Moral Heroism

People seem to need moral heroes. See also the J&J case. Why is that? And who gets to determine who qualifies as a moral hero? What criteria do we use? Is a moral hero someone who is willing to make sacrifices for something larger than themselves? Someone who does not necessarily stand to gain personally? Someone who sticks to their principles?

What view should we adopt? Claus Jensen wrote in his Challenger book that there is no point in engineers or managers trying to be moral heroes. Feldman, on the other hand, says: “Engineering societies need to require engineers to act in accordance with the prevent-harm ethic.”

And what about the bankers from the NRC article, or the J&J executives? Are they amoral (without any morals; see the NRC’s “leaves no room for moral”) or immoral (violating an accepted moral standard)?

Time to explore some moral views.

Moral Philosophy 101

Philosophy can roughly be divided into:

  1. Critical thinking, rhetoric
  2. Descriptive philosophy (existentialism, the nature of being, the nature of knowledge, etc.)
  3. Prescriptive philosophy (how should things be?), including moral philosophy.

Sidney took us through a crash course in Moral Philosophy by discussing the five main schools. As so often, there is no good or bad. It’s what you prefer, what you choose, what fits your profession:

  1. Virtue ethics
  2. Consequence ethics
  3. Duty ethics
  4. Contract ethics
  5. Golden Rule ethics

The problem with ALL five: What is Good?!

Virtue ethics

Following your moral compass. Virtue is a reliable habit, part of your identity, said to be independent of context. 

Aristotle: on each side of virtue is a vice…

Coward <---- too little --- Courage --- too much ----> Reckless

Scrooge <---- too little --- Generosity --- too much ----> Wasteful

Bragging, arrogant <---- too little --- Humility --- too much ----> Too dependent

Context does actually affect these virtues (tiredness, work, worry, …). And there are other problems: conflicting goals, local rationality (context matters! Sometimes a virtue works, sometimes it doesn’t), and the fact that virtues are underspecified.

Aristotle: as you grow wise, you gain the ability to know which virtue to apply. Phronesis: practical wisdom.

Consequence ethics

In its most extreme form this is called utilitarianism: the greatest good for the greatest number (Jeremy Bentham, John Stuart Mill). Just as virtue ethics doesn’t care about consequences, consequence ethics allows you to ignore virtues. Consequence ethics has major appeal to many managers (it makes a lot of sense in the J&J case: “sacrificing 5% with breasts for the good of others and the company must be acceptable”).

The ‘fatal’ weakness of utilitarianism is the question of what is good. Utilitarianism cannot deal with justice and rights. Minorities generally draw the shortest straw…

Duty ethics

This is the ethics of principles, very commonly found in, for example, the medical world (“doing the right thing”). Immanuel Kant is the ‘big name’: principles are universally applicable; they are not imposed from the outside but come from within.

There are differences between rules (you follow them, they are dependent on context, non-compliance can be punished) and principles (you apply them, they are context-independent, non-compliance is not punishable).

Duty ethics often clashes with consequence ethics; this is frequently illustrated by the differing views of practitioners and managers. Duty ethics is also often related to the fiduciary relationship (trust): someone puts their well-being (even their life) in your hands.

Contract ethics

Right is what the contract tells you to do. Right is what people have agreed among themselves. But how do you know whether the other party will keep their side of the contract? Thomas Hobbes (“Leviathan”) had the answer to this fear of a chaotic universe without rules: we need a sovereign and must alienate our rights to this sovereign (e.g. through laws), who then acts on our behalf. This is why we need governments, judges, etc.

None of this really works in companies: there is no division of power and there are no independent judges, so ‘just’ culture can quickly become a case of your contract telling you that you can be screwed over for screwing up!

Golden Rule ethics

Whatever you want people to do to you, do to others. Similar statements are found in most, if not all, religions and cultures. The problem is again: who decides what is good (for you and for others)? Golden Rule ethics can be inherently egoistic.

 

Why Do People Break Rules?

After this theoretical excursion it was on to the problem of ‘rule breaking’. Several reasons were mentioned: Because they can. Rules are sometimes silly. It gets the work done. To finish the design. To survive. Because there are conflicts between rules, or between rules and principles. Because you don’t know them. Because they are ambiguous. There are a number of theories that cover these reasons (and more):

Labelling theory:

Norwegian criminologist Nils Christie: “crime is NOT inherent in the action”. Stanley Cohen: “deviance is not inherent in the person or action, but a label applied by others”.

Control theory:

Rule breaking comes from rational choice and is often attributed to a lack of external control. Hence the Orwellian principle of the panopticon: you see everything. Modern variations include Remote Video Auditing, which has been implemented in many hospitals. This primarily communicates a lack of trust. It also tends to make the very behaviour it wants to see slide out of view (see Lerner & Tetlock, “Accounting for the effects of accountability”). People will get better at hiding stuff, or act subversively (“accidentally destroy evidence”). Remember also that every viewpoint reveals something, but hides even more. Another thing to consider: if you do this to change behaviour, you have already decided where the problem lies! The panopticon takes the viewpoint that people have no intrinsic motivation to follow rules.

Learning theory:

Fine-tuning: it gets the job done, for example the brushing over of foam strikes before the Columbia accident.

Subculture theory:

People who defy rules (“the rules are silly”) gain traction and status within their subculture. Often this gets stuff done. Formal investigations are blind to this, also because it disappears underground when poked.

Bad apple theory:

There are some bad apples (“idiots”, accident-prone people) and they break rules. Are bad apples locally rational? They do fulfill some function! Accident proneness is an old concept, but it lives on to this day.

Resilience theory:

Safety is a dynamic non-event. GSD: Gets Shit Done. Finish the design.

 

A case

Much of the afternoon was spent discussing a real, ongoing case which has to remain confidential. Themes discussed included “errors” and criminal prosecution, second victims, just culture, negligence and social support.

The Lucifer Effect

Day 2 was concluded by watching and discussing Phil Zimbardo’s 2008 TED Talk.

Are we talking about evil people or evil structures? In the examples that Zimbardo shows (e.g. Milgram, the Stanford prison study, the Iraq prison abuses) it was basically good people who were corrupted by the circumstances (seven steps; “all evil starts with 15 volts”). Not “bad apples”, but a “bad barrel”. The focus moves away from the individual level towards looking at the system (“bad barrel makers”).

The same situation can lead to heroism or to evil. To be a hero (Zimbardo mentions four different types) you have to learn to be deviant.

 

Back to Day 1, or proceed to Day 3.