Monday 20 August 2012

Tuesday 14 August 2012

Zero accidents = safe!

So... because nobody was hurt when this happened, this practice is perfectly safe.

Hmmm... something tells me that there's something fishy with that logic...

Wednesday 8 August 2012

How to write an Academic Article - Quick Guide

If you ever have to write an academic article, follow the guideline below and you're safe:

The following article describes the study into [insert theme] which is an important framework for [insert theory] which has been the [insert either: ‘leading paradigm in the field for the past years’, or ‘paradigm that has undergone a shift in latter years’]. This paper describes the study into [brief description here].
The Study
[Describe the methodology, research and results; use a lot of quotes and references; avoid original research as much as possible, but add some statistics and results. Use complicated words at will, and sentences shall be combined into long complicated structures with many adjectives and commas. One or two scatter graphs are cool.]
This study has given us greater understanding of [insert theory]. More research is recommended.

Make sure the title is impossibly long and about some ridiculous detail of your research. Like: "The Influence of the Love Life of the Madagascar Blowfly on Safety Consciousness in a Postmodern Society within a Brownian Motion Model"

Follow APA rules for layout to make it additionally unappealing (you wouldn't want anybody accidentally reading this; being published should be sufficient).

Thursday 2 August 2012

Some thoughts... 'The' Cause (?)

The people who know me know that I tend to reject the notion of ‘the’ cause. Let’s explore the concept a bit. I believe there are three main variations on ‘the’ cause.

1. A simple philosophy about causal relationships. Everything that happens has just one cause.
2. The elevation of one particular cause as the most important.
3. A very strict definition of the word cause.
Re 1: Monocausality

I do believe in the existence of simple accidents with a straightforward linear causal sequence (and have experienced some), not unlike the one below, where each effect is basically the cause of the next effect until we come to the final consequence. Working backwards from the final consequence we’ll have a continuous sequence of why-because relationships. E.g. I’m on my way out (context, not cause), change my mind and turn around abruptly, only to bump into the door that closes behind me. Some cases might be so simple that the causal chain is restricted to just one cause and one effect…

Often, however, the world is not quite that simple and causal paths develop more or less independently, only to join up at some point causing some outcome. Please see the example discussed elsewhere on this blog; I don’t see how that can be turned into one linear sequence without dismissing a number of essential factors.
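The difference between one linear sequence and independently developing paths that join up can be pictured as a small sketch. This is purely illustrative (the event names are my own shorthand for the door and lab examples, not anything from an actual investigation):

```python
# A hypothetical sketch: causal structures represented as directed graphs,
# mapping each effect to the list of causes behind it.

linear = {
    "bump into door": ["turn around abruptly"],
    "turn around abruptly": ["change of mind"],
}

joined = {
    "chemicals hit eye": ["mixes ingredients wrongly", "not wearing goggles"],
    "mixes ingredients wrongly": ["unclear procedure"],
}

def why_because(graph, outcome):
    """Walk backwards from the outcome, collecting every why-because link."""
    links, frontier = [], [outcome]
    while frontier:
        effect = frontier.pop()
        for cause in graph.get(effect, []):
            links.append((effect, cause))
            frontier.append(cause)
    return links

# The linear chain yields one unbroken sequence of links; in the joined
# graph two independent paths meet at the outcome, so no single linear
# sequence can cover all the essential factors.
assert len(why_because(linear, "bump into door")) == 2
assert len(why_because(joined, "chemicals hit eye")) == 3
```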

Re 2: More important causes

Are some causes more important than others? In some sense, yes. Especially if one defines preventive and corrective actions to remove them one may want to focus more on one cause than on others. But that’s then ‘more important’ in the sense of prioritizing resources and actions, not ‘more important’ in a causal sense.

In a causal sense it’s a bit more difficult… Leaving aside the discussion of whether an underlying (or root) cause is more important than a direct cause (we may get back to that issue another day), I find it hard to say that cause A is more important than cause B because it caused the incident more than the other cause did… If an investigation has established that both were necessary for the incident to happen, I cannot maintain that one should be ranked above the other.

Take the lab example: both not wearing goggles and mixing the ingredients wrongly are needed for chemical substances hitting the eye of the victim. How can one be more important than the other? Sure one might argue that the wrong mixing is the point of loss of control, or the first barrier breached. But still, both are needed for the defined incident.

For now I remain unconvinced that some causes are more important than others. But please supply viewpoints of your own.

Re 3: Strict definitions

Some define the word cause very strictly, for example by only allowing a deliberate act, or by discarding conditions as causes (see elsewhere). Or by limiting the meaning to the direct cause (which in effect boils down to option 1). The added value of this isn’t quite clear to me so far - apart from possible legal use… or as an alternative for what I often would call a direct cause (like in the lab example). But my opinion on the value may change as discussion progresses and knowledge grows…

Some thoughts... Swiss Cheese Model

By now it should be widely known and recognized within the safety community that the SCM does have some serious shortcomings or drawbacks, or at least that a number of misperceptions have led to wrong application. Reason himself is among those who acknowledge this and has, together with Hollnagel and others, written a paper on the subject. I’d advise anyone to read the 2006 Eurocontrol report “Revisiting the Swiss Cheese Model”, which is freely available on the net.

A book giving a rather deep and detailed critical discussion of the SCM has recently been published. I won't go to that level because it would require an extensive study of Reason's work and I don't have time for that. Personally I have never seen the SCM as an accident model in the same manner as e.g. the dominos, especially since I have never seen how the SCM can be used in an investigation in practice, other than as a frame of mind for checking which barriers may have failed. The SCM is to me at most something that “describes how an accident could happen”, as Reason has said. In the cases where I mention the model myself, I use it to explain (multiple) barriers and how their failure can lead to accidents.

The SCM also shows that a failure upstream (call it management) can be stopped underway (e.g. by competent and alert personnel), yet on ‘a bad day’ (another employee, stress, etc.) an accident may be the consequence. One huge drawback of the SCM is obviously that many versions exist. Some versions (e.g. the 1997 version from “Managing The Risks…”) are very flexible and don’t put the layers of barriers in any strict order, which would even allow an explanation of incidents starting without management failures. Other versions, however, do have various layers with designated ‘categories’ that, looking strictly at them, could be read as if the SCM says that all accidents are due to management failures.
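The ‘bad day’ mechanism can be sketched as a toy simulation. This is my own illustration of the holes-lining-up idea, not anything from Reason; the probabilities are invented for the example:

```python
import random

# A toy simulation: each barrier layer fails independently with some
# probability, and the hazard only results in an accident on a trajectory
# where ALL layers fail at once.

def accident_happens(failure_probs, rng):
    """One exposure: the hazard gets through only if every barrier fails."""
    return all(rng.random() < p for p in failure_probs)

def simulate(failure_probs, days=100_000, seed=42):
    """Count accidents over many exposures with independent barrier failures."""
    rng = random.Random(seed)
    return sum(accident_happens(failure_probs, rng) for _ in range(days))

# Three imperfect, independent barriers, each failing 10% of the time:
# the expected accident rate is 0.1 ** 3, i.e. roughly 0.1% of exposures,
# far lower than any single barrier's failure rate.
accidents = simulate([0.1, 0.1, 0.1])
```

On most days at least one barrier holds; the rare accident days are exactly those where all three random draws come up short at once.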

It has been suggested that the SCM is an updated version of Bird, something I don’t see at all; it must be an assumption or conclusion that is not explained at length, but is probably linked to the fact that both tend to look towards management factors as the root causes of accidents. Even so, I think that Bird’s sequence and the SCM are quite different at heart. Reason’s first two books reference Bird only once, and even then he refers not to Bird’s domino sequence but to Bird’s updated pyramid (see “Managing The Risks Of Organizational Accidents”, p. 224).

One important difference is that Bird shows us a sequence of causes leading up to an accident and its consequences/loss. The SCM pictures a series of barriers with possibilities for failure (note that the SCM also pictures ‘holes’ that are not relevant for the accident!) which, when several failures line up, can lead to an accident because all layers of protection have been breached. If anything, the SCM is about the spaces between Bird’s dominos. Another difference is that the domino sequence in a way pictures the mechanism by which upstream factors may cause the next domino to fall, and thus shows some causal relationship. The SCM does not show the mechanisms behind the causes (holes). Contrary to the Bird sequence viewed in a strict sense, the SCM does not have holes in one slice of cheese affecting holes in the other slices.


An understanding of barriers is important for discussing the SCM. In Norwegian railway safety legislation a barrier is defined as: “technical, operational, organisational or other planned and implemented measures that are intended to break an identified unwanted chain of events” (Sikkerhetsstyringsforskriften, 1-3). Other standards and legislation contain similar wording; e.g. ISO 17776:2000 (“Guidelines on tools and techniques for hazard identification and risk assessment”) defines a barrier as a “measure which reduces the probability of realizing a hazard’s potential for harm and which reduces its consequence” and explains that “barriers may be physical (materials, protective devices, shields, segregation, etc.) or non-physical (procedures, inspection, training, drills, etc.)”. When we speak of barriers (and also causes, by the way) in my company we think MTO: man, technical and organisation.

Probably everyone has experienced that barriers are (often) not perfect, and barriers such as the ones listed as examples by ISO 17776 can fail from one moment to the next. One can choose to follow a procedure, or one can decide to take a shortcut, making the rule-barrier useless. This mechanism doesn’t only apply to the ‘softer’ barriers; it also extends to technical barriers, which can be rendered useless on a whim, for example when we don’t wear seatbelts or safety goggles, or when a safety barrier is bypassed.

When observing a system we have to study it as a combination of man, machines, procedures and other elements. While it’s possible to see the parts as separate systems, with man as one system and the machine as another, and man not being part of the machine-system (although one can argue, for example, that the dead man’s switch in a train does unite the two), this view of separate systems is not very useful. A man working with a machine creates a new system that is built up from several sub-systems. This, in my view, gives a much clearer view of systems, and also of machines.

Having this in the back of our minds we should conclude that the SCM actually gives a pretty good mental model of how (or at least: that) barriers can fail - while recognizing its weaknesses at the same time, of course. I do strongly question the SCM’s use as an accident model.

By the way…

I never really understood why the model had to be based on something smelly and yucky like cheese in the first place. Let me propose something more tasteful:

Some thoughts... Causes (and more)...

Causes appear to be difficult things and causation appears to be a difficult subject. A brief check of the safety books on the shelf over my desk shows astonishing variation. Heinrich decides to focus on direct causes, Hendrick & Berner completely reject the notion in their 1985 book “Investigating Accidents With STEP”, and Hollnagel appears to reject root causes (see his great 2004 book “Barriers And Accident Prevention”). All explain things differently and have their reasons: Hendrick and Berner choose to equate cause with blame, Hollnagel does not so much reject root causes altogether as the notion of the root cause and its application to intractable systems (see pages 105 and 106 of his ETTO book), and for Heinrich I’d like to point to my earlier discussion of his work.

As if this isn’t enough, safety literature is littered with an incredible number of terms: direct causes, basic causes, underlying causes, root causes, proximate causes, latent conditions, unsafe acts, contributing factors, causal factors, and what not. While there seems to be some kind of general acceptance and understanding of these, one quite often comes across differing interpretations, and then flaming discussions add to the confusion.

Something that gets me rather confused is ‘proximate causes’. This may be partly due to my ignorance (and the fact that this term is not used in the Dutch language when you study safety or law). Heinrich defined these as an “unsafe personal act or unsafe mechanical hazard that results directly in an accident”. Heinrich’s definition is for me clearly a definition of ‘direct causes’. This is agreed upon by that most scientific of all sources, Wikipedia: “In philosophy a proximate cause is an event which is closest to, or immediately responsible for causing, some observed result”. Equating proximate and root causes (as I saw not too long ago) is a concept that I find difficult from a contemporary safety science point of view, and in my opinion not very helpful for safety work either.

Conditions and causes

Many safety professionals appear not to be aware of the difference between causes and conditions (you may call the latter context as well, if you want), which sometimes ends up in quite strange causation, all too often with context blended in. As an illustration: a few years ago I was one of the people responsible for cleaning up the ‘taxonomy’ of causes in our incident registration/management system. The status at that moment was one of unguarded organic growth over 15 years. Originally the off-the-shelf version had contained Bird’s sequence as the three main categories: Management Failures, Basic Causes (with sub-categories for Personal Factors and Job/Environmental Factors) and Direct Causes. Elements had been added to this structure over the years without any proper policy or philosophy. Often the new elements ended up in ‘wrong boxes’ (e.g. ‘planning’ as a direct cause) and an exceptional number of context items (e.g. ‘performing work in the rail tracks’) were mixed in. Sure, absence of these things would have ‘prevented’ an accident from happening, but it would also have prevented accomplishing the intended outcome. So there is a lesson here, since I do not consider the people who worked with this system in the 15 years before me complete idiots (something I’m not even qualified to diagnose anyway).

Among professionals there has been debate about whether and how an (unsafe) condition can be a cause. I’m in the camp of people who think some conditions can be causes, but not always and automatically. Hart and Honore posed that “causes are what made the difference, mere conditions on the other hand are just those [things] that are present alike both in the case where accidents occur and in the normal case where they do not; and it is this consideration that leads us to reject them as the cause of the accident, even though it is true that without them the accident would not have occurred…”. I find this a rather useful and clarifying definition. Let’s apply it to some examples:

If someone bumps into a lamp post out on the street I agree that the lamp post is a mere condition. Sure, had it not been there, no one would have bumped into it. But reasoning along the lines of “what would have prevented the accident from happening” and labeling those things causes is counterfactual reasoning, and a fallacy unless a proper causal relationship is established first. The lamp post is intended to be there, was built in accordance with relevant standards, stands there in the normal case of people not bumping into it, and is just minding its business of lamp-posting.

If a workshop burns down after a discarded match sets discarded scraps of paper on fire, I do not agree with the view that only the match was the cause of the fire and the scraps of paper a mere condition. This is based on the fact that I refuse to see scraps of paper thrown on the ground as the normal case. If dumping your trash on the ground is acceptable (regarding it as an unwelcome but normal standing condition), then discarding a match is the same (after all, people throw down matches and cigarette butts all the time, most of the time not causing fires). All the more since discarding a match on a concrete floor with no flammable material present would not have made the difference either. So in this example I see two things joining up as causes, namely the act of discarding the match and the condition-turned-cause of the scraps of paper lying there. Counting the last bit of the old-fashioned fire triangle (i.e. oxygen) as a cause, however, would be bollocks. Merely based on the ‘normal case’ argument… absence of oxygen in this situation would hardly qualify as normal, would it?
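The Hart and Honore test applied above can be written out as a tiny decision rule: a factor counts as a cause only if it was necessary for the outcome (‘but for’ it, no accident) and deviates from the normal case. The flags below encode my reading of the workshop fire example, nothing more:

```python
# A hypothetical sketch of the Hart and Honore distinction:
# necessary + abnormal -> cause; necessary + normal -> mere condition.

def classify(necessary, normal):
    """Apply the but-for test first, then the normal-case test."""
    if not necessary:
        return "irrelevant"
    return "condition" if normal else "cause"

workshop_fire = {
    # factor: (necessary for the fire?, present in the normal case?)
    "discarded match": (True, False),
    "scraps of paper on floor": (True, False),
    "oxygen in the air": (True, True),
}

verdicts = {factor: classify(*flags) for factor, flags in workshop_fire.items()}
# The match and the scraps come out as causes; oxygen as a mere condition.
```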

Cause-effect relationships have to be explained with logic and must be based on facts (not hunches or guesses, not even on experience!). Anything written down in an investigation report must be able to stand up in a court of law, though hopefully it never will have to. I find “beyond a reasonable doubt” a good criterion, and in cases where conclusions have to be based on assumptions/theories/hypotheses this must be clearly stated as such.

The example elsewhere on this blog hopefully illustrates the point about causes and conditions further. This example also shows another difficulty: it’s fully possible to write down acts as conditions and vice versa. The discarded newspaper isn’t quite as clear-cut as the example where the condition ‘not wearing safety goggles’ is essentially the same as the act ‘does not put on safety goggles’.

By the way, a legal background may partly influence what to call a cause and what not, because law can only handle people and will thus focus on human acts. If any conditions are identified, the question will usually be who caused them. In safety this appears to me a not very useful notion. In my view cause in law is not quite the same as cause in safety, and I find it not useful to define causes in a strictly theoretical and legally sound manner. This will not really help preventive actions, but is probably great for judges and lawyers.

So, while I have learned to appreciate some of Hart and Honore’s work as very useful and clarifying (they have been added to my personal list of things-to-read-when-I-grow-up), I will look for causes in a safety context. Especially since the goals are different: causes in safety must help to define actions, while causes in law serve other goals. In the newspaper example I can imagine that a judge rules the discarded match to be an act of greater carelessness than discarding a newspaper. And I would agree; for that reason I could imagine it being ‘more’ of a cause than the newspaper. Still, you need both, and both are acts out of the ordinary that together ‘make the difference’, so in safety terms it’s useful to regard both as causes.

There’s another reason why I’m not particularly happy with thinking about causation in the legal sense: laws and even entire legal systems are quite different from one country to the other (just compare the UK to Germany or France), which may, or will, have consequences for the legal interpretation of or legal requirements for the term ‘cause’. By contrast, the language of prevention in safety should be a common one, understood by all safety professionals regardless of their countries of origin.

Conclusions (sort of)

Three things so far…

Causes in law and causes in safety are not necessarily the same thing. Beware of mixing them up.

The distinction between causes and conditions/context is an area that many safety professionals have too little awareness of. I’m sure that more focus and greater clarity on this will strengthen incident analysis and recommendations for preventive action.

It would be a good thing if the safety world would start to agree on what we call a cause and what not, if necessary with some additional cause categories (direct, root, …). Or maybe we should stop using the word cause altogether and use another, more neutral term? ‘Contributing factor’ has been proposed, and Alan Quilley came up with “factors to be considered to prevent recurrence of the events that led to the unintended consequence”, but I believe FTBCTPROTETLTTUC is an impossible acronym to sell.

Feel free to comment and add more thoughts!

Some thoughts... Management Failure versus Free Will

An interesting point that I read some time ago is that management failure theory would conflict with the notion of free will: if management failures are the root causes of all accidents, then management failures, and not free will, are also the cause of human errors. I’m not sure this very strict reasoning is fully correct. Even if it sounds logical it doesn’t feel right to me. In my opinion at least Bird’s dominos and also Reason’s Swiss Cheese Model (proponents of multi-causality) do not exclude free will from kicking into the causal chain. The basic causes in Bird’s sequence differentiate between job factors and personal factors, the latter explicitly including things like motivation, which is at least one personal factor clearly related to free will. I’m rather sure that Reason has similar mechanisms, but I’m too lazy to check.

One might point out that still the leftmost ‘domino’ (management) causes the fall of the next (which includes the personal factors) and does not account for causes materializing halfway along the sequence. I sometimes get the impression that models are treated as if they were laws of nature that have to apply in each and every situation, in exactly the same way and the same order (a hair-raising example is the treatment by some people of Heinrich’s ratios, expecting to find the same everywhere). But that’s hardly realistic. Heck, that’s why they’re called models: a simplified representation of reality, and thus not describing each and every possibility.

I live in the belief that nobody gets up in the morning and goes to work with the intention to screw up massively and create an accident. There are others much more qualified than me writing about human error (including violations) and its causes (and they have luckily done so), but roughly I’d say that two important reasons/causes for human error or violations are: 1) an overly optimistic perception of one’s control over the situation (remember that about 90% of all drivers believe they are better than the average driver) and 2) especially, conflicting objectives. These are ‘causes behind the cause’ for deliberate acts, and for safety work it is essential to identify them in order to determine preventive actions that keep future errors and accidents from happening.

For a legal case it may be sufficient to stop once it has been established as a cause that someone willingly chose not to follow a safety rule. Acting on that single act (e.g. by punishing the violator, or explaining the rule to him once more) is often very ineffective from a safety point of view. The decision not to follow a safety rule may be a deliberate act of free will, but there may have been incentives and other mechanisms behind this decision. In the case of conflicting objectives (e.g. the company claims ‘Safety First’, but rewards people who cut corners in order to maximize profits) it’s more effective to address those causes behind the causes.

When studying safety many of us have learned not just to focus on an error by an employee, stop the investigation there and blame the person. Instead we’re taught to look further than the person at fault. But the opposite holds as well: defaulting to management failures as the causes of accidents is not the way things should be done. That’s a kind of jumping to conclusions without any basis in reality or facts, just as bad as Heinrich’s decision to stop at the direct cause and focus on that alone. A comment I heard recently was along the lines of “we’re not satisfied unless we have found at least one management cause”. I’m convinced that this is said with the very best of intentions for the improvement of safety, but even this well-meaning framing of your mind is going to bias the result in a way that should not be acceptable. Remember what Hollnagel said: WYLFIWYF (What You Look For Is What You Find)! I believe he said it in relation to ‘human error’, but it applies to anything. Some causes are simply not management induced, and that’s that.

Some people go a step further and reject the existence of management failures entirely, among other reasons because these are human failures. Agreed, in the end management/organisational failures are decisions made by people and thus human failures, but they are of another kind. I find it helpful to distinguish between (direct) causes on a more personal level (call it the sharp end/operational, if you want) and (underlying) causes on the organisational or management level that are further upstream. Additionally, sometimes it’s hard or impossible to determine what the ‘deliberate act’ or more or less active failure in management was.

Take for example the accident that happened at Sjursøya on 23 March 2010, something that has taken up a considerable amount of my working hours since. The full report of the Norwegian Accident Board (a fairly good one, even though I do not support all of the recommendations) is online; check it out for details (available in English). Short version: operational error(s) directly caused a runaway set of goods wagons which went unstoppably downhill. There is a 100 metre difference in height between the point of origin and the end point, 8 km further down in the harbour of Oslo. There the wagons, speeding at about 130 km/h, crashed into a building and killed three people, injuring several others. The outcome could have been even worse, by the way, since the wagons might have hit wagons with jet fuel had the accident happened at another point in time.

What lies behind the operational error (I’ll do the simplified version) is that over the last two decades the use of the goods terminal had slowly changed without anyone noticing, and many baby steps ended up in the terminal being used differently than intended. Working procedures and local risk analyses had not followed the same development, and thus safety barriers had unknowingly been eroded in such a way that a rather simple error could lead to such a drama.

I tend to be very critical towards labeling deficiencies in risk analysis as causes; often this is a sign of lazy or unrealistic safety people, since you will almost always find something related to the accident that is not in the risk analysis (a complete risk analysis being a fiction, and not very helpful either). But here it certainly was the case, and together with other management factors this created over many years (impossible to pin down to persons, acts or dates) the situation as it was on the date of the accident, without anybody really being aware that the situation was sliding towards the breaking point.

While blaming the operational personnel who had ‘violated’ long-forgotten rules would have been a possibility (and an unnecessary one, because they blamed themselves enough as it is), it was decided to look at the underlying factors. I don’t want to discuss the legal part, but the DA also chose not to take legal action against the operational personnel. The companies involved, however, were fined, which my company paid without any appeal (unlike the other company involved). Negative side effect: several million Norwegian kroner down the drain that could not be used for prevention and improvement.

Comments and discussion appreciated!

Some thoughts... Organisational Accidents

In “Managing The Risks Of Organizational Accidents”, Reason describes organisational accidents as having “multiple causes involving many people operating at different levels of the business”. In individual accidents, “a specific person or group is often both the agent and the victim of the accident”. Reason adds that the distinction between the two is hard: an individual accident may turn out to be an organisational accident. As mentioned above, the book sees the entire term as a myth because the ‘default search for management failures’ would turn any individual accident into an organisational accident.

I don’t know if Reason actually intended his description to be a proper definition. One may wonder how useful it is anyhow, because one could separate the two only in retrospect, after an investigation. Strictly defined or not, to my understanding the distinction was more that between relatively straightforward cases and more complex cases with sometimes less linear and clear causal chains. The only real value of the term is thus that it makes us aware of this possible complexity.

Comments? Opinions?

Wednesday 6 June 2012

Output - outcome - steering

A rough sketch of the difference between output and outcome. I have to work some more on this, so comments are more than welcome!

Saturday 26 May 2012

Level Crossing Safety Awareness Day

A sneak preview of the level crossing safety campaign we are about to launch in the first full week of June...

Thursday 3 May 2012

Tuesday 7 February 2012

Philip K. Howard - The Death Of Common Sense (a summary)

ISBN 978-0-8129-8274-9 (Random House paperback with added afterword)
A couple of years ago I was studying Law, and while I hadn’t even finished my Bachelor (which I never did because of moving abroad, but that aside) I had already decided upon the theme of my paper, which went in a more philosophical direction: Do Laws And Regulations Actually Improve Safety? I started collecting material and literature, and although it was first published in 1994, this book had so far escaped my attention. Much of the material discussed here, however, would have been of major relevance, so let’s catch up with the 2004 version that adds a new afterword to the book.
The book itself (written in an easily accessible way, and quite entertaining too) is divided into four parts, the new afterword and an after-afterword with notes by the author on reform activities. Each of the first three parts discusses more or less one main problem of the legal system (untamed growth of regulations, lack of responsibility/accountability, and the phenomenon of ‘rights’), while the fourth part tries to suggest some solutions.
Part 1
The first part is called “The Death of Common Sense” and deals with the enormous growth of rules and laws and their level of detail over the past 40 years. These regulations “often seem to miss the mark or prove counterproductive” (p.7) and the government that issues them has reasons that “often seem remote from human beings who must live with the consequences” (p.9). Examples are notoriously drawn from OSHA and environmental care legislation.
Page 8 has the idea of a ‘regulatory budget’, i.e. “no law could be passed without a budget detailing its cost to society”. This idea isn’t really fleshed out in the book, but the lack of realization of what a regulation costs is something that returns in the other parts as well (most of all part 3).
Should rules be as detailed as they are? Herbert Kaufman stated that “Only precise, specific guidelines can assure a common treatment of like cases” (p.9), reflecting the thoughts of the 18th-century Rationalists who wanted a self-executing body of law without the involvement of human judgement. “Through detailed rules, regulation would be made certain” (p.10). The result, according to Howard, is that the “regulatory system has become an instruction manual. It tells us and bureaucrats exactly what to do and how to do it. Detailed rule after detailed rule addresses every eventuality, or at least every situation lawmakers and bureaucrats can think of” (p.10) and in the end “We seem to have achieved the worst of both worlds: a system of regulation that goes too far while it also does too little” (p.11). The explanation for this: “the absence of one indispensable ingredient of any successful human endeavor: use of judgment”, which doesn’t work because “Human activity can’t be regulated without judgment by humans” (p.11) and “Human activity cannot be so neatly categorized. The more precise the rule, the less sensible law seems to be” (p.15).
The detailed rules also seem not to be used for what they were originally intended. Take the Occupational Safety & Health Act from 1970, which should maximize the safety of workers; instead OSHA inspectors are out to find violations of the thousands of rules and are generally “unwilling even to discuss whether a violation has anything to do with safety”. “OSHA inspectors, in the words of everyone who has to deal with them, are ‘just traffic cops’ looking for rule violations” (p.15).
Law and lawmakers cannot anticipate every future contingency, and even if they could language would prove to be a barrier. As Hart said: “there is a limit, inherent to the nature of language to the guidance that words can provide” (p.16). Additionally, “context is vital” (p.17) and will be of utmost importance in deciding if an action is acceptable/safe/…, but instead “lawyers instead focus on the legal language as if it were the oracle, and refuse to act without its clear permission” (p.19). Aristotle said it even better: “it is impossible that all things should be precisely set down in writing: for enactments must be universal, but actions are concerned with particulars” (as quoted on p.51).
Inspired by observations by Vaclav Havel (“…in a communist socialist society people were not allowed to act without explicit authorization. In a free society, by contrast, the presumption is the opposite: we are free to do what we want unless it is prohibited” - p.20), Howard goes as far as to state that “modern regulatory law resembles central planning” (p.21) because “rigidity of legal dictates precludes the exercise of judgement at the time and place of the activity. Government … is blinded by its own predetermined rules, entranced by the rationalists’ promise that all can be set out before we get there” (p.21). The result: “the words of law are like millions of trip wires, preventing us from doing the sensible thing” (p.21).
A further complicating factor is that lawmakers and bureaucrats often do not look back and instead just stack new laws and rules upon already existing ones - sometimes or often contradictory (p.26) and further adding to the phenomenon that words can impose rigidity as well as offer clarity (p.22). A truly hilarious example of rigid application of detailed rules is found on pages 36 and 37 where the case of ‘toxic bricks’ is presented.
There is for instance the impossible task of knowing all the details: “Making law detailed, the theory goes, permits it to act as a clear guide. People will know exactly what is required. But modern law is unknowable. It is too detailed.” (p.29) This striving for detail and certainty “has destroyed, not enhanced, law’s ability to act as a guide” (p.30) and: “What good, we might ask ourselves, is a legal system that cannot be known?” (p.31).
Howard notes another problem: “When laws cannot be complied with, individual officials, who supposedly have no discretion, have complete power” (p.32) and thus: “If noncompliance can be found under any law, what protection do we think all this legal detail is providing?” (p.33).
The philosophy behind detailed rules (see the Kaufman quote above) was that they guaranteed fairness and equal treatment before the law. “Fairness, however, is a far more subtle concept than making all the words on the page apply to everyone. Uniformity in law is not uniformity in effect” because “Uniform application of a detailed rule… will almost always favour one group over another” (p.34). All the more so because “loopholes only exist because of precise rules” (p.43), and thus those who do not want to comply will find a way anyhow.
The result of all this is, not surprisingly, that people are losing respect for the law. Or even worse: “By emphasizing violations rather than problems, regulation creates bitterness and adversariness. Everything must be put on the record. Business will not share information. A culture of resistance sets in”. (p.48)
Friedrich Hayek posed the right question that should be asked before starting to write detailed rules: “How can anything good happen, if individuals cannot think and do for themselves? Rules preclude initiative. Regimentation precludes evolution. Letting accidents happen, mistakes being made, results in new ideas. Trial and error is the key to all progress.” (p.50)
Part 2
Titled “The Buck Never Stops” this part deals with the lack of responsibility and accountability. Nothing gets done, or only at a slow pace. “Reasons flow out why nothing can happen, but getting anyone to say yes is like scaling a mountain” (p.58) accompanied by “the hassle factor - the growing levels of paperwork” (p.95). One reason for this is that “how things are done has become far more important than what is done” because process “once existed to help humans make responsible decisions. Process now has become an end in itself” (p.60) and “Without a clear goal in sight, process just spins on forever”. (p.91)
This has resulted in no one taking responsibility and rather hiding behind forms and meetings: “We have deluded ourselves into thinking that the right decisions will be ensured if we build enough procedural protection. We have accomplished exactly the opposite: Decisions, if they happen at all, happen by default. Public decisions are not responsible because no one takes responsibility”. (p.61) This is a major difference to real life where procedures are “a management device, always subject to change or exception where a procedure gets in the way of a sensible result. For us, the result is paramount”. (p.62) “Rationalism rears its head again. Process is a kind of rule. It tells us not what to do, but how to do it” (p.75) which makes initiative “unlawful” (p.64) and leadership becomes “heresy” (p.73).
The original idea was simple: a set of general rules for people responsible for the execution of a civil service (p.77), however the need for checks and balances and fair, public debate eventually resulted in a situation where “courts always find a way to overturn any decision they didn’t like” (p.81). That’s how lawyers work: “Under our adversary system the role of counsel is not to make sure the truth is ascertained but to advance his client’s cause by any ethical means… Causing delay and sowing confusion not only are his right but may be his duty”. (p.86) And so we now have “circled back to the world where people argue, not about right and wrong, but about whether something was done the right way”. (p.84) There is a growing recognition that this is not the way things should be: “Confrontation and the adversary process do not create an atmosphere conducive to the careful weighing of scientific and technical knowledge, and distort the state of scientific and technical agreement and disagreement”. (p.87) While: “science, like everything, involves judgment. Mistakes will always be made. Endless scrutiny does not necessarily make for a better judgment; the loss of perspective may well make for a worse judgment”. (p.88) Howard ventures that “court decisions in fact tend to be sensible, probably because only a couple of people are looking at the issue and can see the big picture”. (p.88) And just as we saw before that context is vital, decisions depend upon point of view (p.89).
The idea to introduce process was of course to prevent corruption and manipulation, but “too much control can have the same effect as too little” (p.98). And there’s this: “No society I can find has ever thought that the most effective way is to treat every citizen as a crook and impose an elaborate, mind-numbing, and demanding ritual on every motion”. (p.99) As Woodrow Wilson said: “Suspicion itself is never healthful, either in the private or in the public mind”. (p.99) Plato went even further: “good men do not need laws to tell them to act responsibly, while bad people will always find a way around law”. (p.99) Fellow Greek Polybius agreed: there can be “no rational administration of government… when good men are held in the same esteem as bad ones”. (p.108)
So the effect is in the end not the desired one because “taking advantage comes naturally when government can’t easily do anything about it”. (p.100) “We have our foot heavy on the accelerator, seeking government’s help in areas like guarding the environment. Simultaneously, we have stomped hard on the brake, refusing to allow any action except after nearly endless layers of procedure”. (p.106) Manipulation is especially easy in minor cases where no one dares make the effort to fight it, for example firing an ineffective public employee (an outrageous example is found around p.103). “The irony cannot be allowed to pass: Process was intended to make sure everything was done responsibly. It has instead become a device for manipulation, even extortion”. (p.105)
What we need is to focus again: “Which is more important: the process or the result?”. (p.107) And “what if this involves a value judgment? Isn’t this the only kind of judgment that makes sense? Democracy is not intended to purge our values, but to reflect them”. (p.108) “Responsibility is what matters. Process is only one of many tools to get there”. (p.111)
Part 3
The third part, “A Nation Of Enemies” discusses the phenomenon of ‘rights’, its growth over the past decades and the results - basically derailing or at least alienating an entire nation. The problem is sketched on p.116: “When you have a right to something, it doesn’t matter what anyone thinks or whether you are, in fact, reasonable”. One reason for this is that the content of the word ‘rights’ has changed: “Rights were synonymous with freedom, protection against being ordered around by governments or others” (p.118) while nowadays rights don’t so much protect but rather provide, which by the nature of the rights leaves “no room for balance, or looking at it from everybody’s point of view”, instead “Rights give open-ended power to one group, and it comes out of everybody else’s hide”. (p.119) Howard illustrates this with many scary examples from among others rights for the disabled and how this has gotten out of hand: “discrimination has become an obsession”. (p.136)
In contrast to this we find JFK’s famous quote: “Ask not what your country can do for you, but what you can do for your country” (p.120) Effectively the rights have become another element in paralyzing the state. The situation should be as follows: “Making trade-offs… is much of what government does: …whether allocating use of public property, creating new programs, or granting subsidies, benefits one group more than another, and usually at the expense of everyone else. Most people expect their elected and appointed leaders to balance the pros and cons and make decisions in the public interest”. (p.117/118) Democracy is also about compromise (p.156). But this is not the case - use/abuse of rights prevents government from making reasonable judgments in the public interest. (p.126) The result: “Like printing money, handing out rights to special interest groups for thirty years has diminished not only the civil rights movement, but the values on which it was founded. Rights intended to bring an excluded group into society, have become the means of getting ahead of society”. (p.135)
Because fear reigns “only a fool says what he really believes. It is too easy to be misunderstood or to have your words taken out of context”. (p.137) The effect for business and social life is disastrous: “The absence of spontaneous give-and-take stifles the dream of mutual understanding, just as it diminishes enjoyment. Nor is it good for anyone’s success. How can good ideas spring out, how can any aspect of business run effectively, if people are afraid to talk?” (p.139) Or: “When it reaches the point where sensitivity stifles communication, it has gone too far”. (p.145)
Howard concludes that “Handing out rights does not resolve conflict. It aggravates it” (p.152) because “The fight for rights becomes obsessive, like a religious conviction”. (p.153) But: “You can’t please everybody… So why is it appropriate to handle these issues as ‘right’? Rights are a kind of wealth and, like other forms of wealth, attract hangers-on”. (p.154) “We have accepted the pretense that government services should be treated as constitutional right. They are not; they are only benefits provided by a democracy”. (p.167) But today the situation creates envy and enemies: “The injury is deeper than fights over money. Citizens feel an almost involuntary resentment when they see that other citizens, directly or indirectly, are ordering them around”. (p.170) Howard says that rights were meant to be a shield but have become a sword instead. (p.170)
Part 4
The final part is titled “Releasing Ourselves” and works towards a solution. In its simplest form it’s found on p.176: “When the rule book got tossed, all that was left was responsibility”.
The core of the problem: “How law works, not what it aims to do, is what is driving us crazy. Freedom depends at least as much on deciding how to do things as on deciding what to do” - “rigid rules shut out our point of view” so we “feel powerless because we are not given a choice”. “Modern law tells us our duty is only to comply, not to accomplish. Understanding of the situation has been replaced by legal absolutism”. (p.177) “By exiling human judgment in the last few decades, modern law changed its role from useful tool to brainless tyrant”, while we should know that “when humans are not allowed to make sense of the situation, almost nothing works properly”. (p.178) Even more, Howard states that “judgment is to law as water is to crops” and Hart observed that any sensible system of law must offer its citizens “a fresh choice between open alternatives”. (p.179) So, we should move away from details and towards principles because “Rules dictate results, come what may… principles do not work that way; they incline a decision one way, though not conclusively, and permit a judgment that fits the situation. Principles allow us to think”. (p.179)
The situation now is sketched again on page 182: “The effort to achieve social quiescence through clear rules, while plausible enough as a theory, has in fact infected the nation with a preoccupation with using law as a means to win: If the law is clear, we can fit ourselves into its words, and then - voilà - we get exactly what we want”. But… “human nature turns out to be more complicated than the idea that people will get along if only the rules are clear enough”. Which makes sense, give-and-take is “the interaction that weaves the fabric of every strong community and healthy relationship”. (p.183)
And by the way, do we really need all those rules? “The majority of people will do right if they’re given goals and left to get the job done” (p.184) This may not always work out as thought since “making judgments is hard, and sometimes failures are inevitable” (p.186) But, “trial and error, not a static monument, is what makes democracy thrive. Democracy was intended as a dynamic system, ever adjusting balance in a diverse society. The constant back and forth was not thought to be dangerous but protective; it makes vested power insecure and keeps society away from protracted disequilibrium”. (p.187)
“One basic change in approach will get us going: We should stop looking to law to provide the final answer”. “Modern law has not protected us from stupidity and caprice, but has made stupidity and caprice dominant features of our society”. (p.189) “Hard rules make sense only when protocol - as with… speed limits - is more important than getting something done”. (p.190) “Let judgment and personal conviction be important again. There is nothing unusual or frightening about it. Relying on ourselves is not, after all, a new ideology. It’s just common sense”. (p.191)
The afterword ("A New Operating System Based On Individual Responsibility") was written some 12 years after the original book and offers reflections along with further recommendations, building upon the fourth part of the book. First it’s argued that responsibility must be restored - getting away from a situation where “avoiding legal risk becomes our goal” (p.195) and toward one where people are “personally responsible”, because “to induce prudent behavior, everyone needs to be at risk” (p.198).
Howard then argues to replace four fallacies with new principles:
1) The notion that law is permanent should be replaced - old law must be adjusted to meet current challenges. (p.199) One problem lies in a lot of old or obsolete law that creates problems today. “A healthy democracy requires fresh choices, constantly balancing social goals and making tradeoffs among competing public goods”. (p.201) “Government programs must constantly be reevaluated” and “every program should automatically expire” after some years. So-called sunset laws could be the tool to accomplish this. (p.202) And other action enforcing mechanisms should be applied as well, for example “if legislators refuse to make needed decisions, the responsibility defaults to another branch”. (p.203)
2) Secondly, law should not be detailed, but rather simplified into basic goals and principles. (p.204) Howard tells of a meeting he had with a large delegation from OSHA after having criticized the agency. Asked for ideas for improvement, he suggested that OSHA should radically simplify its regulations and redirect its energy toward actual safety, not compliance with rules: “Simplifying the regulations into a set of general principles, I argued, would liberate everyone to rely on common sense - factory foremen as well as inspectors. Mindless bureaucratic compliance would be replaced by a focus on actual safety”. “No this would never work they concluded: ‘People need to know exactly what to do’. So I asked a final question: ‘How many of you have ever read all these rules?’ No hand went up. ‘So how do you expect a factory foreman to read and understand all the rules? It’s beyond the capacity of real people to deal with this level of detail’.” (p.206)
Howard realizes that “simplifying of law will not be done with the willing cooperation of bureaucrats who’ve spent their lives micromanaging society. They will only cooperate if they can be fired when they don’t”. (p.207) So a third principle is:
3) Public employees must be accountable. (p.207) As Howard says: “Restoring accountability has many virtues. It is the precondition to simplification. There is no need to tell officials how to do their jobs if they can be fired when they don’t work out”. (p.209)
4) Finally, lawsuits have to be bounded by reasonable social norms. (p.209) “The culture of self-protection has chilled professional interaction, causing countless tragic errors as doctors and nurses keep their qualms to themselves rather than taking legal responsibility by speaking up. Any legal system that works this badly must be built upon some serious misconceptions about the rule of law. And so it is with the out-of-control American system of litigation”. (p.211) Fear reigns: “Americans no longer feel free in daily dealings, and go through the day looking over their shoulders instead of where they need to go”. “The first requirement of a sound body of law is that it should correspond with the actual feelings and demands of the community”. (p.212)
Concluding, Howard proposes ‘America 4.0’: “Modern society is organized around the fear of bad choices… Instead of holding people accountable when something goes wrong, we demand law to guarantee something like that will never happen again” (p.213). “We are so focused on avoiding bad values that we have lost the freedom to assert good values” (p.214). But according to Thomas Edison, “nothing that’s any good works by itself… You got to make the damn thing work”. (p.215)
I agree with the author’s end note on p.220 that the message of this book “remains as valid today as when it was first published. We cannot create a hands-free system of governance. Law is supposed to be a framework, not an automated program for public choices… Only people, not rules, can make good things happen. That’s as true in government and law as in every other life activity”.
And this also applies to other countries!

zaterdag 28 januari 2012

Just Culture - Sidney Dekker (A Summary)

A chapter by chapter summary of:

Just Culture - Sidney Dekker
2007, Ashgate Publishing, ISBN 978-0-7546-7267-8

Summary mainly constructed from direct quotes…

Comments BUCA (my comments in the text are in italics and marked with my initials):
1.     The author doesn’t define the term ’just culture’ at the beginning of the book. This either means that he takes it in the sense in which James Reason described it in his book on organizational accidents and “hopes” the reader is familiar with this, or the author wants the reader to discover how he understands ‘just culture’ by reading the book…
2.     The author takes the phenomenon of criminalizing error (especially in the Anglo/American law system) as the starting point for his discussion. But with only a little thinking one can see how the conclusions and discussions are relevant for other viewpoints as well, like European law systems and/or company internal processes (compliance focus and scapegoating versus learning and improving).

Preface and prologue
Dekker poses some of the themes he will discuss later in the book:
·          There is a trend towards criminalization of human error.
·          There are no objective lines between ‘error’ and ‘crime’. These are constructed by people and are drawn differently every time. What matters is not so much where the line gets drawn but who draws it.
·          Hindsight is an important factor in determining culpability.
·          Multiple (overlapping) interpretations of an act are possible and often necessary to capture its complexity.
·          Some interpretations (notably criminalization), however, have significant negative consequences for safety that overshadow any possible positive effects.
·          When we see errors as crimes, accountability is backward looking and means blaming and punishing. If one rather sees an error as an organizational, operational, technical, educational or political issue, accountability becomes forward looking and can be used for improvement.

1 Why bother with a just culture?
Consequences of admitting/reporting a mistake can be bad. Some professionals have a “code of silence” (omertà), and this is neither uncommon nor limited to a “few bad apples”. The problems are often structural trust and relationship issues between parties. Trust is critical: hard to build, but easy to break. And there is little willingness to share information if there is a fear of being nailed for it.

Responding to failure is an ethical question. Does ‘just’ have to do with legal criteria, or is ‘just’ something that takes different perspectives, interests, duties and alternative consequences into account?

Once the legal system gets involved there is little chance for a “just” outcome or improvement of safety. Rather than investing in safety, people or organizations invest in defensive posturing so they are better protected against prosecutorial attention. Rather than increasing the flow of safety-related information, legal action has a way of cutting off that flow. Safety reporting often gets a harsh blow when things go to court.

Case treatment in court tends to gloss over the most difficult professional judgments where only hindsight can tell right from wrong.

Judicial proceedings can rudely interfere with an organization’s priorities and policies. They can redirect resources into projects or protective measures that have little to do with the organization’s original mandate. Instead of improvements in the primary process of the organization it can lead to “improvements” of all the stuff swirling around the primary process like bureaucracy, involvement of the legal department, book-keeping and micromanagement. These things make work for the people on the sharp end often more difficult, lower in quality, more cumbersome and often less safe.

Accountability is a trust issue and fundamental to human relationships. Being able to and having to explain why one did what he did is a basis for a decent, open, functioning society. Just culture is about balancing safety (learning and improvement) and accountability. Just culture wants people to bring information on what must be improved to groups or people who can do something about it, and to spend effort and resources on improvements with a safety dividend rather than deflecting resources into legal protection and limiting liability. This is forward looking accountability. Accountability must not only acknowledge the mistake and the harm resulting from it, but also lay out responsibilities and opportunities for making changes so that the probability of the same mistake/harm in the future goes down.

Research shows that not having a just culture is bad for morale, commitment to the organization, job satisfaction and willingness to do that little extra outside one’s role. A just culture is necessary if you want to monitor the safety of an operation, want to have an idea about the capability of the people or organization, and want to effectively meet the problems that are coming your way. A just culture enables people to concentrate on doing a quality job and making better decisions rather than limiting (personal) liability and making defensive decisions. It promotes long term investments in safety over short-term measures to limit legal or media exposure.

Wanting everything in the open, but not tolerating everything.

2: Between Culpable and Blameless
Events can have different interpretations: is a mistake just a mistake or a culpable act? Often there is no objective answer, only how you make up your mind about it.

Companies often impose difficult choices on their employees. On one side “never break rules, safety first” on the other “don’t cost us time or money, meet your operational targets, don’t find reasons why you can’t”.

A single account cannot do justice to the complexity of events. A just culture:
·          accepts nobody’s account as ‘true’ or ‘right’,
·          is not about absolutes but about compromises,
·          pays attention to the “view from below”,
·          is not about achieving power goals,
·          says that disclosure matters and protection of those who do just as much,
·          needs proportionality and decency.

3: The importance, risk and protection of reporting
The point of reporting is to contribute to organizational learning in order to prevent recurrence by systematic changes that aim to redress some of the basic circumstances in which work went awry.

What to report is a matter of judgment; often only outcome leads us to see an event as safety relevant. Experience and “blunting” (an event becoming “normal”) affects reporting. Ethical obligation: if in doubt then report.

In a just culture people will not be blamed for their mistakes if they honestly report them. The organization can benefit much more by learning from the mistakes than from blaming the people who made them. Many people fail to report not because they are dishonest, but because they fear the consequences or have no faith that anything meaningful will be done with what they tell. One threat is that information falls into the wrong hands (e.g. a prosecutor, or the media - notably for government related agencies due to freedom of information legislation!). For this reason some countries provide certain protection to safety data and exempt this information from use in court.

Getting people to report is difficult and keeping the reporting up is equally challenging. It is about maximizing accessibility (low threshold, easy system, …) and minimizing anxiety (employees who report feeling safe). It is building trust, involvement, participation and empowerment. Let the reporter be a part in the process of shaping improvement and give feedback. It may help to have a relatively independent (safety) staff to report to instead of line reporting (which does have advantages regarding putting the problem where it should be treated and having the learning close to the process).

4: The importance, risk and protection of disclosure
Reporting is giving information to supervisors, regulators and other agencies. Main focus: learning and improvement.
Disclosure is about giving information to customers, clients, patients and families. Main focus: ethical obligation, trust, professionalism.
These two can often clash, as can various kinds of reporting (internal/external). If information leaves the company often there is the danger that the information will be used against the reporting employees (blaming, law suits, criminal prosecution).

Often people think that not providing an account of what happened (disclosure) means that one has something to hide, and often suspicion rises that a mistake was not an “honest” one. This again is a breach of trust that may lead to involvement of the legal apparatus. Truth will then be the first to suffer as the parties take defensive positions.

Many professions have a “hidden curriculum” where professionals learn the rhetoric to make a mistake into something that is no longer a mistake, making up stories to explain or even a code of silence.

Honesty should fulfill the goals of learning from a mistake to improve safety and achieving justice in the aftermath of an event. But “wringing honesty” out of people in vulnerable positions is neither just nor safe. This kind of pressure will not bring out the story that serves the dual goal of improvement and justice.

5: Are all mistakes equal?
Dekker discusses two kinds of ways to look at errors: technical and normative. The difference is made by people, the way they look at it, talk about it and respond to it.
Technical errors are errors in roles. The professional performs his task diligently but his present skills fall short of what the task requires. People can be very forgiving (even of serious lapses in technique) when they see these as a natural by-product of learning-by-doing. In complex and dynamic work where resource limitations and uncertainty reign, failure is going to be a lasting statistical reality. But technical errors should decrease (in frequency and seriousness) as experience goes up. A technical error is seen as an opportunity for learning; its benefit outweighs the disadvantages. Denial by the professional, however, may lead people around him to see the error as a normative one.
Normative errors say something about the professional himself relative to the profession. A normative error sees a professional not filling his role diligently. Also: if the professional is not honest in his account of what happened, his mistake will be seen as a normative error.
Worse, however, is that people are more likely to see a mistake as more culpable when the outcome of a mistake is really bad. Hindsight plays a really big role in how a mistake is handled!

Note by BUCA: Dekker discusses normative errors rather briefly (compared to technical ones). I’m surprised that he doesn’t address the “compliance” issue under the moniker of normative errors. A look at errors from a compliance-oriented point of view is also rather accusatory and little focused on improvement!

6: Hindsight and determining culpability
We assume that if an outcome is good, then the process leading up to it must have been good too - that people did a good job. The inverse is true too: we often conclude that people may not have done a good job when the outcome is bad.

If we know that an outcome is really bad then this influences how we see the behaviour leading up to it. We will be more likely to look for mistakes, or even negligence. We will be less inclined to see the behaviour as forgivable. The worse the outcome the more likely we are to see mistakes.

The same actions and assessments that represent a conscientious discharge of professional responsibility can, with knowledge of outcome, become seen as a culpable, normative mistake. After the fact there are always opportunities to remind professionals what they could have done better. Hindsight means that we:
·          oversimplify causality because we can start from the outcome and reason backwards to presumed or possible causes,
·          overestimate the likelihood of the outcome because we already have the outcome in our hands,
·          overrate the role of rule/procedure violations. There is always a gap between written guidance and actual practice (which almost never leads to trouble), but that gap takes on causal significance once we have a bad outcome to look at and reason back from,
·          misjudge the prominence or relevance of data presented to the people at the time,
·          match outcome with the actions that went before it. If the outcome was bad, then the actions must have been bad too (missed opportunities, bad assessments, wrong decisions, etc).

Lord Hidden, Clapham Junction accident: “There is almost no human action or decision that cannot be made to look flawed in the misleading light of hindsight. It is essential that the critic should keep himself constantly aware of that fact”.

Rasmussen: if we find ourselves asking “how could they have been so negligent, so reckless, so irresponsible?” then this is not because the people were behaving so bizarrely. It is because we have chosen the wrong frame of reference for understanding their behaviour. If we really want to know whether people anticipated risks correctly, we have to see the world through their eyes, without knowledge of outcome, without knowing exactly which piece of data would turn out to be critical afterward.

7: You have nothing to fear if you’ve done nothing wrong
A no-blame culture is neither feasible nor desirable. Most people desire some level of accountability when a mishap occurs. All proposals for a just culture emphasize the establishment of, and consensus around, some kind of line between legitimate and illegitimate behaviour. People then expect that cases of gross negligence jump out by themselves, but such judgments are neither objective nor unarguable. All research on hindsight bias shows that it turns out to be very difficult for us not to take an outcome into account.

The legitimacy (or culpability) of an act is not inherent in the act. It merely depends on where we draw the line. What we see as a crime and how much retribution we believe it deserves is hardly a function of the behaviour. It is a function of our interpretation of that behaviour.

People in all kinds of operational worlds knowingly violate safety operating procedures all the time. Even procedures that can be shown to have been available, workable and correct (the question of course being: says who?). Following all applicable procedures means not getting the job done in most cases. Hindsight is great for laying out exactly which procedures were relevant (and available and workable and correct) for a particular task, even if the person doing the task would be the last in the world to think so.

Psychological research shows that the criminal culpability of an act is likely to be constructed as a function of 3 things:
·          was the act freely chosen?
·          did the actor know what was going to happen?
·          the actor’s causal control (his or her unique impact on the outcome).
Here, factors that establish personal control intensify blame attributions, whereas constraints on personal control potentially mitigate blame.

Almost any act can be constructed into willful disregard or negligence, if only that construction comes in the right rhetoric, from the legitimated authority. Drawing the line does not solve this problem. It has to be considered (very carefully!) who gets to draw this line, and structural arrangements have to be made about this. The question is whether the people who get this task are indeed able to take an objective, unarguably neutral view from which they can separate right from wrong.

Just culture thus should not give people the illusion that it is simply about drawing a line. Instead it should give people clarity about who draws the line and what rules, values, traditions, language and legitimacy this person uses.

8: Without prosecutors there would be no crime
We do not normally ask professionals themselves whether they believe that their behaviour “crossed the line”. Yet they were there; perhaps they know more about their own intentions than we can ever hope to gather. But…:
·          we suspect that they are too biased
·          we reckon that they may try to put themselves in a more positive light
·          we see their account as one-sided, distorted, skewed, partial - as a skirting of accountability rather than embracing it.

No view is ever neutral or objective. No view can be taken from nowhere. All views somehow have values and interests and stakes wrapped into them. Even a court’s rendering of an act is not a clear view from an objective stance - it is the negotiated outcome of a social process. So a court may claim that it has an objective view of a professional’s actions; from the perspective of the professional (and his colleagues) that account is often incomplete, unfair, biased, partial.

To get to the “truth” you need multiple stories. Settling for only one version does an injustice to the complexity of an adverse event. A just culture always takes multiple stories into account because:
·          telling it from one angle necessarily excludes aspects from other angles
·          no single account can ever claim that it, and it alone, depicts the world as it is,
·          if you want to explore opportunities for safety improvements you want to discover the full complexity and many stories are needed for this.

9: Are judicial proceedings bad for safety?
Paradoxically when the legal system gets involved, things get neither more just nor safer.

As long as there is fear that information provided in good faith can end up being used by a legal system, practitioners are not likely to engage in open reporting. A catch-22 for professionals: either report facts and risk being prosecuted for them, or not report facts and risk being prosecuted for not reporting them (if they do end up coming out along a different route).

Practitioners in many industries the world over are anxious about inappropriate involvement of judicial authorities in safety investigations that, according to them, have nothing to do with unlawful acts, misbehaviour, gross negligence or violations. Many organisations and also regulators are concerned that their safety efforts, such as encouraging incident reporting, are undermined. Normal structural processes of organizational learning are thus eviscerated, frustrated by the mere possibility of judicial proceedings against individual people. Judicial involvement (or the threat of it) can create a climate of fear and silence. In such a climate it can be difficult - if not impossible - to get access to information that may be critical to finding out what went wrong and what to do to prevent recurrence.

There is no evidence that the original purposes of a judicial system (such as prevention, retribution, or rehabilitation - not to mention getting a “true” account of what happened or actually serving “justice”) are furthered by criminalizing human error:
·          The idea that the charged or convicted practitioner will serve as an example to scare others into behaving more prudently is probably misguided: instead, practitioners will become more careful not to disclose what they have done.
·          The rehabilitative purpose of justice is not applicable, because there is little to rehabilitate in a practitioner who was basically just doing his job.
·          Moreover: correctional systems are not equipped to deal with rehabilitation of the kind of professional behaviour for which such people were convicted.
·          The legal system often excludes the notion of an accident or human error, simply because there are typically no such legal concepts.
Not only is criminalization of human error by justice systems a questionable use of tax money (which could be spent in better ways to improve safety) - it can actually end up hurting the interests of the society that the justice system is supposed to serve. Instead: if you want people in a system to account for their mistakes in ways that help the system learn and improve, then charging and convicting a practitioner is unlikely to achieve that.

Practitioners like nurses and pilots endanger the lives of other people every day as part of their ordinary job. How something in those activities slides from normal to culpable is then a hugely difficult assessment, for which a judicial system often lacks the data and expertise. Many factors, all necessary and only jointly sufficient, are needed to push a basically safe system over the edge into breakdown. Single acts by single “culprits” are neither necessary nor sufficient. If you are held accountable by somebody who really does not understand the first thing about what it means to be a professional in a particular setting, then you will likely see their calls for accountability as unfair, coarse and uninformed (and thus: unjust).

Summing up - judicial proceedings after an incident:
·          make people stop reporting incidents,
·          create a climate of fear,
·          often interfere with regulatory work,
·          stigmatize an incident as something shameful,
·          create stress and isolation that make practitioners perform less well in their jobs,
·          impede (safety) investigatory access to information.

More or less the same as for criminal legal actions also applies to civil legal actions.

10: Stakeholders in the legal pursuit of justice
In this chapter Dekker discusses the various stakeholders:
·          victims,
·          suspect/defendant,
·          prosecutor,
·          defense lawyer,
·          safety investigators,
·          lawmakers,
·          the employing organisation.

Practitioners on trial have reason to be defensive, adversarial and ultimately limited in their disclosure (“everything you say can and will…”).

Language in investigation reports should be oriented towards explaining why it made sense for people to do what they did, rather than judging them for what they allegedly did wrong before a bad outcome. An investigation board should not do the job of a prosecutor.

In countries with a Napoleonic law tradition a prosecutor has a “truth-finding role”. But combining a prosecutorial and (neutral) investigative role in this way can be difficult: a magistrate or prosecutor may be inclined to highlight certain facts over others.

In general it can be said that establishing facts is hard. The border between facts and interpretations/values often blurs. What a fact means in the world from which it came can easily get lost. Expert witnesses are a solution to this problem, but prosecutors and lawyers often ask questions that lie outside their actual expertise.

Dekker argues there is a difference between judges and scientists in how they derive judgment from facts. Scientists are required to leave a detailed trace that shows how their facts produced or supported particular conclusions. Dekker argues that scientific conclusions thus cannot be taken on faith (implying that decisions of a judge can - I tend to agree in part, but want to state that both scientific and judicial conclusions can contain a great deal of interpretation and thus “faith” - BUCA).

For employing organizations: The importance of programs for crisis intervention/peer support/stress management to help professionals with the aftermath of an incident cannot be overestimated.

Most professionals do not come to work to commit crimes (and this is a major difference from common criminal acts, which are nearly always intentional! - BUCA). Their actions make sense given their pressures and goals at the time. Professionals come to work to do a job, to do a good job. They do not have a motive to kill or cause damage. On the contrary: professionals’ work in the domains that this book talks about focuses on the creation of care, of quality, of safety.

11: Three questions for a just culture
Many organizations kind of settle on pragmatic solutions that allow them some balance in the wake of a difficult incident. These solutions boil down to 3 questions:
1. Who in the organization or society gets to draw the line between acceptable and unacceptable behaviour?
2. What and where should the role of domain expertise be in judging whether behaviour is acceptable or not?
3. How protected are safety data against judicial interference?

re 1 - The more clear and agreed the arrangements a society, industry, profession or organization has made about who gets to draw the line, the more predictable the managerial or judicial consequences of an occurrence are likely to be. That is, practitioners will suffer less anxiety and uncertainty about what may happen in the wake of an occurrence, as arrangements have been agreed on and are in place.

re 2 - The greater the role of domain expertise in drawing the line, the less likely practitioners and organizations are to be exposed to unfair and inappropriate judicial proceedings. Domain experts can more easily form an understanding of the situation as it looked to the person at the time, as they probably know such situations from their own experience:
·          It is easier for domain experts to understand where somebody’s attention was directed. Even though the outcome of a sequence of events will reveal (in hindsight!) what data was really important, domain experts can make better judgments about the perhaps messy or noisy context of which these (now critical) data were part and understand why it was reasonable for the person in question to be focusing on other tasks and attention demands at the time.
·          It is easier for domain experts to understand the various goals that the person in question was pursuing at the time and if the priorities in case of goal conflicts can have been reasonable.
·          It is easier for domain experts to assess whether any unwritten rules or norms may have played a role in people’s behaviour. Without conforming to these tacit rules and norms, people often could not even get their work done. The reason, of course, is that written guidance and procedures are always incomplete as a model for practice in context. Practitioners need to bridge the gap between the written rule and the actual work-in-practice, which often involves a number of expert judgments. Outsiders often have no idea these norms exist, and would perhaps not understand their importance or relevance for getting the work done.
That said, domain experts may have other biases that work against their ability to fairly judge the quality of another expert’s performance like psychological defense (“if I admit that my colleague made a mistake my position is more vulnerable too”).

re 3 - The better safety data is protected from judicial interference, the more likely it is that practitioners will feel free to report.

Dekker then goes on to discuss various solutions with an “increasing level of just culture”. Key elements in these: trust, existing cultures, and the legal foundation for the protection of safety data.

12: Not individuals or systems, but individuals in systems
The old view sees human error as a cause of incidents. To do something about incidents, we then need to do something about the particular human involved. The new, or systems, view sees human error as a symptom, not a cause. Human error is an effect of trouble deeper inside the system. Pellegrino says that looking at systems is not enough. We should improve systems to the best of our ability. But safety-critical work is ultimately channeled through relationships between human beings, or through the direct contact of some people with the risky technology. At this sharp end there is almost always a discretionary space into which no system improvement can completely reach (and which thus can only be filled by an individual, technology-operating human). Rather than individuals versus systems, we should begin to understand the relationships and roles of individuals in systems. Systems cannot substitute for the responsibility borne by individuals within the space of ambiguity, uncertainty and moral choices. But systems can:
·          Be very clear where the discretionary space begins and ends.
·          Decide how to motivate people to carry out their responsibilities conscientiously inside that discretionary space. Will the source be fear or empowerment? In the case of the former: remember that neither civil litigation nor criminal prosecution works as a deterrent against human error.

Rather than making people afraid, systems should make people participants in change and improvement. There is evidence that empowering people to affect their work conditions, to involve them in the outlines and content of that discretionary space, most actively promotes their willingness to shoulder their responsibilities inside of it. Holding people accountable and blaming people are two quite different things. Blaming people may in fact make them less accountable: they will tell fewer accounts.

Blame-free is not accountability-free. But we should create accountability not by blaming people, but by getting people actively involved in the creation of a better system to work in. Accountability should lay out the opportunities (and responsibilities!) for making changes so that the probability of harm happening again goes down. Getting rid of a few people who made mistakes (or had responsibility for them) may not be seen as an adequate response. Nor is it necessarily the most fruitful way for an organization to incorporate lessons about failure into what it knows about itself, into how it should deal with such vulnerabilities in the future.

13: A staggered approach to building your just culture
Dekker suggests a staggered approach which allows you to match your organisation’s ambitions to the profession’s possibilities and constraints, the culture of your country and its legal traditions and imperatives.
Step 1: Start at home in your own organization. Don’t count on anybody else to do it for you! Make sure people know their rights and duties. See an incident as an opportunity to focus attention and learn collectively; do not see it as a failure or crisis. Start building a just culture from the beginning, during basic education and training/introduction: make people aware of the importance of reporting. Implement debriefing and incident/stress management programs.
Step 2: Decide who draws the line in your organization, and how to integrate practitioner peer expertise into the decisions that handle the aftermath. Empowering and involving the practitioner is the best way towards improvement.
Step 3: Protect your organisation’s data from undue outside probing.
Step 4: Decide who draws the line in your country. It is important to integrate domain expertise into the national authority that will draw the line, since having a non-domain expert do this is fraught with risks and difficulties.

Unjust responses to failure are often a result of a bad relationship rather than bad performance. Restoring that relationship, or at least managing it wisely, is often the most important ingredient of a successful response. One way forward is simply to talk together. Building good relations can be seen as a major step toward a just culture.

If professionals consider one thing “unjust” it is often this: split-second operational decisions that get evaluated, turned over, examined, picked apart and analyzed for months - by people who were not there when the decision was taken, and whose daily work does not even involve such decisions.

Often a single individual is made to carry the moral and explanatory load of a system failure - charges against that individual serve as a protection of “larger interests”.