Last weekend I decided to take all of this to a new level. I've started a new website, and eventually company of my own: Mind The Risk.
As a consequence I won't be posting on the HEACH blog any more; future postings will be found on Mind The Risk. Additionally, most of the material from this blog will be moved there in the near future (all reviews and summaries of professional literature are already there).
Find the new site here. And thanks for visiting us!
HEACH
For an explanation of what HEACH actually is, please visit www.heach.nl
Thursday 12 June 2014
Tuesday 13 May 2014
Indexed
Was just reminded of what once was one of my favourite blogs to follow and started following it again. Great observations here. Take for example this: http://thisisindexed.com/2014/05/back-back-beasts/
Wednesday 7 May 2014
Trapping Safety Into Rules
In the picture below you'll find a scan of the review I wrote of this book for the magazine of the Dutch Society of Safety Science, NVVK Info. This was published there in the May 2014 issue.
The English translation goes roughly like this:
Trapping Safety Into Rules, edited by Corinne Bieder and Mathilde Bourrier (ISBN 978-1-4094-5226-3)
My current employer has a very fine internal library that contains a decent selection of books on safety. The fine ladies in the library also alert professionals to new releases that aren't in the collection yet, but may be wanted in future. The title of this book immediately caught my attention when it was mentioned in a newsletter, and that very day an order for a personal copy went out.
On 278 pages the book presents a collection of 15 articles, plus an introduction and epilogue by the editors. The contributions are based upon experiences in various sectors: (mainly Norwegian) offshore, air traffic control, railways and patient safety, and there are also some more theoretically oriented contributions. The fact that the book is a collection of articles slightly affects the coherence and flow (even though I'm pleasantly surprised by how frequently the contributors refer to each other's contributions), and in some cases the readability suffers as well. Some authors seem to have such a hard time ditching the academic approach to writing that they forget that books should be easy to read. A listing of references that fills an entire paragraph is just nonsense and should have been caught and removed by the editors.
The book's title might suggest a critical stance with regard to (safety) rules, but as the subtitle 'How desirable or avoidable is proceduralization?' indicates, the book highlights both sides and discusses both the pros and cons of safety rules in four parts with the following headers:
1: Where do we stand on the bureaucratic path towards safety?
2: Contrasting approaches to safety rules
3: Practical attempts to reach beyond proceduralization: The magic tools illusion
4: Standing back to move forward
The subjects discussed in the book are too varied to cover in a full overview here, so let me pick out some highlights. In part 1 we find, among others, an interesting discussion of 'No rule, no use'. This is an attitude that almost any safety professional will have met at some time (or even regularly). A safety measure may be wise to implement, but nevertheless the question arises: "Where does it say that we have to do this?". Ergo, quite often a safety measure is not implemented without a rule saying so.
In the second part, especially Kenneth Pettersen's discussion of abductive thinking appealed to me. He discusses this from a background in airplane maintenance. Mechanics often manage to do their jobs and deal with unexpected situations precisely because they deviate from rules. The standard rules often aren't helpful for solving unique problems, so one has to deviate. This is one of the most interesting contributions in the collection and more or less mandatory reading.
Part 3 is the most voluminous part of the book. It discusses a number of attempts and experiences with possible alternatives to traditional rules and regulations. The first two contributions are from medical care and deal with the use of checklists in operating rooms - what I'm missing here is the nuance that a (good) checklist is in fact little more than a very smartly written procedure - and with crew resource management in hospitals. After that we get two not particularly exciting articles about safety management (systems) and their influence on procedures in air traffic and railways.
Much more exciting are no fewer than three articles about regulations and safety culture with a background in the Norwegian offshore sector, where elements of safety culture have been implemented in laws and regulations over the last decades. This phenomenon is regarded with a critical eye, and among other things the authors discuss how far it is possible to steer and regulate safety culture. Chapter 13 is especially interesting, with its argumentation why regulators should leave safety culture alone and rather stick to rules.
The final part contains only two articles, of which the first deals with the paradoxical role that procedures apparently have in safety management: on the one hand they are a means to transfer knowledge and prescribe a safe way of doing a job, but on the other hand they may also cause rigidity and 'auto pilot behaviour'. The final article by Todd La Porte has some interesting views on the differences between administrative and operational safety rules. Of special interest is the anecdote about the aborted exercise on an aircraft carrier and the praise that the "culprit" (the sailor who had made a mistake, but reported it immediately upon discovery) received from the highest ranking officer.
In the epilogue both editors stress once more that rules are affected by the way they are created (preferably bottom-up, although even this is no guarantee of a good rule) and that, before creating a new rule, one must ask oneself what the added value of that rule will be. One of the lessons to be learned from this book.
Published by Ashgate (www.ashgate.com)
Thursday 24 April 2014
All Accidents Are Preventable?
Really?
I think for the statement "all accidents are preventable" to become REMOTELY true (ish) we will need to add some words. For example:
All accidents are preventable... with the benefit of hindsight.
or
All accidents are preventable... in theory.
or
All accidents are preventable... given unlimited knowledge, resources, perfect prediction (and quite some luck).
All of which, regrettably, makes it a rather useless statement in my everyday job. And besides, do we really want to prevent absolutely everything? Really??
Tuesday 1 April 2014
Gerd Gigerenzer - Gut Feelings - Short Cuts To Better Decision Making
This book was on my "things to do" list for quite a while (triggered by Gladwell's "Blink" and Gigerenzer's "Risk" book) and went substantially up towards the top of the list after "Antifragile" and Taleb's promotion of heuristics. It's very compact, almost 230 pages of easy to read (and often even slightly humorous) text.
The first part of the book is titled "Unconscious Intelligence". We think of intelligence as deliberate, conscious activity guided by the laws of logic. Much of our mental activity, however, is unconscious and driven by gut feelings and intuitions without involvement of formal logic. Our mind often relies on the unconscious and uses rules of thumb in order to adapt and economize. One reason it needs to do so is that there is only so much information we can digest at one time.
One advantage is that these simple rules are less prone to estimation and calculation error and are intuitively transparent, as opposed to complex models. So an intuitive shortcut, or heuristic, often gets us where we want with a smaller chance of big errors, and with less effort. Gigerenzer therefore says that it's not a question of if but of when we can trust our intuitions.
One example of this is that an experienced chess player will usually generate the best solution first, and he will not do better with more time to reflect and reconsider - rather the contrary. Inexperienced players, on the other hand, will most of the time benefit from long deliberation. So, stop thinking when you are skilled. Thinking too much about processes we master (expertly) will usually slow down or disrupt performance, as everyone who has tried to think about going down the stairs can confirm. These things run better outside our conscious awareness, so more is not always better.
Heuristics try to react to the most important information, ignore the rest and lead to fast action. Heuristics are a result of evolved capacities of our brain: simple rules are developed over time and, thanks to practice, we are able to perform an action really quickly and well - effective and efficient. The selection of the applicable rules is unconscious. Gigerenzer thus views our mind as an adaptive toolbox with rules of thumb that can be transferred culturally or genetically, and also developed by ourselves or adapted from existing rules.
One function of intuition is also to help us master one of our main challenges: we don't have all the information, so we have to go beyond the information that is given to us. Our brain often "sees" more than our eyes by "inventing" things in addition to what in fact is seen, like depth (a 'third dimension') in a (2D) drawing.
Whether a gut feeling will have the wanted/correct outcome depends upon the context it's used in. So a heuristic is neither good nor bad; this depends upon the environmental structures. Selection of rules of thumb can be triggered by the environment ('automatic rules') or selected after a quick evaluation, conscious or (often) not. If the first rule chosen from the latter category of 'flexible rules' doesn't work, another is selected. Gut feelings may seem simplistic, but their underlying intelligence lies in selecting the right rule of thumb for the right situation (depending on circumstances, environment, etc.).
It's received wisdom that complex problems demand complex solutions. In fact, however, in unpredictable situations the opposite is true. As things are, our world has limited predictability. Keeping that in mind, one may consider that we should spend fewer resources and less money on making complex predictions (and on consultants who make them for us). In hindsight a lot of information may help to explain things, and it's easy to fit information to past events. When trying to predict the future, however, much of the information one gets is not helpful, and thus it's important to ignore the information that is not of value. The art of intuition is to ignore everything except the best clue, the one that has a good chance of hitting that useful information. Psychological research suggests that people often (but not always!) base intuitive judgments on one single good reason (do check pages 150 and 151!).
The final chapter of part one discusses intuition and logic. It starts with the famous 'Linda problem'. Criticizing Kahneman et al., Gigerenzer argues that calling the intuitive solution of most people a fallacy is not correct, because gut feelings (or humans in general) are not governed by the rules of mathematical logic. He says that the human brain has to operate in an uncertain world, not in the artificial certainty of a logical system. Our brain has to make sense of the information given and go beyond it. It zooms in on certain parts that seem particularly relevant within the context, and it views words (like 'probable') in their common, conversational meaning rather than in a formal academic/logical sense. Gigerenzer sees heuristics as a way to success rather than as a cause of error.
I find that there are arguments for both 'sides'. Humans are bad with probabilities and numbers (something which Gigerenzer addresses in his other book, by the way), but humans don't necessarily operate by the rules of formal logic either - as anyone who has seen humans in action can confirm. An interesting thing to think about and keep in the back of your mind.
Gigerenzer concludes that logical arguments may conflict with intuition, but that intuition is often the better guide in the real world. Nevertheless, many psychologists treat formal logic as the basis of cognition, and many economists use logic as the basis for 'rational' action. This isn't how the world works, however: logic is blind to content and culture, and it ignores environmental structures and evolved capacities. Gigerenzer closes by stating that good intuition must go beyond the information given and therefore beyond logic.
Part two is called "Gut feelings in action" and discusses a couple of real-life examples of intuitions. It starts with the functions of recognition (a very strong memory function of our brain) and recall (not so strong, especially with age). Recognition helps us to separate the old from the new. The recognition heuristic can help us make intuitive judgments: when someone who knows nothing about football is asked which team will win, he will probably pick the best-known one, and more often than not be right. This means that in some cases ignorance can even be beneficial, because more knowledge may mean that one cannot rely on this heuristic anymore. One remarkable fact is that during tests, people who relied on the recognition heuristic made snap decisions which appeared to impress people with greater knowledge who needed time to reflect.
The recognition heuristic is an example of a flexible rule which is chosen by our unconscious intelligence after an evaluation process. It can be overridden consciously in several ways. Another example of 'one reason' decisions is the way that political preferences are commonly ranged on a left-right scale, even if the subject at hand is totally unrelated to the left-right opposition.
We don't always rely on just one reason; often we make intuitive judgments based on a sequence of cues, but again only one cue determines the final decision - so-called sequential decision making. We go through a series of cues (most important first, second next, etc.) and evaluate options. As long as options 'score equally' we continue evaluating, but at the first cue where one option is best we stop. We don't evaluate all pros and cons to find the optimal solution; rather, we choose the 'first best'. Sequential decision making based on the first good reason is very efficient, transparent and often more robust and accurate than complex models.
One tool for sequential decision making is a 'fast and frugal tree', which through a couple of yes/no questions leads to a quick decision, rather than working through a huge, complex, complete decision tree. Fast and frugal trees have three building blocks: 1) a search rule that looks up factors in order of importance, 2) a stop rule that stops looking further if a factor allows so, and 3) a decision rule that classifies the object. Medical services, among others, use these 'developed intuitions' for making diagnoses. These simple, transparent, empirically informed rules help make better decisions.
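The three building blocks translate almost directly into code. Below is a minimal sketch in Python of how such a tree could work; the cue names, their order and the resulting decisions are purely illustrative assumptions of mine and are not taken from the book.

# Fast-and-frugal tree sketch: cues are checked in a fixed order of
# importance (search rule); the first cue whose answer permits an exit
# ends the search (stop rule) and classifies the case (decision rule).
# Cue names and decisions below are made up for illustration only.

CUES = [
    # (cue, answer that triggers an exit, decision on that exit)
    ("alarming_symptom", True, "refer to specialist"),
    ("fit_for_home_care", True, "send home"),
    ("risk_factor_present", True, "refer to specialist"),
]
DEFAULT = "send home"   # used if no cue triggered an exit

def classify(case: dict) -> str:
    for cue, exit_answer, decision in CUES:       # search rule
        if case.get(cue, False) == exit_answer:   # stop rule
            return decision                       # decision rule
    return DEFAULT

# Only two cues are inspected before the tree exits with "send home".
print(classify({"alarming_symptom": False, "fit_for_home_care": True}))

Note how the tree never weighs cues against each other: one good reason decides, which is exactly the sequential, 'first best' logic described above.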
The last two chapters deal with moral behavior and social instincts (e.g. imitation and trust). People do morally unbelievable things (as the example of a WW II mass murder illustrates) because of their reluctance to break group order. Peer group pressure (conscious or not) is enormous and it may even overrule deeply rooted moral instincts like "You shall not kill".
One important heuristic to understand is that people will usually opt for the default (chosen by the environment, or 'system') instead of making a conscious choice - as is illustrated by the differences in the percentage of organ donors between various countries. By understanding this process and framing instructions or requests well, one may be able to steer things in a desired (and preferably morally just) direction.
This summary/review obviously only scratches the surface of the themes treated in this highly recommended book. I hope I've tickled your interest; now go and read it for yourself.
I've read the Penguin pocket version, ISBN 978-0-141-01591-0.
Thursday 13 March 2014
Antifragile - a review (or two, sort of)
In the picture below you'll find a scan of the review I wrote of Taleb's latest book, Antifragile, for the magazine of the Dutch Society of Safety Experts, NVVK Info; it was published there in this month's issue.
The English translation goes roughly like this:
Antifragile - Things That Gain from Disorder, by Nassim Nicholas Taleb
(ISBN 9780141038223)
While I was reading this book - during the days around Christmas 2013 - a major storm ravaged northern and western Europe. At one point my wife asked "What is that book about, anyway?" and the perfect illustration of my answer presented itself in all the newspapers the next morning: hundreds of thousands of households in Norway, Sweden, The Netherlands, Germany, France and The United Kingdom were without electricity. Could you wish for an easier example to explain the concept of a fragile system? Some systems are so heavily optimized that they work smoothly in the normal situation (in Taleb's vocabulary 'mediocristan'), but as soon as an event of certain impact (unexpected, or dismissed as not probable) manifests itself, the vulnerable system collapses like a house of cards. The system is fragile and breaks, often with catastrophic consequences.
Those who have read Taleb's previous book, the bestseller and masterpiece "The Black Swan", will notice that "Antifragile" picks up where Taleb left off last time. The new book is the closing part of a trilogy that started about ten years ago with "Fooled By Randomness". In "Antifragile" Taleb primarily tries to provide some strategies that enable us to cope with the fragility of systems - on the one hand by assessing whether something is fragile, or is made fragile by decisions that are made, and on the other by discussing possibilities to make things (more) antifragile.
Taleb uses a triad, or three different states, to illustrate the point. The first state is fragile: things break as a consequence of shocks, disorder or time. One example is an e-book, which depends on whether your e-reader has enough power. The second state is robust: things are insensitive to shocks (up to a certain level), like an ordinary book that usually "works". Finally there's antifragile: things that actually profit from disorder, like an oral tale that goes from mouth to mouth and often only gains in fantasy. For some this may seem strange; after all, in common understanding it's robust that is used as the opposite of fragile, but Taleb aims at another kind of opposite: not the difference between breaking and not breaking, but between being harmed by and benefiting from something.
It's likely that the reader has to get used to Taleb's use of language and his jargon. He has a couple of specific words of his own (for example 'fragilista', which he uses as a label for people who advance fragility - often naming them, like the president of the Federal Reserve) and terms which he uses to characterize a certain phenomenon ("soccer mom"). This also applies to his style of writing. Taleb likes to wander, and it shows in his books. This is no scientific academic report or paper (luckily it isn't!) but a continuous discussion and reflection in which the author switches from one subject to another. Sometimes he returns to a subject, sometimes he doesn't. The contents may vary from seriously technical or mathematical stuff to fantastic or just everyday reflections.
The language is flowery and at times outrageous. Taleb is erudite with a preference for the Classics, and he makes sure we know it! All of this is coupled with an enormous waywardness and a certain arrogance (he doesn't shy away from kicking hard against the established order in, among other fields, economics and science). This may be a barrier for some readers. With that in mind it's a relief that one doesn't need to read everything. Some parts of the book can be skipped without any problem and without missing part of the message. In fact, on various occasions the author even indicates that a part can be skipped.
One may question how relevant this book is for safety professionals, since there is only a limited discussion of 'real' safety subjects in the book (Fukushima, as one example of fragility, is a rare exception). My opinion: definitely relevant. Taleb's books are about uncertainty and risk, about thinking about risk and about decision making (with all kinds of consequences). This alone makes the book (and definitely its predecessor) almost mandatory reading.
Do note that this is by no means a 'Do Book' that will lead you to an antifragile state (presuming that this is possible and desirable) within 7 Steps. Reading the book can, however, help to create awareness and start a process. To this end it is necessary that each reader picks out the relevant elements for himself and translates those to his own everyday practice (domain blindness is one recurring theme, by the way). Examples are fragility-enhancing things like naive interventionism (the urge to implement measures that are likely unnecessary and may cause unwanted side effects that in fact make the situation worse) and agency problems (decisions taken or advocated by people who will profit in some way from that decision).
On the positive side, Taleb discusses a number of tools that may help to enhance robustness or even gain antifragility. To name but a few: the principle of redundancy (preferably applied aggressively, because in a time of scarcity one can gain a lot from one's own abundance), even though it's despised by economists and bean counters; optionality (making use of opportunities when they present themselves); flexibility; using heuristics (practical mental rules of thumb that make everyday life easier); and allowing some hormesis in our lives and organizations (some disorder and stress is necessary to make things stronger - like not always washing your hands before dinner, which can strengthen your immune system and is thus healthy...).
One other thought that struck me while reading the book is that Taleb's fragile/antifragile thinking unwittingly manages to combine two safety theories that are often regarded as opposites within the same framework. I'm obviously talking about Perrow's Normal Accident Theory (fragile, Black Swans) and the antifragile concepts of High Reliability Organisations (Weick & co) and Resilience (Hollnagel, Woods etc.). For safety professionals it may be confusing that Taleb on several occasions says that antifragile and resilience (to him synonymous with robust) aren't the same thing, while elements that "we" (safety professionals) place within the resilience framework are seen by Taleb as antifragile elements. A matter of definition, I think, and don't let that stand in the way of gathering some useful lessons from this book.
Labels:
antifragile,
black swan,
books,
literature,
NVVK Info,
resilience,
Risk,
Taleb
Wednesday 26 February 2014
Black Swan workshop 20 February 2014
Petromaks - Black Swan Workshop, Stavanger, 20 February 2014
This workshop was part of a new (?) project that aims to develop concepts, approaches and methods for a proper understanding of risk in Norwegian petroleum activities, with due attention to the knowledge dimension and surprises. And so over 100 participants gathered in Stavanger, coming mainly from oil and gas companies and universities, plus some students and a few 'outsiders' as well.
Chairman of the day, Terje Aven, opened by introducing the research program and the goal of the day: to have an exchange of ideas about the Black Swan concept and how to meet the challenges these events pose. Knowledge is one of the central elements, because we need to know where to put resources. The aim is not to find solutions right now, but to get guidance on the direction.
Aven defined a Black Swan as "a surprising extreme event relative to present knowledge or beliefs". This means that a Black Swan can be a surprise for some, but not for others; it's knowledge dependent.
He distinguished three different types of Black Swans: 1) unknown unknowns, 2) unknown knowns, and 3) events that are known but not believed to be likely because the probability is judged to be too low.
Since knowledge is key, Aven argued that risk assessment may have an important role in dealing with Black Swans - but we have to see beyond current risk assessment practices, since traditional assessment practices may ignore the knowledge dimension about uncertainties and surprises.
A couple of approaches for improved risk assessment were presented which included RedTeaming (playing ‘devil’s advocate’), scenario analysis/system thinking, challenge analysis (providing arguments for events to occur), and anticipatory failure determination based on theory of inventive problem solving. Aven also addressed different types of risk problems caused by knowledge uncertainties which put weight on the use of the cautionary and precautionary principles.
The program before lunch featured four academic speakers. Andrew Stirling from the University of Sussex was the first of these. He warned that, since Black Swans are not objective, one should not try to bury subjectivity under analysis. An argument in favour of precaution followed, and the interesting observation that defence of scientific rationality (against application of the precautionary principle) is often surprisingly emotional.
According to Stirling, uncertainty requires deliberation about action - you cannot analyze your way out of it. Deliberation will produce more robust knowledge than probabilistic analysis. Some examples were used to illustrate that evidence-based research often involves such large uncertainties that it can be used as an argument for pretty much any decision.
A matrix was presented in which knowledge about likelihood and knowledge about possibilities were set against a problematic/unproblematic scale. As was demonstrated, risk assessment is (according to Knight's definition) only applicable in one quadrant (good knowledge about both possibilities and likelihood), yet we are forced to use this instrument, among other things through regulations and political forces in a desire for 'evidence based policies'. The other quadrants were labeled 'uncertainty' ('risk' with bad knowledge of likelihood), 'ambiguity' (bad knowledge about possibilities, but good about likelihood) and 'ignorance' (bad knowledge about both). A number of tools were proposed to get out of the "risk corner" and take a wider view. These included the tools that Aven had mentioned before. One way is to be more humble and not get caught up in 'swan-alysis'…
Concluding, Stirling said that the point is not putting things in boxes (e.g. what type of Black Swan we're dealing with) but rather asking what we do these things for. Critical deliberation is more important than analysis. One problem with the Black Swan metaphor may be the suggestion of white (= good, many of those) and black (= bad, only a few - so we're good), but things are definitely not that binary!!
Ortwin Renn had an inspiring Powerpoint-free speech about different Black Swans, the role of (risk) analysis and risk management (methods). Some Black Swans are in the long tails of the normal distribution, others are about problems in knowledge or knowledge transfer. There is often a disparity of knowledge within or between companies - transfer of knowledge may be beneficial. And there are the "real" surprises, the rare events which have never been seen, or for which there is no pattern that could predict them.
One reason for the popularity of the Black Swan is that we humans experience many random unique events in the course of our lives. Our memory builds on unique events, not on routine. But… risk assessment works the other way… and builds on a very formal approach.
How to deal with these challenges? We can't say much about the probability of Black Swan events, but we can say something about the vulnerability of our system! This we can do by analysis, but a different kind of analysis.
Resilience is about the ability of a system to cope with stress without losing its function. Highly efficient systems are usually NOT resilient. There is an unavoidable trade-off between efficiency and resilience. This trade-off is not a mathematical decision. It's about risk appetite, compensation, politics, our attitude to precaution. It's an open question we need to deliberate about!
ESRA chairman Enrico Zio gave a presentation that reflected his search for answers around the theme of uncertainty. We try to predict by modeling, but there is a difference between 'real risk', assumed risk and expected protection. So there is a multitude of risk profiles, and various analyses don't give the same answers. One solution might be to combine deterministic and probabilistic safety assessments.
Zio continued by discussing two different approaches to safety, which Hollnagel calls Safety I (look at things that go wrong) and Safety II (look at things that go well - resilience thinking). Improving the ratio between Safety I (decrease) and Safety II (increase) may be one way to decrease Black Swan risks. Observability and controllability are two important issues related to Safety II.
Concluding, Zio warned against trying to solve today's (complex) problems with the mindset of yesterday.
After a number of critical remarks with regard to knowledge and probability, Seth Guikema from Johns Hopkins University talked about why we cannot discard historical data and expertise. Guikema underlined that risk assessment helps to understand problems, but does not make the decisions. People do!
Historical data can be useful as it says something about the frequency and extent of the events that have happened, but it cannot give information about possible Black Swans in the future. While you cannot determine the inputs themselves, you can take models built from things that have happened, run them with BIG events and see how the system responds. Guikema illustrated this with examples of the impact of historical hurricanes on power grids in the USA, and of how the recent hurricane that ravaged the Philippines would have affected the USA. This variation on 'red teaming' proves to be a useful way to assess vulnerability. One ironic comment was that people tend to forget the lessons learned from previous storms.
Again: data-driven models cannot help with the initiating events themselves, but they can be used to assess the impact and more. An iterative, ongoing process was presented, as shown in the picture below.
After lunch there was an opportunity for participants to present views or discuss practical examples. Regrettably only a few used this opportunity - despite the rather large number of participants.
First up was ConocoPhillips, who presented two different cases. Kjell Sandve discussed some challenges within risk management, especially the question of whether increasing costs were 'caused' by HSE requirements. One main problem, according to Sandve, was the difficulty of reducing requirements once they had been applied in one place (but maybe weren't quite necessary everywhere). He also asked whether we actually have the right tools and whether risk assessment supports good decisions. His appeal to the project was: don't make more complex tools/methods - rather the opposite!
Malene Sandøy started by talking about a 1970s decision problem, i.e. how high the jackets for facilities should be built, and the Black Swan encountered some years later when it turned out that the seabed was subsiding and platforms were "sinking". Related Black Swans were stricter regulations with regard to wave height, new design requirements and, not least, a much longer lifetime for the structure than originally anticipated.
After a rather technical discussion of design loads and how these analyses could be used, Sandøy ended with a clear improvement potential: from calculations (analyses) that "no one" understands to a broad discussion in the organization of the scenarios to design for.
Third speaker during the audience participation was yours truly, who brought some views from outside the oil & gas sector, breaking through some barriers of domain dependence. The presentation included a retrospective on some Black Swans that have affected Norwegian Railways in the past years, both negative (the 22/7 terror attack) and positive (the national reaction to 22/7, the ash cloud and, interestingly, the financial crisis). Safety related accidents are rarely Black Swans, but as Stirling said, one shouldn't be too hung up on putting things in boxes, and a more 'relaxed' approach to the definition will give several 'Dark Fowl' events. One example discussed was the Sjursøya accident in March 2010. A quick assessment of the system where the accident originated (the Alnabru freight terminal) leads to the conclusion that this was a Fragile system. Measures taken after the accident were discussed in relation to the Fragile - Robust - Antifragile triad from Taleb's latest book.
Ullrika Sahlin from Lund University came from an environmental background which was interesting because she related Black Swans not so much to events (as safety folks tend to do), but rather to continuous exposure from certain processes. Sahlin presented a couple of thoughts about the subject and expectations from the project.
Her presentation included discussions around various perspectives on risk (among others traditional statistical and Bayesian), the quality of our knowledge and the processes we use to produce the knowledge we use for evidence based management, and not least assumptions (to which one member of the audience remarked that "If you assume, you make an ASS of U and ME").
One piece of advice from Sahlin was that we should communicate our estimates with humility and communicate our uncertainties with confidence.
Igor Kozine from the University of Denmark was the last participant, with a relatively spontaneous short presentation around a 2003 Financial Times article that told of a Pentagon initiative to have people place bets on anticipated terror attacks as a form of risk assessment. This project was discontinued (at least officially) because of public outrage.
After a coffee break with engaged mingling and discussions there was a concluding session with discussion and reflections. Themes that came up included:
- In line with Safety I/II: should we focus on failures or on success? Is focusing on success a way to manage Black Swans? Regulators may have a problem with those approaches… Rather than either/or one should compare alternative strategies. There aren’t too many error recovery studies yet.
- What about resilience on a personal level? How much must one know? Dilemma between compliance and using your head in situations that are not described in the rules - see Piper Alpha.
- Taleb's Antifragility and Resilience: safety people may have a different understanding of Resilience than Taleb, and (according to Rasmus from the University of Denmark) the original literature also differentiates between Robust (bounces back to normal) and Resilient (bounces back to a NEW normal).
- Stirling summarized that one important dilemma is the question: What is the analysis for? Shall it help making a decision, or shall it describe knowledge about some risks? One solution may be to look at the recipient of the analysis, not just at the object to be analyzed. What are the power structures that are working on us?
- The last question was whether we really need new methods, or rather a good way to use existing ones and their output. Aven concluded that risk assessment in the broad sense of the term has a role to play. Knowledge is important, as is the communication of knowledge.
Labels:
antifragile,
Aven,
black swan,
fragility,
resilience,
Risk
Friday 17 January 2014
Ulrich Beck - Weltrisikogesellschaft
(ISBN 978-3-518-41425-5)
This is definitely one of the most difficult to read books I've encountered in the past years, due to the way it's written. As it's in German (and I'm not sure if this particular version has an English translation) it may be a bridge too far for some anyway, but even for people who are used to reading both German and academic papers this one may prove challenging. Anyway, it was for me, and I considered a few times just giving up, but I decided to persist and at least try to squeeze the main essence from it. This wasn't easy due to the flowery and circular language, peppered with words that common mortals will avoid at any cost in everyday conversation (or writing, for that matter). It's probably a typical example of a book written by an academic for other academics…
Okay, having given that warning, let's proceed to the contents. The book is a follow-up to his 1986 classic "Risikogesellschaft" and expands on the ideas presented there and in an English book from 1999. It deals with risk on a global level and how it is dealt with, or should be dealt with. Much of the discussion is fairly philosophical in character and grounded in sociological theories (Beck is a Professor of Sociology, after all) which I'm not all that familiar with, so that may very well be one reason why much of the writing doesn't appeal all that much to me. If I try to summarize some of the most important points, these would include:
- The “success” of modern society - and not its failure - has created some major risks (often as unintended side effects). These include economical/financial, ecological/environmental and terrorism, and related, some kinds of warfare.
- These major risks have impact that transcend national borders and often have global impact (hence the title of the book: World Risk Society).
- Trying to solve these problems/risks with common approaches on national levels is futile (trying to live an individual environment-friendly lifestyle is nice, but rather pointless as long as the major countries don't change course).
- Risks are no longer dealt with only by governments and regulators, but also by bottom-up transnational organisations (e.g. Greenpeace) or top-down supranational organisations. Or even by ad hoc cooperation of for example customers who decide to boycott a certain brand and thereby effectively cause a change in policy.
- The effects of risks aren't necessarily dependent on whether anticipated consequences actually materialize; if presented "well", the possibility becomes the reality, which is best illustrated by the experiences of everyone who has tried to travel by plane after 9/11.
- The perception of risk does not necessarily depend on ‘scientific’ analysis and rationality, but also on presentation, knowledge and ignorance, information and disinformation, culture and religion.
- And let's not forget the role of mass media.
- The distribution of risk is in many cases asymmetrical and often in favour of the decision maker, but not necessarily so - many global risks will affect everyone in a most democratic manner. Even so, risk export happens. Often.
Risiko Ergo Sum.
As a whole an interesting, but very demanding, book which looks at risk from a totally different perspective than most safety professionals usually deal with. There are some clear links to Taleb's work too. But I'd say start with Taleb and you will have covered a good deal of Beck's points as well, in a (IMHO) more pleasantly readable way. The Wikipedia pages on Beck and his work are very informative, by the way.
Tuesday 5 November 2013
Zero Harm is an Occupational Disease
A little article that I wrote some time ago about one (of the many) fallacies of Zero Harm... Read it at Safety Cary.
Thursday 30 May 2013
John Lanchester - Whoops! Why Everyone Owes Everyone and No One Can Pay
Just a quickie...
Came across this book by pure chance and consequently spent a weekend reading it. A great read, not only because of the theme, but especially because of Lanchester's style of writing, which is often humorous (despite the not all that cheerful theme) and full of references that show John has good taste in movies (opening the book with a quote by Spinal Tap's David St Hubbins should give a solid clue).
In seven easily readable chapters Lanchester manages not only to explain the principles of the most important financial instruments and constructions (which then failed and plunged us into a terrible crisis), but also the human, psychological, cultural, political (and what not) causes behind the crisis. This includes the incentives, greed and dogma behind massive deregulation (I'm surprised that he doesn't link to Naomi Klein's work on Disaster Capitalism) and the failure of so-called risk management by banks and regulators (he ties in both Taleb and Kahneman!). The latter makes especially chapters 5 and 6 interesting reading for safety professionals and others interested in risk (management/assessment) and regulations/enforcement.
A most recommended read, even though the conclusion (“We haven’t learned anything and this is going to happen again!”) is a bit depressing of course.
The book appears to be available in several different versions, with alternate titles even. I've read the "enhanced" Penguin pocket/paperback version: ISBN 978-0141045719.
Monday 8 April 2013
Barriers Trilogy - final installment
The third part of our Barrier Trilogy can now be read on SafetyCary's blog.
Please keep coming with your feedback!
Monday 25 March 2013
Gerd Gigerenzer – Reckoning With Risk, Learning To Live With Uncertainty
A quite interesting and enjoyable book that I accidentally picked up while strolling through the university book store at NTNU. Most books on risk in my bookshelf are either safety or finance oriented; Gigerenzer takes his examples from the medical (mammography, HIV testing) and justice (DNA fingerprinting, court evidence) worlds, which is interesting and nice for a change.
While I don't subscribe to Gigerenzer's very strict (and in my opinion limited) definition of 'risk' as uncertainty that can be expressed as a number (a probability or frequency), the book is very worthwhile to read, and I support his statement that people have to overcome their illusion of certainty and gain proper insight into uncertainty and risk. The book may be an eye opener to many. I have always been sceptical about a lot of statistics, but after reading "Reckoning With Risk" I will be even more so. I guess concepts like base rate frequency, false positives and natural frequencies are now glued to my brain forever.
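To see why natural frequencies stick, here is a tiny worked example in Python with made-up numbers of my own (they are not taken from the book): a screening test for a rare condition, computed once in probability format and once by thinking of 1000 concrete people.

base_rate = 0.01            # 1 in 100 people actually has the condition
sensitivity = 0.90          # the test is positive for 90% of those who have it
false_positive_rate = 0.09  # the test is also positive for 9% of those who don't

# Probability format (Bayes' rule) - hard to do in your head:
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
print(base_rate * sensitivity / p_positive)     # about 0.09

# Natural frequency format - think of 1000 concrete people instead:
sick_and_positive = 9        # of the 10 sick people, 9 test positive
healthy_and_positive = 89    # of the 990 healthy people, roughly 9% test positive
print(sick_and_positive / (sick_and_positive + healthy_and_positive))  # 9 out of 98, about 0.09

Only about 1 in 11 people with a positive test actually has the condition; the "9 out of 98" framing makes that obvious at a glance, while the probability version leaves most readers guessing.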
On a critical note – safety experts may want to comment on the DASA example on pages 28/29 (both views about measuring safety mentioned there are due for criticism) and on the "Why most drivers are better than average" chapter (pages 214-217), where the author oddly replaces the question of "who is the better driver" with "who has fewer accidents" without noticing (or explaining) it himself. There may be a connection between "better" and "fewer accidents", but that relation goes one way (better, thus fewer accidents), not the other way around (fewer accidents, thus better).
In connection to this book I’d recommend one also checks out Paulos’ “Innumeracy”.
I’ve read the Penguin paperback edition ISBN 978-0-14-029786-7
Note: In the USA this book has been published as “Calculated Risk”
Monday 11 March 2013
Barrier Trilogy - Part 2
Proudly presenting the second part of our Barrier Trilogy. Feedback of any kind is HIGHLY appreciated since it only can help to improve the method.
Monday 4 March 2013
Missing something?
Me too. But some recommendations aren't meant to be. Such is life.
Monday 18 February 2013
Sunday 17 February 2013
A Barrier Trilogy, part 1
Read it on Safety Cary's blog. Comments and discussion are highly welcome!
Labels:
barriers,
management,
Risk,
safety,
Swiss Cheese Model
Saturday 16 February 2013
Working at height in the middle of Oslo
Looking out the window, preparing a course... when a subject for an exercise in risk assessment presents itself.
Wednesday 30 January 2013
Willem Top: How To Build A Management System That Works
Willem Top is something of a legend in safety - at least in The Netherlands and Belgium. When I started in the safety profession in 1992, my first major course was Safety Auditing and an induction in the International Safety Rating System (ISRS) from DNV. At the course I was given two books, "Risk/Loss Control Management" and "Safety Auditing", both written by Willem Top and both of which have been used at one time or another during the past 20 years of my professional career.
Now Willem has collected some of the knowledge and experience he has gathered in his long career as a professional and bundled it in this e-book. I’m no particular fan of e-books (love to read from paper and highlight/comment on the printed version) but at least this saves some space in the bookshelf.
On 200 pages Willem delivers an easy to read, practical and rather unpretentious view on how a management system (regardless of the scope; even though he comes from a safety background the book is rather holistic) can be built with the help of a number of basic elements/principles. Willem himself calls it a "travel guide" rather than a "cook book".
At first one may get the impression that Top concentrates very much on the avoidance of losses, but reading on in the book one discovers that he very much promotes the use of positive drivers, call them leading indicators if you want, in order to reach this goal.
Some people will criticize the culture-as-system approach that seems to be promoted on page 29, but Top is quick to apply an important nuance. He knows his limitations, being a chemical engineer and not a psychologist, and he stresses some of these limitations throughout the e-book.
The proposed elements for the management system sound sensible. Sure, I would have made a slightly different clustering than Top's 17 steps, but that's quibbling over details in definitions. The elements discussed are essential. On a critical note, the PLAN, TRAIN, DO model that Top promotes does in fact contain check/evaluate and improve steps, but these are a bit hidden in the model. I would propose to follow Deming's PDC(S)A.
What I like is that Top stresses the fact that a management system can be written down, but in some instances can also be conveyed orally, or just through actions and setting an example. This might be an eye opener for quite a few, especially with the ISRS in the back of my mind, which I always perceived as very solid and complete, but also as very bureaucratic. Willem Top addresses another drawback of the ISRS (without actually naming the ISRS), namely the rating, which may become a goal in itself, thereby prioritizing elements that gain points and neglecting essential elements that create control. He does propose an alternative rating for his 17 step approach as a measure of implementation.
A benefit of this rating system is that Top listed the desired state properties of the various steps (at least some of them) which guides the assessment further than just a simple yes/no tick-list-of-requirements. And whether you use the rating or not, it is a fine (check)list of points to consider when building and implementing your management system.
You can order this e-book directly from the author. Check Willem Top’s website:
Labels:
audit,
books,
improvement,
incidents,
PDCA,
quality management,
safety,
Top
Rob Long: For The Love Of Zero
The second book by Rob Long is mainly dedicated to dismantling the "Zero Harm Cult": its language, its way of thinking and its possibly negative effects. The book has a lesser degree of "hopping through" than the first, and (I think that) it's rather meant to be read from start to end. While the book is divided into three sections, there are two main parts, the first concentrating on the "zero" phenomenon while the second part (chapter 7 onwards) deals with alternative strategies.
The book opens with a discussion of the fascination with ‘zero’ in general. One may dispute the relevance of several examples, but at least it’s amusing. The second chapter discusses some of the arguments pro and contra ‘zero harm’ and also shows some of the more extreme forms of use of ‘zero harm’ language, where ‘zero harm’ has begun to replace central and essential safety terminology like ‘risk’. A dubious trend, to say the least.
An interesting twist to some already known arguments against ‘zero harm’ is that Rob Long discusses it in relation to psychological disorders and fundamentalism. The entire chapter 6 is dedicated to the latter and one may criticize Rob mildly for the fact that he goes slightly overboard here and provokes a comment that he’s a bit of a fundamentalist himself (an anti-fundamentalist-fundamentalist, so to speak). Rob also spends ample time on the (negative) consequences of ‘zero harm’ language.
While the discussion of 'zero harm' is very thorough, other themes are treated a bit shallowly, like Rasmussen/Reason's SRK-model of human error and Heinrich's pyramid (in the latter case entirely missing the value of that metaphor, i.e. learning from weak signals). The discussion of HRO and Risk and Safety Maturity, on the other hand, is most interesting.
Very worthwhile are pages 48 and 49 with the essentials of motivation, which elaborate on themes discussed in "Risk Makes Sense". Most valuable in my opinion are the parts on good goal setting found in chapters 5, 7 and 8, especially where they underline the importance of positive goals.
One drawback of the book is that there is a certain degree of repetition from “Risk Makes Sense”, some parts are even literally copied. So I wouldn’t recommend reading them too closely after each other (unless you want to save some time and are able to skip/skim some sections).
A third book in the series, "Real Risk", is in the making. Given the contents of the first two books, that should be one to watch out for!