
Thursday 24 April 2014

All Accidents Are Preventable?

Really?

I think for the statement "all accidents are preventable" to become REMOTELY true (ish) we will need to add some words. For example:

All accidents are preventable... with the benefit of hindsight.

or

All accidents are preventable... in theory.

or

All accidents are preventable... given unlimited knowledge, resources, perfect prediction (and quite some luck).

All of which, regrettably, makes it a rather useless statement in my everyday job. And besides, do we really want to prevent absolutely everything? Really??

Thursday 13 March 2014

Antifragile - a review (or two, sort of)

In the picture below you'll find a scan of the review I wrote of Taleb's latest book, Antifragile, for NVVK Info, the magazine of the Dutch Society of Safety Experts, which was published in this month's issue.


The English translation goes roughly like this:

Antifragile - Things That Gain from Disorder, by Nassim Nicholas Taleb 
(ISBN 9780141038223)

While I was reading this book - during the days around Christmas 2013 - a major storm ravaged northern and western Europe. At one point my wife asked “What is that book about, anyway?” and the perfect illustration of my answer presented itself in all the newspapers the next morning: hundreds of thousands of households in Norway, Sweden, The Netherlands, Germany, France and the United Kingdom were without electricity. Could you wish for an easier example to explain the concept of a fragile system? Some systems are so highly optimised that they work smoothly in the normal situation (in Taleb’s vocabulary: ‘mediocristan’), but as soon as an event of a certain impact (unexpected, or dismissed as improbable) manifests itself, the vulnerable system collapses like a house of cards. The system is fragile and breaks, often with catastrophic consequences.

Those who have read Taleb’s previous book, the bestseller and masterpiece “The Black Swan”, will notice that “Antifragile” picks up where Taleb left off last time. The new book is the closing part of a trilogy that started about ten years ago with “Fooled By Randomness”. In “Antifragile” Taleb primarily tries to provide some strategies that enable us to cope with the fragility of systems - on the one hand by assessing whether something is fragile, or is made fragile by decisions that are taken, and on the other hand by discussing possibilities to make things (more) antifragile.

Taleb uses a triad, or three different states, to illustrate the point. The first state is fragile: things break as a consequence of shocks, disorder or time. One example is an e-book, which depends on your reader having enough power. The second state is robust: things are insensitive to shocks (up to a certain level), like an ordinary book that usually “works”. Finally there’s antifragile: things that profit from disorder, like an oral tale that goes from mouth to mouth and often only gains in imagination along the way. For some this may seem strange; after all, in common understanding it’s robust that is used as the opposite of fragile, but Taleb aims at another kind of opposite: not the difference between breaking and not breaking, but between being harmed by and benefiting from something.

It’s likely that the reader has to get used to Taleb’s use of language and his jargon. He has a couple of specific words of his own (for example ‘fragilista’, which he uses as a label for people who advance fragility - often named explicitly, like the president of the Federal Reserve) or terms which he uses to characterize a certain phenomenon (“soccer mom”). This also applies to his style of writing. Taleb likes to stroll, and it shows in his books. This is no scientific academic report or paper (luckily it isn’t!) but a continuous discussion and reflection in which the author switches from one subject to another. Sometimes he returns to a subject, sometimes he doesn’t. The contents may vary from seriously technical or mathematical stuff to fantastic or just everyday reflections.

The language is flowery and at times outrageous. Taleb is erudite, with a preference for the Classics, and he makes sure we know it! All of this is coupled to an enormous waywardness and a certain arrogance (he doesn’t shy away from kicking hard against the established order in, among other fields, economics and science). This may pose a threshold for some readers. With that in mind it’s a relief that one doesn’t need to read everything. Some parts of the book can be skipped without any problem and without missing part of the message. In fact, on various occasions the author even indicates that a part can be skipped.

One may question how relevant this book is for safety professionals, since there is only a limited discussion of ‘real’ safety subjects in the book (Fukushima, as one example of fragility, is a rare exception). My opinion: definitely relevant. Taleb’s books are about uncertainty and risk, about thinking about risk and about decision making (with all kinds of consequences). This alone makes the book (and definitely its predecessor) almost mandatory reading.

Do note that this is by no means a ‘Do Book’ that will lead you to an antifragile state (presuming that this is possible and desirable) within 7 Steps. Reading the book can, however, help to create awareness and start a process. To this end it is necessary that each reader picks out the relevant elements for himself and translates those to his own everyday practice (domain blindness is one recurring theme, by the way). Examples are fragility-enhancing things like naive interventionism (the urge to implement measures that are likely to be unnecessary and may cause unwanted side-effects that in fact make the situation worse) and agency problems (decisions taken or advocated by people who will profit in some way from that decision).
On the positive side, Taleb discusses a number of tools that may help to enhance robustness or even gain antifragility. To name but a few: the principle of redundancy (preferably applied aggressively, because in a time of scarcity one can gain a lot from one’s own abundance), even though it’s despised by economists and bean counters; optionality (making use of opportunities when they present themselves); flexibility; using heuristics (practical mental rules of thumb that make everyday life easier); and allowing some hormesis in our lives and organizations (some disorder and stress is necessary to make things stronger - not always washing your hands before dinner can strengthen your immune system and is thus healthy...).


One other thought that struck me while reading the book is that Taleb’s fragile/antifragile thinking unwittingly manages to combine, within the same framework, two safety theories that are often regarded as opposites. I’m obviously talking about Perrow’s Normal Accident Theory (fragile, Black Swans) and the antifragile concepts of High Reliability Organisations (Weick & co) and Resilience (Hollnagel, Woods etc.). For safety professionals it may be confusing that Taleb on several occasions says that antifragile and resilience (to him synonymous with robust) aren’t the same thing, while elements that “we” (safety professionals) place within the resilience framework are seen by Taleb as antifragile elements. A matter of definition, I think, and don’t let that stand in the way of gathering some useful lessons from this book.

Wednesday 26 February 2014

Black Swan workshop 20 February 2014

Petromaks - Black Swan Workshop,
Stavanger, 20 February 2014

This workshop was part of a new (?) project that aims to develop concepts, approaches and methods for a proper understanding of risk in Norwegian petroleum activities, with due attention to the knowledge dimension and surprises. And so over 100 participants gathered in Stavanger, coming mainly from oil and gas companies and universities, along with some students and a few ‘outsiders’ as well.

Chairman of the day Terje Aven opened the day, introducing the research program and the goal of the day: to exchange ideas about the Black Swan concept and how to meet the challenges these events pose. Knowledge is one of the central elements, because we need to know where to put resources. The aim was not to find solutions right now, but to have guidance on the direction.

Aven defined a Black Swan as “a surprising extreme event relative to present knowledge or beliefs”. This means that a Black Swan can be a surprise for some, but not for others - it’s knowledge dependent.
He distinguished three different types of Black Swans: 1) unknown unknowns, 2) unknown knowns, and 3) events that are known but not believed to be likely because the probability is judged to be too low.

Since knowledge is key, Aven argued that risk assessment may have an important role in dealing with Black Swans - but we have to see beyond current risk assessment practices, since traditional practices may ignore the knowledge dimension of uncertainties and surprises.


A couple of approaches for improved risk assessment were presented, including red teaming (playing ‘devil’s advocate’), scenario analysis/system thinking, challenge analysis (providing arguments for events to occur), and anticipatory failure determination based on the theory of inventive problem solving. Aven also addressed different types of risk problems caused by knowledge uncertainties, which put weight on the use of the cautionary and precautionary principles.

The program before lunch featured four academic speakers. Andrew Stirling from the University of Sussex was the first of these. He warned that, since Black Swans are not objective, one should not try to bury subjectivity under analysis. An argument in favour of precaution followed and the interesting observation that defence of scientific rationality (against application of the precautionary principle) is often surprisingly emotional.

According to Stirling, uncertainty requires deliberation about action - you cannot analyze your way out of it. Deliberation will produce more robust knowledge than probabilistic analysis. Some examples were used to illustrate that evidence-based research often involves such large uncertainties that it can be used as an argument for pretty much any decision.

A matrix was presented in which knowledge about likelihood and knowledge about possibilities were plotted against a problematic/unproblematic scale. As was demonstrated, risk assessment is (according to Knight’s definition) only applicable in one quadrant (good knowledge about both possibilities and likelihood), yet we are forced to use this instrument, among others through regulations and political forces, in a desire for ‘evidence based policies’. The other quadrants were labeled ‘uncertainty’ (‘risk’ with bad knowledge about likelihood), ‘ambiguity’ (bad knowledge about possibilities, but good about likelihood) and ‘ignorance’ (bad knowledge about both). A number of tools were proposed to get out of the “risk corner” and take a wider view; these included the ones Aven had mentioned before. One way is to be more humble and not get caught in ‘swan-alysis’…


Concluding, Stirling said that the point is not putting things in boxes (e.g. what type of Black Swan we’re dealing with) but rather asking what we do these things for. Critical deliberation is more important than analysis. One problem with the Black Swan metaphor may be the suggestion of white (= good, many of those) and black (= bad, only a few - so we’re good), but things are definitely not that binary!!

Ortwin Renn gave an inspiring Powerpoint-free speech about different Black Swans, the role of (risk) analysis and risk management (methods). Some Black Swans are in the long tails of the normal distribution; others are about problems in knowledge or knowledge transfer. There is often a disparity of knowledge within or between companies - transfer of knowledge may be beneficial. And there are the “real” surprises: the rare events that have never been seen, or for which there is no pattern that could predict them.

One reason for the popularity of the Black Swan is that we humans experience many random unique events in the course of our lives. Our memory builds on unique events, not on routine. But… risk assessment works the other way around… and builds on a very formal approach.

How to deal with these challenges? We can’t say much about the probability of Black Swan events, but we can say something about the vulnerability of our system! This we can do by analysis, but a different kind of analysis.
Resilience is about the ability of a system to cope with stress without losing its function. Highly efficient systems are usually NOT resilient. There is an unavoidable trade-off between efficiency and resilience. This trade-off is not a mathematical decision. It’s about risk appetite, compensation, politics, our attitude to precaution. It’s an open question we need to deliberate about!

ESRA chairman Enrico Zio gave a presentation that reflected his search for answers around the theme of uncertainty. We try to predict by modeling, but there is a difference between ‘real risk’, assumed risk and expected protection. So there is a multitude of risk profiles, and various analyses don’t give the same answers. One solution might be to combine deterministic and probabilistic safety assessments.


Zio continued by discussing two different approaches to safety, which Hollnagel calls Safety I (look at things that go wrong) and Safety II (look at things that go well - resilience thinking). Improving the ratio between Safety I (decrease) and Safety II (increase) may be one way to decrease Black Swan risks. Observability and controllability are two important issues related to Safety II.

Concluding, Zio warned against trying to solve today’s (complex) problems with the mindset of yesterday.

After a number of critical remarks with regard to knowledge and probability, Seth Guikema from Johns Hopkins University talked about why we cannot discard historical data and expertise. Guikema underlined that risk assessment helps to understand problems, but does not make the decisions. People do!

Historical data can be useful, as they say something about the frequency and extent of the events that have happened, but they cannot give information about possible Black Swans in the future. And while you cannot determine the initiating events themselves, you can build models from things that have happened and run these with BIG events to see how the system responds. Guikema illustrated this with examples of the impact of historical hurricanes on power grids in the USA, and of how the recent hurricane that ravaged the Philippines would have affected the USA. This variation on ‘red teaming’ proves to be a useful way to assess vulnerability. One ironic comment was that people tend to forget the lessons learned from previous storms.
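The idea of fitting a model to what has happened and then feeding it a BIG, out-of-sample event can be sketched roughly as follows. To be clear: the numbers, the simple log-linear model and the function names are my own illustrative assumptions, not anything from Guikema's actual models.

```python
import math

# Hypothetical historical storms: (peak wind in m/s, customers without power).
# Purely illustrative numbers, not real grid data.
historical = [(20, 5_000), (25, 12_000), (30, 40_000), (33, 90_000)]

# Fit a crude log-linear relation, log(outages) = a + b * wind,
# by ordinary least squares on the historical record.
xs = [wind for wind, _ in historical]
ys = [math.log(out) for _, out in historical]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def predicted_outages(wind: float) -> float:
    """Modelled number of customers without power at a given peak wind."""
    return math.exp(a + b * wind)

# The 'BIG event' stress test: run the fitted model with a wind speed
# far outside anything in the historical record and inspect the response.
for wind in (33, 45, 60):
    print(f"{wind} m/s -> roughly {predicted_outages(wind):,.0f} customers out")
```

Note that the initiating event (the 60 m/s super-storm) is still postulated, not predicted - which is exactly the point: the data-driven model only tells you how the system might respond if it happens.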

Again: data-driven models cannot help with initiating events, but they can be used to assess the impact and more. An iterative, ongoing process was presented, as shown in the picture below.


After lunch there was an opportunity for participants to present views or discuss practical examples. Regrettably only a few used this opportunity, despite the rather large number of participants.

First up was ConocoPhillips, who presented two different cases. Kjell Sandve discussed some challenges within risk management, especially the question whether increasing costs were ‘caused’ by HSE requirements. One main problem, according to Sandve, was the difficulty of reducing requirements once they had been applied in one place (but maybe weren’t quite necessary everywhere). He also asked whether we actually have the right tools and whether risk assessment supports good decisions. His appeal to the project was: Don’t make more complex tools/methods - rather the opposite!

Malene Sandøy started by talking about a 1970s decision problem, namely how high jackets for offshore facilities should be built, and the Black Swan that was encountered some years later when it turned out that the seabed subsided and platforms were “sinking”. Related Black Swans were stricter regulations with regard to wave height, new design requirements and, not least, a much longer lifetime for the structures than originally anticipated.
After a rather technical discussion of design loads and how these analyses could be used, Sandøy ended with a clear improvement potential: from calculations (analyses) that “no one” understands to a broad discussion in the organization of the scenarios to design for.

Third speaker during the audience participation was yours truly, who brought some views from outside the oil & gas sector, breaking through some barriers of domain dependence. The presentation included a retrospective on some Black Swans that have affected Norwegian Railways in the past years, both negative (the 22/7 terror attack) and positive (the national reaction to 22/7, the ash cloud and, interestingly, the financial crisis). Safety-related accidents are rarely Black Swans, but as Stirling said, one shouldn’t be too hung up on putting things in boxes, and a more ‘relaxed’ approach to the definition will yield several ‘Dark Fowl’ events. One example discussed was the Sjursøya accident of March 2010. A quick assessment of the system where the accident originated (the Alnabru freight terminal) leads to the conclusion that this was a fragile system. Measures taken after the accident were discussed in relation to the Fragile - Robust - Antifragile triad from Taleb’s latest book.


Ullrika Sahlin from Lund University came from an environmental background, which was interesting because she related Black Swans not so much to events (as safety folks tend to do) but rather to continuous exposure from certain processes. Sahlin presented a couple of thoughts about the subject and expectations for the project.

Her presentation included discussions of various perspectives on risk (among others traditional statistical and Bayesian), the quality of our knowledge and the processes we use to produce the knowledge we use for evidence-based management, and not least assumptions (to which one member of the audience remarked that “If you assume, you make an ASS of U and ME”).

One advice from Sahlin was that we should communicate our estimates with humility and communicate our uncertainties with confidence.


Igor Kozine from the Technical University of Denmark was the last participant; he gave a relatively spontaneous short presentation around a 2003 Financial Times article about a Pentagon initiative to have people place bets on anticipated terror attacks as a form of risk assessment. This project was discontinued (at least officially) because of public outrage.

After a coffee break with engaged mingling and discussions there was a concluding session with discussion and reflections. Themes that came up included:
  • In line with Safety I/II: should we focus on failures or on successes? Is focusing on success a way to manage Black Swans? Regulators may have a problem with such approaches… Rather than either/or, one should compare alternative strategies. There aren’t many error recovery studies yet.
  • What about resilience on a personal level? How much must one know? There is a dilemma between compliance and using your head in situations that are not described in the rules - see Piper Alpha.
  • Taleb’s antifragility and resilience: safety people may have a different understanding of resilience than Taleb, and (according to Rasmus from the University of Denmark) the original literature also differentiates between robust (bounces back to normal) and resilient (bounces back to a NEW normal).
  • Stirling summarized that one important dilemma is the question: what is the analysis for? Should it help make a decision, or should it describe knowledge about some risks? One solution may be to look at the recipient of the analysis, not just at the object to be analyzed. What are the power structures that are working on us?
  • The last question was whether we really need new methods, or rather a good way to use existing ones and their output. Aven concluded that risk assessment, in the broad sense of the term, has a role to play. Knowledge is important, as is the communication of knowledge.

Friday 17 January 2014

Ulrich Beck - Weltrisikogesellschaft

(ISBN 978-3-518-41425-5)

This is definitely one of the most difficult-to-read books I’ve encountered in the past years, due to the way it’s written. As it’s in German (and I’m not sure whether this particular version has an English translation) it may be a bridge too far for some anyway, but even for people who are used to reading both German and academic papers this one may prove challenging. Anyway, it was for me, and a few times I considered just giving up, but I decided to persist and at least try to squeeze the essence from it. This wasn’t easy due to the flowery and circular language, peppered with words that common mortals will avoid at any cost in everyday conversation (or writing, for that matter). It’s probably a typical example of a book written by an academic for other academics…

Okay, having given that warning, let’s proceed to the contents. The book is a follow-up to Beck’s 1986 classic “Risikogesellschaft” and expands on the ideas presented there and in an English book from 1999. It deals with risk on a global level and how it is, or should be, dealt with. Much of the discussion is fairly philosophical in character and grounded in sociological theories (Beck is a Professor of Sociology, after all) that I’m not all that familiar with, so that may very well be one reason that much of the writing doesn’t appeal all that much to me. If I try to summarize the most important points, these would include:
  • The “success” of modern society - and not its failure - has created some major risks (often as unintended side effects). These include economic/financial and ecological/environmental risks, as well as terrorism and, related, some kinds of warfare.
  • These major risks transcend national borders and often have global impact (hence the title of the book: World Risk Society).
  • Trying to solve these problems/risks with common approaches on a national level is futile (trying to live an individual environment-friendly lifestyle is nice, but rather pointless as long as the major countries don’t change course).
  • Risks are no longer dealt with only by governments and regulators, but also by bottom-up transnational organisations (e.g. Greenpeace) or top-down supranational organisations. Or even by the ad hoc cooperation of, for example, customers who decide to boycott a certain brand and thereby effectively cause a change in policy.
  • The effects of risks don’t necessarily depend on the anticipated consequences actually materializing; if presented “well”, the possibility becomes the reality, which is best illustrated by the experience of everyone who has tried to travel by plane since 9/11.
  • The perception of risk does not necessarily depend on ‘scientific’ analysis and rationality, but also on presentation, knowledge and ignorance, information and disinformation, culture and religion.
  • And let’s not forget the role of the mass media.
  • The distribution of risk is in many cases asymmetrical, often in favour of the decision maker, but not necessarily so - many global risks will affect everyone in a most democratic manner. Even so, risk export happens. Often.
This only scratches the surface, of course, but it’s what I took from the book and remembered after a first, difficult read-through. And I should of course mention one brilliant quote that every safety pro should take note of:
Risiko Ergo Sum.

As a whole an interesting, but very demanding, book which gives a view on risk from a totally different perspective than most safety professionals usually deal with. There are some clear links to Taleb’s work too. But I’d say start with Taleb, and you will have covered a good deal of Beck’s points too, in a (IMHO) more pleasantly readable way. The Wikipedia pages on Beck and his work are very informative, by the way.

Thursday 30 May 2013

John Lanchester - Whoops! Why Everyone Owes Everyone and No One Can Pay


Just a quicky...

Came across this book by pure chance and consequently spent a weekend reading it. A great read, not only because of the theme, but especially because of Lanchester’s style of writing, which is often humorous (despite the not all that cheerful theme), with a set of references that show that John has a good taste in movies (opening the book with a quote by Spinal Tap’s David St Hubbins should give a solid clue).

In seven easily readable chapters Lanchester manages not only to explain the principles of the most important financial instruments and constructions (which then failed and plunged us into a terrible crisis), but also the human, psychological, cultural, political (and what not) causes behind the crisis. This includes the incentives, greed and dogma behind massive deregulation (I’m surprised that he doesn’t link to Naomi Klein’s work on Disaster Capitalism) and the failure of so-called risk management by banks and regulators (he ties in both Taleb and Kahneman!). The latter makes chapters 5 and 6 especially interesting reading for safety professionals and others interested in risk (management/assessment) and regulation/enforcement.

A most recommended read, even though the conclusion (“We haven’t learned anything and this is going to happen again!”) is of course a bit depressing.

The book appears to be available in several different versions, with alternate titles even. I've read the "enhanced" Penguin pocket/paperback version: ISBN 978-0141045719.

Monday 8 April 2013

Monday 25 March 2013

Gerd Gigerenzer – Reckoning With Risk: Learning To Live With Uncertainty

A quite interesting and enjoyable book that I accidentally picked up while strolling through the university book store at NTNU. Most books on risk on my bookshelf are either safety or finance oriented; Gigerenzer takes his examples from the medical (mammography, HIV testing) and justice (DNA fingerprinting, court evidence) worlds, which is interesting and nice for a change.

While I don’t subscribe to Gigerenzer’s very strict (and in my opinion limited) definition of ‘risk’ as uncertainty that can be expressed as a number (a probability or frequency), the book is very worthwhile to read, and I support his statement that people have to overcome their illusion of certainty and gain proper insight into uncertainty and risk. The book may be an eye opener to many. I have always been sceptical about a lot of statistics, but after reading “Reckoning With Risk” I will be even more so. I guess concepts like base rates, false positives and natural frequencies are now glued to my brain forever.
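Gigerenzer's natural frequencies can be illustrated with a small sketch. The screening numbers below are illustrative, of the same order as the mammography examples in the book, but not quoted from it:

```python
# Translate probabilities into natural frequencies: counts in a
# concrete population of 1,000 screened women. Illustrative numbers.
population = 1_000
base_rate = 0.01            # 1% actually have the condition
sensitivity = 0.90          # test detects 90% of true cases
false_positive_rate = 0.09  # 9% of healthy people test positive anyway

sick = population * base_rate                    # 10 women
true_positives = sick * sensitivity              # 9 correct alarms
healthy = population - sick                      # 990 women
false_positives = healthy * false_positive_rate  # ~89 false alarms

# Chance of actually being sick given a positive test (the question
# that people routinely get wrong when it's posed in probabilities):
ppv = true_positives / (true_positives + false_positives)
print(f"Of roughly {true_positives + false_positives:.0f} positive tests, "
      f"only {true_positives:.0f} are real: about {ppv:.0%}")
```

Counting this way makes the counter-intuitive result (a positive test here means less than a one-in-ten chance of disease) obvious, which is exactly why the base rate and false positive concepts stick.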

On a critical note – safety experts may want to comment on the DASA example on pages 28/29 (both views about measuring safety mentioned there are due for criticism) and the “Why most drivers are better than average” chapter (pages 214-217), where the author oddly replaces the question “who is the better driver” with “who has fewer accidents” without noticing (or explaining) it himself. There may be a connection between “better” and “fewer accidents”, but that relation goes one way (better, thus fewer accidents), not backwards (fewer accidents, thus better).



In connection with this book I’d recommend also checking out Paulos’ “Innumeracy”.

I’ve read the Penguin paperback edition, ISBN 978-0-14-029786-7.

Note: In the USA this book has been published as “Calculated Risks”.

Monday 11 March 2013

Barrier Trilogy - Part 2

Proudly presenting the second part of our Barrier Trilogy. Feedback of any kind is HIGHLY appreciated, since it can only help to improve the method.

Saturday 16 February 2013

Working at height in the middle of Oslo

Looking out the window, preparing a course... when a subject for an exercise in risk assessment presents itself.

Tuesday 15 January 2013

Dr. Robert Long and Joshua Long: Risk Makes Sense

The last few days were usefully spent reading the Longs’ first book, “Risk Makes Sense”. The title alone appealed a lot to me, as I’m (like the authors) convinced that a life without risk is impossible - and really boring too.
The book is handy, about 150 pages, and rather accessible and easily readable. It aims at a wider audience than just HSEQ experts, and deviates from a typical textbook build-up too: instead of building up to one major conclusion, the chapters can be read separately and learning points picked up all over the place. One should be able to hop through the book like one surfs the net.
On a critical note – at some moments the text feels a bit fragmentary, hopping between subjects. I think some things could have been explained a bit better or more extensively for the uninitiated. I had no problem following the book, but then, I’ve read quite a lot of literature, which makes it easy to place various references in context. Not everyone has this backing when reading the book, and I think that might pose a few minor hurdles for some readers.
One of these instances is where the idea of one brain and three minds is posed. I understand what the writers say, but I would like to see in greater depth where it comes from. I would also like to see this concept discussed in relation to Rasmussen’s/Reason’s SRK model and Kahneman’s two systems. Robert informed me that several subjects will return in later books, where more will be explained.
One thing that makes the book easier to read for the uninitiated is the brief glossary found at the start. This gives a good and quick framework for what is about to come. Another thing that makes the book easily accessible is the transitional paragraphs between the various chapters. Each chapter is concluded with some questions for workshop use (or reflection and further ‘home’work). An excellent idea that gives the book a bit extra.
The book draws a lot of inspiration from (among others) Weick’s work on HRO and the concept of mindfulness he proposes. Since I’m a major fan of this approach, it gets a big thumbs up from me!
The first chapter concentrates on myth busting - quite an interesting take on some widely entrenched beliefs. Throughout much of the book you’ll find a well-founded and well-reasoned trashing of the zero cult. I’m looking forward to reading more of that in the second book (which seems to promise to do just that, judging from the title). I also love the phrase ‘safety cosmetics’. It describes very well some of the things I see around in safety practice!
In that regard it is interesting to see that the writers connect these themes to religion and fundamentalism. Too much of what is done and decided is based on dogma rather than on reason.
I like the way the writers stress learning throughout the book. That’s something we are struggling with/working on all the time in the company I work for. We had about 30,000 registrations of big and small cases (incidents, complaints, proposals for improvement etc.) last year. We are relatively good at handling these, but how do you get the learning points from the essential cases into as many heads/minds as possible? Challenging! Some thoughts on this can be found in the book.
I also found the angle on language very interesting. Language is one of our most powerful tools, and as such it can easily be misused. I also read with interest what has been written about the art and importance of dialogue. This reminds me very much of the work and writings of my Belgian friend Johan Roels. His book on crucial dialogues (alas only in Dutch) elaborates on some of the themes touched upon in “Risk Makes Sense”.
Quite some space is spent on conscious and subconscious actions in the book. Of these, I find especially intriguing the argument about the counterproductive effect of a rationalist approach in a non-rational setting. I truly hope that the writers will come back to that in the two follow-up books, since I’m more than just a bit interested in the ambiguous relationship between rules and safety that has some of its origin in this phenomenon.
Off to the second book now. 
 
I've read the second edition, ISBN: 978-0-646-57094-5
P.S. One thing I appreciated a lot: the book does mention what the writers’ company does and the tools they have developed, but it never tends towards sheer marketing. Good job! I’ve seen many books which are mainly a vehicle to sell, and I’m no fan of that practice, to say things mildly.
Meanwhile, feel free to check them out on the web: