
Thursday 13 March 2014

Antifragile - a review (or two, sort of)

In the picture below you'll find a scan of the review I wrote of Taleb's latest book, Antifragile, for NVVK Info, the magazine of the Dutch Society of Safety Experts; it was published in this month's issue.


The English translation goes roughly like this:

Antifragile - Things That Gain from Disorder, by Nassim Nicholas Taleb 
(ISBN 9780141038223)

While I was reading this book - during the days around Christmas 2013 - a major storm ravaged northern and western Europe. At one point my wife asked “What is that book about, anyway?” and the perfect illustration of my answer presented itself in all the newspapers the next morning: hundreds of thousands of households in Norway, Sweden, The Netherlands, Germany, France and The United Kingdom were without electricity. What easier example could you want to explain the concept of a fragile system? Some systems are so highly optimised that they work smoothly in the normal situation (in Taleb’s vocabulary ‘Mediocristan’), but as soon as an event of some impact (unexpected, or dismissed as improbable) manifests itself, the vulnerable system collapses like a house of cards. The system is fragile and breaks, often with catastrophic consequences.

Those who have read Taleb’s previous book, the bestseller and masterpiece “The Black Swan”, will notice that “Antifragile” picks up where Taleb left off. The new book is the closing part of a trilogy that started about ten years ago with “Fooled By Randomness”. In “Antifragile” Taleb primarily tries to provide some strategies that enable us to cope with the fragility of systems - on the one hand by assessing whether something is fragile, or is made fragile by the decisions that are made, and on the other by discussing possibilities to make things (more) antifragile.

Taleb uses a triad, or three different states, to illustrate the point. The first state is fragile: things break as a consequence of shocks, disorder or time. One example is an e-book, which depends on your reader having enough power. The second state is robust: things are insensitive to shocks (up to a certain level), like an ordinary book that usually “works”. Finally there’s antifragile: things that actually profit from disorder, like an oral tradition, which passes from mouth to mouth and often only gains in imagination along the way. For some this may seem strange; after all, in common understanding it’s robust that is used as the opposite of fragile, but Taleb aims at another kind of opposite: not the difference between breaking and not breaking, but between being harmed by and benefiting from something.
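A toy numeric sketch (my own illustration, not from the book) can make the triad concrete. Taleb ties antifragility to convexity: a system whose response to shocks is concave loses on average from disorder, a flat response shrugs it off, and a convex response gains from it.

```python
# Toy illustration (my own, not Taleb's code) of the triad as response
# curves to symmetric shocks: fragile = concave (loses on average from
# volatility), robust = flat (indifferent up to a point), antifragile =
# convex (gains on average from volatility).
import random

random.seed(42)
shocks = [random.uniform(-1.0, 1.0) for _ in range(100_000)]  # zero-mean disorder

fragile     = lambda x: -x * x   # concave: any shock, up or down, hurts
robust      = lambda x: 0.0      # flat: shocks do nothing
antifragile = lambda x: x * x    # convex: shocks of either sign help

def mean_effect(response):
    """Average effect of random disorder on a system with this response curve."""
    return sum(response(s) for s in shocks) / len(shocks)

print(f"fragile:     {mean_effect(fragile):+.3f}")      # negative: harmed by disorder
print(f"robust:      {mean_effect(robust):+.3f}")       # ~zero: unaffected
print(f"antifragile: {mean_effect(antifragile):+.3f}")  # positive: gains
```

Note that the shocks themselves average out to zero; only the shape of the response curve decides whether disorder harms or helps.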

The reader will likely have to get used to Taleb’s use of language and his jargon. He has a couple of specific “own” words (for example ‘fragilista’, a label for people who promote fragility - often naming names, such as the chairman of the Federal Reserve) and terms he uses to characterize a certain phenomenon (“soccer mom”). The same goes for his style of writing. Taleb likes to ramble, and it shows in his books. This is no academic report or scientific paper (luckily!) but a continuous discussion and reflection in which the author switches from one subject to another. Sometimes he returns to a subject, sometimes he doesn’t. The contents vary from seriously technical or mathematical material to fanciful or just everyday reflections.

The language is flowery and at times outrageous. Taleb is erudite, with a preference for the Classics, and he makes sure we know it! All of this is coupled with an enormous waywardness and a certain arrogance (he doesn’t shy away from kicking hard against the established order in, among other fields, economics and science). This may pose a barrier for some readers. With that in mind it’s a relief that one doesn’t need to read everything. Some parts of the book can be skipped without any problem and without missing part of the message. In fact, on various occasions the author even indicates that a part can be skipped.

One may question how relevant this book is for safety professionals, since there is only limited discussion of ‘real’ safety subjects in it (Fukushima, as one example of fragility, is a rare exception). My opinion: definitely relevant. Taleb’s books are about uncertainty and risk, about thinking about risk and about decision making (with all kinds of consequences). This alone makes the book (and definitely its predecessor) almost mandatory reading.

Do note that this is by no means a ‘Do Book’ that will lead you to an antifragile state (presuming that this is possible and desirable) in 7 steps. Reading the book can, however, help to create awareness and start a process. To this end each reader needs to pick out the relevant elements for himself and translate them to his own everyday practice (domain blindness is a recurring theme, by the way). Examples of fragility-enhancing things include naive interventionism (the urge to implement measures that are probably unnecessary and whose unwanted side-effects may actually make the situation worse) and agency problems (decisions taken or advocated by people who will profit in some way from that decision).
On the positive side, Taleb discusses a number of tools that may help to enhance robustness or even gain antifragility. To name but a few: the principle of redundancy (preferably applied aggressively, because in a time of scarcity one can gain a lot from one’s own abundance), even though it’s despised by economists and bean counters; optionality (making use of opportunities when they present themselves); flexibility; using heuristics (practical mental rules of thumb that make everyday life easier); and allowing some hormesis in our lives and organizations (some disorder and stress is necessary to make things stronger - not always washing your hands before dinner can strengthen your immune system and is thus healthy...).


One other thought that struck me while reading the book is that Taleb’s fragile/antifragile thinking unwittingly manages to combine, within the same framework, two safety theories that are often regarded as opposites. I’m obviously talking about Perrow’s Normal Accident Theory (fragile, Black Swans) and the antifragile concepts of High Reliability Organisations (Weick & co) and Resilience (Hollnagel, Woods etc.). For safety professionals it may be confusing that Taleb says on several occasions that antifragile and resilient (to him synonymous with robust) aren’t the same thing, while elements that “we” (safety professionals) place within the resilience framework are seen by Taleb as antifragile elements. A matter of definition, I think - don’t let it stand in the way of gathering some useful lessons from this book.

Wednesday 26 February 2014

Black Swan workshop 20 February 2014

Petromaks - Black Swan Workshop,
Stavanger, 20 February 2014

This workshop was part of a new (?) project that aims to develop concepts, approaches and methods for a proper understanding of risk in Norwegian petroleum activities, with due attention to the knowledge dimension and surprises. Over 100 participants gathered in Stavanger, coming mainly from oil and gas companies and universities, plus some students and a few ‘outsiders’ as well.

Chairman of the day Terje Aven opened the day by introducing the research programme and the goal of the day: to exchange ideas about the Black Swan concept and how to meet the challenges it poses. Knowledge is one of the central elements, because we need to know where to put resources. The aim was not to find solutions right now, but to get guidance on the direction.

Aven defined a Black Swan as “a surprising extreme event relative to present knowledge or beliefs”. This means that a Black Swan can be a surprise for some but not for others - it’s knowledge dependent.
He distinguished three types of Black Swans: 1) unknown unknowns, 2) unknown knowns, and 3) events that are known, but not believed to be likely because their probability is judged to be too low.

Since knowledge is key, Aven thought that risk assessment may have an important role in dealing with Black Swans - but we have to see beyond current risk assessment practices, since traditional ones may ignore the knowledge dimension of uncertainties and surprises.


A couple of approaches for improved risk assessment were presented, including Red Teaming (playing ‘devil’s advocate’), scenario analysis/systems thinking, challenge analysis (providing arguments for events to occur), and anticipatory failure determination based on the theory of inventive problem solving. Aven also addressed different types of risk problems caused by knowledge uncertainties, which place weight on the use of the cautionary and precautionary principles.

The programme before lunch featured four academic speakers. Andrew Stirling from the University of Sussex was the first of these. He warned that, since Black Swans are not objective, one should not try to bury subjectivity under analysis. An argument in favour of precaution followed, as well as the interesting observation that the defence of scientific rationality (against application of the precautionary principle) is often surprisingly emotional.

According to Stirling, uncertainty requires deliberation about action - you cannot analyze your way out of it. Deliberation will produce more robust knowledge than probabilistic analysis. Some examples were used to illustrate that evidence-based research often involves such large uncertainties that it can be used as an argument for pretty much any decision.

A matrix was presented in which knowledge about likelihood and knowledge about possibilities were set against a problematic/unproblematic scale. As was demonstrated, risk assessment is (following Knight’s definition) applicable only in one quadrant (good knowledge about both possibilities and likelihood), yet we are forced to use this instrument - through regulations and political forces, among other things - in a desire for ‘evidence-based policies’. The other quadrants were labeled ‘uncertainty’ (‘risk’ with poor knowledge about likelihood), ‘ambiguity’ (poor knowledge about possibilities, but good knowledge about likelihood) and ‘ignorance’ (poor knowledge about both). A number of tools were proposed to get out of the “risk corner” and take a wider view, including the ones that Aven had mentioned before. One way is to be more humble and not get caught up in ‘swan-alysis’…


Concluding, Stirling said that the point is not putting things in boxes (e.g. deciding what type of Black Swan we’re dealing with) but rather asking what we do things for. Critical deliberation is more important than analysis. One problem with the Black Swan metaphor may be the suggestion of white (= good, many of those) and black (= bad, only a few - so we’re doing fine), but things are definitely not that binary!

Ortwin Renn gave an inspiring PowerPoint-free speech about different kinds of Black Swans and the role of (risk) analysis and risk management (methods). Some Black Swans sit in the long tails of the normal distribution; others are about problems in knowledge or knowledge transfer. There is often a disparity of knowledge within or between companies - transfer of knowledge may be beneficial. And then there are the “real” surprises: rare events that have never been seen, or for which there is no pattern that could predict them.

One reason for the popularity of the Black Swan is that we humans experience many random unique events in the course of our lives. Our memory builds on unique events, not on routine. But… risk assessment works the other way… and builds on a very formal approach.

How to deal with these challenges? We can’t say much about the probability of Black Swan events, but we can say something about the vulnerability of our system! This we can do by analysis, but a different kind of analysis.
Resilience is about the ability of a system to cope with stress without losing its function. Highly efficient systems are usually NOT resilient. There is an unavoidable trade-off between efficiency and resilience. This trade-off is not a mathematical decision. It’s about risk appetite, compensation, politics, our attitude to precaution. It’s an open question we need to deliberate about!

ESRA chairman Enrico Zio had a presentation that reflected his search for answers around the theme of uncertainty. We try to predict by modeling, but there is a difference between ‘real risk’, assumed risk and expected protection. So there is a multitude of risk profiles and various analyses don’t give the same answers. One solution might be to combine deterministic and probabilistic safety assessments.


Zio continued by discussing two different approaches to safety, which Hollnagel calls Safety I (look at things that go wrong) and Safety II (look at things that go well - resilience thinking). Improving the ratio between Safety I (decrease) and Safety II (increase) may be one way to decrease Black Swan risks. Observability and controllability are two important issues related to Safety II.

Concluding Zio warned about trying to solve today’s (complex) problems with the mindset of yesterday.

After a number of critical remarks with regard to knowledge and probability, Seth Guikema from Johns Hopkins University talked about why we cannot discard historical data and expertise. Guikema underlined that risk assessment helps to understand problems, but does not make the decisions. People do!

Historical data can be useful, as it says something about the frequency and extent of the events that have happened, but it cannot give information about possible Black Swans in the future. While you cannot predict the initiating events themselves, you can build models from things that have happened, run them with BIG events and see how the system responds. Guikema illustrated this with examples of the impact of historical hurricanes on power grids in the USA, and of how a storm like the recent hurricane that ravaged the Philippines would affect the USA. This variation on ‘red teaming’ proves to be a useful way to assess vulnerability. One ironic comment was that people tend to forget the lessons learned from previous storms.
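A minimal sketch of that stress-testing idea, with all numbers hypothetical (not real hurricane or outage data): fit a simple impact model on a historical record, then feed it an input far outside anything observed to probe the system's vulnerability.

```python
# Sketch of the stress-testing idea from Guikema's talk: fit an impact
# model on historical storm data, then feed it an extreme (Black-Swan-
# scale) input. All numbers are hypothetical illustrations.
import math

# Hypothetical historical record: (peak wind speed in m/s, customers without power)
history = [(30, 50_000), (35, 120_000), (40, 260_000), (45, 600_000)]

# Fit a log-linear model by ordinary least squares: log(outages) = a + b * wind
n = len(history)
xs = [w for w, _ in history]
ys = [math.log(o) for _, o in history]
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def predicted_outages(wind):
    """Model's expected number of customers without power at a given peak wind speed."""
    return math.exp(a + b * wind)

# Stress test: run the fitted model with a far larger storm than any in the
# record, e.g. roughly Haiyan-scale winds (~65 m/s) hitting the same grid.
print(f"Stress-test estimate at 65 m/s: {predicted_outages(65):,.0f} customers")
```

The point is not the extrapolated number itself, which is highly uncertain, but seeing where and how the modeled system starts to break down under inputs no one has observed yet.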

Again: data-driven models cannot help predict initiating events, but they can be used to assess the impact and more. An iterative, ongoing process was presented, as shown in the picture below.


After lunch there was an opportunity for participants to present views or discuss practical examples. Regrettably only a few took this opportunity - despite the rather large number of participants.

First up was ConocoPhillips, presenting two different cases. Kjell Sandve discussed some challenges within risk management, especially the question whether increasing costs were ‘caused’ by HSE requirements. One main problem, according to Sandve, was the challenge of reducing requirements once they had been applied in one place (even though they maybe weren’t necessary everywhere). He also asked whether we actually have the right tools and whether risk assessment supports good decisions. His appeal to the project: don’t make more complex tools/methods - rather the opposite!

Malene Sandøy started by talking about a 1970s decision problem, i.e. how high the jackets for offshore facilities should be built, and the Black Swan encountered some years later when it turned out that the seabed subsided and the platforms were “sinking”. Related Black Swans were stricter regulations with regard to wave height, new design requirements and, not least, a much longer lifetime for the structures than originally anticipated.
After a rather technical discussion of design loads and how these analyses could be used, Sandøy ended with a clear improvement potential: from calculations (analyses) that “no one” understands to a broad discussion in the organization of the scenarios to design for.

Third speaker during the audience participation was yours truly, bringing some views from outside oil & gas and breaking through some barriers of domain dependence. The presentation included a retrospective on some Black Swans that have affected Norwegian Railways in recent years, both negative (the 22/7 terror attack) and positive (the national reaction to 22/7, the ash cloud and, interestingly, the financial crisis). Safety-related accidents are rarely Black Swans, but as Stirling said, one shouldn’t be too hung up on putting things in boxes, and a more ‘relaxed’ approach to the definition yields several ‘Dark Fowl’ events. One example discussed was the Sjursøya accident of March 2010. A quick assessment of the system where the accident originated (the Alnabru freight terminal) leads to the conclusion that this was a fragile system. The measures taken after the accident were discussed in relation to the Fragile - Robust - Antifragile triad from Taleb’s latest book.


Ullrika Sahlin from Lund University came from an environmental background which was interesting because she related Black Swans not so much to events (as safety folks tend to do), but rather to continuous exposure from certain processes. Sahlin presented a couple of thoughts about the subject and expectations from the project.

Her presentation included discussions of various perspectives on risk (traditional statistical and Bayesian, among others), the quality of our knowledge and the processes we use to produce the knowledge we use for evidence-based management, and not least assumptions (to which one member of the audience remarked that “If you assume, you make an ASS of U and ME”).

One advice from Sahlin was that we should communicate our estimates with humility and communicate our uncertainties with confidence.


Igor Kozine from the Technical University of Denmark was the last participant, with a relatively spontaneous short presentation around a 2003 Financial Times article about a Pentagon initiative to have people place bets on anticipated terror attacks as a form of risk assessment. The project was discontinued (at least officially) because of public outrage.

After a coffee break with engaged mingling and discussions there was a concluding session with discussion and reflections. Themes that came up included:
  • In line with Safety I/II: should we focus on failures or on success? Is focusing on success a way to manage Black Swans? Regulators may have a problem with such approaches… Rather than either/or, one should compare alternative strategies. There aren’t many error recovery studies yet.
  • What about resilience on a personal level? How much must one know? Dilemma between compliance and using your head in situations that are not described in the rules - see Piper Alpha.
  • Taleb’s antifragility and Resilience: safety people may have a different understanding of Resilience than Taleb, and (according to Rasmus from the Technical University of Denmark) the original literature also differentiates between Robust (bounces back to the normal) and Resilient (bounces back to a NEW normal).
  • Stirling summarized that one important dilemma is the question: what is the analysis for? Should it help make a decision, or should it describe knowledge about certain risks? One solution may be to look at the recipient of the analysis, not just at the object being analyzed. What are the power structures working on us?
  • The last question was whether we really need new methods, or rather a good way to use existing ones and their output. Aven concluded that risk assessment, in the broad sense of the term, has a role to play. Knowledge is important, as is the communication of knowledge.