Wednesday, 21 September 2011

First comment on Heinrich on EHSQ Elite

Before I wrote my discussion of Heinrich and Manuele, there was a thread on the EHSQ Elite group on LinkedIn to which I contributed.


Here's the post that triggered me:

What concerns me is SHE's continued belief in Heinrich. I would encourage all reading this thread to read "Heinrich Revisited - Truism or Myth" by Fred Manuele, published in 2002.

Heinrich's work is truly myth, including the triangle. Colleagues, please take time to read Mr. Manuele's work and then comment. The beliefs in Heinrich's myths keep us focused on less serious but frequent incidents, believing that reducing their risk will somehow reduce the risk of serious and fatal injuries where the exposures may be completely different.

My reaction:

Thanks for the tip on the “Heinrich Revisited” book. I wasn’t aware that it existed, so I ordered it right away. I still have a copy of Heinrich’s 1941 book waiting to be read, so that’ll be two in one go… Meanwhile I checked out the Heinrich article on Wikipedia, where one can find what looks to be a good summary of Manuele’s criticism. I found some good and valid points there, but also some that I’d rather attribute to misunderstanding and the like. This prompted me to sit down for some writing, since I’d like to raise a few critical comments myself…

The first would be that it’s rather easy to just sit there and criticize Heinrich’s work. Why not instead take the effort to update/upgrade his work to today’s standards and, for example, write a book “Heinrich Revised”? As I often say to managers and employees: it’s really easy to complain, but much more useful to take that extra step and try to improve.

It seems to be fashionable these days among some safety experts to do a bit of Heinrich bashing. What annoys me a great deal is that people tend to take Heinrich’s findings literally, as if they were laws of nature (e.g. the 1-10-30-600 ratio, or whatever the distribution is, see the reference in Wikipedia to “Heinrich’s Law” - ridiculous), and then criticize Heinrich upon finding out that they aren’t. Or even worse: people concluding that their own registrations aren’t good enough because they don’t live up to the “correct” ratio (I don’t make this up - it happened at a forum I attended - twice within one hour!). What bollocks is that?!

Heinrich’s work was plainly groundbreaking at the time. I don’t recall anyone before him approaching safety in a semi-scientific way and leaving us with at least two concepts/metaphors/models (the dominoes and the pyramid) that are still somehow relevant and applicable today. Yes, certainly his work has shortcomings. But don’t forget that we’re 80 years further down the road, have a truckload of additional safety science in our luggage and have had ample opportunity to test, try, refine and partly reject his findings. Which is what a major part of scientific work is all about.

What is inexcusable, in my opinion, is when things get taken out of context and are rejected based on that wrong understanding or twisted application.
(To tackle that other Heinrich thing in that regard: are there still people who believe that, in analogy with the brilliant and easy-to-explain domino metaphor, all accidents are a simple and linear chain of events? Ever heard of the concept of lies-to-children?)

Let’s discuss the ratio a bit further. I still have to read it in Heinrich’s own words, but as I interpret it in the light of today’s safety science, it is totally unimportant whether the ratio is 1-10-30-600 or rather 1-25-60-500 - which would probably make a rather odd pyramid; I haven’t tried to draw it. I can tell from my own experience (having the luxury of working in companies with 15,000 to 30,000 registrations of all kinds of incidents per year, and thus ample data to play with) that each incident type may have its own specific ratio, probably changing with circumstances (Andrew Hale wrote at least one paper on this subject in the late 1990s/early 2000s), which may result in distributions that aren’t very pyramid-shaped at all. Linda Bellamy mentioned a diamond-shaped one; I would like to add an example from my own line of work. When we look at collisions between trains and track workers, we have an incident type that generates a kind of uneven hourglass. Collisions with trains are pretty binary events: either the subject dies or it’s a near miss (or rather a “near hit”). There are only a few ‘lucky’ people hit by a train who merely need a first-aid kit or a consult with a medic…
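To make that point about type-specific ratios a bit more tangible, here is a minimal sketch in Python with entirely made-up numbers and hypothetical incident-type names (so purely illustrative, not anyone’s real data): tally registrations per incident type and per consequence level, and the “shape” of the distribution simply falls out per type instead of out of some universal law.

# Minimal illustrative sketch, hypothetical data only: per incident type,
# count registrations at each consequence level and print the ratio.
# The point: each incident type yields its own distribution - some look
# pyramid-like, others (e.g. train vs. track worker) more like an hourglass.

SEVERITIES = ["fatality", "injury", "first_aid", "near_miss"]  # most to least severe

# Made-up registration counts per (incident type, consequence level)
registrations = {
    ("slips_and_trips", "fatality"): 1,
    ("slips_and_trips", "injury"): 25,
    ("slips_and_trips", "first_aid"): 60,
    ("slips_and_trips", "near_miss"): 500,
    ("train_vs_track_worker", "fatality"): 4,
    ("train_vs_track_worker", "injury"): 1,
    ("train_vs_track_worker", "first_aid"): 0,
    ("train_vs_track_worker", "near_miss"): 120,
}

def ratio_per_type(counts):
    """Group counts by incident type, ordered from severe to minor consequences."""
    per_type = {}
    for (itype, severity), n in counts.items():
        per_type.setdefault(itype, {s: 0 for s in SEVERITIES})[severity] += n
    return {t: [c[s] for s in SEVERITIES] for t, c in per_type.items()}

for itype, dist in ratio_per_type(registrations).items():
    print(itype, "->", ":".join(str(n) for n in dist))

With these invented numbers the slips-and-trips type prints a classic pyramid-ish 1:25:60:500, while the train collision type prints 4:1:0:120 - the uneven hourglass from the example above.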

The point to be made with the pyramid is that GENERALLY (and luckily) there are only a few fatal accidents, more with injuries, even more with material damage and even more near misses. Heinrich’s premise was thus that attacking the frequent low-consequence incidents will eventually reduce the number of serious accidents as well, or prevent them altogether. That seems to make sense, if only because it’s pretty tedious to wait for the really strong signals to appear - much better to (re)act on weak signals and prevent the strong ones from happening at all.

Here then are at least two of the misunderstandings that often occur:

1) People tend to forget what it was that Heinrich actually counted. This (in)famous pyramid consists of data from a huge number of various incidents (wasn’t it something like 1.5 million?) which he ranked according to their consequences, resulting in this pyramid-like distribution. This means several things. Firstly, Heinrich did not just count apples and pears; no, he went out and got the entire fruit market, from coconuts and bananas to raspberries and tomatoes, arranged these from rotten to ripe to green, and summed up the categories. The pyramid thus lumps together things that are very, very different in nature. Secondly, let’s get another thing very clear: the pyramid is about CONSEQUENCES and consequences alone. Which gets us to point 2.

2) Often forgotten and/or misunderstood is one of the central things in Heinrich’s work, namely the Common Cause Hypothesis. This says that serious accidents and their precursors (i.e. minor incidents) are due to the same causes, thus providing a theoretical grounding for why one should do something about the “weaker signals” in order to prevent the serious accidents from happening. Crucial here is of course the COMMON CAUSE. Logic dictates that we will find a huge number of different causal chains (or rather trees - see the domino comment) within our fruit market (*), and when looking at preventive measures that work, one has to remain within one causal tree and not blend them together. It’s folly to think that preventive measures on apples will do something about the rotten coconuts (replace “apples” by “finger cuts in the kitchen” and “rotten coconuts” by “BLEVE” if you like) - see the little sketch after the footnote below. Yet this is the un-nuanced message that is often spread by ignorant/lazy colleagues and consultants. It’s hardly Heinrich’s fault that this happens, or is it? (We could draw a parallel to religion and fundamentalism here, by the way, but I won’t do that now.)
(*) Note: And likely a couple of underlying “causes” that cover a much larger selection, but these are usually so general/generic/non-specific that they are classification terms (think of catch-all phrases like “competence” and “culture”, or the GFTs/BRFs from Tripod) rather than real causes in the chain.
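To illustrate (not prove) the common cause point, here is another small Python sketch, again with made-up incident records and cause labels: for one incident type / causal tree, compare the causes recorded behind the minor outcomes with those behind the serious ones. Only where they overlap does acting on the weak signals plausibly help against the serious accidents; causes from a different tree (the kitchen in this toy example) tell us nothing about it.

# Toy illustration of the Common Cause Hypothesis, all data invented:
# within ONE incident type, do minor incidents and serious accidents
# share causes? Causes from another causal tree are deliberately ignored.

incidents = [
    # (incident type, consequence, causes found in the investigation)
    ("hot_work", "near_miss", {"no_permit", "missing_fire_watch"}),
    ("hot_work", "injury",    {"no_permit"}),
    ("hot_work", "near_miss", {"missing_fire_watch"}),
    ("kitchen",  "near_miss", {"blunt_knife"}),
    ("kitchen",  "first_aid", {"blunt_knife", "no_cut_glove"}),
]

def common_causes(records, incident_type):
    """Split causes for one incident type into 'minor' and 'serious' and compare them."""
    minor, serious = set(), set()
    for itype, consequence, causes in records:
        if itype != incident_type:
            continue
        (minor if consequence == "near_miss" else serious).update(causes)
    return minor & serious, (minor | serious) - (minor & serious)

shared, not_shared = common_causes(incidents, "hot_work")
print("shared causes:", shared)      # acting on these weak signals should pay off
print("not shared:", not_shared)     # these need their own attention

Trivial, of course, but it makes visible why blending apples and coconuts into one big pyramid hides exactly the information the hypothesis needs.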

Anyway, the case is: while part of Heinrich’s conclusions may not be literally valid anymore (if they ever were, literally), the general line is as applicable today as it was then. When zooming in on a certain accident/incident type and its precursors (instead of taking a consequence as the point of departure), both a pyramid(ish) ratio (specific to each type and probably each company) can be found and the common cause hypothesis can be applied. And even better: it has been scientifically tested. Sure, it took a long time since 1931, but read for example Linda Wright’s PhD thesis (2003), and I guess the Storybuilder project in the Netherlands can provide additional evidence here.

Concluding with my own view on the title of this thread: wishing for zero fatalities is obviously a good thing, but as a goal it’s useless. Just how are you going to steer on such infrequent events as fatalities or LTIs? (See various eminent authors on this subject; apart from the fact that you can only go so far with controlling external factors - at least in my business.) And it may do actual damage, as some of our group members rightly argued. In our company we rather talk about a zero vision, not a zero goal. Goals are to be set on the weaker signals that may lead to an accident (and eventually a fatality), or rather on some pro-active leading indicators. But that was another thread, wasn’t it?
