Lessons from the Grenfell disaster
By Dr David Leabeater
When tragedies occur, it’s in our nature to seek comfort from ‘learning the lessons’. I want to explore what Grenfell teaches us in terms of managing risk.
Here’s one to start with: “It’s not your mistake that destroys you. It’s what you do next.”
Think of Nixon. Watergate burglary. Big mistake. But probably survivable. Years of lies and cover-ups. Indefensible.
Take Hillsborough. Terrible mistakes on the day. Tragic. Subsequent distortions, cover-ups and lies in the cold light of day, maintained for years. A different story!
I’ll be interested to find out exactly how and why the Grenfell disaster came about. But we’ve all seen with our own eyes how politicians and leaders dealt with it.
The Ontology of Risk
“Ontology … the study of what things are.” Does a risk exist if no-one knows about it?
Pre-Grenfell, thousands of residents around the world dwelt in Grenfell-like towers. Nothing was being done. There was no duty, budget or plan to remediate them. To all intents and purposes, the risk did not exist.
Post-Grenfell, our leaders woke up to a Schrödinger-like world in which the residents and towers for which they were responsible faced a catastrophic risk. Or not. The cladding was dangerously combustible. Or it wasn’t. They didn’t know. They had a dilemma. If they found out, they might be forced to act. A huge financial and political liability would pop into existence if they had a problem.
But does a risk exist while they don’t know? Is there an obligation to find out if there is a problem? And if so, how quickly? Some did investigate urgently. And some looked out of their depth when they found out they had a huge unplanned problem. Others delayed finding out. I suspect one reason for delaying is to have time to prepare a plan. But is that OK? And how long can you delay finding out? Is there a risk in just remaining in the dark?
Managing catastrophic risks
As a BPO sales exec hosting a client visit to India I was once asked, “What’s your Disaster Recovery plan in the event of nuclear war between Pakistan and India?”
I said, “Our DR plan differentiates the probability and severity of risks. Nuclear war would rate as high severity but very low probability.”
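That probability/severity distinction can be made concrete with a few lines of arithmetic. A minimal sketch follows — the risks and numbers are invented for illustration, not taken from any real DR plan:

```python
# A toy probability/severity ranking. All risks and figures are
# invented for illustration; a real DR plan would use assessed values.
risks = [
    # (name, assumed annual probability, severity on a 1-10 scale)
    ("Data-centre power outage", 0.10, 6),
    ("Regional flood",           0.01, 8),
    ("Nuclear war",              0.000001, 10),
]

for name, prob, severity in risks:
    # Expected impact combines how likely a risk is with how much it hurts.
    expected_impact = prob * severity
    print(f"{name}: p={prob}, severity={severity}, "
          f"expected impact={expected_impact:.6f}")
```

Ranked this way, nuclear war tops the severity scale yet sits at the bottom on expected impact — which is exactly why the DR plan concentrates on the mundane outage rather than the apocalypse.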
What I didn’t say out loud: “Plus there is no relevant remediation for nuclear war, especially given that your business, the global economy and life on earth may be wiped out.”
Severe risks are pretty easy to understand. Not so risk “probability”. I recommend we differentiate the proliferation of risk from one-off riskiness. Consider Lotto. The chance of my numbers coming up is vanishingly small – 1 in 45 million. Yet there are regular Lotto winners. That’s because each week punters buy millions of tickets.
Pre-Grenfell, residential fire risk seemed low and declining compared with earlier times. Lots of people have lived safely for years in high-rise towers. The risk seems vanishingly low.
But now, post-Grenfell, we are forced to look through the other end of the telescope. Domestic fires may be rare. But hundreds of residents in hundreds of towers are at risk night after night for years. From this perspective, seemingly low-risk events start to look almost inevitable if they aren’t managed responsibly.
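Looking through the other end of the telescope can be put into numbers. The sketch below uses purely illustrative figures — not real fire statistics — to show how a tiny per-night risk compounds across many towers and many years:

```python
# Illustrative numbers only -- not real fire statistics.
# Suppose each tower has a tiny chance of a catastrophic cladding fire
# on any given night.
p_per_tower_night = 1e-6   # one-in-a-million per tower per night (assumed)
towers = 500               # towers with similar cladding (assumed)
years = 10

# Total exposure: every tower, every night, for a decade.
tower_nights = towers * 365 * years

# Probability of at least one catastrophe somewhere across that exposure.
p_at_least_one = 1 - (1 - p_per_tower_night) ** tower_nights
print(f"{tower_nights} tower-nights -> "
      f"P(at least one fire) = {p_at_least_one:.0%}")
```

With these assumed figures, a one-in-a-million nightly risk becomes roughly an 84% chance of at least one disaster over the decade. The per-ticket odds look safe; the proliferation does not.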
In business we are trained to be resilient. Life is risky. Stuff happens. All sorts of catastrophes can be imagined. Avoiding them is another story. So, we are trained to be economic rationalists and not to over-react. In a world of austerity politics, the burden of proof will be against expensive risk mitigation. And it’s not hard to find reassurance in empirical data showing that nothing has gone wrong to date, so the odds must be low.
Ownership of risk
So, we are surrounded by risks. Sure, some of them are catastrophic. “So what do you expect me to do about it?” That’s a common challenge.
For a few years, I led a QA and Testing business. I regularly spoke with CIOs about risk. It turned out that only a few events a year kept them awake at night: typically the cyclical Major Software Release.
“Why?” you may ask. I already knew that the problem with major software releases isn’t the INTENDED new changes. They are easily tested. It’s the fact that changes in one part of an IT environment can cause UNINTENDED changes to other core systems, so that they don’t work properly, or at all.
“Surely the answer”, I naively said, “is to regression test your other systems to ensure they still work as expected.”
“Fine in theory,” was the typical response, “but I’m responsible for millions of lines of code, thousands of use cases and hundreds of applications. I can’t possibly regression test everything. Every major release I just hope the system won’t crash on my watch.”
The default mindset for many CIOs was to ignore the risk and ride their luck. That’s why catastrophic failures have been relatively common across the global IT industry. (The risk has declined a bit with the rise of new methods such as agile and DevOps – but that’s another story.) I did offer a constructive suggestion …
“Why not just test the scenarios that could cause a catastrophic failure (and cost you your job)?”
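That suggestion — risk-based test selection — can be sketched in a few lines. This is an illustrative toy with invented scenarios and scores, not a real test-selection tool: given a limited testing budget, rank scenarios by expected damage and test the worst first.

```python
# Risk-based regression test selection (toy example).
# All scenarios, probabilities and cost scores are invented.
scenarios = [
    # (scenario, assumed probability of regression, cost of failure 1-10)
    ("Payments batch run",      0.05, 10),
    ("Customer login",          0.02, 9),
    ("Internal report styling", 0.30, 1),
    ("Core ledger posting",     0.04, 10),
]

budget = 2  # we can only afford to regression test two scenarios

# Rank by expected damage (probability x cost) and spend the budget
# on the scenarios whose failure would hurt most.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)
to_test = [name for name, _, _ in ranked[:budget]]
print("Regression test first:", to_test)
```

Note that the frequently broken but trivial report styling loses out to the rarely broken but catastrophic payments and ledger runs — which is precisely the point of the suggestion.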
About the author
Dr David Leabeater is a senior Sales, Strategy, Business and Account Director with considerable MNEA experience, from Sydney, Australia. PhD Economics & Philosophy, First class.