Call me a**l if you like
May 5 – OK, one problem probably down. Our kit just failed on what seems to be a one-off basis. There's no identifiable explanation other than it being one of those things that's simply inexplicable (unless, and the client firmly denies this, it had been turned off at the time of the incident). Our monitoring shows it was on shortly beforehand; we can't say the same for the moment of the incident itself, and candidly, I now think that's the explanation we have to go with. We're taking no action as a result.
As a consequence the atmosphere in company #1 has become somewhat more relaxed. The idea of visiting every site to check the kit and potentially swap it out and replace it was horrible, albeit one we had a contingency plan for (which you can call a**l if you like – I just call it cautious).
That contingency planning point has been the major focus of discussion since the incident was declared over. We've been reviewing – as a central management team, and bringing others in as well – just what could really go wrong and massively disrupt what we do, as this nearly did, to make sure we have all the angles covered.
Some are obvious. A fire at HQ would be a big problem – but we have contingency plans for that. It's silly things like this kit failure that are harder to predict.
No one can be sure they cover all those things. It's just higher up our radar now.
NB - The stars are there not because I'm prudish - but I bet your firewalls and filters are.