Mathematical Dependencies: The Hidden Killer

A commercial aircraft engine shuts down in flight roughly once in every 50,000 hours of operation. That is far more hours than a pilot will see in a lifetime of flying. For all practical purposes, a single engine shutdown is rare. Still, most commercial aircraft flying today have a second engine, again with a 1-in-50,000-hour failure rate. To refer to an earlier article in this series, the airline is playing the game with two very reliable dice.

What are the odds of both engines shutting down during the same hour of flight? Assuming each failure is independent of the other, the math is P(A and B) = P(A) * P(B). That works out to losing both engines once in every 2.5 billion hours of flight. Given that, Sully Sullenberger and Jeff Skiles must have thought they had the unluckiest day ever. Both engines rolled a one, shutting down within seconds of one another. It was indeed a rare event, but not as rare as two independent engine failures would be. Both engines failed from a common cause: a flock of very large birds. In this case, the probability of both engines failing was not slightly but considerably higher than that of two independent failures. Mathematically, P(A and B) > P(A) * P(B). This is what we call mathematical dependency.
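The arithmetic above can be sketched in a few lines. The 1-in-50,000 rate comes from the article; the common-cause probability below is a hypothetical placeholder chosen only to show the effect, not real bird-strike data.

```python
# Independence arithmetic from the text, plus an illustrative
# common-cause term showing why P(A and B) can exceed P(A) * P(B).

P_ENGINE_FAILURE = 1 / 50_000  # per-hour shutdown rate for one engine (from the article)

# If the two engine failures were truly independent:
p_both_independent = P_ENGINE_FAILURE * P_ENGINE_FAILURE
print(f"Independent dual failure: 1 in {1 / p_both_independent:,.0f} hours")
# -> 1 in 2,500,000,000 hours (the "2.5 billion" in the text)

# A shared hazard (e.g., a flock of birds) adds a common-cause term.
# p_common is hypothetical, picked only to make the dependency visible.
p_common = 1 / 1_000_000
p_both_dependent = p_both_independent + p_common
print(f"With a common cause:     1 in {1 / p_both_dependent:,.0f} hours")
```

Even a modest common-cause term dominates the product of the two individual rates, which is why the dependent system is orders of magnitude less reliable than the independence math suggests.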

We system designers try our best to play the game of life with multiple dice. We set two alarm clocks. We double-check that we have locked the front door. We back up our most valuable electronic data. Redundancy and recovery are central concepts in everyday life, including high-reliability operations. We have two pilots in the front of the aircraft, not because it takes two pilots to fly, but because we want a second pilot should the first go into cardiac arrest while flying. We have two nurses hang blood just in case the first nurse makes a patient identification error. We've learned, where possible, not to put ourselves only one human error or one equipment failure away from harm.

With good design, the problem shifts from fear of individual failures to fear of mathematical dependencies. Two engines on an aircraft is good design; a common cause that knocks out both engines is not. When we're highly reliable it's because, in general, we play the game with two, three, or even four dice. We make use of mathematical independence. Yet dependencies still lurk in the shadows, and they are not always easy to see. It is up to the system designer to spot them early, before they lead to harm.

The most common dependency is when two otherwise independent failures become linked together by a significant, but generally unrecognized or unavoidable, common cause. Think of your two telephone alarms (set for 6:00 am and 6:15 am), linked together by a common battery failure, or a common misunderstanding of the actual time of your morning meeting. It may look like you set two independent alarms, but there are clearly dependencies in play.
A second type of dependency is even more insidious. Consider the Joint Commission's requirement for two independent patient identifiers before the start of any test or procedure. The Joint Commission intends that three dice be in play. To get it wrong, you as a healthcare provider would first have to target the wrong patient, either by walking into the wrong room or by heading to the wrong bed in a multi-bed room. Second, you'd have to fail to detect that the name does not match the patient you are looking for. Finally, you'd have to miss that the date of birth does not match your records. Three dice, all looking independent: target the wrong patient, fail the name check, and fail the date-of-birth check.

The problem is our human propensity to drift. The more comfortable we become with the patient, the less our risk monitors see the risk of misidentification, and the more we drift into skipping a step. Remember the last article: give me even a tiny incentive to deviate from a rule, and in the absence of perceived hazards or threats, I will likely choose to cut the corner. So a nurse or a doctor begins to drift. They choose the at-risk behavior of skipping the confirmation of the patient's name. One at-risk behavioral choice takes out one of the three dice. Or does it?

Mathematical dependencies: the hidden killer, remember? When a nurse or doctor drifts into that choice not to confirm the name, what have they done with the third die, the date-of-birth confirmation? What we know is that virtually every time a nurse chooses to skip checking the patient's name, they also choose to skip checking the date of birth. Natural and predictable human drift, through one misinterpretation of risk, takes out both "independent" identifiers. They are called independent, but this is only true when both checks are intended. One temptation to cut a corner wipes out both identifiers. Now, by one at-risk choice, instead of three dice we're down to one.
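The collapse from three dice to one can be illustrated numerically. All three probabilities below are invented for illustration; the point is the structure of the comparison, not the specific values.

```python
# Hypothetical probabilities, chosen only to illustrate how one
# at-risk choice collapses two "independent" identifier checks.
p_wrong_target = 0.01  # chance of approaching the wrong patient
p_miss_name    = 0.05  # chance a performed name check fails
p_miss_dob     = 0.05  # chance a performed date-of-birth check fails

# Three dice in play: all checks performed, failures independent.
p_harm_intended = p_wrong_target * p_miss_name * p_miss_dob

# Drift: the provider skips the name check, and (as the text notes)
# virtually always skips the date-of-birth check along with it.
# Both "independent" checks fail together with certainty.
p_harm_drift = p_wrong_target * 1.0 * 1.0

print(f"All checks performed: {p_harm_intended:.6f}")
print(f"After drift:          {p_harm_drift:.6f}")
```

With these illustrative numbers, one behavioral choice raises the chance of harm by a factor of 400, because the second and third dice never get rolled at all.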
Leaders and safety specialists will often tour their operations looking for what James Reason calls the "swamps": the latent preconditions for human error. Find those preconditions and eliminate them, and you reduce the rate of human error. Reduce the rate of human error, and the system becomes safer. Consider this the undergraduate work of safety science. The graduate work is taking that same tour looking for mathematical dependencies. Where are the human and hardware components in my new "highly reliable" system susceptible to common-cause failures? Where can drift among the human components wipe out multiple dice through one misinterpretation of risk? If you learn to design and operate highly reliable systems, your root cause analyses will begin to focus more and more on the mathematical dependencies that might have been in play.

If you take a look at many of the world’s catastrophic accidents, from birds taking out aircraft to tsunamis taking out nuclear power plants, you’ll find those systems weren’t as robust as we system designers might have thought. Take a closer look and you’ll see unanticipated and unwanted mathematical dependencies. Look at how you set your three alarm clocks for tomorrow morning’s critical meeting, and you might be surprised how many dependencies exist in your plan.

David Marx
CEO, Outcome Engenuity

