The most critical skill in aviation safety is making good decisions, both before flight, when time is plentiful, and in flight, when circumstances change and we may be rushed. The ability to generate and choose between diverse options, often with incomplete information and under time pressure, is essential to mitigating risk and achieving a safe outcome. This skill set is the central focus of the FAA Safety Program, which makes the Aeronautical Decision Making (ADM) module a core subject at every level. When coaching a student pilot to manage their expectations, I often compare each flight to a football game. The careful, essential plans made beforehand in the huddle are often outdated the moment the ball is snapped and the opposing team breaks through the line. Both environments are fluid by nature, and change is almost the norm. Pilots, like quarterbacks, must be ready to decide on the fly and embrace flexible decision-making. Suddenly it is time for a new plan and some fancy footwork! I recently completed the Stanford Strategic Management Course in decision-making and would like to share some insights from the business world that I think any pilot will find useful.

Risk management is essential in business and is a well-funded subject of research at business schools.

This Stanford program is highly regarded in the business community and has been proven under real-world pressure, earning many companies, most notably Chevron Energy, remarkable increases in efficiency (read "profits") from employing trained decision makers at all levels of management. Chevron has deployed over 4,000 trained decision makers inside the company and requires their participation in all higher-level choices, in the boardroom and in the field. The results are astounding.

The central take-away from this course is that without training we humans are pretty bad at making and evaluating decisions. Behavioral scientists have provided incontrovertible evidence that the human mind is "predictably irrational." We are, by nature, subject to an astonishing number of debilitating cognitive biases, and we tend to depend on results or outcomes to judge our decisions as good or bad. Using results to judge the decision might seem obvious and valid to most people as a best practice in life: "let's see how this turns out," and then iterate. This ongoing cycle of change and validation builds the heuristics, or "rules of thumb," by which we construct our lives and guide our future actions. Sometimes this is conscious and involves higher-order thinking processes, but most often it operates almost reflexively, built into our human operating system. Daniel Kahneman labels this "System One thinking" in his book Thinking, Fast and Slow, and he points out how the results usually escape review by the higher-level auditor of the conscious mind. I highly recommend this book for more depth on the subject (he did win a Nobel Prize for his work in this field, though it is a bit dense).

As a decision-making example, suppose you had a bit too much to drink at a party but, despite being impaired, drove home and arrived safely. Did this happy outcome validate your decision to drive? Absolutely not; it was still a bad decision. But unless you consciously address and understand this, you might develop a tolerance for this risk based on luck and persist in the behavior (we all know people who do). Conversely, suppose in the same impaired state you instead used a sober designated driver but ended up in an accident on the way home. Would this poor outcome make that a bad decision? Again no; the decision was sound, but circumstances beyond your control led to disaster. In summary, our decision-making process needs to stand completely independent of outcomes to be useful, and decisions can and should be evaluated entirely on their structure and internal merits (more on this in future posts). This is not how we usually conduct ourselves in life, and also, unfortunately, not how we proceed in our world of aviation.

Suppose we press on into deteriorating weather and make it home successfully by flying through lower-than-expected conditions, below personal minimums and maybe even on the edge of our comfort level. Without thinking much about it, human nature causes us to expand our range of what is "acceptable and safe" and enlarge our operating envelope. The successful result validates the poor decision and reinforces the erroneous behavior in the future. This process of "human accommodation" is built into our operating system and makes the exotic and unusual the "new normal." Unless we scare ourselves really badly, this new standard is welcomed into our comfortable repertoire quite quickly. Instead of evaluating the original decision against objective standards, the outcome validates the bad decision and it becomes part of our future operating instructions. Even worse, we might congratulate ourselves on our skill or cleverness and make the imprint even more durable. Seen in the light of day, this is "trusting to luck" in establishing our new SOP. If we stopped and analyzed it, we would clearly realize that "luck" should never be part of the planning process, and any formula involving "maybe" in aviation should be discarded. Unfortunately, this automatic reinforcement process escapes the higher-order thinking skills (Kahneman's "System Two") of analysis and evaluation. We develop a new heuristic to guide our actions without ever "deciding" at all. This is the classic erosion of standards we find so often in NTSB reports: supposedly smart pilots doing very dumb things! To improve our game, the correct procedure requires carefully and consciously evaluating every critical flight very soon after landing. It is especially important to ask "was that a result of skill or luck?" and put the focus back on the original decision process. We can stop this automatic reinforcement fairly quickly, but it takes time and discipline.

Closely related to this automatic decision trap, but operating on a higher, more conscious level, is "normalization of deviance." This is defined as "the gradual process through which unacceptable practice or standards become acceptable. As the deviant behavior is repeated without catastrophic results, it becomes the social norm for the organization." This was the major player in NASA's flawed decisions to launch shuttles with leaking O-rings and to accept foam shedding from fuel tanks. These occurrences became "acceptable practice" as the process kept working and generated a "new normal." In both accidents, creeping standards in a very conscious decision process opened the door to huge risks that led to the highly public Challenger and Columbia shuttle tragedies. Examine your own flying and life activities and see if some of these same forces are at play. As mentioned, most traps are embedded in the fabric of everyday experience and operate automatically on a subconscious level. Kahneman also points out that these forces are similar to optical illusions: even when we are aware of the correct answer, we are physiologically compelled down the wrong path. Good decisions require hard work and discipline.

I would really appreciate your comments on these ideas and perhaps your experiences with these tendencies. This is the subject of my talk at Sun 'n Fun in April, and hopefully more details on how to correctly evaluate decisions will follow. Stop by Forum Room 3 Thursday at noon if you are at the Florida show!

David blabs about safety!

7 thoughts on "Managing Risk in Flying: Cognitive Traps!"

  1. I see two cases of disregard for safety as it relates to flying, under the heading of bad decision making: 1) the human need for thrilling activity, and 2) disbelief of conditions.

    I think one overlooked aspect in your editorial is the human condition of seeking "thrill." One of the issues with breaking personal limits and decision making, I think, is the "thrill" element. By ducking under clouds and skipping along the mountain tops when one should have taken the first available airport, a certain level of thrill is added. Of course, the pilot registers how dangerous what they are doing is, and every time they do it successfully they still know it was dangerous. But there is that thrill, the challenge… Every time they do it they feel like they have conquered the beast, and they are somehow superior in skills to their counterparts who tell them they are taking risks. (Until the day the beast bites.)

    Every action must have some sort of reward (the results)… The reward for the continued risk in this case is that ego boost and dangerous thrills.

    In the other case, disbelief, it is simply that they refuse to believe what they are doing is wrong or needs intervention. This was the case with the NASA shuttle accidents. No action was taken because they refused to see an issue. They disbelieved…

    What was their reward? They didn't have to deal with the problem… it'd just go away on its own, right?

    In the end, as you mentioned, we need to move away from the Dr. Phil-like "How's that working for ya?" results-based thinking, and rather ask, "How close was I to stupid today, and what am I overlooking?"

    1. Yes, "thrill seeking" is a problem in the pilot population, and increasingly it seems to be aligned with showing off for the GoPro (who can film the wildest things in a plane?). This also aligns with basic ego satisfaction. I usually advocate the antidote of "calming the inner child," since we are definitely quelling some kindergarten-level needs here!

      This article was mostly directed at those who are trying to "do it right" and get trapped by our insidious mental apparatus. There is not much help for the accidents the NTSB codes as "O D" (ostentatious display). For them it is not "if" but just a matter of "when" they will end up a smoking hole… too bad.

  2. I also think there is reinforcement from other pilots when someone survives a stupid decision: "He is a good stick because he landed with two inches of ice on his wing." This of course sends the wrong message to students and low-time pilots, that "good" pilots make these decisions, when the pilots we never hear about are probably the ones who should be held up as examples.

    If I were in your seminar, David, I think I would bring up the point that the answer would be hard-and-fast personal minimums: "If I see conditions A and B, then I'm going to turn around and land." Decision made, no exceptions. Of course things are not that simple, and the question then becomes how you expand your personal minimums. If my minimum ceiling is 1,000 AGL and one day I try 500 feet and survive, I now might have a new personal minimum, and depending on how far I take this I fall into the cognitive traps you are talking about. Very complicated but very interesting.

    1. Good advice… hard personal minimums! The best advice I heard on this is the same: "never adjust your personal minimums in flight" (the temptation is to cheat). We are all prone to cognitive traps (or we would not buy lottery tickets…)

  3. For the sake of the discussion – a couple of comments about personal minimums and how they may change over time:
    – Personal minimums aren’t just weather related. In most airplanes, I will be on the ground with an hour’s fuel in the tanks. Don’t care if it’s 10 gallons in a Mooney or 38 gallons in an Aerostar. That’s a minimum I don’t invade in planning or in the air.
    – Night flight – If the autopilot isn’t working, VFR conditions only or a competent 2nd pilot along for the ride. Oh, I practice IFR at night all the time without the autopilot. Just want “George” or someone else with me at night when conditions warrant. 20 years ago, that wasn’t a personal minimum of mine.
    – How am I feeling? I never used to think about it much. These days, I’m much more cognizant of my mental and physical state.

    1. Great points from a wise pilot! Lots of "conditions" must be considered to arrive at a comfortable "minimum." Regulatory minimums are legal but rarely safe, and they cannot account for your currency, physical condition, or equipment issues. Play it safe too… a margin and viable alternatives are always necessary!

    2. One other — at least 12 hours bottle to throttle. Nothing to do with being able to physically fly. Eight hours is plenty. The extra four hours is for the day (which has occurred a few times in nearly 40 years of flying) when things go straight down the tubes and how sharp I am may make all the difference.
