Managing Risk in Flying: Cognitive Traps!

The most critical skill in aviation safety is making good decisions, both before flight when time is plentiful and in flight when circumstances change and we may be rushed. The ability to generate and choose between diverse options, often with incomplete information and under time pressure, is essential to mitigating risk and achieving a safe outcome. This skill set is the central focus of the FAA Safety Program, which includes Aeronautical Decision Making (ADM) as a core subject at every level. When coaching a student pilot to manage their expectations, I often compare each flight to a football game. The careful and essential plans we make beforehand in the huddle are often outdated the moment the ball is snapped and the opposing team breaks through the line. Both environments are fluid by nature, and change is almost the norm. Pilots, like quarterbacks, must be ready to decide on the fly and embrace flexible decision-making. Suddenly it is time for a new plan and some fancy footwork! I recently completed the Stanford Strategic Management Course in decision-making and would like to share some amazing insights from the business world that I think any pilot will find useful.

Risk management is essential in business and is a well-funded subject of research at business schools.

This program at Stanford is highly regarded in the business community and has been proven under real-world pressure. It has earned many companies, most notably Chevron Energy, remarkable increases in efficiency (read “profits”) by employing trained decision makers at all levels of management. Chevron has deployed over 4,000 trained decision makers inside the company and requires their participation in all higher-level choices, in the boardroom and in the field. The results are astounding.

The central takeaway from this course is that, without training, we humans are pretty bad at making and evaluating decisions. Behavioral scientists have provided incontrovertible evidence that the human mind is “predictably irrational.” We are, by nature, subject to an amazing number of debilitating cognitive biases, and we also tend to rely on results or outcomes to judge our decisions as good or bad. Using the results to judge the decision might seem obvious and valid to most people as a best practice in life: “let’s see how this turns out,” then iterate. This ongoing cycle of change and validation builds the heuristics, or “rules of thumb,” by which we construct our lives and guide our future actions. Sometimes this is conscious and involves higher-order thinking, but most often it operates almost reflexively and is built into our human operating system. Daniel Kahneman labels this “System One thinking” in his book Thinking, Fast and Slow, and he points out how the results usually escape review by the higher-level auditor of the conscious mind. I highly recommend this book for more depth on the subject (it is a bit dense, but he did win a Nobel Prize for his work in this field).

As a decision-making example, let’s say you had a bit too much to drink at a party, but despite being impaired you drove home and arrived safely. Did this happy outcome validate your decision to drive? Absolutely not; it was still a bad decision. But unless you consciously address and understand this, you might develop a tolerance for this risk based on luck and persist in the behavior (we all know people who do). Conversely, suppose in the same impaired state you instead decided to use a sober designated driver but ended up in an accident on the way home. Would this poor outcome make it a bad decision? Again, no: the decision was sound, but circumstances beyond your control led to disaster. In summary, our decision-making process needs to stand completely independent of outcomes to be useful; decisions can and should be evaluated entirely on their structure and internal merits (more on this in future posts). This is not how we usually conduct ourselves in life, and unfortunately it is also not how we proceed in our world of aviation.

Suppose we press on into deteriorating weather and make it home successfully by flying through lower-than-expected conditions, below personal minimums and maybe even on the edge of our comfort level. Without thinking too much about it, human nature causes us to expand our range of what is “acceptable and safe” and enlarge our operating envelope. The successful result validates the poor decision and reinforces the erroneous behavior for the future. This process of “human accommodation” is built into our operating system and makes the exotic and unusual the “new normal.” Unless we scare ourselves really badly, this new standard is welcomed into our comfortable repertoire quite quickly. Instead of evaluating the original decision against objective standards, we let the outcome validate the bad decision, and it becomes part of our future operating instructions. Even worse, we might congratulate ourselves on our skill or cleverness and make the imprint even more durable. Seen in the light of day, this process is “trusting to luck” to establish our new SOP. If we stopped and analyzed it, we would clearly realize that “luck” should never be part of the planning process and that any formula involving “maybe” in aviation should be discarded. Unfortunately, this automatic reinforcement process escapes the higher-order thinking skills (Kahneman’s “System Two”) of analysis and evaluation. We develop a new heuristic to guide our actions without ever “deciding” at all. This is the classic erosion of standards we find so often in NTSB reports: supposedly smart pilots doing very dumb things! To improve our game, the correct procedure is to carefully and consciously evaluate every critical flight very soon after landing. It is especially important to ask “was that a result of skill or luck?” and put the focus back on the original decision process. We can stop this automatic reinforcement fairly quickly, but it takes time and discipline.

Closely related to this automatic decision trap, but operating on a higher, more conscious level, is “normalization of deviance.” This is defined as “the gradual process through which unacceptable practice or standards become acceptable. As the deviant behavior is repeated without catastrophic results, it becomes the social norm for the organization.” This was the major player in NASA’s flawed decisions to launch shuttles with leaking O-rings and to accept foam shedding from fuel tanks. These occurrences became “acceptable practice” as the process kept working and generated a “new normal.” In both accidents, creeping standards in a very conscious decision process opened the door to huge risks that led to the highly public Challenger and Columbia shuttle tragedies. Examine your own flying and life activities and see if some of these same forces are at play. As mentioned, most traps are embedded in the fabric of everyday experience and operate automatically on a subconscious level. Kahneman also points out that these forces are similar to optical illusions: even when we are aware of the correct answer, we are still compelled down the wrong path. Good decisions require hard work and discipline.

I would really appreciate your comments on these ideas, and perhaps your own experiences with these tendencies. This is the subject of my talk at Sun N Fun in April, and hopefully more details on how to correctly evaluate decisions will follow. Stop by Forum Room 3 on Thursday at noon if you are at the Florida show!

David blabs about safety!