Ben Luther and Jeff Canclini

The 2019 Flight Test Safety Workshop featured a number of papers in the field of risk science, including a presentation of Douglas Wickert’s paper, which had earned him honors at SETP’s Symposium. Risk science has evolved, perhaps into the topic du jour, and it may offer insight into our Chairman’s question: why has the flight test accident rate not decreased over the last decade? This stands in contrast to the prevailing improvement in aviation safety overall.

The workshop was the catalyst for many more in-depth conversations. One such discussion has continued for almost a year between Jeff Canclini and Ben Luther. Together, they have responded to Wickert’s Risk Awareness presentation with a shared desire to leverage risk science in their organizations. They share their exchange in this forum as a first step toward raising awareness of risk science and as a reason to distribute Barham and Hughes’ paper, “A Different Perspective: Why Flight Test is Distinctively Complex.” So here is your chance to eavesdrop on their commentary.

Canclini: Some of the people I’ve talked with about Wickert’s “Risk Awareness” paper are skeptical of it or overwhelmed by its complexity.

Luther: I have had a similar experience presenting ideas around complexity: both genuine interest and doubt, with people telling me it uses big words. I try to be a conduit for this work to diffuse into the flight test profession.

Canclini: My initial take is more positive, perhaps because I was already inclined toward Colonel Wickert’s aim of finding improved methods for dealing with uncertainty and randomness. I leaned this way after reading two relevant books: Nassim Taleb’s “The Black Swan” and “Decision Traps: Ten Barriers to Brilliant Decision-Making and How to Overcome Them” by Russo and Schoemaker. I also gave presentations at the 2017 FTSW and the 2011 SETP European Symposium about applying some of Taleb’s ten principles for a “Black Swan-Proof World” to flight test.

Luther: I am also very positive about it. I see this as a burgeoning field for aviation, as well as personal transport, renewable energies…really all the contemporary technologies. Wickert’s work is academic in nature, and unfortunately, some find that intimidating. We shouldn’t be intimidated by experts; rather, we should celebrate and thank them for their pursuit of science and knowledge. That opens an opportunity for you and me to interpret their work and bring it to our profession.

Canclini: Col Wickert’s approach in his Risk Awareness paper elegantly applies some of Taleb’s “Black Swan” tenets to flight test. Many of its observations and strategies correlate well with “Why Flight Test is Distinctively Complex,” written by Starr Hughes and retired LM Fellow Bob Barham. However, Hughes and Barham don’t offer prescriptive strategies for avoiding the “chaotic” domain (also known as pure uncertainty). Wickert’s paper takes this on; he addresses the increasing complexity of the SUT (system under test) and a flight test accident rate that hasn’t gone down over the last decade. With that in mind, the TPS curriculum revamp presentation asked how best to prepare testers for systems that manifest themselves like the “internet of things.”

Luther: I hadn’t come across Barham and Hughes’ paper before. It is good, and I used it this week as a resource to explain to new FTEs the difference between a TRR (Test Readiness Review) and an SRB (Safety Review Board). Rather than looking for strategies in their paper, I found justifications for our current practices: the delineation between a TRR and an SRB that is standard practice in flight test, though not universally implemented. I was able to explain the difference using the Cynefin model: a TRR for the Complicated domain, with an added SRB for the Complex domain. That places the 2D Risk Assessment Matrix as a gate between the two. It is a lovely model for what flight test professionals already do, which made it an excellent tool for teaching the TRR and SRB. I didn’t need the paper for strategy; instead, I used the academic principles.
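
To make that delineation concrete, here is a minimal sketch in Python. The domain names come from the Cynefin framework, while the gate assignments simply restate Luther’s teaching aid above; they are not a published standard.

```python
# A minimal sketch of the TRR/SRB gating described above, restated as data.
# The gate assignments paraphrase the discussion; they are a teaching aid only.

REVIEW_GATES = {
    "complicated": ["TRR"],     # knowable cause-effect: time, data, and experts suffice
    "complex": ["TRR", "SRB"],  # cause-effect unknown in advance: add the SRB
}

def reviews_for(domain: str) -> list[str]:
    # The 2D Risk Assessment Matrix sits as the gate between the two paths.
    return REVIEW_GATES.get(domain.lower(), ["reassess the framing"])

print(reviews_for("complex"))  # -> ['TRR', 'SRB']
```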

Canclini: Wickert’s four heuristics (rules of thumb) for flight test, together with his four prescriptions for cultivating risk awareness, encompass all of the domain quadrants in the Cynefin model presented in Barham and Hughes’ paper.

Luther: I really like his observation that the tools in use within flight test are not wrong, just incomplete. This complements an idea that I’ve had: that we overuse the 2D Risk Matrix, slapping it against everything in the field, even in places where it isn’t the optimum tool.
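
For readers who haven’t seen the tool being critiqued, a generic sketch of a 2D Risk Matrix follows. The category labels and level boundaries are illustrative assumptions, not any organization’s actual standard, and the sketch shows the built-in limitation: assigning a likelihood at all presumes a knowable Cause-Effect relationship.

```python
# A generic 2D risk matrix sketch: likelihood x severity -> coarse risk level.
# Category labels and level boundaries are illustrative assumptions only.

LIKELIHOOD = ["improbable", "remote", "occasional", "probable", "frequent"]
SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]

def risk_level(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair onto a coarse risk level."""
    score = LIKELIHOOD.index(likelihood) + SEVERITY.index(severity)
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

# Estimating 'likelihood' presumes a knowable cause-effect relationship,
# which is why the matrix fits the Complicated domain better than the Complex one.
print(risk_level("occasional", "critical"))  # -> 'medium'
```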

Wickert’s Flight Test Heuristics

Canclini: Col Wickert’s Flight Test Heuristics (rules of thumb) made me think along the following lines:

I. Keep it simple.

This is the same suggestion Nassim Taleb offered in “The Black Swan.”

Luther: That is my go-to. At every hazard identification session, TRR, and SRB, I ask: how can we make this simpler? How can we sever causality? How can we bound the outcomes? How can we break up this interdependent system? Sometimes, I look to the sources of energy first, an application of the Energy and Toxicity Analysis tool, a hazard identification technique from some decades back. It calls for users to identify the sources of energy and follow those. If there isn’t enough energy to hurt anyone, that answers the question.
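
A hypothetical illustration of that “follow the energy” screen appears below; the energy inventory and the injury threshold are invented for the sketch, not validated criteria.

```python
# A hypothetical "follow the energy" hazard screen. The threshold and the
# energy inventory are invented illustration values, not validated criteria.

INJURY_THRESHOLD_J = 30.0  # assumed energy level below which injury is implausible

energy_sources_j = {
    "battery pack": 5_000.0,
    "pressurized line": 250.0,
    "hand-held sensor": 2.0,
}

for source, energy in energy_sources_j.items():
    if energy < INJURY_THRESHOLD_J:
        print(f"{source}: not enough energy to hurt anyone; no further analysis")
    else:
        print(f"{source}: follow this energy path and identify hazards")
```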

II. Slower is faster.

Canclini: I seem to recall test pilot Rogers Smith offering that maxim as “the sniper’s creed” in a paper or TED talk years ago.

III. Seek contrary data.

Canclini: One of the best books I read about the importance of framing and bias was “Decision Traps: Ten Barriers to Brilliant Decision-Making and How to Overcome Them.” I was lucky enough to attend a workshop given by the authors, and it humbled me. I never realized how bad we (humans) are at making good decisions, whether by failing to account for bias, giving too little weight to contrary data, or framing a problem incorrectly.

Luther: I was interested to learn that ETPS now teaches cognitive bias. That is a rich vein for personal reading and improvement.

IV. Surprises are warnings. 

Canclini: I agree, although it’s important to understand that in the “uncertainty” domains, randomness means there may not be any warnings to act on or look for. If we are only vigilant for warnings, we can get “fooled by randomness,” a theme repeated often in “The Black Swan.”

Luther: I liked Taleb’s “Fooled by Randomness.” I thought it was better than the original Black Swan book, though you do need “The Black Swan” as an introduction. The Cynefin construct in Barham’s paper complements this wonderfully and gave me better insight via the Cause-Effect relation: Simple has a known, singular, concise Cause-Effect; Complicated has an indirect, one-to-many, but known Cause-Effect; and Complex has a many-to-many Cause-Effect relationship that is unknown in advance. That leaves Chaos as the state where the Cause-Effect relation is unknowable, even retrospectively. That explained to me why there may be no warning in a Chaos state, and why a surprise is a warning that you assumed the wrong state.
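
Luther’s Cause-Effect reading of the four domains can be restated as data. The sketch below paraphrases the exchange above as a teaching aid; it is not an authoritative definition of the Cynefin framework.

```python
# Luther's cause-effect reading of the Cynefin domains, restated as data.
# The wording paraphrases the discussion above; a teaching aid only.

CAUSE_EFFECT = {
    "simple": "known, singular, concise",
    "complicated": "indirect, one-to-many, but known",
    "complex": "many-to-many, unknown in advance",
    "chaotic": "unknowable, even retrospectively",
}

def expect_warnings(domain: str) -> bool:
    """In Chaos there may be no warning at all; a surprise is evidence
    that you assumed the wrong domain."""
    return domain != "chaotic"

for domain, relation in CAUSE_EFFECT.items():
    print(f"{domain}: cause-effect is {relation}; warnings expected: {expect_warnings(domain)}")
```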

Cultivating Risk Awareness

Canclini: Wickert also writes about Cultivating Risk Awareness. Everyone in flight test should aspire to that. Wickert had four points on that subject as well:

I. Identify and characterize the nature of the unknowns.  

Canclini: This is the foundational tenet of Russo and Schoemaker’s “Decision Traps,” i.e., framing the problem accurately. In the case of Wickert’s paper, the framing is whether one is in the risk or the uncertainty domain.

Luther: Yes, part of clearly understanding the problem. We should be identifying the Cynefin Framework context domain as standard practice. We should hear the following in the office: “Boss, this abc problem is located in the complicated domain and can be solved with time, data, and experts.” Alternatively, “Boss, this xyz problem is located in the complex domain, and no (reasonable) amount of data will help.”

II. Reduce the reducible ignorance.  

Canclini: Leadership commitment to plan and test appropriately. Taleb says to do this by ensuring that “every captain goes down with his ship.”

III. Democratize safety decision making.  

Canclini: This is similar to Taleb’s admonition to eliminate the “agency effect,” which I discussed in my 2017 FTSW presentation.

Luther: The agency effect is certainly a problem. But I’ve experienced too much safety democracy as well. Facebook culture invites comment from everybody, which is distracting. I see a need to focus on those with skin in the game and those with real expertise. Of course, balance is the key to the art: you also don’t want biases, and a broad range of inputs is a valuable defense against them.

IV. Resist drift. 

Canclini: All the authors above speak to the importance of this through education and appropriate leadership.

Luther: This is the big cultural challenge for flight test organizations: the role of the safety officer is to be perennially paranoid, able to push back on drift continuously.

Canclini: These are admittedly, by themselves, “common sense” strategies. So the most important question is: can flight testers do better than the status quo using any of the paper’s strategies? Some would say “no.” Rather, use the tools we have (e.g., FFRR, SRB, GMP, THA, ORM, TSM, RMPs) in a better manner, and alleviate overworked and under-resourced test teams.

Luther: I focus on teaching the presence and limitations of the existing tools, on the theory that if you have a bigger selection of tools available, and an understanding of their limitations, then a better tool selection will be possible. I’d like stronger tools, but I don’t think they will be coming any time soon, because we operate in the Complex environment, where fundamentally we cannot know the Cause-Effect relationship. Consequently, no reasonable amount of data will help.

Canclini: I believe employing the strategies above could augment our existing tools/processes and improve flight test safety in the complex arena.  In addition to the strategic steps listed above, practical applications could include: 

- Having an initial step that frames every hazard into the risk or uncertainty domain (Wickert).
- Planning groups to include sessions on “what could happen,” with contrarians as a part of the process.
- Introducing the concepts in the Barham and Hughes paper.

Luther: Yes, I agree. Five years after leaving the military, I now realize why the storytelling culture was so important for aviation safety: it conveyed pattern matching for handling risk in complex domains.

Canclini: More intriguing for me is the potential role of AI and quantum computing in increasing knowledge even in the “pure uncertainty” domain. The week after the FTSW, I attended an LM conference on the impact of artificial intelligence/deep learning and quantum computing. I learned that AI is already employing strategies no human would ever consider in highly complex environments, and that quantum computing may soon provide predictions in areas that we currently deem highly uncertain. AlphaGo, for example, was a self-taught computer. I wonder whether future systems under test could be modeled accurately enough, and whether AI could offer better plans and warnings.

Luther: I think your phrase “modeled accurately enough” is key. I’m no AI or big data expert; my understanding is limited to the application of the Cynefin model, which I found instructive. I know that in Complicated-domain problems, big data is key. We understand the relationships, even though they are one-to-many and extended. We know the Cause-Effect, so more data and more time lead to better modeling and tighter answers. But in the Complex domain, we don’t know the relationships, so we can’t program them. I presume there is some scope for AI in this case, though I don’t know. Pattern matching is used by both AI and humans in solving Complex-domain problems, so that is promising. I’d be concerned about the time frames involved, since the AI programming task is enormous. I was privileged to witness an AI lecture where an MIT professor explained her work with mammograms. It was humbling, though that work took place over a decade, using thousands (millions?) of data points. I have hours or days to solve my tasks, and two prior experiences as my set of data points.

Editor’s note: Included as an attachment, with the authors’ permission, is a paper by Bob Barham and Starr Hughes, A Different Perspective – Why Flight Test Is Distinctively Complex, which complements the discussion above.

Download A Different Perspective – Why Flight Test Is Distinctively Complex.

Video: A Different Perspective – Why Flight Test Is Distinctively Complex.
