Two serendipitous things happened this week.

I.
In the first case, I was in a virtual meeting with four other test professionals (friends, really), and our conversation wandered pleasantly, like a river in the flatlands. In the course of our discussion, something stood out. I’ll rephrase it in my own words, if only because I didn’t capture it accurately when I heard it: everybody in the Department of Defense is doing autonomy and AI, but does anybody know who all is doing it? The speaker targeted the acquisition community in particular, but the question applies equally to agencies and organizations that are not strictly part of “test.”

The last issue of FTSF was about “lessons learned,” and the question above goes right to its heart: how are we ensuring that we talk to the right people, or do the right “literature review,” as we conduct this kind of test (autonomy and AI)? I put this question to two of the panelists from December’s podcast, a panel discussion about AI, autonomy, and flight test, which resulted in the second serendipity.

II.
In separate correspondence with two of the podcast guests, I heard analogous opinions suggesting that we are not “organized” for this kind of test and evaluation. According to WigB: “Our organizational structure for Test doesn’t help us.” Avery phrased it differently but said something similar: “Part of the problem seems to be that the topic of community involvement and collaboration—it’s not the primary focus for most organizations doing this work. There is not currently a lot of funding or manpower available, so everyone has to focus on the core mission and work. Therefore, no one has much time or resources available to support collaboration efforts.”

I’ll jump rapidly to my conclusion: a literature review may not be appropriate for this kind of test. I think we have, inadvertently, focused on the term to our detriment. Instead, we should be talking to people. Ultimately, I think the Air Force Test and Evaluation Summit (as discussed in my winding conversation above) got it right when it emphasized people over process.

So skip the literature review, and go sit down with someone. It may be a conversation about how we test AI or autonomy, or perhaps you’ll ask about how we are organized for test. It may be about the acquisition process or private-industry research and development. Finally, I hope it includes a mention of this newsletter, because at least one of the podcast guests had never heard of it. Talking to someone will benefit you, and it will help us Reach Everyone.

