
Three years ago, in the middle of a pandemic, I suggested pilots might have a few things to teach the wider world about risk. Managing weather is certainly not the same as managing messy public health problems, but some universal principles apply to all risky activities. I think most of those lessons have held up pretty well.

In 2023, Covid has rapidly receded from the headlines, but now artificial intelligence (AI) is here to kill us. That’s according to many prominent voices in the computer science community, and more than a few traffic-chasing news outlets too. Once again, I believe pilots have relevant experience to share on this topic. In fact, how to manage technology has been the defining aviation debate of the last 50 years.


Microsoft’s new tool is called Copilot. Does that mean we need to use CRM?

The parallels are not exact, but many of the discussions about AI safety will sound eerily familiar to any pilot who has read an aviation magazine in the last decade. Microsoft seems to acknowledge this, at least implicitly: their ChatGPT-powered tool is called Copilot. We might talk about autopilots and glass cockpits instead of large language models and neural networks, but the central concern is the same: who is really in control and what safeguards are needed?

(By the way, can we agree on a standard name for generative AI tools? We have “George” for autopilots, but I’m not aware of a similar consensus for ChatGPT. Do I chat with “Aida” or “Aiden?”)

The lessons from aviation’s man vs. machine struggle could fill a book, but here are five that are relevant to today’s AI debate.

1. Don’t ignore new technology. There is a natural (and understandable!) tendency to hide from fast-moving technology because it can be confusing or even scary. This is almost never a good idea. The way to minimize the risks and maximize the benefits of AI is to get educated and develop procedures for managing it, whether that’s government regulation or just personal preferences. If the good guys ignore the latest tech, that doesn’t mean it dies; it just means the bad guys own the category. Ignoring AI would be the worst of both worlds, since many people would still use it but without the training to use it wisely.

Cirrus provides a powerful example from the aviation world. Early in the airplane’s life, the whole airplane parachute system was installed but was not integrated into standard training and procedures. As a result, pilots had all the costs of the new system (including moral hazard) but with none of the safety benefits. Over time, the Cirrus community embraced the parachute, updated its training approach, and fatal accidents plunged. This is a great example of talking honestly about a controversial technology and then using it for good. Note: that doesn’t mean you have to fall in love with it, but you do have to engage with it.

2. Constantly update your opinion. What will AI look like in 2050? You can find hundreds of articles on this topic right now, but the only honest answer is: nobody knows. We should at least beware of our natural biases, including our tendency to be much better at imagining bad outcomes than good ones. Consider modern airline cockpit technology, which directly reduced the size of the crew in each airplane but ended up increasing the total number of pilots dramatically. A Lockheed Constellation often had four or five crew members up front for an Atlantic crossing, while a Boeing 777 has just two pilots. It was easy to look at the change and envision fewer pilots, but that’s not what happened. The Boeing makes flying much safer and more reliable, opening up the world of airline travel to millions more people than ever before. The long-term positive trend swamped the short-term negative one; be on the lookout for something similar with AI.

This reinforces the old advice, “when the facts change, you should change your mind.” A more recent aviation example of this is GA autopilots, which evolved from simple and often dangerous in the early 1970s to sophisticated and reliable today. Our training, habits, and procedures need to recognize this change and react if we want to take advantage of these new tools. AI may undergo similar changes, or it may go the other way, devolving from useful to dangerous. In either case, we have to work hard not to get stuck in old mindsets based on old facts.


The crash of Air France 447 reminds us never to let the technology get ahead of our own situational awareness.

3. Always maintain situational awareness. Modern AI is more like a hardworking intern than an all-knowing god, which means we have to provide constant direction and supervision to ensure it’s serving our larger purpose. If we don’t, AI will quickly produce results that are both completely believable and completely false. 

That might ring a bell for any pilot who has read the final reports for Air France 447 or Lion Air 610, accidents where the avionics got confused and the pilots never caught up. The way to prevent such a nightmare scenario is to maintain situational awareness at all times, a big picture perspective of where you are, where you’re going, and who’s in control. Steve Green has suggested here at Air Facts that we actually need to think in terms of two different situational awarenesses: one for the airplane and one for the automation. If they aren’t identical, it’s time to take immediate control. That’s great advice for managing any technology, even your smartphone.

4. Know when to break the rules. The sign of a true master, whether it’s in art, sports, or engineering, is the confidence to occasionally break from “best practices” and chart a new course. That is needed more than ever with the current crop of generative AI tools. While they are great for summarizing documents and answering questions, they can quickly become derivative and boring. The best creators might start with ChatGPT or Midjourney, but will often go far beyond them and be truly creative.

Even pilots, some of the most compliant and rule-following people on earth, must occasionally ignore the rules. FAR 91.3 reminds us that only the pilot in command is responsible for a safe flight, and that in an in-flight emergency they “may deviate from any rule of this part to the extent required to meet that emergency.” There are countless examples of this, and not all of them are as dramatic as the Miracle on the Hudson or the Gimli Glider. They should remind us that rules and structures exist to protect us, not for their own sake; when they are no longer doing what they were designed to do, it is time to change course.

5. Humans need to stay in the loop. No matter how impressive it is, all this technology only exists to work for the human who is using it, not the other way around. Too much press about AI (and even a lot about autopilots) tends to anthropomorphize the technology, as if it has its own desires or goals or relationships. That’s a terrible approach. We are not bystanders; we all have agency over what happens next with technology.

The overwhelming message from pilots is to maintain a “pilot in command mindset” at all times. That is definitely true when the autopilot is flying your Cessna, but it’s also true when the smart cruise control is driving your Chevy or when ChatGPT is writing your report. Only one participant gets to judge how things are going and make the final decision, and that is the human. Both of the tragic 737 Max crashes were the result of pilots ceding too much authority to the automation. In the end, they forgot the most important part of the PIC mindset: know how to turn the computer off when needed!

AI for aviation?

There are huge differences between a Garmin autopilot and artificial general intelligence, so we should be careful not to stretch this analogy too far. In particular, the generative approach of Bard or Stable Diffusion is fundamentally different from calculators or deterministic programs. But the lesson of history is that some larger themes are timeless, whether we’re talking about Roman soldiers or American astronauts, and that’s certainly true for technology. We’ve probably been debating the role of the human since the wheel was invented, and there is a lot to learn.


Garmin’s Autoland technology isn’t ChatGPT, but it is an example of how AI can help pilots.

History also reminds us that, amidst all the doom-mongering, there is the very real possibility that some of this new stuff might actually be good. That’s true of AI in general but also as it applies to aviation. Garmin’s Autoland is essentially an early version of aviation AI, considering a host of conditions from weather to airplane performance to airport facilities before making a decision, then flying a stable approach to landing. Future versions might not require an emergency—I would love to make better in-flight decisions with help from a virtual co-pilot that analyzes current conditions and suggests alternative strategies. Or how about an AI engine analyzer to interpret the CHTs and fuel flow and warn of looming engine trouble? Rest assured, companies like Garmin and ForeFlight are working on these kinds of problems.

As these examples show, technology is not an all or nothing choice. If you don’t trust the response ChatGPT gives you about a math problem, you can always plug it into an old calculator or do it by hand. If you don’t like what the autopilot is doing, try flying in HDG mode instead of fully-coupled NAV mode, or punch off the autopilot and hand fly using the flight director. Human skill augmented by AI is often the right balance, not a binary choice between 100% man or 100% machine. Just make sure to follow good crew resource management (CRM) habits, including the positive exchange of controls: while the computer might not understand it when you say “my airplane,” it can be a subtle reminder to yourself.

Finally, we should retain just a touch of humility. While technology can and does fail, humans aren’t exactly perfect. The vast majority of aviation accidents are caused by the person in the left seat, not some runaway computer. Terrain warning systems and datalink weather receivers (to take just two examples) have made flying safer, not by replacing the pilot but by filling in gaps in our skill set or providing essential information at the right time. Modern autopilots are preventing loss of control accidents with built-in envelope protection. There’s no reason to think that can’t happen again with the next generation of technology.

Humans are essentially hiring AI to work for us, and like any chief pilot interviewing a potential copilot, we should approach that relationship methodically: understand its strengths and weaknesses, clearly define its role, and watch it like a hawk, especially early in its career. Technology can be a great collaborator, but the human is the boss, the pilot in command. That means sometimes we need to say “my controls” and turn the darn thing off!

John Zimmerman
6 replies
  1. Scott R Winick says:

    John – The concept in your point #3, the need for pilots to maintain two forms of situational awareness, one for the airplane and one for the automation, is an excellent contribution. Thanks.

  2. Kenny says:

    Your comment on always maintaining situational awareness brings to mind the technology shift that occurred when pocket calculators (HP and TI) replaced slide rules. Those of us who went through engineering school with the ever present K&E Log Log Duplex Decitrig were amazed when the pocket calculator came out. We also realized that with an errant keystroke you could easily get a wrong answer that was accurate to 8 significant digits. The slide rule user had to mentally determine the order of magnitude of the answer, so there was a thought process required before you knew the final answer, and the user had to convince himself that the answer made sense. Similarly, the adoption of AI into aviation should make us all ask the question “does this make sense?” when acting on information that is provided through AI.

    • Leonard says:

      Your analogy of transitioning from slide rule to calculator hits home to me. When I entered GA Tech as a freshman, only the “rich kids” had HP-35 calculators. We members of the masses had our slip-sticks, which never had “battery exhaustion” problems halfway through the Physics Final. Eventually, when the price of calculators dropped, we transitioned, but we already had a well-developed “sense” of the answer we’d expect.

    • Greg Johnson says:

      John, great article.
      Kenny, spot on, and I remember well going from the slide rule to electronics, then from steel measuring tapes to electronic distance meters (EDMs), then from EDMs to GPS-based surveying, and the trend of changes continues. It always has to be proven and managed carefully.

  3. 47Th BLACKARCHERS says:

    Bravo Zulu.
    Post is of great value & of a lot of use.
    Let’s hope
    Artificial Intelligence will not turn into
    Accidental Involvement.
    Bravo Zulu
    VT – DER
    HAL PUSHPAK
    Flying Machine in history.

