Autonomous control systems—what does it really mean for aviation?

This 1,200-word piece is not intended to be a manifesto of advocacy for autonomous aircraft. Its purpose is to explain what autonomous control systems in general do, and—at a very high level—how they do it. It is not a blueprint for how to build one. But if nothing else, it shows that real people like me actually have considered these things, and have figured out ways to do them successfully.

What does “autonomous” mean?

“Autonomous” is a hot buzzword in aviation these days. While often reviled, it’s widely misunderstood, and even misused—as in the case of one of my favorite oxymorons: “semi-autonomous.” In the most basic sense, autonomous means “in charge.” That does NOT mean “not subject to rules.” (There’s a difference between autonomy and anarchy.) But autonomous does mean possessing full authority to operate within the rules—authority that cannot be usurped by any external authority, at any time. No little “I’ve got it” button; no remotely-piloted operations. A true autonomous aircraft has no cockpit; no displays, buttons, knobs, or levers; no flight controls. And of course, no pilots.

How do autonomous control systems work?

A350 cockpit
Sophisticated avionics? Yes. Autonomous? No.

Despite all of the pious talk about go/no-go decisions, in the real world, each flight that any of us makes is a continuous string of decisions that amount to this:

  • Continue with the plan, OR
  • Modify the plan

Someone—or something—has to make those decisions, in real time. Pilots do that. Autonomous control systems do that, too. “Autopilots” do NOT make decisions—they just do what their pilots tell them to do. Those two defining characteristics are mutually exclusive: one decides; the other merely executes. If you’re going to build an autonomous aircraft control system, autonomy has to be baked in from the very beginning. You cannot add it in later.
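
That continuous string of decisions is simple enough to sketch in a few lines of code. Everything here—the function names, the impediment strings—is a hypothetical illustration, not any real system’s API:

```python
# Hypothetical sketch of the continuous decision loop described above.
# At every decision point there are exactly two outcomes:
# continue with the plan, or modify the plan.

def decide(impediments):
    """One real-time decision cycle."""
    if not impediments:
        return "continue with the plan"
    return "modify the plan"

def run_flight(observations):
    """observations: the impediments seen at each successive decision point."""
    return [decide(imps) for imps in observations]

# A flight is a continuous string of such decisions:
decisions = run_flight([[], ["thunderstorm directly ahead"], []])
```

Whether a pilot or a machine makes it, each cycle reduces to the same binary choice; the hard part is everything hidden inside `decide`.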

Machine decision-making already is mature, everyday technology. It is successfully employed in applications that are far more complex than flying an aircraft. But let’s consider machine decision-making in an aviating context.

I hope that every pilot among the readership can agree that the highest priority should be the safety of the aircraft and its contents (people and things).

  • Axiom number 1: Safety is the highest priority of an autonomous aircraft control system.

Next, I suspect that we all can agree that getting from point A to point B also is of great importance.

  • Axiom number 2: Successful completion of the flight at its intended destination is the second-highest priority of an autonomous flight control system.

How do pilots do this? They utilize available resources to mitigate (overcome) any impediments that can prevent them from safely getting from point A to point B.

An autonomous control system does it the same way.

What are “resources”? Generally, abilities and/or things that endow abilities. The aircraft can climb and descend; it can turn, and vary its speed. It may have a surplus of fuel on board, etc.

What are “impediments”? Generally, rules, performance limitations, and circumstances that interfere with our objectives; they can pop up at any time. Examples: restricted airspace; the aircraft’s service ceiling; a thunderstorm directly ahead; unanticipated winds; an engine failure; runway closures; medical emergencies.

What does “mitigate” mean? Take action that prevents an impediment from thwarting our mission. Examples: fly around the thunderstorm; stop for extra fuel, etc.

  • Axiom number 3: Autonomous flight control systems utilize available resources, to mitigate impediments that could jeopardize the safe completion of a flight.

Human pilots don’t train for every conceivable combination of circumstances; machines don’t need to be programmed for that, either. Impediments to the Plan are ad hoc; mitigations are ad hoc.
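
As a sketch, the three axioms amount to a small matching problem: pair each impediment with a mitigation that the available resources can support. The table and all names below are invented purely for illustration—a real expert system would hold a far richer rule base:

```python
# Hypothetical illustration of axiom 3: use available resources to
# mitigate impediments. The mitigation table is made up for this sketch.

MITIGATIONS = {
    "thunderstorm directly ahead": ("maneuvering capability", "deviate around the cell"),
    "unanticipated headwinds":     ("fuel stop available", "stop for extra fuel"),
    "destination runway closed":   ("surplus fuel", "divert to the filed alternate"),
}

def mitigate(impediment, resources):
    """Return a mitigation action if a required resource is on hand, else None.

    Returning None means the plan must change more drastically; per axiom 1,
    safety always outranks completing the flight (axiom 2)."""
    required, action = MITIGATIONS.get(impediment, (None, None))
    return action if required in resources else None

plan_b = mitigate("destination runway closed", {"surplus fuel", "maneuvering capability"})
```

Because impediments and mitigations are ad hoc, the value of this structure is that new rules can be added without touching the decision machinery itself.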

Global Hawk
The transition from remotely-piloted to truly autonomous requires a different approach to software.

System architecture: I’m going to surprise many by eschewing the “diversity = innovation and progress” model, in favor of a “consistency = reliability and safety” model. A properly-designed system architecture will permit every vehicle—from an S-LSA to a Boeing 747—to employ the exact same code base, so that the behaviors of every vehicle will be predictable and reliable. Thus, any autonomous aircraft control system designed by YARS would be a classic Expert System. No “Machine Learning.” Unfortunately, space constraints prevent me from including a detailed treatment of this here and now. Perhaps in a follow-up guest blog…
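
One way to read the “same code base for every vehicle” idea: the decision logic is identical everywhere, and only a per-vehicle performance envelope varies. A minimal sketch, with made-up field names and figures:

```python
# Sketch of "consistency = reliability": one shared rule, many envelopes.
# The Envelope fields and all numbers are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    service_ceiling_ft: int
    max_cruise_kt: int

def can_top_weather(env: Envelope, cloud_tops_ft: int, margin_ft: int = 1000) -> bool:
    """Identical decision code for every vehicle; only the envelope differs."""
    return cloud_tops_ft + margin_ft <= env.service_ceiling_ft

SLSA = Envelope(service_ceiling_ft=12_000, max_cruise_kt=120)
B747 = Envelope(service_ceiling_ft=45_000, max_cruise_kt=500)
# Same code base, predictable per-vehicle behavior: facing 30,000-ft tops,
# the S-LSA must deviate laterally; the 747 may climb over them.
```

Keeping the rules identical and the parameters separate is what makes every vehicle’s behavior predictable from a single, well-tested body of code.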

Questions and doubts

“It’ll never work. Flying an airplane is too complicated for a machine to do it.” Have you seen this video?

What if it breaks? The aviation industry already builds—and relies upon—“can’t fail” computer technology, in straightforward, everyday fly-by-wire control systems. That same level of reliability (contrast with redundancy) would have to be a characteristic of any certificated autonomous aircraft control system, whether hooked up to an FBW or to a good old control-cables-and-servos aircraft.

Separation anxiety: The objective of Air Traffic Control is separation, so let’s not conflate that end with some specific means. It is both stunning and demoralizing to realize that the FAA’s multi-billion-dollar “Next Generation” air traffic control system still relies upon voice communications conducted on simplex open-frequency VHF channels—the mother of all party lines. The FAA already has admitted that ADS-B is insufficient even to handle just the “surveillance” part of the ATC task, in a sky filled with literally millions of drones, large and small. Future traffic separation will be both automated and distributed. In other words, aircraft will “work it out” with other aircraft that present an airspace-occupation conflict. But voice recognition and synthesis already are everyday technology—aren’t they, Alexa? So, if an autonomous aircraft control system had to communicate with classic ATC via good old simplex VHF, it could.
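
A toy sketch of what “working it out” between two aircraft might look like. The tie-break rule and all numbers are invented purely for illustration; this is not any real protocol (and not TCAS):

```python
# Hypothetical pairwise conflict resolution: two aircraft negotiate vertical
# separation directly, using an arbitrary, deterministic tie-break rule
# (lower ID gives way).

MIN_SEP_FT = 1000

def resolve(alt_a, id_a, alt_b, id_b):
    """Return (new_alt_a, new_alt_b) with at least MIN_SEP_FT of separation."""
    if abs(alt_a - alt_b) >= MIN_SEP_FT:
        return alt_a, alt_b                   # no conflict: keep both plans
    if id_a < id_b:                           # A gives way, descends below B
        return alt_b - MIN_SEP_FT, alt_b
    return alt_a, alt_a - MIN_SEP_FT          # B gives way, descends below A
```

The point of a deterministic rule is that both machines can compute the same outcome independently—no central controller, and no party line, required.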

Why would you build an autonomous aircraft control system–or not?

“Not everything that CAN be done, SHOULD be done.”

Why would anyone design, certify, and build an autonomous aircraft control system, which–by definition–would eliminate the cockpit and the flight crew from an aircraft?

Three reasons immediately come to mind:

  • Increased safety. Pilot error is the leading cause of aircraft accidents. Eliminate the pilots; eliminate their learning curves and their errors; improve safety. Consider the MCAS fiasco and the Kobe Bryant tragedy.
  • Lower cost. No direct expenses or training costs for flight crews.
  • Increased access to aviation. Autonomous aircraft would democratize aviation. You wouldn’t need to be–or to hire–a pilot, in order to fly.

Why would anyone NOT design, certify, and build an autonomous aircraft control system, which–by definition–would eliminate the cockpit and the flight crew from an aircraft?

  • Regulatory hurdles. Thomas Edison’s electric light bulb didn’t meet the regulatory requirements for candle-wax purity.
  • Labor objections and resistance. Don’t take away my job! And fanning the flames of these next two:
  • General-public anxiety. Everything from Chicken Little to SkyNet.
  • Passenger anxiety. Two things will quickly mitigate this: successful cargo service, and low seat prices.

Mama, don’t take my Kodachrome away

From my cold dead hands! The advent of autonomous aircraft will NOT result in the prohibition of our beloved hand-flown aircraft–any more than the advent of automatic transmissions resulted in the prohibition of stick shifts and clutch pedals. And I wouldn’t want it to. I enjoy abusing an airplane, just as much as the next guy/gal does. Most of all, I still treasure training new pilots. It’s the most fun that I still can have with my clothes on.

35 Comments

  • Hi, Thomas,

    I’m sorry, but I have to disagree. I do not want autonomous aircraft, ships, or automobiles. I’m a pilot and a retired computer systems engineer, and I’ve been dealing with these things for 40 years now. Let me state it plainly: no computer system is intelligent; that is, sentient and capable of thinking. There is not a computer system in the world that has yet had an original thought.

    Yes, it’s perfectly possible to make a computer ‘compose’ something that sounds like Mozart, by analysing all of his compositions and then emulating them, but that is not an original thought, it is merely emulation according to certain rules of composition, which is not how a human being composes music.

    Artificial Intelligence is NOT intelligent; it is simply a rule-based algorithmic system with a large database at its disposal. Expert Systems, for all their capability, cannot think. Although you state, quite reasonably, that pilot error is the major cause of accidents, you do not in the same breath acknowledge that pilot skills and sentience have averted untold numbers of incidents, largely because they have not been recorded; it was a given that the pilot would use his/her skills and sentience and intelligence to take actions that would avert a situation.

    I truly believe that in the cockpit of an aircraft, on the bridge of a ship, or in the driving seat of a car there should be human sentience to take control. As for your point that we will still be able to fly, drive, or sail: I fear not. Oh, the politicians and safetycrats will not outlaw us all at once, just like that, but they will start by prohibiting us from driving in towns, and then they will extend that to the suburbs and finally all roads. Or else they will make the bureaucratic hurdles and licencing requirements so onerous and difficult that we give up, or they will arrange with insurance companies that the premiums for manual control of cars, planes, or boats are so expensive that, again, we give up.

    I am now retired, and still hold a pilot’s licence as well as a driving licence, and a motorcycle licence, and I’m damn well going to exercise their privileges for as long as I can, but I fear the day is fast approaching when all these pleasures will be outlawed, and those of us who still wish to drive, ride, fly, or sail will be pariahs, put in the same category as drunk drivers and smokers or worse.

    Hopefully this will be after I’ve passed away, but make no mistake, even as you are reading this some politician somewhere is actively rubbing his or her hands with glee at the prospect of making a name for themselves by introducing legislation, all for our own good of course, to implement these restrictions.

    Adrian Ryan

    • I will never be a passenger on a 100% autonomous machine. I’m a professional pilot and an advocate of autopilots. I use them confidently and, yes, even then, my finger and abilities are right there all the time. Sorry.

    • I’m 63 years old, worked in IT (programming for the DoD, among other things, plus contracting to a jet engine manufacturer for the last 15 years) and welcome autonomous flight.
      I will flat out state that I think that folks that “will never be a passenger” on such a machine are simply Luddites. There is no doubt that a properly programmed machine will outperform a human at flying. Heck, most autopilots do a better job at following course, straight and level, than most pilots.
      In the future we would likely still be able to pilot our own planes (I had a Skyhawk for 17 years, until I sold it to buy a house) but the system will keep us separated from the big iron that is happily performing as if on rails. And hopefully without voice communication—certainly the weakest link.
      It’s incredibly daft that we are still largely adjusting mixture and relying on magnetos for small planes in this day and age.
      My father had what folks called a “young mind”, and it turns out that I do, as well. Accept what is inevitable, and avail oneself of what is useful.

    • I have decided to fly gliders that are as far removed from autonomous airplanes as you can get but I wonder whether this discussion also occurred when the world moved from manned to autonomous elevators. Probably the most concerned people were the elevator operators.

  • Passionate subject, always. And my passion is this: ignoring how many millions of flights do not become accidents because the pilots get things “right” invalidates the classic “take the pilot error out of the equation by removing the pilots” speech we’ve been seeing for decades. Great comparison with autopilots – and automated systems in general – but any airliner nowadays has a dozen active bulletins with automation-related issues. That makes me feel very far from any autonomous passenger or cargo flight becoming popular in the near future. It will come. It will. But I have a strong feeling that the fact that we still need five pilots – three electronic and two human – to do an autoland down to a zero-feet ceiling half a century after the ILS invention is a sign of the road we still have ahead. Yes, it is a biased opinion. But humans are biased, just like the machines they create.

  • and yet autonomous cars still crash, as will autonomous aircraft as their numbers increase. Pilot error becomes computer error/hacking/crashing.
    I’m no engineer, but I do have enough flight experience (28,000 hrs or so… and counting)
    to have had several flights where computer glitches were overcome by pilot non-error.

  • The technology that is likely to develop from this is a worthy cause, but after dealing with two electrical failures, a few dozen hydraulic problems, and countless resets, re-boots, power-ups, and power-downs, all in the latest Boeing aircraft models, I doubt that we will be riding around in an autonomous aircraft in the near future. I think that I would rather drive….

    • Right? That’s what I mean. Take a pitot system or the inertial out, and the fly-by-wire freaks out and goes to secondary mode: the first thing it does is disconnect the autopilot. We are a light-year away from an empty flight deck.

        • Not really. The autopilot system uses the FBW to fly the airplane (3 APs – 3 PFCs – 4 ACEs – control surfaces, just to use the airplane I fly). But when the FBW runs out of input data, it becomes so simplified that the AP can’t do it. So, in an autonomous aircraft, we need – not to mention the decision-making process – redundancies that we still don’t have, and the ability to handle serious system flaws without a human to save the day. It is a long road – one that Airbus is trying hard to travel, by the way.

          • “But when the FBW runs out of input data, it becomes so simplified that the AP can’t do it. ”

            Think about what you just said.

            Then re-consider what you think you know about “Fly By Wire.” FBW is NOT all of that great value-added stuff, like envelope protection (which requires air data). FBW is simply the replacement of mechanical cables and push-rods with servo-motors that are commanded by electronic circuitry. To meet certification rules, that circuitry has to rise to a level of complexity beyond some joystick that drives potentiometers, optical encoders, pressure transducers, or Hall-effect sensors. It requires some kind of processor – a “computer,” if you will – to convert joystick movement (or autopilot commands) into control-surface deflections. If that circuitry EVER fails, you’re screwed – there is no cable/push-rod “backup” connection between that joystick and the control surfaces. And yet, we build this stuff to a level of reliability that we routinely trust with our lives. Go figure.

            Also, please give some consideration to this: Redundancy is NOT a valid design objective. RELIABILITY is a valid design objective. Redundancy is merely one method of attempting to increase reliability – and not the preferred method, at that.
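
The redundancy-versus-reliability point can be put in numbers. A back-of-the-envelope sketch, with figures invented purely for illustration:

```python
# Illustrative arithmetic only: redundancy is one method of buying
# reliability, not a design objective in itself.

def parallel_reliability(unit_reliability: float, n: int) -> float:
    """Reliability of n redundant units, any one of which suffices.

    Assumes independent failures and perfect voting -- the catch that
    makes naive redundancy less attractive than it first looks."""
    return 1 - (1 - unit_reliability) ** n

# Three 99%-reliable units in parallel reach roughly the same figure
# as a single 99.9999%-reliable unit:
triple = parallel_reliability(0.99, 3)   # ~0.999999
```

Either path hits the same target; the valid design objective is the reliability target itself, and redundancy is merely one way of getting there.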

  • You yourself said it, and it is true for all times and places: Not everything that CAN be done SHOULD be done.
    Humanity is coming to a crossroads – “expert system” all of us out of jobs and watch as our economic system collapses, or acknowledge that we “could do that, but choose not to”, accepting the slightly less than perfect situation that would ensue as the price of being human.

    I’m glad I’m already old – that is a brave new world that I want no part of.

    • I get your theory exercise; it is pretty interesting. But “mummies chat among themselves in pure grammar.” The technology is by no means mature. Autonomous cars keep killing people who didn’t want to be part of the experiment on American streets, and we’ve been researching this for decades. It’s been 40 years since the first autonomous city train, and to this day we have no long autonomous passenger lines anywhere in the world. SpaceX looks autonomous, but has twenty-plus people supervising the operation. It is a long way ahead. The ILS has been around for 80 years, and every time we do an autoland (whose safety is so complicated to ensure that we rarely actually do one) we have to write a report about its performance. Some things (HomeSafe, Wayfinder, etc.) are part of the way, but nothing even close to the end of it.

  • Autonomous flying will have its place, but you must remember, you cannot make a zero risk environment. Computers fail and if no human is available to take over, there will be an accident.

    I see the best environment in which to use autonomous aircraft as combat flying in very-high- to extremely-high-threat environments where pilot loss could be expected. Losing a computer is not as much of a concern as losing a pilot. And if the pilot is shot down, we will send in more pilots to rescue him/her, exacerbating the situation. Cargo and tanker operations are also good candidates for autonomous flight.

    If it is dumb, dirty, and/or dangerous, then autonomous flight would be a good option.

    • Bob S., I suspect that at the very least your young’uns would get on that plane without you. You think of computers as the enemy; I think of them as extensions of my wishes.

  • A passionate subject. For point-A-to-point-B transportation, the autonomous thing is going to be better and safer. I don’t think the passengers on a plane care whether airplane management is carried out by a couple of 25-year-olds with a bunch of simulator time, or experienced 65-year-olds with advancing memory loss, or three computers. They just want to get to Disneyland. Every time some kid shoots by me at 80 mph in a 35 zone, I long for autonomous cars.

    Eventually, self-driven vehicles will be relegated to designated areas.

    Perhaps, in the grand scheme of things, this is just a step in the evolution from carbon-based lifeforms to silicon-based ones, which would be much better adapted to the variables of a life in space.

  • I liked the explanation of the concept of autonomous vehicles, and instantly thought, “this is not my cup of tea.” My head immediately went to the love of aviation through flying aircraft like a Citabria. To me, the simplicity of that type of aircraft is pure brilliance.

    Reading all the other input on this subject reinforces my thoughts!

  • Why should Thomas Yarsley’s ideas be limited to peacetime? Perhaps he should advance his ideas toward explaining how wars could be fought in the future minus pilots in aircraft.

  • Generally safe and reliable autonomous operations, including flight and surface transportation, may have their day—eventually—but that day is not now. It is, for certain applications, a worthy goal, but it shouldn’t be oversold by starry-eyed promises of perfection that can never be reached. There will be many horrific accidents and much blood shed before most of the bugs get worked out. Pilots and drivers are certainly fallible, but so are engineers and programmers. Substituting the mistakes of one for the mistakes of the other won’t necessarily be an improvement. I don’t share the assumption that the flying public won’t care if their plane has a human pilot or is flown by computer. We have all experienced computers “crashing” (a very apt word) or otherwise functioning improperly, sometimes in ways that are all but imperceptible until it is too late. It’s nice to know that there’s someone up front with real “skin in the game.”

    • So, can we presume that you never will board (or have boarded) a fly-by-wire aircraft?

      Remember, the only connection between that joystick and the ailerons is… wait for it… a computer.

      Think about it.

      • That goes to secondary mode as soon as it loses any essential information and, guess what, gives it back to the pilots! In direct mode there are no control laws involved. So… from enhancing flying characteristics to making decisions is a loooong way.

  • The NASA Global Hawks (pictured) always have a pilot in the ground control station who can alter the programmed autonomous path, if needed.

  • People do not realize that the 787 is not even remotely “cutting edge”, let alone bleeding edge, in computing power and software. The plane’s design is over a decade old. So it is quite pointless to equate current commercial flying to what is state of the art in machine learning and AI. In any case, as someone pointed out, the industry will progress through delivery drones, UAMs, and autonomous cargo flights before finally going single pilot and then no cockpit.

    • It will, but not next week. We do not have the solutions and fail-proof systems to do it. And although the 787 is a decade old – as any airplane will be when released, because they are not designed in a couple of years over under-tested technologies – we make constant updates to its software, as with any modern airliner. And yet, they are so far away from being self-sufficient “minds” of their own.

  • You pick the most advanced airliners today; you lose your air data or inertial inputs: secondary mode in the fly-by-wire, and the autopilot disconnect warning is the first thing you will hear. I wonder how an autonomous airplane would fix this? I mean it. Will it happen? Certainly. Soon? Absolutely not.

  • In addition to YARS’ link to the Piper video, Aviation Week recently reported that the TBM 940 received FAA and EASA Autoland approval for the Garmin HomeSafe system.

    More interesting is the report that Airbus completed an autonomous airliner research project where they flew a new A350-1000 500 times autonomously without ANY pilot input.

    • In those 500 times the A350 flew autonomously, how many emergencies did the automation have to handle? If any, were they the types of emergencies where sensors disagree with reality? Just flip through ASRS reports to see the myriad novel ways airplanes find to break themselves — 1740629 and 1741042, for example.

      Then consider the human factors. How would automation have handled Sully’s decision to land in the Hudson–not now, that we have the hindsight to program the automation for such an event, but prior? At an airport, does the automation know it has to go around because it can hear some chucklehead mis-hear taxi instructions and see him start across your landing runway? Does it matter to the automation if this happens when you have nail-bitingly minimum fuel, a pax having a heart attack in the back, and the chucklehead is crossing just beyond the threshold? I’d land a tad long and brake hard. What would automation do? Who knows.

      There are some things automation can do better than pilots, and pilots should use that to their advantage. However, when the proverbial poo hits the fan, I don’t think it can replace the judgment of a well-trained, experienced, thoughtful human in the seat.

    • Of the 500 flights, 450 were to gather data for the image recognition needed for the 6 flights that the A350 actually did using the “electronic VFR” (aka Wayfinder) – it was never autonomous; the airplane did not decide where or when to go, and couldn’t even change configuration by itself. And HomeSafe – which is something much more advanced in this sense – is an emergency system, capable in many cases (any GPS signal variation great enough) only of crashing within airport limits. Amazing? Yes, I’d love to have it on my own aircraft. But it is only an emergency device, depending on a perfectly working airplane, to save the passengers of a flight with an incapacitated pilot. It is just better than the other option, not a reliable way of mass flying.

  • I’ll take my chances with the possibility of pilot error over:
    a “there’s apparently a glitch in the system” or “seems a short circuit caused that to happen” or “what’s it doing now/it’s doing it again” or “that software version was faulty… here is the new and improved update.” Did I get my point across?

    • Of course, as mentioned a few times in the responses, we will not go autonomous overnight, and we certainly won’t be using a 757. The progression to pax aircraft will most likely start with cargo/military aircraft, perhaps for a time with a single pilot before going full autonomy. Meanwhile, on the pax aircraft side of things, the progression will be single pilot starting with regional aircraft before moving up to the largest pax aircraft.

      In Chandler, Arizona, Waymo (Google/Alphabet) has a driverless taxi system up and running. Having a driverless taxi drive autonomously in a city with constantly changing dynamics is orders of magnitude more complex than flying transatlantic.
