Artificial intelligence (AI) is the hot technology of 2016, finding its way into research papers and cocktail party conversations alike. As usual, most talk is either hopelessly optimistic or relentlessly negative (you know a trend is mainstream when you start reading headlines like, “Is fashion ready for the AI revolution?”). Cut through all the hype, though, and pilots can find a lot of reasons to be enthusiastic about AI.
But first, some perspective. In spite of what Hollywood would have us believe, AI does not mean all-knowing supercomputers will take over the world. That type of general AI is probably decades off, so we can relax about HAL 9000. On the other hand, specialized AI is tailored to solve particular problems or work in specific industries, and it is much more realistic. Techniques like machine learning use powerful computers to sort through vast amounts of data, learn patterns, and work smarter.
If some of this sounds familiar, that’s because aviation was an early adopter of AI. In fact, many pilots have been flying with very primitive forms of AI for years, even if they didn’t realize it: autopilots, FADEC, and load-shedding electrical systems all use computer power to make intelligent decisions. If you’ve read any of the academic discussions about autonomous cars, you’ll have noticed that many refer to early airplane autopilot studies as key sources.
Somewhat quietly, though, the boring autopilot is starting to grow up. It began with Garmin’s Electronic Stability and Protection (ESP) system, a major advance in autopilot technology that has received surprisingly little attention. This passive safety system continuously monitors the airplane’s attitude and uses the autopilot servos (even when the autopilot is off) to nudge the airplane back to safety if the bank gets too steep.
Researchers at University College London are taking it a step further by applying real AI to an autopilot. Instead of programming it to fly certain pre-planned profiles, the team is using machine learning to make a more resilient autopilot that can adapt to changing conditions. By watching a human fly and storing hundreds of hours of detailed data from real flights, it builds a library of scenarios. Whereas a traditional autopilot might give up when an engine fails or the turbulence gets really rough, this new design leans on its experience to keep flying. There’s a long way to go, but initial testing (on a Cirrus SR22, no less) is very promising.
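To make that concrete, here is a minimal sketch of the “learn by watching” idea, in the spirit of what the UCL team describes. The data files, feature set, and model choice are all illustrative assumptions, not the researchers’ actual code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical recorded flight data: each row is one sampled instant.
# State columns: pitch (deg), bank (deg), airspeed (kt), vertical speed (fpm),
# and a turbulence index.
X = np.load("recorded_states.npy")       # shape (n_samples, 5)
# Matching human control inputs at each instant: elevator, aileron, throttle
y = np.load("recorded_controls.npy")     # shape (n_samples, 3)

# Fit a model that imitates the human pilot's responses across many scenarios.
model = RandomForestRegressor(n_estimators=200)
model.fit(X, y)

# In flight, query the model with the current state -- including states a
# rule-based autopilot was never explicitly programmed for.
current_state = np.array([[2.0, -35.0, 95.0, -500.0, 0.8]])
elevator, aileron, throttle = model.predict(current_state)[0]
```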
AI-powered autopilots will take increasingly creative forms as they mature, to the point where “autopilot” may not even be the right word. For example, a DARPA project called Aircrew Labor In-Cockpit Automation System (yes, ALIAS) aims to create a full replacement for a human copilot. Much more than just code, this includes a complete mechanical system that manipulates the stock controls of an airplane. The idea is to have a drop-in option for the military, so there’s no need to design a new, autonomous airplane. Aurora Flight Sciences has made surprisingly good progress on this project, as they showed off in a video last week.
So far, most of the AI innovation in aviation has focused on control of the airplane or its systems. Things get much more interesting when such technology is used for real decision-making tasks. For this type of work, machine learning often draws on deep neural networks to create powerful algorithms that attempt to “think” like a human.
At this point we are mostly in the imagination stage, but the possibilities are exciting. AI is being used to interpret MRIs in hospitals right now; imagine the power of AI to read an engine monitor. Is that high CHT and low EGT on the #4 cylinder a problem? AI could easily compare that engine signature to a database of millions of hours of engine data. Indeed, Savvy Aircraft Maintenance Management has pioneered something like this with their Failed Exhaust Valve Analytics program, which warns of impending valve failure based on digital engine monitor data.
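As a hedged illustration of that engine-signature idea (not Savvy’s actual method), one simple approach is to measure how far a cylinder’s CHT/EGT pair sits from the fleet-normal distribution; the file name and threshold below are assumptions:

```python
import numpy as np

# Hypothetical fleet database: one [CHT, EGT] pair per flight for this engine type.
fleet = np.load("fleet_cht_egt.npy")     # shape (n_flights, 2)
mean, std = fleet.mean(axis=0), fleet.std(axis=0)

def signature_score(cht_f: float, egt_f: float) -> float:
    """Distance, in standard deviations, from the fleet-normal signature."""
    z = (np.array([cht_f, egt_f]) - mean) / std
    return float(np.linalg.norm(z))

# High CHT paired with low EGT on the #4 cylinder: is it a problem?
score = signature_score(cht_f=420.0, egt_f=1250.0)
if score > 3.0:  # illustrative alert threshold
    print(f"#4 cylinder signature is unusual (score {score:.1f}) -- worth a look.")
```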
Weather is another place where AI is being used to make real-world decisions in a variety of industries, and aviation should be no exception. Wouldn’t it be nice to receive an alert when your destination weather starts going down, then get a suggestion for the perfect alternate, based on your preferences for weather, approach type and runway length? Or how about using AI to decipher the radar image in front of you, pulling in other weather data to help you decide whether that cell is convective or not? Given enough data and enough computing horsepower, AI can do more than just display images: it can make suggestions.
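One way such a suggestion engine could work, sketched here with made-up airports and assumed pilot preferences, is a simple weighted score over candidate alternates:

```python
from dataclasses import dataclass

@dataclass
class Alternate:
    ident: str
    ceiling_ft: int
    visibility_sm: float
    has_ils: bool       # pilot prefers a precision approach
    runway_ft: int

def score(a: Alternate, min_runway_ft: int = 3500) -> float:
    """Illustrative preference weighting -- not any real product's logic."""
    s = a.ceiling_ft / 100 + a.visibility_sm * 10    # better weather, higher score
    s += 50 if a.has_ils else 0
    s += 25 if a.runway_ft >= min_runway_ft else -100  # short runway is a deal-breaker
    return s

candidates = [
    Alternate("KXYZ", 1200, 5.0, True, 6000),
    Alternate("KABC", 800, 3.0, False, 4200),
]
best = max(candidates, key=score)
print(f"Suggested alternate: {best.ident}")
```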
These scenarios all point to the real power of AI in its fully developed form – it is predictive instead of reactive. Just like your iPhone tells you when to leave for your meeting, an aviation app could create the perfect VNAV profile given the STAR you’re flying or your fuel burn rate, then provide custom alerts before you have to ask. Or it could suggest a new route in flight when the winds change.
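The VNAV piece of that prediction is mostly arithmetic. A back-of-the-envelope sketch, assuming a standard 3-degree descent path:

```python
import math

def top_of_descent_nm(cruise_alt_ft: float, crossing_alt_ft: float,
                      gradient_deg: float = 3.0) -> float:
    """Distance before the fix at which to start a constant-angle descent (NM)."""
    ft_per_nm = 6076.12 * math.tan(math.radians(gradient_deg))  # ~318 ft/NM at 3 degrees
    return (cruise_alt_ft - crossing_alt_ft) / ft_per_nm

def required_vs_fpm(groundspeed_kt: float, gradient_deg: float = 3.0) -> float:
    """Vertical speed needed to hold that path at the current groundspeed."""
    return groundspeed_kt * 101.27 * math.tan(math.radians(gradient_deg))  # ~5 x GS at 3 degrees

# Cross the STAR fix at 11,000 ft, descending from FL350 at 420 kt groundspeed:
print(f"Start down {top_of_descent_nm(35000, 11000):.0f} NM out, "
      f"about {required_vs_fpm(420):.0f} fpm")
```

An app with live winds and fuel flow could rerun this continuously and alert the pilot before the descent point, rather than waiting to be asked.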
Such predictive AI will probably involve voice alerts and speech recognition, too. The interface of the future may resemble a conversation more than a computer to be programmed. We’re certainly not there yet (yes, I have Siri), but the first signs of this are starting to appear. Garmin’s Telligence system uses voice commands to complete hundreds of common tasks in the cockpit. This is certified and available to install today, and it most certainly isn’t the last voice command product we’ll see.
Aviation will get a big boost from the drone market when it comes to developing practical, airborne AI. After all, on a quadcopter that’s inspecting a pipeline there is no pilot to make decisions, so the AI is essential. Billions of dollars are being spent to develop drone technology that avoids terrain, obstacles, traffic, and weather, or self-diagnoses a mechanical problem and returns to base. Expect to see lots of interesting ideas taken for a test drive on drones before coming to your Cessna.
Can anything stop this thrilling race to the future? Certainly. If history is any guide, regulation will move far slower than technology. What’s possible and what’s certified are not the same, and we can expect the FAA to be cautious in approving bold new capabilities. Not all of that caution is unjustified, and the good news is that aviation is actually ahead of cars in many ways – we’ve been regulating and training on automation for decades.
Hopefully the FAA avoids a common trap here. As is so often the case when debating technology, the discussion quickly becomes binary: human intelligence vs. artificial intelligence. Good vs. evil. This makes great headlines, but it’s a false choice.
Researchers at MIT use the phrase “extended intelligence” to signify how AI is used to augment human decision-making rather than replace it. That may sound like spin, but it’s actually a crucial distinction. Extended intelligence, just like a glass panel or a deice system, is simply a tool. When used appropriately by pilots, it can improve the safety, utility and fun of flying. That’s a realistic – and exciting – future.
AI is way beyond what is needed for autonomous aircraft. In fact, unless you plan to have the sky to yourself, AI (in lieu of a pre-programmed Expert System) would be counter-productive. On-the-spot “creativity” is what gets pilots into trouble in the first place. To increase safety, we need to replace creativity with predictability. That’s 180 degrees away from what you’ll get with true AI.
AI is not “essential” to decision-making. In fact, it’s not needed at all. Good old “if-then-else” abetted by “when/while” is sufficient for all of the decision-making that’s needed to get an aircraft safely from A to B.
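To make the commenter’s point concrete, a pre-programmed “if-then-else” rule might look like this toy sketch (illustrative only, not any certified system’s logic):

```python
def on_airspeed_disagree(ias_left_kt: float, ias_right_kt: float,
                         pitch_deg: float, power_pct: float) -> dict:
    """A deterministic, fully predictable response to unreliable airspeed."""
    if abs(ias_left_kt - ias_right_kt) > 20:          # the two sensors disagree
        return {"pitch_deg": 5.0, "power_pct": 85.0}  # hold known pitch and power
    return {"pitch_deg": pitch_deg, "power_pct": power_pct}
```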
Maybe, Tom. I see your point about a good old “if-then-else” approach, but that’s what led to an Airbus autopilot throwing up its hands and saying “your airplane” to those Air France pilots over the Atlantic. A little more adaptive approach to autopilots would be OK with me. After all, “return to home” works great for a 4 lb. drone 50 feet up in the air; I’m not sure that’s the right answer for a 100,000 lb. airplane at 37,000 feet over a thunderstorm.
John,
The autopilot disengaging in the AF447 crash was not a matter of technology “giving up” in the absence of airspeed data – that was precisely what the autopilot was designed to do by its human designers. Program it differently, and the auto-controls will obey. Dumb programming equals dumb and disastrous results. Lesson learned: don’t program dumbly.
I am with Yars on this:
It is easy peasy to program an autopilot to use alternate or even synthesized data inputs in the event of the loss of a primary sensor (such as GPS altitude, GPS groundspeed corrected for known winds aloft, or an alternate electronic airspeed sensor rather than the primary pitot sensor). It is also easy enough to program the autopilot to use attitude and heading reference system (AHRS) data to manipulate the flight controls and maintain straight and level flight in any flight regime. Ditto for manipulating the throttles via the auto-throttle system.
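A minimal sketch of that sensor-fallback logic, with hypothetical names and units (knots throughout):

```python
from typing import Optional

def best_airspeed_kt(pitot_kt: Optional[float],
                     alternate_pitot_kt: Optional[float],
                     gps_groundspeed_kt: float,
                     headwind_component_kt: float) -> float:
    """Pick the best available airspeed source, falling back gracefully."""
    if pitot_kt is not None:
        return pitot_kt                   # primary sensor healthy
    if alternate_pitot_kt is not None:
        return alternate_pitot_kt         # alternate electronic sensor
    # Synthesized fallback: GPS groundspeed corrected for known winds aloft
    return gps_groundspeed_kt + headwind_component_kt
```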
The problem with AF447 and Airbus is that the designers of the autopilot and autothrottles suffered a failure of imagination: they did not account for the possibility that, with the loss of a primary airspeed indicator, the pilot flying would effectively “go crazy” and mis-manipulate the aircraft right into the ocean or the ground.
Technology is not the limiting factor here – the designers’ imagination is, along with an (un)healthy dose of government’s typically sclerotic over-regulation.
Duane, I liked your response and followed you until the end. How did government over-regulation contribute to AF447, a European airplane flying an international route?
I agree that the designers anticipated a more rational human response to the loss of airspeed, which had happened many times in the past until this crew’s failure made it famous.
Jose,
My point about sclerotic government over-regulation is not about the AF447 crash. It is that the ability of aircraft designers and manufacturers, and those who design add-on features, to innovate and bring sensible safety and performance improvements to market is and always has been horribly hindered in the USA by FAA over-regulation. The FAA regulation model may have worked somewhat OK back in the early days, through the 1970s perhaps, but since then technology has developed at a far faster rate than the FAA’s ability to certify it in light aircraft. And that is what has held back technological advancements, like better and cheaper auto controls, for decades.
The FAA seems finally to be bending a little, such as its decision earlier this year to allow the sale of non-TSO glass panels via equipment- and aircraft-specific STCs issued by EAA in conjunction with Dynon. And the Part 23 reforms recently published by the FAA but not yet in effect should help too. But the sad fact is that the FAA has been the single biggest hurdle to upgrading light aircraft for many decades.
HAL is a few decades away? That’s not a long time – in fact, it’s well within the aviation career span of a young person contemplating a piloting career today. What are the job prospects when all of those copilots, shoved out of the cockpit after being replaced by AI, hit the streets? And where will the future “Captains” get their training and apprenticeship?
The really frightening aspects of this go far beyond aviation – almost all job categories will potentially be decimated (and I don’t mean the one-out-of-ten sort of decimation that the original Latin term meant; I mean 99 out of 100). Where will all of the pilots, railroad crewmembers, truck drivers, and maritime crewmembers – to say nothing of vast segments of general management, the medical professions, lawyers, accountants and yes, even government employees – go to “retrain” in a world where you will need an IQ equal to Sheldon Cooper’s to even be considered for any sort of job?
This sort of future has been written about in journals as varied as the New York Times and the Wall Street Journal, so it is not merely the musings of an old retired pilot who is glad he had his career when flying was something humans could do well. I am glad that I won’t be around for very much of it – that is not a world I’d want to live in!
One other scenario might save us from all of this: mankind could simply say no to AI and live on our own merits.
I’m with Tony Vallilo on this one. Like Stephen Hawking, I regard AI as possibly the greatest threat to humanity. I feel there is already way too much automation in the cockpit, as demonstrated by AF447 and the Asiana crash at SFO, among others. My commercial pilot colleagues also say they do very little actual flying, and that they are little more than flight systems managers.
Tony brings up a very serious and significant sociological point – when all the skilled jobs are done by AI, expert systems and the like, what will we do? Unbridled leisure? I don’t think so.
I’m not a Luddite; it’s just that as an engineer and a programmer, I learned a long time ago that just because you can do something does not mean you should. I view AI, autonomous cars and aircraft, and autonomous drones and military vehicles with very great trepidation. This is not a world I want to live in.
FWIW, a couple of months ago AI made it into the world of litigation. A relative who is deep into the practice of the dark art of lawyering passed along an interesting announcement. We’ve discussed AI in the job market (truck drivers, Uber taxi drivers, airplane pilots, surgeons, diagnosticians, etc.) and he argued vociferously that the practice of law was “too complex.” He’s since changed his tune. FWIW, I think Zimmerman is off the mark on this one. If a semi can deliver 4,800 cases (50,000 cans) of beer via the public highways and Uber can deliver fare-paying passengers via a taxi, this thing we call “piloting” is simple. DOD is looking into “autonomous wingmen,” and the Russians and Chinese are looking into fully automated battlefields. Fully autonomous civilian aircraft? “Easy peasy,” as has already been suggested. Give it a decade. Definitely not “50 years” as John Z. suggests. IMHO the art of the aviator will soon follow the wheelwrights and horseshoers.
I’m hardly suggesting 50 years. Maybe 20 or 30 for a true “general AI.” But for autonomous flight I think we are almost there. It’s up to the regulators and insurers more than the engineers at this point.
I’ve wondered why Garmin’s ESP is not linked to the AoA indicator instead of user-selected pitch and bank limits.