Shortly after a Smartlynx Estonia Airbus A320 took off on February 28, 2018, all four of the aircraft’s flight control computers stopped working. Each performed precisely as designed, taking itself offline after (incorrectly) sensing a fault. The problem, it was later discovered, was an actuator that had been serviced with oil that was too viscous. A design created to prevent a problem had created one. Only the skill of the instructor pilot on board prevented a fatal crash.

Now, as the Boeing 737 MAX returns to the skies worldwide following a 21-month grounding, flight training and design are in the crosshairs. Ensuring a safe future for aviation will ultimately require an entirely new approach to automation design, one built on methods grounded in systems theory, but planes with that technology are 10 to 15 years off. For now, we need to train pilots to respond better to automation’s many inevitable quirks.

In researching the MAX, Air France 447, and other crashes, we have spoken with hundreds of pilots, as well as experts at regulatory agencies, manufacturers, and top aviation universities. They agree that the best way to prevent accidents in the short term is to teach pilots to handle a broader range of surprises creatively.

Slow movement on overdue pilot training and design reform is a persistent problem. In 2016, a full seven years after Air France 447 went down in the Atlantic, airlines worldwide began retraining pilots on a new approach to handling high-altitude aerodynamic stalls. Simulator training that Boeing had convinced regulators was unnecessary for 737 MAX crews began only after the MAX’s second crash, in 2019.

These remedies address only those two specific scenarios. Hundreds of other automation-related challenges may be lying in wait that cannot be anticipated using traditional risk-analysis methods; past examples include a computer preventing the use of thrust reversers because it “thought” the airplane had not yet landed. An effective solution must accept that aircraft designers cannot create a perfectly fail-safe jet. As Captain Chesley Sullenberger points out, automation will never be a panacea for novel situations unanticipated in training.

Paradoxically, as Sullenberger correctly noted in a recent interview with us, “it requires much more training and experience, not less, to fly highly automated planes.” Pilots must have a mental model of the aircraft and its primary systems, as well as of how the flight automation works.

Contrary to popular myth, pilot error is not the cause of most accidents. That belief is a manifestation of hindsight bias and a false faith in linear causality. It is more accurate to say that pilots sometimes find themselves in scenarios that overwhelm them, and more automation may well mean more overwhelming scenarios. This may be one reason the rate of fatal crashes of large commercial airplanes per million flights rose in 2020 compared with 2019.

Pilot training today tends to be scripted and based on known and likely scenarios. Unfortunately, in many recent crashes, experienced pilots had no systems or simulator training for the unexpected challenges they encountered. Why can’t designers anticipate the kinds of anomalies that nearly took down the Smartlynx plane? One problem is that they rely on obsolete risk models created before the advent of computers, and that approach to anticipating scenarios that might present risk in flight is limited. Currently, the only available model that contemplates novel situations like these is System-Theoretic Process Analysis, created by Nancy Leveson at MIT.

Modern jet aircraft developed using these classic methods harbor hazardous scenarios that lie dormant, waiting for the right combination of events to surface. Unlike legacy aircraft built from basic electrical and mechanical components, these modern jets rely on automation that weighs a complex set of conditions to “decide” how to perform.

In most modern aircraft, the software that governs how the controls respond behaves differently depending on airspeed, on whether the plane is on the ground or in flight, on whether the flaps are extended, and on whether the landing gear is up. Each mode carries a different set of rules for the software, and each can lead to unexpected outcomes if the software is not receiving accurate information.
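To make that concrete, here is a minimal, hypothetical sketch, written in Python rather than real avionics code, of how mode-dependent logic can do exactly what it was told to do and still surprise the crew. The mode names, sensor fields, and interlock are all invented for illustration; the point is only that a single faulty input (here, the air/ground sensor) can push the software into the wrong mode and, echoing the thrust-reverser example above, lock out a system the pilots need.

```python
# Hypothetical illustration only: not drawn from any real flight control system.
from dataclasses import dataclass
from enum import Enum


class ControlMode(Enum):
    GROUND = "ground"    # rollout and taxi rules apply
    FLIGHT = "flight"    # normal in-flight control laws
    LANDING = "landing"  # gear down, flaps extended, preparing to land


@dataclass
class SensorInputs:
    weight_on_wheels: bool   # air/ground sensor on the landing gear
    airspeed_knots: float
    flaps_extended: bool
    gear_down: bool


def select_mode(s: SensorInputs) -> ControlMode:
    """Pick a control mode from sensor data, as mode-dependent software might."""
    if s.weight_on_wheels:
        return ControlMode.GROUND
    if s.gear_down and s.flaps_extended:
        return ControlMode.LANDING
    return ControlMode.FLIGHT


def thrust_reversers_permitted(mode: ControlMode) -> bool:
    """A typical interlock: reverse thrust is allowed only in ground mode."""
    return mode is ControlMode.GROUND


# The airplane has just touched down, but a faulty air/ground sensor
# still reports "in flight." Every line of logic works exactly as written.
readings = SensorInputs(weight_on_wheels=False,  # wrong: the plane IS on the ground
                        airspeed_knots=130.0,
                        flaps_extended=True,
                        gear_down=True)

mode = select_mode(readings)
print(mode)                              # ControlMode.LANDING, not GROUND
print(thrust_reversers_permitted(mode))  # False: reversers locked out on the runway
```

The software in this sketch is not “broken” in any conventional sense; given its inputs, it behaves exactly as specified. That is the kind of correct-but-surprising behavior that traditional risk analysis struggles to flag and that pilots must be trained to recognize.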

A pilot who understands these nuances might, for example, consider avoiding a mode change by not retracting the flaps. In the MAX crashes, pilots found themselves in confusing situations in which the automation worked perfectly, just not as expected, because the software was being fed bad information.