No room for error

Written by: Claire Blejean

As a passenger nervously settles into their window seat and glances out at the ground crew finishing up, their neighbour leans over and whispers: 'You know, flying is the safest mode of transport – much safer than driving your car.' This has never been truer. Since the 1980s, aviation incidents have steadily declined, and in 2018 the global number dipped below one hundred for the first time. This is despite a constant rise in air traffic: over four billion passengers and a record volume of cargo, carried on more than 35 million registered flights, busied our airspace this past year.

The aviation industry should be proud of its track record; however, this impressive scarcity of errors is creating new and unexpected problems. It is essential to train aviation professionals, from pilots to air traffic controllers, in failure and error management, but without examples and repeated exposure, how is one expected to learn? In response, aviation training organisations have developed methods, from flight simulators to real-time air traffic simulation exercises, which enable the practice of these failure scenarios. In certifying these professionals, we believe we have equipped them with the tools and knowledge to handle the unlikely and the unfortunate. However, one type of emerging technology may not yet have addressed this lack of errors: Artificial Intelligence (AI) techniques.

The principle behind machine learning methods, the building blocks of AI, is that through exposure to vast amounts of data, the algorithm adjusts itself according to set incentives in order to analyse a situation accurately. The training of an AI is entirely dependent on the quality and quantity of the data it is presented with. Whilst current projects using these techniques are mainly focused on after-the-fact analysis, the real benefits of AI methods lie in prediction and optimisation. This is where the lack of errors and failures becomes an issue: how can an AI predict the outcome of events it has not been sufficiently trained for?
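To make this concrete, here is a minimal, illustrative sketch of what such training looks like: a simple model whose weights are repeatedly nudged so that its predictions better match observed outcomes. The data, model and learning rate are all invented for illustration, not drawn from any real aviation system.

```python
import numpy as np

# A minimal sketch: logistic regression trained by gradient descent.
# Exposure to labelled examples nudges the weights so the model's
# predictions better match the observed outcomes - the "set incentive"
# here is simply minimising prediction error.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))            # 1000 situations, 4 input features
true_w = np.array([1.5, -2.0, 0.5, 0.0])  # hidden pattern the model must learn
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)

w = np.zeros(4)   # the model's adjustable weights
lr = 0.1          # learning rate: how strongly each batch of evidence counts

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probability of the event
    grad = X.T @ (p - y) / len(y)         # how far predictions miss the labels
    w -= lr * grad                        # adjust weights to reduce the error

print("learned weights:", np.round(w, 2))
```

The key point is that the model only ever becomes as good as the examples it sees; no amount of clever adjustment can teach it about situations absent from the data.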

In training humans, emotional stressors may enable more effective learning from failure situations. After all, who has not had an intense near-miss and thought: this is a mistake I'll never make again! Whilst it is possible to increase the weighting given to some data, an AI exposed to only a few occurrences of a type of event will tend to overfit – in other words, it will fail to recognise similar situations because they are not precisely the same. In order to reap the benefits of AI techniques, how can this issue be addressed?
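The sketch below illustrates that tension using scikit-learn's class_weight option; the dataset and the weighting factor are invented for illustration. Upweighting forces the model to pay attention to a rare event, but with only a handful of examples it risks memorising those exact cases rather than learning the general pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] > 2.5).astype(int)   # a rare "failure" event, well under 1% of samples
print("failure examples:", y.sum(), "of", len(y))

# Upweight the rare class so the model does not simply ignore it.
# Caution: with so few genuine examples, a large weight encourages the
# model to fit those specific cases too closely - i.e. to overfit.
model = LogisticRegression(class_weight={0: 1, 1: 100})
model.fit(X, y)
```

Two main solutions come to mind: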

  • The sharing of training between similar organisations. For example, several airports of comparable size which all experience snow every few years could agree to train a single AI on all of their data, enabling it to deal appropriately with snowy days. One benefit here is that training an AI does not require storing the input data within the model; instead, training adjusts the relative importance (the weights) given to the various inputs. As such, it is very difficult to reverse-engineer a trained model to recover the original data, which can ease the security concerns linked with data sharing. A sketch of this weight-sharing approach follows the list below.
  • The production of simulated data which can be used to train the AI. In a more futuristic scenario, these simulated situations might even be produced by another AI. A brief sketch of this idea also follows below.
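
As a hedged illustration of the first solution, the sketch below follows the pattern of federated averaging: each airport trains a local copy of the model on its own private data, and only the resulting weights, never the data itself, are shared and averaged. The airports, data and model are all hypothetical.

```python
import numpy as np

def train_local(X, y, w, lr=0.1, steps=200):
    """Gradient-descent training on one airport's private data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(2)
shared_w = np.zeros(4)   # the only artefact the airports ever exchange

for round_ in range(10):                  # repeated rounds of train-and-average
    local_weights = []
    for airport in range(3):              # three hypothetical airports
        X = rng.normal(size=(500, 4))     # each airport's own operational data
        y = (X[:, 0] - X[:, 1] > 0).astype(float)
        local_weights.append(train_local(X, y, shared_w.copy()))
    shared_w = np.mean(local_weights, axis=0)  # average the weights, not the data

print("shared model weights:", np.round(shared_w, 2))
```

Because only weight vectors leave each site, no airport's raw operational records are ever exposed, which is precisely the property that makes such agreements easier to negotiate.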
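For the second solution, a toy sketch of simulated training data: every field, threshold and distribution below is invented purely for illustration of the idea, not drawn from any real airport's records.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_snow_day(n):
    """Generate hypothetical snow-day records (all fields are invented)."""
    visibility_m = rng.uniform(100, 1500, n)       # reduced visibility
    runway_friction = rng.uniform(0.05, 0.3, n)    # contaminated runway surface
    deicing_delay_min = rng.exponential(20, n)     # extra turnaround time
    # A crude label: a day counts as "disrupted" when conditions compound.
    disrupted = ((visibility_m < 400) & (runway_friction < 0.15)).astype(int)
    features = np.column_stack([visibility_m, runway_friction, deicing_delay_min])
    return features, disrupted

X_sim, y_sim = simulate_snow_day(2000)
print("simulated disrupted days:", y_sim.sum())
```

Simulated records like these can give an AI the repeated exposure to rare failure conditions that real operations, thankfully, no longer provide.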

Whilst the aviation industry should leave no room for error, it is important to recognise the benefits of failure in the training of both humans and AI systems. After all, we all learn from our mistakes.
