AI in ATM faces real safety challenges


Many of today's aviation regulatory and safety challenges are highly visible and commonly discussed. From the integration of RPAS into controlled airspace through to drone ATM near airfields, the hazards are understood, and industry is both engaged and working hard towards solutions. However, one of the biggest technical revolutions the industry faces is neither as immediately obvious nor attracting anywhere near the same level of research activity or regulatory interest, despite posing new risks of its own. That revolution is the application of machine learning techniques, colloquially referred to as Artificial Intelligence (AI).

As I've outlined before, these techniques have the potential to substantially improve operational efficiency and safety in the airport and ATM environments. In the near term this will come through their use in Total Airport Management (TAM) systems concerned with passenger flow and tracking: advanced vision and speech processing systems can automate queue monitoring, measure passenger flow and even track specific passengers. However, there are applications in the airfield and ATM environments too. Automated vision systems capable of monitoring and tracking aircraft, vehicles, drones and wildlife on the airport surface are not far off. Similarly, speech recognition systems capable of monitoring airfield and ATC communications have the potential both to enable new applications and to provide continuous monitoring for misread clearances, errors in read-backs and the like. It is no stretch of the imagination to foresee an AI solution for AFIS airfields where out-of-hours service is provided by an AI capable of determining that the runway is clear and that no conflicting traffic exists on approach, then communicating with aircraft via speech recognition and speech generation.
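To make the readback-monitoring idea concrete, here is a minimal sketch in Python. It is a toy under stated assumptions: transcripts are taken as given (a real system would obtain them from an upstream speech-recognition engine, not specified here), and the critical-token list is invented purely for illustration.

```python
# Minimal sketch of automated readback monitoring (illustrative only).
# Transcripts are assumed to come from an upstream speech-recognition
# engine; the token list below is a hypothetical placeholder.

CRITICAL_TOKENS = {"runway", "heading", "altitude", "squawk"}

def extract_critical_items(transmission: str) -> list[str]:
    """Pull out token pairs that must match between clearance and readback."""
    words = transmission.lower().split()
    items = []
    for i, word in enumerate(words):
        if word in CRITICAL_TOKENS and i + 1 < len(words):
            items.append(f"{word} {words[i + 1]}")  # e.g. "runway 27", "heading 310"
    return items

def check_readback(clearance: str, readback: str) -> list[str]:
    """Return critical items from the clearance that are missing from the readback."""
    heard = set(extract_critical_items(readback))
    return [item for item in extract_critical_items(clearance) if item not in heard]

# A readback with the wrong runway is flagged for the controller's attention.
print(check_readback(
    "golf alpha bravo cleared to land runway 27 wind 250 degrees 10 knots",
    "cleared to land runway 29 golf alpha bravo",
))  # -> ['runway 27']
```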

AI offers vast benefits across the aviation domain, and it is increasingly becoming a standard part of the engineer's toolbox when developing solutions. A recent survey by DigitalOcean found that 17% of developers worked with AI or machine learning in 2017, and of those who did not, 73% said they planned to learn about these techniques in 2018. AI will increasingly become a standard element of many IT solutions, hidden away under the hood but powering many applications.

What about safety?

Traditionally, the safety assessment of software in safety-critical aviation applications has been addressed through industry standards such as DO-178C/ED-12C. These rely on prescribed software development processes coupled with exhaustive testing techniques, such as code-coverage analysis, that search for the presence of bugs. Even when these standards are applied rigorously to conventional software, they provide little quantitative assurance of a specific failure rate to feed into an overall system risk assessment. And if you think the safety assessment of current software systems is challenging, it becomes substantially harder once AI techniques are considered.
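The mismatch is easy to see in miniature. In the toy sketch below (invented for illustration, not drawn from the standard), a single test input executes every statement of a tiny learned "model", achieving 100% statement coverage while revealing almost nothing about its behaviour across the input space, because that behaviour lives in the weight values rather than in branches of code.

```python
# Sketch of why structural coverage says little about a learned model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 20))   # stand-in for trained weights
b = rng.normal(size=2)

def classify(x: np.ndarray) -> int:
    """A minimal 'model': every statement runs for any single input."""
    logits = W @ x + b
    return int(np.argmax(logits))

# One call achieves full statement coverage of `classify`, yet exercises
# a vanishingly small fraction of the model's possible behaviours.
print(classify(rng.normal(size=20)))
```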

Whilst software-based, AI algorithms tend to lack the deterministic behaviour of traditional software, making standard software testing approaches such as coverage analysis substantially less effective. Although a model is built from a known set of training data, during development it is typically subject to randomised regularisation techniques, such as input perturbation and dropout, intended to improve its ability to generalise to data outside the training set. As a result, the same training data can yield different models that achieve the same objective, and that is before we consider models that continue to learn once in an operational environment.
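As a small illustration (a scikit-learn sketch standing in for the stochastic side of training generally, rather than dropout specifically), the two networks below are trained on identical data with identical architectures and differ only in the random seed driving weight initialisation and shuffling, yet they end up as measurably different models.

```python
# Sketch: identical data, identical architecture, different random seed --
# the stochastic elements of training yield measurably different models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

model_a = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)
model_b = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1).fit(X, y)

# The two models assign visibly different probabilities to the same inputs,
# which is what undermines conventional coverage-style testing arguments.
diff = np.abs(model_a.predict_proba(X)[:, 1] - model_b.predict_proba(X)[:, 1])
print(f"Max probability disagreement on identical inputs: {diff.max():.3f}")
```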

An operating AI is, like a human, highly context-dependent: its response depends on the environmental conditions in which it operates and on its prior state, as well as on the inputs it receives. Demonstrating that an AI can operate safely across a broad range of contexts could therefore require a vast scope of testing and validation, potentially beyond the economic means of the operator. Consider, for example, the range of experiences needed to test a self-driving car in all conditions. We're probably all familiar with the fatal Tesla accident in which the vehicle failed to identify a truck turning across its path, its white trailer indistinguishable against the brightly lit sky. That example wasn't in the vehicle's training data, but how many other examples would be needed to provide effective validation in all conditions?
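To get a feel for the scale of the problem, even a toy enumeration of operating contexts multiplies out quickly. The factor lists below are invented for illustration, not a real test taxonomy, and each resulting scenario might itself need many validation runs.

```python
# Toy illustration of how operating-context combinations explode.
# The factor lists are hypothetical, not a real test taxonomy.
from itertools import product

factors = {
    "lighting":      ["night", "dawn/dusk", "overcast", "bright sun", "low sun glare"],
    "precipitation": ["none", "rain", "snow", "fog"],
    "traffic":       ["none", "light", "heavy"],
    "surface":       ["dry", "wet", "contaminated"],
    "sensor state":  ["nominal", "partially degraded"],
}

scenarios = list(product(*factors.values()))
print(f"{len(scenarios)} distinct scenarios")  # 5*4*3*3*2 = 360

# At, say, 1000 validation runs per scenario the campaign is already
# 360,000 runs -- and this toy taxonomy ignores most real-world variation.
print(f"{len(scenarios) * 1000:,} runs at 1000 per scenario")
```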

Demonstrating that an AI has been subject to an appropriate software development life-cycle, as per DO-178C, in no way provides adequate evidence that it is safe. So, if existing software safety techniques are not generally applicable to AI, and we cannot ascribe quantitative failure rates as we would with hardware, how can we demonstrate AI safety? At the moment the typical approach is to validate performance in the real world and compare it against that delivered by non-AI solutions: in the case of self-driving vehicles, the accident rate of the autonomous system is compared with that of human drivers over similar distances. However, that is a relatively weak approach to safety. No one wants to be in an autonomous vehicle when an accident mode the validation process never exercised is discovered.
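The statistical weakness of the comparative approach can be quantified. Under a simple Poisson model of random failures (a textbook assumption used here for illustration, not a method drawn from any regulation), demonstrating a target failure rate at a given confidence from failure-free operation requires exposure of roughly -ln(1 - confidence) divided by the target rate:

```python
# Sketch: failure-free exposure needed to demonstrate a target failure
# rate, assuming failures follow a simple Poisson model (an illustrative
# textbook assumption, not a regulatory method).
import math

def required_exposure(target_rate: float, confidence: float) -> float:
    """Hours of failure-free operation needed to claim `target_rate`
    (failures per hour) at the given confidence, with zero failures seen."""
    return -math.log(1.0 - confidence) / target_rate

for rate in (1e-6, 1e-9):
    hours = required_exposure(rate, confidence=0.95)
    print(f"rate {rate:.0e}/h at 95% confidence: {hours:,.0f} failure-free hours")

# A catastrophic-failure target of 1e-9/h needs roughly 3 billion
# failure-free hours -- far beyond what in-service trials can deliver.
```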

Outside of the loosely regulated road transport sector, most safety-critical industries, aviation among them, prefer to demonstrate that an operation is safe before it is approved for use. However, without an appropriate tool-set to enable this, how are ATM engineers to progress? This is a real challenge, both for an industry keen to use AI and for the regulators who will have to approve the use of the technology.

Looking at current academic work, there seems to be little research aimed at closing the gaps in safety assessment techniques for AI systems. This is likely to remain the case for some time, during which many new uses of the technology in safety-critical aviation systems will emerge. How we realise the benefits of the technology without prohibitive costs for safety approval is going to be a major challenge.

Contact the author

Steve Leighton
Tel: +44 1252 451 651

