Aerospace: human intervention and collaboration in AI and cyber security
Recently, I was invited to take part in a discussion with Oxford University academics and other industry leaders. It was wide in scope, but focused on how to use AI to protect the aerospace industry from an increasing range and number of cyber threats and attacks.
Here are some of the key points...
The role of human intervention in aerospace security
Human/machine teaming is an emerging, but critical, research area in autonomy. Deploying any type of autonomous system first requires a big-picture view of technical robustness, safety, and governance – who can access data, how to team humans and machines, and how to build the interfaces between them.
Relying on human intervention is a key security tenet of autonomous solutions. But it’s fraught with peril – for example, an unstable system can still pose a threat whether you have a well-trained pilot in the cockpit or not.
Even if a system is effective, can we be sure the human intervention will work? And the more reliable an autonomous system becomes, the easier it is to become complacent, potentially leading to skill atrophy, because human skills are difficult to maintain when they are rarely called upon.
And what about public confidence? Even though the technology could save millions of lives, the public still expects a record of zero harm before trusting autonomous cars. Typically, people only remember the mistakes.
It was the same when robots and humans started sharing the factory floor, so it’s up to technologists and academics to convey the harder-to-see data that enlightens and educates.
Building cyber security into design
AI was being used to build autonomous models long before cyber security became a pressing concern, so security must now be built in as part of the design process. Data has to be key to this, as it offers a potential advantage over attackers, allowing us to gather and respond to real-life examples that theoretical models cannot adequately capture.
“We have to start thinking about cyber security as we’re designing systems.”
– Dr Danette Allen, NASA
Safety architecture is also crucial – both internally and from a regulatory perspective. How do you prove that the thing that needs to be secure is secure? We're never really going to know without many years of field trials. And we can't prove outright that a system is secure – only show that it is as safe as reasonably practicable and that human intervention remains feasible.
“Trust” will be crucial to the success of autonomy. When introducing AI, we must consider how it supports trust and how we can ensure it identifies anomalies rather than contributing to them. And increasingly complex systems mean we can't think about academic, industrial, or regulatory concerns in isolation. We need a joined-up approach covering everything – technology, policy, law, and strategy.
Collaboration is crucial
Collaboration between industry and academia is key to tackling cyber security threats. The Trustworthy Autonomous Systems programme was set up to emphasise this collaboration and works for everyone, but more international collaboration is needed. While the US and Europe have typically been at the forefront of global aviation standards, we'll need a more globalised approach – exemplified by the US-led international spaceflight programme, Artemis, where the US and its allies work on systems, protocols, and standards that meet everyone's needs.
Standards and technologies are emerging from new countries and regions, too. As we start to leave the “safety but no security” legacy behind, it’ll be interesting to see what problems arise from systems co-opted from different domains with different threats. Each will require the transfer of capabilities across domains (sometimes referred to as permeable boundaries).
This article was authored by Paul Gosling, CTO of Thales UK, and Fellow of the Royal Academy of Engineering.