EU Liability Directive on AI

The European Commission recently (28 September 2022) released its legislative proposal on AI liability, alongside a revised Product Liability Directive (PLD), aiming to bring the EU’s liability regime into the digital age.

The AI Liability Directive (ALD) has been shaped to work alongside the EU AI Act, which is currently under negotiation. The stated objective is to address characteristics of AI software which are considered challenging under current liability rules, specifically “opacity, autonomous behaviour and complexity”. Liability law will form an important aspect of implementing AI regulation, as it provides a mechanism to determine who should be held responsible when AI malfunctions or causes harm.

The ALD introduces a ‘presumption of causality’, intended to better safeguard anyone harmed by AI. For causality to be presumed, a claimant must demonstrate that the provider failed to comply with a duty of care, that it is “reasonably likely” this fault influenced the AI system’s output, and that the output caused the damage. National courts can then order the developers or deployers of AI to disclose relevant information, which according to the directive should be proportionate and limited to what is necessary to support a claim.

While the ALD is closely linked to the EU AI Act, it also covers AI which does not fall into one of the “high-risk” categories as outlined in Annex III, if damage can be linked to the system.

You can find out more about the ALD on the European Commission website, and in a helpful explainer published recently by the Ada Lovelace Institute.

