New liability rules on products and AI to protect consumers and foster innovation

28 Sep 2022 02:26 PM

The Commission yesterday adopted two proposals to adapt liability rules to the digital age, the circular economy and the impact of global value chains. Firstly, it proposes to modernise the existing rules on the strict liability of manufacturers for defective products (from smart technology to pharmaceuticals). The revised rules will give businesses legal certainty so they can invest in new and innovative products, and will ensure that victims can get fair compensation when defective products, including digital and refurbished products, cause harm. Secondly, the Commission proposes for the first time a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation. In line with the objectives of the AI White Paper and the Commission's 2021 AI Act proposal, which sets out a framework for excellence and trust in AI, the new rules will ensure that victims benefit from the same standards of protection when harmed by AI products or services as they would if harm were caused under any other circumstances.

Revised Product Liability Directive, fit for the green and digital transition and global value chains

The revised Directive modernises and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products, from garden chairs to advanced machinery. It ensures fair and predictable rules for businesses and consumers alike.

Easier access to redress for victims: AI Liability Directive

The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims (whether individuals or businesses) and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside the scope of the Product Liability Directive, in cases in which damage is caused by wrongful behaviour. This covers, for example, breaches of privacy or damage caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated against in a recruitment process involving AI technology.

The Directive simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features. First, a 'presumption of causality' will apply in circumstances where a relevant fault has been established and a causal link to the AI's performance seems reasonably likely; this addresses the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal reparation, through a right of access to evidence from companies and suppliers in cases in which high-risk AI is involved.

The new rules strike a balance between protecting consumers and fostering innovation, removing additional barriers for victims to access compensation while laying down guarantees for the AI sector, for instance by introducing the right to contest a liability claim based on the presumption of causality.
