
As part of its ongoing work to regulate the digital economy, the European Commission (“the Commission”) recently put forward the proposal for a Revised Product Liability Directive (“RPLD”) and the proposal for the Artificial Intelligence (“AI”) Liability Directive. Both proposals are relevant for the regulation of AI systems in Europe, each approaching this issue from a different perspective.
Although these initiatives are presented as part of the same legislative package, in that both seek to address damage caused by AI systems, the two proposals differ. The RPLD proposal seeks to modernize an existing strict liability regime (also called “no-fault” liability because it does not require proof of intention or negligence). The AI Liability Directive is a brand-new initiative addressing fault-based liability (where intention or negligence must be proven). Since they take different approaches to liability, there is no overlap between the claims that can be brought under them. This blog post considers each proposal separately, discussing their key provisions and how they interact with one another and with the AI Act (discussed in an earlier blog post).
The Revised Product Liability Directive Proposal (“RPLD Proposal”)
As the title suggests, this proposal seeks to modernize an existing legislative instrument, the Product Liability Directive (“PLD”), adopted in 1985. The purpose of that instrument was to harmonize liability rules for defective products across Member States in order to avoid obstacles to the free movement of goods as well as distortions of competition. It establishes a strict liability regime under which injured persons are only required to prove that they suffered damage and that this damage was caused by a defective product (Article 4 PLD). The PLD revolves around the main elements of a claim, namely the existence of a product; the damage suffered by the injured party; the defect of the product; and the causal link between the defect and the damage suffered. The RPLD Proposal is significant because it aims to expand the meaning of each of these key concepts in order to bring them in line with the challenges and realities of the digital age. These concepts are discussed below.
Software to fall under the definition of “product”
Starting from the concept of “product”, the RPLD Proposal lays down that software and digital manufacturing files fall under the definition of a product (Article 4(1)). Recital (3) provides further details on what a “product” will be under the new liability regime, making it clear that AI systems will also be included. According to the same Recital, this will encourage the roll-out and uptake of AI, while ensuring that claimants can enjoy the same level of protection irrespective of the technology involved.
Leaving AI aside, the fact that “software” in general is included in the definition of “product” significantly expands the scope of the RPLD, ensuring it will also cover software included in products (such as the software used in cars) and stand-alone software such as mobile apps. Consequently, the Revised Product Liability Directive is not only a key instrument for the regulation of AI but is also expected to play an important role in the regulation of the digital age more generally.
Loss or corruption of data as a type of damage
Another important change suggested by the RPLD Proposal is the expansion of the meaning of “damage”. Under the current system, damage is limited to death, personal injuries and damage to property which amounts to more than EUR 500 (Article 9 PLD). According to the RPLD Proposal, the concept of damage should be extended to cover not only “medically recognized harm to psychological health” but also material loss resulting from the loss or corruption of data that is not used exclusively for professional purposes (Articles 6(a) and 6(c)). The EUR 500 threshold would also be removed. The RPLD Proposal provides a concrete example of a situation where this type of damage would be relevant, namely the deletion of data saved on a hard drive (RPLD, Recital (16)). Finally, the wording of this provision, which only excludes data used exclusively for professional purposes, suggests a broad scope of application.
Systems that learn after development
Besides showing that they suffered a type of damage covered by the PLD, a claimant must also prove that the product in question was defective. Article 6 of the RPLD Proposal stipulates (as does the PLD) that a product will be considered defective when it does not “provide the safety which the public at large is entitled to expect”, taking all circumstances into account. The RPLD Proposal expands the list of circumstances that should be considered, including the ability of a product to learn after deployment (Article 6(1)(c)) and the “moment in time when the product left the control of the manufacturer”, which may be after the product has been placed on the market (Article 6(1)(e)). The proposal explains that these provisions have been specifically included to address the ability of algorithms to learn after being deployed (RPLD, Recital (23)).
Defectiveness, causality and rebuttable presumptions
Article 9 addresses the burden of proof, setting out several rebuttable presumptions concerning both defectiveness and causality. First, Article 9(2) lists three instances where the defectiveness of the product will be presumed: (a) failure of the defendant to disclose evidence required under Article 8(1); (b) failure to comply with mandatory safety requirements laid down in Union or national law; and (c) the presence of an obvious malfunction. Subsequently, Article 9(3) creates a presumption of causality where it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question.
The second of the presumptions of defectiveness discussed above (failure to comply with mandatory safety requirements laid down in Union or national law) deserves particular attention because it is linked to the AI Act. Concretely, the AI Act lists several requirements that high-risk AI systems must meet. According to Article 9 of the RPLD Proposal, where a claimant can show that a high-risk AI system did not comply with the mandatory requirements laid down in the AI Act, they will benefit from a rebuttable presumption that the system was defective. While this strengthens the claimant's position, it may also raise implementation challenges, given the difficulties claimants may face in proving non-compliance with the AI Act in the first place.
Furthermore, Article 9(4) includes another rebuttable presumption meant to address instances where a claimant faces “excessive difficulties, due to technical or scientific complexity” in proving the defectiveness of a product, causality, or both. The same article gives national courts the power to decide when this presumption is to be applied, but stipulates that, to benefit from it, a claimant will still have to demonstrate that (a) the product contributed to the damage and (b) it is likely that the product was defective or that its defectiveness is a likely cause of the damage. As Recital (34) recognizes, this presumption is particularly relevant in the case of AI systems. But, given the requirements it sets for a successful claim, its applicability could remain limited in practice.
The AI Liability Directive Proposal
Main provisions
The purpose of the AI Liability Directive proposal is to simplify the legal process for victims when it comes to proving that someone’s fault led to damage. To that end, it introduces a right of access to evidence and a rebuttable presumption of causality.
Before discussing these novelties, it is worth making a few remarks on the scope of, and definitions laid down in, the AI Liability Directive Proposal (Articles 1 and 2 respectively). According to Article 1, the Directive would only apply to claims brought under national fault-based liability regimes. Article 2, which defines the term “claimant”, lays down that the term includes not only the person who was directly injured but also a person acting on behalf of one or more injured persons. This is significant because it explicitly recognizes the ability to bring collective actions for harms caused by AI systems, in accordance with Union or national law.
Moving on to the substantive provisions, Article 3 deals with the disclosure of evidence and a rebuttable presumption of non-compliance (not to be confused with the rebuttable presumption of causality, covered in the subsequent article). Put simply, this provision introduces a qualified right to obtain access to relevant evidence before trial. The right is qualified because the potential claimant must still demonstrate the plausibility of their claim and show that they have undertaken “all proportionate attempts at gathering the relevant evidence from the defendant” (Article 3(2)). The presumption of non-compliance applies where a defendant fails to comply with a disclosure order made by a national court (Article 3(5)). In other words, where no access to evidence is granted, it will be presumed that there was a breach of the relevant duty of care. This is significant because proving non-compliance is likely one of the biggest challenges that potential claimants will face.
The second main feature of the AI Liability Directive is the introduction of a rebuttable presumption of causality in the case of fault (Article 4). Where this presumption applies, the claimant will no longer need to prove that the defendant's fault caused the output produced by an AI system (or the system's failure to produce an output) which led to the damage.
However, for this presumption of causality to apply, three conditions must be met, namely (1) a failure to comply with a duty of care, either proved by the claimant or presumed by the court under Article 3(5) discussed above; (2) a reasonable likelihood that this failure influenced the output produced by the AI system (or its failure to produce an output); and (3) causality between that output (or lack thereof) and the damage. In cases where claims concern high-risk AI systems, the first condition will only be met where the claimant can show a failure to comply with the specific mandatory requirements for high-risk AI systems laid down in the AI Act (Articles 4(2) and 4(3)).
Article 4 then limits the scope of application of this presumption by establishing that it will not apply to non-high-risk AI systems, save where the national court considers it “excessively difficult” for the claimant to prove the causal link (Article 4(5)), or to instances where the defendant uses an AI system for personal purposes, unless the defendant materially interfered with the conditions of operation of the system (Article 4(6)). Finally, the presumption will also not apply to high-risk AI systems where the defendant can show that sufficient evidence and expertise is “reasonably accessible” for the claimant to prove the causal link (Article 4(4)). It remains to be seen how these exceptions will apply in practice. Concepts such as “reasonably accessible evidence” or “excessively difficult to prove” can be interpreted broadly, and significant differences may arise between the interpretations of national courts.
Interaction with the AI Act and the RPLD
In terms of how the AI Liability Directive proposal will interact with the AI Act, the Explanatory Memorandum of the proposal sets out that safety (pursued by the AI Act) and liability (addressed by the AI Liability Directive) are two sides of the same coin. The proposal explains that the AI Act intends to reduce the risks posed to safety and fundamental rights through ex ante regulation, establishing several requirements that high-risk AI systems must meet before being placed on the market. However, it also recognizes that the AI Act does not provide any compensation for injured persons who suffered damage caused by an AI system. This is where the AI Liability Directive comes into play, offering remedies for the inevitable cases where AI systems will cause harm.
The Explanatory Memorandum also sets out the relationship between the AI Liability Directive and the Revised Product Liability Directive. It explains that the AI Liability Directive covers national claims mainly based on fault, with a view to compensating “any type of damage and any type of victim”. This is important because it suggests that the AI Liability Directive could be used in cases of algorithmic discrimination. Indeed, the press release from the Commission makes this point directly, arguing that this instrument will make it easier to obtain compensation if someone has been “discriminated in a recruitment process involving AI technology”. Given the popularity of AI-based solutions not only in recruitment but also in assessments determining access to benefits or loans, this is a significant development.
Looking ahead
The proposed changes are welcome efforts to regulate AI because they would empower individuals to bring claims where they have suffered damage. While these two instruments will make it easier to bring claims relating to AI systems, injured parties will still have to overcome significant challenges (e.g., for claims concerning high-risk AI systems, they will need to demonstrate non-compliance with the AI Act to benefit from the presumption of causality in Article 4 of the AI Liability Directive). Combined with the relatively broad exceptions that apply, these hurdles mean that bringing a successful claim before a court may still prove difficult.
The interplay between the AI Act and the two proposals discussed in this blog post is another important aspect to consider. For example, systems deemed to be high-risk will not only receive different treatment under the AI Act, but also under the AI Liability Directive and the RPLD.
This blog post was authored together with Ms. Konstantina Bania.
Photo by Christian Lue on Unsplash