The EU’s commitment to standardizing the “digital Wild West”: Where do we stand on artificial intelligence?

By Gabriele Lococciolo, Le Courrier d’Europe

Commissioner for the Internal Market, Thierry Breton. Credit: European Parliament, Flickr.com

An increasingly necessary regulation

In a Europe that increasingly needs to adapt to the challenges posed by the digital world – and in particular to the consequences these challenges can have for the single market – legal answers are beginning to emerge at the European level. To this end, the European Commission – the institution that holds the monopoly on legislative initiative – has put forward several proposals for directives and regulations in recent years. After all, the current Commissioner for the Internal Market, Thierry Breton, has declared that the era of the digital “Wild West” is over and that whatever is prohibited in the public space will also be prohibited in the digital space.

This is the genesis of texts such as the Digital Markets Act (DMA), which aims to regulate the platform market and better control the so-called GAFAMs, whose market position is increasingly dominant, and the Digital Services Act (DSA), which aims to impose greater responsibilities on digital companies for the removal of illegal content. These efforts feed into, inter alia, the six priorities of the von der Leyen Commission, which include “A Europe fit for the digital age”: promoting the new generation of technologies while giving citizens the means to protect themselves from the resulting risks.

This duality might seem paradoxical, yet it must be taken into account in the current era. Risk is a key concept in the European Union’s digital regulatory efforts: the EU adopts a genuine “risk-based approach” to the potential violations of citizens’ fundamental rights, such as the right to respect for privacy, for which artificial intelligence (AI) in particular could be responsible. Hence the birth of the AI Act, a proposal for a regulation on artificial intelligence presented by the Commission in April 2021. The Commission, increasingly committed to providing effective responses to the challenges posed by AI, is now working on a proposal for a directive (dated September 28, 2022) on adapting non-contractual civil liability rules to the field of artificial intelligence.

What is non-contractual civil liability?

Non-contractual civil liability is the obligation, for the person whose civil liability is engaged, to repair the damage caused by a fault (a failure to fulfil a legal or moral obligation). Three conditions must be met for non-contractual civil liability to arise: a fault, i.e. a breach of a legal or moral obligation; a damage, or rather the legal recognition of a damage; and a causal link, that is, the cause-and-effect relationship between the fault and the damage.

Since these three conditions are cumulative, they must all be met for a person’s non-contractual liability to be established. These conditions are quite demanding for victims, who often find themselves unable to prove these elements. Sometimes it is case law (via the judge) or statute (the legal texts) that establishes a presumption of a causal link, which can be a useful tool for victims as it somewhat “streamlines” the procedure for proving it.
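
To make the cumulative test concrete, here is a minimal, purely illustrative sketch in Python; the type and function names are hypothetical and are not drawn from any legal text:

```python
# Illustrative sketch only: the three cumulative conditions of
# non-contractual civil liability modeled as a boolean test.
# All names here are hypothetical, not legal terminology.
from dataclasses import dataclass

@dataclass
class Claim:
    fault_proven: bool        # breach of a legal or moral obligation
    damage_recognized: bool   # legally acknowledged harm
    causal_link_proven: bool  # cause-and-effect between fault and damage

def liability_established(claim: Claim) -> bool:
    # The conditions are cumulative: all three must hold at once.
    return (claim.fault_proven
            and claim.damage_recognized
            and claim.causal_link_proven)

# A victim who proves fault and damage but not the causal link loses:
print(liability_established(Claim(True, True, False)))  # False
```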

Among the types of presumption of a causal link, legal doctrine distinguishes between the so-called rebuttable presumption (praesumptio iuris tantum), which allows the party against whom the presumption is directed to prove the opposite, and the so-called irrebuttable (or conclusive) presumption (praesumptio iuris et de iure), which admits no proof to the contrary.
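
The difference between the two types of presumption can likewise be sketched as simple logic; again the names are hypothetical and the model deliberately simplifies legal doctrine. A rebuttable presumption stands only as long as no proof to the contrary is brought, while an irrebuttable one stands regardless:

```python
# Illustrative sketch of the two types of presumption; the names are
# hypothetical and the model deliberately simplifies legal doctrine.

def causal_link_presumed(kind: str, contrary_proof: bool) -> bool:
    """Return whether the presumed causal link still stands."""
    if kind == "irrebuttable":      # praesumptio iuris et de iure
        return True                 # no proof to the contrary is admitted
    if kind == "rebuttable":        # praesumptio iuris tantum
        return not contrary_proof   # falls once the opposite is proven
    raise ValueError(f"unknown presumption type: {kind}")

# A rebuttable presumption collapses once the counterparty proves otherwise:
print(causal_link_presumed("rebuttable", contrary_proof=True))    # False
# An irrebuttable presumption survives even proof to the contrary:
print(causal_link_presumed("irrebuttable", contrary_proof=True))  # True
```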

What steps forward for the European Union?

When it comes to liability for AI software, many questions arise to which the EU attempts to give an answer: when a prompt given to an AI system – e.g. ChatGPT – results in a harmful output for the user, who will be held liable? The author of the prompt, the designer of the AI software (OpenAI in the case of ChatGPT), or both? And how is the damage caused by an AI system to be repaired? These questions are the subject of the proposal for a directive of September 2022 on the liability of AI service providers.

This proposal for a directive aims to promote the development of AI that is as safe as possible, so that its benefits for the internal market can be fully realized (in accordance with Article 114 of the TFEU, which provides the legal basis for the EU to intervene in the standardization of this field). In doing so, the EU’s main purpose is to ensure that victims of damage caused by AI enjoy protection nearly equivalent to that available in other, more traditional fields not necessarily related to the digital world (see the end of the digital “Wild West” proclaimed by Thierry Breton).

Why does the EU want to regulate the field of AI?

Firstly, since victims may find it difficult to prove their damage and thus have it recognized, the proposal for a directive aims to ease the victim’s evidentiary work. Acting on the causal link is therefore supposed to be an effective means of minimizing this work. Indeed, the features of AI systems, in particular their complexity, can make it very difficult or excessively burdensome for victims to identify the liable party and demonstrate a causal link between fault and damage. When claiming compensation, injured parties may face very high upfront costs and much longer legal proceedings than in non-AI-related cases, which may discourage or even deter them from seeking compensation at all.

Secondly, since national strategies can differ significantly in the face of the challenges posed by AI, the absence of EU action could lead to a high degree of fragmentation between national legislations. This calls for EU action to ensure the greatest possible harmonization of Member States’ AI laws (once the directive has been transposed into national law). From this perspective, the proposal aims to prevent the fragmentation that would result from each Member State adapting its national rules on non-contractual civil liability to AI in its own way.

An actual presumption or a simple lightening of the burden of proof?

Article 4 of the proposal for a directive aims to establish a ‘rebuttable presumption of a causal link in the case of fault’, i.e. it establishes that the damage suffered by the victim is presumed to have been caused by the use of the AI system, unless the counterparty proves otherwise. But how should the presumption of a causal link envisaged by this proposal be assessed? It should be noted that, since this is a rebuttable and not an irrebuttable (or conclusive) presumption, it can be overturned at any time if the counterparty demonstrates that the damage was not caused by the use of the AI system.

Thus, on closer inspection, the presumption does not entirely exempt the victim from proving the causal link between the damage suffered and the use of the AI system, but it does facilitate this proof by reversing the burden of proof, i.e. by placing it on the other party. Indeed, if, on the one hand, this notion allows national courts to presume, ‘for the purposes of applying liability rules to a claim for damages, the causal link between the fault of the defendant and the output produced by the AI system’ (pursuant to Article 4(1)), on the other hand, the causal link and its presumption can always be defeated by the counterparty should it demonstrate the contrary.
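
As a rough illustration of what the reversal changes in practice – a hypothetical sketch, not the directive’s actual test – without the presumption, the victim’s claim on this point fails unless they prove the link, whereas with Article 4’s presumption it succeeds unless the defendant disproves the link:

```python
# Hypothetical sketch of the burden-of-proof reversal under Article 4;
# a simplification for illustration, not the directive's actual wording.

def link_established_default(victim_proves_link: bool) -> bool:
    # Ordinary rule: the victim bears the burden of proving the causal link.
    return victim_proves_link

def link_established_article4(defendant_disproves_link: bool) -> bool:
    # Rebuttable presumption: the causal link is taken as established
    # unless the defendant demonstrates the contrary.
    return not defendant_disproves_link

# Same uncertain situation: neither side manages to prove anything.
print(link_established_default(False))   # False: the victim loses the point
print(link_established_article4(False))  # True: the presumption favors the victim
```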

Moreover, the time needed for the other party to attempt proof to the contrary means that the victim faces considerable delays before all the necessary checks are completed and the damage can be repaired. One could therefore argue that this text aims at lightening the victim’s burden of proof rather than at a genuine irrebuttable presumption, the praesumptio iuris et de iure, which, as noted above, by not admitting evidence to the contrary, would drastically reduce the effort and time required to obtain reparation of a damage.

Two sides of the same coin

The Commission’s proposal for a directive on non-contractual civil liability in the field of AI is certainly a tool that enriches the European Union’s legislative arsenal and increases its normative power in digital matters. Moreover, it is undoubtedly a starting point for the protection of victims of AI, who will become ever more numerous in an increasingly digitized world.

Its assessment, however, is mixed. Article 4 of the proposal for a directive of September 2022, the core of the text, establishes the notion of a ‘rebuttable presumption of a causal link’, where the term ‘rebuttable’ could prove to be a genuine “defect of form”, potentially responsible for considerable delays when a victim claims compensation for their damage. It should be noted, in this regard, that in the text as it currently stands, the need to demonstrate the causal link remains, on one side or the other, with all that this entails.

Nevertheless, this lightening could prove significant: in the AI field, demonstrating the causal link between fault and damage can be quite complex for the victim, given the complexity of AI systems and the lack of transparency that hinders understanding of how they work. Shifting the burden of proof onto the companies that create AI software, and that know its functioning far better, can therefore significantly ease the victims’ evidentiary work.
