Five key issues about the regulation of AI

The first thing to be mindful of when thinking about artificial intelligence (“AI”) is the need to see through the hype that surrounds it. In the past few weeks alone there have been claims that an AI system has become sentient (it hasn’t), while narratives of AI either saving or destroying humanity are commonplace.

The regulation of AI poses significant challenges, and requires the involvement of different disciplines (e.g., data science, computer engineering, philosophy). Different stakeholders around the world have already started engaging with those challenges. Most notably, the European Union has recently started to develop a comprehensive regulatory framework for AI, a worthy pursuit which will undoubtedly shape the digital age and emerging algorithmic societies.

The purpose of this piece is to outline five key issues which have emerged from the legislative process around the Artificial Intelligence Act (“AIA”) in the 14 months that have passed since the proposal was published by the European Commission (the “Commission”). These issues will remain relevant until the legislative process is completed, and the way in which they are addressed will likely determine the effectiveness of the AIA. Before discussing these issues, I briefly introduce AI and its importance for modern societies.

What is AI?

One of the many challenges that the regulation of AI brings about is determining what the term “AI” encompasses. A good place to start is the 2018 Commission Communication on AI, which states that “AI refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” The AI High-Level Expert Group (“AI HLEG”) built on this definition, explaining that, as a scientific discipline, AI includes several approaches and techniques, such as machine learning, machine reasoning, and robotics.

The term “machine learning” is also important when discussing AI and its regulation. That is because machine learning is one of the most common and effective AI techniques, playing a key role in image recognition, product recommendations and spam filters. Many of the recent breakthroughs that have been made using AI involve machine learning.
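To make the idea concrete, the following is a minimal sketch of a machine-learning spam filter in Python. It assumes the scikit-learn library is available, and the training messages and labels are invented purely for illustration; the point is that the system learns to classify from labelled examples rather than from explicitly programmed rules.

```python
# A minimal sketch of a machine-learning spam filter.
# Assumes scikit-learn is installed; the messages and labels below
# are toy data invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: messages paired with labels (1 = spam, 0 = not spam).
messages = [
    "Win a free prize now",
    "Cheap loans, apply today",
    "Meeting moved to 3pm",
    "Please review the attached report",
]
labels = [1, 1, 0, 0]

# The pipeline turns each message into word counts, then fits a
# naive Bayes classifier that learns which words are indicative of spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# No rule for this exact message was ever written; the label is
# inferred from patterns learned during training.
print(model.predict(["Claim your free prize today"]))  # likely [1]
```

Notably, no human ever writes the rule “messages containing ‘free prize’ are spam”; that pattern is extracted from the data, which is precisely what makes such systems powerful and, from a regulatory perspective, hard to pin down.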

Why is AI important?

The impact of AI on our daily lives is clear. Beyond the three examples of machine learning applications described above (which may seem trivial), AI can also help create better drugs, help blind people navigate the world, create better prosthetics, and contribute to disaster relief and the fight against climate change. The capacity of AI to change the world is astonishing.

But the transformative power of AI has also given rise to concerns, which have led to the adoption of various sets of ethical principles, both by companies developing AI systems, such as Microsoft, and by intergovernmental organisations, like the OECD, as well as to calls for regulation.

Five key issues about the regulation of AI

Before exploring the key issues concerning the regulation of AI, it is worth outlining the relevant progress made by the EU institutions to date. The Commission published the Proposal for the AIA in April 2021. Since then, the Council of the European Union (“Council”) has discussed it under both the Slovenian and the French presidencies, the latter publishing a consolidated version of the text on June 15th. In the European Parliament (“Parliament”), after a competency battle, the Internal Market (IMCO) and Civil Liberties (LIBE) committees emerged as co-leads on the file in December 2021. In June 2022, MEPs from all committees involved tabled over 3,000 amendments to the text.

Key Issue 1 – Definitions

As mentioned above, determining what the term “AI” refers to is far from a straightforward task. Consequently, the definition of AI for the purposes of the AIA has been a contentious point and will likely remain one throughout the legislative process. The definition suggested by the Commission in the Proposal has been criticised both as too broad and as not broad enough, while the definition in the Council’s consolidated text departs from it significantly.

A related discussion about what AI is or should be for the purposes of this regulation concerns the definition of “general purpose AI” (a term first introduced by the Slovenian Presidency in a partial compromise text, on which the French Presidency has done significant work since). In the most recent consolidated version, general purpose AI systems are defined as those intended by their providers to perform generally applicable functions, such as image or speech recognition, translation or pattern detection (Article 3(1)(b)). Several amendments tabled by MEPs also address general purpose AI, and it remains to be seen how the final definition will shape the relations between the different actors involved in the AI lifecycle, given the distinctive nature of such systems’ intended purpose (see Key Issue 2 below).

Key Issue 2 – AI lifecycle dynamics

The Proposal put forward a relatively simple structure for the stakeholders in the AI lifecycle, revolving mostly around providers, the entities that develop an AI system, and users, those who use it under their own authority. Importers, distributors and operators also fall within scope, but they are subject to fewer and less substantial obligations.

Under the Proposal, the dynamics between providers and users are governed by two elements: (1) the intended purpose of the AI system in question, which is set by the provider; and (2) any substantial modification made to the system by the user, which has the effect of transforming that user into a provider. While apparently straightforward, this equation raises difficult questions, including how the intended purpose should be defined by the provider and what would amount to a “substantial modification”.

Going forward, the meaning of both “intended purpose” and “substantial modification” will be key to the formulation of the final text. The structure of the actors involved could also evolve (recently tabled amendments suggest the inclusion of players such as end users, affected persons, and new and original providers).

Key Issue 3 – Risk vs. Innovation

This dilemma is familiar to those who have followed the regulation of other emerging technologies and applications. There is broad consensus that regulation should not chill innovation. As a result, the way in which risk is defined has important implications. The Commission has opted for an approach that defines risk by reference to individual health, safety and fundamental rights, but the IMCO-LIBE report, prepared by the Parliament’s two co-lead rapporteurs on the file, has suggested expanding this approach by reference to Article 2 of the Treaty on European Union. This new approach would define risk by reference to the values listed in that Article, including democracy, freedom and the rule of law.

Once risk is defined, AI systems will be placed into various categories: unacceptable risk, which will be banned; high risk, which will attract specific obligations; and low or minimal risk, which will remain materially unaffected by the AIA. That being the case, it is not surprising that over 250 amendments have been tabled in relation to Article 5 alone (amendments 1155–1406), which lists the AI systems giving rise to unacceptable risks, and another 200 in relation to Annex III (amendments 3042–3242), which contains the list of high-risk AI systems.

Key Issue 4 – Enforcement

Against the backdrop of the discussions on the ineffective enforcement of the General Data Protection Regulation, and of the approach taken in the Digital Services Act, which gives the Commission an active role in enforcement, a key question is how the provisions of the AIA will be enforced.

Enforcement raises many issues, ranging from the choice between creating new National Supervisory Authorities focused solely on AI and giving existing Data Protection Authorities new powers to supervise implementation, to the role of the Commission and the (soon-to-be-created) European AI Board. The existence of a right to an effective remedy for individuals is also under discussion.

Key Issue 5 – Liability

The Commission has decided to tackle liability separately, but this matter is inherently related to the discussions around the AIA. The Commission has recently set 29 September as the date for publishing its proposal for the Artificial Intelligence Liability Directive. Even though it is separate from the AIA, this instrument is integral to the broader goal of regulating AI in Europe. There is a danger that the different timelines and stakeholders involved in the two legislative processes could lead to discrepancies between the instruments. Ensuring their coherence, for example with regard to the role that compliance with the AIA plays for liability purposes, will be key to their effectiveness.
