The artificial intelligence Act: an attempt to frame AI


Today, the European Union (EU) faces many challenges, from climate change to fast-moving technologies that constantly reshape our world, such as artificial intelligence (AI).

Although AI has existed for quite some time, it has advanced rapidly in recent years, and it has become urgent to frame the changes driven by this technological evolution.

This is why the European Commission put forward a proposal for a regulation on AI: the Artificial Intelligence Act.

What does this proposal involve? How could it affect developers? What are its main points? This article will try to answer these questions. Keep in mind, however, that the proposal is still being discussed by the other institutions involved in the legislative process, so changes may still occur.

WHY this regulation?

First, this proposal responds to calls from the European Parliament and the European Council for legislative action to ensure a well-functioning internal market for artificial intelligence systems.

Furthermore, this regulation is also a political, economic, and social response to AI developments. The EU risks being further marginalised in the setting of technology standards, especially by leading countries like the USA and China, which invest far more in the digital market than the EU.
To illustrate this, in 2016 EU investments in AI were three to four times lower than China's and six to eight times lower than the USA's ^[EU Parliament, AI rules: what the European Parliament wants, 04.05.2022]. This may explain, for example, the American lead in autonomous driving.

At the same time, EU values (such as human rights) could be challenged if the Union does not act in time to take a certain leadership ^[Special Committee on Artificial Intelligence in a Digital Age, European Parliament, Report (2020/2266(INI)) on artificial intelligence in a digital age, 05.04.2022, Paragraph 3 and Paragraph 6, pages 10 to 11]. Moreover, digital tools increasingly serve autocracies and certain corporate actors as instruments of manipulation and abuse. One can point to the well-known Cambridge Analytica scandal ^[S. MEREDITH, CNBC, Here's everything you need to know about the Cambridge Analytica scandal, 23.03.2018] or, more recently, to Clearview AI, which collected billions of pictures and videos without users' consent for facial recognition.

However, these are not the main reasons. One of the EU's purposes is economic integration, and AI is also seen as a driver of industrial performance, a true game changer that could improve productivity, accelerate innovation, and more.

Even though AI undoubtedly has advantages, it also carries risks that should not be underestimated. This is why the proposal aims to be "human-centric", in line with EU values, and respectful of fundamental rights. The aim, however, is certainly not to stifle innovation, but to provide a strong answer to the problems AI raises.

WHAT does the regulation contain?

Definition as a key

The key to any proposal is probably its definitions, for the simple reason that they determine what the entire regulation will apply to. In this regulation, the key definition is without doubt that of an "AI system".

The proposal defines an AI system in Article 3 as: "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

This definition is quite controversial. Some argue it may be too broad, going beyond what should be considered "intelligent" ^[L. CLARKE, TECH MONITOR, MEPs are preparing to debate Europe's AI Act. These are the most contentious issues, 28.07.2021] by potentially covering general computer programs and statistical software. Others defend this broad definition as future-proof. ^[Luca Bertuzzi, EURACTIV, AI regulation filled with thousands of amendments in the European Parliament, 07.06.2022]
The definition of an AI system will be fundamental, and it will probably need to be, in a certain manner, ahead of its time.

A risk-based approach

The proposal takes a risk-based approach, differentiating and classifying AI systems according to the risk they represent: low risk, limited risk, or high risk. Some AI systems present such a high risk that they are prohibited outright. This is the case for a product or service that would "exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability" ^[Commission of EU, Proposal (COM(2021) 206 final) for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence, 21.04.2021]. Logically, AI systems classified as high-risk will have to fulfil many obligations before they can reach the market.

Under the current text, every high-risk AI system placed on the market by a provider will need to undergo a conformity assessment, comply with the AI requirements, and be registered in an EU database of AI systems. A declaration of conformity will also need to be signed. Any substantial change affecting the AI system will require the procedure to be restarted.
High-risk AI systems are also subject to further requirements, such as transparency, a risk management system, and record-keeping. ^[Ibid., Article 9, Article 12, Article 13]

The other categories of AI (low risk and limited risk) will not face the same requirements, which seems logical given the lower risks they present.
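For developers, the tiered logic described above can be sketched as a simple lookup. This is a minimal, hypothetical illustration of the proposal's risk categories and the obligations mentioned in this article, not legal advice; the category names and obligation lists are paraphrased and not exhaustive.

```python
# Hypothetical sketch of the AI Act's risk-based approach.
# Categories and obligations are paraphrased from the proposal, not exhaustive.

OBLIGATIONS = {
    # e.g. systems exploiting vulnerabilities of specific groups are banned outright
    "prohibited": None,
    "high": [
        "conformity assessment",
        "registration in the EU AI database",
        "signed declaration of conformity",
        "risk management system",
        "record-keeping",
        "transparency",
    ],
    "limited": ["transparency"],
    "low": [],  # no specific obligations under the proposal
}

def obligations_for(risk_level: str) -> list:
    """Return the obligations attached to a risk tier, or raise if the practice is banned."""
    if risk_level not in OBLIGATIONS:
        raise ValueError(f"unknown risk level: {risk_level}")
    duties = OBLIGATIONS[risk_level]
    if duties is None:
        raise RuntimeError("this AI practice is prohibited and cannot be marketed")
    return duties

print(obligations_for("high")[0])  # conformity assessment
```

Note that, as the article explains, any substantial change to a high-risk system would send the provider back through the conformity procedure, so in practice these obligations recur over the system's lifetime.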

Distinctive obligations between public and private sector

The public and private sectors do not seem to face the same level of obligations. What is classed as high-risk for the private sector can simply be banned for the public sector. For example, the use of social scoring by public authorities is banned, while it appears to fall under the high-risk regime for the private sector. ^[European Parliamentary Research Service, Regulatory divergences in the draft AI act – Differences in public and private sector obligations, 05.2022, Page 11] This is questionable: why would AI use by the private sector be less threatening to individual and collective rights and freedoms than use by the public sector, especially at a time when private actors can be just as influential as public ones and could drastically change the course of things? This question was raised by a study conducted for the European Parliament. ^[Ibid., Page 18]

Will this regulation be sufficient?

Beyond the fact that it seems questionable to impose lighter obligations on the private sector, even though private actors do not present lower risks, how can the real impact of such a technology be known at all?

As David Collingridge showed in "The Social Control of Technology" ^[B. BENBOUZID, D. CARDON, Contrôler les IA, LA DÉCOUVERTE, 2022, Page 14], when technologies emerge we do not know enough to predict their impact on society. They are therefore deployed so that their consequences can be measured, but once they are rooted in society it is too late: they have become difficult to control.

What’s next?

The proposal is under discussion in the Parliament and the Council of the EU ^[European Parliament, Legislative train schedule – AIA]. In the coming months, the two institutions should debate the text in order to reach an agreement.

Stéphanie Exposito-Rosso

December 13, 2022
