On 21 April, the European Commission unveiled an ambitious project to provide a legal framework for artificial intelligence within the EU. Commission president Ursula von der Leyen had even announced that she wanted to regulate AI within her first 100 days in office. Though the initiative actually comes a year and a half after she took office, it remains a world first.
A focus on respecting citizens’ rights
The aim of this proposed law is clear: to ensure that AI systems deployed on European territory promote the well-being of individuals and the common good. In other words, to build an ethical AI that does not become a tool for authoritarian regimes to control their people. That said, the European authorities are by no means denying AI’s benefits and innovation potential. Thierry Breton, EU Commissioner for the Internal Market, sums it up by saying that AI “offers immense potential but also presents a number of risks”. The EU is also betting on “trustworthy” AI, believing that it cannot be developed without the informed consent of its citizens. A clever balance between human rights and the race for innovation.
A risk-based pyramid approach
To ensure that AI systems used in the EU are “safe, transparent, ethical, unbiased and under human control”, the Commission has designed this regulatory proposal as a pyramid with four levels of risk:
At the bottom of this pyramid are minimal-risk uses of AI that require little or no supervision. Next come uses with limited risk, which are subject to transparency obligations towards users. For example, the use of deepfakes is not a problem in itself, but viewers must be told whether or not a video relaying information is real. Higher still are high-risk AI uses, such as systems that select individuals applying for aid, a loan, a job or a field of study. In this case, additional requirements are imposed in terms of documentation, the quality of the data used, performance, robustness, transparency and, finally, cybersecurity. The principle is that a selection cannot be made on the basis of an algorithm alone. The proposal also introduces a human guarantee, insisting on the obligation of “human oversight” (Article 14).
At the top of the pyramid, any use that infringes on fundamental rights, such as “social scoring” or the kind of facial recognition systems being developed in China, is classified as unacceptable and simply banned. The Commission does not want a Black Mirror scenario. This forthcoming legislation will apply to any company, European or otherwise, that operates in the EU. Hefty fines, of up to €30 million or 6% of a company’s total worldwide turnover, whichever is higher, are provided for in the event of non-compliance with the rules. This should dissuade even the most recalcitrant.
A proposal under fire
Companies and lobbyists argue that such a restrictive framework for the use of AI could prove to be a handicap when facing competition from China and the US, which have more liberal legislation. The Commission believes, on the contrary, that the regulation will help develop AI that is more respectful of citizens’ rights, and measures to support innovation are planned to ensure European companies remain competitive. Politically, the aim is to get ahead of individual Member States’ initiatives so that the EU presents a united front on the issue, including a common (re)definition of what counts as AI.
NGOs such as Alliance Vita, which would like even more binding provisions, support the Commission’s proposal. The Civil Liberties Union for Europe, however, argues that such regulations will not prevent States from carrying out biometric checks on their citizens if they so wish. Other organizations, such as the Center for Data Innovation, counter that a law slowing down the development of an industry still in its infancy is incompatible with the EU’s ambition to become a leader in the field of AI.
The proposal will soon be submitted to the European Parliament, where MEPs, lobbyists and companies will play their part in the debate. It is safe to say that many changes will be made before it is adopted, and the process could well take several years. It remains to be seen whether the final text will live up to its initial ambitions.
By Nicolas Ruscher, Consulting Director at Antidox