The AI Act will affect all businesses

Last Friday, the ambassadors of the Member States of the European Union unanimously approved the AI Act, the first European text designed to regulate the use of artificial intelligence. The legislative process is not yet complete, but it would take a major slip-up for Parliament and the Council not to approve it by the summer. The provisions of the regulation will therefore begin to apply between the end of 2024 and mid-2025.

From the customer…

With tougher penalties than the General Data Protection Regulation (GDPR), the AI Act is already shaping up to be a major catalyst for transforming the governance of artificial intelligence within businesses. It is aimed primarily at players who deploy or supply AI tools, but all their potential customers must take it into account. The discussion on the AI Act thus opens a door to a wider debate: the governance of AI in the workplace.

The use of AI by employees and the phenomenon of ‘shadow AI’, where personal AI tools are used for business purposes without clear guarantees of data confidentiality, highlight the risks associated with lax AI management. These practices can lead to the unintentional use of sensitive data to train AI models, with potentially serious consequences for the confidentiality and security of corporate information.

In particular, it is essential to audit contractual relationships with suppliers of AI tools, to ensure that contracts include clauses guaranteeing compliance with the AI Act. This approach limits the risks associated with the use of AI and protects personal and professional data. Departments in charge of technology procurement must be particularly vigilant about compliance with the standards set by the AI Act.

… to the supplier

Companies engaged in the creation or marketing of AI tools will be required to meet compliance, user-information and transparency obligations, whose extent will vary with the risk level assigned to their AI tools under the AI Act's classification. By imposing strict governance and compliance standards, this regulatory approach aims to establish a secure and ethical framework for the use of AI in the professional sector.

On the French Tech side, several voices have complained that the text discriminates against small AI start-ups. The AI Act classifies AI systems into four categories: prohibited AI, high-risk AI, low-risk AI and minimal-risk AI. Offering prohibited AI – Chinese-style social scoring systems, for example – carries penalties of up to 7% of global turnover or €35 million.

The heart of the matter, however, concerns high-risk AI. The AI Act imposes onerous transparency and compliance constraints on these systems. What matters, then, is understanding what this category includes, as Gilles Rouvier, partner at Lawways and chairman of the Cyberlex network, explains in Qant: “High-risk AI systems meet cumulative conditions defined by the text of the AI Act. In particular, a list of high-risk AI systems appears in Annex III (e.g. systems for recruitment, or for assessing people’s creditworthiness). This list is very broad. The European AI Office and the French authority, which has not yet been designated, will have to do a great deal of work to clarify the technical aspects.”

So ‘high risk’ does not concern only foundation models, whose development is reserved for start-ups that have raised several hundred million euros. In its broadest reading, the category could cover all kinds of generative AI services, and all the companies that use them.

J.R, M. de R.


Read the interview with Gilles Rouvier (Lawways) and all the latest news on artificial intelligence by subscribing to Qant: Cognitive Revolution and the Future of Digital