On 21 April 2021, the European Commission presented a proposal for a regulation laying down harmonized rules on artificial intelligence (COM (2021) 206), which aims to make Europe a world leader in the field of artificial intelligence. The proposal is a follow-up to the European Strategy on AI published in 2018 and aims to put in place the first common European regulatory framework for artificial intelligence. On the same date, the Commission also published an updated version of the EU Member States' Coordinated Plan on Artificial Intelligence and a proposal for a machinery products regulation to replace the Machinery Directive (COM (2021) 202), which would ensure, inter alia, the safe integration of AI systems.
This article provides a review of the key content of the proposed AI rules and measures.
Commission proposal for an AI regulation
The new AI regulation would aim to ensure the reliability of artificial intelligence systems, address the risks associated with artificial intelligence, and establish in Europe the world's highest standards for the industry through proportionate and flexible rules. On the basis of a common definition of artificial intelligence, the rules of the proposed regulation would apply uniformly in all EU Member States, providing a framework for the most seamless cross-border cooperation possible in the field of artificial intelligence. The proposal categorizes the risks associated with artificial intelligence systems as unacceptable, high, limited, or minimal. Artificial intelligence systems identified as posing so-called unacceptable risks would be prohibited outright under the proposal. High-risk systems would be allowed only if strict requirements were met.
Artificial intelligence systems deemed to pose unacceptable risks would be those considered a clear threat to people's safety, livelihoods, or rights. In this respect, the proposed regulation would prohibit, for example, talking toys that encourage dangerous behavior, as well as schemes by which states score their citizens on the basis of social factors.
According to the proposed regulation, high-risk artificial intelligence systems would be defined as systems used:
- In critical infrastructures, where they may endanger human life and health (e.g., in traffic control);
- In education or vocational training, where they may determine a person's access to education and the professional course of their life (e.g., scoring of exams);
- In safety components of products (e.g., AI applications in robot-assisted surgery);
- In employment and human resources management (e.g., AI applications for screening job applications);
- In key private and public services (e.g., credit scoring, which can completely prevent access to a loan);
- In law enforcement, where artificial intelligence may interfere with fundamental human rights (e.g., assessing the reliability of evidence);
- In migration, asylum, and border control management (e.g., verifying the authenticity of travel documents);
- In the administration of justice and democratic processes (e.g., applying the law to a concrete set of facts)
According to the proposal, high-risk AI systems should only be placed on the market if they meet strict requirements:
- The system must be very robust, secure, and accurate
- Adequate arrangements must be in place for risk assessment and risk mitigation
- The data used in the system must be of high quality to minimize risks and discriminatory outcomes
- The system must be carefully documented so that the authorities have the necessary information about its operation and purpose
- Users must be provided with clear and adequate information on the operation and use of the system
- Humans must be able to control the operation of the system adequately to minimize risks
Limited-risk AI systems include, for example, service robots, which would be subject to certain transparency requirements: among other things, users of the service should be able to understand that they are interacting with a machine. Minimal-risk AI systems, such as spam filters or video games, could be used freely under the proposed regulation.
Coordinated plan on AI
The updated coordinated AI plan of the EU Member States outlines the necessary changes to Member States' policies and the investment needs in the EU. At the heart of the Member States' common approach is guaranteeing the security and fundamental rights of businesses and individuals, in order to secure Europe's global leadership in the development of artificial intelligence in the years to come. The plan has the following key objectives:
- Creating favorable conditions for the development of AI
- Promoting AI excellence from the lab to the market
- Ensuring that AI works for people
- Building strategic leadership for European AI
The plan draws on funding from the Digital Europe and Horizon Europe programs, the EU's Recovery and Resilience Facility, and the Union's cohesion policy. The resources anchored in the financial frameworks of the Member States and the EU would thus create additional conditions for European cooperation and ensure, inter alia, adequate research and innovation capacity in all Member States.
Commission proposal for a machinery products regulation
On the same day as the proposal for the AI regulation, the Commission presented its proposal for a regulation on machinery products. The regulation would seek to ensure that the so-called new generation of technology is safe for users and consumers, while pursuing innovation-friendly goals.
The interrelationship of the proposed regulations can be described as follows: while the AI regulation addresses the safety risks of the AI systems themselves, the machinery products regulation would ensure the safe integration of those systems into machines and equipment, such as robotic lawnmowers, 3D printers, and industrial production lines.
From the point of view of companies, the situation is facilitated by the fact that a single conformity assessment can cover the requirements of both regulations. The machinery products regulation also aims to reduce administrative burdens and costs for businesses by clarifying current regulations, while ensuring coherence with other EU product legislation.
Supervision and penalties
The Commission proposes that the application and implementation of the rules would be supervised by the competent surveillance authorities of each Member State, in conjunction with the European Artificial Intelligence Board, which is proposed to be established. The European AI Board is to be composed of representatives of the supervisory authorities of the Member States, the European Data Protection Supervisor, and the Commission. The AI Board would facilitate and support the practical application and standardization of the rules. The Commission has also announced plans to establish test environments to support AI innovation.
According to the proposal, the EU Member States should lay down in their national legislation effective, proportionate, and dissuasive sanctions for infringements of the proposed rules on artificial intelligence. Member States should therefore ensure that, for example, the use or placing on the market of non-compliant AI systems is subject to sanctions. The rules and sanctions would also apply to EU institutions, agencies, and bodies, on which fines would, if necessary, be imposed by the European Data Protection Supervisor.
The Commission, together with the European AI Board, intends to draw up guidelines on fines, and the AI regulation sets out certain thresholds that Member States must take into account when determining fines for infringements of the AI rules:
- Prohibited practices or non-compliance related to requirements on data: Up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year (whichever is higher)
- Non-compliance with any of the other requirements or obligations of the AI regulation: Up to EUR 20 million or 4% of the total worldwide annual turnover of the preceding financial year (whichever is higher)
- Supply of incorrect, incomplete, or misleading information to notified bodies and national authorities in reply to a request: Up to EUR 10 million or 2% of the total worldwide annual turnover of the preceding financial year (whichever is higher)
In view of the above, it is clear that the proposal for an AI regulation is grounded in risk-based and well-defined rules. In the Commission's press release concerning the proposals, Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, stated: "On artificial intelligence, trust is a must, not nice to have". Thierry Breton, Commissioner for the Internal Market, added: "AI is a means, not an end". These statements can be considered justified, bearing in mind the disadvantages and risks that failing AI solutions can, at worst, cause. The penalties for non-compliance with the rules on AI are therefore also justified.
The European Parliament and the Member States have yet to adopt the Commission’s proposals for regulations under the ordinary legislative procedure. Once adopted, the regulations will be directly applicable throughout the EU. The regulations are expected to enter into force in the second half of 2022 and will apply after a transitional period in the second half of 2024 at the earliest.
We look forward to the progress of the proposed regulations, while reminding all companies involved in artificial intelligence to review their current practices in advance. Nor should companies forget the obligations already arising from existing regulations, for example with regard to data protection, data security, and consumer protection.
As a law firm specialized in technology matters, we are happy to help you with any AI related questions.
Our Associate Trainee Jere Lehtimäki took part in writing this article.