The EU AI Act: Key Facts
On 9 December 2023, the European Parliament announced that a provisional agreement had been reached on the AI Act, a law that sets obligations for AI according to its potential risks and level of impact and is expected to come into effect in the coming months. The announcement followed close to two years of legislative process and extensive media coverage. So what was the big news in the recently announced deal?
Banned applications
Recognising the potential threat to rights and freedoms posed by certain applications of AI, the law is likely to prohibit:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people's free will;
- AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation.
High-risk AI to require a rights impact assessment
For AI systems classified as high-risk (due to their significant potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. These include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.
Background to the EU AI Act
The AI Act recognises that not all AI systems are equally risky or beneficial. It therefore introduces a risk-based approach, under which AI systems are classified into four categories: unacceptable, high, limited, and minimal risk (a code sketch of this tiering follows the list below).
- **Unacceptable risk** AI systems are those that pose a clear threat to the safety, livelihood, or rights of people, such as social scoring systems, subliminal manipulation, or indiscriminate surveillance. These AI systems are banned in the EU.
- **High risk** AI systems are those that have a significant impact on people's lives or society, such as health care, education, law enforcement, or public services. These AI systems have to comply with strict rules, such as ensuring data quality, transparency, human oversight, and accountability.
- **Limited risk** AI systems are those that involve some interaction with people, such as chatbots, biometric recognition, or emotion detection. These AI systems have to inform users that they are using AI, and allow them to opt out if they wish.
- **Minimal risk** AI systems are those that have little or no impact on people or society, such as spam filters, video games, or recommender systems. These AI systems are mostly free from regulation, but have to follow some general principles, such as fairness, safety, and human dignity.
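To make the tiering concrete, here is a minimal Python sketch that encodes the four categories as a lookup table. It is purely illustrative: the tier names come from the summary above, but the example mapping and the `tier_for` helper are our own hypothetical constructs, and classifying a real system requires legal analysis, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the AI Act (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # banned outright in the EU
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases (drawn from the summaries
# above) to tiers -- a real classification needs legal assessment.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "exam proctoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up an example tier; unknown systems need a proper assessment."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"No example tier recorded for {use_case!r}")

if __name__ == "__main__":
    for case in ("social scoring", "customer service chatbot"):
        print(f"{case}: {tier_for(case).value} risk")
```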
The AI Act imposes different obligations and incentives for AI providers and users, depending on the risk category of the AI system.
- AI providers are the entities that develop or supply AI systems. They have to ensure that their AI systems comply with the relevant rules and standards, and that they have adequate quality management and risk assessment systems in place. They also have to register their high-risk AI systems in a European database and cooperate with the authorities in case of any incidents or complaints.
- AI users are the entities that use AI systems for their own purposes or on behalf of others. They have to follow the instructions and requirements of the AI providers and ensure that the AI systems are used in a lawful and ethical manner. They also have to monitor the performance and behaviour of the AI systems and report any problems or malfunctions to the AI providers or the authorities.
The AI Act also creates incentives for AI providers and users to adopt best practices and standards, such as certification schemes, codes of conduct, or sandboxes. These incentives aim to foster trust, innovation, and excellence in the EU AI ecosystem.
Once the Act is finally passed, there will likely be a lag between when the regulations come into force and when organisations are expected to be in compliance.
That makes now a good time for organisations to start reviewing their AI systems, processes, documentation, policies, and culture to accommodate the requirements of the new law and remain competitive.
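As a hypothetical starting point for such a review, the sketch below models a single entry in an internal AI-system inventory. Every name and field here is our own illustration, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a hypothetical internal AI-system inventory."""
    name: str
    owner: str             # accountable team or person
    purpose: str           # what the system is used for
    provisional_tier: str  # e.g. "limited" -- pending legal review
    documentation: list[str] = field(default_factory=list)  # links to model cards, assessments, etc.
    open_actions: list[str] = field(default_factory=list)   # gaps found during review

# Example usage: record a system and note a compliance gap to close.
chatbot = AISystemRecord(
    name="support-chatbot",
    owner="Customer Service",
    purpose="Answers routine customer queries",
    provisional_tier="limited",
)
chatbot.open_actions.append("Add an 'interacting with AI' disclosure to the UI")
print(chatbot)
```

Starting with a plain inventory like this makes gaps visible early, so disclosures, documentation, and oversight processes can be built up gradually rather than all at once.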
Waltzer encourages clients to prioritise planning for compliance even though it may not be strictly required for a couple more years. Our experience shows that governance changes and new processes can take years to establish properly.
Rather than scrambling later, invest a little time and money now in a gradual uplift over the next year or two.