The European Union (EU) is leading the race to regulate artificial intelligence (AI). After three days of negotiations, the Council of the EU and the European Parliament reached a provisional agreement earlier today on what will become the world’s first comprehensive regulation of AI.
Carme Artigas, Spain’s Secretary of State for Digitalization and AI, called the agreement a “historic achievement” in a press release. Artigas said the rules struck an “extremely delicate balance” between encouraging safe and reliable AI innovation and adoption across the EU and protecting the “fundamental rights” of citizens.
The draft law, the Artificial Intelligence Act, was first proposed by the European Commission in April 2021. Parliament and EU member states will vote to approve the bill next year, but the rules will not come into force until 2025.
A risk-based approach to regulating AI
The AI law takes a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply. To achieve this, the regulation will classify AI systems in order to identify those that pose a ‘high risk’.
AI systems deemed non-threatening and low-risk will be subject to “very light transparency obligations.” For example, such systems will need to disclose that their content is AI-generated so that users can make informed decisions.
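The Act does not prescribe a technical format for these disclosures. As a purely illustrative sketch (the pipeline, field names, and model identifier below are hypothetical, not drawn from the regulation), a developer might attach a machine-readable provenance label to each generated output:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """A generated artifact paired with a machine-readable provenance label."""
    content: str
    ai_generated: bool
    model_id: str
    generated_at: str

def label_output(content: str, model_id: str) -> LabeledOutput:
    # Hypothetical helper: every output leaving the system carries an
    # explicit AI-generated marker that the UI can surface to users.
    return LabeledOutput(
        content=content,
        ai_generated=True,
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

output = label_output("Draft reply text...", model_id="example-chat-model-v1")
print(output.ai_generated)  # True: the disclosure is available to display
```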
For high-risk AI systems, the legislation will add a number of obligations and requirements, including:
Human oversight: The law requires a human-centered approach, emphasizing clear and effective mechanisms for human oversight of high-risk AI systems. In practice, this means that qualified people must actively monitor the system’s operation. Their role includes ensuring that the system works as intended, identifying and addressing potential harm or unintended consequences, and ultimately taking responsibility for its decisions and actions (the first sketch after this list illustrates one possible oversight checkpoint).
Transparency and explainability: Elucidating the inner workings of high-risk AI systems is critical to building trust and ensuring accountability. Developers must provide clear and accessible information about how their systems make decisions, including details about the underlying algorithms, training data, and potential biases that may affect a system’s output (the model-card sketch below shows one common way to publish such details).
Data management: The AI Act emphasizes responsible data practices, with the aim of preventing discrimination, bias, and privacy violations. Developers must ensure that the data used to train and operate high-risk AI systems is accurate, complete, and representative. Data minimization is a core principle: a system should collect only the information necessary for its operation, reducing the risk of misuse or breaches. Furthermore, individuals should have clear rights to access, rectify, and delete their data used in AI systems, allowing them to exercise control over their information and ensure its ethical use (the data-store sketch below illustrates these operations).
Risk management: Proactive identification and mitigation of risks will become a key requirement for high-risk AI systems. Developers must implement robust risk management frameworks that systematically assess potential harms, vulnerabilities, and unintended consequences of their systems (the oversight sketch below also logs every decision so that risks can be reviewed after the fact).
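The Act frames these obligations in terms of outcomes rather than implementations. As a rough, hypothetical sketch of the human-oversight and risk-logging ideas described above (the class names, threshold, and risk scores are all illustrative, not requirements from the regulation), a deployment might route risky automated decisions through a gate like this:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    recommendation: str
    risk_score: float  # the system's own risk estimate, 0.0 (low) to 1.0 (high)

@dataclass
class OversightGate:
    """Holds back risky automated decisions until a human has reviewed them."""
    review_threshold: float = 0.5  # illustrative threshold, not from the Act
    audit_log: list = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        # Every decision is logged so risks can be assessed after the fact.
        self.audit_log.append(decision)
        if decision.risk_score >= self.review_threshold:
            # A person reviews the case and takes responsibility for the outcome.
            return "queued_for_human_review"
        return "auto_approved"

gate = OversightGate()
print(gate.route(Decision("applicant-42", "deny_loan", risk_score=0.8)))
# -> queued_for_human_review
```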
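The transparency requirement could likewise be met in part by publishing structured documentation alongside a system. The “model card” pattern sketched below is a common industry practice rather than a format mandated by the Act; every field here is illustrative:

```python
import json

# Hypothetical structured disclosure for a high-risk system, loosely following
# the widely used "model card" documentation pattern (not an official format).
model_card = {
    "system_name": "example-credit-scoring-model",
    "intended_use": "Assist loan officers; not for fully automated denials",
    "algorithm": "Gradient-boosted decision trees",
    "training_data": "Anonymized loan applications, 2015-2022 (illustrative)",
    "known_limitations": [
        "Lower accuracy for applicants with thin credit files",
        "Possible bias inherited from historical lending decisions",
    ],
}

print(json.dumps(model_card, indent=2))  # published alongside the system
```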
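Finally, the data-management principles map naturally onto concrete access, rectification, and deletion operations. A minimal sketch assuming an in-memory store (a real system would need authentication, durable storage, and audit trails):

```python
class SubjectDataStore:
    """Toy store illustrating data-subject access, rectification, and deletion."""

    # Data minimization: only fields the system actually needs are accepted.
    ALLOWED_FIELDS = {"name", "income", "loan_amount"}

    def __init__(self):
        self._records: dict[str, dict] = {}

    def collect(self, subject_id: str, data: dict) -> None:
        # Dropping unneeded fields at intake enforces minimization by design.
        self._records[subject_id] = {
            k: v for k, v in data.items() if k in self.ALLOWED_FIELDS
        }

    def access(self, subject_id: str) -> dict:
        return dict(self._records.get(subject_id, {}))

    def rectify(self, subject_id: str, field: str, value) -> None:
        if field in self.ALLOWED_FIELDS and subject_id in self._records:
            self._records[subject_id][field] = value

    def delete(self, subject_id: str) -> None:
        self._records.pop(subject_id, None)

store = SubjectDataStore()
store.collect("user-1", {"name": "Ada", "income": 50000, "browsing_history": ["..."]})
print(store.access("user-1"))  # browsing_history was never stored
```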