EU AI Act: A Comprehensive Overview and Its Implications in 2024

Up-to-date insights into the Artificial Intelligence Act of the European Union (EU AI Act) and its implications
EU AI Act
AI Ethics

Harnessing the benefits of AI while addressing potential risks


Artificial Intelligence (AI) is rapidly reshaping industries, economies, and societies. As AI systems become increasingly integrated into our daily lives, the need for effective regulatory frameworks has become more urgent. In response, the European Union (EU) has adopted the EU AI Act, a comprehensive legislative initiative designed to regulate the development, deployment, and use of AI within its member states. Recognizing the critical need for regulation in high-impact sectors, the Act specifically addresses areas such as climate change, environmental and public health, the public sector, finance, mobility, home affairs, and agriculture. These rules aim to tackle the unique challenges and risks associated with AI in these fields, ensuring that AI's benefits are maximized while potential harms are mitigated.

Author
European Commission
Proclamation Date
August 2024
Effective Date
August 1, 2024

What is the Artificial Intelligence Act in the EU?

The EU AI Act was introduced as a legislative proposal by the European Commission on April 21, 2021. It represents the world's first attempt at creating a unified regulatory framework for AI. The Act aims to ensure that AI systems are safe, transparent, and aligned with fundamental rights and EU values. By addressing the ethical and societal challenges posed by AI, the EU seeks to foster innovation while protecting citizens from potential harms.

Who Does the EU AI Act Apply To?

The EU AI Act has a broad scope and applies to a wide range of stakeholders involved in the AI ecosystem. The regulation is designed to cover various actors, including developers, providers, and users of AI systems, both within the European Union and beyond. Here are the key groups to whom the EU AI Act applies:

1. Providers of AI Systems

Providers, also known as developers or manufacturers, are those who design, develop, and bring AI systems to market. The Act imposes specific obligations on these entities, including ensuring compliance with the requirements related to safety, transparency, and accountability. Providers must also ensure that their systems undergo the necessary conformity assessments before deployment.

2. Users of AI Systems

The Act also applies to users of AI systems (termed "deployers" in the final text), which can include businesses, public authorities, and other organizations that deploy AI for various purposes. Users are required to operate AI systems according to the guidelines set forth in the Act, particularly in high-risk applications. They must ensure that these systems are used in a manner that complies with the Act's provisions, including transparency and human oversight requirements.

3. Distributors and Importers

Distributors and importers of AI systems into the EU market are also subject to the EU AI Act. They are responsible for ensuring that the AI systems they distribute or import meet the regulatory standards established by the Act. This includes verifying that the systems have undergone the appropriate conformity assessments and that all necessary documentation is provided.

4. Public Authorities

Public authorities that develop or deploy AI systems, particularly in high-risk areas such as law enforcement or border control, are also subject to the EU AI Act. They must adhere to the same standards and obligations as private sector entities, ensuring that their use of AI respects fundamental rights and operates within the bounds of the law.

5. Organizations Outside the EU

The EU AI Act has extraterritorial implications, meaning it can apply to organizations outside the EU if they place AI systems on the EU market or if their systems affect individuals within the EU. This ensures that all AI systems, regardless of origin, meet the same standards when they have an impact within the EU.

6. Third-Party Providers

Entities that provide services related to AI systems, such as maintenance, repair, or software updates, are also covered by the Act. They must ensure that their services do not compromise the compliance of the AI systems with the EU AI Act's requirements.

Key Provisions of the EU AI Act

The EU AI Act categorizes AI systems based on the level of risk they pose to individuals and society. This risk-based approach is central to the regulation and ensures that the intensity of regulatory requirements corresponds to the potential impact of the AI system. The Act outlines four main categories, made concrete in the short sketch after this list:

  • Unacceptable Risk AI Systems: These systems are prohibited entirely due to their potential to harm individuals or violate fundamental rights. Examples include social scoring by governments and AI systems that manipulate human behavior to cause harm.
  • High-Risk AI Systems: These systems are subject to stringent requirements and oversight. They include AI applications in critical sectors such as healthcare, transportation, and employment. Key requirements include risk assessment, data quality management, and human oversight.
  • Limited Risk AI Systems: These systems require specific transparency obligations, such as informing users that they are interacting with an AI system. Examples include chatbots and AI-generated content.
  • Minimal or No Risk AI Systems: These systems are largely exempt from regulatory requirements but must still adhere to general safety and transparency standards. Examples include AI-powered video games and spam filters.
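
To make these tiers concrete, here is a minimal, illustrative Python sketch that maps the example systems from the list above onto the four categories. The example names and the classify_risk helper are hypothetical shorthand of ours, not an official classification tool; real classification depends on the Act's annexes and case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements and conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely exempt from specific requirements"

# Hypothetical examples drawn from the list above; a real determination
# depends on the Act's annexes and legal analysis.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI-based hiring and employment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify_risk(system: str) -> RiskTier:
    """Look up the illustrative tier for a named example system.

    Systems matching none of the listed categories default to minimal
    risk, mirroring the Act's treatment described later in this article.
    """
    return EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)

for name in EXAMPLE_SYSTEMS:
    tier = classify_risk(name)
    print(f"{name}: {tier.name} ({tier.value})")
```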

Requirements Imposed by the EU AI Act

The EU AI Act establishes a comprehensive set of requirements to regulate the development, deployment, and use of AI systems within the European Union. These requirements are designed to ensure that AI technologies are safe, transparent, and aligned with fundamental rights and EU values. The obligations vary depending on the risk level of the AI system, with stricter requirements for high-risk applications. Below are the three key requirements imposed by the EU AI Act:

  • Prohibition of Unacceptable Risk AI Practices: Certain AI practices considered to pose an unacceptable risk are outright banned.
  • Standards for High-Risk AI Systems: Specific standards are established for the development and deployment of high-risk AI systems.
  • Rules for General-Purpose AI (GPAI) Models: The Act also sets out rules for general-purpose AI models.

AI systems that do not fit into the specified risk categories outlined in the EU AI Act are classified as 'minimal risk' and are generally exempt from the Act's specific requirements. However, these systems may still need to fulfill certain transparency obligations and adhere to other relevant laws. Examples of such minimal risk AI applications include email spam filters and chatbots for customer service, which are common and widely used today.

Prohibited AI Practices Under the EU AI Act

The EU AI Act prohibits certain AI practices that pose unacceptable risks to fundamental rights and values. Key prohibited practices include:

  • Social Scoring by Governments: Banning the use of AI for systematically evaluating individuals based on social behavior, which can lead to discrimination.
  • Exploiting Vulnerabilities: Prohibiting AI systems that exploit vulnerabilities of specific groups, such as children or the elderly, to manipulate behavior.
  • Real-Time Remote Biometric Identification: Restricting the use of real-time biometric identification in public spaces, like facial recognition, to prevent mass surveillance and protect privacy.
  • Indiscriminate Surveillance: Banning AI systems that enable widespread, undifferentiated surveillance without specific, justified purposes.

Standards for High-Risk AI Systems

The EU AI Act imposes stringent standards on high-risk AI systems to ensure they are safe and reliable. These standards include rigorous requirements for risk management, data quality, transparency, and human oversight. Providers of high-risk AI systems must implement robust risk assessment processes, use high-quality and unbiased data, and ensure that users are informed about the system's functionalities and decision-making logic. Additionally, high-risk AI systems must include mechanisms for human intervention and oversight. These measures are designed to mitigate risks and ensure that high-risk AI technologies operate within ethical and legal boundaries.
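
As a rough sketch of how an organization might track these obligation areas internally, the snippet below models them as a simple checklist. The field names are informal shorthand of ours, not terms defined by the Act, and real compliance involves far more than boolean flags.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    # One flag per requirement area named above; names are informal
    # shorthand, not legal terms from the Act.
    risk_management_process: bool = False  # robust risk assessment in place
    data_quality_controls: bool = False    # high-quality, unbiased data
    transparency_to_users: bool = False    # functionality and logic disclosed
    human_oversight: bool = False          # mechanisms for human intervention

    def outstanding(self) -> list[str]:
        """Return the requirement areas not yet addressed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskChecklist(risk_management_process=True)
print("Still to address:", checklist.outstanding())
# -> Still to address: ['data_quality_controls', 'transparency_to_users', 'human_oversight']
```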

EU AI Act Fines

The EU AI Act establishes a framework for financial penalties to enforce compliance with its regulations. These fines are designed to ensure that AI systems meet safety, transparency, and ethical standards. The severity of the fines depends on the nature and seriousness of the violation. Key aspects include:

  • High Fines for Serious Breaches: Companies that violate the Act's prohibitions on unacceptable AI practices face the heaviest penalties, which can reach up to €35 million or 7% of the company's global annual turnover, whichever is higher (the sketch after this list shows the arithmetic).
  • Penalties for Other Non-Compliance: Businesses that neglect other essential obligations, such as the standards for high-risk systems, conformity assessments, or post-market monitoring, may incur fines of up to €15 million or 3% of global annual turnover, whichever is higher.
  • Corrective Measures: In addition to financial penalties, the Act allows for corrective actions, such as suspension of AI system operations or mandatory modifications to ensure compliance. These measures are intended to address violations and prevent further risks to individuals and society.
  • Repeated Offenses: Companies with repeated violations or ongoing non-compliance may face escalating fines, reflecting the seriousness of persistent breaches and encouraging ongoing adherence to regulatory standards.
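
The "whichever is higher" rule in the first two bullets is straightforward arithmetic, as the sketch below shows. The caps and percentages come from the figures above; the turnover figure is invented for illustration.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """The ceiling is the higher of a fixed cap and a share of
    global annual turnover."""
    return max(cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical: EUR 2 billion global annual turnover

# Prohibited-practice breaches: up to EUR 35M or 7% of turnover
print(f"Serious breach ceiling: EUR {max_fine(turnover, 35_000_000, 0.07):,.0f}")
# Other non-compliance: up to EUR 15M or 3% of turnover
print(f"Other non-compliance ceiling: EUR {max_fine(turnover, 15_000_000, 0.03):,.0f}")
```

For this hypothetical company, the percentage dominates in both tiers, giving ceilings of €140 million and €60 million respectively.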

When will the EU AI Act take effect?

The EU AI Act, proposed by the European Commission in April 2021, was approved by the European Parliament on March 13, 2024, and by EU Member States on May 21, 2024. It was published in the Official Journal of the EU on July 12, 2024, and entered into force 20 days later, on August 1, 2024. The law is being implemented in phases, with different requirements coming into effect at set intervals from the entry-into-force date. The key milestones, turned into concrete dates in the sketch after this list, are as follows:

  • In six months: Prohibitions on AI practices deemed to pose an unacceptable risk take effect. This initial phase targets practices that threaten fundamental rights and public safety.
  • In 12 months: New rules apply to general-purpose AI (GPAI) models. Providers of GPAI models already on the market have until 36 months after the Act's entry into force to comply.
  • In 24 months: The standards for high-risk AI systems will come into effect. This transition period focuses on ensuring that AI systems classified as high-risk meet rigorous safety and transparency requirements.
  • In 36 months: Regulations will extend to AI systems that function as products or safety components regulated under specific EU laws, ensuring that these systems also comply with the new legal standards.
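
To turn these intervals into calendar dates, the sketch below adds each phase's offset to the entry-into-force date of August 1, 2024. Note that the Act's own text pins each milestone to the second of the month (for example, February 2, 2025 for the prohibitions), one day later than this simplified arithmetic lands, so treat the output as an approximation.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # 20 days after the July 12, 2024 publication

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here because the
    start date falls on the 1st of a month)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Phase offsets in months, per the list above. The Act's official
# milestones fall one day later (on the 2nd of each month).
PHASES = {
    "Prohibitions on unacceptable practices": 6,
    "General-purpose AI (GPAI) rules": 12,
    "High-risk AI system standards": 24,
    "AI in products under other EU laws": 36,
}

for phase, months in PHASES.items():
    print(f"{phase}: ~{add_months(ENTRY_INTO_FORCE, months)}")
```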

Future Outlook and Conclusion

The EU AI Act represents a significant step towards comprehensive AI regulation. Its risk-based approach, focus on fundamental rights, and emphasis on transparency and accountability set a high standard for AI governance. While the Act presents challenges for businesses and regulators, it also offers opportunities for responsible AI innovation and the development of ethical AI practices.

The European Union is dedicated to maintaining its leadership in technology while ensuring that AI advancements align with its core values and principles. The Act underscores the necessity for AI systems to operate within the EU's value system, fostering transparency and accountability while upholding fundamental rights.

As implementation progresses, stakeholders have largely agreed on the need for these regulatory measures. Nonetheless, there are concerns about potential overregulation and conflicting obligations. The regulations aim to strike a balance that encourages innovation while protecting individuals and society from potential AI risks.

For organizations navigating this new regulatory landscape, platforms like Oxethica can play a crucial role. As an AI governance platform, Oxethica offers support in ensuring compliance with the EU AI Act’s requirements. By providing tools for risk assessment, transparency, and accountability, Oxethica helps organizations align their AI practices with the Act’s standards, facilitating a smoother transition to the new regulatory environment.
