
A Legal Deep Dive into the EU's Artificial Intelligence Act of 2024

The European Union's (EU) landmark Artificial Intelligence Act (AI Act) of 2024 ushers in a new era of regulation for this rapidly evolving technology. Published in the Official Journal on July 12, 2024, the Act establishes a comprehensive legal framework for AI systems across all 27 member states. While it entered into force on August 1, 2024, most provisions apply in stages over the following six to 36 months.

Prohibited AI Practices

The AI Act takes a strong stance against certain AI practices deemed unethical or harmful. Article 5 outlines clear prohibitions on:

  • Subliminal Techniques [Article 5(1)(a)]
    Forget manipulative social media ads. The Act bans AI designed to influence users in ways they cannot perceive or control, protecting individuals from being covertly nudged in particular directions. The Cambridge Analytica scandal, in which covert psychographic profiling was allegedly used to sway voter behaviour, highlighted the risks of such techniques.
  • Deceptive Techniques [Article 5(1)(a)]
    AI-generated content that pretends to be something it's not? Not allowed. AI that deploys purposefully manipulative or deceptive techniques is prohibited, including AI-generated content that misrepresents its nature or origin, ensuring users are not tricked or misinformed. Deepfakes, such as the high-profile deepfake video of a Belgian political figure, have demonstrated the potential for harm and deception.
  • Social Scoring [Article 5(1)(c)]
    Imagine a world where your every move is tracked and used to assign you a social score that determines your access to opportunities. The EU isn't a fan. Blanket social scoring systems that evaluate people's behaviour and assign scores affecting their access to services or opportunities are forbidden, whether operated by public authorities or private actors, as they pose a significant threat to privacy and individual freedoms. The controversy surrounding China's Social Credit System underscored the potential dangers of such systems and influenced the EU's stance.

[Article 5(2)] Exceptions exist for law enforcement using real-time biometric identification in specific scenarios. However, these exceptions are narrowly tailored, reflecting the EU's commitment to protecting privacy and fundamental rights.

Risk-Based Approach (Categorizing AI Systems)

The Act classifies AI systems into four risk categories, ensuring that regulatory measures are proportionate to the potential harm these systems may cause.

  • Unacceptable Risk [Prohibited under Article 5]
    Systems posing a severe threat to safety, livelihoods, and fundamental rights are banned outright. Examples include AI that exploits vulnerabilities in individuals and emotion-recognition systems used in workplaces and educational institutions.
  • High-Risk [Article 6 and Annex III]
    These systems require strict compliance measures, including human oversight, risk management plans, and extensive data governance. High-risk systems include AI used in critical infrastructure, healthcare, and biometric identification technologies. The Uber self-driving car incident, in which an autonomous vehicle struck and killed a pedestrian, highlighted the need for stringent oversight of high-risk AI applications.
  • Limited Risk [Article 50]
    AI systems in this category are subject to transparency obligations. For instance, chatbots must disclose that users are interacting with an AI to avoid misleading interactions. The case of Microsoft's chatbot Tay, which was manipulated into producing offensive content, demonstrated the need for transparency and oversight in limited-risk AI systems.
  • Minimal Risk
    These systems face no specific obligations under the Act. Examples include AI used for spam filtering or basic data analytics, where the potential for harm is relatively low. This risk-based approach ensures proportionate regulation, focusing scrutiny on systems with the greatest potential for harm, and it lends itself to a simple tiering scheme, as sketched below.
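
For teams building an internal compliance inventory, this tiering can be captured in code. The following Python sketch is illustrative only: the example systems and their tier assignments are assumptions for demonstration, and real classification turns on legal analysis under Article 5, Article 6, and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, ordered from most to least regulated."""
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "risk management, data governance, human oversight (Article 6, Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical inventory entries -- the assignments below are illustrative,
# not legal determinations.
inventory = {
    "government social-scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```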

Transparency and Explainability

The AI Act prioritizes user rights, ensuring individuals can understand how AI systems impact them. Key provisions include:

  • Right to Explanation [Article 86]
    Individuals affected by decisions based on the output of high-risk AI systems, such as loan denials, have the right to request a clear and meaningful explanation. This provision ensures that users can understand the rationale behind AI-assisted decisions, promoting accountability and trust. The GDPR's "right to explanation" had already set a precedent for this type of regulation, reinforcing the importance of transparency.
  • Information Disclosure [Articles 13 and 50]
    How does that AI system you interact with work? What data does it collect? Providers must disclose how their AI systems operate and what data they require, and users must be told when they are interacting with an AI system. This transparency helps users make informed decisions about engaging with AI systems and protects them from undisclosed data usage.
  • Biometric Identification Oversight [Article 5 and Annex III]
    Facial recognition is powerful, but it can also be misused. Strict safeguards govern the use of biometric identification systems, including clearly defined purposes and, for law enforcement, narrow authorisation requirements. These measures are designed to prevent abuse and ensure that biometric data is handled responsibly. The European Court of Human Rights' decision in S. and Marper v. the United Kingdom emphasized the importance of stringent safeguards for biometric data.

These measures aim to demystify AI and ensure users are not subject to opaque algorithmic decision-making.
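
What counts as a sufficient explanation will be shaped by guidance and case law, but deployers can prepare by logging each AI-assisted decision in a human-readable form. A minimal Python sketch, in which the record structure, field names, and reason codes are hypothetical assumptions rather than requirements of the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit record kept so that a human reviewer can
    later explain an AI-assisted decision on request (cf. Article 86)."""
    subject_id: str
    decision: str                 # e.g. "loan_denied"
    model_version: str
    top_factors: list[str]        # human-readable reason codes
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Record enough context at decision time to answer a request later.
record = DecisionRecord(
    subject_id="applicant-4821",
    decision="loan_denied",
    model_version="credit-scorer-v3.2",
    top_factors=["debt-to-income ratio above threshold",
                 "short credit history"],
)
print(record)
```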

Challenges and the Road Ahead

The AI Act represents a significant step towards responsible AI development. However, challenges remain:

  • Harmonization Across Member States
    Ensuring consistent application across the EU's diverse legal landscape will be crucial. Different legal traditions and enforcement mechanisms may pose challenges to uniform implementation. National regulators will need to closely cooperate and share best practices to ensure a level playing field within the EU.
  • Balancing Innovation and Regulation
    Striking the right balance between fostering innovation and safeguarding public interest is an ongoing process. Over-regulation could stifle technological advancement, while under-regulation might expose individuals to harm. The EU will need to monitor the Act's impact and adapt regulations as necessary to maintain a supportive environment for responsible AI development.
  • Global Alignment
    The EU's approach may prompt discussions on establishing a global framework for AI governance. International cooperation and alignment will be essential to address cross-border AI applications and ensure cohesive regulatory standards. The EU's leadership in this area could influence the development of global AI regulations that promote innovation and protect fundamental rights.

Timeline for the Application of the Act

  1. Entry into Force: August 1, 2024
  2. General Provisions and Prohibited Practices: February 2, 2025 (6 months).
    Note: this applies to Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) of the Act.
  3. Risk Category Specific Timelines:
    High-Risk AI: 24 months from entry into force, i.e., August 2, 2026. Providers of high-risk AI systems will need to comply with the Act's requirements by this date.
    Limited Risk AI: the transparency obligations for limited-risk systems apply from the general application date of August 2, 2026.
    Minimal Risk AI: faces no specific obligations under the Act, so no compliance deadline applies to this category.

Exceptions: Providers of high-risk AI systems that are already regulated by other EU legislation (e.g., medical devices, machinery) have an extended period of 36 months for compliance, meaning they must meet the AI Act's requirements by August 2, 2027.
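
Because every milestone is an offset from the entry-into-force date, the deadlines are easy to compute. A small Python sketch, assuming the day-after-the-month-anniversary pattern that the published application dates follow:

```python
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on this date

def application_date(months: int) -> date:
    """Date a provision starts to apply, `months` months after entry into
    force; the Act's dates fall on the day after the month anniversary
    (e.g. 6 months -> February 2, 2025)."""
    total = ENTRY_INTO_FORCE.month - 1 + months
    year = ENTRY_INTO_FORCE.year + total // 12
    month = total % 12 + 1
    return date(year, month, ENTRY_INTO_FORCE.day) + timedelta(days=1)

milestones = {
    "Prohibited practices (Chapters I-II)": 6,
    "High-risk AI (general rule)": 24,
    "High-risk AI already covered by other EU product law": 36,
}

for provision, offset in milestones.items():
    print(f"{provision}: applies from {application_date(offset)}")
```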

Wrapping Up

The EU's AI Act is a pioneering piece of legislation with far-reaching implications. Legal professionals and businesses alike must closely monitor the Act's implementation and adapt their practices accordingly. The coming years will be crucial in determining the Act's effectiveness in fostering trustworthy and human-centric AI development. As its provisions take effect, it will be essential to observe how these regulations shape the AI landscape both within the EU and globally.


Veda Dalvi
