Grasping Compliance Demands and Responsibilities Under the AI Act


# The European Artificial Intelligence Act: An In-Depth Guide on Compliance and Important Dates

As artificial intelligence (AI) continues to advance, its widespread adoption has sparked essential debates about ethics, safety, and accountability. Policymakers worldwide are working out how to regulate AI systems, balancing the need for innovation against the protection of individuals’ fundamental rights. On **August 1, 2024**, the **European Artificial Intelligence Act (AI Act)** entered into force, establishing a pivotal framework for AI regulation not just within Europe but worldwide. Although most of the Act’s provisions will not apply until **August 2, 2026**, key elements take effect as early as **February 2, 2025**, including bans on AI systems posing “unacceptable risks.”

This article serves as a thorough resource to clarify who is required to comply with the AI Act, the enforcement timeline for various components of the legislation, and the essential obligations for organizations to achieve compliance.

## **Who Is Required to Comply With The EU AI Act?**

Much like the **General Data Protection Regulation (GDPR)**, the AI Act has extraterritorial effect. Organizations and individuals worldwide must comply with its rules if they place on the market, put into service, or use AI systems within the EU, regardless of whether those systems were developed or operated outside the Union.

This extensive reach highlights the EU’s dedication to protecting the rights of its citizens, ensuring that global borders do not weaken the ethical application of AI. It obligates AI providers worldwide to conform to EU standards to sustain their activities in the European market.

## **What Are The Primary Obligations?**

The responsibilities under the AI Act hinge upon two critical aspects:
1. The **risk classification** of the AI system.
2. The organization’s **position in the supply chain**, whether as a provider, deployer, or in other related capacities.

### **Classification of AI Systems by Risk**

The AI Act classifies AI systems into three primary risk categories:

1. **Prohibited Systems**:
These are AI systems that pose an “unacceptable” risk to **health, safety, or fundamental human rights** and are banned outright under the AI Act. This category includes AI systems that engage in social scoring or manipulate human behavior in harmful ways.

2. **High-Risk Systems**:
AI systems within this classification can substantially affect individuals’ **safety, well-being, or rights**, yet they are permissible under strict regulations. High-risk systems are generally utilized in sectors such as **healthcare, employment, and law enforcement**. Such systems must adhere to rigorous regulatory standards, encompassing extensive compliance and transparency obligations.

3. **Low-Risk Systems**:
These systems present minimal risks, thus facing fewer compliance demands, allowing for more flexibility in innovation and deployment by companies.

For the list of Union harmonisation legislation that feeds into high-risk classification, consult [Annex I of the AI Act](https://artificialintelligenceact.eu/annex/1/).
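The three tiers above can be sketched as a rough triage routine. This is an illustrative simplification, not a legal assessment: the function name, keyword sets, and string-matching logic below are hypothetical, and a real classification must be made against the Act’s actual provisions and annexes.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., social scoring)
    HIGH = "high"              # permitted only under strict obligations
    LOW = "low"                # minimal compliance requirements


# Hypothetical keyword triggers for illustration only -- a real
# assessment requires legal review, not string matching.
PROHIBITED_USES = {"social scoring", "harmful behavioral manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "law enforcement"}


def classify(use_case: str, domain: str) -> RiskTier:
    """Rough triage of an AI use case into the article's three tiers."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LOW


print(classify("social scoring", "public sector").value)  # prohibited
print(classify("resume screening", "employment").value)   # high
```

The point of the sketch is only the ordering: prohibited uses are checked first because they are banned regardless of sector, and high-risk status then follows from the deployment domain.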

## **Identifying Your Organization’s Role**

According to the AI Act, organizations are identified within one of six distinct roles, each carrying specific responsibilities:

### **1. Provider**
A Provider is an individual or organization that **designs and creates an AI system** and offers it for use within the EU. Providers bear the greatest responsibility, ensuring their systems comply with all regulatory requirements, including technical documentation, transparency, and monitoring.

### **2. Deployer**
Deployers are individuals or organizations that utilize AI systems developed by Providers. Their responsibilities are limited when they use the AI in its original form. However, if they make significant alterations to an AI system or rebrand it, they effectively take on the role of Providers and must adhere to all corresponding obligations.
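The deployer-becomes-provider rule described above can be expressed as a small decision function. The function name and boolean flags are hypothetical, a sketch of the logic rather than the Act’s legal test:

```python
def effective_role(declared_role: str,
                   significantly_altered: bool = False,
                   rebranded: bool = False) -> str:
    """Return the role whose obligations apply (simplified sketch).

    A deployer that significantly alters or rebrands an AI system
    takes on a provider's full set of obligations.
    """
    if declared_role == "deployer" and (significantly_altered or rebranded):
        return "provider"
    return declared_role


print(effective_role("deployer"))                  # deployer
print(effective_role("deployer", rebranded=True))  # provider
```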

### **3. Distributor**
Distributors make an AI system available on the EU market without developing, importing, or rebranding it themselves. Their responsibilities primarily involve verifying that the system complies with all mandated standards before distributing it.

### **4. Importer**
Importers are individuals or entities established within the EU that place on the European market an AI system bearing the name or trademark of a company established outside the EU.

### **5. Product Manufacturer**
These manufacturers incorporate an AI system into a larger product and market it under their own brand within the EU, taking on associated compliance responsibilities.

### **6. Authorized Representative**
An Authorized Representative is an individual or entity situated within the EU, designated to fulfill the responsibilities of a Provider located outside the EU. They serve as a liaison between non-EU Providers and European regulators.

## **Compliance Obligations for Providers and Deployers**

While the majority of the AI Act’s requirements center on Providers, Deployers also carry obligations, especially when they alter or tailor AI systems. Below are some relevant compliance responsibilities for both Providers and Deployers:

– **AI Literacy**: Organizations must take measures to ensure a sufficient level of AI literacy among staff who operate AI systems, taking into account their specific roles and the context in which the systems are used.