The European Union's (EU) AI Act came into force on 1 August 2024 and is currently the most comprehensive AI legislation in the world. 

It exists for two reasons: 

  1. To ensure organisations that make AI systems respect people's wellbeing and rights 
  2. To support investment and innovation in AI with a clear legal framework 

The EU AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". 

In plainer terms: an AI system learns from data that humans provide, and the outputs it generates can influence the world around it. 

Your business might have new responsibilities 

You have to comply with the act if any of these four conditions applies: 

  • Your business is registered in the EU 
  • Your products or services are available to buy or use in the EU 
  • The AI system affects people in the EU 
  • People or organisations in the EU use what the AI system produces 

For example, suppose a finance company in the EU scans documents into a system that uses image recognition and machine learning. The system translates the images into words and identifies key words; those key words are the system's output. The finance company can use that system legally, but only if the system complies with the EU AI Act. 
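
To make the example concrete, here is a minimal sketch of such a pipeline, assuming the open-source pytesseract OCR library (with a Tesseract binary installed); the keyword list and file name are hypothetical.

```python
# Minimal sketch of the finance company's document-scanning system.
# Assumes pytesseract and Pillow are installed, plus a Tesseract binary.
from PIL import Image
import pytesseract

# Hypothetical key words a finance company might screen for.
KEYWORDS = {"invoice", "iban", "due date", "total", "vat"}

def extract_keywords(image_path: str) -> set[str]:
    """OCR a scanned document and return the key words found in it."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    # The matched key words are the system's output in EU AI Act terms.
    return {kw for kw in KEYWORDS if kw in text}

print(extract_keywords("scanned_invoice.png"))  # hypothetical input file
```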

Non-compliance will result in fines

The fine for non-compliance is up to €35 million or 7% of your business's worldwide annual turnover for the previous financial year, whichever is higher. 
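
As a quick illustration of that "whichever is higher" rule, a sketch in Python:

```python
# Fine cap under the EU AI Act: EUR 35 million or 7% of the previous
# year's worldwide annual turnover, whichever is higher.
def max_fine(annual_worldwide_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

print(max_fine(100_000_000))    # 35000000 (7% would only be EUR 7 million)
print(max_fine(1_000_000_000))  # 70000000.0 (7% of EUR 1 billion)
```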

Not everyone has to comply with the EU AI Act 

Compliance with the act is not required if you're using AI systems for: 

  • Military or defence 
  • Purely personal, non-professional activities 
  • Scientific research and development 
  • Research, testing, or development before AI systems are available to buy or use 

The four key roles in the EU AI Act and their responsibilities 

Providers build AI systems or general-purpose AI models. Google, Microsoft and Amazon Web Services (AWS) are examples of providers. They have the most responsibility.  

Their responsibilities include ensuring their product complies with the act and putting it through a conformity assessment. They must also make sure their employees are trained in the benefits and risks of building AI.

Deployers use AI systems that someone else has built. Deployers must ensure people in their company follow the AI provider's instructions for using the system. They also need to make sure their employees are trained in the benefits and risks of using AI. 

They must keep records of how people are using the AI system and monitor that it is working as intended. And they have a responsibility to inform people when an AI system affects them.  

For example, if an AI system collects employee data, the deployer must tell employees how, when, where, and why that data is used. 

Distributors and importers make AI systems available on the EU market. 

Authorised representatives act on behalf of providers who are outside the EU. 

There are four risk categories for products under the EU AI Act

Prohibited is the highest risk category. Prohibited products pose an unacceptable risk to people's safety, security, or fundamental rights. Predictive policing is one example of prohibited AI. So is social scoring, which can prevent people from accessing essential services.  

AI systems that infer emotions at work or in education are also prohibited, except for medical or safety reasons, such as detecting pilot drowsiness. 

High risk products must pass a conformity assessment and meet strict risk-management requirements. Using AI in the safety components of medical devices or essential infrastructure, like railways or sanitation, is high risk. 

Limited risk products can go on the market. Chatbots and virtual assistants carry limited risk; however, people who use them must be told that they are interacting with an AI system. 

Minimal risk products, like spam filters, can be used without extra safety or transparency measures. 

Timeline

Key dates 

1 August 2024 

The EU AI Act comes into force. 

2 February 2025 

AI systems with unacceptable risks are prohibited. 

Employee training in the benefits and risks of building or using AI must be in place.

2 August 2025 

New general-purpose AI models, like those that power OpenAI's ChatGPT, must comply with the act. 

Models that were already on the market before 2 August 2025 have another 24 months, until 2 August 2027, to comply. 

2 August 2026 

Most high risk AI systems must comply with the act. 

2 August 2027 

All remaining high risk AI systems must comply with the act. 

The EU AI Act sets a new global standard for the ethical and safe development of AI. As AI becomes more integrated into society, this legislation represents a critical step toward balancing technological advancement with public trust and protection. 

For businesses, it offers an opportunity to align with best practices in AI governance. The act not only outlines obligations but also encourages the development of AI in a manner that is sustainable, equitable, and mindful of societal impact, ensuring a future where AI serves innovation and people's rights are respected. 

Learn more about our capabilities in Artificial Intelligence here.