AI News
Last update on March 12, 2024

EU Reaches Agreement on the AI Act – First-Ever AI Regulation Law

5 min read

Update – March 12th, 2024 – The European Parliament approved the European Union Artificial Intelligence Act (EU AI Act) by an overwhelming majority of 523 votes in favor, 46 against, and 49 abstentions, making Europe the first region in the world with a formal, comprehensive legal framework for AI technology.
The law still has to complete a few formal steps, including a final legal-linguistic check and endorsement by the Council. It is expected to be adopted around May 2024, before the end of the legislature. It will enter into force 20 days after formal publication in the Official Journal, and its provisions will roll out gradually from 2025 onwards.
For details on the main points of the new regulation of the AI Act, read the article below.

December 2023:

The EU moves forward with its legal framework for AI development and usage, announcing that negotiators from the European Parliament and Council have reached an agreement on the terms of the AI Act.

This bill integrates comprehensive rules to ensure the safe and responsible use of AI in Europe, balancing innovation with protecting fundamental rights and democracy.

If approved and implemented, the AI Act will be the first law of its kind in the world. As it stands today, it is one of several initiatives seeking to legally contain and regulate the novel and rapidly growing field of AI.

The AI Act Could Become Effective in 2025

On Friday, the European Union issued a press release announcing that legislators from the European Parliament and the Council of the EU had finally concluded the lengthy negotiations on the terms of the Artificial Intelligence Act (AI Act).

This bill was first proposed and drafted in April 2021, way before the first generative AI tools reached the mass market. 

The AI Act seeks to regulate the most critical aspects of AI technology and the applications built upon it: protecting fundamental rights and the rule of law, ensuring AI is ethically developed and safely used across Europe, and making the region an environment where AI companies and products can thrive.

Now that the final draft of the Act has been agreed on, the next step is for the Parliament and Council to vote on it. If approved in the upcoming stages of the process, the law could become effective no sooner than 2025.

A Comprehensive Law to Make AI Tech Rights-Compliant and User-Safe

The AI Act is a wide-ranging piece of legislation that regulates the most pressing aspects of AI development and end-use. Its terms can be broken down into the following major areas:

Safeguards for General AI Systems

General-purpose AI (GPAI) systems (including applications such as image and audio recognition and generation, pattern detection, question answering, and translation) must adhere to transparency requirements such as technical documentation, compliance with EU copyright law, and detailed summaries of training content.

These latter two points pose an interesting question regarding the future of today's most popular generative AI tools, which have been (allegedly) trained with unauthorized content scraped from the web. Some of these models have still not disclosed the origin and content of their training datasets. 

Additionally, stringent obligations will apply to high-impact GPAI models posing systemic risk. These will also have to undergo model evaluations, systemic risk mitigation, adversarial testing, and incident reporting, and meet cybersecurity and energy-efficiency requirements.

Measures to Support Innovation and Small-to-Midsize Enterprises

In an effort to ensure businesses of all sizes have the opportunity to develop AI solutions in Europe, the AI Act promotes regulatory sandboxes and real-world testing by national authorities. 

These measures aim to facilitate AI development by businesses, particularly SMEs that might otherwise be pushed out of the market by larger corporations.

Obligations for High-Risk AI Systems

The AI Act classifies AI systems as high-risk when they pose “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.” These models, and applications built upon them, must undergo a mandatory fundamental rights impact assessment, among other requirements. Citizens will also have the right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Examples of high-risk AI models include applications within the insurance and banking sectors and software designed to influence political elections and voter behavior.

Banned AI Applications

Due to the potential threat they pose to citizens’ rights and democracy, some AI applications are banned altogether.

Some of the most relevant are: 

  • Biometric categorization systems using sensitive characteristics (such as political or religious beliefs)
  • Scraping facial images from the web or CCTV footage for the creation of facial recognition databases
  • Emotion recognition in workplaces and educational institutions
  • Social scoring based on behavior or individual traits
  • AI systems manipulating human behavior
  • AI systems exploiting people’s vulnerability

It’s worth noting that the AI Act includes exemptions allowing law enforcement to use biometric identification AI systems, subject to prior judicial authorization, in narrowly defined cases such as targeted searches for victims, prevention of terrorist threats, and identification of suspects of serious crimes.

Sanctions and Entry into Force

Last but not least, the bill stipulates that non-compliance may result in fines scaled to the infringement and company size: from 35 million euros or 7% of global annual turnover (whichever is higher) for the most serious violations, down to 7.5 million euros or 1.5% of turnover for lesser infringements.
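The fine structure described above boils down to simple arithmetic. The sketch below is a minimal illustration, assuming the cap is the higher of the fixed amount and the turnover-based percentage, as the Act's wording for the most serious violations suggests; the function name and turnover figures are hypothetical:

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Upper bound of the fine: the fixed cap or the turnover-based
    amount, whichever is higher."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

# Most severe tier (e.g. banned applications): EUR 35M or 7% of turnover.
# For a company with EUR 1B in global turnover, 7% (EUR 70M) exceeds
# the EUR 35M floor:
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # -> 70000000.0

# A smaller company (EUR 100M turnover) would face the fixed floor instead:
print(max_fine_eur(100_000_000, 35_000_000, 0.07))    # -> 35000000.0
```

Note that the final regulation applies softer caps for SMEs; this sketch only models the headline "whichever is higher" rule.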

First Extensive AI Regulation Law in the World, but Likely Not the Only One

With its final draft agreed and ratification upcoming, the AI Act becomes the first law of its type to comprehensively frame the development and use of AI technology.

However, it’s far from the only initiative in the world. The US, the UK, and China are among the main countries (and markets) working on creating guidelines and laws to police the AI industry. 

This is a much-needed measure, as AI, and generative AI in particular, keeps evolving at a dizzying pace while a proper legal framework for its activity is still missing.

We are now one step closer. 


Ivanna Attie

All About Ivanna

I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.

