The European Union has approved the first comprehensive AI regulations in the world, sweeping guardrails meant to put real-world limits on the development of the technology.
The straightforwardly named AI Act was passed by the bloc’s parliament today, a move that could set the tone for similar rules in other parts of the world. EU countries will formally vote on the new rules in May, and they could come into effect in 2026, with some provisions kicking in earlier, as Reuters reports. Lawyers still have to pore over the exact text and its translations, but that’s unlikely to stand in the way.
“The AI act is not the end of the journey but the starting point for new governance built around technology,” said Dragos Tudorache, a member of the European Parliament and one of the act’s lead authors, in a statement.
“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated,” analyst Enza Iannopollo told the BBC. “The EU AI Act is the world’s first and only set of binding requirements to mitigate AI risks.”
So what does the AI Act entail exactly?
The Act brings the use of generative AI models in line with specific EU copyright laws and transparency obligations. For instance, companies will have to clearly disclose if a given piece of content was generated by an AI.
And AI developers will have to provide a detailed summary of the data — spanning text, pictures, and video — that they’ve scraped to train an AI model, bringing it in line with existing copyright law.
That’s a particularly noteworthy new rule, considering companies in the US have been scraping huge amounts of likely copyright-infringing data from the internet to train their AIs. This trend has already led to several copyright lawsuits in the US.
Under the AI Act, regulators will determine the level of risk posed by a given AI model. Broadly speaking, the higher the perceived risk of a given AI, the stricter the rules.
AI systems deemed an “unacceptable risk” will be banned outright. That category includes systems that manipulate cognitive behavior, classify people based on behavior or socio-economic status, or perform certain forms of biometric identification.
Other “high risk” models — including those used to operate critical infrastructure, manage employment, or assist law enforcement — will have to be registered in an EU database.
Some AI companies are wary of the new rules, arguing that they could end up limiting innovation in the EU.
“It is critical we don’t lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here,” Meta’s head of EU affairs Marco Pancini told Reuters.
Other companies welcomed the rules.
“We are committed to collaborating with the EU and industry to support the safe, secure, and responsible development of AI technology,” an Amazon spokesperson told the agency.
The EU isn’t the first to attempt to implement AI rules — but it’s signaling a willingness to go a lot further than other governments have gone so far.
Last year, US President Joe Biden signed an executive order to address AI risks, but critics have pointed out that carrying out the order’s directives will be extremely difficult.
And China’s President Xi Jinping and the Chinese Communist Party have also called for greater state control over the tech in light of risks to data security.
Before the AI Act can go fully into effect, lawmakers still have work ahead of them. The EU is planning on setting up an AI Office, which is designed to be an independent body within the European Commission.
“The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering,” Tudorache said. “But making them all work in harmony with the desired effect and turning Europe into the digital powerhouse of the future will be the test of our lifetime.”
More on AI regulation: Joe Biden’s Executive Order on AI Is Expansive, But Vague