European Union lawmakers gave final approval to the 27-nation group’s artificial intelligence law Wednesday. The rules are expected to take effect later this year.
Lawmakers in the European Parliament voted in favor of the Artificial Intelligence Act, five years after regulations were first proposed.
Major technology companies have generally supported the idea. But they want to make sure any new AI requirements work in their favor. OpenAI chief Sam Altman suggested the maker of ChatGPT might pull out of Europe if it could not comply with the AI Act. He later said his company had no plans to leave.
Here are some details about Europe’s new AI rules:
How does the AI Act work?
Like many EU regulations, the AI Act started as consumer safety legislation. The EU took a “risk-based approach” to products or services that use artificial intelligence (AI).
The riskier an AI application is, the more rules apply to it. Most AI systems are expected to be low risk, like content recommendation systems or spam filters that block unwanted email. Companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI include tools used in medical devices or important infrastructure like water or electrical networks. Those face additional requirements like using what the legislation calls high-quality data and providing clear information to users.
Some AI uses are banned because they are considered to present an unacceptable risk. Those include social scoring systems that are meant to govern how people behave. Some kinds of predictive policing are banned, as is the use of emotion recognition systems in schools and workplaces.
Another banned use is police scanning of faces in public places with AI-powered remote “biometric identification” systems. There is an exception for serious crimes like kidnapping or terrorism.
What about generative AI?
The law’s early versions centered on AI systems that carry out limited tasks, like reviewing employment information and job applications. But the rise of general purpose AI models, such as the one behind OpenAI’s ChatGPT, forced EU officials to add rules for generative AI. Chatbot systems that can produce lifelike responses, images and more are examples of generative AI.
Developers of general purpose AI models will have to provide detailed descriptions of the writings, pictures, video and other internet data used to train their systems. They must also follow EU copyright law.
AI-generated pictures, video or audio of existing people, places or events must be labeled as artificially produced. Such media are known as “deepfakes” when they appear to show real people doing or saying things they never did.
The biggest and most powerful AI models, those judged to carry “systemic risks,” face extra rules. They include OpenAI’s GPT-4 and Google’s Gemini.
What do Europe’s rules mean?
The EU first suggested AI regulations in 2019. Europe was quick to propose rules for the new and developing industry.
In the U.S., President Joe Biden signed an executive order on AI in October. The U.S. Congress is likely to propose legislation. Lawmakers in at least seven U.S. states are working on their own AI legislation. And international agreements are possible too.
Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI. Other major countries, including Brazil and Japan, are developing rules, as well as the United Nations and Group of Seven industrialized nations.
What happens next?
The AI Act is expected to officially become law by May or June, after approval from EU member countries. Its rules will take effect in stages. The banned uses of AI must be stopped six months after the law takes effect.
Rules for general purpose AI systems like chatbots will start going into effect in one year. By the middle of 2026, the complete set of regulations, including requirements for high-risk systems, will be in effect.
Each EU country will set up its own AI enforcement agency, where citizens can make a complaint if they believe the rules have been violated. And the EU will create an AI Office to oversee the law’s rules for general purpose AI systems.
Violations of the AI Act could be punished with a fine of up to $38 million, or seven percent of a company’s worldwide revenue, whichever is higher.
I’m Dan Novak.