


LONDON — European Union officials unveiled new rules Thursday to regulate artificial intelligence. Makers of the most powerful AI systems will have to improve transparency, limit copyright violations and protect public safety.
The rules, which are not enforceable until next year, come during an intense debate in Brussels about how aggressively to regulate a new technology seen by many leaders as crucial to economic success in the face of competition with the U.S. and China. Some critics accused regulators of easing rules to win industry support.
The guidelines apply only to a small number of tech companies like OpenAI, Microsoft and Google that make so-called general-purpose AI. These systems underpin services like ChatGPT and can analyze enormous amounts of data, learn on their own and perform some human tasks.
The so-called code of practice provides some of the first concrete details about how EU regulators plan to enforce a law, the AI Act, passed last year. Rules for general-purpose AI systems take effect Aug. 2, but EU regulators will not be able to impose penalties for noncompliance until August 2026, according to the European Commission, the executive branch of the 27-nation bloc.
The commission said the code of practice is meant to help companies comply with the AI Act. Companies that agree to the voluntary code will benefit from a “reduced administrative burden and increased legal certainty.” Officials said those that do not sign would still have to prove compliance through other means.
It was not clear which companies would join the code of practice.
CCIA Europe, a tech industry trade group representing companies like Amazon, Google and Meta, said the code “imposes a disproportionate burden on AI providers.”
Under the rules, tech firms will have to provide detailed breakdowns about the content used for training their algorithms.
The New York Times has sued OpenAI and its partner Microsoft, claiming that their AI systems infringed the copyright of its news content. Both companies have denied the claims.