Three countries against the EU AI law: “We must regulate applications and not technology if we want to play in the most important league in the world”


The Artificial Intelligence Act, a new piece of legislation aimed at setting rules for the development and use of AI systems, is in the final phase of discussion between the Commission, the Council and the European Parliament.

However, a recent unofficial document, to which Euractiv had access, brings to the table considerable tensions between member countries, especially among three of the EU's major players: France, Germany and Italy.

The initial regulation proposal contemplated a progressive approach, establishing codes of conduct and regulations for the most basic AI models. However, the arrival of chatbots as famous as OpenAI's ChatGPT, based on GPT-4, has shaken things up and prompted a fresh look at the law.

These three countries suggest reviewing the law, especially the points that affect these most advanced technologies, arguing that this type of regulation could negatively affect both innovation and security.

They believe it will delay innovation and, with it, security

In a jointly presented document, France, Germany and Italy oppose the imposition of strict rules on foundation models and propose an approach based on self-regulation through codes of conduct.

The document highlights that the risks associated with AI lie not so much in the technology itself as in its application. It argues that European standards should reflect this application-focused approach in the new legislative framework, so that neither innovation nor security is sacrificed.


“We need to regulate applications and not technology if we want to play in the world’s most important AI league,” Germany’s digital minister, Volker Wissing, told Reuters.

The proposal from these countries suggests that, instead of imposing rules from the outset, codes of conduct be established. These would be based on principles previously defined by the G7 through the Hiroshima process, and would aim at mandatory self-regulation.

“When it comes to foundation models, we oppose establishing untested standards and suggest that mandatory self-regulation through codes of conduct be established in the meantime,” the document reads.

Self-regulation would imply that developers define “model cards”: technical documents that summarize relevant information about how these models work. These would include details such as the number of model parameters, intended uses, potential biases, results of bias studies, and safety evaluations.
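As an illustration only, a model card of the kind the document describes could be sketched as a simple data structure; the field names below are hypothetical, not taken from any official template or regulatory text:

```python
# Hypothetical sketch of a "model card" for a foundation model.
# All field names and values are illustrative assumptions.
model_card = {
    "model_name": "example-foundation-model",
    "parameter_count": 7_000_000_000,            # number of model parameters
    "intended_uses": ["text summarization", "question answering"],
    "known_limitations": ["may reproduce biases present in training data"],
    "bias_study_results": "see accompanying evaluation report",
    "safety_evaluations": ["internal red-teaming completed"],
}

# A governance body of the kind proposed could check that the
# required disclosures are present before accepting the card:
REQUIRED_FIELDS = {
    "model_name",
    "parameter_count",
    "intended_uses",
    "bias_study_results",
    "safety_evaluations",
}

missing = REQUIRED_FIELDS - model_card.keys()
print("compliant" if not missing else f"missing fields: {sorted(missing)}")
```

This is only a sketch of the idea: the card collects the disclosures the document mentions, and a simple completeness check stands in for the oversight role the proposed governance body would play.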

In addition, to ensure compliance, the creation of an AI governance body is proposed to develop guidelines and oversee the implementation of these model cards. This body would be responsible for verifying that the model cards are applied and would provide a channel for publicly reporting any violations of the code of conduct.


According to the unofficial document, “any suspected violation should, for the sake of transparency, be made public by the authority.”

However, the proposal goes a step further by suggesting that, initially, no sanctions be applied. According to these three countries, sanctions should be introduced only after systematic violations of the codes of conduct, and after proper analysis and evaluation of the identified breaches.

As expected, these countries’ resistance to an approach that was on the verge of being adopted has created tensions in the negotiations. Some have even gone so far as to describe the situation as “a declaration of war.”

Resolving these issues will no doubt be very important not only for the regulation of AI in the EU, but also for determining how innovation and security are balanced in the development and use of these technologies globally, as it will surely set a precedent for other countries to follow.