Europe Doesn’t Want Wild AI: Tech Giants Are on the Defensive
20 August 2025
The European Union is once again trying to set the rules for everyone else. This time, the subject is artificial intelligence and new legislation that promises to reshape the relationship between Brussels and the tech giants. Although the rules are not yet mandatory, companies like Google, Meta, and OpenAI know one thing: those who don't listen now will pay later.
Europe has made it clear that it will not tolerate chaos in the world of artificial intelligence. Before the AI Act becomes fully applicable in 2026, the European Commission is introducing a Code of Practice for general-purpose AI. The document is voluntary, but its message is clear: the era of the AI Wild West has come to an end.
Companies that comply with the new guidelines can count on “less administrative burden and greater legal certainty,” according to The New York Times. A lack of cooperation in this area could force Big Tech to prove compliance with the regulations, a process that would be both time-consuming and costly.
The new regulations place a huge emphasis on transparency. The Commission wants to know where the data used to train models comes from. Is it public, private, legal, or perhaps… pirated? Meta has previously argued that individual books used in AI training have no real value as training data, but the EU clearly thinks otherwise.
The Commission is also interested in security issues. Companies will have to report incidents that threaten health, infrastructure, or cybersecurity to a special AI Office. In addition, they will have to prevent security breaches and explain every failure of their protective mechanisms.
The document also contains recommendations regarding copyright. Companies must respect paywalls and allow digital creators to exclude their work from AI training datasets. This is a response to controversial practices, such as downloading books from torrents, a practice that has been attributed to companies like Meta.
What’s more, the EU demands disclosure of the energy consumed by AI models, both during training and in daily operation. This is the first time EU rules have examined AI through the lens of its impact on the climate.
While the Code itself is not binding, the European Union has already approved the final version of the AI Act, which becomes fully applicable in 2026. Bloomberg reports that the EU can fine non-compliant companies up to 7 percent of their global annual turnover; for a company with, say, €100 billion in annual revenue, that would mean a penalty of up to €7 billion.
The AI industry is protesting. Representatives of the largest companies are appealing for a delay in implementing the regulations, arguing that the rules could stifle innovation. But the EU remains unyielding. Is the AI Act its answer to how technological progress can be reconciled with human rights and public safety?
Read the original article (in Polish): “Europa nie chce dzikiej AI. Giganci technologii są w defensywie” (“Europe doesn’t want wild AI. Tech giants are on the defensive”)