Omnibus, AI training and false moral panics

14.11.2025 · comment of the week · legislation

The recent leak of the European Commission’s “digital Omnibus proposal” has, predictably, thrown the privacy maximalist camp into a frenzy. No hyperbole is beneath this crowd, with “biggest rollback of digital fundamental rights in EU history” being a comparatively mild example. Never mind the details when the “Holy GDPR” has been desecrated!

However, this onslaught is meant to daze and confuse lawmakers, and to ensure that the professional campaigners for our rights, whom we never asked to be our spokespersons, hold on to their generously funded jobs.

Looking from the all-important perspective of Europe’s technological and economic future, this proposal is neither the threat it is portrayed to be, nor a forced concession handed to US “BigTech”. Instead, it represents one of the most significant opportunities we have to secure our digital future as a continent fighting to remain relevant on the global economic stage. Credit must be given to the Commission for taking this much needed and overdue step, knowing full well the response it would garner.

The proposed clarifications, specifically the unequivocal introduction of “legitimate interest” as a viable legal basis for training AI models, are a pragmatic and necessary step to unblock European innovation, with little practical effect on the everyday lives of EU citizens.

“The digital Omnibus proposal isn’t a threat to our rights – it’s the clarity Europe needs to innovate, compete, and lead. It is a pragmatic, pro-European step toward a stronger digital future.”

For years, the public debate has been fixated on the AI Act as the primary legislation shaping the legal situation of AI creators in Europe. While the AI Act is indeed a critical piece of regulation, it is not the biggest legal obstacle to AI development in Europe. Our members, from innovative SMEs to established R&D leaders, consistently report that their main challenge is (apart from copyright) the legal uncertainty surrounding the GDPR and how it affects AI training.

Data is the resource that fuels AI. Without access to high-quality, large-scale datasets for training, it is impossible to build, refine, or test competitive AI models. The Draghi report was correct to single out Europe’s fragmented data protection landscape as a critical drag on our competitiveness, perhaps even more so than the AI Act itself. We cannot become a global AI power if the very resource needed for its creation remains locked behind a wall of legal ambiguity.

This is not a call to abandon our values. It is a call to create a clear, harmonized, and innovation-friendly interpretation of them.

The ability to safely and legally train AI models within the European Union is paramount if we are to produce our own intellectual property. The current alternative is untenable: we risk becoming a continent of technology importers, wholly reliant on models trained elsewhere, under different legal and ethical frameworks. To achieve “strategic autonomy,” we must be able to build our own foundational models, and that work must begin at home. The Commission’s proposal provides a long-overdue legal pathway for this, supporting innovation while adhering to the established “legitimate interest” balancing test. 

This legislative clarification is made all the more urgent by the increasing fragmentation of the legal landscape. The recent opinion from the European Data Protection Board (EDPB) unfortunately only further muddied the waters, creating more questions than answers and leaving companies to navigate a patchwork of national interpretations under the constant threat of ex-post enforcement. Industry does not fear regulation – it fears uncertainty. Legal risk is a tax on business. The Commission’s proposal rightly moves to resolve this uncertainty at the legislative level.

The stakes could not be higher. AI development is a global, highly mobile field. Taking AI development elsewhere is not a hypothetical threat; it is a daily reality. It is easy and cheap for a startup to incorporate in a different jurisdiction, or for a large enterprise to move its R&D budget.

More tragically, we are witnessing a “brain drain” of our top talent. We are educating some of the best and brightest AI engineers in the world, only to see them pulled out of the EU to work for competitors, simply because the legal environment here is too complex and too uncertain for building the next generation of technology.

The “digital Omnibus proposal” is not a “bonfire” of our rights. It is an essential clarification that will allow European companies to compete, European IP to be created, and European talent to flourish at home. We urge co-legislators to see this for what it is: a pragmatic, necessary, and profoundly pro-European step toward securing our digital future.
