A Tale of Two Cities: the best of times, the worst of times (for AI in Europe)

21.11.2025

This month, the digital policy path of Europe suddenly bifurcated. In the span of a single week in November, two courts – one in Munich, the other in London – handed down judgments that are diametrically opposed in their technical understanding of Artificial Intelligence models, their relation to copyrighted works that were used to train them, and the legal implications of such training.

For proponents of European digital sovereignty, the contrast should be not just stark but alarming. While the UK High Court has effectively cleared the way for AI development, the Munich Regional Court has erected a barrier that could trap EU AI development in a legal quagmire.

The Crux of the Dispute: Is an AI Model a “Copy”?

The central legal battleground for Generative AI has always been the ontological status of the model itself. Does a neural network, consisting of billions of parameters and weights, constitute a “copy” or “reproduction” of the works it “read” during training?

In Munich, in the case of GEMA v OpenAI, the answer was a resounding yes. The court ruled that because the model could, upon specific prompting, reproduce song lyrics (“memorization”), the model itself constitutes an “embodiment” of the copyrighted works. Consequently, the court dismissed OpenAI’s reliance on the Text and Data Mining (TDM) exception found in Article 4 of the EU Copyright in the Digital Single Market (CDSM) Directive. The logic is severe: if a model can output a work, it contains the work, and thus the TDM exception—designed for analytics, not reproduction—is not applicable. 
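The memorization phenomenon at the heart of the Munich ruling can be made concrete with a deliberately crude sketch (my illustration, bearing no resemblance to OpenAI's actual architecture): a toy character-level n-gram model trained on a single short text. With so little training data, every context has exactly one continuation, so prompting with the opening characters regurgitates the text verbatim.

```python
from collections import defaultdict

# Hypothetical "lyric" standing in for a copyrighted work.
lyric = "we all live in a yellow submarine"

# Build a lookup: each 4-character context maps to the character(s)
# observed after it in the training text.
model = defaultdict(list)
for i in range(len(lyric) - 4):
    model[lyric[i:i + 4]].append(lyric[i + 4])

def generate(prompt, length):
    """Extend the prompt one character at a time from the model."""
    out = prompt
    while len(out) < length and model[out[-4:]]:
        out += model[out[-4:]][0]  # deterministic: every context was unique
    return out

print(generate("we a", len(lyric)))  # reproduces the training text verbatim
```

Real LLMs are vastly larger and trained on vastly more data, so such verbatim recall is the exception rather than the rule; but the sketch shows why memorization is a function of data and capacity, not an intent to store copies.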

Across the Channel, in London, the judicial approach could not be more different. In Getty Images v Stability AI, the UK High Court took a pragmatic, technically literate view. On the core issue of reproduction, the judge rejected the notion that the AI model (the “weights”) constitutes an “infringing copy” of the training data. The court recognized that a model encodes statistical correlations rather than storing JPEGs or text files. Consequently, since the model is not a copy, it does not infringe reproduction rights: its importation and use in the UK, for example, does not trigger secondary infringement, even if the training (which happened abroad) involved copyrighted works.
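The London court's distinction can be illustrated with a minimal sketch (my own, not drawn from the case): even the simplest trained model reduces its training examples to a handful of parameters that capture a correlation, from which the original examples cannot be read back.

```python
# Fit y ≈ a*x + b to four hypothetical training points
# by ordinary least squares, using only the standard library.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# The entire trained "model" is two numbers. The original (x, y)
# pairs are gone; only the learned correlation survives.
print(a, b)
```

A neural network is this idea scaled up by many orders of magnitude, which is why the UK court could treat the weights as a statistical summary rather than a copy, while the Munich court focused on the edge cases where the summary is faithful enough to reproduce an input.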

What does this mean for Europe’s AI ambitions?

The implications of the Munich judgment for the EU’s goal of “AI Sovereignty” are potentially catastrophic. If this ruling stands – and it is important to note that it is not yet final – it effectively renders unlawful any Foundation Model trained on openly available European datasets without exhaustive, item-by-item licensing.

The EU has prided itself on the CDSM Directive as a balanced framework. However, the Munich court’s interpretation renders the TDM exception in Art. 4, already a narrow provision compared to the US “Fair Use” doctrine, practically useless for Generative AI training. If the mere potential for “memorization” invalidates the exception, then no LLM can safely be trained in Germany – and, if this interpretation is confirmed by the CJEU, anywhere in Europe.

This creates a perverse incentive. To be “sovereign,” the EU must produce its own models. Yet, under the Munich court’s standard, the only legally safe way to build a European model is to train it on data so sanitized and limited that the resulting AI will be commercially irrelevant compared to its American or Chinese counterparts. And with no viable EU companies doing AI training, who will pay for licenses to these datasets? Everybody loses.

Finally a Brexit Dividend?

Ironically, the UK, rightly criticized for its lack of clear regulatory direction post-Brexit, may have just stumbled onto a massive competitive advantage.

The UK High Court’s refusal to treat statistical weights as “copies” provides exactly the kind of legal certainty investors need. While the UK government has dithered on legislative text-and-data mining reforms, the judiciary has just provided a de facto safe harbor for model creators. AI companies can now look at London and see a jurisdiction that distinguishes between learning from a work and counterfeiting it. It is not hard to imagine AI research labs, and the immense capital that follows them, shifting their headquarters from Berlin or Paris to London to avoid the “Munich risk.”

Conclusion: at a crossroads yet again

We must face a hard truth: you cannot regulate what you do not possess. If the EU wishes to enforce its values on AI (transparency, fairness, non-discrimination, etc.), it must first possess a domestic AI industry capable of building world-class models. Otherwise, we will remain bystanders in the global AI revolution.

The road laid down by the Munich judgment leads to a digital colony, where Europeans are mere consumers of compliant, licensed models built elsewhere. To avoid this, the EU must urgently revisit the scope of the TDM exception and not only safeguard it, but expand and fortify it. We need a legal framework that acknowledges the technical reality: training is not copying, and learning is not theft. Creating AI models benefits society without detriment to rights holders – as long as the outputs of those models do not compete with the original works.

Unless the European judiciary or legislature broadens the TDM exception to explicitly cover model training and rejects the “model-as-copy” fallacy, the “Brussels Effect” will be replaced by the “London Lift-off.” We are limiting our own competitiveness just as the race is truly beginning.