Trump’s EU foreign policy, implicated scholarship and the ‘Brussels Effect’

Uta Kohl, 16 January 2026 – 8 min read

For Europe, the fierceness of the Trump administration’s hostility to the EU has come as a shock. It is unprecedented in scale and kind, and manifests itself in words (Vance’s speech in Munich attacking the EU over free speech and migration, or Trump describing Europe as ‘decaying’ and its leaders as ‘weak’) and actions (halting military aid to Ukraine, announcing 30% tariffs on the EU, or threatening to take Greenland by force). Yet these hostilities do not come out of nowhere: they build on a rise in transatlantic tensions over many US policy choices between 2000 and 2024, and an acceleration of those tensions over the last decade. Legal and international relations scholars have decried these developments as a breach of trust or, in some cases, of international law. However, there appears to be little soul-searching about how we, as scholars, may be implicated in them. Whilst academia generally remains on the outskirts of day-to-day politics, we produce knowledge and narratives that create and shape discourses that have an impact on politics.

The Brussels Effect

One such popular academic narrative that has fed into the transatlantic hostilities is the ‘Brussels Effect’. The Brussels Effect was first coined by the Finnish-American scholar, Anu Bradford, in her article (2012) and book (2020) in which she purports to describe ‘how the European Union rules the world’. Her thesis is simple, namely that the EU can set – and has set – global regulatory standards by virtue of being a large and attractive market for many importers from outside the EU and, then, by setting (strict) standards for these importers who often have an incentive to adopt them as their global baseline. This de facto global harmonisation by corporate fiat is complemented by de jure global harmonisation as the home states of these corporations decide to follow the EU regulatory lead and enact like laws in their jurisdictions. Thus there is a global convergence towards EU standards without the political difficulties and cost associated with harmonisation efforts following formal processes. Effectively, the EU gets harmonisation on the cheap. European data protection law is widely seen as an example par excellence of the Brussels Effect as it has led to a widespread adoption of data protection laws around the globe.

Bradford’s Brussels Effect has been hugely successful as a seemingly objective and neutral synthesis of facts describing EU regulatory hyperactivity with extraterritorial effect. For the digital world, this seems particularly true considering the recent raft of EU legal instruments dealing with online platforms, such as the Digital Services Act, the Digital Markets Act and the AI Act. There are many more (including corporate sustainability measures), and all of them have extraterritorial reach as they apply to foreign providers that operate in the EU. The Brussels Effect has been referenced by thousands of scholars and taken up by EU policy makers and politicians with gusto, often as a badge of pride and honour.

And yet, there is more to the Brussels Effect than meets the eye. For a start, it is not simply a description of facts about EU regulation but a meta-narrative that puts a particular perspective or spin on facts. Meta-narratives are stories about stories, which explain, tie together, and legitimise or delegitimise smaller facts and events, and appeal as much to the emotions as they do to the intellect. Bradford’s article starts off by appealing to the sensitivities of the average American: ‘EU regulations have a tangible impact on the everyday lives of citizens around the world. Few Americans are aware that EU regulations determine the makeup they apply in the morning, the cereal they eat for breakfast, the software they use on their computer, and the privacy settings they adjust on their Facebook page. And that’s just before 8:30 AM.’(3)

The particular perspective of the Brussels Effect narrative is one of EU regulatory overreach. This charge is already implicit in the title of Bradford’s book: How the European Union Rules the World. Implicit in her argument is the question: why should Europe rule the world? Centuries of European imperialism, including legal imperialism, are bygone and, if not, should be. Brussels should be ashamed of itself. By the same token, if the Brussels Effect narrative offers a legitimate critique of excessive EU law, then the Trump administration’s opposition to EU regulation of US platforms also strikes a legitimate chord. In that case, the large platforms may also be right in characterising the fines imposed by the Commission under EU platform regulations as ‘protectionist’, ‘discriminatory’, ‘disguised tariffs’ or ‘censorship’. Yet, does the EU really rule the world? Unlikely.

There are indeed good reasons why the Brussels Effect narrative is not plausible. Here are three. First, EU (digital) regulation seeks to regulate the European single market and must necessarily apply to foreign providers who do business in Europe. This is a standard jurisdictional approach adopted across the globe, as it rightly protects local standards from being undermined by foreign providers. Second, when foreign corporations, like the US digital platforms, adopt European standards as their global baseline, this is a commercial decision driven by market forces. The EU cannot ‘choose’ this as a route to global harmonisation, but as a form of bottom-up harmonisation it can lend support and legitimacy to political harmonisation. Such market forces come and go, wholly outside the EU’s power. Third, whilst according to Bradford’s Brussels Effect the EU imposes its preference for ‘strict rules’ on ‘the rest of the world’ (citing almost exclusively US examples), arguably the US, and not the EU, is the outlier in its preference for laissez-faire law, especially in respect of the tech platforms. Already in 2005, Frederick Schauer observed that the absolutist speech protection of the First Amendment was the odd one out internationally: ‘On a large number of other issues in which the preferences of individuals may be in tension with the needs of the collective, the United States [stands] increasingly alone.’ Thus, it is far more plausible that EU regulations are simply more aligned with the public policies and interests of other jurisdictions than US laissez-faire law is.

The Washington Effect

If the Brussels Effect narrative paints a skewed picture of EU regulatory activism, it may be more compelling to understand EU regulations through the counter-narrative of the ‘Washington Effect’. A counter-narrative uses the same facts but tells a different story. In this case the story is that EU platform regulation is not an offensive extraterritorial strategy for Europe to attain global ‘superpower’ status, but rather a defensive territorial one that seeks to counter, in Europe, the hegemony of US platforms and US laissez-faire law. In other words, the EU is seeking to reclaim digital sovereignty and perhaps even to lead the global resistance to US legal imperialism.

The counter-narrative of the Washington Effect builds on the idea that deregulation is not nothing or neutral, but a form of regulation whereby existing legal standards are abandoned or watered down. It may occur within a jurisdiction through explicit deregulatory measures, or across jurisdictions when the more permissive laws of one State undermine the more restrictive laws of another. Although deregulation appears to facilitate the ‘free’ market – free from state interference – even a free market is enabled by the general law of the land, such as contract and property law, corporation law, basic rules on fair competition, product liability or negligence law. Thus deregulation that meddles with these fundamental enabling market rules constitutes a significant regulatory intervention in the market, rather than a non-intervention. Such deregulatory interventions reconstitute the market and its distribution of rights, privileges, powers and authorities. In other words, deregulation also regulates.

There is plenty of evidence of the de facto or de jure imposition of US deregulation on ‘the rest of the world’. Most notably, section 230 of the Communications Decency Act (1996), which immunises platforms from liability (under the ordinary law of the land) for wrongful publications by third parties on their domains, is one such piece of deregulation that the US has successfully exported to more than 60 jurisdictions worldwide, with an enormous effect on global networked space. Equally, a de facto Washington Effect occurred when US digital platforms – ‘socialised’ through US permissive laws, most notably US First Amendment jurisprudence – started to offer their services in Europe and elsewhere with minimal legal restraints built into their content distribution and ad revenue systems, and when this starting position went unchallenged in Europe for decades. So perhaps it is the Washington Effect, not the Brussels Effect, that really shows who rules the world.

The moral of the story

Academic scholarship matters. It tells stories. The Brussels Effect is a story that has mattered. Its effects have been significant. It has lent credence to the Trump administration’s opposition to EU tech regulation. It has thereby put the EU on the regulatory back foot and, at the same time, disguised quite how successfully Washington has exported its deregulatory regulation to the rest of the world. The Brussels Effect demonstrates that just because a narrative has intuitive appeal, and in fact appeals to many, it is not necessarily a good story. This is a dangerous one.

For a more in-depth analysis of the topic, see Uta Kohl, ‘The Politics of the ‘Brussels Effect’ Narrative’, forthcoming in ACROSS THE GREAT DIVIDE: PLATFORM REGULATION IN THE UNITED STATES AND EUROPE (A. Koltay, R. Krotoszynski, B. Török, E. Laidlaw (eds), OUP, 2026)

Data Protection and data analytics: what is Art. 29 WP really saying to businesses wanting to innovate with data?

In three months’ time, the General Data Protection Regulation (GDPR) will become applicable to many, if not all, data processing activities with which living individuals can be associated. Businesses operating in Europe have had about two years to prepare for this change. As readers know, even though the GDPR is a lengthy piece of legislation, additional interpretative guidance is very welcome to create and aid understanding of the ‘links’ between key concepts arising across the different pieces of the legislative ‘jigsaw’. The influential EU Article 29 Data Protection Working Party (Art. 29 WP) has therefore been working hard these past few months to give context to some of the most important GDPR requirements, e.g. by publishing guidelines on issues such as data protection impact assessments, data protection officers, the right to data portability, automated individual decision-making and profiling, personal data breach notification, consent, and transparency.

For newcomers to the field, excited about working with data (including personal data) to build and develop smart algorithmic systems, getting simple answers to key questions about how to comply with the GDPR is not always easy. [The same is probably often true for avid readers of the GDPR.]

What if one had only 1000 words to explain to businesses wanting to innovate with data relating to people what the GDPR is about? What would the message be?

For the sake of the thought exercise attempted here, we should probably assume that data innovation, in the main, implies the repurposing of data: the data is first collected for a specific or specified purpose and is then processed for a different purpose, one that most likely was not anticipated by the data controller at the initial stage of collection.

One of the first questions to pose in that context is whether a new legal (‘lawful’) basis is needed under EU data protection law for this change of purpose. Under GDPR Article 6, the principle of lawfulness demands that at least one legal basis (chosen from a limited list of options) be identified to justify a personal data processing activity: consent; performance of a contract, or steps necessary to enter into a contract; compliance with a legal obligation to which the controller is subject; protection of the vital interests of the data subject; performance of a task carried out in the public interest or in the exercise of official authority vested in the controller; or necessity for the purposes of the legitimate interests pursued by the data controller or by a third party, as long as those interests are not overridden by the interests or fundamental rights and freedoms of the data subjects.

Reading both Article 6(4) GDPR and the last version of Art. 29 WP guidelines on consent (‘WP259’) in conjunction, it appears that if the initial legal basis relied upon to justify personal data processing is consent, the only way to comply with the principle of lawfulness at the second stage (the data analytics stage) is to seek consent again.

This is what Art. 29 WP writes at p. 12 of WP259: “If a controller processes data based on consent and wishes to process the data for a new purpose, the controller needs to seek a new consent from the data subject for the new processing purpose.”

Nevertheless, Art. 29 WP is mindful of the fact that the law is changing and the GDPR introduces stricter conditions for obtaining informed consent where it is being relied upon by a data controller. It therefore adds (p. 30): “If a controller finds that the consent previously obtained under the old legislation will not meet the standard of GDPR consent, then controllers must assess whether the processing may be based on a different lawful basis, taking into account the conditions set by the GDPR. However, this is a one off situation as controllers are moving from applying the Directive to applying the GDPR. Under the GDPR, it is not possible to swap between one lawful basis and another.”

GDPR Art. 6(4) and Recital 50 seem to confirm that – following the GDPR coming into force – if the initial legal basis to be relied upon to justify processing personal data is consent, the doctrine of (in)compatibility of purposes (to ensure compliance with the so-called principle of ‘purpose limitation’) is not applicable. [Note that there has not always been consensus on the exact effects of the doctrine of (in)compatibility of purposes, see my previous post here, but Recital 50 now clarifies that “[t]he processing of personal data for purposes other than those for which the personal data were initially collected should be allowed only where the processing is compatible with the purposes for which the personal data were initially collected. In such a case, no legal basis separate from that which allowed the collection of the personal data is required.”].

But then, even if one is ready to seek consent again at the data analytics stage, could data subjects really be said to be capable of providing meaningful consent to such secondary practices? Article 6(4) provides that consent can only be given in relation to specific purposes.

Recital 33 GDPR suggests that, for scientific research purposes, the principle of purpose limitation should be relaxed. This is because, “It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.”

Although the GDPR seems to adopt a broad definition of scientific research, which covers “technological development and demonstration, fundamental research, applied research and privately funded research” (Recital 159), this relaxation by definition only applies to scientific research. Data analytics practices are not necessarily tantamount to scientific research activities. In fact, in most cases they do not involve researchers at all.

This explains why the GDPR uses a different term to describe data analytics: that of ‘general analysis.’ In Recital 29, one reads as follows:

“In order to create incentives to apply pseudonymisation when processing personal data, measures of pseudonymisation should, whilst allowing general analysis, be possible within the same controller when that controller has taken technical and organisational measures necessary to ensure, for the processing concerned, that this Regulation is implemented, and that additional information for attributing the personal data to a specific data subject is kept separately.”

What could Recital 29 mean?

It seems to suggest that, assuming the initial data controller also performs the secondary ‘general analysis’, the new purpose pursued at this later stage should be deemed compatible with the initial purpose, at least where a process of pseudonymisation (see Article 4(5) for the GDPR definition of ‘pseudonymisation’, matching the description in Recital 29) is applied to the personal data post-collection. Could we therefore also surmise, logically, that, assuming consent was not the legal basis initially relied upon to justify the original collection of the personal data, no new legal basis would be needed to justify its secondary usage?
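As a purely illustrative sketch of what this pseudonymisation step might look like in practice (the field names, sample data and keyed-hash approach below are my own assumptions; the GDPR does not prescribe any particular technique), the controller replaces direct identifiers with pseudonyms derived under a secret key, keeps that key (the ‘additional information’ of Article 4(5)) separately, and can later re-attribute records, while a recipient of the pseudonymised set alone cannot:

```python
import hmac
import hashlib

# Secret key held by the controller: the "additional information" of
# Article 4(5), to be kept separately under appropriate technical and
# organisational measures. (Hypothetical value, for illustration only.)
SECRET_KEY = b"controller-held-secret"

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed pseudonym (HMAC-SHA256)."""
    pseudonym = hmac.new(
        SECRET_KEY, record["name"].encode(), hashlib.sha256
    ).hexdigest()[:12]
    # Drop the direct identifier; keep the individual-level attributes.
    out = {k: v for k, v in record.items() if k != "name"}
    out["pseudonym"] = pseudonym
    return out

records = [
    {"name": "Alice Example", "postcode": "SO17", "accidents": 2},
    {"name": "Bob Example", "postcode": "SO17", "accidents": 0},
]

# What a recipient (e.g. a start-up) would receive: individual-level rows,
# no longer attributable to a person without the separately kept key.
shared = [pseudonymise(r) for r in records]
assert all("name" not in r for r in shared)
```

Because the pseudonym is derived deterministically from the controller’s key, the controller can re-attribute a row to a data subject when needed, whereas the recipient, holding only the shared set, cannot; this is the asymmetry on which Recital 29 appears to rely.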

By contrast, what if the secondary ‘general analysis’ of that same personal data was actually to be undertaken by a third party, which implies that the data controller would transfer the data set to a recipient [e.g. a start-up] to carry out the innovative analytics job? Would the old and new purposes necessarily be incompatible? If the answer is yes, a new legal basis would then be needed to justify the secondary processing at the data analytics stage.

What should a start-up receiving personal data from a data provider, to develop a solution and sell it back to the data provider, really do then?

At a minimum, the start-up should probably check what the legal basis for the repurposing of the data is likely to be, BUT ALSO whether the initial legal basis relied upon by the data provider in collecting/creating the personal data was consent obtained from the data subject, or not.

Taking this analysis one step further, assuming there is an argument [which is not straightforward, as explained above] that the processing of personal data for general analysis (secondary analytics) purposes is compatible with the initial purpose justifying the original collection – even if the general analysis is to be undertaken by a third party on behalf of the data controller – that third party should in principle receive the data only after a pseudonymisation process has been applied to it.

Start-ups should therefore specifically ask for pseudonymised data from the provider of the data they will be experimenting with, whenever possible.

This makes particular sense in the light of Article 11 GDPR which, alluding to a state of personal data very similar to that of data which has undergone the GDPR process of pseudonymisation, expressly states that if “the controller is able to demonstrate that it is not in a position to identify the data subject, the controller shall inform the data subject accordingly, if possible. In such cases, Articles 15 to 20 shall not apply except where the data subject, for the purpose of exercising his or her rights under those articles, provides additional information enabling his or her identification.” [As a reminder, Articles 15 to 20 GDPR refer to the data subject’s rights of access, rectification, erasure, restriction of processing, and data portability.]

Surprisingly, the right to object is not exempted under Article 11, as it is governed by Article 21, whereas Article 12(2) expressly states: “In the cases referred to in Article 11(2), the controller shall not refuse to act on the request of the data subject for exercising his or her rights under Articles 15 to 22, unless the controller demonstrates that it is not in a position to identify the data subject.” [How do we explain this?]

But that is probably not all that we can infer from the above logic being suggested.

Under GDPR Art. 4(4) ‘profiling’ means “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”

Moreover, reading the Art. 29 WP guidelines on automated individual decision-making and profiling (‘WP251’, recently adopted in final form here), it appears clear that the Art. 29 WP envisages that the secondary data analytics stage, i.e. analysis to identify correlations in personal datasets at a later point in time, is covered by this GDPR definition of profiling. Specifically, it alludes to the fact that analysis to identify correlations would/should fall under the GDPR definition of profiling (p. 7).

As a result, if the data shared retains individual-level data points [a fact that is consistent with the process of pseudonymisation being applied to personal data precisely to minimise the risk of harm to data subjects arising from later processing activities], there is an argument that the recipient responsible for the data analytics effort may yet be found to be engaging in profiling activities whenever it looks for patterns or commonalities. [The way the ultimate purpose of the set of processing activities is described could make the difference. E.g. “I am analysing data generated by driverless cars to identify where the most accidents take place and adapt road signs”.]
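The bracketed example can be made concrete with a small, entirely hypothetical sketch (invented data and field names): counting accidents per location is an aggregate ‘general analysis’, but because the input rows remain at individual level, the very same dataset equally supports evaluating a single (pseudonymised) person, which is where the Art. 4(4) definition of profiling may bite:

```python
from collections import Counter

# Hypothetical pseudonymised, individual-level records received
# from a data provider.
events = [
    {"pseudonym": "p1", "location": "Junction A", "severity": 3},
    {"pseudonym": "p2", "location": "Junction B", "severity": 1},
    {"pseudonym": "p1", "location": "Junction A", "severity": 2},
]

# Aggregate 'general analysis': where do most accidents take place?
by_location = Counter(e["location"] for e in events)
print(by_location.most_common(1))  # [('Junction A', 2)]

# The same individual-level rows also permit per-person evaluation,
# e.g. scoring one pseudonymised driver's behaviour, which looks much
# closer to 'profiling' even though no direct identifier is present.
by_person = Counter(e["pseudonym"] for e in events)
print(by_person["p1"])  # 2
```

The point is that the described purpose (adapting road signs) yields only the first, aggregate output, yet the recipient holding `events` is technically in a position to compute the second, person-level one as well.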

As profiling activities require special care under the GDPR, in particular if such activities are followed by individual decisions taken as a result of profiles created (see GDPR Art.22 and Art. 35, for example), start-ups could find it useful to check with their data providers whether a data protection impact assessment has been undertaken to make sure the future risks for the individual data subjects – those at the very centre of the data analytics ‘story’ – have been taken into account and mitigated at an early stage.

What is the moral of the story? Data providers and start-ups should probably work closely together when doing people-centric data innovation. 1000 words is never enough to tell a data protection story!

Sophie Stalla-Bourdillon

This article was first published on Peep Beep!, a blog dedicated to information law.