Trump’s takeover of Venezuela and the long-standing hypocrisy of international law

Andrea Maria Pelliconi, 5 February 2026 – 7 mins read

The past months have brought back with startling clarity a pattern many international lawyers know all too well. Under the second presidency of Donald Trump, the United States (US) has returned to overt coercion to impose its interests upon the rest of the world. Realist pragmatism has always been present in international relations, but states used to couple their acts “with at least a resemblance of legal justification”. This time, however, practice is matched by words: Trump has completely dispensed with the liberal varnish that usually accompanies US extraterritorial mischief and openly admits that his actions are guided only by his own morality. The attack on Venezuela and the kidnapping of Nicolás Maduro and his wife have been followed by repeated threats to annex Greenland by purchase or use of force, alongside renewed intimidation directed at other states such as Panama, Mexico, Colombia, and Cuba. We’re witnessing an accelerated deepening of the global crisis of multilateralism and international law, and a return to Great Powers’ “spheres of influence”.

It is well known in legal scholarship that the unilateral kidnapping of a sitting head of state is unequivocally unlawful under international law, whatever one may think about the Maduro regime and its gross and widespread human rights abuses. The prohibition of the use of force, the principles of sovereignty and non-intervention, and the personal immunity of heads of state vis-à-vis foreign domestic jurisdictions leave no room for ambiguity. And yet, the responses of some Western states and the European Union (EU) have not been so unequivocal. Instead, they have been divided and ambivalent, cloaked in watered-down words of “concern” or “monitoring”, or in strategic silences. German Chancellor Merz has stated that the legal assessment of the US operation is “complex”. French President Macron has emphasised the need for a transition which is “peaceful, democratic, and respectful of the will of the Venezuelan people”. Italian Prime Minister Giorgia Meloni suggested that while the use of force is generally wrong, the US was acting in self-defence against so-called “hybrid security attacks”, referring to Maduro’s supposed weaponisation of drug-trafficking against the US. Reactions were far more decisive when it came to Trump’s threats against Greenland, where the territorial integrity and interests of an EU and NATO state were at stake. European leaders suddenly rediscovered the language of international law and the inviolability of sovereignty and territorial integrity with admirable clarity.

This should not come as a surprise. Over the past years, manifestations of double standards and selectivity have grown exponentially, the most discussed examples being the reactions to Russia’s invasion of Ukraine and to Israel’s “plausible” genocide in Gaza, both framed as defensive responses to security threats and terrorism. While one may well argue that Israel’s self-defence claim was better founded than Russia’s, it soon became clear that the 7 October 2023 attack was being used as a mere pretext to unleash uncontrolled violence on Palestinians, while preparing the ground for territorial expansion in both Gaza and the West Bank. This expansionist plan continues to develop even now, after the supposed “ceasefire” and the UN Resolution on Gaza, and nothing concrete is being done to bring it to an end. The same double standard surfaced in reactions to the International Criminal Court’s (ICC) arrest warrants against Vladimir Putin and Benjamin Netanyahu for international crimes. The same European states that rushed to praise the ICC for its investigation of the Russian leader later declared that Netanyahu enjoys immunity from ICC prosecution.

And yet, once again, this selectivity is not new at all. It follows decades of unlawful US (and broader Western) attacks and interventions in Iraq, Afghanistan, and Iran – to the extent that exceptionalism is not exceptional anymore. Each time, international law norms were stretched, re-interpreted, or reinvented through securitisation doctrines such as preventive self-defence and the “unable or unwilling” test, or hidden behind ostensible “benevolent motives” such as humanitarian intervention, regime change, and the “exportation of democracy”. In fact, the US has committed the exact same type of head-of-state kidnapping in the region before, most notably with the capture of Manuel Noriega from Panama. This is the so-called “rules-based international order”, meaning the rules that the US and its allies imposed upon the rest of the world (“the West and the rest”), as opposed to what international law actually required. Every time, other Western states have been weak in condemning the illegality of these actions, and even weaker in doing anything concrete to prevent, stop, or redress them.

Now, at what feels like the climax of the collapse of the international legal order and multilateralism, everyone has awakened, shouting that this is not a drill. UN experts warn that such actions normalise lawlessness in international relations, and commentators caution that Venezuela sets a dangerous precedent: if powerful states may unilaterally decide when international law applies and when it does not, the legal order collapses into selective enforcement and strategic convenience. Even leading US academics now talk about the catastrophic collapse of jus ad bellum norms and the dangers we all face when “might unmakes right”. They highlight the risk that the Venezuela incident “opens the door to other similar actions by powerful nations in the future”. This fails to appreciate that the door has long been wide open.

A prevalent position now is to acknowledge the flaws of international law while vehemently opposing the abandonment of its normative constraints, because they’re the only thing that will save us from disaster. On this view, international law can still be mobilised to place constraints on power, and if it’s consistently disregarded, that is because of contingent political factors militating against full compliance. A recurring metaphor that I’ve heard a few times lately, including among critical thinkers, is the myth of Sisyphus: the futility of the task should not deter the discipline. But this, at least in part, obscures how the system was intentionally built to shield the actions of the West and has laid the foundations for the situation we now find ourselves in. International law and the indeterminacy of its content provide a “professional vocabulary” with which to build plausible arguments. For decades, mainstream scholars were complicit in the legal legitimisation of these actions, coming up with doctrines that served the interests of the moment under a façade of international legal jargon.

The inherent defect of the international legal infrastructure has simply become more visible now. As Rajagopal has put it: “The revival of overt colonial and imperial designs under the Trump regime in Washington is notable not because it has invented new forms of domination, but because it has dispensed with the traditional liberal rhetoric that once accompanied them.” From its colonial origins to its modern doctrines of sovereignty, intervention, and trade, international law has consistently operated in the interests of dominant states and classes, while insulating them from the equal application of its norms. What we are witnessing today are colonial revivals as the logical outcome of a system that never truly decolonised. It seems scarier to European eyes now because it has eventually turned against them.

Of course, legal scholarship is not monolithic. Critical voices, including TWAIL scholars, have raised these concerns all along. Yet they have been unable to bring about material change, partly because of the structural hierarchies of international law, and partly because of the fragmentation of their own views. Some tried to change the system from within; others were content to critique from the margins; still others advocated radical transformation or the complete dismantling of the legal order, often without a clear project – and always with different opinions – for what should come after.

Now, as we stand on the verge of a concrete dismantling of the system, with Trump’s plan to replace the United Nations with his own personal “Board of Peace” and international relations reaching the peak of personalisation and corporatisation, everyone – even critics – seems unsure what to do. If there is a moment to seize, it is now. But seizing it requires more than lamenting Trump’s excesses or the fragility of the system: it demands an honest reckoning with the errors of the past, Western exceptionalism, legal complicity, resource-hungry capitalism, and a system ostensibly built on sovereign equality but consistently seized by vetoes and unilateral reprisals. What is needed are visionary ideas for radical change and possible futures – and I am not sure we – myself first of all – are up for the task.

The Sense and Nonsense of AI Ethics: a Whistle Stop Tour

Kieron O’Hara, 29 January 2026 – 8 mins read

Emeritus fellow, University of Southampton, kmoh@soton.ac.uk

AI research has been energised since the unveiling of AlphaGo in 2016 and ChatGPT in 2022, systems that demonstrated capabilities well beyond public and even expert expectations. It has also acquired a chaperone: a growing cottage industry of AI ethics that seeks to describe, diagnose, and ultimately remedy its perceived potential harms.

The Silicon Valley credo ‘move fast and break things’ is obviously ethically flawed (especially when the ‘things’ are people), but potential problems don’t usually spawn sub-disciplines; there are no ethics of differential equations, printing or hammers. There is no legal demand to be ethical, and no-one can force you to be ethical, so there is a limit to the number of harms AI ethics can prevent. That is not to say that AI development has no ethical dimension; of course it does, and I shall sketch it at the end of this blog.

The cottage industry has emerged from two sets of incentives. Ethicists like advisory committees (upon which they might expect to sit). A lovely example comes from Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, who prescribe “the leadership of a small group of respected figures from the highest levels of government, business, and academia” to ensure the US “remains intellectually and strategically competitive in AI” and to “raise awareness of the cultural implications AI produces”. I wonder who the former Secretary of State, the former CEO of Google and the MIT computer scientist have in mind? Meanwhile, tech developers relish applying Silicon Valley methods to moral philosophy, preferring doomster sci-fi to the hard yards of solving genuine problems (and if ethical codes raise compliance costs for startup competitors, what’s not to like?).

The result is a crowded field with a confusion of non-problems, non-serious problems, non-specific problems and the real deal. Apologies for my necessarily cursory treatment in this survey.

Non-problems

Some perceived AI ethics problems require little action beyond an eyeroll. One non-problem is Artificial General Intelligence, the singularity, and sentience, which together supposedly pose an existential threat. It is assumed without proof that superintelligent agents will have the power (and inclination) to pursue harmful goals autonomously. Barring thought experiments, game theory and the plot of 2001, no evidence is produced for this, although one expert declared, with spurious precision, that AI will take over in December 2027. Both Hinton and Bengio claim the risk is 20% or more. Is it really more serious than climate change, another pandemic or nuclear war?

A second type of non-problem uses critical theory to depict AI as complicit in capitalism, whiteness, racism, sexism, data colonialism, and so on. Maybe, maybe not, but it is not obvious what the conscientious AI developer is to do, other than indulge in the Foucault-worshipping armchair revolutionary groupthink that has thus far proved remarkably unsuccessful in derailing capitalism.

Non-serious problems

Some genuine problems may be poorly framed: not necessarily trivial or easily solved, but not ethical priorities either.

One such is bias. Because algorithms uncover existing patterns in data, they need either unbiased or synthetic data. Where unbiased data is unavailable or insufficient, the remedy is to anticipate the likely problems and bias the algorithm against unwanted results. This may not be easy, but it is a practical problem. Undesigned biases of algorithms are of less ethical import, being statistically anomalous rather than socially significant. Bias is additionally misleadingly framed by the disparate impact doctrine, which ignores that unintentionally discriminatory decision procedures often discriminate for desired behaviour, such as being creditworthy or law-abiding. AI’s potential depends on positive discrimination; focusing only on negative discrimination is itself biased.
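By way of illustration only (my sketch, not drawn from the original post), one simple way of ‘biasing the algorithm against unwanted results’ is to reweight training examples so that an under-represented group carries the same total weight as the majority group. The dataset, group labels and model choice below are all invented for the example.

```python
# A minimal, hypothetical sketch: counteract a known data imbalance by giving
# each group equal total weight during training. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(groups: np.ndarray) -> np.ndarray:
    """Return per-sample weights so every group carries equal total weight."""
    weights = np.ones(len(groups), dtype=float)
    unique = np.unique(groups)
    for g in unique:
        mask = groups == g
        weights[mask] = len(groups) / (len(unique) * mask.sum())
    return weights

# Deliberately skewed training data: 90% group "a", 10% group "b".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
groups = rng.choice(["a", "b"], size=1000, p=[0.9, 0.1])
y = (X[:, 0] + 0.5 * (groups == "b") + rng.normal(size=1000) > 0).astype(int)

# Without weights the fit is dominated by group "a"; with them, group "b"
# influences the learned coefficients equally.
model = LogisticRegression().fit(X, y, sample_weight=group_balanced_weights(groups))
```

The point of the sketch is simply that the counter-bias is a design decision taken up front – which is why it is a practical engineering problem rather than a distinctively ethical one.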

A second problem is that neural nets are black boxes, which is difficult if we demand that the machine explain its own output. But any AI ‘decision’ is implemented by an organisation, which bears the responsibility to justify its actions. The required explanation is less the derivation of the output than its congruence with the organisation’s goals, and the legitimacy of pursuing them.

Third is the persistent yet so far unproven claim that AI will replace jobs, leading to a shortage of work. This accepts the lump of labour fallacy, denies that greater productivity will raise wages and employment, and assumes an elusive business model for generative AI. And just because a job could be taken does not mean it will be; train drivers could have been eliminated 50 years ago, but they still chug along.

Fourth, privacy: if training uses personal data without data subjects’ consent or another lawful basis under data protection law, then it is illegal. If it uses personal data legally, then the charge that it is unethical is not a strong ground upon which to act. Either way, data protection law outweighs ethical considerations.

Non-specific problems

Problems not unique to AI are not best addressed through AI regulation or governance. A recent paper listed the likely harms of AI as biodiversity loss, carbon emissions, chemical waste, exploitation of gig workers, exploitation of marginalised and indigenous groups, widening inequalities, eroding trust, and injuries to animals. All serious, but AI-specific guidelines are neither necessary nor sufficient to deal with these far wider issues.

Misinformation is also a problem, but a war against fake news will be as problematic as the war on drugs for the same reason: the issue is not supply, but rather excess demand. This is a social problem, requiring societal adaptation.

The real problems

Real problems are those that require ethical insight and over which AI developers have some control. One such is information pollution. LLMs have a tendency to ‘hallucinate’, and can be ‘trained’ to produce racist or other offensive output. This is particularly problematic because output will be used to train the next generation of bots, with the danger of a vicious circle of ‘pollucination’. Conscientious developers, under pressure to produce ever more compelling models, may be urged to put power before rigour.

Other serious issues include intellectual property, cybersecurity, defence (using autonomous learning systems), and diversity (progressive young males tend to be overrepresented in development teams). With these, the question is how to make AI safer without compromising quality, e.g. by insisting on ‘humans in the loop’.

Approaches to avoid

Simplistic views translate complex ethical positions into simple calculi. Framed like this, AI can solve the problems itself! Examples include:

  • Accelerationism: AI is superior to human thought, and development should be escalated to eliminate ‘residual anthropolitical signature’.
  • Effective altruism: a combination of the naïve utilitarianism of Derek Parfit, Peter Singer, and William MacAskill, with the debatable assumption that the tech bros’ chatbots are to benefit humankind, not their bank balances.
  • Ethical AI: AI systems themselves compute the ethical consequences of proposed actions.
  • Rationalism: extreme technocratic and hyper-rational consequentialism, ignoring convention and taking ideas to their logical conclusions.

Others assume that as humanity is transformed, AI systems and cyborgs will have divergent interests from humans, and yet comparable ethical status.

  • Transhumanism: humanity should be improved by applying technology to cognition, well-being and longevity.
  • Posthumanism: humanity should be eliminated by applying technology to genetics and neural capabilities to integrate it with wider sentient networks.
  • Environmental ethics and other anti-anthropocentric views: humans should not be central to ethical inquiry.

We should reject these too: anthropocentricity is central to ethical inquiry. Even views taking technologies, other species of animal, or entire ecosystems into account do so for anthropocentric reasons.

The third class of simplistic views argues for inclusive democratic participation (often phrased in complicity with the reader, suggesting that ‘we’ should take charge). Quite how citizens’ juries and civic dialogues would avoid being dominated by the exam-passing classes, or could constrain AI development at all, is left unsaid – and good luck if you want to try it in China or Russia.

Finally, AI is neither intrinsically good nor bad. Technology to support the development of innovative software, drugs or defences could equally produce new malware, poisons or weapons. This dilemma can’t be offset by programmes of ‘AI for good’. These, while demonstrating benefits (‘beneficial’ defined by developers), can’t eliminate harmful or criminal uses.

The literature on ensuring that the ‘values’ of AI systems align with those of wider society is similarly flawed. Autonomous AI systems operate on reinforcement functions, not on values or internal motivations, which they don’t have. They may behave unpredictably, and against human interests, but that can’t be programmed out of them. Testing and modelling methods will be of far more use.
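To make the point concrete, here is a deliberately toy sketch (mine, not the author’s): a Q-learning agent on a five-state chain whose entire ‘motivation’ is a one-line reward function chosen by the programmer. Every name and number in it is invented for illustration.

```python
# A toy illustration that an RL agent's "values" are just a numeric reward
# function supplied by the designer; there is nothing else to "align".
import numpy as np

n_states, n_actions = 5, 2                 # 1-D chain; actions: 0 = left, 1 = right
reward = lambda s: 1.0 if s == n_states - 1 else 0.0   # the agent's only "value"

Q = np.ones((n_states, n_actions))         # optimistic init encourages exploration
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.1, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        # The update depends only on the reward signal, nothing resembling a motive.
        Q[s, a] += alpha * (reward(s_next) + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))   # learned policy for non-terminal states: move right
```

Whatever behaviour emerges, there are no values anywhere in the loop – only the reward signal – which is why empirical testing and modelling of behaviour do more work than the language of alignment.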

Virtue and responsible AI

Where exactly should ethics feature in AI? Let us begin with an observation: if someone sincerely wants to avoid doing the wrong thing, and is thoughtful and conscientious about it, then more often than not they will succeed. The result of the inquiry is less important than its existence.

The ethical actor is not a corporation or an in-role manager (they need back-covering, standardised tick-box templates), but the individuals involved in the development of AI within a corporate or organisational context. What matters is their conscientiousness and sincerity – their character, their virtuousness.

Virtue ethics has the usual enforcement problem – no-one is forced to be virtuous, and sometimes virtuousness may be punished (cf. the Trump administration passim). However, it is normative for the ethically-minded, and credits the developer with trustworthiness and maturity.

There is a connection between virtue ethics and responsible AI, the movement to ensure ethical development through the life cycle of design, coding, training, and deployment. This literature, such as Vallor on virtues and Dignum on responsible AI, despite familiar Western and progressive biases, is the most valuable starting point for AI ethics.

Trump’s EU foreign policy, implicated scholarship and the ‘Brussels Effect’

Uta Kohl, 16 January 2026 – 8 mins read

For Europe, the fierceness of the Trump administration’s hostility to the EU has come as a shock. It is unprecedented in scale and kind, and manifests itself in words (Vance’s speech in Munich attacking the EU over free speech and migration, or Trump describing Europe as ‘decaying’ and its leaders as ‘weak’) and in actions (halting military aid to Ukraine, announcing 30% tariffs on the EU, or threatening to take Greenland by force). Yet these hostilities do not come out of nowhere: they build on a rise in transatlantic tensions over many US policy choices between 2000 and 2024, and an acceleration of those tensions over the last decade. Legal and international relations scholars have decried these developments as a breach of trust or, in some cases, a breach of international law. However, there appears to be little soul-searching about how we, as scholars, may be implicated in them. Whilst academia generally remains on the outskirts of day-to-day politics, we produce knowledge and narratives that create and shape discourses that have an impact on politics.

The Brussels Effect

One such popular academic narrative that has fed into the transatlantic hostilities is the ‘Brussels Effect’. The term was coined by the Finnish-American scholar Anu Bradford in her article (2012) and book (2020), in which she purports to describe ‘how the European Union rules the world’. Her thesis is simple: the EU can set – and has set – global regulatory standards by virtue of being a large and attractive market for many foreign businesses, and by imposing (strict) standards on those businesses, which often have an incentive to adopt them as their global baseline. This de facto global harmonisation by corporate fiat is complemented by de jure global harmonisation as the home states of these corporations decide to follow the EU regulatory lead and enact like laws in their own jurisdictions. Thus there is a global convergence towards EU standards without the political difficulties and costs associated with formal harmonisation processes. Effectively, the EU gets harmonisation on the cheap. European data protection law is widely seen as the example par excellence of the Brussels Effect, as it has led to the widespread adoption of data protection laws around the globe.

Bradford’s Brussels Effect has been hugely successful as a seemingly objective and neutral synthesis of facts describing EU regulatory hyperactivity with extraterritorial effect. For the digital world, this seems particularly true considering the recent raft of EU legal instruments dealing with online platforms, such as the Digital Services Act, the Digital Markets Act and the AI Act. There are many more (including corporate sustainability measures), and all of them have extraterritorial reach as they apply to foreign providers that operate in the EU. The Brussels Effect has been referenced by thousands of scholars and taken up by EU policy makers and politicians with gusto, often as a badge of pride and honour.

And yet, there is more to the Brussels Effect than meets the eye. For a start, it is not simply a description of facts about EU regulation but a meta-narrative that puts a particular perspective or spin on facts. Meta-narratives are stories about stories, which explain, tie together, and legitimise or delegitimise smaller facts and events, and appeal as much to the emotions as they do to the intellect. Bradford’s article starts off by appealing to the sensitivities of the average American: ‘EU regulations have a tangible impact on the everyday lives of citizens around the world. Few Americans are aware that EU regulations determine the makeup they apply in the morning, the cereal they eat for breakfast, the software they use on their computer, and the privacy settings they adjust on their Facebook page. And that’s just before 8:30 AM.’(3)

The particular perspective of the Brussels Effect narrative is one of EU regulatory overreach. This charge is already implicit in the title of Bradford’s book: How the European Union Rules the World. Implicit in her argument is the question: why should Europe rule the world? Centuries of European imperialism, including legal imperialism, are bygone and, if not, should be. Brussels should be ashamed of itself. By the same token, if the Brussels Effect narrative offers a legitimate critique of excessive EU law, then the Trump administration’s opposition to EU regulation of US platforms also strikes a legitimate chord. In that case, the large platforms may also be right in characterising the fines imposed by the Commission under EU platform regulations as ‘protectionist’, ‘discriminatory’, ‘disguised tariffs’ or ‘censorship’. Yet does the EU really rule the world? Unlikely.

There are indeed good reasons why the Brussels Effect narrative is not plausible. Here are three. First, EU (digital) regulation seeks to regulate the European single market and must necessarily apply to foreign providers who do business in Europe. This is a standard jurisdictional approach adopted across the globe, as it rightly protects local standards from being undermined by foreign providers. Second, when foreign corporations, like the US digital platforms, adopt European standards as their global baseline, this is a commercial decision driven by market forces. The EU cannot ‘choose’ this as a route to global harmonisation, but as a form of bottom-up harmonisation it can lend support and legitimacy to political harmonisation. Such market forces come and go, wholly outside the EU’s power. Third, whilst according to Bradford’s Brussels Effect the EU imposes its preference for ‘strict rules’ on ‘the rest of the world’ (citing almost exclusively US examples), arguably it is the US, and not the EU, that is the outlier in its preference for laissez-faire law, especially in respect of the tech platforms. Already in 2005, Frederick Schauer observed that the absolutist speech protection of the First Amendment was the odd one out internationally: ‘On a large number of other issues in which the preferences of individuals may be in tension with the needs of the collective, the United States [stands] increasingly alone.’ Thus, it is far more plausible that EU regulations are simply more aligned with the public policies and interests of other jurisdictions than US laissez-faire law is.

The Washington Effect

If the Brussels Effect narrative paints a skewed picture of EU regulatory activism, it may be more compelling to understand EU regulations through the counter-narrative of the ‘Washington Effect’. A counter-narrative uses the same facts but tells a different story. In this case the story is that EU platform regulation is not an offensive extraterritorial strategy for Europe to attain global ‘superpower’ status, but rather a defensive territorial one that seeks to counter, in Europe, the hegemony of US platforms and US laissez-faire law. In other words, the EU is seeking to reclaim digital sovereignty, and perhaps even to lead the global resistance to US legal imperialism.

The counter-narrative of the Washington Effect builds on the idea that deregulation is not nothing, or neutral, but a form of regulation whereby existing legal standards are abandoned or watered down. It may occur within a jurisdiction through explicit deregulatory measures, or across jurisdictions when the more permissive laws of one State undermine the more restrictive laws of another. Although deregulation appears to facilitate the ‘free’ market – free from state interference – even a free market is enabled by the general law of the land, such as contract and property law, corporation law, basic rules on fair competition, product liability or negligence law. Thus deregulation that meddles with these fundamental enabling market rules constitutes a significant regulatory intervention in the market, rather than a non-intervention. Such deregulatory interventions reconstitute the market and its distribution of rights, privileges, powers and authorities. In other words, deregulation also regulates.

There is plenty of evidence of the de facto or de jure imposition of US deregulation on ‘the rest of the world’. Most notably, section 230 of the Communications Decency Act (1996), which immunises platforms from liability (under the ordinary law of the land) for wrongful publications by third parties on their domains, is one such piece of deregulation that the US has successfully exported to more than 60 jurisdictions worldwide, with an enormous effect on global networked space. Equally, a de facto Washington Effect occurred when US digital platforms – ‘socialised’ through US permissive laws, most notably US First Amendment jurisprudence – started to offer their services in Europe and elsewhere with minimal legal restraints built into their content distribution and ad revenue systems, and when this starting position went unchallenged in Europe for decades. So perhaps it is the Washington Effect, not the Brussels Effect, that really shows who rules the world.

The moral of the story

Academic scholarship matters. It tells stories. The Brussels Effect is a story that has mattered. Its effects have been significant. It has lent credence to the Trump administration’s opposition to EU tech regulation. It has put the EU on the regulatory back foot and, at the same time, disguised quite how successfully Washington has exported its deregulatory regulation to the rest of the world. The Brussels Effect demonstrates that just because a narrative has intuitive appeal, and in fact appeals to many, it does not follow that it is a good story. This is a dangerous one.

For a more in-depth analysis of the topic, see Uta Kohl, ‘The Politics of the “Brussels Effect” Narrative’, forthcoming in ACROSS THE GREAT DIVIDE: PLATFORM REGULATION IN THE UNITED STATES AND EUROPE (A. Koltay, R. Krotoszynski, B. Török, E. Laidlaw (eds), OUP, 2026).