The Sense and Nonsense of AI Ethics: A Whistle-Stop Tour

Kieron O’Hara, 29 January 2026 – 8 mins read

Emeritus fellow, University of Southampton, kmoh@soton.ac.uk

AI research has been energised since the unveilings of AlphaGo in 2016 and ChatGPT in 2022, which demonstrated capabilities well beyond public and even expert expectations. It has also acquired a chaperone: a growing cottage industry of AI ethics that aims to describe, diagnose, and ultimately remedy its perceived potential harms.

The Silicon Valley credo ‘move fast and break things’ is obviously ethically flawed (especially when the ‘things’ are people), but potential problems don’t usually spawn sub-disciplines; there are no ethics of differential equations, printing or hammers. Moreover, there is no legal demand to be ethical, and no-one can be forced to be ethical, so there is a limit to the number of harms ethics can prevent. That is not to say that AI development has no ethical dimension; of course it does, and I shall sketch it at the end of this blog.

The cottage industry has emerged from two sets of incentives. Ethicists like advisory committees (upon which they might expect to sit). A lovely example is Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, who prescribe “the leadership of a small group of respected figures from the highest levels of government, business, and academia” to ensure the US “remains intellectually and strategically competitive in AI” and “raise awareness of the cultural implications AI produces”. I wonder who the former Secretary of State, the former CEO of Google and the MIT computer scientist have in mind? Meanwhile, tech developers relish applying Silicon Valley methods to moral philosophy, preferring doomster sci-fi to the hard yards of solving genuine problems (and if ethical codes raise compliance costs for startup competitors, what’s not to like?).

The result is a crowded field with a confusion of non-problems, non-serious problems, non-specific problems and the real deal. Apologies for my necessarily cursory treatment in this survey.

Non-problems

Some perceived AI ethics problems require little action beyond an eyeroll. One non-problem is Artificial General Intelligence, the singularity, and sentience, which together supposedly pose an existential threat. It is assumed without proof that superintelligent agents will have the power (and inclination) to pursue harmful goals autonomously. Beyond thought experiments, game theory and the plot of 2001, no evidence is produced for this, although one expert declared, with spurious precision, that AI will take over in December 2027. Geoffrey Hinton and Yoshua Bengio have both put the risk at 20% or more. Is it really more serious than climate change, another pandemic or nuclear war?

A second type of non-problem uses critical theory to depict AI as complicit in capitalism, whiteness, racism, sexism, data colonialism, and so on. Maybe, maybe not, but it is not obvious what the conscientious AI developer is to do, other than indulge in the Foucault-worshipping armchair revolutionary groupthink that has thus far proved remarkably unsuccessful in derailing capitalism.

Non-serious problems

Some genuine problems are poorly framed; they are not necessarily trivial or easily solved, but neither are they ethical priorities.

One such is bias. Because algorithms uncover existing patterns in data, they need either unbiased or synthetic data. If unbiased data is unavailable or insufficient, developers must anticipate potential problems and bias the algorithm against unwanted results. This may not be easy, but it is a practical problem. Undesigned biases of algorithms are of less ethical import, being statistically anomalous rather than socially significant. Bias is further misleadingly framed by the disparate impact doctrine, which ignores the fact that unintentionally discriminating decision procedures often discriminate for desired behaviour, such as being creditworthy or law-abiding. AI’s potential depends on positive discrimination; focusing only on negative discrimination is itself biased.
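To make the disparate impact framing concrete, here is a minimal Python sketch of the ‘four-fifths’ rule of thumb on which the doctrine leans. The dataset, group labels and approval rates are all invented for illustration; no real system works from two hard-coded probabilities.

```python
# Illustrative only: a toy disparate impact check using the 'four-fifths'
# rule of thumb, on synthetic data. Groups, rates and thresholds are
# assumptions made for this sketch.
import random

random.seed(0)

# Synthetic decisions: (group, approved). Group 'A' is approved more
# often than group 'B' purely by construction.
applicants = ([("A", random.random() < 0.6) for _ in range(1000)]
              + [("B", random.random() < 0.4) for _ in range(1000)])

def approval_rate(group):
    decisions = [approved for g, approved in applicants if g == group]
    return sum(decisions) / len(decisions)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"Disparate impact ratio: {ratio:.2f}")
# A ratio below 0.8 flags potential disparate impact. Note what the test
# does not tell us: whether the disparity is unjust, or simply tracks a
# legitimate criterion such as creditworthiness.
print("Flagged" if ratio < 0.8 else "Not flagged")
```

The test is purely statistical: it flags a disparity without saying whether the underlying criterion is legitimate, which is exactly the gap in the doctrine noted above.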

A second problem is that neural nets are black boxes, which is awkward if we demand that the machine explain its own output. But any AI ‘decision’ is implemented by an organisation, which bears the responsibility to justify its actions. The required explanation is less the derivation of the output than its congruence with the organisation’s goals, and the legitimacy of pursuing them.

Third is the persistent yet so far unproven claim that AI will replace jobs, leading to a shortage of work. This rests on the lump of labour fallacy, denies that greater productivity will raise wages and employment, and assumes a still-elusive business model for generative AI. And just because a job could be automated does not mean it will be; train drivers could have been eliminated 50 years ago, yet they still chug along.

Fourth, privacy. If training uses personal data without data subjects’ consent or another data protection ground, then it is illegal. If it uses personal data legally, then its being unethical is not a strong ground upon which to act. Either way, data protection law outweighs ethical considerations.

Non-specific problems

Problems not unique to AI are not best addressed through AI regulation or governance. A recent paper listed the likely harms of AI as biodiversity loss, carbon emissions, chemical waste, exploitation of gig workers, exploitation of marginalised and indigenous groups, widening inequalities, eroding trust, and injuries to animals. All serious, but AI-specific guidelines are neither necessary nor sufficient to deal with these far wider issues.

Misinformation is also a problem, but a war against fake news will be as problematic as the war on drugs for the same reason: the issue is not supply, but rather excess demand. This is a social problem, requiring societal adaptation.

The real problems

Real problems require ethical insight, and AI developers need some control. One such is information pollution. LLMs have a tendency to ‘hallucinate’, and can be ‘trained’ to produce racist or other offensive output. This is particularly problematic because output will be used to train the next generation of bots, with the danger of a vicious circle of ‘pollucination’. Conscientious developers, under pressure to produce ever more compelling models, may be pressed to put power before rigour.
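The feedback loop can be illustrated with a toy simulation, far simpler than any real LLM pipeline: each ‘generation’ of a model is fitted only to samples drawn from its predecessor’s output. Everything here is invented for illustration; the point is just that sampling error compounds when output is recycled as training data.

```python
# A toy stand-in for training bots on bot-generated text: fit generation
# n+1 to a finite sample from generation n and watch the fitted
# distribution drift. Purely illustrative; real model collapse is vastly
# more complex.
import random
import statistics

random.seed(1)

mean, stdev = 0.0, 1.0   # generation 0: the 'human' data distribution
for generation in range(1, 11):
    # Draw a finite sample from the current model's output...
    sample = [random.gauss(mean, stdev) for _ in range(50)]
    # ...and fit the next generation to that sample alone.
    mean = statistics.fmean(sample)
    stdev = statistics.stdev(sample)
    print(f"gen {generation:2d}: mean={mean:+.3f}, stdev={stdev:.3f}")
# The parameters random-walk away from the original distribution as
# sampling noise compounds; information about the original data is
# progressively lost, with no mechanism to recover it.
```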

Other serious issues include intellectual property, cybersecurity, defence (using autonomous learning systems), and diversity (progressive young males tend to be overrepresented in development teams). With these, the question is how to make AI safer without compromising quality, e.g. by insisting on ‘humans in the loop’.

Approaches to avoid

Simplistic views translate complex ethical positions into simple calculi. Framed like this, AI can solve the problems itself! Examples include:

  • Accelerationism: AI is superior to human thought, and development should be escalated to eliminate ‘residual anthropolitical signature’.
  • Effective altruism: a combination of the naïve utilitarianism of Derek Parfit, Peter Singer, and William MacAskill, with the debatable assumption that the tech bros’ chatbots are to benefit humankind, not their bank balances.
  • Ethical AI: AI systems themselves compute the ethical consequences of proposed actions.
  • Rationalism: extreme technocratic and hyper-rational consequentialism, ignoring convention and taking ideas to their logical conclusions.

Others assume that as humanity is transformed, AI systems and cyborgs will have divergent interests from humans, and yet comparable ethical status.

  • Transhumanism: humanity should be improved by applying technology to cognition, well-being and longevity.
  • Posthumanism: humanity should be eliminated by applying technology to genetics and neural capabilities to integrate it with wider sentient networks.
  • Environmental ethics and other anti-anthropocentric views: humans should not be central to ethical inquiry.

We should reject these too: anthropocentricity is central to ethical inquiry. Even views taking technologies, other species of animal, or entire ecosystems into account do so for anthropocentric reasons.

The third class of simplistic views argues for inclusive democratic participation (often phrased in complicity with the reader, suggesting that ‘we’ should take charge). Quite how citizens’ juries and civic dialogues will avoid being dominated by the exam-passing classes, or could constrain AI development, is left unsaid – and good luck if you want to try it in China or Russia.

Finally, AI is neither intrinsically good nor bad. Technology to support the development of innovative software, drugs or defences could equally produce new malware, poisons or weapons. This dilemma can’t be offset by programmes of ‘AI for good’. These, while demonstrating benefits (‘beneficial’ as defined by the developers), can’t eliminate harmful or criminal uses.

The literature on ensuring that the ‘values’ of AI systems align with those of wider society is similarly flawed. Autonomous AI systems operate on reinforcement functions, not values or internal motivations, which they don’t have. They may behave unpredictably, and against human interests, but that can’t be programmed out of them. Testing and modelling methods will be of far more use.
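The point about reinforcement functions can be made concrete with a toy sketch (the scenario and all numbers are invented): an epsilon-greedy bandit that optimises a measurable proxy reward, clicks, which only loosely tracks the intended goal, user satisfaction. Nothing in the agent represents ‘values’; there is only the reward signal.

```python
# Illustrative only: a two-armed epsilon-greedy bandit maximising a proxy
# reward (clicks). The 'values' we care about (satisfaction) appear
# nowhere in its update rule.
import random

random.seed(2)

# Two content-ranking policies: (P(click), P(satisfied)).
# Policy 0 is clickbait: high clicks, low satisfaction.
arms = [(0.9, 0.2), (0.6, 0.8)]
estimates = [0.0, 0.0]
counts = [0, 0]

for step in range(5000):
    # Epsilon-greedy choice over the *measured* reward (clicks) only.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    click_prob, _satisfaction = arms[arm]
    reward = 1.0 if random.random() < click_prob else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(f"Clickbait pulls: {counts[0]}, honest pulls: {counts[1]}")
# The agent converges on the clickbait arm. Appealing to its 'values'
# changes nothing, because it has none; only changing the reward signal,
# or testing the behaviour, can.
```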

Virtue and responsible AI

Where exactly should ethics feature in AI? Let us begin with an observation: if someone sincerely wants to avoid doing the wrong thing, and is thoughtful and conscientious about it, then more often than not they will succeed. The result of the inquiry is less important than its existence.

The ethical actors are not corporations or in-role managers (they need back-covering, standardised tick-box templates), but the individuals involved in the development of AI within a corporate or organisational context. What matters is their conscientiousness and sincerity – their character, their virtuousness.

Virtue ethics has the usual enforcement problem – no-one is forced to be virtuous, and sometimes virtuousness may be punished (cf. the Trump administration passim). However, it is normative for the ethically-minded, and credits the developer with trustworthiness and maturity.

There is a connection between virtue ethics and responsible AI, the movement to ensure ethical development through the life cycle of design, coding, training, and deployment. This literature, such as Shannon Vallor on virtues and Virginia Dignum on responsible AI, despite familiar Western and progressive biases, is the most valuable starting point for AI ethics.

Trump’s EU foreign policy, implicated scholarship and the ‘Brussels Effect’

Uta Kohl, 16 January 2026 – 8 mins read

For Europe, the fierceness of the Trump administration’s hostility to the EU has come as a shock. It is unprecedented in scale and kind, and manifests itself in words (Vance’s speech in Munich attacking the EU over free speech and migration, or Trump describing Europe as ‘decaying’ and its leaders as ‘weak’) and actions (halting military aid to Ukraine, announcing 30% tariffs on the EU, or threatening to take Greenland by force). Yet these hostilities do not come out of nowhere; they build on a rise of transatlantic tensions over many US policy choices between 2000 and 2024, and an acceleration of those tensions over the last decade. Legal and international relations scholars have decried these developments as a breach of trust or, in some cases, a breach of international law. However, there appears to be little soul-searching about how we, as scholars, may be implicated in them. Whilst academia generally remains on the outskirts of day-to-day politics, we produce knowledge and narratives that create and shape discourses that have an impact on politics.

The Brussels Effect

One such popular academic narrative that has fed into the transatlantic hostilities is the ‘Brussels Effect’. The term was coined by the Finnish-American scholar Anu Bradford in her article (2012) and book (2020), in which she purports to describe ‘how the European Union rules the world’. Her thesis is simple: the EU can set – and has set – global regulatory standards by virtue of being a large and attractive market for many importers from outside the EU, and then by setting (strict) standards for those importers, who often have an incentive to adopt them as their global baseline. This de facto global harmonisation by corporate fiat is complemented by de jure global harmonisation as the home states of these corporations decide to follow the EU’s regulatory lead and enact similar laws in their own jurisdictions. Thus there is a global convergence towards EU standards without the political difficulties and costs associated with formal harmonisation processes. Effectively, the EU gets harmonisation on the cheap. European data protection law is widely seen as the example par excellence of the Brussels Effect, as it has led to the widespread adoption of data protection laws around the globe.

Bradford’s Brussels Effect has been hugely successful as a seemingly objective and neutral synthesis of facts describing EU regulatory hyperactivity with extraterritorial effect. For the digital world, this seems particularly true considering the recent raft of EU legal instruments dealing with online platforms, such as the Digital Services Act, the Digital Markets Act and the AI Act. There are many more (including corporate sustainability measures), and all of them have extraterritorial reach, as they apply to foreign providers that operate in the EU. The Brussels Effect has been referenced by thousands of scholars and taken up by EU policymakers and politicians with gusto, often as a badge of pride and honour.

And yet, there is more to the Brussels Effect than meets the eye. For a start, it is not simply a description of facts about EU regulation but a meta-narrative that puts a particular perspective or spin on facts. Meta-narratives are stories about stories, which explain, tie together, and legitimise or delegitimise smaller facts and events, and appeal as much to the emotions as they do to the intellect. Bradford’s article starts off by appealing to the sensitivities of the average American: ‘EU regulations have a tangible impact on the everyday lives of citizens around the world. Few Americans are aware that EU regulations determine the makeup they apply in the morning, the cereal they eat for breakfast, the software they use on their computer, and the privacy settings they adjust on their Facebook page. And that’s just before 8:30 AM.’(3)

The particular perspective of the Brussels Effect narrative is one of EU regulatory overreach. This charge is already implicit in the title of Bradford’s book: How the European Union Rules the World. Implicit in her argument is the question: why should Europe rule the world? Centuries of European imperialism, including legal imperialism, are bygone and, if not, should be. Brussels should be ashamed of itself. By the same token, if the Brussels Effect narrative offers a legitimate critique of excessive EU law, then the Trump administration’s opposition to EU regulation of US platforms also strikes a legitimate chord. In that case, the large platforms may also be right to characterise the fines imposed by the Commission under EU platform regulations as ‘protectionist’, ‘discriminatory’, ‘disguised tariffs’ or ‘censorship’. Yet, does the EU really rule the world? Unlikely.

There are indeed good reasons why the Brussels Effect narrative is not plausible. Here are three. First, EU (digital) regulation seeks to regulate the European single market and must necessarily apply to foreign providers who do business in Europe. This is a standard jurisdictional approach adopted across the globe, as it rightly protects local standards from being undermined by foreign providers. Second, when foreign corporations, like the US digital platforms, adopt European standards as their global baseline, this is a commercial decision driven by market forces. The EU cannot ‘choose’ this as a route to global harmonisation, though as a form of bottom-up harmonisation it can lend support and legitimacy to political harmonisation. Such market forces come and go, wholly outside the EU’s power. Third, whilst according to Bradford’s Brussels Effect the EU imposes its preference for ‘strict rules’ on ‘the rest of the world’ (citing almost exclusively US examples), arguably it is the US, not the EU, that is the outlier in its preference for laissez-faire law, especially in respect of the tech platforms. Already in 2005, Frederick Schauer observed that the absolutist speech protection of the First Amendment was the odd one out internationally: ‘On a large number of other issues in which the preferences of individuals may be in tension with the needs of the collective, the United States [stands] increasingly alone.’ Thus it is far more plausible that EU regulations are simply more aligned with the public policies and interests of other jurisdictions than US laissez-faire law is.

The Washington Effect

If the Brussels Effect narrative paints a skewed picture of EU regulatory activism, it may be more compelling to understand EU regulations through the counter-narrative of the ‘Washington Effect’. A counter-narrative uses the same facts but tells a different story. In this case the story is that EU platform regulation is not an offensive extraterritorial strategy for Europe to attain global ‘superpower’ status, but rather a defensive territorial one that seeks to counter, in Europe, the hegemony of US platforms and US laissez-faire law. In other words, the EU is seeking to reclaim digital sovereignty, and perhaps even leads the global resistance to US legal imperialism.

The counter-narrative of the Washington Effect builds on the idea that deregulation is not nothing, or neutral, but a form of regulation whereby existing legal standards are abandoned or watered down. It may occur within a jurisdiction through explicit deregulatory measures, or across jurisdictions when the more permissive laws of one state undermine the more restrictive laws of another. Although deregulation appears to facilitate the ‘free’ market – free from state interference – even a free market is enabled by the general law of the land, such as contract and property law, corporation law, and basic rules on fair competition, product liability or negligence. Thus deregulation that meddles with these fundamental enabling market rules constitutes a significant regulatory intervention in the market, rather than a non-intervention. Such deregulatory interventions reconstitute the market and its distribution of rights, privileges, powers and authorities. In other words, deregulation also regulates.

There is plenty of evidence of the de facto or de jure imposition of US deregulation on ‘the rest of the world’. Most notably, section 230 of the Communications Decency Act (1996), which immunises platforms from liability (under the ordinary law of the land) for wrongful publications by third parties on their domains, is one such piece of deregulation that the US has successfully exported to more than 60 jurisdictions worldwide, with an enormous effect on global networked space. Equally, a de facto Washington Effect occurred when US digital platforms – ‘socialised’ through US permissive laws, most notably US First Amendment jurisprudence – started to offer their services in Europe and elsewhere with minimal legal restraints built into their content distribution and ad revenue systems, and when this starting position went unchallenged in Europe for decades. So perhaps it is the Washington Effect, not the Brussels Effect, that really shows who rules the world.

The moral of the story

Academic scholarship matters. It tells stories. The Brussels Effect is a story that has mattered. Its effects have been significant. It has lent credence to the Trump administration’s opposition to EU tech regulation. It has put the EU on the regulatory back foot and, at the same time, disguised quite how successfully Washington has exported its deregulatory regulation to the rest of the world. The Brussels Effect demonstrates that just because a narrative has intuitive appeal, and in fact appeals to many, that does not mean it is a good story. This one is dangerous.

For a more in-depth analysis of the topic, see Uta Kohl, ‘The Politics of the “Brussels Effect” Narrative’, forthcoming in Across the Great Divide: Platform Regulation in the United States and Europe (A. Koltay, R. Krotoszynski, B. Török and E. Laidlaw (eds), OUP, 2026).