Kieron O'Hara, 29 January 2026 – 8 mins read
Emeritus fellow, University of Southampton, kmoh@soton.ac.uk
AI research has been energised since the unveilings of AlphaGo in 2016 and ChatGPT in 2022, systems whose capabilities went well beyond public and even expert expectations. It has also acquired a chaperone: a growing cottage industry of AI ethics to describe, diagnose, and ultimately remedy its perceived potential harms.
The Silicon Valley credo "move fast and break things" is obviously ethically flawed (especially when the "things" are people), but potential problems don't usually spawn sub-disciplines; there are no ethics of differential equations, printing or hammers. There is no legal demand to be ethical, and no-one can force you to be ethical, so there is a limit to the number of harms ethics can prevent. That is not to say that AI development has no ethical dimension; of course it does, and I shall sketch it at the end of this blog.
The cottage industry has emerged from two sets of incentives. Ethicists like advisory committees (upon which they might expect to sit). A lovely example is Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, who prescribe "the leadership of a small group of respected figures from the highest levels of government, business, and academia" to ensure the US "remains intellectually and strategically competitive in AI" and "raise awareness of the cultural implications AI produces". I wonder who the former Secretary of State, the former CEO of Google and the MIT computer scientist have in mind? Meanwhile, tech developers relish applying Silicon Valley methods to moral philosophy, preferring doomster sci-fi to the hard yards of solving genuine problems (and if ethical codes raise compliance costs for startup competitors, what's not to like?).
The result is a crowded field with a confusion of non-problems, non-serious problems, non-specific problems and the real deal. Apologies for my necessarily cursory treatment in this survey.
Non-problems
Some perceived AI ethics problems require little action beyond an eyeroll. One non-problem is Artificial General Intelligence, the singularity, and sentience, which together supposedly pose an existential threat. It is assumed without proof that superintelligent agents will have the power (and inclination) to pursue harmful goals autonomously. Barring thought experiments, game theory and the plot of 2001: A Space Odyssey, no evidence is produced for this, although one expert declared, with spurious precision, that AI will take over in December 2027. Both Geoffrey Hinton and Yoshua Bengio claim the risk is 20% or more. Is it really more serious than climate change, another pandemic or nuclear war?
A second type of non-problem uses critical theory to depict AI as complicit in capitalism, whiteness, racism, sexism, data colonialism, and so on. Maybe, maybe not, but it is not obvious what the conscientious AI developer is to do, other than indulge in the Foucault-worshipping armchair revolutionary groupthink that has thus far proved remarkably unsuccessful in derailing capitalism.
Non-serious problems
Some genuine problems may be poorly framed: not necessarily trivial or easily solved, but not ethical priorities either.
One such is bias. Because algorithms uncover existing patterns in data, they need either unbiased or synthetic data. If unbiased data is unavailable or insufficient, developers must anticipate potential problems and bias the algorithm against unwanted results. This may not be easy, but it's a practical problem. Undesigned biases of algorithms are of less ethical import, being statistically anomalous rather than socially significant. Bias is additionally misleadingly framed by the disparate impact doctrine, which ignores that unintentionally discriminating decision procedures often discriminate for desired behaviour, such as being creditworthy or law-abiding. AI's potential depends on positive discrimination; focusing only on negative discrimination is itself biased.
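To make "biasing the algorithm against unwanted results" concrete, here is a minimal sketch, in Python, of one common practical remedy: reweighting a skewed training set so that no group-and-outcome combination dominates what a model learns. The groups, labels and weighting rule are invented for illustration; real mitigation would depend on the data and on the notion of fairness being applied.

```python
# Hypothetical illustration of one practical bias mitigation: reweight training
# examples so every (group, label) combination carries equal total weight,
# counteracting a skewed sample. Data and weighting scheme are invented.

from collections import Counter

def reweight(examples):
    """Return one weight per example so each (group, label) cell contributes
    equally to training, regardless of how over- or under-sampled it is."""
    cells = Counter((ex["group"], ex["label"]) for ex in examples)
    n_cells = len(cells)
    total = len(examples)
    # Weight = expected cell size / actual cell size
    return [total / (n_cells * cells[(ex["group"], ex["label"])]) for ex in examples]

# Toy data: group B is under-represented among positive ("creditworthy") labels.
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

for ex, w in zip(data, reweight(data)):
    print(ex, round(w, 2))
```

Running it prints a weight for each example: under-represented combinations (here, positive cases from group B) get weights above 1, over-represented ones below. The hard part, as the text says, is deciding which imbalances to correct, not the correction itself.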
A second problem is that neural nets are black boxes, which is difficult if we demand that the machine explain its own output. But any AI "decision" is implemented by an organisation, which has a responsibility to justify its actions. The required explanation is less the derivation of the output than its congruence with the organisation's goals, and the legitimacy of pursuing them.
Third is the persistent yet so far unproven claim that AI will replace jobs, leading to a shortage of work. This rests on the lump of labour fallacy, denies that greater productivity will raise wages and employment, and assumes an elusive business model for generative AI. And just because a job could be taken does not mean it will be; train drivers could have been eliminated 50 years ago, but they still chug along.
Fourth, privacy: but if training uses personal data without data subjects' consent or other data protection grounds, then it is illegal. If it uses personal data legally, then its being unethical is not a strong ground upon which to act. Either way, data protection law outweighs ethical considerations.
Non-specific problems
Problems not unique to AI are not best addressed through AI regulation or governance. A recent paper listed the likely harms of AI as biodiversity loss, carbon emissions, chemical waste, exploitation of gig workers, exploitation of marginalised and indigenous groups, widening inequalities, eroding trust, and injuries to animals. All serious, but AI-specific guidelines are neither necessary nor sufficient to deal with these far wider issues.
Misinformation is also a problem, but a war against fake news will be as problematic as the war on drugs for the same reason: the issue is not supply, but rather excess demand. This is a social problem, requiring societal adaptation.
The real problems
Real problems are those that require ethical insight and over which AI developers have some control. One such is information pollution. LLMs have a tendency to "hallucinate", and can be "trained" to produce racist or other offensive output. This is particularly problematic because output will be used to train the next generation of bots, with the danger of a vicious circle of "pollucination". Conscientious developers, under pressure to produce ever more compelling models, may be urged to put power before rigour.
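To illustrate why the circle is vicious rather than merely unfortunate, here is a toy, back-of-the-envelope simulation: each generation is trained on a corpus partly made of the previous generation's output, so its errors accumulate on top of the human baseline. The rates and the mixing rule are invented for illustration and estimate nothing about any real system.

```python
# Toy model of "pollucination": each generation trains on a corpus in which a
# share is the previous model's output, inheriting that pollution and adding
# fresh mistakes. All numbers below are invented for illustration only.

def pollution_over_generations(generations=6, base_error=0.05,
                               synthetic_share=0.5, new_error=0.05):
    """Print the rough fraction of polluted content each model generation emits."""
    model_error = base_error
    for g in range(1, generations + 1):
        # Training corpus mixes human data (at base_error) with model output.
        corpus_error = (1 - synthetic_share) * base_error + synthetic_share * model_error
        # The next model inherits the corpus's pollution plus its own new errors.
        model_error = min(1.0, corpus_error + new_error)
        print(f"generation {g}: ~{model_error:.0%} of output polluted")

pollution_over_generations()
```

Even with these generous assumptions the polluted share drifts well above the human baseline, and if the synthetic share of the corpus also grows over time the drift gets worse; that is the dynamic conscientious developers are under pressure to ignore.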
Other serious issues include intellectual property, cybersecurity, defence (using autonomous learning systems), and diversity (progressive young males tend to be overrepresented in development teams). With these, the question is how to make AI safer without compromising quality, e.g. by insisting on "humans in the loop".
Approaches to avoid
Simplistic views translate complex ethical positions into simple calculi. Framed like this, AI can solve the problems itself! Examples include:
- Accelerationism: AI is superior to human thought, and development should be escalated to eliminate "residual anthropolitical signature".
- Effective altruism: a combination of the naïve utilitarianism of Derek Parfit, Peter Singer, and William MacAskill, with the debatable assumption that the tech bros' chatbots are to benefit humankind, not their bank balances.
- Ethical AI: AI systems themselves compute the ethical consequences of proposed actions.
- Rationalism: extreme technocratic and hyper-rational consequentialism, ignoring convention and taking ideas to their logical conclusions.
A second class of views assumes that, as humanity is transformed, AI systems and cyborgs will have interests divergent from those of humans, yet comparable ethical status:
- Transhumanism: humanity should be improved by applying technology to cognition, well-being and longevity.
- Posthumanism: humanity should be eliminated by applying technology to genetics and neural capabilities to integrate it with wider sentient networks.
- Environmental ethics and other anti-anthropocentric views: humans should not be central to ethical inquiry.
We should reject these too: anthropocentricity is central to ethical inquiry. Even views taking technologies, other species of animal, or entire ecosystems into account do so for anthropocentric reasons.
The third class of simplistic views argues for inclusive democratic participation (often phrased in complicity with the reader, suggesting that "we" should take charge). Quite how citizens' juries and civic dialogues will avoid being dominated by the exam-passing classes, or could constrain AI development, is left unsaid – and good luck if you want to try it in China or Russia.
Finally, AI is neither intrinsically good nor bad. Technology to support the development of innovative software, drugs or defences could equally produce new malware, poisons or weapons. This dilemma can't be offset by programmes of "AI for good". These, while demonstrating benefits ("beneficial" defined by developers), can't eliminate harmful or criminal uses.
The literature on ensuring that the "values" of AI systems align with those of wider society is similarly flawed. Autonomous AI systems operate on reinforcement functions, not values or internal motivations, which they don't have. They may behave unpredictably, and against human interests, but that can't be programmed out of them. Testing and modelling methods will be of far more use.
Virtue and responsible AI
Where exactly should ethics feature in AI? Let us begin with an observation: if someone sincerely wants to avoid doing the wrong thing, and is thoughtful and conscientious about it, then more often than not they will succeed. The result of the inquiry is less important than its existence.
The ethical actor is not the corporation or the in-role manager (they need back-covering, standardised tick-box templates), but the individuals involved in the development of AI within a corporate or organisational context. What matters is their conscientiousness and sincerity: their character, their virtuousness.
Virtue ethics has the usual enforcement problem: no-one is forced to be virtuous, and sometimes virtuousness may be punished (cf. the Trump administration passim). However, it is normative for the ethically-minded, and credits the developer with trustworthiness and maturity.
There is a connection between virtue ethics and responsible AI, the movement to ensure ethical development through the life cycle of design, coding, training, and deployment. This literature, such as Vallor on virtues and Dignum on responsible AI, despite familiar Western and progressive biases, is the most valuable starting point for AI ethics.