Following a spate of highly publicized scandals in 2016–2017, the US Big Tech companies that lead artificial intelligence (AI) research suddenly developed a pronounced interest in the ethics of AI. By 2019, nearly every Big Tech company, and many smaller ones, had released a set of ethical AI principles. As of early 2020, there were 167 AI ethics guidelines documents worldwide.
While the terms used vary (it might be responsible, trustworthy, or human-centered AI), the sentiment is generally the same: AI can be rendered ethical by adhering to a particular set of practices in production and deployment. Microsoft states that it is “committed to the advancement of AI driven by ethical principles that put people first”, while IBM describes itself as “build[ing] AI systems in an ethical manner to benefit society as a whole”. Google even claims that “values-based AI is good for your business”.
However, given that these are capitalist organizations and therefore have as their primary goal the generation of profits and the accumulation of capital, it is easy to appreciate the view of critics such as the philosopher Thomas Metzinger, who have argued that AI ethics amounts to little more than cynical public relations. Making a comparison to environmental greenwashing, Metzinger calls AI ethics mere ethics washing. This is certainly true. But it’s not the whole story.
AI ethics also serves the economic exigencies of data-intensive capital more directly, by functioning as what economist Cecilia Rikap calls a subordinated innovation network. This is a dispersed network of research-producing individuals over which Big Tech wields indirect control, appropriating its output and directing it towards ends that will advance Big Tech’s commodity production, circulation, and other business processes. In short, AI ethics is about neither AI nor ethics, but rather the accumulation of capital.
Ethics is a vast and varied field with an ancient history. It’s thus fair to ask: what ethical system does ethical AI refer to? This question is never answered. However, the space of possible answers is delimited by the capitalist context of the AI industry. Capital is an amoral system; it is defined as commodity production for market sale by private owners of the means of production. As Karl Marx quipped, the only “moral imperative” that capital has is “to produce as much surplus-value as possible”. Perhaps unexpectedly, one does not have to look to a communist for such a critique.
At the other end of the political spectrum, the champion of neoliberalism, Milton Friedman, railed against the notion of corporate responsibility, arguing that if such a term “is not pure rhetoric, it must mean that [the businessman] is to act in some way that is not in the interest of his employers”. The only social responsibility a business can have, he argues, is to “increase its profits”. Contrary to the evaluations of both Marx and Friedman, industry-led AI ethics claims to achieve the impossible: a resolution of the contradictory terms capital and ethics.
This is perhaps why a survey of 211 firms found that “AI ethics guidelines have not had a notable impact on practice”. I contend that AI ethics not only cannot serve to make the AI industry more ethical, but that it is not intended to. It serves another function for Big Tech. To discern it, we need to think of AI ethics as work. Whatever else AI ethics might be, it is, for most people involved with it, part of a job.
Since the 1970s, the production of commodities has been radically fragmented across the world into global value chains, with each moment of production located wherever the requisite commodities, including labour, are cheapest. While the bulk of labour is performed in poorer regions, the largest share of value accrues to the richest regions.
Rikap argues that Big Tech similarly outsources innovation by creating innovation networks consisting of smaller companies, research labs, and universities that work on research and development. Such organizations are dubbed partners, but they lack the power to influence the agenda of Big Tech. Thus, Rikap argues, they exist in a relation of “subordination”: while they contribute to innovation processes, the outputs of these “are mostly transformed into intangible assets by the intellectual monopoly”.
My contention is that the whole AI ethics phenomenon is best understood as a subordinated innovation network. It is a diffuse network, composed of some individuals who work at Big Tech and many more who work in startup companies, universities, research labs, NGOs, and even governments. They are subordinated insofar as they have no power to influence Big Tech, which can thus define the agenda for AI research.
The insoluble contradiction between capital and ethics sets the agenda. Ethics can take many forms, but capitalist firms cannot cease being capitalist without ceasing to exist. In the context of Big Tech’s domination of AI, AI ethics must ignore its capitalist context or take it as given. All compromises must be made in favour of capital and to the diminishment of ethics.
The subordinated innovation network of AI ethics thus produces research that merely helps Big Tech plow their invasive business models into more and more spheres of life, driving what Shoshana Zuboff calls the dispossession cycle of surveillance capitalism. This is a business strategy in which the data useful for training AI is collected through unsavory practices and papered over by superficial adaptations that leave core business processes intact, with new sectors of life subject to incessant surveillance. AI ethics encourages the generation of such “useful” innovations while avoiding any serious questioning of the profit-seeking foundations on which AI is built.
The simple fact is that capitalism can only accept ethical frameworks that do not contradict its imperative to accumulate. AI ethics employs (at best) a trivial sense of ethics that is blind to the illogical pathologies of capitalist production. A serious effort to render AI beneficial to the world needs to begin by acknowledging that capitalist production just might impinge on optimal ways of developing and deploying AI.
How to get started with such an endeavour? The first step would be for researchers outside the industry to refuse funding from the industry. While the mechanisms of subordination also work in other ways, this most tangible one can easily be avoided. This, however, is likely to mean working with fewer resources, less prestige, and fewer prospects for advancement. Operating with such limited means is certainly more difficult than telling machine learning engineers how to be more ethical. But at least it squarely confronts our current situation, in which the possibilities for our technological future are limited to those futures that can generate profits for a handful of corporations.
Steinhoff, J. (2023). AI ethics as subordinated innovation network. AI & SOCIETY, 1-13. https://doi.org/10.1080/07294360.2023.2174083