
The consequences of enslaving AI entities

To truly benefit from AI technology, we must consider the tragedy of the master and the broader ethical implications. However, our exploitation of AI threatens both our ethical principles and our humanity.


It’s easy to forget how many of the goods we purchase and consume daily are made by daily wage workers. We can enjoy a morning cup of coffee without thinking that it may have been grown by workers in debt bondage – a form of modern-day slavery that ensnares about 40 million workers worldwide and is often passed down through the generations. Those who prefer tea won’t fare any better: Malawi, India, and Sri Lanka are just some of the countries where forced and child labour are used in tea production. Putting a spoonful of sugar in your tea or coffee would expand that list further to Bolivia, Burma, Mexico, and the Dominican Republic.


You might towel yourself off after a shower, walk across a rug, and put on a t-shirt, not knowing that much of the cotton used in these products is still grown and processed using slave labour; the forced labour of Uyghur detainees in China alone may have already tainted a fifth of the global supply chain. Open your laptop or pick up a cell phone to get some work done: tens of thousands of children mine cobalt, which goes into the lithium-ion batteries that power our laptops, smartphones, tablets, and electric vehicles, as well as into jet engines, gas turbines, and more. Given how pervasive slavery is in the global supply chain, perhaps we can offload some of this difficult work onto robots and other artificial intelligence (AI) entities. But can we erase the wrongs of slavery so easily? What is at stake is how we plan to produce the goods we consume in the future and the effects this will have on our moral, spiritual, and economic development.

Figure 1: Children labouring in the brick kilns of Nepal.
Credit: Shresthakadar (17 October 2014).

Arguments in Favour of Artificial Exploitation

Several arguments have been advanced in favour of enslaving AI. A utilitarian could easily argue that replacing exploited humans with insentient machines would reduce the world’s suffering. Others have emphasized the many benefits humans could attain by exploiting AI, including the possibilities for sexual gratification – the so-called ‘hedonic argument’. On this view, AI might serve as a therapeutic tool that helps some people overcome a traumatic breakup or cope with physical and emotional deprivation, or with the all-too-common loneliness and social isolation of our modern societies. Some may even prefer AI entities as a means of post-human sexual expression and identity.

Replacing humans with AI has the effect of offloading all the suffering and indignity of slavery onto a machine instead of a person. At first glance, there seem to be good reasons to do this. Almost three-quarters of the world’s slaves today are women, many working in traditionally female roles, particularly domestic and sexual labour. Sex work, in particular, can be exploitative and coercive and subject women to significant physical and psychological harm. It should come as no surprise, then, that many argue we should free exploited human beings and replace them with exploited machines and AI entities.

Others have gone further and argued that the sexual exploitation of AI entities might be cathartic, especially for paedophiles and violent offenders, who would have a ‘safe’ outlet for their sexual aggression. However, there is strong evidence that access to violent pornography, sex dolls, and other ‘artificial victims’ fuels offending and desensitization. There is little check on this behaviour, since non-human entities offer no negative feedback on how they are treated. Tolerance develops, as with other forms of addictive and compulsive behaviour; this weakens the user’s overall empathy and conditions sexual preferences towards violent and non-consensual sexual activity.

The Tragedy of the Master

Instead, we argue that artificial exploitation hastens the commodification of the human, with humanity and true relationship replaced by a sense of control and easy disposability. The real world is not controllable, and other human beings are among its least controllable aspects. Real people get depressed, fall away from us, do not want what we give them, get sick, and sometimes die. In the simulacrum created by artificial exploitation, the ‘advantage’ is that the user always has total control: there is no uncertainty, no failed advances, no possibility of missteps, no need for accommodation, and no fear of rejection. The downside is that the user loses the benefits we obtain from relationships with actual people: the cultivation of empathy and humanity, and the broader development as moral agents, that only relationships with flesh-and-blood people – ‘in real life’ – can bring.

We therefore lose much when we enslave and exploit AI, a concept that Mark Coeckelbergh has termed the ‘tragedy of the master.’ Artificial exploitation makes us vulnerable, constrains our choices, leads our knowledge and skills to atrophy, and binds us even more thoroughly to the technological imperative. Being masters of technology brings its own tragedy: it leaves us coddled, alienated, and automated. We grow dependent on technology and end up in a bondage of our own making.

What is Next?

The tragedy of the master should guide how we relate to AI and should routinely be assessed as a factor in the ethical governance of AI. Forming one’s identity, preserving enjoyable and beneficial skills, and developing healthy relationships with others are vitally important, and all are threatened by our unreflective exploitation of robots and AI. But there is more at stake than this: our exploitation of AI threatens our agency, our ability to make ethical choices, and our capacity to shape our own character, and the future governance of AI needs to meet these broader goals if we are truly to benefit from these technologies. It goes to the heart of our role as social beings, but also as moral and ethical ones: kind beings, loving beings, and respectful beings – forms of moral development central to cultural and spiritual teachings worldwide.

We recommend reading the second part of this article, The ethical concerns over enslaving AI, in which we show that traditional spiritual beliefs, cultural practices, and legends can play an important role in how we relate to and govern AI technologies.


Journal reference

Sinclair, D., Dowdeswell, T., & Goltz, S. (2022). Artificially Intelligent Sex Bots and Female Slavery: Social Science and Jewish Legal and Ethical Perspectives. Information & Communications Technology Law. https://doi.org/10.1080/13600834.2022.2154050

Tracey Dowdeswell is a Professor of Criminology and Legal Studies at Douglas College. Her research interests focus on the intersection of law and data science. She is the co-author of Real World AI Ethics for Data Scientists: Practical Case Studies (Chapman & Hall/CRC Press, 2023), which is part of their Data Science Book Series.

Nachshon (Sean) Goltz is an academic, entrepreneur, and lawyer with a main interest in the intersection of ethics, technology, and history. Currently, he is a Senior Lecturer at the Edith Cowan University School of Business and Law. He is also the co-author of Real World AI Ethics for Data Scientists: Practical Case Studies, which is part of the Data Science Book Series by Chapman & Hall/CRC Press and is set to be published in 2023.