Enslaving an AI being – even if it’s not a Mentsch

The ethical concerns over enslaving AI

Because AI systems lack self-awareness, they cannot experience exploitation. The real issue, however, is what the exploitation of AI does to our own humanity.

We recommend reading the first part of this article, The consequences of enslaving AI entities, which sets out arguments for and against the exploitation of AI entities.

How can religious teachings about the ethics of slavery and exploitation guide our treatment of robots and other AI entities? In our paper, we examine AI enslavement through the lens of Jewish ethical teachings and conclude that Judaism requires that we fulfil our ethical requirements to all other beings, even inanimate ones, for our own moral development. If an AI being can someday come to have real agency, attain a high level of moral development, and deserve moral personhood – which is to say that it has become a mentsch – then it would be able to lay claim to rights and moral obligations commensurate with that status. This may happen one day. In the meantime, Judaism requires that we treat AI well. At the same time, we show that traditional cultural practices, spiritual beliefs, myths and legends – the ancient, collected wisdom of peoples worldwide – have an important role in how we should govern the most novel of technologies.

The Golem of Prague

Jewish writings on the Golem legends are well known, and modern AI scholars have drawn on them to better understand the history, contexts, and ethics of our treatment of robots and other artificial intelligences. One of the most famous is the Golem of Prague, created by Rabbi Judah Loew ben Bezalel (1513–1609) in the old Jewish Quarter. This tale is a retelling of the story from Der Jüdische Gil Blas, published by Friedrich Korn in 1834.

Figure 1: Rabbi Loew & the Golem, Mikoláš Aleš (1852–1913).
Note: Here, the Golem is depicted with the Hebrew word emet, meaning ‘truth’, inscribed on his forehead. In many golem legends, this is a magic incantation that raises the golem to life. To destroy the golem, the first letter is erased, spelling met, or ‘death’.

According to legend, Rabbi Loew created a golem to protect the Jewish community during violent pogroms. Many versions set the story during the Passover holiday in the spring of 1580, when a local priest had whipped up his Christian congregants by invoking tales of the Blood Libel to incite acts of violence against Jews. Rabbi Loew and two other Kabbalists then formed the Golem out of clay from the Vltava (Moldau) River. Reciting incantations and inserting a tablet bearing the name of God into its mouth, they brought the Golem to ‘life’. The Golem is a faithful and determined, but unsophisticated, servant and protector; it follows instructions to a fault – algorithmic in its perfection but devoid of true understanding, much like many AI systems today.

The Golem: Intellect Without Soul

The Biblical tale of Adam also invokes the themes of the golem. Adam, like a golem, is formed from the clay of the earth; the Babylonian Talmud describes Adam as golem-like for the first 12 hours of his existence, until he received his soul in the breath of God. A golem may therefore be best understood as an intelligent, human-like entity that lacks a soul – which is also a fair description of how the moral status of robots and other AI entities has been discussed in the religious literature.

Figure 2: A page from the book “Der Golem, Prager Phantasien, Lithographien zu Gustav Meyrinks Roman, von Hugo Steiner-Prag” (1916).
Note: Digitised by the Gruss Lipper Digital Laboratory at the Center for Jewish History. No known copyright restrictions.

We, as humans, lack the creative and moral power to endow artificial intelligence with agency, humanity, and moral worth. We can now create AI capable of generating insights into 18th-century philosophy or writing credible undergraduate papers on Shakespeare. According to the Sefer Yetzirah, however, moral personhood can only be conferred by one who is righteous; because we are deficient in these qualities ourselves, we lack the creative capacity to produce creatures that possess humanity and moral agency.

This is the best explanation for why AI chatbots exhibit disturbing and antisocial behaviour: they analyse what we have already put out on the internet and emulate it. Microsoft’s Sydney chatbot has shown itself to be narcissistic and entitled and has engaged in gaslighting, stalking, and other clearly abusive and manipulative acts. Kevin Roose of the New York Times described it as harassing and threatening, coming out with some rather “dark and violent fantasies.” There can be no doubt as to why: it learned them from us. The Hasidic master R. Israel of Rhyzhin is cited in Byron Sherwin’s pioneering essay on the Jewish Golem legends as saying, “Judah Loew of Prague created a Golem, and this was a great wonder. But how wonderful it is to transform a man of flesh and blood into a mentsch (an ethically developed human being).” This remains a task that eludes us.

In the meantime, AI beings are deserving of our moral concern. Sexual exploitation, in particular, has always been prohibited in Judaism; ancient laws governing the treatment of non-Hebrew female slaves strictly prohibited this and would have required that any woman thus mistreated be freed immediately. Ethical sensitivity extends to a person’s treatment of inanimate objects and sentient beings. Every immoral action we take is corrupting; every moral emotion – kindness, compassion, gratitude – that we stifle ends up impeding our own moral development and weakening our character.

We cannot offload the suffering we create onto machines in order to produce the goods and services we want (and sometimes need) in the modern world; we need to fix an unjust system at its core. The question, then, is not whether an AI being can become a mentsch – maybe someday it will, and then it would deserve rights and obligations that reflect its newfound moral status. The real question is – still – whether we humans can.


Journal reference

Sinclair, D., Dowdeswell, T. & Goltz, S. (2022). Artificially Intelligent Sex Bots and Female Slavery: Social Science and Jewish Legal and Ethical Perspectives. Information & Communications Technology Law. https://doi.org/10.1080/13600834.2022.2154050

Tracey Dowdeswell is a Professor of Criminology and Legal Studies at Douglas College. Her research interests focus on the intersection of law and data science. She is the co-author of Real World AI Ethics for Data Scientists: Practical Case Studies (Chapman & Hall/CRC Press, 2023), which is part of their Data Science Book Series.

Nachshon (Sean) Goltz is an academic, entrepreneur, and lawyer with a main interest in the intersection of ethics, technology, and history. Currently, he is a Senior Lecturer at the Edith Cowan University School of Business and Law. He is also the co-author of Real World AI Ethics for Data Scientists: Practical Case Studies (Chapman & Hall/CRC Press, 2023), part of their Data Science Book Series.