A psychological analysis of the moral dilemma of harming AI

Research suggests that we attribute human-like characteristics to AI systems, but do we truly regard AIs as moral beings? Evidence indicates otherwise.

As artificial intelligence grows in capability and influence, examining its ethical dimensions becomes increasingly important. Because the moral status of AI systems may hinge on ordinary people’s attitudes toward AI, it is essential to examine the experimental psychological literature on whether and how people treat AIs as moral entities.

Research suggests that people view real-world AIs as ‘moral patients’: entities with a moral status that obliges human beings to observe moral duties toward them, including the duty not to intentionally cause them harm. Philosophers typically consider the ability to feel pain a key criterion for moral patiency, and current AI lacks this ability. This raises the question of how to reconcile the gap between public perceptions and philosophical principles regarding the moral standing of AI.

People’s ethical perspectives on AI carry philosophical weight when those perspectives stem from rational judgments. To understand why the public attributes moral patiency to AI, it is therefore essential to analyse empirical studies from psychology.

Affective empathetic responses to simulated pain

In one experiment, researchers observed participants’ psychological and physiological reactions as they administered simulated electric shocks to a female virtual character. Despite understanding that neither the character nor the shocks were real, participants exhibited subjective, behavioural, and physiological responses similar to those observed in cases of human suffering.

This experiment indicates that people perceive simulated harm inflicted on non-living entities, such as AIs, as if the harm were real, regardless of the entities’ capacity to experience pain. Specifically, this perception is characterised by participants’ affective empathetic responses: a shared experience of the virtual character’s purported pain.

Moral disturbance involves affective empathy

These affective empathetic responses are linked to moral disturbance, as observed in an experiment involving a small robot dinosaur designed to mimic emotional responses to various stimuli. When participants watched a video of someone punching and choking the robot, they responded by stating that the person should stop harassing it. Researchers found that this moral disturbance was accompanied by affective empathetic responses: participants exhibited negative psychological and physiological reactions while watching the video.


As a result, people’s fallacious reasoning does not determine whether it is morally wrong to harm AI. In other words, philosophers cannot expand the proper extension of moral terms to include AI solely based on ordinary people’s apparent consideration of AI as a moral patient.

Hyungrae Noh

Attribution of moral patiency mediated by affective empathy

An experiment suggests that people are more likely to view AIs as moral patients when they affectively empathise with the AIs’ perceived suffering. Participants were divided into two groups: one read passages about a social robot designed for companionship and social support, and the other read passages about an economic robot designed for financial gain and corporate profit. When asked about the morality of harming these robots, the two groups’ responses differed significantly.

Participants in the social robot group formed a deeper affective empathetic connection with their robot than those in the economic robot group, as evidenced by the significant psychological disturbance they experienced when presented with descriptions of harm to their robot. The social robot group also expressed greater moral concern about harm to their robot than the economic robot group did.

These findings indicate that people attribute moral patiency to AI based on their emotional engagement with the AI’s perceived suffering. Affective empathy is therefore a significant factor in whether individuals view AIs as moral patients.


People do not rely on affective empathy when considering human beings as moral patients. An experiment on children’s moral judgment revealed that even children whose affective empathy is not yet fully developed (such as feeling distressed upon detecting another’s pain) can still regard other humans as moral patients. This suggests that individuals can recognise harmful actions towards humans as wrong without sharing in the pain the harm inflicts. The distinctive role of affective empathy in attributing moral patiency to AI thus highlights a difference in how people perceive the moral status of AI compared with that of humans.

Fallacious reasoning underlying people’s attribution of moral patiency to AI

When people use terms related to moral patiency in response to AI harm, stating, for example, “It is morally wrong to torture this robot,” their treatment of AIs as moral patients is more influenced by their affective empathetic reactions than by a rational evaluation of the AIs’ behaviour. This casts doubt on the validity of the lay attribution of moral patiency to AI, suggesting it may stem from fallacious reasoning.

The foundation of moral judgments ought to be universally acceptable; however, empathetic feelings do not meet this criterion. It is not inherently wrong for one person to fail to share in another’s sadness, which illustrates the subjective nature of empathy as a basis for moral judgments. Consequently, while people seemingly believe that harming AI is morally impermissible, they fail to provide a rational justification for this belief: if they did not affectively empathise with the AI’s purported pain, they would not consider the AI a moral patient.


Journal reference

Noh, H. (2023). Interpreting ordinary uses of psychological and moral terms in the AI domain. Synthese, 201, 209. https://doi.org/10.1007/s11229-023-04194-3

Hyungrae Noh is an assistant professor in the Department of Philosophy at Sunchon National University, South Korea. His research journey is founded on the conviction that philosophers of mind, like himself, should actively engage with the cognitive sciences, embracing the latest findings and methodologies. His primary contribution to this interdisciplinary field involves evaluating philosophical theories through empirical data. This includes, but is not limited to, analysing findings from psychophysics and microbiology that suggest phenomenal consciousness may be as illusory as magic tricks; examining neuroscientific discoveries that challenge the diagnostic relevance of the concept of phenomenal consciousness; and arguing that philosophers must critically assess ordinary language use, as psychological experiments often reveal such usage to be misleading.