Application of normative theories and human rights principles to AI

Can machines prioritise ethics and human values? A critical examination of the impact of artificial intelligence on society and individual rights.

How can AI be programmed to make ethical, feminist, and human rights-based decisions? Would an AI be willing to sacrifice one life to save a hundred? These are questions that naturally arise in a layperson's mind. An ethical dilemma occurs whenever an AI machine is forced to choose between two evils. The normative theories of prominent philosophers may offer answers to such dilemmas. Normative theories are appealingly simple: they hold that reason alone can distinguish right from wrong, without requiring extensive knowledge of philosophy or religion.

Consequentialist theories

The philosophy of utilitarianism is best summarised by Bentham's famous remark, "The greatest happiness for the greatest number of people". The theory is consequentialist: it judges an action by its consequences for everyone affected. John Stuart Mill argues that actions are right insofar as they create the maximum utility for all those affected. AI machines could therefore use utilitarianism to maximise happiness for the greatest number of people. An AI vehicle programmed with utilitarianism would, when forced to choose between two evils, prioritise saving the greatest number of human lives.
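
As a minimal sketch of how such a rule might look in code, assuming a hypothetical model in which each candidate manoeuvre carries an estimated casualty count (the names and figures below are illustrative, not drawn from the article):

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action and its estimated human cost (hypothetical model)."""
    name: str
    expected_casualties: float  # estimated lives lost if this action is taken

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Pick the action that minimises expected loss of life, i.e. the
    utilitarian rule of maximising welfare for the greatest number."""
    return min(options, key=lambda m: m.expected_casualties)

# A hypothetical trolley-style scenario for an autonomous vehicle:
options = [
    Maneuver("stay in lane", expected_casualties=3.0),
    Maneuver("swerve onto pavement", expected_casualties=1.0),
]
print(utilitarian_choice(options).name)  # -> "swerve onto pavement"
```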

Critics may object: what if an innocent pedestrian, walking alone on the pavement and diligently obeying every rule, suddenly finds himself at the mercy of a utilitarian-programmed car carrying multiple passengers, and ends up being hit? That would violate the individual's human rights. However, no one can deny that the car was at least following an established ethical principle, utilitarianism, by picking the alternative that minimises the loss of human life. A human driver might well have done the same.

The second important consequentialist theory is Henry Sidgwick's egoism, which holds that human conduct should be based exclusively on self-interest: one does well by doing good. Since AI machines are associated with their owners, self-interest here can be read as covering the interests of both the owner and the people the machine serves. Programmed as enlightened egoists, AI machines would not perform good deeds solely for the sake of being virtuous, but because doing so would ultimately serve their owners' best interests.
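
A hedged sketch of how this contrasts with the utilitarian rule above: here the objective is the owner's benefit, with a hypothetical "goodwill" term through which helping others feeds back into the owner's long-term interest. The weighting is an assumption for illustration; the article specifies no concrete model.

```python
def enlightened_egoist_choice(options):
    """Rank actions by benefit to the owner, where goodwill generated by
    helping others partially returns to the owner (assumed weighting)."""
    def owner_value(a):
        return a["owner_benefit"] + 0.3 * a["benefit_to_others"]
    return max(options, key=owner_value)

# Illustrative actions and payoffs (not from the article):
options = [
    {"name": "serve owner only", "owner_benefit": 5.0, "benefit_to_others": 0.0},
    {"name": "help a bystander", "owner_benefit": 4.0, "benefit_to_others": 6.0},
]
print(enlightened_egoist_choice(options)["name"])  # -> "help a bystander"
```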

Normative theories from prominent philosophers may offer insight into solving the ethical dilemmas of AI machines. Diverse moral principles could be used as a marketing strategy, and humans could be given the option of selecting moral standards for their AI machines.

Sanghamitra Choudhury

Non-consequentialist theories

While consequentialists prioritise the welfare of the majority, non-consequentialists prioritise individual rights. Non-consequentialists base decisions on universal ethical values such as fairness, rights, truth, justice, and commitment; their ethics is grounded in duty and respect for individual rights. Moral absolutism, moral nihilism, Immanuel Kant's categorical imperative, and John Rawls's theory of justice are notable non-consequentialist ideas. To program AI machines with ethical codes, these non-consequentialist principles can be simplified and adapted to make them more readily compatible with machines.

Immanuel Kant's categorical imperative consists of moral requirements that derive from pure reason and that a person must fulfil. The first formulation introduces the universalisation principle: "Act exclusively in accordance with the maxim that you would like to see become a universal law…" The second formulation, often known as the humanity formula, concerns how people should be treated: humans should not be used as means to an end but valued as ends in themselves, since they possess reason and self-awareness. A car, by contrast, is a mere means: a person who uses one for everyday trips will discard it once it becomes troublesome.

A person may thus treat objects such as cars as mere means, but never human beings, because humans are rational beings and ends in themselves. For the same reason, it would be wrong for an AI system to treat humans as mere means. Kant's third formulation states that rational beings must see themselves as legislating universal laws. Kant argues that individual rights cannot be violated even for the sake of the majority's interests; an AI system built on Kant's principles would therefore favour individual rights over group interests whenever the two conflict.
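
A possible sketch of this constraint in code: unlike the utilitarian rule, a Kantian filter rules out rights-violating actions absolutely before any benefit is compared. The action names, the rights-violation flag, and the benefit scale are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    group_benefit: float           # aggregate welfare gain (hypothetical scale)
    treats_person_as_means: bool   # would it use a human as a mere means?

def kantian_choice(options: list[Action]) -> Optional[Action]:
    """Deontological filter: any action that violates an individual's rights
    is excluded absolutely, no matter how much group benefit it would bring;
    only then is the best remaining action chosen."""
    permissible = [a for a in options if not a.treats_person_as_means]
    if not permissible:
        return None  # no permissible action: defer to a human overseer
    return max(permissible, key=lambda a: a.group_benefit)

# The utilitarian trade-off from earlier is now forbidden outright:
options = [
    Action("sacrifice one bystander", group_benefit=9.0, treats_person_as_means=True),
    Action("brake and accept delay", group_benefit=2.0, treats_person_as_means=False),
]
print(kantian_choice(options).name)  # -> "brake and accept delay"
```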


Justice, according to John Rawls

Now let us turn to John Rawls. Justice, according to Rawls, is fair distribution arrived at through a fair procedure, rather than something derived from natural law or pure logic. In 1971, Rawls posed a hypothetical question: what would happen if the representatives of society who draft its laws were unaware of their own position in that society? Rawls calls this the "original position", in which lawmakers deliberate behind a veil of ignorance: they possess a complete understanding of fundamental and undisputed facts about science and society, yet remain entirely oblivious to their own place within it.

According to Rawls, laws enacted in such a setting will be reasonable and fair because the legislators cannot be biased: they will endeavour to craft regulations that are ethical and equitable for everyone, and that do not unfairly favour or disfavour any particular group. AI has the potential to bring Rawls's original position to life: machines free of human vices could, in principle, draft fair laws for all.
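
One common operationalisation of the original position is Rawls's maximin rule: not knowing which position it will occupy, the legislator ranks each policy by how its worst-off position fares. A minimal sketch, with hypothetical policies and welfare numbers:

```python
def rawlsian_choice(policies: dict[str, list[float]]) -> str:
    """Behind the veil of ignorance, a legislator may end up in any position,
    so each policy is ranked by the welfare of its worst-off position and
    the policy with the highest minimum wins (the maximin rule)."""
    return max(policies, key=lambda name: min(policies[name]))

# Hypothetical welfare levels of three societal positions under two policies:
policies = {
    "flat benefit": [5.0, 5.0, 5.0],
    "growth first": [9.0, 6.0, 2.0],
}
print(rawlsian_choice(policies))  # -> "flat benefit": its worst-off fares best
```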

Feminist care ethics and artificial intelligence

Advocates of feminist ethics, such as Carol Gilligan and Nel Noddings, accuse traditional normative theories and moral philosophies of being gender-biased and of disregarding women's moral concerns. The ethics of care values relationships and treats morality as a means of nurturing strong bonds, whereas, on this view, men tend to see ethics as a set of abstract rules to be applied consistently. Because of their ties to their mothers, women are said to show a higher prevalence of care-based morality. AI robots could provide excellent care for humans in various roles, such as nurses, doctors, or helpers with housework and errands.

Human rights and artificial intelligence

In 2019, UN resolutions urged the use of human rights law to govern AI and emerging digital technologies, and issued stern warnings about the possible human rights consequences of these technologies. AI has created new forms of despotism that hurt the poor and vulnerable. Automation may deepen the global economic divide, since AI enables small teams to generate high profits with minimal staff. "Growth is inevitable but socially harmful," wrote Stephen Hawking. The rise of AI poses a real threat to job security, and researchers are studying how to sustain a decent standard of living in an unpredictable labour market; universal basic income is one such proposal.

Algorithms have long been used to produce credit ratings and to screen loan applications. Machine learning systems now analyse non-financial data, such as place of residence, online behaviour, and purchase history, to assess creditworthiness; the resulting scores are known as e-scores. Such scores could lead to financial discrimination against vulnerable communities. AI technologies such as facial recognition also make more mistakes for users with dark skin. Both failings run against the principles of equal rights and opportunities.
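
A first-pass audit for such discrimination can be sketched by comparing approval rates across groups. The data, group labels, and the "four-fifths" threshold below are illustrative assumptions, not taken from the article:

```python
from collections import defaultdict

def approval_rates(records):
    """records: (group, approved) pairs from a hypothetical e-score system.
    Returns each group's approval rate, a first-pass disparity check."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Illustrative audit data: (group, 1 if loan approved else 0)
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = approval_rates(records)
print(rates)  # {'A': ~0.67, 'B': ~0.33}: a gap worth investigating
# The "four-fifths rule", a common regulatory heuristic (not from the
# article), flags ratios below 0.8 as potential disparate impact:
print(min(rates.values()) / max(rates.values()))  # 0.5 < 0.8 -> flag
```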

Perception in the criminal justice system

AI reinforces numerous preconceptions in the criminal justice system, which could threaten due process. AI could aid "risk scoring" and "predictive policing": predictive policing draws on many data sets to anticipate where crime will occur, while risk scoring estimates whether a defendant will re-offend. Experts, however, have raised concerns that such systems' recommendations may entrench the very biases they are meant to replace. Predictive policing and movement surveillance could employ drones, GPS, and fingerprint, face, and retina identification to track and police people; even when used for public safety, this may violate the right to freedom of movement.
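
One way such bias compounds is through a feedback loop: patrols are sent where crime was previously recorded, and more patrols produce more recorded crime. A toy simulation of this dynamic, with invented districts and numbers:

```python
def patrol_feedback(recorded, true_rate, rounds=5, hotspot_patrols=10):
    """Toy model: each round, the area with the most recorded crime gets
    extra patrols, and recorded crime grows with patrol presence, so an
    initial recording skew compounds even when true crime rates are equal."""
    for _ in range(rounds):
        hotspot = max(recorded, key=recorded.get)
        for area in recorded:
            presence = hotspot_patrols if area == hotspot else 1
            recorded[area] += presence * true_rate[area]
    return recorded

recorded = {"district A": 12, "district B": 10}      # small initial skew
true_rate = {"district A": 1.0, "district B": 1.0}   # identical true crime
print(patrol_feedback(recorded, true_rate))
# District A's recorded total races ahead despite equal underlying rates.
```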

The use of AI for health and reproductive screening could expose conditions such as infertility and thereby influence marriage choices, while AI-powered DNA and genetic testing could be used to ensure that only desirable traits are passed on to future generations. This runs counter to Article 23 of the ICCPR, which protects marriage, children, and the family. There are also serious concerns that AI-based tracking and prediction of student performance could restrict educational opportunities and infringe upon students' right to education.

Such an approach would denigrate students who overcome barriers to succeed in school and the workforce, and would exacerbate existing imbalances, violating Article 13 of the ICESCR, which guarantees universal education. There is also a fear that AI could criminalise certain cultures, and that AI-powered surveillance could limit political participation by identifying and discouraging certain groups from voting.

Conclusion

The ethical guidance for AI machines drawn from prominent philosophers underscores the importance of navigating complex ethical dilemmas. Rather than attempting a one-size-fits-all approach, the lessons above highlight the need for adaptable, context-aware ethics in AI systems. At the core of AI robot ethics lies the principle of minimising harm to humans. Recognising the value of diversity in moral principles can also be a strategic approach, allowing individuals to customise the moral codes of their machines. Together, these lessons make plain that ethical considerations must shape the development and deployment of AI technology, ultimately promoting responsible, values-aligned AI systems.


Journal reference

Kumar, S., & Choudhury, S. (2023). Normative ethics, human rights, and artificial intelligence. AI and Ethics, 3(2), 441–450. https://doi.org/10.1080/14488388.2023.2199600

Shailendra Kumar is an Assistant Professor in the Department of Management at Sikkim University, Gangtok, India. Business ethics, corporate social responsibility, and artificial intelligence are among his research and specialisation areas. He has written extensively on the topic of artificial intelligence and ethics.

Sanghamitra Choudhury is a Professor in the Department of Political Science at Bodoland University (a public university) in Assam, India. She has been a Post-Doctoral Senior Fellow at the University of Oxford, a Charles Wallace Fellow at Queen's University, Belfast, a UN International Law Fellow at The Hague Academy of International Law in the Netherlands, and a Consultant at UNIFEM in India.