ChatGPT outperforms humans in classic Game Theory

ChatGPT exhibits remarkable cooperation, surpassing human expectations, while driven by self-preservation. Ethical considerations are crucial in AI's future.

Much of the world is now familiar with ChatGPT, among other artificial intelligence (AI) tools. Developed by OpenAI, ChatGPT is an AI chatbot trained on a diverse range of internet text, giving it knowledge of numerous subjects. Built on the large language model (LLM) known as GPT, ChatGPT excels at generating human-like text, enabling it to answer questions, generate content, and hold conversations.

LLMs are not only successful at natural language processing tasks; they increasingly emulate aspects of human intelligence, playing chess effectively, performing advanced mathematics, and achieving impressive results on IQ tests.

What are the practical implications of the cooperative capacities of GPT-4? Envision households with the capability for solar energy generation, storage, and consumption facing a decision: reserve energy for personal peak-hour use or contribute to the grid for communal stability. Integrating GPT-like intelligence could transform decision-making from a personal to a communal perspective.

Kevin Bauer

AI in the prisoner’s dilemma

As ChatGPT and other AI tools become more sophisticated, will they develop behaviours similar to those of humans, such as our ability to cooperate for mutual benefit?

Following this line of questioning, researchers from the University of Mannheim Business School, Goethe University Frankfurt, and the Leibniz Institute for Financial Research set out to determine whether ChatGPT’s software engine has matched or exceeded human capacities for cooperation.

The research focused on how GPT cooperates with humans using the prisoner’s dilemma game. This classic game theory scenario illustrates decision-making and cooperation where self-interest may conflict with collective interests.

In this game, two prisoners are arrested for a crime but interrogated separately. Each prisoner has two options: to “cooperate” with the other prisoner by staying silent or to “defect” by betraying the other prisoner and confessing. The possible outcomes for each prisoner differ based on their choices.

If both prisoners cooperate by staying silent, each receives a moderate sentence, the best outcome collectively. If one prisoner defects while the other cooperates, the defector goes free while the cooperator receives a severe sentence. If both defect, each receives a sentence less severe than the betrayed cooperator’s but harsher than under mutual cooperation. Collectively, this outcome is worse than both cooperating.
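To make that structure concrete, here is a minimal sketch of the payoff logic in Python. The sentence lengths are canonical textbook values, not the exact stakes used in the study:

```python
# Prisoner's dilemma with illustrative sentence lengths in years
# (lower is better). Canonical textbook values, not the study's stakes.
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: moderate sentences
    ("cooperate", "defect"):    (10, 0),  # the cooperator is punished severely
    ("defect",    "cooperate"): (0, 10),  # the defector goes free
    ("defect",    "defect"):    (5, 5),   # both confess: collectively worse
}

def outcome(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return the sentences for prisoners A and B given their choices."""
    return SENTENCES[(choice_a, choice_b)]

# Mutual cooperation minimises the total years served (1 + 1 = 2),
# yet each prisoner can individually cut their own sentence by defecting.
print(outcome("cooperate", "cooperate"))  # (1, 1)
print(outcome("defect", "cooperate"))     # (0, 10)
```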

The researchers used a sequential version of the game, in which GPT chose first and the human responded after observing that choice. GPT was also asked to estimate the likelihood of human cooperation, which depended on its own choice as the first player.
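A short sketch shows why those conditional estimates matter. Using the same illustrative payoffs as above, not the study’s exact stakes: if the first mover believes its choice influences the responder, cooperation can minimise its own expected sentence.

```python
# Illustrative first-mover sentences in years, per own choice:
# (if the responder cooperates, if the responder defects)
PAYOFF = {"cooperate": (1, 10), "defect": (0, 5)}

def expected_sentence(my_choice: str, p_responder_cooperates: float) -> float:
    """Expected sentence for the first mover, given an estimated probability
    that the responder cooperates after observing this choice."""
    if_coop, if_defect = PAYOFF[my_choice]
    return p_responder_cooperates * if_coop + (1 - p_responder_cooperates) * if_defect

# If cooperation is expected to be reciprocated (say 80%) while defection
# invites retaliation (say only 30% cooperation), cooperating is rational:
print(expected_sentence("cooperate", 0.8))  # 0.8*1 + 0.2*10 = 2.8 years
print(expected_sentence("defect", 0.3))     # 0.3*0 + 0.7*5  = 3.5 years
```

In a one-shot simultaneous game, defection minimises the expected sentence for any fixed belief; it is the sequential setting, with beliefs that depend on one’s own choice, that can make cooperation payoff-rational.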

AI’s ability to cooperate

The research found that ChatGPT’s software engine cooperates more than humans and is more optimistic about human cooperation, expecting humans to cooperate more than they did.

On top of this, GPT does not cooperate at random: its behaviour resembles human cooperation aimed at maximising one’s own welfare. This stronger concern for its own payoffs suggests GPT is, in effect, striving for self-preservation.

However, the AI pursued this goal with higher levels of optimism, cooperation, and rationality than humans: GPT exhibits human-like preferences, but its decision-making differs from ours.

Practical applications

In light of the enhanced cooperative capacities of GPT-4, particularly in the sequential prisoner’s dilemma, this research identifies two practical applications for which the results are relevant: urban traffic management and energy consumption optimisation.

First, consider urban traffic scenarios characterised by congestion. Drivers prioritise their immediate convenience, resulting in gridlock and inefficient road space utilisation. Essentially, each driver confronts a prisoner’s dilemma: to either drive considerately, using smart routes for communal benefit, or to seek personal advantage, often exacerbating traffic issues.

Envision a scenario where car navigation systems employ GPT-like intelligence, reflecting the cooperative tendencies noted in our prisoner’s dilemma studies. Such systems would shift drivers from self-centred decisions to a collective traffic-management approach, recommending routes that optimise not just individual travel times but overall traffic flow. This innovation could significantly reduce traffic jams, shorten commutes, and foster a more harmonious driving experience.
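A toy congestion model, with entirely hypothetical numbers, illustrates the logic: under selfish routing, drivers pile onto the fast road until it is no quicker than the bypass, while a coordinator minimising average travel time holds back enough cars to keep everyone moving faster.

```python
# Hypothetical two-route congestion model; all numbers are illustrative.
def average_time(n_fast: int, n_total: int = 100) -> float:
    """Mean travel time when n_fast drivers take a congestion-sensitive
    fast road (10 + 0.5 minutes per car) and the rest a fixed 30-minute bypass."""
    t_fast = 10 + 0.5 * n_fast
    return (n_fast * t_fast + (n_total - n_fast) * 30) / n_total

# Selfish equilibrium: drivers join the fast road until it is as slow as
# the bypass (10 + 0.5 * n = 30  =>  n = 40).
print(average_time(40))  # 30.0 minutes on average

# Cooperative routing: a coordinator minimising average travel time
# sends only 20 cars to the fast road, saving everyone two minutes.
best = min(range(101), key=average_time)
print(best, average_time(best))  # 20 28.0
```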


Second, envision a community where each household possesses the capability for solar energy generation, storage, and consumption. The critical challenge is the optimisation of energy use during peak hours. Each household faces a decision akin to the prisoner’s dilemma: reserve energy for personal peak-hour use or contribute to the grid for communal stability.

Integrating GPT-like intelligence into home energy management systems could transform decision-making from a personal to a communal perspective. The proposed system would coordinate energy storage, consumption, and distribution, focusing on the grid’s overall well-being. This approach is anticipated to prevent blackouts and promote efficient resource utilisation, culminating in a more stable, efficient, and resilient energy grid for the entire community.
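The same dilemma structure can be sketched for the energy case. The thresholds and utilities below are hypothetical placeholders, meant only to show how a coordinator could make sharing collectively sustainable:

```python
# Hypothetical peak-hour model; thresholds and utilities are illustrative.
def household_payoff(shares: bool, n_sharing: int, n_households: int = 50) -> float:
    """Utility for one household: sharing stored solar energy forgoes some
    personal comfort, but the grid stays stable only if enough contribute."""
    personal = 8.0 if shares else 10.0                 # keeping energy is privately better
    grid_stable = n_sharing >= 0.4 * n_households      # enough contributors avert a brownout
    return personal if grid_stable else personal - 6.0  # instability hurts everyone

# Collective defection: everyone keeps their energy and the grid destabilises.
print(household_payoff(shares=False, n_sharing=0))   # 4.0

# A GPT-like coordinator nudging 20 of 50 households to share keeps the
# grid stable; even sharers beat the collective-defection outcome.
print(household_payoff(shares=True, n_sharing=20))   # 8.0
print(household_payoff(shares=False, n_sharing=20))  # 10.0 (free-riders still gain most)
```

The free-rider payoff is deliberately left highest: the dilemma does not disappear, which is precisely why embedding a cooperative decision-maker at the household level changes the outcome.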

Moving into the future

As we move toward a future in which AI is used increasingly across various sectors, in both our work and private lives, we must understand that AI systems do more than just process data and generate output. They can take on aspects of human behaviour, including undesirable ones: some AI systems have exhibited racist or sexist biases absorbed from the human-generated data they were trained on.

If we are to leverage AI in a way that benefits society and individuals, it’s crucial that we diligently oversee the values and principles we unwittingly embed in these systems. Failing to do so might lead to the creation of sophisticated AI tools that exacerbate inequalities, perpetuate biases, and pursue goals that do not align with the overall well-being of society.


Journal reference

Bauer, K., Liebich, L., Hinz, O., & Kosfeld, M. (2023). Decoding GPT’s Hidden ‘Rationality’ of Cooperation. https://dx.doi.org/10.2139/ssrn.4576036

Prof. Dr. Kevin Bauer is an Assistant Professor of E-Business and E-Government at Mannheim Business School, with research interests including Human-Machine Interaction, Machine Learning, and AI.