Artificial intelligence has become almost omnipresent. The rapid growth of this technology and the resulting public interest were sparked by OpenAI’s release of ChatGPT, a “generative” text system, to the public. The platform has instigated a novel series of social dynamics, altering how people engage with technology, an evolution that can be understood through the framework of Actor-Network Theory.
Considering AI’s impact on our current lives, one possible response is to appeal to regulation or industry guidelines. In actor-network theory terms, this amounts to inserting an actant into the network of relationships, one that creates resistance, an anti-program, to mitigate this technology’s effects on employment, human rights, and even its carbon footprint. The European AI Act and Japan’s Society 5.0 plan are important examples. For this to work, however, regulations and guidelines must be neither so strict that they hinder innovation nor so lax that they allow the AI industry’s harmful practices, policies, and effects to continue.
What is ChatGPT?
Given this, it is essential to clarify how ChatGPT works. First, ChatGPT is a platform that uses various Artificial Intelligence tools and systems to achieve goals or produce outputs. In this context, artificial intelligence broadly refers to a system’s ability to perform tasks without direct programming—similar to how Netflix learns a person’s movie and TV show preferences. Within this technology field, many tools are considered artificial intelligence, including those applicable to robotics.
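The “learning without direct programming” idea above can be sketched with a toy recommender in the spirit of the Netflix example. This is a hypothetical illustration: the genres, history, and function name are made up for the sketch, and real recommendation systems are far more sophisticated.

```python
from collections import Counter

def infer_favorite_genre(watch_history: list[str]) -> str:
    # No genre rule is hard-coded; the preference emerges from the data,
    # which is the sense in which the system "learns" rather than being
    # directly programmed with the answer.
    return Counter(watch_history).most_common(1)[0][0]

history = ["sci-fi", "drama", "sci-fi", "comedy", "sci-fi"]
favorite = infer_favorite_genre(history)
```

Here the same code yields different behaviour for different viewers, purely as a function of their data.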
AI and ChatGPT
ChatGPT, developed by OpenAI with investment from companies including Microsoft, is a large language model (LLM). It is built on a generative pre-trained transformer (GPT), which enables the platform to craft text almost indistinguishable from human composition. To generate text, ChatGPT relies on two techniques: first, generative unsupervised pre-training on unlabelled data; second, discriminative supervised fine-tuning to enhance its performance in specific scenarios.
The initial technique is akin to navigating a bustling city like Tokyo as you endeavour to comprehend the public transit system; the second is comparable to receiving guidance from a resident friend after your initial exploration. In essence, ChatGPT learns from an extensive and growing dataset, allowing it to generate output that may not always align with the original intentions of its designers or engineers. Additionally, human input plays a role in shaping the outcomes produced by the system.
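The two stages described above can be sketched with a deliberately tiny model. This is a hypothetical toy, not GPT’s actual architecture: a bigram counter stands in for unsupervised pre-training (learning statistics from raw text), and upweighting curated pairs stands in for supervised fine-tuning.

```python
from collections import Counter, defaultdict

def pretrain(corpus: list[str]) -> dict:
    # Stage 1 (unsupervised pre-training): learn next-word statistics from
    # raw, unlabelled text, loosely analogous to next-token prediction.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def fine_tune(model: dict, labelled_pairs: list[tuple[str, str]]) -> dict:
    # Stage 2 (supervised fine-tuning): human-curated (word -> preferred
    # continuation) pairs override the model's raw statistical preferences.
    for prev, preferred in labelled_pairs:
        model[prev][preferred] += 100  # heavily upweight the curated answer
    return model

def predict_next(model: dict, word: str) -> str:
    return model[word].most_common(1)[0][0]

corpus = ["the cat sat", "the dog ran", "the cat ran"]
model = pretrain(corpus)          # "the" -> "cat" is now most likely
model = fine_tune(model, [("the", "dog")])  # curation changes the preference
```

The point of the sketch is the Tokyo analogy in code: pre-training gives broad but unguided familiarity, and fine-tuning is the resident friend nudging the model toward preferred answers.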
ChatGPT has emerged as a tool that has sparked significant public and media attention to AI. The platform’s versatility is noteworthy: Mexican-American researcher Saiph Savage utilises it to publish content in English; educators leverage it to formulate open-ended questions aligned with learning objectives; individuals with communication challenges benefit from its assistance in articulating their thoughts.
Imitation vs. genuine reasoning abilities
However, certain researchers contend that ChatGPT and similar LLMs replicate patterns learned from their training data without genuine reasoning abilities. They have been likened to “stochastic parrots” that mimic sounds without actual understanding, merely regenerating text based on the data they were exposed to.
In a way, these systems resemble extensive collections echoing meaningful sentences, a reflection of the many interactions they have encountered. Furthermore, LLMs frequently replicate biases inherent in their training data, generate false content (hallucinations), and lack transparency concerning their operations and the data used for training.
To counter these challenges, many platforms undergo human-guided fine-tuning and integrate safeguards to prevent the generation of harmful content. Essentially, programs are devised to prompt responses based on human input, creating a cycle of actions and reactions. Consequently, ChatGPT functions as a hub for human and non-human actors, forming a network. However, this network faces resistance, as there is always a counterforce to its intended outcomes. Actor-Network Theory offers valuable insights into understanding these dynamics.
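The safeguard-around-the-model pattern can be sketched as a wrapper that screens output before it reaches the user. This is a hypothetical, simplified example: the `generate` stand-in and the keyword blocklist are invented for illustration; production systems use learned classifiers and human-guided fine-tuning rather than keyword matching.

```python
# Placeholder terms standing in for categories a real classifier would detect.
BLOCKLIST = {"harmful_instruction", "slur_example"}

def generate(prompt: str) -> str:
    # Stand-in for the underlying language model.
    return f"Echoing: {prompt}"

def safe_generate(prompt: str) -> str:
    # The safeguard acts after generation: it inspects the draft and
    # substitutes a refusal if a blocked term appears.
    draft = generate(prompt)
    if any(term in draft.lower() for term in BLOCKLIST):
        return "I can't help with that."
    return draft
```

In Actor-Network Theory terms, the wrapper is itself an actant inserted into the network to resist unwanted outputs, and, as the next section discusses, actors can in turn try to route around it.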
Power relationships and resistance
At first glance, Actor-Network Theory might appear daunting. However, even though Bruno Latour (one of its most prominent authors) formulated many dispersed and ever-evolving concepts, the theory can be streamlined to its core principles on the relationship between resistance and AI. Actor-Network Theory includes several concepts, but its central move is a shift away from explaining society solely through social factors. Instead, it emphasises the connections between humans and objects, between objects and other objects, and between objects and humans within a network of associations.
The two pivotal concepts in Actor-Network Theory are “actant” and “quasi object.” The term “actant” conveys that the world teems with human and non-human actors, all integral to a network. This redefines the notion of society, portraying it not as a social construct but as a complex association. The second term, “quasi object,” refers to something that triggers an action in another actant—akin to the code that interprets a prompt in ChatGPT, facilitating content generation. Finally, there’s the notion of the program or the intended purpose of the network—such as generating coherent, human-like text devoid of offence, bias, and misinformation. All these connections and interactions are visually depicted in the chart that represents the network.
ChatGPT as a network: Understanding interactions and potential impacts
Keeping this in perspective, ChatGPT operates as a network within a larger network. Within this context, numerous interactions transpire between human actants and non-human actants. This encompasses the designers responsible for creating the platforms, the human actors involved in curating the training dataset, the data that nourishes the transformer, facilitating data comprehension and input generation, and the individuals engaged in fine-tuning the model.
Furthermore, an integral part of this network includes the individuals who craft prompts and drive ChatGPT to generate outputs, among other contributors. The network also comprises various interconnected groups of connections, creating subset networks within the ChatGPT system, the internet infrastructure being one such subset.
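The network of actants described in these paragraphs can be modelled as a small directed graph: nodes are actants (human and non-human) and edges are associations. This is a hypothetical sketch; the node names are illustrative labels for the roles the text mentions, not terms from Actor-Network Theory itself.

```python
from collections import defaultdict

# Directed graph of associations: actant -> set of actants it connects to.
network = defaultdict(set)

def associate(actant_a: str, actant_b: str) -> None:
    network[actant_a].add(actant_b)

# Human actants
associate("designers", "platform")
associate("data curators", "training data")
associate("fine-tuners", "model")
associate("prompt writers", "model")

# Non-human actants, including subset networks such as internet infrastructure
associate("training data", "transformer")
associate("transformer", "model")
associate("model", "output")
associate("internet infrastructure", "platform")
```

Representing the network this way makes the theory’s point concrete: adding or removing a single edge or node (a new regulation, a new safeguard, a new group of users) changes what the whole assemblage can do.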
Considering this perspective, ChatGPT and AI might come across as exceptionally impressive. However, actor-network theory underscores resistance to the generated output, and methods to manipulate or alter the system are always possible. Such actions can be advantageous, as when individuals modify the systems to prevent exploitation or bias, or detrimental, as when actors manipulate the systems to evade safeguards, generating problematic content, such as biased fake news, that targets susceptible populations.
This is because, no matter how stable the connections in ChatGPT are, they can still be changed by human actors or by introducing new actants or objects, yielding positive or negative outcomes.
Gutiérrez, J. L. M. (2023). On actor-network theory and algorithms: ChatGPT and the new power relationships in the age of AI. AI and Ethics, 1–14.