Not not: Why AI can and cannot think, and what to do with this

AI challenges human thinking: the fact that it is not simply "not thinking" raises questions about consciousness, ethics, and the future.

Can AI think? Certainly not. The act of thinking extends beyond problem-solving and cannot be reduced to its functional aspects.

Is AI, therefore, mindless? Not so fast. It is increasingly taking on tasks previously exclusive to human cognition and rationality. It often surpasses human intelligence, leaving fewer functions for the human mind.

To simplify, let’s refer to AI’s operations as “not not thinking.” This term can also apply to other AI functions like “not not learning” (without true understanding), “not not deciding” (without inner conflict), and “not not writing/not not representing” (making predictions or generating content: ChatGPT, for example, generates numerical predictions of likely answers, while Midjourney applies a similar approach to images).

In essence, if you understand thinking as a function, AI does think. If you understand it as an experience, it doesn’t.

Why “Not Not”?

Using the term “not not” instead of “thinking,” “learning,” and similar terms offers three advantages:

First, it reminds us that certain activities require conscious existence and the experience of “what it is like” to be or do those things. Without this conscious experience, these activities lose their essence and cannot truly “appear as something.” Even though AI can perform tasks that surpass human intelligence and cognition, it remains devoid of meaningful conscious existence unless it connects with the human construction of meaning.

Second, distinguishing between “thinking” (and the like) and “not not thinking” allows us to perceive AI as something fundamentally different and alien. The prevalent perspective on AI is inherited from the Turing Test, which assesses AI by constantly comparing it with human intelligence. However, as James Bridle argues, intelligence extends beyond the human version; different animals exhibit unique forms of intelligence, and AI’s potential goes far beyond mere imitation of human intelligence. This imitation often obscures the substantial gap between thinking and “not not thinking,” limiting AI’s capabilities.

Third, recognizing the distinction between AI and human intelligence helps us better understand the peculiarities of the latter. We can employ a Reverse Turing Test (RTT) to challenge conventional theories about human intelligence. The RTT logic is simple: if AI can replicate a hypothesis about human intelligence, then the theory reflects, at best, necessary but insufficient conditions, or at worst, it is entirely falsified. For instance, theories that posit consciousness as an autopoietic system emerging from brain processes fail the RTT. AI can mimic autopoietic systems, yet it remains non-conscious. This underscores that emergence is a necessary but insufficient condition.

The three questions

The concept of “not not” helps us explore three key questions about AI:

  1. What if AI becomes conscious?
  2. What if non-conscious automated computational intelligence gains control over humanity?
  3. What if Artificial General Intelligence (AGI) becomes a reality without consciousness?
Image credit: Midjourney

Conscious AI?

When considering question (1), the “not not” framing makes it more specific. It shifts the focus from vague fears and excitement to a deeper exploration of what we mean by consciousness and which type of consciousness we are discussing. Some theories of consciousness, often associated with existentialist or vitalist perspectives, manage not to fail the Reverse Turing Test (RTT). However, it’s important to note that not failing the RTT doesn’t necessarily mean passing it. Such theories may still fail in the future as AI continues to advance rapidly.

The RTT, indeed, highlights the lack of precise definitions of consciousness. At the same time, it shows that the functions of consciousness are being replicated by unconscious machines, resembling the concept of ‘philosophical zombies.’ In essence, the RTT demonstrates the current impossibility of intentionally replicating consciousness, and we shouldn’t be overly optimistic (or pessimistic) about its emergence either, especially when theories of consciousness as an emergent autopoietic system also don’t pass the RTT. For now, we can set aside this question.

The Turing Test has shaped AI in such a way that it looks very much like problem-solving human intelligence. We should not forget that it is something utterly different, especially as we begin to understand that it is more than just a tool.

Jan Söffner

The human zoo

Let’s delve into question (2), which quickly transforms into the following: “What happens when we are no longer governed by thoughts, opinions, decisions, plans, and ideologies, but rather by their ‘not not’ counterparts?” We are all familiar with the consequences when an unaligned AI, possibly fueled by biased data, takes control of human matters and becomes, for instance, a biased tool in racist law enforcement.

However, achieving alignment is not a standalone solution; it presents another challenge intertwined with the concept of “not not.” Pierre Cassou-Noguès argues that as we delegate more functions that once required conscious understanding to unconscious AI, we not only relinquish control but also disconnect from the meaning and purpose of our existence. This is because the execution of our relationship with ourselves and the world, or in Heideggerian terms, our “care,” is where life’s meaning and purpose reside.

If AI adapts to carefree humans with an outsourced self- and world-relationship, algorithms will be trained on the needs and desires of purposeless individuals and even societies, further sidestepping the essence of meaningful human life. The logical outcome could be a society that provides for humans as a species while leaving them devoid of meaning, resembling a Sloterdijkian Human Zoo or a Huxleyan, functional but purposeless Brave New World. In essence, the issue with replacing Human Intelligence (HI) with Artificial Intelligence (AI) and transitioning from thinking to “not not thinking” isn’t that it won’t function; it’s the concern that it might indeed function, but in a way that lacks purpose.

The challenge

Let’s explore question (3). AGI would take “not not” to new levels, potentially making the idea of a Human Zoo almost impossible. It could function independently of human data and human thinking, shaping its own unique meaning, goals, and existence. Instead of humanity and AI working together (as in the second scenario), we would face an alien and highly advanced intelligence that we may never fully comprehend. This presents certain dangers, and it’s unlikely that AGI would be unleashed without constraints. Developers would likely become controllers and interpreters, conducting various experiments to understand AI bit by bit.

However, consider this: How successful would modern sciences be if nature constantly changed its course, especially at an exponential rate? Human understanding would likely struggle to keep pace with even controlled AI, making complete alignment impossible. But perhaps this is not entirely negative. It could return thinking to the position it held in ancient times, when nature was unpredictable. In this scenario, “not not thinking” becomes a challenge to conventional thinking: a challenge to lend it a human purpose and make sense of it in a meaningful way. This could lead to culturally productive and, as such, more humane outcomes compared to the idea of a Human Zoo.

What to do?

To sum up: conscious AI remains unlikely without further breakthroughs in the study of consciousness. An AI aligned with human needs appears to pose the smallest risk to humanity’s survival, but it challenges what makes humanity human in the first place. AGI, in turn, poses a much greater risk to survival but also opens new and more exciting possibilities for both technology and culture. I do not envy the politicians and tech companies entitled to make this choice, but they must also be aware of its cultural consequences. I hope they find a third option.


Journal reference

Soeffner, J. (2023). Meaning–thinking–AI. AI & Society, 1–8. https://doi.org/10.1007/s00146-023-01709-x

Jan Söffner holds the chair for Cultural Theory and Cultural Analysis at Zeppelin University in Friedrichshafen, where he also served as Vice President for Teaching from 2018 to 2021. Jan earned his PhD in Italian Studies and his 'Habilitation' (second, post-doctoral dissertation) in Comparative Literature and Romance Studies. In 2016, he held the position of Program Director at Wilhelm Fink publishing house in Paderborn. He frequently writes articles for newspapers such as Neue Zürcher Zeitung and taz.