AI and Consciousness: Navigating the Interplay of Mind and Machine
BY MATHILDA MULERT
Just five days after its launch in November 2022, OpenAI's artificial intelligence (AI) large language model, ChatGPT, reached 1 million users. In comparison, Facebook celebrated this milestone after roughly 10 months, while Netflix had to wait 3.5 years. By now, many universities have implemented ChatGPT policies, and the use of AI in healthcare, criminal justice, and governance is rapidly growing. One thing is clear: AI is here, and it is here to stay. As we adjust to a technological landscape where AI becomes increasingly humanlike, a cascade of philosophical questions emerges. At the helm of this discourse are two influential philosophical thinkers: Alan Turing and John Searle. Through their lenses, we embark on a journey that asks: Does intelligence require consciousness? Can these two concepts exist independently? And what ethical implications might this have?
Alan Turing and the Turing Test
In 1950, when the concept of computers itself was in its infancy, the English mathematician, computer scientist and philosopher Alan Turing published his seminal paper Computing Machinery and Intelligence. In it, Turing introduced his Turing Test, designed to assess the intelligence of machines through their ability to mimic human conversation.
The test works as follows: Imagine three participants are engaging in a conversation through a computer interface (like texting). There is a human interrogator, a human responder, and a machine responder. The interrogator's task is to chat with both respondents without knowing which one is the human and which one is the machine. The machine passes the Turing Test if the interrogator is not able to consistently distinguish between the responses of the human and the machine. If this is the case, Turing argues, the machine can be considered intelligent.
So, what is Turing’s threshold for a computer program to exhibit intelligence?
The Turing Test focuses on the machine’s observable behaviour - its ability to mimic human conversation - rather than its internal workings or thought processes. But this raises a critical question: Can intelligence be boiled down to outward behaviour, or must the machine understand its outputs to be deemed intelligent?
John Searle and the Chinese Room
As technology advanced, another thinker stepped onto the stage: In a 1980 article, the American philosopher John Searle published his thought experiment, the "Chinese Room". This experiment challenges the Turing Test by further questioning what it means for a machine to be intelligent. In doing so, Searle sets a higher bar than Turing.
His thought experiment works as follows: Imagine an English-speaking person sitting in a room, armed with English instructions on how to manipulate Chinese symbols. Although the person does not speak any Chinese, she is able to use these instructions to communicate with a Chinese-speaking person outside of the room through an exchange of written notes. The outcome is a flowing Chinese conversation, despite the person inside the room lacking any comprehension.
Searle's Chinese Room is intended to make us contemplate whether an AI system that merely appears intelligent should truly be considered intelligent. If the AI is, like the person in the Chinese Room, simply manipulating symbols without grasping any meaning, that does not suffice for intelligence. So, while Turing's threshold for intelligence relies on observable behaviour, Searle's threshold requires a sense of understanding on the machine's part.
How would Turing reply to Searle's objection? In his 1950 paper, Turing preemptively acknowledges the Argument from Consciousness - the idea that a machine might lack true consciousness despite producing human-like responses. He takes a pragmatic stance and replies that it is essentially irrelevant whether the machine really is conscious as long as it seems like it is. As our understanding of consciousness is phenomenological (i.e. based on subjective experiences and perceptions), Turing suggests a shift from internal states of the mind to observable behaviour. Put differently, if the machine exhibits intelligent behaviour, then we can consider it intelligent. His primary concern was not to delve into the intricacies of consciousness itself, but to develop a practical test that assesses machines' intelligence.
Intelligence and Consciousness
Whether or not Searle's objection resonates with you, the Chinese Room opens the door to an expansive philosophical debate that extends over centuries and a variety of viewpoints. At the core of the discourse lies the question: Is consciousness a prerequisite of intelligence, or can these two concepts exist independently? While one could write several books just on this topic, let us briefly look at three notable standpoints:

René Descartes, an influential French philosopher, proposed dualism. This view asserts a clear line between the body and the mind. In this framework, consciousness is regarded as existing separately from the physical world, in a realm distinct from mere matter. Descartes thought of consciousness as the foundation of intelligence and rationality. Therefore, he would argue that consciousness is a prerequisite for intelligence.
In contrast to Descartes, Thomas Hobbes, another pioneering figure in philosophy, was a defender of materialism. Materialists argue that everything, including consciousness, emerges from physical processes. This viewpoint suggests that conscious experiences arise from the complex workings of the brain's neural activities. In this context, intelligent behaviour could potentially manifest even in the absence of conscious awareness, because it is the underlying physical processes that drive cognition.
Lastly, Daniel Dennett, a contemporary philosopher and a functionalist, offers a unique approach to the relationship between consciousness and intelligence. Functionalism focuses on the functions of mental processes rather than their underlying nature. What matters to philosophers like Dennett is what role mental states play in contributing to the larger cognitive system. In this context, consciousness and intelligence can be detached, suggesting that intelligent behaviour does not necessarily rely on consciousness.
In navigating these philosophical paradigms, it is clear that no single perspective holds the ultimate key to the complex interplay between consciousness and intelligence. The diversity of viewpoints reflects the complexity of the human mind and the dynamic nature of our understanding of these concepts.
Ethical Implications: Transparency in AI Decisions
As the epistemological discourse unfolds, it crosses over into the ethical realm. The juxtaposition of Turing's Test and Searle's Chinese Room unveils a crucial ethical dimension in the development and deployment of AI: How do we ensure transparency and legitimacy in AI decision-making?
Many modern-day AI systems operate as so-called "black boxes". This means that they make decisions which they cannot explain in a way that would be comprehensible to humans. This opacity raises profound ethical concerns, especially when AI systems influence significant domains of our lives. For example, in Allegheny County, Pennsylvania, the Department of Human Services used a predictive algorithm to project which children are most likely to become victims of abuse. If we want to use AI in such significant domains, we must ensure transparency. Life-changing AI decisions can only be legitimate if they are open to public scrutiny, and if one can, for example, recognise algorithmic biases within the AI's decision-making process.
The rapid ascent of AI is transforming our world and brings many opportunities that we once only dreamed of. Yet, as we navigate this ever-changing technological landscape, we are reminded that with great power comes great responsibility. As we reap the benefits of artificial intelligence, we face a choice: to craft an AI landscape rooted in accountability and fairness, or to risk relinquishing control to the opacity of black boxes.
References
Cole, D. (2004) "The Chinese Room Argument", The Stanford Encyclopedia of Philosophy (Summer 2023 Edition), E. N. Zalta & U. Nodelman (eds.).
Searle, J. R. (1980) "Minds, brains, and programs", Behavioral and Brain Sciences, 3(3), pp. 417–424.
Turing, A. M. (1950) "Computing Machinery and Intelligence", Mind, 59, pp. 433–460.