New AI Technology Prompts Philosophical Questions
According to Professor of Philosophy Joseph Moore, who is currently teaching a course on the philosophy of mind, the emergence of AI like ChatGPT has renewed long-standing debates in the field over what it means to have a mind.
“Speculation about the possibility and implications of artificial minds goes back at least into the 19th century, but reflecting on ChatGPT and other emerging AIs can make these issues relevant, and sharpen them,” he wrote in an email statement to The Student. “Not only do we confront the gold standard question of whether any artificial system can have ‘general intelligence,’ but these AIs press us to reflect on what we mean by this and other central notions bound up in our concept of mind.”
“And of course, philosophy can weigh in on more existential questions about what AIs—and the dreaded ‘singularity’— might mean for the future of mankind,” Moore added. “People range from being cautiously optimistic to completely apocalyptic about these larger questions.”
For his part, Riondato pushed back on the philosophical excitement surrounding ChatGPT’s emergence.
“I do think that most of the hype is definitely misplaced. I mean, none of these AI models has shown any kind of really revolutionary component, in some sense. The development is following the same rate of progress that it has followed for the past 15 years,” Riondato said. “But it’s clear that it’s a better model than what we had before and in terms of producing a sequence of words that sounds like language.”
He also noted that humans, whose “brains have evolutionarily developed to search for patterns and search for meaning,” are sure to “find meaning in those sequences of words [produced by ChatGPT],” even if the AI itself is just recreating trends in its training data.
Grobe, for his part, cautioned against being “seduced by the analogy between human intelligence and machine intelligence.”
At the college’s panel event on ChatGPT in February, Spector offered a more nuanced vision for how human and artificial intelligence may converge. He acknowledged the areas in which human intelligence is successful but models like ChatGPT are not — such as in “representing the world, and doing logical reasoning, and planning” — but he expressed doubt that these distinctions would exist for long.
“I would expect [these large language models] to be hybridized in the very near future,” Spector said, “so that we’re going to have systems that actually do model the world, that actually do reason and plan, on top of doing this ‘autocomplete’ function.”
In an interview with The Student, Spector expanded on his belief that it’s closed-minded to neglect the parallels between human and machine intelligence — even setting aside any further advances in AI technology.
“I’ve, for a very long time, understood what happens in humans, in human brains and minds, to be a kind of computation,” he said. “But people often think that sort of denigrates the human mind, that it’s ‘merely’ computation. I would flip that and say it means computation is a lot cooler than we thought. Computation can fall in love. Computation can imagine other worlds. Computation can write mind-blowing poetry. Computation can feel pain.”