ChatGPT Makes Waves at Amherst and Beyond
Continued from page 8

… good questions about a concept, or ask it to remind him of the connections between authors. Though he admitted, with a laugh, that it’s rare to get a smart response, he said, “It’s just nice having something to bounce off of.”
Some students, however, like Hedley Lawrence-Apfelbaum ’26, expressed concern about students using the software to cheat. “I think the danger is now that, obviously, people just use [ChatGPT] for all their work, which I don’t think is effective, but that people will do it anyways,” he said. He suggested that the college’s administration would have to adapt to the reality that students would make use of AI, and regulate accordingly.
The students were in agreement, though, that anyone who offloaded their original thinking to ChatGPT would ultimately be hurting themselves. “I feel like [using it this way] will wind up biting you in the ass at some point,” Holding said.
Generating (In)stability
Beyond fears about plagiarism and the end of academic writing, ChatGPT’s emergence this winter has sparked energetic public discourse about the technology’s possible effects on the job market. The prognosis largely centers on white-collar workers, whose labor often involves writing and other forms of content creation. Commentary ranges from the apocalyptic to the dismissive.
Alfeld maintained that the economic changes foreshadowed by ChatGPT are completely unprecedented. “What I find most worrisome is the uncertainty,” Alfeld said. “In the Industrial Revolution we went from, you know, 80 percent of people being farmers to four percent-ish. But in doing so, it’s not like we ever had 70 percent unemployment.”
Alfeld thinks that, given the speed at which AI is developing and the tendency of culture and law to lag behind technology, we could enter a period in which certain jobs become obsolete almost instantaneously. “If that happens, we’ll need severe societal change,” he said.
To this latter point, Spector expressed what is perhaps the “unreasonably optimistic” hope that if and when ChatGPT wreaks havoc on the white-collar labor market, it will mobilize more equitable economic policies. “Arguably, for over a century, people have tried to confront these questions and to say, ‘What should be the basis for material support for people?’” Spector said. “If the disruption is universalized, perhaps we’ll do a better job figuring out how these things work.”
Like Alfeld, Spector emphasized the “massive” disruptive potential of these technologies for the economy, and he reiterated that the pace of their development will soon make their current shortcomings irrelevant.
Beyond the question of whether ChatGPT threatens the future of our economy, there are also imminent hazards posed by its current vulnerability to bias and misinformation. For Riondato, these concerns are far more pressing than its potential for college-level cheating. For one, he explained, “it’s possible that search engines — companies like Google and Microsoft — will adopt it and incorporate it in their search engine interfaces.”
One problem with this, he said, is that generative AI models like ChatGPT are currently susceptible to surfacing inaccurate information if it appears in the datasets on which they are trained. This will be compounded, Riondato added, if different AI-powered search engines begin “feeding off of” each other’s false information.
Riondato was also concerned that AI-powered search engines would reduce the diversity of information available. “My worry for something like ChatGPT is that it will limit you to what you [now] find on the first page,” he said. “And not only that, it will limit you to what the company running the model will decide that you should know … We benefit much more from an informed public that is exposed to a variety of points of view.”
In terms of how to move forward, Riondato emphasized the importance of learning how best to use ChatGPT and other generative AI. “Like any tool, [it] can be used well or badly, and for positive or negative things,” he said. “But I feel like it’s a kind of tool that we haven’t learned yet how to use.”
To this end, he pointed to the importance of developing regulations to mitigate the harmful effects that misuse of AI could have on society, comparing ChatGPT to a car. “There are very strong regulations about what a car is and what it’s supposed to do and how it’s supposed to react in some situations,” he said. For instance, he recommended considering regulations on “how [AI like ChatGPT] are supposed to answer some questions, or refuse to answer some questions, and how diverse the answer should be.”
Alfeld, though, pointed out that these flaws are by no means unique to computers. “It’s wrong sometimes; it makes very silly mistakes,” he said.
“[But] do you want to dig through all the silly things students have ever said?” Alfeld countered. “When you talk to ChatGPT, there’s this bizarre feeling [for] many, many people that there’s an intelligence on the other side.” With his characteristic irony, Alfeld suggested that our dismissive attitude toward the “stupidity” of ChatGPT is perhaps motivated by an anxious desire to differentiate our own minds from computers.
Whether or not ChatGPT and AI technologies like it are equivalent to human minds, this “bizarre feeling” means that no conversation about regulation can escape these existential questions. Nina Aagaard ’26 noted that, as the technology becomes more ubiquitous in content creation, the college, and society at large, should make it a norm to cite when ChatGPT’s words are being used. “It’s creating work that needs to be cited … as a source in the same way that a peer-reviewed article would be,” Aagaard said, even if it’s not “original” in the same way that individuals’ words are.
Some aspects of successful regulation go beyond merely imposing a rule. Alfeld, for instance, thinks it’s crucial to disincentivize students from using ChatGPT to bypass intellectual labor. “Right now, students can spend 20 hours on a paper and get an A-minus, or they can spend 20 minutes and get a B,” he said, emphasizing that it won’t be long before that gap closes even more.
Alfeld has found an argument that works to modify the incentive structure for his intro CS students. Although ChatGPT is “reasonably good” at writing simple code, Alfeld noted that students who simply “copy-and-paste” code will end up with only basic skills that are increasingly undesirable in the job market. “If you do that [copy and paste], why would anyone ever hire you?” he retorted. “The alternative is they give it to an AI [who will do it for free].”
Many others agree that regulations at the campus level should aim at cultivating thoughtful users of ChatGPT, rather than banning the technology outright. Much of this happens at the level of faculty-student relationships, as Alfeld described. AI and the Liberal Arts (AILA) is hoping to create more venues for these exchanges on campus, according to Spector, who sponsors the club. “I think that it’s particularly important that people … who have deep knowledge in many different ways of viewing the world talk to each other to figure this out,” he said. “People should not feel like this is out of their control, and should stand up and talk to each other about what their concerns are, what their hopes are.”
The question of campus-wide regulations is intimately connected with the broader social implications the technology will have. For Spector, the most powerful thing Amherst can do to spearhead the behemoth process of regulating these technologies is to push back against the monomania of corporations and tech giants by setting a standard of nuanced, interdisciplinary discourse. “I have zero confidence that the narrowly technically educated people at the core of the big AI companies are going to do the right thing,” he stated. “That is not the way to a healthy future.”
Computing Answers
Every discussion about ChatGPT between The Student and campus community members was underlain by certain philosophical anxieties — about what it means for computer software to write fluently, and perhaps, what it means to be human at all.
This question hung in the air of the Red Room on the evening of April 13, where Alfeld spoke at an AILA-sponsored panel event about the future of AI in warfare. As his co-panelists, two big names in the legal and political field of AI regulation, decried the brutality of AI’s military applications, Alfeld scanned the crowd full of his students. When the Q&A portion began, his poker face began to fall away.
Andy Arrigoni Perez ’24, a computer science major, approached the microphone with a question directed at Bonnie Docherty, a Human Rights Watch expert on autonomous weapons systems, or “killer robots.” He pushed back against her view that such systems should be banned, citing the possibility that AI technologies could make warfare more humane by eliminating decisions motivated by anger or fear.
Docherty resolutely shut him down, maintaining that any delegation of life-or-death decisions to AI is a degradation of human dignity. Right on cue, Alfeld interjected. “I’ve seen AI show more empathy than some people I know in real life,” he retorted.
In his office, where the doors are always open to the chatter of the CS lounge, Alfeld reiterated what these conversations are exposing about the “supreme arrogance of humanity.”
“We, as a species, have always talked about what it means to be human … and we don’t have a universally agreed-upon definition,” he said. When people insist that AI lacks compassion, originality, or dignity, Alfeld thinks it’s something like a self-soothing mechanism.
“These computers are deterministic machines,” Alfeld said, “and so there are all sorts of questions at play if we say that they can be intelligent.”
According to Professor of Philosophy …
Continued on page 11