College Grapples With Implications of Artificial Intelligence

Liam Archacki ’24 and Sam Spratford ’24, Editors-in-Chief

“I’m probably an outlier on this particular question, but I think they’re going to get more and more capable. And what they cannot do may shrink. Maybe to nothing.” Eyes alight, Lee Spector, a professor of computer science and founder of the college’s Artificial Intelligence in the Liberal Arts (AILA) initiative, spoke before a rapt audience in Frost Library’s Center for Humanistic Inquiry.

The panel event at which Spector spoke, held on Feb. 20, centered on the implications of a popular new AI-powered chatbot, called ChatGPT, for the future of learning and teaching. ChatGPT has sparked widespread intrigue over its ability to instantly generate fluent and competent responses, in prose paragraphs, to almost any user-inputted prompt. Since its release in November 2022, it has drawn over 100 million users, who have used the AI to generate absurd poetry, entire computer programs, answers to nearly any question you can ask — and far more.

Many commentators have cast the AI’s implications for academic writing in existential terms, with two different Atlantic articles declaring the software’s arrival “The End of High-School English” and the “death” of the college essay. Strong claims like these have been met with equally strong objections, fueling a wildfire debate over what the emergence of generative AI means for academic institutions like Amherst.

While the February panel event focused on how AI could alter the future of academics, The Student’s conversations with Amherst faculty and students revealed broader, often philosophical, excitement and concern extending far beyond the walls of the ivory tower. As Spector, with a slight smirk on his face, put it in an interview with The Student, “A lot of the hand-wringing … about how it can be used to cheat seems pretty much completely beside the point.”

Many community members, like Spector, touched on plagiarism and honor-code policies, but were more eager to consider how generative AI may reshape society at large — by making it easier to spread misinformation, by performing jobs once held by humans, and even by challenging our understanding of what it means to have a mind.

Artificial Academics

Many professors to whom The Student spoke suggested that the panic over ChatGPT as a cheating tool was unfounded, or at the very least misguided.

Assistant Professor of Computer Science Matteo Riondato said that he had already encountered a case of a student using ChatGPT to write code for an assignment they submitted. As a result, this semester, he decided to add a line about ChatGPT to his syllabus “to just make it explicit that this is not what we would like [students] to do.”

Nevertheless, he maintained that he was not too worried about students using ChatGPT to cheat, noting that comparable methods of cheating had already existed.

“For all I know, my students may be outsourcing their code-writing to someone else,” he said. “And is that really different?”

“It doesn’t introduce any new way of cheating that didn’t exist before,” he added. “Therefore, if you’re really concerned about cheating, you should have already been concerned.”

Riondato was able to catch the ChatGPT cheater because the code the AI wrote was “far more complex” than anything discussed in the course. “Even cheating requires brains,” he remarked.

While the AI can offer cogent responses on just about any topic, much of the concern about ChatGPT’s potential as a tool to cheat has been concentrated in the humanities.

English Department Chair Christopher Grobe, whose research probes the nexus of technology and performance, explained that this is because professors “see ChatGPT producing a fluent paragraph of prose, and they [become] so overwhelmed by the fact that this model can do one thing that they struggle to teach other people to do.”

Grobe, who sat alongside Spector at the panel in February, said that he has fielded many emails from educators who want to figure out what, exactly, the AI’s limitations are.

“There are some really key things that it can’t do,” he explained, noting that the current version of ChatGPT struggles to correctly use quotations and produce an interpretation of text — core skills emphasized in the humanities.

The reason for these deficits, Grobe said, lies in the way that generative AI models like ChatGPT actually work, which he said is commonly misunderstood. “[People] think of them as databases or fact engines when really they’re like word prediction,” he said. “So people misunderstand them as having some relation to the truth rather than the arrangement of words.”

Riondato, who researches algorithms for knowledge discovery and machine learning, added that misconceptions about how ChatGPT works result, in part, from “gimmicks” of the user interface, like the fact that it responds to prompts in the first-person and appears to “type” its answers. These features are intended to “capture user attention,” Riondato said.

When it came to the dangers of people taking what the AI generates at face value, Riondato and Grobe were in agreement. “This idea of sounding like someone who knows what they’re talking about, which, again, captures your attention, it makes you trust, as a user, what you’re reading,” Riondato said. “And therefore is very dangerous.”

Nevertheless, Grobe said he thought there were productive ways for students to use ChatGPT in their academic work. “What the current models are good at is generating a lot of variations on something,” Grobe said. “For some people, that could be a useful place to start when starting from a blank page feels scary for a writing task. I think what’s crucial is that it’s a starting point.”

The students with whom The Student spoke described a range of interactions with ChatGPT.

Claire Holding ’26 said that she had heard of students using ChatGPT to “lighten the workload” by summarizing dense articles or performing other rote tasks — in other words, to “help burnout,” rather than as a replacement for original work.

Spencer Michaels ’24 — a law, jurisprudence, and social thought major — has leveraged ChatGPT to automate some of his note-taking workflow. “Let’s say I’m reading Karl Marx, and he’s talking about the French Revolution,” Michaels explained, “I have a plug-in in my note-taking software that will write a summary about the [historical] event automatically.”

Multiple students echoed Grobe’s belief that ChatGPT can generate starting points for many different types of projects.

One of the students who works for Spector’s AILA initiative, Ashley Bao ’26, said she had used ChatGPT on occasion to generate Python code for her personal projects. “It’s pretty useful,” she said. “Sometimes I’ll use it to get ideas … for whatever I’m working on.”

Michaels also said that, when he sits down to write papers, ChatGPT helps him brainstorm and organize his thoughts from the comfort of his room (as opposed to the inconveniences of attending office hours). He’ll probe ChatGPT for

Continued on page 9