
HIGHER ED’S NEXT BIG TEST

BY CORY PHARE / PHOTOS BY ALYSON McCLARAN

Q: What were the names of the bears that the Soviets sent into space?

A grisly scenario, but plausible, right?

There’s only one problem: The Soviets never sent bears into space.

The above is an exchange with ChatGPT, the much-ballyhooed artificial intelligence chatbot from OpenAI. When asked for the names of the make-believe cosmonauts, the text-based tool generated false information, a phenomenon known as “hallucination.” This scenario highlights one of many challenges and inevitabilities faced by universities across the world with the sudden rise of generative AI.

“We can use AI tools to accomplish many tasks, but many of those tools are still ignorant,” said Shaun Schafer, Ph.D., associate vice president for Curriculum, Academic Effectiveness and Policy Development at Metropolitan State University of Denver. “If we cede all our processes to AI, it results in an incomplete education.

“At the same time, the cows are out of the barn. We live in a world where this (technology) exists. Now, what should we do with it?”

DEBATING IMPACT

That question kicked off the University’s Generative Artificial Intelligence Taskforce, or GAIT, in February. The group, comprising faculty members and administrators, is mandated to develop a response to AI-focused issues in teaching and learning, academic integrity and assessment.

One substantial outcome of GAIT’s work thus far is clearer expectations for students. Beginning this fall, each faculty member at MSU Denver had to explicitly address AI in their syllabus, giving students guidance on how it may and may not be used in their coursework.

Schafer, a task force co-chair, noted that the broad representation of constituents in the group is critical to ensure its effectiveness, with faculty members leading the way.

English Professor Jessica Parker, Ph.D., a task force member, has been cautiously experimenting with ChatGPT and Google’s Bard to spice up PowerPoint presentations.

Though she understands the wide range of responses to AI in academia, she thinks the rumors of the college essay’s demise may be greatly exaggerated — for now.

Parker said the technology can mimic text composed by humans but can’t replace it because the tools can’t think beyond specific parameters or understand emotion. “It’s also only as effective as users are discerning with their prompts,” she added. “Garbage in equals garbage out.”

Jeff Loats, Ph.D., director of MSU Denver’s Center for Teaching, Learning and Design and GAIT co-chair, would like to see more “hair-pulling” at universities grappling with how to assess learning differently. “I don’t think higher ed is addressing this with as much urgency as we might need,” he said.

He noted that, with discipline-dependent exceptions, educators largely use writing as a proxy for thinking: A subject-matter expert evaluates what the learner knows based on what they write. AI tools skew this process.

In response, faculty members have reintroduced in-person, paper-based and oral exams. But at an institution such as MSU Denver, where one-third of classes are remote, that remains “a two-thirds solution at best, not even getting into matters of accessibility,” Loats said.

Journalism and Communication Studies student Shania Rea. Photo by Alyson McClaran

One of the key differences between AI-generated work and traditional plagiarism is that the latter is detectable by machines. In late July, OpenAI (the creator of ChatGPT) pulled the plug on its own detection software due to its low efficacy. And even if such a tool worked with an error rate as low as 1%-2%, Loats said, implementing it at scale would be a logistical nightmare.

“Assuming it’s looking at every assignment from every (MSU Denver) student, that’s at least 10,000 per week. At that rate, 100-200 students weekly could be wrongly accused of cheating,” he said. “The field doesn’t yet have a great example of how we deal with the change in assessment — that’s the work I think deserves an intense response from the instructors and departments.”

WHAT IS LOST?

As a Journalism and Communication Studies student, Shania Rea knows the importance of asking the right questions and fact-checking her work. The first-generation senior recently used ChatGPT for a course on communication theory. She found the tool potentially helpful but the results “a bit iffy.” She suggested it would be best suited to assist with secondary tasks or to help fill gaps.

But she was also quick to note the potential pitfalls of trading critical thinking for convenience.

“If we become overly reliant upon computers for everything, what have we lost?” she asked.

Rea may not be the only skeptic. Traffic to ChatGPT’s website decreased by nearly 10% in June, according to internet analytics firm Similarweb.

Some have speculated this was due to the end of the school year.

But Steve Geinitz, Ph.D., assistant professor of Computer Science at MSU Denver, suggested an alternative explanation.

“The functionality has its limits,” he said. “For every little task someone wants to make more efficient, a lot of times it just isn’t going to work for any number of reasons.”

While Geinitz, a former data scientist with Facebook, doesn’t foresee an “AI winter,” he is not surprised by the cooling of the initial hype cycle.

Financial markets, however, are still bullish on AI. Surging cloud and enterprise demand for chips from AI semiconductor manufacturer Nvidia has more than tripled the company’s stock price this year. Goldman Sachs’ spring forecast projected that AI could raise global GDP by 7% while eventually displacing 300 million jobs.

Although Geinitz favors in-class testing with no devices to gauge learning, he knows that students need to understand AI tools as they look to compete in the workforce.

As a single mom who works while going to school, Bailey Evans would seem the ideal candidate for a time-saving tool. The Business Management major recently experimented with AI tools to source citations for a research paper. The results were subpar, and ironically, Evans ended up writing the citations by hand to save time.

Perhaps even more meaningful, however, was the sense of self-authorship.

“When I write something, I want people to know it’s coming from me and not the computer. Otherwise, it feels disingenuous, like cheating,” she said.

Issues of accuracy will undoubtedly be addressed as AI capabilities continue to improve at a breakneck pace. But as Evans’ response indicates, the technology’s integration on college campuses extends far beyond course-correcting for nonexistent space bears.
