
Blurring the Line Between Human and Machine

AI's potential raises questions about its role in academia

Robert Muroni, News Editor

Since its global release on November 30, 2022, ChatGPT has experienced something of a meteoric rise. Indeed, just five days after launch, the chatbot officially recorded more than one million users. In January, OpenAI, the developers behind the platform, revealed that they had at times struggled to meet capacity demands due to the sheer volume of users.


For those who have used ChatGPT before, the platform's widespread adoption can't have come as much of a surprise. One quick glance at the platform often leaves new users marvelling at its capabilities. From telling jokes to penning full-page articles at the push of a button, ChatGPT can replicate tasks that, for decades, seemed achievable only by humans. More impressive still, in-depth testing of the platform's capabilities has led to some truly remarkable results. On December 31, a paper published by Michael Bommarito and Daniel Katz, professors at Michigan State University College of Law and the Chicago-Kent College of Law respectively, revealed that ChatGPT was even able to pass sections of the bar exam, reporting that the chatbot's top two and top three choices were "correct 71% and 88% of the time."

At McGill, ChatGPT's rapid rise to stardom is unsurprisingly becoming an increasingly prevalent topic of conversation in the classroom. In my experience as a student at the Desautels Faculty of Management, several of my professors have polled students about their knowledge of the platform. Some have gone even further, verbally committing to redesigning future assignments with the knowledge that students can use ChatGPT to help them answer questions. While such commitments mostly drew amused reactions from students in the classroom, there's no doubt that ChatGPT's capabilities have left many professors seriously worried about its broader academic implications. And for good reason: A mere two weeks after the platform made headlines for its bar exam performance, it was in the news again when professors at the Wharton School of the University of Pennsylvania announced that the chatbot was able to pass exams given to their MBA students. Here at Desautels, my Operations Management professor revealed to students that ChatGPT was able to score 80 per cent (an A-) on the 2021 edition of his final exam.

Concerns about the rapid advancement of technology are nothing new. In 1942, Isaac Asimov famously published his three laws of robotics out of fear that robots would eventually replace humans entirely. But never before have machines come this close to fully replicating "human" thought. And while Asimov's fears have yet to materialize, ChatGPT's capabilities do raise fears about humans being replaced by machines, at least in the sphere of academia.

AI looks to be improving at an increasingly fast rate. On January 23, Microsoft officially announced a new multi-billion-dollar investment in OpenAI, the company behind ChatGPT, with a commitment to improving the chatbot's capabilities beyond their current limits. These commitments, while undoubtedly exciting, also cut against the spirit of learning within the academic setting, raising broader questions about the role of AI in academia as a whole.

McGill has frequently taken measures to prevent widespread cheating and preserve academic integrity. Back in 2003, for example, the university responded to the rise of the internet by passing a Senate resolution designed to underscore that "all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures." But while responses to "old" tech were more straightforward, responses to the rise of "new" adaptive tech like AI likely won't be. Unlike search engines such as Google, which actively seek out and return existing web pages, AI generates new content. This means that while Google can only return previously written papers, AI like ChatGPT can go one step further, creating new, never-before-seen papers for its users.

With AI blurring the line between human and machine, blurred too becomes the line between what is and what is not academic integrity. Plagiarism, for example, is currently widely defined as presenting another person's work as your own. But how does this definition extend to the use of AI, where a student presents not "another person's" work, but a machine's, as their own? Questions also arise about how the principles of academic integrity apply to those who use AI simply to aid their work, rather than to replace it. If a student uses AI to teach them how to solve a similarly, but not identically, worded question, or requests AI feedback on an essay they wrote, are these instances of academic malpractice?

These concerns are relatively widespread. Just two months after ChatGPT's release, students took the first steps toward combating the rise of AI in academia by releasing GPTZero, an AI-detection tool designed to red-flag AI-generated writing. While GPTZero is certainly a step in the right direction on the long road to preserving academic integrity, it is exactly that: just a step.

Ultimately, there's no denying that platforms like ChatGPT, powered by fresh investments from many of the world's tech giants, are here to stay. Nor can one deny that as AI continues to blur the line between human and machine, the line between what is and what is not academic integrity blurs with it. While we should welcome the benefits that AI provides, we should equally look to address the very real threats that it poses. Only then will we be able to truly preserve the academic integrity we so clearly value.
