
Advancement of AI Can Complement, Compete with Human Creativity

ChatGPT by OpenAI, a chatbot that replicates human conversation based on a given prompt, is a game changer for creators everywhere. The currently free-to-use program employs artificial intelligence to locate and synthesize relevant information to produce written responses to most prompts users can imagine. While ChatGPT was trained on a wide variety of internet-sourced material, the technology does not have live access to the internet and cannot retrieve external information. Even so, if you ask ChatGPT to edit your cover letter, research a topic for class, or compose a sonnet in the style of Wordsworth on the most outlandish topic, it will deliver.

The technology’s seemingly limitless capabilities have sparked intellectual excitement with a healthy dose of apprehension. The world is enraptured by its potential practical applications. Access to this kind of free tool is of particular interest to notoriously broke college students. We can use it in classrooms, in future work environments, or for creative hobbies. With this in mind, how will we choose to use ChatGPT? There is, of course, the adage of “just because we can, doesn’t mean we should,” but that hasn’t halted technological advancement in the past. Technology has always been developed to make tasks more efficient and streamlined. It has also always been met with fear and anxiety. The takeover of robots and artificial intelligence, the obsolescence of humanity, and the death of originality are all common fears expressed in our media and society. Throughout history, technologies now commonly accepted as harmless were at the root of widespread criticism. The transition from manuscripts to bound and printed books in the late Middle Ages was a source of anxiety for people who feared a loss of human touch. ChatGPT is a tool like any other, and it may come with a similar cost.


From a human perspective, one must consider the potential AI has to exert control over our lives. Many AI systems have documented biases — for instance, the U.S. Department of Commerce found that facial recognition AI can misidentify people of color. Human beings choose the data that AIs are trained on, which means human bias is built into the foundation of the program. If a developer with a lesser emphasis on diversity were to create a popular AI system, the software could perpetuate our society's unconscious biases. A computer algorithm in Broward County flagged African-American defendants as “high risk” twice as often as it did white defendants. If AI is supposed to reshape how we conduct our lives, we have to ask whether it can do so in a completely impartial way.

For academic institutions, the existence of ChatGPT presents a different set of problems. Higher education institutions — Oberlin included — often mandate student adherence to an honor code, which stipulates that all submitted content must be the student’s original work. Using an AI for class submissions would constitute a serious breach of academic policy but can be more difficult to catch than other methods of cheating. However, the uses of the application are more varied than simple plagiarism. Would it be inherently problematic to use ChatGPT to create a crash course ahead of a chemistry exam or to conduct a wide survey of a specific historical topic? To reject the legitimacy of this application outright in an academic setting, without exploring its potential positive applications, would constitute a dangerously reactionary rejection of the new and different.

With opportunity comes a cost, and ChatGPT is no different. The software can hyper-efficiently research and write content — significantly faster than a human ever could. Entry-level positions such as paralegals, copywriters, and social media associates may be rendered obsolete by the efficiency of ChatGPT. Despite this possibility, we must recognize that even as occupations disappear in the wake of new technological advancements, more will always emerge in their place.

In its current form, ChatGPT cannot generate new information or ideas, and the implicit context of a prompt is largely irrelevant to it, so humans are still needed to feed it the right information so it can best complete the task at hand. Human use also risks human misuse, but thankfully, ChatGPT currently has certain embedded safety features: it refuses to answer questions that would help you do something illegal, to generate hateful or otherwise offensive speech, or to produce content it deems intentionally misleading.

As with all new technology, the question is not only what ChatGPT is now but what it may become in the future. As with all technology, the evolution of AI is inevitable, and we cannot and should not try to stop it. Instead, let’s try to talk about how to use it, consider how to complicate its applications, and understand how advancing artificial intelligence can compete with and complement human creativity.

Editorials are the responsibility of the Review Editorial Board — the Editors-in-Chief, Managing Editor, and Opinions Editors — and do not necessarily reflect the views of the staff of the Review.

Letter To The Editor

Contract Grading Democratizes Writing

I recently saw an opinion in the Review (“Contract Grading Detrimental to Oberlin Academics, Student Success,” The Oberlin Review, Dec. 9, 2022) that critiqued the use of contract grading in certain courses. Having had the opportunity to take "Re-envisioning Writing: Connection, Negotiation, and Empowerment" under the contract grading system, I feel compelled to voice my opinion on what exactly contract grading is and what it brings to the table.

To start with, contract grading is not an easy ‘B’ or above. It is not a means to reward minimal effort with a passing grade. Professor of Writing and Communications Laurie McMillin’s contract reads, “[To get a] B+ you need to have 0–3 absences, no missed or ignored assignments, [and] all assignments completed fully and appropriately according to stated assignment guidelines.” The contract makes it explicitly clear that not only must students complete assignments on time, but submitted work must also meet certain standards. In many respects, it is comparable to a standard grading system, which likewise rewards appropriate completion of work.

The main philosophy behind the contract grading system is to allow students to unlock creative potential in their work. In the standard letter grading system, fearing that a poor grade on one assignment will drag down the overall course grade, students often opt for simpler, more straightforward ways to complete their assignments. They may be hesitant to explore a new topic or write in unconventional styles, anticipating a negative response from their instructor if they choose to do so. That’s where the beauty of contract grading lies: it lowers the stakes on assignments just enough that creative expression isn’t stifled. The contract aims to allow students to take risks with assignments and go beyond what the conventional path dictates.

It is therefore a means of democratizing writing: by reducing the stigma and repercussions of not conforming to typical approaches to an assignment, contract-graded classes promote more inclusive styles and encourage the flow of creative thought. Sometimes risks don’t pay off, and that’s okay — the contract offers some protection in those situations. For example, in “Re-envisioning Writing,” Professor McMillin allows students to redo assignments and resubmit within 48 hours if they do not meet the required standard. Furthermore, students get multiple opportunities to receive feedback on their work, whether from peer reviews, the course writing associate, or the professor, to push their writing to a higher standard.

The contract is unequivocal: it rewards hard work, but not at the expense of creativity. It is a testament to the idea that there is no “correct” way of doing an assignment, that each approach has its merits and demerits, and that you have the freedom to choose how you approach your work.
