
Chat GPT and AI Ethics


by Andrei Vernon, Year 13

AI, or artificial intelligence, has become a hot topic in the news in the last few months, and for good reason. With the rapid advancement of technology, AI is becoming more and more integrated into our daily lives, from chatbots on customer service websites to self-driving cars. One of the most notable examples in recent memory is ChatGPT.

Released on November 30, 2022, ChatGPT is a chatbot built on a large language model from OpenAI’s GPT-3.5 series. It works by using a technique called machine learning: the AI is trained on a large dataset of human-generated text, which allows it to “understand” context and generate appropriate responses when it is given new text as input. As a result, ChatGPT can produce human-like text that is often indistinguishable from text written by a person.
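
To make the idea of “training on text” a little more concrete, here is a deliberately tiny sketch in Python: a word-level bigram model that learns which word tends to follow which from a handful of sentences and then generates new text by sampling. This is only a toy illustration of statistical language modelling, not how ChatGPT itself is built; GPT models use large neural networks trained on billions of words, but the basic loop of learning patterns from text and using them to produce new text is the same.

```python
import random
from collections import defaultdict

# A tiny "training corpus" of human-written text (real models use billions of words).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which word tends to follow which word.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start="the", length=10):
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:      # no known continuation, stop early
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat on the rug . the dog sat"
```

Even this toy model shows why the output can sound fluent while meaning nothing: it only echoes patterns in its training text, with no understanding behind them.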

Although the core function of a chatbot is to mimic a human conversation, ChatGPT is versatile, being able to write and debug computer programs; compose music, teleplays, fairy tales, and student essays; write poetry and song lyrics; simulate an entire chat room; and even play games like tic-tac-toe.

Despite its wide functionality, the AI has some significant limitations. For example, it can generate nonsensical or outright wrong information in a confident manner. ChatGPT is programmed to filter out offensive or inappropriate responses, but it can often be duped into generating these kinds of answers if given the right prompt. These flaws raise some big ethical concerns. Some people are worried that ChatGPT could be used to generate large amounts of misinformation, allowing fake narratives to spread more rapidly than ever before. Others worry about AI replacing or atrophying human intelligence. Bots have already surpassed us in speed, and at this rate they may soon surpass us in intellect as well. What will we become if we outsource our thinking to machines?

For schools, the largest concern is plagiarism. Students might use the AI to write their assignments and then pass off its essays and problem sets as their own work. Universities in particular are worried about the damage ChatGPT could do to their lesson plans and have been rushing to catch students using the tool to cheat.

One response, seen in New York City’s public school system, has been to crack down and block access to the tool on school computers and networks. However, others propose a more accepting approach toward AI. They suggest that, much like calculators or cell phones, AI will become commonplace. Instead of banning ChatGPT, we should use it to generate feedback on our work, help with brainstorming and creative tasks, and prepare us to work alongside such systems in the future.

As for those worrying about what AI will do to their jobs, I think that depends on how we adapt to it. As Stephen Hawking once wrote, “If machines produce everything we need, the outcome will depend on how things are distributed. So far, the trend seems to be toward technology driving ever-increasing inequality.” The prosperity that automation brings us may not mean much if it’s funneled to fewer and fewer people. Ultimately, whether our future looks more like Star Trek or Blade Runner might depend on the actions we take today.
