
ChatGPT, A Faulty Dream?

by Ethan Xie ‘26

A new era of artificial intelligence and technology is rapidly approaching, and interesting websites and applications are created every day. One of the newest tools to take the world by storm is ChatGPT, a piece of software that takes in prompts and uses artificial intelligence to generate answers to many of the world’s questions. ChatGPT has many great features that often make it a useful tool for answering simple, straightforward questions. However, it still falls short of several important goals.

As a machine learning model, ChatGPT draws on input from its users and text from the web to develop its answers, but those sources are often incorrect or carry strong political biases. Furthermore, ChatGPT’s inability to distinguish a “correct” answer from a “wrong” one can leave it unable to solve even the most basic questions. In theory, with enough “wrong” inputs and false information on the web, ChatGPT could be tricked into giving out completely incorrect answers, because it is incapable of detecting errors in its own output. These shortcomings put on full display how difficult it is to develop an unbiased, correct, and direct piece of software for answering both simple and complex questions.
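The poisoning idea can be made concrete with a toy sketch. The snippet below is purely illustrative and is not how ChatGPT actually works: it models a system that “learns” an answer by majority vote over its training examples, so flooding the data with a repeated falsehood flips its output. The train_and_answer function and the example question are hypothetical.

```python
from collections import Counter

def train_and_answer(training_data, question):
    """Toy 'model': replies with the most common answer seen for a question."""
    answers = [a for q, a in training_data if q == question]
    if not answers:
        return "I don't know"
    return Counter(answers).most_common(1)[0][0]

# Honest training data: three sources agree on the right answer.
data = [("capital of France?", "Paris")] * 3
print(train_and_answer(data, "capital of France?"))  # -> Paris

# Poison the data: five sources repeat the same false claim.
data += [("capital of France?", "Lyon")] * 5

# The majority is now wrong, so the model confidently answers incorrectly,
# with no way to detect the error on its own.
print(train_and_answer(data, "capital of France?"))  # -> Lyon
```

Real language models are vastly more sophisticated than a majority vote, but the underlying vulnerability is the same: a model trained on bad data has no internal way to tell that its output is wrong.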

To overcome these shortcomings, an AI would need not only to take in massive amounts of data and fact-check those inputs, but also to find credible sources online and determine whether they are biased toward a particular agenda before composing an answer. Building a language model capable of solving such complex problems would be a major stepping stone for artificial intelligence, and solutions to many of the world’s hardest problems would soon be just a few clicks away. Nevertheless, the current state of ChatGPT is still quite impressive, and it will be intriguing to see what the future of AI may hold.
