Brilliant Technology or BLACK MAGIC?
By: Jillian Wells
Students and educators are witnessing a technological breakthrough that is transforming the way students learn. The emergence of Artificial Intelligence (AI) chatbots, such as Chat Generative Pre-trained Transformer (ChatGPT), allows students to access a smarter and more efficient way of learning. With ChatGPT, students can simply input a question or a prompt, and the AI chatbot will generate an original response that is seemingly unplagiarized. As a result, more and more students are turning to AI chatbots to tackle homework problems, essays and even quizzes with incredible ease and efficiency. The future of education is unfolding before our eyes, and AI chatbots such as ChatGPT are at the forefront of this learning innovation. These AI programs are rapidly expanding their capabilities: within two months of its launch, ChatGPT had already gained over 100 million users. This exponential growth raises important questions about the impact on learning environments. How will schools, like California State Polytechnic University, Humboldt, respond to this evolving technology?
What is AI?
Artificial intelligence is a branch of computer science that allows a programmed device to perform tasks, make decisions and learn on its own. While the origins of AI date back to 1951, the capabilities of AI have dramatically increased in recent years.
According to David Marshall, a lecturer in the computer science department at Cal Poly Humboldt, an AI’s algorithm needs to be trained through reinforcement in order to work. “When the artificial intelligence does a good job of giving us the good answer, it reinforces or improves the connection,” Marshall said. “If we tell it no, that was the wrong thing, it deemphasizes a particular connection.”
While this is a basic function of AI, we don’t fully understand how it works. Marshall added, “Computer scientists don’t actually know what’s happening inside of artificial intelligence. They have an algorithm that says this is how we’re going to teach this program to do things, but what actually happens inside of the program is sort of a little bit of a black magic, a black box.”
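Marshall’s description of training by reinforcement can be pictured with a toy sketch. The code below is purely illustrative: the function name, numbers, and update rule are invented for this example and are not how ChatGPT is actually trained. It simply shows the basic idea of strengthening a connection after a good answer and weakening it after a bad one.

```python
# Toy illustration of "reinforce or de-emphasize a connection."
# Everything here (names, numbers, the update rule) is hypothetical.

def update_connection(weight, feedback, learning_rate=0.1):
    """Nudge a connection strength up on positive feedback,
    down on negative feedback. Strength stays between 0 and 1."""
    if feedback == "good":
        # Reinforce: move the weight a small step toward 1.0
        return weight + learning_rate * (1.0 - weight)
    else:
        # De-emphasize: move the weight a small step toward 0.0
        return weight - learning_rate * weight

# A connection starting at middling strength, adjusted by feedback
w = 0.5
for fb in ["good", "good", "bad", "good"]:
    w = update_connection(w, fb)
    print(f"feedback={fb!r:7} -> connection strength {w:.3f}")
```

Real systems adjust billions of such weights at once, which is part of why, as Marshall notes, what happens inside the trained program remains a “black box.”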
The Good, the Bad, and the Ugly
The advanced capabilities of AI programs like ChatGPT have put educators in a difficult situation. On one hand, students’ use of such programs could be considered plagiarism and hinder the learning process. On the other hand, these programs can also serve as a tool to help enhance learning efficiency.
For students, these programs can be very useful and help them learn more effectively. Logan McDaniel, a junior studying child development at Cal Poly Humboldt, said that students may as well use this available technology because it’s not going anywhere. “It’s just when I get stuck on one of my papers or something, I’ll just look at it and then just use it as a tool to help me figure out the next point and how to move on from the place that I’m stuck at,” McDaniel said. “I know 10-15 people that use it, so quite a bit of people.”
For educators, this may mean shifting the way they teach. “Faculty are now saying, well, maybe we need to change the way we assess, so maybe I need to say, ‘okay, everybody take out a blank piece of paper, I want you to write in class now for me’,” Marshall said. “Some have said, we need to have agreements with students regarding AI, like what is done at the military institutions of higher learning. Students need to attest that no cheating is going to happen. If you find yourself cheating, or somebody else discovers it, lapses in ethics will be reported by the students involved.”
Measures such as in-class exams or honor code agreements for students not to use AI programs are seemingly a band-aid over a bullet hole. Currently, there is very little administrators can do to regulate the use of AI on schoolwork. Enoch Hale, director of the Center for Teaching and Learning at Cal Poly Humboldt, said, “As far as our center is concerned, we don’t have the funding and there aren’t any specific softwares that we can procure that would prevent the cheating.” Hale added that he believes AI is “another extension of human creativity” and the best course of action is to “think through it and with it.”
The Ethical Dilemma
Is student use of AI to complete schoolwork plagiarism? The answer is: yes, no, and we don’t know. While ChatGPT can be seen as a form of cheating, it also falls under the category of using technology to enhance learning. Marshall pointed out that plagiarism traditionally involves copying something that has already been written, but with ChatGPT, the AI generates responses that have never been written before. Using ChatGPT to complete schoolwork may seem like a victimless crime, but Hale views cheating as “allowing other thinkers to do the work for you, that you should do yourself to grow.” By this definition, the use of ChatGPT would be considered cheating. However, there isn’t a conclusive answer yet, as this is still a new and emerging phenomenon. “It’s a very disruptive technology because nobody really has wrapped their arms around how to deal with it. There are a lot of people that have theory, how to integrate this into academics, but there is no solid understanding,” Marshall said.
Helpful in the Workplace?
Not only are AI chatbots being used by students in school, they are also being utilized in workplaces. The Lost Coast Outpost, a local news media company in Humboldt County, uses AI as a tool for multiple purposes. They use AI to generate artistic images to go along with their articles and have the AI dissect multiple-page documents to pick out the main points for their readers. “If people use it smartly it’s a really powerful tool … I think it will allow journalists to do deeper and better reporting,” said Hank Sims, editor of The Lost Coast Outpost. “It does a great deal of the tedious work that we would have to do.” Sims said ChatGPT is good at “highlighting the most important things in a big long document.”
As the use of AI tools in education and professional settings continues to gain traction, Derek Bruff, Visiting Associate Director at the Center for Excellence in Teaching and Learning at the University of Mississippi, said, “I don’t think these tools are going to vanish in 12 months. I think we are going to see them integrated into lots of different workflows and lots of different professional environments, and that means as educators I think it’s part of our responsibility to help students to learn to use these tools as well.”
What’s next for AI?
Although ChatGPT’s skills are impressive, it’s important to recognize its limitations. The AI’s large margin of error can have severe consequences and mislead users. One of the major challenges with ChatGPT is that its data set is static, limiting its ability to provide the latest and most accurate information. While ChatGPT excels at answering specific questions it has been trained on, its results become increasingly unreliable as questions become more complex or stray further from its training data. As Marshall warns, the danger of AI, like social media, is that people tend to trust its information even if it’s not true. Therefore, it’s important to be cautious while using ChatGPT and other AI tools and to validate the information they provide.
WARNING! GENERATED BY AI!
The capabilities of AI chatbots like ChatGPT are undoubtedly phenomenal, but how does it perform when prompted to write a creative story? Below is a comedic, fictional story generated completely by ChatGPT that combines every story in this issue of Osprey.
In a world plagued by earthquakes, homelessness, and dams, a motley crew of Cal Poly students banded together with baddies on bikes and tech enthusiasts to face the seismic dangers and help the homeless.
The tech enthusiasts touted their AI-powered earthquake detection system as the savior, but the students scoffed, “If this is brilliant technology, I’m a wizard.”
But when a massive earthquake hit, the students ate their words. The tech system worked like magic! “I guess we’re all wizards now,” they quipped sarcastically.
However, they quickly realized that relying solely on tech was like putting a band-aid on a gaping wound. They needed more. They turned to renewable energy and traditional ecological knowledge.
The baddies on bikes were in. “We’ll ride for a cause, as long as it’s not for spandex or high-vis jackets.”
As they worked towards their common goal, they encountered setbacks and obstacles, including skeptical locals who thought they were nuts. “You’re telling me, you can control wildfires with fire? That’s like trying to put out a house fire with a squirt gun!”
But the group persevered. They learned that sometimes, you have to put aside your differences and work together, even if it means listening to tech enthusiasts who claim to be wizards. “If you can’t beat ‘em, join ‘em,” they said.
In the end, they triumphed over the seismic dangers and homelessness, and the dams were finally being removed. The group celebrated their victory, but not without a little bit of humor. “I guess you could say we were damsels in distress, but now we’re the dam-removers,” they joked.