ChatHB: Preparation for Life and Artificial Intelligence
Endless information is at our fingertips. Can’t remember that famous actor’s name? Google it. Need the weather forecast? Check your app. Need driving directions? Apple Maps. Waze. Want to do all of this hands-free? Alexa? Hey Siri?
State-of-the-art technology using artificial intelligence is not new. It is around us every day. Our smartphones don’t require a password because they unlock through facial recognition. When we text, the autocomplete function suggests words and sentences to help us quickly and efficiently send our message. Netflix recommends movies or documentaries we might like based on what we watched last weekend. Amazon gives us a robust list of recommended items based on our searches.
But lately, there’s a lot of buzz about artificial intelligence, or AI for short. OpenAI, a research and deployment company, launched ChatGPT just last year, and the discussion has reached a fever pitch ever since. What exactly are these new applications? What are the implications of AI in education? Will AI take over the world?!
Data scientist Cal Al-Dhubaib met with members of Hathaway Brown’s administrative team and the alumnae Head’s Council to help explain and demystify all the chatter around AI. “I want to create an understanding of AI, but then I want to make it boring,” he explains. “Let’s see it for the pattern recognizing and reacting system that it is. I want people to come away from an AI discussion knowing how AI tools work, but also understanding they are not perfect.”
Generative AI
The popular AI platform ChatGPT - along with other programs like Microsoft’s Copilot, Adobe’s Firefly, and Google’s Bard - is referred to as generative AI. These tools act as chatbots capable of having human-like conversations with users and generating large amounts of content, including text and images. OpenAI has also released an art platform called DALL-E 2, which generates images and artwork in seconds based on the prompt it is given. Want to see artwork of Cleveland’s Terminal Tower in the style of Van Gogh? How about in the style of Picasso or Monet? DALL-E 2 quickly and impressively delivers.
The platforms are capable of generating so much content because they are trained on billions of examples of data and human language; models trained this way are known as large language models. “It’s important to understand that what looks like intelligence is just an emergent property of a model being exposed to a lot of examples of human expression,” explains Al-Dhubaib. “I like to say that it’s autocomplete on steroids.”
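For readers who want to see the “autocomplete” idea in miniature, the toy Python sketch below makes the point concrete. It is nothing like a real large language model under the hood; it simply counts which word tends to follow which in a few example sentences and then predicts the most common follower - the same basic move of learning patterns from examples. The example text and words are invented purely for illustration.

from collections import Counter, defaultdict

# Toy "autocomplete": learn which word tends to follow which
# in some example text, then predict the most common next word.
training_text = (
    "students ask questions and students ask for help "
    "and teachers ask questions too"
).split()

next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`, if any.
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("ask"))  # prints "questions" (seen twice, vs. "for" once)

A large language model does this on a vastly larger scale, drawing on billions of examples and far richer context than a single preceding word, which is why its output can look so fluent.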
Now that generative AI tools are readily available, individuals who are not programmers or data scientists can interact with AI directly, instead of it just working in the background. Within seconds, you can ask ChatGPT to do more than you can imagine, from the simple to the complex: generate a grocery list for Spanish paella, write a poem about life at an all-girls school, explain the importance of the Magna Carta, or find the solution to a complex calculus problem.
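Those who do write code can reach the same tools through an API. The sketch below is a minimal, hypothetical example of sending one of the prompts above to a chat model, assuming the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is a placeholder for whichever model you have access to.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Send a single prompt and print the model's reply.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any available chat model
    messages=[
        {"role": "user",
         "content": "Write a short poem about life at an all-girls school."},
    ],
)
print(response.choices[0].message.content)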
The Good and Bad of AI in Education
Generative AI tools like ChatGPT are exciting and can help students in a multitude of ways. They can help explain complex math problems, brainstorm ideas, and proofread. “A huge positive of AI is that it can act as a tutor to which every student has access,” HB’s Chief Information Officer Barry Kallmeyer shares. “Tutors can be expensive, and this can help level that playing field.”
However, the tools also raise questions about academic integrity. “There is still a lot of concern that students can misuse these tools,” Kallmeyer explains. “We need to normalize their use and find ways to help our students use them ethically.”
As with all resources, Kallmeyer encourages students to double check and verify information found through generative AI tools. Al-Dhubaib agrees, “Language models can be wrong, and they can produce citations that look real but don’t exist. You must do your due diligence.”
AI also challenges educators to take a hard look at the questions they are asking. Language models can supply an endless stream of words on a huge variety of topics, but they do not necessarily reflect critical thinking, feeling, or understanding. Al-Dhubaib asks, “What does it mean to demonstrate learning when creating words is now cheap or even free?”
And just where does all the information in tools like ChatGPT come from? “Behind every AI are humans,” Al-Dhubaib explains. “It’s important for users to understand that humans are making the design choices and selecting and curating what data goes into the training process for that given language model.”
With humans guiding the process, biases and limitations are possible. “A lot of these enterprises say they have ethical AI research teams, but there are currently no standards to hold the industry accountable,” Al-Dhubaib explains.
Government Regulations
Al-Dhubaib believes government regulation of AI in the United States is inevitable. One example of what that could look like is the European Union’s proposed Artificial Intelligence Act, which would be the world’s first comprehensive legal framework for AI. According to the World Economic Forum, the proposed law focuses “primarily on strengthening rules around data quality, transparency, human oversight, and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.”
Although some critics say regulation holds back competition, Al-Dhubaib believes a set of standards is necessary, just as consumers have protections through regulatory agencies like the FDA and FAA. “What you think is best is not necessarily what someone else thinks is best. This creates chaos, ambiguity, and room for bad actors to be subversive,” he says. “I’m a big fan of standards that hold us to the same measuring stick.”
Until there are regulations, ethical and responsible behaviors are voluntary, and many in the industry are stepping up. For example, some large language model companies are creating “data cards,” which might give information about where data was sourced or how it was filtered for bias. “Think of it as a nutritional label on food products, but for AI,” Al-Dhubaib explains.
AI at Hathaway Brown
At HB, large language models like ChatGPT are changing the landscape of learning. “Our students will never know a world without AI, so we need to lean into this and help our students navigate the fast-paced advancements,” Head of School Dr. Fran Bisselle shares. “Generative AI tools will make us more efficient, and they will also make the value of human relationships and good judgment more important.”
For many years, Kallmeyer has led HB’s technology review committee, made up of faculty and staff. During the current school year, he expects the topics covered will be entirely focused on AI. “The technology and our access to it are moving rapidly, and we need to build structures to make sure conversations are shared among students and employees,” he explains.
Upper School Director Rachel Lintgen worked with student senate representatives to create artificial intelligence guidelines. The document underscores HB’s philosophy to “embrace the use of AI and other emerging technologies with responsible oversight, transparency, and appropriate use.” Clear guidelines are shared, and the policy states that “submission of work that uses or is aided by unauthorized or uncited technology is considered a violation of the honor code.”
As students, alumnae, educators, and families know, the true mission of HB is preparation for life. “No doubt artificial intelligence will play an important role in our lives moving forward,” says Dr. Bisselle. “We will empower our students to be ready.”