
OpenAI: THE END OF SCHOOL?


In July 2015, a handful of tech-world luminaries met for a private dinner at the Rosewood Hotel on Sand Hill Road in Menlo Park, California. The group included Elon Musk, Greg Brockman, and Sam Altman. At the time, Altman was the president of the start-up incubator Y Combinator. Brockman was the former chief technology officer of the e-payment giant Stripe. And Elon Musk was, well, Elon Musk.

When this meeting occurred, transformational advances in computing power had created a real sense of excitement in the field of machine learning. For decades, AI had failed to live up to the hype. But by 2015, a string of revolutionary technical breakthroughs had convinced many people that the long AI winter, as some critics had dubbed it, was finally over.

At the same time, a titanic shift in public attitude was underway. Former Silicon Valley darlings, including Facebook, Google, and Apple, were being criticized for their monopoly-like hold on the market. Everything from teenage depression to the collapse of democracy was being blamed on Big Tech.

Musk, Brockman, and Altman understood that to succeed where others had failed, they would need to do more than innovate; they would need to deviate. On that July evening, one of the most ambitious artificial intelligence labs on the planet was born. They called it OpenAI.

Today, the San Francisco-based company is valued at a staggering $29 billion, but in the early years, OpenAI flew mostly under the radar. Then, in 2020, it released a new program, Generative Pre-Trained Transformer 3 (GPT-3), to a limited number of users.

This breakthrough took place far from the glitz and glamor of the Rosewood Hotel, in suburban Iowa of all places, where 285,000 CPU cores yoked together were playing a kind of game, twenty-four hours a day, seven days a week. It was (and remains) one of the most powerful supercomputers on the planet. But to understand what happens inside the mind of this magic box, and why it matters for the future of education, we first need to take a slight detour.

As a field, artificial intelligence is currently fragmented into a number of unique specialties, each one trying to solve a different problem. Some researchers are using AI to make self-driving cars safer; others are trying to improve the accuracy of biometrics; still others are using the technology to design novel proteins that may one day cure disease.

All of them rely on an approach called “deep learning.” GPT-3 belongs to a category of deep learning known as a large language model (LLM). If you’ve ever used an app with an autocomplete feature — a Google Doc, for example — then you’ve already used a very rudimentary kind of LLM. Google, Meta, and a few lesser-known companies all have their own proprietary versions. Earlier this year, Microsoft made a direct “multiyear, multibillion-dollar investment” in OpenAI.

These models represent a new generation of conversant machine learning systems that can produce on-demand text, images, and videos based on a vast online database. Much like our brains, these models learn to identify patterns of language through repeated cycles of trial and error, always guessing at the next … right … word. In the process, some of the “neural” connections grow stronger while others grow weaker, a process loosely analogous to what neuroscientists call “pruning” and what computer scientists call “training.”

Simply by playing this game, over and over again, a trillion times in a row, GPT-3 learned to write original prose with uncanny fluency. It is, as one writer put it, “a freakishly capable tool.”
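To make the guess-the-next-word game concrete, here is a toy sketch in Python. It is nothing like the billion-parameter neural network inside GPT-3; it simply counts which word tends to follow which (a so-called bigram model, trained on a made-up stand-in corpus), which is roughly the idea behind a crude autocomplete.

# A toy illustration of the "guess the next word" game described above.
# Real LLMs like GPT-3 use neural networks with billions of parameters;
# this sketch just counts word pairs to make the idea concrete.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()  # stand-in training text

# "Training": tally how often each word follows another.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Guess the most likely next word, like a crude autocomplete."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else "?"

print(predict_next("the"))  # -> "cat" (seen most often after "the")
print(predict_next("cat"))  # -> "sat" or "ate" (a tie in this tiny corpus)

Swap in more text and longer word histories, and the guesses get better. Scale that intuition up by many orders of magnitude, and you arrive at something like GPT-3.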

Just after Thanksgiving, on November 30, 2022, OpenAI released a new product, ChatGPT, to the general public. This variation of GPT-3 was able to generate tweets, pen poetry, summarize emails, answer trivia questions, translate languages, craft songs, and even write computer programs, all based on simple, intuitive prompts.

When we asked ChatGPT to describe the key difference between itself and its predecessor, here is what the program told us: “In terms of performance, ChatGPT is not as powerful as GPT-3, but it is better suited for chatbot applications. It has been trained on a large dataset of conversational text, so it is able to generate responses that are more appropriate for use in a chatbot context. ChatGPT is also capable of inserting appropriate context-specific responses in conversations, making it more effective at maintaining a coherent conversation.” To which we responded, “Thank you.” To which it responded, “You’re welcome.”
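Readers who want to pose their own questions are not limited to the chat window. OpenAI also offers programmatic access to the same family of models. Here is a minimal sketch, assuming an OpenAI account, the company’s official openai Python library, and an API key set in your environment; the model name and question are illustrative placeholders.

# A minimal sketch of asking ChatGPT a question programmatically, assuming
# OpenAI's official Python library (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT at launch
    messages=[
        {"role": "user",
         "content": "What is the key difference between ChatGPT and GPT-3?"},
    ],
)
print(response.choices[0].message.content)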

It didn’t take long for other people to notice the difference, too. Within five days of its launch, ChatGPT had more than a million users. One of those people was the actor Ryan Reynolds. Less than a month after ChatGPT hit the mainstream, Reynolds, always an early adopter, came out with a commercial for his company Mint Mobile, written entirely by the program, a kind of meta commentary on the future of advertising. Today, ChatGPT is averaging nearly 700 million visits per month, making it one of the top 50 most visited websites globally.

If GPT-3 true believers are to be believed, any company with a product that currently requires a marketing or customer service or tech-support team could use this technology to replace real human beings. And those jobs are just the beginning. In January, news broke that CNET had quietly published dozens of feature articles generated entirely by artificial intelligence on its website. Editors had coyly attributed the machine-written stories to “CNET Money Staff.”

Even programmers aren’t safe. A few months after the original GPT-3 went online, the OpenAI team discovered, to their astonishment, that the model had become remarkably good at computer programming.

It turns out the web is filled with examples of working code, and from those elemental clues, GPT-3 learned to program all by itself.

In the near future, OpenAI is expected to release a new version, GPT-4, said to be even smarter than its predecessor, opening the door to even more sophisticated reasoning. Sooner than you think, every person you know could have access to a virtual assistant that will make Siri and Alexa look like antiques. Already, generative artificial intelligence is being incorporated into traditional search engines, with Microsoft’s Bing — that’s right, Bing! — leading the way.

Which raises the question: In a world where AI can write for you and draw for you and code for you, and, perhaps? maybe? some day? think for you — what does it mean to be educated?

If you go on TikTok right now, you’ll find the hashtag #ChatGPT has more than 600 million views. One video shows a bot solving math problems, another writing papers. In one video, a student is shown copying and pasting multiple choice questions into the tool. Some in academia have declared this the “death of homework.” Their fear is not without merit. This technology really is disruptive. It really is an existential threat to the way schools currently “do” school.

This sudden change in the tech landscape has led to an avalanche of commentary. Cheating is the most immediate concern for schools, but many have noted that ChatGPT also has a propensity for spitting out biased, toxic language. Even when ChatGPT is behaving, it often serves up wrong or misleading information.

In response, a senior at Princeton University named Edward Tian recently developed a program called GPTZero that promises to quickly and efficiently detect AI-generated writing. A month later, OpenAI released its own detection tool. Meanwhile, officials at New York City Public Schools have blocked ChatGPT altogether. But Edward Tian and New York City’s public schools seem to be outliers in this conversation. Even if the prohibition of AI and the use of AI-detection tools succeed in the short term, they will almost certainly fail in the long term. There will always be a workaround.


Across the country and the world, high schools and colleges have begun experimenting with this technology, embracing the best and safeguarding against the worst. “This technology is so powerful,” says Pomfret Director of Technology Tie Watkins. “It’s not a question of if it will be used, but how.”

As a start, many schools, including Pomfret, have revised their academic integrity policies to classify the unauthorized use of generative artificial intelligence as plagiarism. Some teachers have also begun adjusting or creating assignments that teach students the critical thinking skills they need to use generative AI in thoughtful and informed ways. Others have begun crafting questions they hope will be too clever for a chatbot to answer. Still others are considering teaching newer or more obscure texts.

“We need to up our game,” says Grauer Family Institute Director Gwyneth Connell, who oversees academics at Pomfret. “Imagination, creativity, and innovation need to be at the center of how we teach and evaluate our students moving forward.”

Aiden Choi ’23 first learned about DALL·E 2, an image-generating cousin of GPT-3, at a business and engineering conference last summer. For his yearlong senior project in Advanced Photography Master Portfolio, Aiden decided to compete against DALL·E 2. He challenged himself to take a weekly photograph that was better than the one the program could create. Aiden says his photographs have been good, but AI may have the edge. “With DALL·E 2, the return is usually better because it has access to all types of images and data. That can be hard to beat.”

Josh Lake, the head of the Science Department, has also begun tinkering with AI. In his computer science class, he encourages students to use the tool as a coding assistant. Even ChatGPT’s flaws can become fodder for analysis. ChatGPT is not always accurate, so in Lake’s astronomy class, he likes to use it as a conversation starter. “The AI-generated answers provide an opportunity for students (and teachers) to practice critical thinking. Is the ChatGPT response correct or incorrect? How do we know? How can we check?”

ChatGPT can help teachers as well. It can create lesson plans, generate tests and quizzes, and serve as an after-hours tutor. One Pomfret faculty member told us that she used ChatGPT to evaluate a few of her students’ papers, and that the app had provided more detailed and useful feedback than she would have, in a fraction of the time.

Today’s students will graduate into a world full of generative AI programs. To be good citizens, they’ll need to know their way around these tools. What are their strengths and weaknesses? What are their knowledge gaps? What sorts of biases do they contain, and how can those biases be weaponized?

“AI is going to be a part of the world that our students inherit,” Lake says. “If we are doing our job as a school, we must embrace technology, address the issues head-on, and create assignments that incorporate its application.”

Aiden Choi agrees, comparing the advancement of AI to the printing press or the first computer. “While people were initially resistant to new technologies, the inventions have transformed societies. I believe these tools will be part of the norm in five to ten years. Then we will be on to something even more advanced.”

STORY BY Garry Dow

ARTWORK BY Annum Architects and Shawmut Design and Construction

PHOTO BY Corrine Szarkowicz
