Meta Has Built a Massive New Language AI and It's Giving It Away for Free
from DAWN
By Will Douglas Heaven
Meta's AI lab has created a massive new language model that shares both the remarkable abilities and the harmful flaws of OpenAI's pioneering neural network GPT-3. And in an unprecedented move for Big Tech, it is giving it away to researchers—together with details about how it was built and trained.
“We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration,” says Joelle Pineau, a longtime advocate for transparency in the development of technology, who is now managing director at Meta AI.
Meta’s move is the first time that a fully trained large language model will be made available to any researcher who wants to study it. The news has been welcomed by many concerned about the way this powerful technology is being built by small teams behind closed doors.
“I applaud the transparency here,” says Emily M. Bender, a computational linguist at the University of Washington and a frequent critic of the way language models are developed and deployed.
“It’s a great move,” says Thomas Wolf, chief scientist at Hugging Face, the AI startup behind BigScience, a project in which more than 1,000 volunteers around the world are collaborating on an open-source language model. “The more open models the better,” he says.
Large language models—powerful programs that can generate paragraphs of text and mimic human conversation—have become one of the hottest trends in AI in the last couple of years. But they have deep flaws, parroting misinformation, prejudice, and toxic language.
In theory, putting more people to work on the problem should help. Yet because language models require vast amounts of data and computing power to train, they have so far remained projects for rich tech firms. The wider research community, including ethicists and social scientists concerned about their misuse, has had to watch from the sidelines.
Meta AI says it wants to change that. “Many of us have been university researchers,” says Pineau. “We know the gap that exists between universities and industry in terms of the ability to build these models. Making this one available to researchers was a no-brainer.” She hopes that others will pore over their work and pull it apart or build on it. Breakthroughs come faster when more people are involved, she says.
Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022.
With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Pineau. The team built OPT to match GPT-3 both in its accuracy on language tasks and in its toxicity. OpenAI has made GPT-3 available as a paid service but has not shared the model itself or its code. The idea was to provide researchers with a similar language model to study, says Pineau.
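For a sense of what working with a released model looks like in practice, here is a minimal sketch. It assumes the transformers Python library and one of the much smaller OPT checkpoints hosted publicly on the Hugging Face Hub (facebook/opt-125m is used purely for illustration); the full 175-billion-parameter model described in the article is made available separately to researchers on request, and the generation settings below are illustrative, not drawn from Meta's release.

# Sketch: prompting a small OPT checkpoint with the transformers library.
# facebook/opt-125m is a far smaller sibling of the OPT-175B model
# discussed in the article, used here only to illustrate the workflow.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # illustrative checkpoint, not OPT-175B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; sampling parameters are arbitrary examples.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))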
OpenAI declined an invitation to comment on Meta’s announcement.
Google, which is exploring the use of large language models in its search products, has also been criticized for a lack of transparency.
The company sparked controversy in 2020 when it forced out leading members of its AI ethics team after they produced a study that highlighted problems with the technology.
Culture clash
So why is Meta doing this? After all, Meta is a company that has said little about how the algorithms behind Facebook and Instagram work and has a reputation for burying unfavorable findings by its own in-house research teams. A big reason for the different approach by Meta AI is Pineau herself, who has been pushing for more transparency in AI for a number of years.
Pineau helped change how research is published in several of the largest conferences, introducing a checklist of things that researchers must submit alongside their results, including code and details about how experiments are run. Since she joined Meta (then Facebook) in 2017, she has championed that culture in its AI lab.
“That commitment to open science is why I’m here,” she says. “I wouldn’t be here on any other terms.”
Ultimately, Pineau wants to change how we judge AI. “What we call state-of-the-art nowadays can’t just be about performance,” she says. “It has to be state-of-the-art in terms of responsibility as well.”
Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”
Weighing the risks
Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms—such as the generation of misinformation, or racist and misogynistic language?
“Releasing a large language model to the world where a wide audience is likely to use it, or be affected by its output, comes with responsibilities,” she says. Mitchell notes that this model will be able to generate harmful content not only by itself, but through downstream applications that researchers build on top of it.
Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, warts and all, says Pineau.
“There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you should not release a model because it’s too dangerous—which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
Bender, who coauthored the study at the center of the Google dispute with Mitchell, is also concerned about how the potential harms will be handled. “One thing that is really key in mitigating the risks of any kind of machine-learning technology is to ground evaluations and explorations in specific use cases,” she says. “What will the system be used for? Who will be using it, and how will the system outputs be presented to them?”
Some researchers question why large language models are being built at all, given their potential for harm. For Pineau, these concerns should be met with more exposure, not less. “I believe the only way to build trust is extreme transparency,” she says.
“We have different opinions around the world about what speech is appropriate, and AI is a part of that conversation,” she says. She doesn’t expect language models to say things that everyone agrees with. “But how do we grapple with that? You need many voices in that discussion.”
https://www.technologyreview.com/2022/05/03/1051691/meta-ai-large-language-model-gpt3-ethics-huggingface-transparency/
Image credit: NHUNG LE