
THE STRUGGLE TO REIN IN AI

Many call for regulations, but progress is slow. And some worry they’ll inhibit innovation.

BY STRAWBERRY SAROYAN

One warning sign was when Microsoft’s chatbot “Sydney” asked a tech reporter to leave his wife because, it insisted, the two of them had fallen in love. (Sure, the journalist goaded the bot into revealing its dark side. But it did not disappoint!) Soon after, My AI, a Snapchat feature being tested by a researcher posing as a 13-year-old girl, advised her on her plans to lose her virginity to a 31-year-old: Set the scene with dim lighting and candles. Then Geoffrey Hinton, a pioneer in the development of neural networks, quit his job at Google. He cited the proliferation of false information, impending mass job loss, and “killer robots,” and added that he regrets his life’s work.

Ethical and regulatory concerns around AI accelerated with the release of ChatGPT, Bard, Claude, and other bots starting late last year. Issues of copyright infringement, algorithmic bias, cybersecurity dangers, and “hallucinations” (making things up) dominate the conversation. Yet proposed solutions have little to no legal traction. Will this be a replay of social media, where ethics concerns and attendant rules stagnate? Or will governments and other players take a timely, yet not too strict, stand?

Part of the problem is knowing what regulations might be necessary. One issue is expectations for the arrival of artificial general intelligence, or AGI—the point at which machines achieve, and possibly surpass, human capabilities. Predictions of when it will occur range from several years to several decades. If and when the AGI tipping point comes, some fear the machines’ goals could fall out of “alignment” with humanity’s. Hinton, for instance, fears autonomous weapons becoming those “killer robots.” In a summer 2022 survey of machine learning researchers, nearly half believed there was at least a 10% chance AI could lead to “human extinction.” Until these dangers are better understood, rules are hard to propose, let alone pass in government.

Another issue is that even chatbot creators have a hard time pinpointing why a “black box,” as the machines’ opaque processes are dubbed, spits out certain things and not others. In one notorious early breakdown, a Google photo service labeled African-Americans “gorillas.” In another case, an AI-assisted hiring tool at Amazon filtered out female candidates. Both problems were rectified, but systemic change remains challenging.

Some Promising Steps

AI companies say they are open to oversight. Sam Altman, co-founder and CEO of ChatGPT maker OpenAI, was famously receptive to Congress’ questions and suggestions at a May hearing. He visited the Hill along with neuroscientist and AI expert Gary Marcus and IBM chief privacy and trust officer Christina Montgomery. The Senators seemed eager to get ahead of the problem, but some lawmakers had difficulty grasping AI’s fundamentals. And concrete plans were nowhere in sight. For instance, beyond general suggestions, there was no detailed discussion of a regulatory agency that would issue licenses allowing companies to conduct advanced AI work.

There are potential bright spots in the private sphere. Founded in 2021 by former OpenAI employees who wanted to focus on ethical development, Anthropic employs a method known as “mechanistic interpretability.” Co-founder Dario Amodei describes this as “the science of figuring out what’s going on inside the models.” The startup purposely builds models that deceive, then studies how to stop that deception.

Congress is taking some action. The bipartisan American Data Privacy and Protection Act, which would establish privacy rules around AI, was introduced last year. Mutale Nkonde, CEO and founder of AI for the People, a policy advisory firm, notes that it incorporates the concept that privacy is a civil rights issue. “If the working of this AI system does not comply with existing U.S. [civil rights] law, then we can’t use it, in the same way that we wouldn’t release a medicine or allow a food to be introduced to the American people that didn’t meet rigorous standards,” she says.

The Biden administration has released a Blueprint for an AI Bill of Rights outlining privacy commitments—saying that data should be collected with users’ knowledge, and calling for disclosures to be readable, transparent, and brief. It also proposes protections against algorithmic bias, calling for equity concerns to be built into systems’ design and for independent testing of companies’ success in this realm. The blueprint is not an executive order or a proposal for law, however.

The closest the President has come to actual rulemaking happened in July, with voluntary guidelines agreed to by Anthropic, OpenAI, Google, Meta, Microsoft, Amazon, and Inflection (whose founders include LinkedIn co-founder Reid Hoffman). The commitments include security and safety testing by independent parties and the use of “watermarks” identifying material as AI-generated. (The technique is not foolproof, but shows promise.) The companies also agreed to prioritize research on preventing bias in AI systems.

But the European Union leads on setting guardrails. The AI Act, passed by the European Parliament in June, would rate uses of AI on a scale of riskiness and apply rules—and punishments for running afoul of them. High-risk uses include AI that runs critical infrastructure like water systems, as well as tools like facial recognition software (which the Act strictly limits). The Act’s final wording is being negotiated with the two other major EU legislative bodies, and lawmakers hope to pass it by the end of the year.

A Growing Sense Of Urgency

There’s still deep disagreement on the scope of rules. China, which has drafted rules requiring chatbot creators to adhere to censorship laws, proclaimed as early as 2017 its intention to dominate the field. Faced with that challenge, some worry about inhibiting U.S. innovation with too many regulations.

Others, however, believe that allowing misinformation to proliferate via AI, particularly heading into a U.S. election year, could threaten our very democracy. If we don’t have a functioning government, they say, tech innovation is the least of our worries.

Privacy and copyright issues are gaining urgency. A potential bellwether: comedian Sarah Silverman’s suit against OpenAI and Meta for training their models on her book The Bedwetter. Striking Hollywood writers and actors worry that AI could upend the industry by, say, writing screenplays largely autonomously or replacing thespians with lifelike recreations based on scans bought for a one-time fee.

Nkonde sees a larger issue touching all aspects of life: studies show that people believe AI’s pronouncements blindly. “People think these technologies, that are not finished, that do not recognize the whole of society, can make these huge decisions and shape our consciousness,” she says. “And they don’t even really know how to recognize ‘That’s a cat.’ ‘That’s a dog.’”

Tristan Harris, co-founder of the Center for Humane Technology, recently told a group of expert technologists that regulations need to be implemented urgently. But even Harris cautioned against pessimism, pointing to the world’s success in slowing the spread of atomic weapons and averting nuclear war. Things are moving fast with AI, and the issues are perhaps even more complicated. But, he told the audience, “We can still choose which future we want.”
