
Examining the danger of AI-generated disinformation

In many ways, 2023 was a breakout year for artificial intelligence, with explosive advancements and increasingly widespread adoption of generative AI in particular. Since the debut of generative AI tools such as ChatGPT, a litany of alarms has sounded about the threats they pose to reputations, jobs and privacy. Experts in ASU’s Global Security Initiative (GSI) say another threat looms large but often escapes our notice: how artificial intelligence tools can be used for disinformation, and the security implications that follow.

The holy grail of disinformation

“The holy grail of disinformation research is to not only detect manipulation, but also intent. It’s at the heart of a lot of national security questions,” said Joshua Garland, interim director at ASU’s Center on Narrative, Disinformation and Strategic Influence.

Detecting disinformation is a focus of Semantic Information Defender (SID), a federal contract led by software company Kitware Inc. in which ASU is participating. Funded by the U.S. Defense Advanced Research Projects Agency (DARPA), SID aims to produce new falsified-media detection technology. The multi-algorithm system will ingest large volumes of media data, detect falsified media, attribute its source and characterize malicious disinformation.
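To make the multi-algorithm idea concrete, here is a minimal sketch in Python of how such a pipeline might chain detection, attribution and characterization stages. SID’s actual design is not public, so every name, threshold and scoring rule below is an illustrative assumption, not Kitware’s or DARPA’s implementation.

# Hypothetical sketch of a multi-algorithm falsified-media pipeline.
# None of these stages reflect SID's real design; each is a placeholder
# for a model that would be trained on real media data.
from dataclasses import dataclass, field

@dataclass
class MediaReport:
    item_id: str
    falsified_score: float = 0.0   # 0 = likely authentic, 1 = likely falsified
    attribution: str = "n/a"       # suspected source or generator family
    characterization: list = field(default_factory=list)

def detect(item: dict) -> float:
    # Stage 1: an ensemble of detectors votes on whether media is falsified.
    # Stand-in here: average the per-detector scores supplied upstream.
    scores = item.get("detector_scores", [])
    return sum(scores) / len(scores) if scores else 0.0

def attribute(item: dict, score: float) -> str:
    # Stage 2: attempt attribution only for likely-falsified media.
    return item.get("suspected_generator", "unknown") if score > 0.5 else "n/a"

def characterize(item: dict) -> list:
    # Stage 3: tag the narrative intent (e.g., topic, target audience).
    return item.get("narrative_tags", [])

def run_pipeline(items: list[dict]) -> list[MediaReport]:
    reports = []
    for item in items:
        score = detect(item)
        reports.append(MediaReport(
            item_id=item["id"],
            falsified_score=score,
            attribution=attribute(item, score),
            characterization=characterize(item) if score > 0.5 else [],
        ))
    return reports

if __name__ == "__main__":
    sample = [{"id": "clip-001", "detector_scores": [0.9, 0.8],
               "suspected_generator": "diffusion-model-family",
               "narrative_tags": ["climate-policy"]}]
    for report in run_pipeline(sample):
        print(report)

The point of the sketch is the staged structure: detect first, then spend attribution and characterization effort only on media that already looks falsified.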

“Disinformation is a direct threat to U.S. democracy because it creates polarization and a lack of shared reality between citizens. This will most likely be severely exacerbated by generative AI technologies like large language models,” said Garland.

Disinformation and polarization surrounding climate change could also worsen.

“The Department of Defense has recognized climate change as a national security threat,” he said. “So, if you have AI producing false information and exacerbating misperceptions about climate policy, that’s a threat to national security.”

Garland added that the technology’s climate impact goes beyond disinformation. Programs like ChatGPT are energy intensive, requiring massive server farms to supply the computing power needed to train them. Cooling those data centers consumes vast amounts of water as well. Given the chatbot’s unprecedented popularity, researchers like Garland fear it could take a troubling toll on water supplies amid historic droughts in the U.S.

The promise (and pitfalls) of rapid adoption

“Right now, we are seeing rapid adoption of an incredibly sophisticated technology, and there’s a significant disconnect between the people who have developed this technology and the people who are using it. Whenever this sort of thing happens, there are usually substantial security implications,” said Nadya Bliss, executive director of GSI, who also chairs the DARPA Information Science and Technology Study Group.

She said ChatGPT could be exploited to craft phishing emails and messages, targeting unsuspecting victims and tricking them into revealing sensitive information or installing malware. The technology can produce these emails in high volume, and they are harder to detect than conventional phishing attempts.

“There’s the potential to accelerate and at the same time reduce the cost of rather sophisticated phishing attacks,” Bliss said.

ChatGPT also poses a cybersecurity threat through its ability to rapidly generate malicious code, enabling attackers to create and deploy new threats faster than security countermeasures can be developed. The generated code could be continually modified to dodge detection by traditional antivirus software and signature-based detection mechanisms.
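To illustrate why signature-based defenses struggle against code that is constantly regenerated, here is a minimal sketch in Python of an exact-match signature check. The hashes and payloads are made up for illustration; no real antivirus engine is this simple.

# Minimal sketch of signature-based detection and why mutation evades it.
# The "known bad" hashes and sample payloads are invented for illustration.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A signature database: hashes of previously observed malicious payloads.
KNOWN_BAD = {sha256(b"print('malicious payload v1')")}

def is_flagged(payload: bytes) -> bool:
    # Exact-match signature check: catches only byte-identical payloads.
    return sha256(payload) in KNOWN_BAD

original = b"print('malicious payload v1')"
mutated  = b"print('malicious payload v1')  # trivial tweak"

print(is_flagged(original))  # True: matches a stored signature
print(is_flagged(mutated))   # False: one edit, new hash, detection missed

Real engines use richer signatures and heuristics than an exact hash match, but the cat-and-mouse dynamic is the same: each regenerated variant forces defenders to update their defenses.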
