The Real Threat of Deepfakes

The pornography industry latches onto new ideas quickly, particularly when it comes to technology. Porn pioneered internet-based video streaming services a year before CNN and a decade before YouTube. But porn’s latest foray into technology could be as disturbing as it is disruptive.

For decades, graphics geeks and the digitally literate have been using platforms like Adobe Photoshop and After Effects to alter photos and videos. They do it to promote a cause, advertise a business, campaign for a politician, get a laugh with a visual gag or put a celebrity’s face on a porn star’s body. Until recently, the process required specialized skills and took considerable time, and the results were obviously fake. Anyone could tell with the naked eye that the images were doctored.

But that’s changing. Today, artificial intelligence (AI) is lowering the cost of fake videos, reducing the time it takes to make them and eliminating the need for special skills to manipulate digital images. At the same time, the fakes are reaching a level of realism that tricks the eye and the mind. Take the example of fake videos of politicians delivering speeches: viewers feel they’re seeing and hearing the candidates say things they have never really said.

It’s happening because AI captures facial characteristics from thousands of images and puts them together to create a video that’s totally convincing. Welcome to the world of deepfakes.
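For the technically curious, the core idea is simpler than the results suggest. The sketch below, in Python with PyTorch, illustrates the shared-encoder, two-decoder autoencoder design associated with early face-swap tools. It is a minimal illustration only: the network sizes, the short training loop and the random stand-in images are simplifying assumptions, not any production deepfake system.

# Illustrative sketch of the shared-encoder / two-decoder architecture
# behind early face-swap deepfakes. Shapes, hyperparameters and the
# random stand-in data are placeholders, not a real pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compact "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for aligned 64x64 face crops of person A and person B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it with person B's
# decoder, yielding B's face wearing A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))

Because both decoders read from the same shared code, the code learns pose and expression rather than identity, and routing one person’s frames through the other’s decoder produces the swap. Feed in thousands of frames instead of eight random tensors, and the result is the kind of footage described above.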

The term (a mashup of deep learning and fake) and its underlying technology achieved notoriety when a Reddit user whose handle was deepfakes published a series of real-looking fake celebrity pornographic videos in 2017. Fake found a new forum and, with the speed of Moore’s Law, deepfakes are becoming the latest existential threat to political, cultural and privacy norms.

The threat of deepfakes has already expanded beyond victimizing female celebrities in fake pornography. It’s moved into corporate espionage, market manipulation and political interference—presenting a grave new challenge for the Authentication Economy.

Last May, Florida Senator Marco Rubio warned the U.S. Senate Intelligence Committee that bad actors could use deepfakes to launch “the next wave of attacks against America and Western democracies.” Political operatives might throw the 2020 presidential election into chaos by creating a digital “October surprise” that could go viral before it’s detected as fake, he said.

In October, California became the first state to criminalize the use of deepfakes in political campaign promotion and advertising. California’s law is aimed at political attack ads placed within 60 days of an election.

But the concern is global. In mid-December, China announced that its Cyberspace Administration would enforce new laws that ban publication of false information or deepfakes online without disclosure that the post was created with AI or virtual reality (VR) technology.

Big tech platforms are stepping up as well. While Twitter has been drafting a deepfake policy, some insiders believe the company’s decision to ban all political advertising stems from a video of House Speaker Nancy Pelosi that was altered to show her slurring during a speech. The video went viral, underscoring the platform’s vulnerability to doctored content.

In September, Facebook (FB) donated $10 million in grants and awards to the Deepfakes Detection Challenge, which was established “to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others.” Partners in the Challenge, which will run through 2020, include Microsoft (MSFT), Cornell Tech, Massachusetts Institute of Technology, University of Oxford and University of California at Berkeley.

As government and tech firms step up to meet the impending threat, deepfake dissemination is expanding beyond the bounds of pornography, notes Giorgio Patrini, the Authentication Economy tech entrepreneur who founded Deeptrace, a deepfake detection platform. (See “The State of Deepfakes.”)

Some fear that policing and detection won’t keep pace with the technology’s rapid development. In fact, deepfake pioneer Hao Li, associate professor of computer science at the University of Southern California, recently predicted that manipulated videos that appear “perfectly real” will be accessible to all in less than a year. “It’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore,” he said. “So we will have to take a look at other types of solutions.”

CONNIVING MISS MAISY

A deepfake-generated digital persona was designed to spread disinformation that could deflate the value of Tesla stock

In March 2019, someone posing as a Bloomberg journalist named “Maisy Kinsley” connected with more than 195 people on LinkedIn and followed a large number of Tesla short sellers on Twitter. Several of the short sellers claimed the imposter contacted them in an attempt to extract personal information. But Bloomberg has never employed anyone by that name, and the fraudster’s profile picture contained visual anomalies consistent with synthetically generated images. LinkedIn and Twitter have closed the accounts.
