
Deepfakes: what they are and how to spot them


Concern around the potential for deepfake technology to compromise American security and society has led to the drafting of new legislation.

Called the Deepfake Report Act, it was compiled by US Senators Cory Gardner, Rob Portman and Martin Heinrich in their capacity as co-founders of the United States Senate Artificial Intelligence Caucus, along with Caucus members Joni Ernst, Brian Schatz, Gary Peters and Mike Rounds.

It is intended to direct the US Department of Homeland Security (DHS) to conduct an annual study of deepfakes and other types of similar content, they said in a press release, calling the Act a “crucial step” in an era “where we have more information available at our fingertips than ever”.

What are deepfakes?

Described by the US Caucus as “hyperrealistic, digital manipulations of real content that depict events that did not actually occur”, deepfakes have made international headlines in recent months.

Malicious deepfakes include postings of people in compromising sexual situations and clips of politicians and celebrities saying things they never said; related technology powered the face-aging app that went viral on Facebook earlier this year.

Enormous opportunities and serious challenges

“Artificial intelligence (AI) presents enormous opportunities for improving the world around us but also poses serious challenges,” said Senator Gardner. “Deepfakes can be used to manipulate reality and spread misinformation quickly…. (so) we have to be vigilant about making sure that information is reliable and true in whichever form it takes.”

The challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy, added Senator Portman: “As AI rapidly becomes an intrinsic part of our economy and society, AI-based threats such as deepfakes have become an increasing threat to our democracy.”

“AI certainly provides a number of benefits,” said Senator Joni Ernst. “However, some of its applications – like deepfakes – are misleading folks across the country. This poses not only a threat to civil liberties, but to our national security.”

Homeland Security and Governmental Affairs Committee member Senator Gary Peters added: “When we see something with our own eyes, we tend to believe it, and video has become an important way for people around the world to communicate and share information. Deepfakes have the potential to undermine our trust in what we see and hear by creating deceptive content that poses a threat to everything from public safety to our democracy. This bill will task our top intelligence and defense experts with shining a light on these rapidly developing threats and the implications (that) forged content can have on our society.”

New apps and websites coming

South African technology expert Arthur Goldstuck, MD of World Wide Worx, is expecting significant growth in the arena of artificial intelligence tools such as point-and-click adaptations of videos and images. This, he says, is likely to be aided by the creation of apps and websites that will obviate the need for the technical expertise that’s currently required to produce deepfakes.

The flip side of the coin, he notes, is the likelihood of authentic products being presented as fake, a concern that’s been raised by the legal fraternity. The challenge for lawyers and judges is going to be to identify and separate fact from fiction, says Mr Goldstuck, something that can be done by assessing the various elements of the video or image: “Fakes tend to portray out-of-character and even illegal behaviour. The most common test is to check if the news source is credible.”

Seeing is not always believing

Africa Check, in its guide on “How to spot cheap, out-of-context and deepfake videos (https://africacheck.org/factsheets/guide-how-to-spot-cheap-out-of-context-and-deepfake-videos/)”, says seeing is not always believing. Videos claiming to be of xenophobic violence in South Africa went viral in September 2019, says the organisation. An investigation by Africa Check found that many of them were either old or had been shared out of context. “Videos can be easily manipulated. They can also be realistically created using new technology, making people appear to say or do outrageous things.”

Africa Check’s tips on how to identify manipulated videos, videos shared out of context and deepfakes include:

1. Manipulated or poorly edited videos

A video purportedly showing the Speaker of the US House of Representatives, Nancy Pelosi, drunk was watched 2.5 million times on Facebook in just a few days, says Africa Check. It turned out that the video had been slowed down to slur her speech, as can be seen when the original video and the altered video are watched side by side.

To verify the authenticity of a video, Africa Check recommends doing online searches for photographs and news stories relating to it as well as a search for the original. It also advises on doing a reverse image search of screenshots of the video and a search for keywords describing the event.

Another way to verify authenticity is to check playback speed. Free online tools such as Kapwing can slow down or speed up videos until they sound normal, after which the “new” video can be compared to the video in question.
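The arithmetic behind such a playback-speed check is straightforward. As a minimal sketch (the function name and the example durations are hypothetical, and this is not how Kapwing itself works internally), the slowdown factor applied to a clip can be recovered by comparing its running time with that of the original:

```python
def speed_factor(original_duration, altered_duration):
    """If a clip was stretched from original_duration to altered_duration
    (both in seconds), return the slowdown factor that was applied."""
    return altered_duration / original_duration

# Hypothetical example: the altered clip runs 40 s where the original ran 30 s.
factor = speed_factor(30, 40)
print(round(factor, 2))  # 1.33 -> played back roughly a third slower than the original
```

A factor noticeably above 1 suggests the footage has been slowed down, which is exactly the kind of manipulation used to make speech sound slurred.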

It’s also an idea to look for “jumps” or clumsy transitions where something has been added or deleted – this can be done by moving through single frames of the footage.
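A rough way to automate this frame-by-frame check can be sketched in Python. Here each frame is assumed to be a flat list of grayscale pixel values (a toy simplification; real footage would need a video-decoding library), and a sudden spike in the average pixel difference between consecutive frames flags a possible cut or splice:

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel brightness change between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def find_jumps(frames, threshold=50):
    """Return indices of frames that differ sharply from the previous one --
    a possible sign of a clumsy edit."""
    jumps = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            jumps.append(i)
    return jumps

# Smooth toy footage with one abrupt change at frame 3:
frames = [[10, 10, 10], [12, 11, 10], [13, 12, 11],
          [200, 199, 198], [201, 200, 199]]
print(find_jumps(frames))  # [3]
```

Real tools use far more robust measures, but the principle is the same: a discontinuity between adjacent frames is worth a closer look.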

2. Videos shared out of context

This applies to genuine videos that have not been edited or manipulated but which are shared out of context. An example of this, says Africa Check, was footage of a young woman allegedly being stabbed in Welkom in South Africa earlier this year. After going viral, the video was found to have originated in Brazil, and had nothing to do with the alleged murder of the South African woman.

To check if a video is being shared out of context, Africa Check suggests using different search terms to find previous airings and debunks. Taking screenshots of key frames in the video will also allow for reverse image searches.
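Reverse image search engines match screenshots using, among other techniques, perceptual hashing. As an illustrative sketch (toy 4-pixel "frames" and hypothetical function names, not any search engine's actual method), a simple average hash stays stable when an image is recompressed but differs sharply for an unrelated image:

```python
def average_hash(pixels):
    """Toy perceptual 'average hash': each pixel becomes 1 if it is
    brighter than the frame's mean brightness, else 0."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220]
recompressed = [12, 190, 35, 210]  # same scene, slightly degraded
unrelated = [200, 10, 220, 30]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))     # 4
```

This is why a reverse image search on a key-frame screenshot can surface earlier uploads of the same footage even after it has been re-encoded or resized.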

Amnesty International’s YouTube DataViewer can be used to check the origin of a YouTube video. Further, advises Africa Check, pay special attention to a video’s upload date, since old videos can’t depict recent events.

Read the comments accompanying videos and social media posts. People often use the comments to debunk a video, photo or claim, says the organisation, and valuable fact-checking clues or information can be found in posts by others.

3. Deepfakes

According to Africa Check, the name “deepfake” derives from “deep learning” and “fake”. Because they use machine learning and artificial intelligence (AI) to create convincing fake videos, deepfakes have become a major concern owing to their potential for criminal use, says the organisation.

Tips to spot a deepfake include doing a Google search to see who else is reporting on the video and its content. If statements or actions by public figures are genuine, they will be reported by the credible news media.

Take screen grabs from the video and then do a Google reverse image search for the original or a longer version. This can help find earlier uses and the wider context.

Search for a transcript of a speech to compare against the video in question. While people often improvise when speaking in public, it’s still a good place to start. Rely on trusted sources and be wary of anonymous accounts that post videos.

Deepfakes work best with short videos because of the time and skill it takes to make longer ones, so be wary of brief clips. Also be on the lookout for visual clues such as weird-looking faces and bad lip syncing.
