
A Wolf in Sheep’s Clothing: Deepfakes

By Ciarán Quinn, SS Business Studies and German


Recently, a video emerged of Tom Cruise in a jewellery shop. While the scene itself is hardly eyebrow-raising, the fact that the American actor was speaking fluent Mandarin in a tone reflective of a giddy teenager certainly is. Even if the video was made in jest and its authenticity is self-evidently questionable, such clips raise questions about the extent of artificial intelligence’s ability to create “deepfake” videos. Consequently, and more worryingly, they also highlight what is at stake in the financial, political, and legal spheres.

Deepfake, a term coined in 2017 by a user on Reddit, refers to the use of deep learning by artificial intelligence to transfer one person’s likeness onto another’s, whether in the form of a video address or a voice message. Deepfakes aren’t an entirely new phenomenon; their origins are traceable to academic endeavours in the 1990s, and even to early forms of photo manipulation in the 19th century, when double exposure of film allowed photos to be merged together. Much of this progress was subsequently picked up by online communities, from which deepfake technology has flourished, for better and for worse. The technology has allowed bias and prejudice in film to be addressed through the reimagining of characters as people of colour or in reversed gender roles. It has also enabled the retouching of historical footage, making deepfake-infused films more accessible and relatable to younger generations, and it serves more sentimental purposes too, such as bringing portraits of loved ones to life so that they smile, as seen on social media platforms such as TikTok. What makes deepfake technology all the more attractive and applicable is its availability online through outlets such as GitHub, which hosts as many as 80 individual open-source deepfake applications.
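To make that mechanism concrete, the sketch below shows, in Python with the PyTorch library, the shared-encoder, dual-decoder autoencoder design popularised by early open-source face-swap projects. It is a minimal illustration only: the network sizes, the 64x64 image assumption, and the single training step are illustrative assumptions, not the code of any particular application.

```python
# Minimal illustrative sketch of the shared-encoder / dual-decoder
# face-swap idea; layer sizes, image size, and data are assumptions.
import torch
import torch.nn as nn

IMG = 64  # assumed 64x64 RGB face crops


def down(cin, cout):
    # Halve spatial resolution while increasing channel depth.
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1), nn.LeakyReLU(0.1))


def up(cin, cout):
    # Double spatial resolution while decreasing channel depth.
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ReLU())


class Encoder(nn.Module):
    """Compresses a face into a compact pose-and-expression code."""
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            down(3, 32),    # 64x64 -> 32x32
            down(32, 64),   # 32x32 -> 16x16
            down(64, 128),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Renders one specific person's face from the shared code."""
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            up(128, 64),                         # 8x8 -> 16x16
            up(64, 32),                          # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1),  # 32x32 -> 64x64
            nn.Sigmoid(),                        # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))


encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketched): each decoder learns to reconstruct its own
# person's faces from the shared encoder's output.
faces_a = torch.rand(8, 3, IMG, IMG)  # stand-in for real face crops of A
reconstruction_loss = nn.L1Loss()(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face, decode with person B's decoder.
# The result shows person B wearing A's pose and expression.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The design choice that makes the swap work is the shared encoder: because it must serve both decoders, it is forced to learn features, such as pose and expression, that are common to both faces, while each decoder learns to paint one specific identity on top of them.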

As with many great inventions, there is a sinister side. Unfortunately, the surge in the uptake of deepfake footage (15,000 deepfake videos online in 2019 versus 145,000 in 2020, spurred on by the pandemic) has seen it fall victim to malevolent purposes. The propensity of deepfake technology to put one person’s face on another’s body has had vicious ramifications. It has been used to spread misinformation to the detriment of political opponents; one such example is a video circulated in 2019 of the Speaker of the House of Representatives, Nancy Pelosi, appearing to slur her words while giving a speech. While the potential for such deepfaked videos to harm a politician’s standing is clear, what is even more frightening is their effect on the ordinary person. Deepfake technology has enabled an upsurge in online blackmail, with victims of such attacks having their likeness superimposed onto staged videos, which are subsequently uploaded to pornographic sites as a means of humiliation. Victims’ likenesses are scraped from social media platforms and then fed into the technology to produce such footage.

The same holds true for audio: recordings of a business executive requesting a large transfer under dubious circumstances have resulted in the theft of millions of euros. One of the first reported cases of fraud involving deepfakes occurred in 2019, when a U.K.-based company was stripped of just over €200,000 after a caller mimicking the German-accented voice of a senior partner requested the urgent transfer of the sum to a Hungarian supplier; the money ultimately ended up in a Mexican bank account before disappearing. Despite the relatively recent deployment of deepfake technology in such illegal activities, estimates placed the value of deepfake-facilitated fraud at over $250 million in 2020.

Even with such acknowledgement of the current and growing threat of deepfake technology, one may still take solace in one’s own powers of observation. Deepfakes currently struggle to defeat the human eye, and upon close inspection it is relatively easy to ascertain what is reality and what is a deepfake. The battle against deepfakes is, however, ongoing, and given the alarming rate at which this technology is developing, coupled with the criminal intent to utilize it, this peace of mind will not last much longer.

Ongoing events in Ukraine reflect this sentiment. As Russian missiles roared overhead, Ukrainians were left in disarray, presenting an opportunity for spreading misinformation. Among such coverage emerged two particularly relevant clips. In one, the Russian Federation’s President Vladimir Putin announces peace in Ukraine, with plans to absorb the eastern Donbas region into Russia, while Crimea, annexed by Moscow in 2014, would become an autonomous state within Ukraine. In the other, President Zelensky of Ukraine can be seen in a press conference announcing the capitulation of Ukraine to the invading Russian forces. While both clips can be identified as fake, betrayed by questionable facial movements and disparities in intonation (audio clips still require a voice actor to mimic the victim’s voice to provide a model for the deepfake technology), at a time when civilians are left scared and confused, such clips can sow the seeds of havoc and turmoil. Furthermore, in the context of combat, Russian troops, frustrated by logistical and operational complications, have become bogged down across Ukraine. This has led Russian forces to communicate with one another via unencrypted radio transmissions, and the use of such open channels has allowed Ukrainian saboteurs to frustrate Russian communication further by jamming signals. Deepfake technology also holds potential here: the likenesses of ground-force commanders could be deepfaked across radio communication, allowing Ukrainian forces to deploy mimicry, issuing dud orders through deepfake applications to instil further panic and disorganization among the Russian army.

From such developments, it is clear that the propensity of deepfake technology to capitalise on disorder is potent, but its ability to mimic political figures also has the potential to influence elections, disenchant supporters, and cause general havoc within the political sphere, even in times of peace. Deepfaked clips of personalities such as Boris Johnson, Jeremy Corbyn, and Donald Trump are widespread. Many scholars have already pointed to the similarities with historical methods of spreading disinformation, including the doctoring of inscriptions in Roman times to rewrite history and the many airbrushed photos from Stalin’s tyranny.

Soon, the emerging capabilities of deepfake technology may do more harm than good.

Despite the clear dangers, legislation across the world has failed to properly address deepfakes. Texas and California have passed legislation to mitigate the use of deepfake technology in the run-up to elections; however, as pointed out by Matthew Feeney, the scope of this legislation is incredibly narrow, it infringes on freedom of speech, it bars the use of deepfake technology even for satire, and overall it seems to serve a federal interest above anything else. In the UK there is no specific deepfake legislation, nor are there “deepfake intellectual property rights” which may be relied on in a dispute. This means any endeavour by a victim to stop such harassment rests upon proving a violation of privacy. Such a defence, as pointed out by Carlton Daniel and Ailin O’Flaherty, relies upon a multitude of cobbled-together claims, which are nowhere near sufficient to support the affected party. Irish legislators have moved to tackle the issue through the Harassment, Harmful Communications and Related Offences Act 2020, which aims to protect victims from pornographic deepfaked videos online, but for the moment, more must be done legally to mitigate the dangers surrounding deepfakes.
