The Bulletin - Law Society of South Australia - September 2020


ARTIFICIAL INTELLIGENCE

PROHIBITING IMPERSONATION OF POLICE IN AN ERA OF DEEPFAKES?

ANTHONY STOKS, LLBLP HONOURS STUDENT, FLINDERS UNIVERSITY, AND TANIA LEIMAN, ASSOCIATE PROFESSOR & DEAN OF LAW, FLINDERS UNIVERSITY

Australia’s recent bushfire crisis has highlighted the critical importance of announcements from public authorities, including police, via still or moving images and sound on traditional news media, websites or social media platforms. But what if we couldn’t be sure that videos were trustworthy, or if images and sound had been manipulated to spread misinformation or cause harm?

‘Deepfakes’ first appeared online in 2017, created by a machine learning algorithm that digitally “face swap[ped] celebrity faces onto porn performers’ bodies”.1 Deepfakes include “the full range of hyper-realistic digital falsification of images, video, and audio…[at] the “cutting-edge” of increasingly realistic and convincing digital impersonation”.2 In the physical world, people can be expected to realise when someone is being impersonated.3 In the digital world, how can we know these videos are ‘fake’? For now, many Deepfakes are labelled as such in the original posts and are used simply for entertainment purposes,4 primarily via YouTube clips depicting celebrities in films they have never featured in.5 But as more are generated, it is less likely they will be labelled. Where content is re-posted on social media without reference to the original site, there may already be no indication that it is a Deepfake.

Creation of these hyper-realistic images, video, and audio requires “some part of the editing process [to be] automated using [artificial intelligence or] AI techniques”.6 The machine learning algorithm involves two competing AI systems working together to “form … a generative adversarial network (GAN). The first step in establishing a GAN is to identify the desired output and create a training dataset for the generator [AI system 1]. Once the generator begins creating an acceptable level of output, video clips can be fed to the discriminator [AI system 2]”.7 The generator continues to create ‘fake’ video clips, which are spotted by the discriminator, until the discriminator is no longer able to detect a ‘fake’: the result is a Deepfake clip almost impossible for humans to detect.
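
For readers curious how that adversarial loop looks in practice, the following is a minimal sketch in Python using the PyTorch library. It trains on toy two-dimensional points rather than video, and every network size, learning rate and dataset in it is an illustrative assumption, not the systems described in the sources cited above.

    # Minimal GAN sketch: a generator and discriminator trained
    # against each other, as described in the paragraph above.
    # Toy 2-D data stands in for video frames.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator (AI system 1): maps random noise to a 'fake' sample.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    # Discriminator (AI system 2): scores how likely a sample is real.
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def real_batch(n=64):
        # Stand-in 'training dataset': points clustered near (2, 2).
        return torch.randn(n, 2) * 0.3 + 2.0

    for step in range(2000):
        # 1. Train the discriminator to separate real from fake.
        real = real_batch()
        fake = G(torch.randn(64, 8)).detach()  # don't update G here
        d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
                 loss_fn(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2. Train the generator to fool the discriminator.
        fake = G(torch.randn(64, 8))
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # When training succeeds, G's output is statistically close to the
    # real data and D can no longer reliably tell them apart - the same
    # stopping condition that yields a convincing Deepfake.

Scaled up from toy points to video frames, with much larger networks and datasets, this is the loop that produces clips humans struggle to identify as fake.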

Deepfake technology is increasingly accessible,8 including to children and hackers, raising concerns about “unforeseen and unintended consequences. It is not that fake videos or misinformation are new, but things are changing so fast… challenging [the public’s] ability to keep up”.9

Proposed responses include creating in-built indicators to “verify photos and videos at the precise moment they are taken”,10 using metadata and blockchain to create a record of when the original picture or video was made11 - a solution unlikely to be scalable, given the vast number of images uploaded online every day.12 Incorporating fake-video detection into social media platforms13 has also been suggested, but this may not combat Deepfakes targeted at specific individuals,14 and risks regulating the lawful creation of videos for satirical, educational or entertainment purposes, or for sharing privately.
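
To illustrate the provenance-record idea just described, here is a minimal Python sketch, an illustration rather than any vendor’s actual scheme: the media file is hashed together with its capture time, so any later manipulation changes the fingerprint. The filename is hypothetical, and anchoring the record in a blockchain is represented only by printing it.

    # Sketch of a capture-time provenance record: hash the file's bytes
    # so any subsequent edit produces a different digest.
    import hashlib
    import json
    import time

    def provenance_record(path: str) -> dict:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "file": path,
            "sha256": digest,
            "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                         time.gmtime()),
        }

    # A real system would write this record to an append-only ledger at
    # capture time; verification re-hashes the file and compares digests.
    if __name__ == "__main__":
        print(json.dumps(provenance_record("clip.mp4"), indent=2))

Even this simple version hints at the scalability problem noted above: every image or video uploaded would need its own record created and anchored at the moment of capture.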
