

PROHIBITING IMPERSONATION OF POLICE IN AN ERA OF DEEPFAKES?

TANIA LEIMAN, ASSOCIATE PROFESSOR & DEAN OF LAW, FLINDERS UNIVERSITY, AND ANTHONY STOKS, LLBLP HONOURS STUDENT, FLINDERS UNIVERSITY


Australia’s recent bushfire crisis has highlighted the critical importance of announcements from public authorities including police – via still or moving images and sound on traditional news media, websites or social media platforms. But what if we couldn’t be sure that videos were trustworthy, or if images and sound had been manipulated to spread misinformation or cause harm? ‘Deepfakes’ first appeared online in 2017, created by a machine learning algorithm that digitally “face swap[ped] celebrity faces onto porn performers’ bodies”. 1 Deepfakes include “the full range of hyper-realistic digital falsification of images, video, and audio…[at] the “cutting-edge” of increasingly realistic and convincing digital impersonation”. 2 In the physical world, people can be expected to realise when someone is being impersonated. 3 In the digital world, how can we know these videos are ‘fake’? For now, many Deepfakes are labelled as such in the original posts and are simply used for entertainment purposes, 4 primarily via YouTube clips depicting celebrities in films they have never featured in. 5 But as more are generated, it is less likely they will be labelled. Where content is re-posted on social media without reference to the original site, there may already be no indication it’s a Deepfake.

Creation of these hyper-realistic images, video, and audio requires “some part of the editing process [to be] automated using [artificial intelligence or] AI techniques”. 6 The machine learning algorithm involves two competing AI systems working together to “form … a generative adversarial network (GAN). The first step in establishing a GAN is to identify the desired output and create a training dataset for the generator [AI system 1]. Once the generator begins creating an acceptable level of output, video clips can be fed to the discriminator [AI system 2]”. 7 The generator continues to create ‘fake’ video clips which are spotted by the discriminator until the discriminator is no longer able to detect a ‘fake’ - a Deepfake clip almost impossible for humans to detect.
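To make the adversarial training loop described above concrete, the following is a minimal sketch in Python (assuming the PyTorch library, with random toy data standing in for video frames). It is illustrative only: real Deepfake systems use deep convolutional networks over images and video, not the tiny networks shown here, but the generator-versus-discriminator structure is the same.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a "generator" that maps random noise to data,
# and a "discriminator" that scores data as real (1) or fake (0).
latent_dim, data_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0   # stand-in for 'real' training footage
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)                  # generator's attempt at a 'fake'

    # Discriminator step: learn to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label its output as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Training continues until the discriminator can no longer reliably tell the generator’s output from the real data – the point at which, for image and video GANs, the output becomes difficult for humans to detect as fake.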

Deepfake technology is increasingly accessible, 8 including by children and hackers, raising concerns about “unforeseen and unintended consequences. It is not that fake videos or misinformation are new, but things are changing so fast… challenging [the public’s] ability to keep up”. 9 Proposed responses include creating in-built indicators to “verify photos and videos at the precise moment they are taken”, 10 using metadata and blockchain to create a record of when the original picture or video is made 11 – a solution unlikely to be scalable, given the vast number of images uploaded online every day. 12 Incorporating fake-video detection in social media platforms 13 has also been suggested, but may not combat Deepfakes targeted at specific individuals, 14 and risks regulating lawful creation of videos for satirical, educational, or entertainment purposes or for sharing privately.
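As a rough illustration of the ‘verify at the moment of capture’ idea, the sketch below (plain Python with hypothetical function names; an ordinary dictionary stands in for a blockchain ledger entry) fingerprints a file when it is created and checks later copies against that record. Actual proposals anchor such records on a blockchain so they cannot be quietly altered after the fact.

```python
import hashlib
import time

def capture_record(media_bytes: bytes) -> dict:
    """Create a provenance record at the moment of capture:
    a cryptographic fingerprint of the file plus a timestamp."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": time.time(),
    }

def is_unmodified(media_bytes: bytes, record: dict) -> bool:
    """Any later edit to the pixels or audio changes the hash,
    so a mismatch signals the file is not the original capture."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

# Example: register a clip at capture, then test a re-posted copy.
original = b"...raw video bytes..."
record = capture_record(original)
print(is_unmodified(original, record))            # True
print(is_unmodified(original + b"edit", record))  # False
```

Even if such records were widely adopted, the check only works where the original capture was registered and where platforms bother to verify against it – one reason the approach is unlikely to scale to everything posted online.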

PROTECTING THE PUBLIC: PROHIBITING IMPERSONATION OF POLICE

It is critical that we can rely on information we receive and interactions we have with our emergency services. To protect and enable us to clearly identify a police officer with state authority, legislation in every Australian jurisdiction prohibits impersonation of police. As society and technology have changed, what makes police distinctive has changed, and these offences have changed too. Provisions enacted in the 1850s and early 1900s assumed a context of in-person human interaction, specifying that certain accessories were to be owned and used only by police. Amendments in the 1930s and 1950s (after widespread adoption of photography where police could be identified in still images; and paralleling a rise in ‘private police’ possessing similar items and accoutrements to police but not required to dress like police) created separate offences for the wearing or possession of police uniforms or clothing. In the 1990s and 2000s (after the widespread introduction of CCTV), conduct impersonating police became a separate additional offence in most jurisdictions.

Common legislative prohibitions generally cover those situations where the position of ‘police’ is used to intimidate, threaten, or inconvenience the public. 15 Exceptions allow impersonation for ‘social entertainment’ or ‘satire’. The limited caselaw 16 involves individuals who said or did something in a physical context, usually whilst possessing intention or recklessness to impersonate, and it was clear who to charge – the person who said or did the prohibited action. Several key principles emerge:

• Impersonation of police is a serious matter warranting serious penalties;
• Elements of the offence focus on the act itself rather than its impact, with the latter relevant to sentencing;
• Context and conduct are critical;
• Any representation is assessed objectively – would a reasonable member of the public perceive a person’s appearance, statements, or conduct to be that expected of a police officer?;
• Consideration of the implicit power of police (i.e. inherent power to intimidate) is relevant to sentence.

EXISTING PROHIBITION NOT SUFFICIENT TO MEET NEW RISKS

Existing provisions 17 may not be adequate to address the challenges now posed by Deepfakes. Who should be charged? What elements should be proved for the offence to be made out?

If a ‘real’ person is portrayed in a Deepfake, 18 they may not even be aware such a depiction existed, so neither committed an act nor had any requisite intention. 19 If a Deepfake depicts a ‘person’ who no longer exists or has never existed, then who is doing the impersonation? What about those who share or re-post Deepfakes? For images generated by GANs, what about those who upload the data training sets? What about developers of the AI algorithms? Even if an appropriate actus reus could be identified, what, if any, mens rea should be required - intent to impersonate, intimidate, threaten, or inconvenience? Proving this might be difficult, especially as these videos are often created for ‘entertainment’ purposes by those testing the capability of the technology. 20

While posting a Deepfake online may be a representation, context will continue to be critical in determining whether someone should be charged. Posting content with a clear title, statement, or description that it is a Deepfake suggests no intent to deceive, intimidate, threaten, harm, or inconvenience the public. However, even if clear warnings are added to ensure the Deepfake is not taken seriously, 21 once it is posted online there is no longer any control over what happens to that content. Interpreting provisions as prohibiting simply the act of impersonation without any mens rea requirement will apply oppressively to those using Deepfake apps or programs for creative purposes. If provisions are interpreted as prohibiting the outcome of impersonating a police officer, then who should be charged if content is subsequently shared or re-posted without acknowledgement that the video is ‘fake’? The viral nature of the internet means posting and sharing content happens almost instantaneously, often without consideration about whether the content is ‘real’ or ‘fake’, sometimes even shared by news and verified social media accounts without rigorous fact-checking. Perhaps one response to this challenge could be a new offence prohibiting dissemination of material that is not appropriately fact-checked.

Convincing Deepfakes cannot be created without machine learning. While humans may code or design the GAN algorithm, it is machine learning which creates the Deepfake. The GAN ‘decides’ when a ‘fake’ video will be indiscernible from a ‘real’ video while “not completely under the control of human ‘handlers’”. 22 This raises new questions about whether criminal prohibitions can effectively address acts or outcomes that humans can never commit on their own, or whether criminal liability should and could be attributed if and where artificial intelligence is involved in an offence. Even doing so would not account for multiple parties potentially involved in the wider (and possibly entirely innocent) online dissemination of Deepfakes.

Existing provisions include exceptions for ‘social entertainment’ or ‘satire’ without reference to mens rea. ‘Social entertainment’ is not defined, 23 and may encompass anything from blogging to online gaming to using social networking sites 24 – potentially covering a wide variety of online interactions with other people. Posting a Deepfake online that depicts a human image wearing a police uniform, whether viewers of the content know it is a Deepfake or not, might be ‘social entertainment’. Proving it was shared for other purposes would be difficult because the context in which it was originally posted or shared may not be known. Even if the Deepfake purported to be social entertainment or satire, it might still intimidate, threaten, or inconvenience the public.

When they portray figures of authority such as police or other public figures making public interest announcements, Deepfakes pose risks to the public and to Australia’s justice system. Their impact on a society now so reliant on social media for information is a threat the law cannot ignore. Historically, criminal prohibitions against impersonating police have addressed the need for clarity, certainty and reliability when dealing with police and others with state power to direct our lives and impact our liberty and autonomy. The existing legal framework has significant limitations in responding to the new challenges of Deepfakes purporting to be police. The new issues identified here can be a catalyst for further reform. Protecting the credibility of police is now as important as maintaining peace, order, and good government.

Endnotes

1. Samantha Cole, ‘We are truly fucked: everyone is making AI-generated fake porn now’, Vice (online, 25 January 2018) <https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley> cited in Bobby Chesney and Danielle Citron, ‘Deep Fakes: a looming challenge for privacy, democracy, and national security’ (Research Paper No 692, University of Texas Law, August 2018) 4.
2. Bobby Chesney and Danielle Citron, ‘Deep Fakes: a looming challenge for privacy, democracy, and national security’ (Research Paper No 692, University of Texas Law, August 2018) 3-4.
3. Marshall v Mielke [2012] TASMC 28, [14]; Medlycott v Redman [1991] SASC 2932, 2935; Police Act 1998 (SA) s 74(3).
4. Rebecca A Delfino, ‘Pornographic deepfakes—revenge porn’s next tragic act: the case for federal criminalization’ (Research Paper No 2019-08, Loyola Law School, July 2019) 5.
5. Ctrl Shift Face, ‘The Dark Knight’s Tale [DeepFake]’ (YouTube, 19 May 2019) 00:00:00–00:00:59 <https://www.youtube.com/watch?v=TgcvQA6-qBg>.
6. James Vincent, ‘Why we need a better definition of “deepfake”’, The Verge (online, 22 May 2018) <https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news> (emphasis added).
7. Margaret Rouse, ‘deepfake (deep fake AI)’, whatis.com (Web Page) <https://whatis.techtarget.com/definition/deepfake>.
8. See eg Miles O’Brien, ‘Why ‘deepfake’ videos are becoming more difficult to detect’, PBS (online, 12 June 2019) <https://www.pbs.org/newshour/show/why-deepfake-videos-are-becoming-more-difficult-to-detect>.
9. Tom Chivers, ‘What do we do about deepfake video?’, The Guardian (online, 23 June 2019) <https://www.theguardian.com/technology/2019/jun/23/what-do-we-do-about-deepfake-video-ai-facebook>.
10. Kaveh Waddell, ‘1 big thing: tracing deepfakes’, Axios Future (online, 11 July 2019) <https://www.axios.com/newsletters/axios-future-38439855-c7ae-405f-b6ad-19105515d27e.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosfutureofwork&stream=future>.
11. See eg Haya R Hasan and Khaled Salah, ‘Combating deepfake videos using blockchain and smart contracts’ (2019) 7 IEEE Access 41596, 41598.
12. Waddell (n 10).
13. Ibid.
14. See eg Nick Statt, ‘Thieves are now using AI deepfakes to trick companies into sending them money’, The Verge (online, 5 September 2019) <https://www.theverge.com/2019/9/5/20851248/deepfakes-ai-fake-audio-phone-calls-thieves-trick-companies-stealing-money>.
15. Australian Federal Police Act 1979 (Cth) s 63; Criminal Code 2002 (ACT) s 362; Crimes Act 1900 (NSW) s 546D; Police Act 1990 (NSW) ss 203 and 205; Police Administration Act 1978 (NT) s 156; Police Service Administration Act 1990 (Qld) s 10.19; Police Act 1998 (SA) s 74; Police Service Act 2003 (Tas) s 78; Victoria Police Act 2013 (Vic) s 256; Police Act 1892 (WA) ss 16 and 16A.
16. 14 Australian cases and 1 UK case: Doolan v Cooper (1962) 62 SR (NSW) 719; Turner v Shearer [1972] 1 WLR 1387; Schroeder v Samuels (1973) 5 SASR 198; Keynes v Kowald (1976) 13 SASR 354; Cameron v Holt (1980) 142 CLR 342; Fogarty v Brown (1989) 17 NSWLR 21; Medlycott v Redman [1991] SASC 2932; DPP v Burrow [2004] NSWSC 433; Arndt v Sao [2006] QDC 419; Clarkson v R (2007) 209 FLR 387; Michael v State of Western Australia [2008] WASCA 66; Marshall v Mielke [2012] TASMC 28; Opacic v R [2013] NSWCCA 294; DPP v Morgan [2019] VCC 476.
17. Above n 15.
18. See eg YouTube, ‘You Won’t Believe What Obama Says In This Video!’ <https://www.youtube.com/watch?v=cQ54GDm1eL0>.
19. See eg Ryan v The Queen (1967) 121 CLR 205, 213.
20. Rebecca A Delfino, ‘Pornographic deepfakes—revenge porn’s next tragic act: the case for federal criminalization’ (Research Paper No 2019-08, Loyola Law School, July 2019) 14.
21. Medlycott v Redman [1991] SASC 2932, 2935.
22. Ramesh Subramanian, ‘Emergent AI, social robots and the law: security, privacy and policy issues’ (2017) 26 Journal of International Technology and Information Management 3.
23. Police Act 1998 (SA) s 74.
24. Yuan-Hsuan Lee and Jiun-Yu Wu, ‘The indirect effects of online social entertainment and information seeking activities on reading literacy’ (2013) 67 Computers & Education 168, 170-172.
