Diversity & Inclusion
Deepfakes
Much has been written about the threat deepfakes pose to civil discourse, public trust, and political processes. But these manipulated videos can also cause untold emotional and reputational harm at the individual level, and they overwhelmingly target women. In this article, Kelsey Farish – one of Europe’s leading legal experts on deepfakes and manipulated media – discusses a few key points to be aware of.

A deepfake is commonly defined by academics and technical experts as a piece of AI-generated audiovisual media which purports to show someone doing something they did not do, but in a manner so realistic that the human eye cannot easily detect the fake. In plain English, the word “deepfake” is typically used as a catch-all to describe face-swapping videos. When the algorithm used to generate deepfakes was first shared online in 2017, it was released as a free software tool that anyone with a bit of technical knowledge could use. Principally, it was used to insert the faces of female celebrities into pornographic films, transforming them into unwilling participants in a novel form of image-based sexual abuse (the preferred terminology, which includes harms such as “revenge porn”).

Although many deepfakes remain in the realm of sexually explicit videos, they can be used in any context. Some are completely innocent and humorous, or used as a form of political satire. Some deepfakes have even been employed in a therapeutic context, for example to allow individuals to virtually say goodbye to deceased loved ones. In the medical field, Alzheimer’s patients may benefit from deepfake technology, where it enables them to engage with younger versions of themselves and family members.

The above is mentioned because it is important to contextualise the current deepfake ecosystem. Like any technology, deepfakes are not inherently “bad” or “dangerous”: although they can be used for deceptive and harmful purposes, they can also be used for beneficial ends. This duality makes regulating deepfakes very difficult in practice, especially given the ease with which they can be created. Furthermore, and as many lawyers will appreciate, just because a deepfake is offensive does not necessarily mean that it is actionable as a criminal or civil offence. For example, a parody deepfake of a politician may be crude or distasteful, but the creator’s rights to freedom of expression may still be protected.
Today, a fairly believable deepfake can be generated using just one photograph of the intended target. In addition to deepfake mobile apps, specialist freelancers even sell bespoke deepfakes for as little as £5 per video on marketplaces such as Fiverr. As of June 2020, almost 50,000 deepfakes made available to the public had been detected; by December 2020, that number had nearly doubled. It goes without saying that the age of the deepfake is only just beginning. So, here are a few important things to remember:

1. You needn’t be a celebrity to be at risk. Deepfakes can be used by anyone with a motive. This could include a colleague who seeks to hamper your professional ambition, or an (ex-)partner who submits falsified evidence to a family court. More recently, we have even seen the case of a parent seeking to damage the reputation of her teenage daughter’s cheerleading rivals. As with all forms of defamation or harassment suffered online, anyone can be the victim of an unwanted deepfake, irrespective of their celebrity status or public profile.

2. Deepfakes are a gendered issue. As explained above, anyone can theoretically become a victim of a deepfake. That said, women account for some 90% of the victims of deepfakes and other forms of image-based sexual abuse. Sir Tim Berners-Lee, the inventor of the world wide web, has stated that he believes the “crisis” of gendered abuse online “threatens global progress on gender equality.” Several campaigns and advocacy groups, including the #MyImageMyChoice campaign, call for tougher laws and policies on this important issue.

3. Convincing deepfakes can be made using only one image of the victim. Unless you have absolutely no…
[Images: Emma Watson, Gal Gadot and Scarlett Johansson, each targeted by image-based sexual abuse deepfakes. Photos © Georges Biard and Gage Skidmore, CC-BY-SA-3.0.]
[Image: Tom Cruise, the subject of satirical deepfakes. Photo © Gage Skidmore, CC-BY-SA-3.0.]
Sensity is the leading research firm on deepfakes. It reports that of the 85,000 deepfakes that have been detected, more than 90% depict non-consensual porn featuring women.