with “shallowfakes” — when malicious actors upload real and unaltered media but change the context and claim it depicts different people at different places or times. 438 It is also possible that people could abuse these tools, extracting data from them and using them for surveillance. 439 As authentication tools advance, and especially as they scale, it is important to ensure that they enhance trust and freedom of expression rather than harm them. Sam Gregory, Program Director of WITNESS, points out that human rights activists, lawyers, media outlets, and journalists “often depend for their lives on the integrity and veracity of images they share from conflict zones, marginalized communities and other places threatened by human rights violations.” 440 Sometimes, however, whether to protect themselves or their subjects, they may need to use pseudonyms, blur faces, or obscure locations. 441 We would not want authentication systems to block the resulting videos, or viewers to ignore them, simply because they lack certain markers.
I. Legislation
Legislative efforts around the world may reflect a view that the only effective ways to deal with online harm are laws that change the business models or incentives that allow harmful content to proliferate. Under debate in Congress are, among other things, proposals involving Section 230 of the Communications Decency Act, data privacy, and competition. Some of these proposals would give the FTC new responsibilities. Congress did not, however, seek recommendations on how to deal with online harm generally, so these proposals are beyond the bounds of this report. The Congressional request is narrower: it asks the FTC to recommend laws that would “advance the adoption and use of artificial intelligence to address” the listed online harms. In fact, platforms and others already use AI tools to attempt to address most of those harms, but these tools are often neither robust nor fair enough to justify mandating or otherwise encouraging their use. We look instead to the development of legal frameworks that would help ensure that such use of AI does not itself cause harm. 442
438 See Bobbie Johnson, Deepfakes are solvable—but don’t forget that “shallowfakes” are already pervasive, MIT Tech. Rev. (Mar. 25, 2019), https://www.technologyreview.com/2019/03/25/136460/deepfakes-shallowfakes-human-rights/.
439 See Sam Gregory, Tracing trust: Why we must build authenticity infrastructure that works for all, WITNESS Blog (May 2020), https://blog.witness.org/2020/05/authenticity-infrastructure/.
440 Id.
441 See id.
442 While some existing laws may provide guardrails for some harms caused by some AI tools discussed herein, those laws are insufficient. See, e.g., Slaughter, supra note 13, at 48; Andrew D. Selbst, Negligence and AI’s Human Users, 100 B.U. L. Rev. 1315 (2020), https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf; Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, Harv. J. L. & Tech. 31:2 (2018), https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.