
problems; hers will focus on harm to marginalized groups. 353 Congress has asked how to foster innovative ways to combat online harm, and one response, in her words, is that “what truly stifles innovation is the current arrangement where a few people build harmful technology and others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future.” 354 Finally, it is critical that the research community keep privacy in mind. AI development often involves huge amounts of training data, which can be amassed in invasive ways 355 and whose collection is in tension with data minimization principles. As noted above in the transparency context, implementing adequate privacy protections for such data may be difficult in practice and may require creative solutions. Eventually, AI systems may be trained on far less data than the current appetite demands, but it is unclear how long that shift will take. 356
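To make the data minimization point concrete, the following is a minimal sketch, in Python, of a hypothetical ingestion step that keeps only the fields a model actually needs and pseudonymizes the one identifier used to link records. The field names, the salted-hash scheme, and the minimize() helper are illustrative assumptions, not any organization's actual pipeline, and a step like this would not by itself constitute adequate privacy protection.

```python
# Hypothetical sketch: minimizing a record before it enters a training corpus.
# Field names and the salted-hash scheme are illustrative assumptions only.

import hashlib

SALT = b"rotate-me-per-dataset"  # assumption: a secret, per-dataset salt


def pseudonymize(value: str) -> str:
    """One-way, salted hash so a user ID can link records without exposing it."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]


def minimize(record: dict) -> dict:
    """Keep only the fields the model actually needs."""
    return {
        "user": pseudonymize(record["user_id"]),  # linkable but not identifying
        "text": record["text"],                   # the training signal itself
        # email, location, device_id, etc. are deliberately not copied over
    }


print(minimize({"user_id": "alice@example.com",
                "email": "alice@example.com",
                "location": "40.7,-74.0",
                "text": "some post text"}))
```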

E. Platform AI interventions

Mitigation tools

The use of automated tools to address online harms is most often framed as an issue of detection and removal, whether before or after content is posted. But platforms, search engines, and other technology companies can and do use these tools to address harmful content in other ways. They have a range of interventions or “frictions” to employ, including circuit-breaking, downranking, labeling, adding interstitials, sending warnings, and demonetizing bad actors. 357 Some platforms already use such mitigation measures, but their relative secrecy means that few details are known, at either a systemic or individual level, about their efficacy or impact. These interventions are generally automated, and thus many of them would have the same inherent flaws as AI-based detection tools, as they would still depend on the ability to identify particular types of content.
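To illustrate how such graduated frictions could be wired to an automated classifier, the following is a minimal sketch in Python. The thresholds, the Intervention categories, and the classify() stub are hypothetical assumptions for illustration, not any platform's actual policy or system.

```python
# Hypothetical sketch: routing content to graduated "frictions" based on a
# classifier's confidence score, rather than a binary remove/keep decision.
# Thresholds, categories, and classify() are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class Intervention(Enum):
    NONE = auto()
    LABEL = auto()          # attach a contextual label
    INTERSTITIAL = auto()   # require a click-through warning
    DOWNRANK = auto()       # reduce distribution in feeds or search
    CIRCUIT_BREAK = auto()  # pause amplification pending human review


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> float:
    """Placeholder for an ML model returning P(content is violative)."""
    return 0.0  # stub; a real system would call a trained classifier here


def choose_intervention(score: float) -> Intervention:
    # Graduated response: the less certain the model, the lighter the friction.
    if score >= 0.95:
        return Intervention.CIRCUIT_BREAK
    if score >= 0.80:
        return Intervention.DOWNRANK
    if score >= 0.60:
        return Intervention.INTERSTITIAL
    if score >= 0.40:
        return Intervention.LABEL
    return Intervention.NONE


if __name__ == "__main__":
    post = Post("p1", "example text")
    print(choose_intervention(classify(post)))
```

The point of the graduation is that lighter frictions can tolerate lower model confidence, which matters given that these interventions inherit the detection flaws noted above.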

353 See, e.g., Nitasha Tiku, Google fired its star AI researcher one year ago. Now she’s launching her own institute, The Washington Post (Dec. 2, 2021), https://www.washingtonpost.com/technology/2021/12/02/timnit-gebru-dair/?s=03; Tom Simonite, Ex-Googler Timnit Gebru Starts Her Own AI Research Center, WIRED (Dec. 2, 2021), https://www.wired.com/story/ex-googler-timnit-gebru-starts-ai-research-center/?s=03.

354 Gebru, supra note 352.

355 See, e.g., John McQuaid, Limits to Growth: Can AI’s Voracious Appetite for Data Be Tamed?, Undark (Oct. 18, 2021), https://undark.org/2021/10/18/computer-scientists-try-to-sidestep-ai-data-dilemma/.

356 See, e.g., Tom Simonite, Facebook Says Its New AI Can Identify More Problems Faster, WIRED (Dec. 8, 2021), https://www.wired.com/story/facebook-says-new-ai-identify-more-problems-faster/; H. James Wilson, et al., The Future of AI Will Be About Less Data, Not More, Harvard Bus. Rev. (Jan. 14, 2019), https://hbr.org/2019/01/the-future-of-ai-will-be-about-less-data-not-more.

357 One proffered example of demonetization is for Google to use the probability scores that AI assigns to violative content in search results, which result in its blocking or demotion, to penalize misinformation-filled sites “in the algorithmic auctions Google runs in which sites … bid for ad placements.” Noah Giansiracusa, Google Needs to Defund Misinformation, Slate (Nov. 18, 2021), https://slate.com/technology/2021/11/google-ads-misinformation-defunding-artificial-intelligence.html. See also Ryan Mac, Buffalo gunman’s video is surfacing on Facebook, sometimes with ads beside it, The New York Times (May 19, 2022), https://www.nytimes.com/2022/05/19/technology/buffalo-shooting-facebook-ads.html.
