even a majority of integrity harms, particularly in sensitive areas.” 269 In 2021, an executive, Andrew Bosworth, wrote in an employee memo that moderating people’s behavior in the metaverse “at any meaningful scale is practically impossible.” 270 Moreover, even as these tools become better at identifying explicitly harmful content, neither machines nor human moderators may ever be able to deal effectively with “the mass of ordinary and pervasive posts that express discriminatory sentiments in ways that threaten and silence marginalized groups.” 271 Queensland University of Technology Professor Nicolas Suzor, who sits on the Facebook Oversight Board, calls such posts “[t]he internet’s major abuse problem” and explains that “[u]ltimately, abuse and harassment are not just problems of content classification.” 272 It’s not clear that automating decisions about certain kinds of harmful content is something to which platforms or others should aspire anyway. Tarleton Gillespie argues that these decisions “are judgments of value, meaning, importance, and offense. They depend both on a human revulsion to the horrific and a human sensitivity to contested cultural values. There is, in many cases, no right answer for whether to allow or disallow, except in relation to specific individuals, communities, or nations that have debated and regulated standards of propriety and legality.” 273
B. Humans in the loop

If AI tools employed to detect harmful online content are not good or fair enough to work on their own, then an obvious and widely shared conclusion is that they need appropriate human oversight. 274 Professor Sarah T. Roberts explained that the many kinds of harmful content poorly suited for automated filters require humans "called upon to employ an array of high-level cognitive functions and cultural competencies to make decisions about their appropriateness for a site or platform." 275 Their judgment may also be constrained or distorted by the content moderation policies they are required to enforce. Given the amount of online content through which to wade, however, it is entirely implausible to put enough humans in place to monitor all

269 See Seetharaman, supra note 129.
270 See Adi Robertson, Meta CTO thinks bad metaverse moderation could pose an 'existential threat,' The Verge (Nov. 12, 2021), https://www.theverge.com/2021/11/12/22779006/meta-facebook-cto-andrew-bosworth-memometaverse-disney-safety-content-moderation-scale. See also Emily Baker-White, Meta Wouldn't Tell Us How It Enforces Its Rules in VR, So We Ran a Test to Find Out, BuzzFeed News (Feb. 11, 2022), https://www.buzzfeednews.com/article/emilybakerwhite/meta-facebook-horizon-vr-content-rules-test; Tanya Basu, This group of tech firms just signed up to a safer metaverse, MIT Tech. Rev. (Jan. 10, 2022) (describing why current AI detection tools for online harms will fare poorly in the metaverse), https://www.technologyreview.com/2022/01/20/1043843/safe-metaverse-oasis-consortium-roblox-meta/.
271 Suzor, supra note 234 at 65.
272 Id.
273 Gillespie, Custodians of the Internet, supra note 225 at 206.
274 See, e.g., Gillespie, Custodians of the Internet, supra note 225 at 107; Rachel Thomas, Avoiding Data Disasters (Nov. 4, 2021), https://www.fast.ai/2021/11/04/data-disasters/; CFDD, supra note 224 at 14; Shenkman, supra note 224 at 36; Singh, supra note 224 at 34; Google, Removals under the Network Enforcement Law ("Machine automation simply cannot replace human judgment and nuance."), https://perma.cc/SF24-X6ZK.
275 Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (2019) at 34-35.