Combatting Online Harms Through Innovation
sponsored troll accounts. 81 Unfortunately, many of these tools are limited to Twitter because other platforms, like Facebook, restrict their APIs in ways that prevent access to the data necessary to create and test such tools. 82 The need to increase research access generally is discussed below. As with deepfakes, one can expect the battle to continue between those seeking to detect fake accounts and those developing ever more sophisticated ways to deploy them for illicit purposes.
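The detection tools cited above generally work by scoring accounts on behavioral and profile signals. As a purely illustrative sketch of that approach, the toy heuristic below scores a hypothetical account on a few simple signals (account age, posting rate, follower ratio, default avatar); the field names and thresholds are invented for illustration, and deployed systems such as Botometer instead train classifiers on hundreds of features rather than hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical metadata fields for illustration; real platform APIs
    # expose different (and far richer) signals.
    age_days: int
    posts_per_day: float
    followers: int
    following: int
    default_profile_image: bool

def bot_score(acct: Account) -> float:
    """Return a crude 0..1 suspicion score from simple hand-set heuristics.

    Integer "points" are used so the arithmetic stays exact; the weights
    are arbitrary and chosen only to illustrate the feature-scoring idea.
    """
    points = 0
    if acct.age_days < 30:               # very new account
        points += 25
    if acct.posts_per_day > 50:          # inhumanly high posting rate
        points += 30
    if acct.following > 0 and acct.followers / acct.following < 0.05:
        points += 25                     # follows many, followed by few
    if acct.default_profile_image:       # never personalized the profile
        points += 20
    return min(points, 100) / 100

suspicious = Account(age_days=5, posts_per_day=120, followers=3,
                     following=800, default_profile_image=True)
print(bot_score(suspicious))  # → 1.0
```

A rule-based score like this is easy for adversaries to evade once the thresholds are known, which is one reason the text anticipates a continuing contest between detectors and increasingly sophisticated fake accounts.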
C. Website or mobile application interfaces designed to intentionally mislead or exploit individuals

This category of harm appears to refer principally to so-called “dark patterns,” which were the focus of a 2021 Commission public workshop and a later Enforcement Policy Statement. 83 The potential use of AI to detect dark patterns has not been fully explored. 84 Creating effective detection tools may remain challenging for the same reasons noted above with respect to fraudulent and deceptive content generally. Another challenge is the need to resolve complex questions of how to define, identify, and measure dark patterns, 85 which would presumably be a precondition for setting computers to the same task. However, one oft-cited research study used automated tools to help detect dark patterns on shopping sites. 86 Further, the
https://arxiv.org/pdf/2006.06867.pdf; Adrian Rauchfleisch and Jonas Kaiser, The False positive problem of automatic bot detection in social science research, PLoS ONE 15(10): e0241045 (Oct. 22, 2020), https://doi.org/10.1371/journal.pone.0241045.
81 See Mohammad Hammas Saeed, et al., TROLLMAGNIFIER: Detecting State-Sponsored Troll Accounts on Reddit (Dec. 1, 2021), https://arxiv.org/pdf/2112.00443.pdf; Chris Stokel-Walker, Researchers Have a Method to Spot Reddit’s State-Backed Trolls, WIRED UK (Jan. 12, 2021), https://www.wired.co.uk/article/researchers-reddit-state-trolls.
82 See EPRS, Automated Tackling of Disinformation at 33-34 (Mar. 2019), https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624278/EPRS_STU(2019)624278_EN.pdf; Johanna Wild and Charlotte Godart, Spotting bots, cyborgs and inauthentic activity, in Verification Handbook for Disinformation and Media Manipulation (Craig Silverman, ed.) (2020), https://datajournalism.com/read/handbook/verification-3.
83 See https://www.ftc.gov/news-events/events-calendar/bringing-dark-patterns-light-ftc-workshop; https://www.ftc.gov/news-events/press-releases/2021/10/ftc-ramp-enforcement-against-illegal-dark-patterns-trick-or-trap. See also Arvind Narayanan, et al., Dark Patterns: Past, Present, and Future, Queue (Mar.-Apr. 2020), https://dl.acm.org/doi/pdf/10.1145/3400899.3400901.
84 See Competition and Markets Authority, Online Choice Architecture: How digital design can harm competition and consumers at 42 (Apr. 5, 2022), https://www.gov.uk/government/publications/online-choice-architecture-how-digital-design-can-harm-competition-and-consumers.
85 See Jennifer King and Adriana Stephan, Regulating Privacy Dark Patterns in Practice — Drawing Inspiration from California Privacy Rights Act, 5 Geo. L. Tech. Rev. 250 (2021), https://georgetownlawtechreview.org/wp-content/uploads/2021/09/King-Stephan-Dark-Patterns-5-GEO.-TECH.-REV.-251-2021.pdf. Among other things, it would be difficult to determine what training data one would use to build a dark pattern detection model.
86 See Arunesh Mathur et al., Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites, Proc. of the ACM Human-Computer Interaction, Vol. 3, CSCW, Art. 81 (Nov. 2019), https://arxiv.org/abs/1907.07032. See also
FEDERAL TRADE COMMISSION • FTC.GOV