better able to resist it.428 Some countries, including the United Kingdom, are incorporating digital literacy as part of concerted national strategies.429

G. Availability and scalability

We have already noted some of the problems stemming from the fact that only a few large technology companies are responsible for most of the AI tools within the scope of this report. Even for a tool that is effective and fair, this concentration poses a further problem: others who may need the tool, such as smaller platforms or investigative journalists, will not necessarily have access to it or the resources to create their own.430 Twitter itself discusses this problem in its open internet principles, advocating for greater accessibility and lamenting that such technology remains in “proprietary silos,” a fact that perpetuates the dominance of a few companies.431

Greater access to these tools does carry risk. For example, while sharing an algorithm may not involve exposure of personal information, sharing the dataset used to create an AI model could implicate privacy concerns. Such concerns may be more acute when the sharing is with other commercial actors rather than with vetted researchers or certified auditors. Sharing technology and information also risks cross-site censorship.432 Further, the more widely a detection or mitigation tool is shared, the more easily bad actors can study it and learn to evade it.
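One widely discussed way to share detection signals while limiting this exposure, and one associated with industry consortia such as GIFCT (cited in note 430), is hash sharing: platforms exchange fingerprints of already-flagged content rather than the algorithms, training datasets, or content itself. The Python sketch below is a minimal, hypothetical illustration of that idea, not any particular consortium's implementation; the exact-match SHA-256 scheme, the sample items, and the function names are all assumptions for illustration (production systems typically use perceptual hashes such as PDQ or PhotoDNA, which tolerate small alterations to the content).

    import hashlib

    def fingerprint(content: bytes) -> str:
        """Return a SHA-256 digest of raw content bytes."""
        return hashlib.sha256(content).hexdigest()

    # Hypothetical items the sharing platform has already classified
    # as violative; stand-ins for real media files.
    known_violative_items = [b"example item A", b"example item B"]

    # Only fingerprints are published, never the content itself, the
    # classifier, or any data about the users who posted it.
    shared_hashes = {fingerprint(item) for item in known_violative_items}

    def is_known_violative(upload: bytes, shared: set) -> bool:
        """Check an upload against the shared fingerprint set.

        A match says only that this exact item was flagged before;
        a miss reveals nothing about the sharer's dataset or model.
        """
        return fingerprint(upload) in shared

    print(is_known_violative(b"example item A", shared_hashes))  # True
    print(is_known_violative(b"novel content", shared_hashes))   # False

The design choice matters: a match reveals only that an item was previously seen and flagged, so platforms can cooperate on detection while keeping models, training data, and user information private. It does not, however, resolve the cross-site censorship concern raised above, since an item wrongly added to the shared list would then be suppressed everywhere the list is used.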

428 See Jon Roozenbeek and Sander Van Der Linden, Breaking Harmony Square: A game that “inoculates” against political misinformation, Harvard Kennedy School Misinformation Rev. (Nov. 6, 2020), https://misinforeview.hks.harvard.edu/article/breaking-harmony-square-a-game-that-inoculates-against-political-misinformation/. See also Nicholas Micallef, et al., Fakey: A Game Intervention to Improve News Literacy on Social Media, Proc. ACM Hum.-Comput. Interact., Vol. 5, No. CSCW1 (Apr. 2021), https://dl.acm.org/doi/10.1145/3449080.

429 See https://www.gov.uk/government/publications/online-media-literacy-strategy; Amy Yee, The country inoculating against disinformation, BBC Future (Jan. 30, 2022) (showing the positive effects of such efforts in Estonia), https://www.bbc.com/future/article/20220128-the-country-inoculating-against-disinformation.

430 See, e.g., UK Dept. for Digital, Culture, Media & Sport, Understanding how platforms with video-sharing capabilities protect users from harmful content online (Aug. 2021), https://www.gov.uk/government/publications/understanding-how-platforms-with-video-sharing-capabilities-protect-users-from-harmful-content-online; Royal Society, supra note 359 at 18, 82. These needs are often discussed in the TVEC and deepfake contexts. See also, e.g., DHS, Increasing Threat of Deepfake Identities, supra note 43 at 31; Tech Against Terrorism, GIFCT Technical Approaches Working Group Gap Analysis and Recommendations at 24-25; Jacob Berntsson and Maygane Janin, Online Regulation of Terrorist and Harmful Content, Lawfare (Oct. 14, 2021), https://www.lawfareblog.com/online-regulation-terrorist-and-harmful-content; OECD, Transparency Reporting on Terrorist and Violent Content Online, supra note 190 at 12; EPRS, Tackling deepfakes in European policy, supra note 43 at 59.

431 See Twitter, Protecting the Open Internet, supra note 377 at 8.

432 See Emma Llansó, Content Moderation Knowledge Sharing Shouldn’t Be a Backdoor to Cross-Platform Censorship, TechDirt (Aug. 21, 2020), https://www.techdirt.com/articles/20200820/08564545152/content-moderation-knowledge-sharing-shouldnt-be-backdoor-to-cross-platform-censorship.
