A recent study noted similar concerns and added the lack of a commonly accepted definition of TVEC (terrorist and violent extremist content), the constant evolution of extremist behavior, and the need for ethical guidelines.195
Considering that the same extremist group may use multiple types of platforms to recruit and radicalize, that terrorist methods change, and that definitions and datasets are problematic, what seems clear is that automated tools have a long way to go in this area. Per the broader discussion below, they must be coupled with appropriate collaboration, human oversight, and a nuanced understanding of contextual and cultural differences, all while somehow striking the right balance among free speech, privacy, and safety.196
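To make the human-oversight point concrete, the following is a minimal sketch of one common pattern for coupling automated detection with human review: automated action only at very high confidence, with an uncertainty band routed to trained reviewers. The classifier, thresholds, and names here are hypothetical illustrations, not any platform's actual system or policy.

```python
from enum import Enum


class Action(Enum):
    REMOVE = "remove"              # high confidence: act automatically
    HUMAN_REVIEW = "human_review"  # uncertain: route to a trained reviewer
    KEEP = "keep"                  # low score: leave up


def triage(score: float, auto_threshold: float = 0.98,
           review_threshold: float = 0.60) -> Action:
    """Map a hypothetical classifier score in [0, 1] to a moderation action.

    The wide middle band exists because automated tools misread context
    and culture; close calls go to humans, not to a threshold.
    """
    # Thresholds are illustrative; real systems tune them per language and
    # content type, since model error rates vary across both.
    if score >= auto_threshold:
        return Action.REMOVE
    if score >= review_threshold:
        return Action.HUMAN_REVIEW
    return Action.KEEP


# Example: borderline content is queued for review rather than removed.
assert triage(0.75) is Action.HUMAN_REVIEW
```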
F. Disinformation campaigns coordinated by inauthentic accounts or individuals to influence United States elections
The Technology Engagement Team (TET) of the State Department’s Global Engagement Center (GEC) defends against foreign disinformation and propaganda by leading efforts to address the problem through technological innovation. In cooperation with foreign partners, private industry, and academia, its goal is to identify, assess, and test such technologies, which often involve AI, including tools to address election-related disinformation.197 Further, the Cybersecurity and Infrastructure Security Agency of DHS is responsible for the security of domestic elections and engages in substantial work against election-related disinformation. The Commission suggests that these agencies are best positioned to advise Congress on federal agency efforts in this area.
Several substantial reports have examined inadequate platform efforts to address election-related disinformation, including the limited assistance of AI tools. In 2021, the Election Integrity Partnership published a lengthy report on misinformation and the 2020 election, concluding, among other things, that platform attempts to use AI to label content were flawed because the AI tools could not “distinguish false or misleading content from general election-related commentary.”198 Further, a recent ProPublica and Washington Post investigation, for which researchers relied in part on machine learning techniques, found that Facebook played a critical role in spreading false narratives about the election immediately before the January 6, 2021, siege of the United States Capitol.199 Park Advisors, a State Department contractor working with GEC, issued a 2019 report that discussed the mixed results of platform attempts, including via the use of AI, to counter this problem in connection with recent elections.200
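A toy sketch of the failure mode the Election Integrity Partnership describes: a classifier trained on surface text features largely learns the election topic, not truthfulness, because false claims and ordinary commentary share most of their vocabulary. This assumes scikit-learn is available; the example data and the labels are invented for illustration and reflect no real platform system.

```python
# Why keyword-level labeling conflates false claims with ordinary election
# talk: both classes draw on nearly the same vocabulary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

misleading = [
    "ballots were shredded before they could be counted",
    "voting machines flipped thousands of votes overnight",
]
commentary = [
    "officials explained how ballots are counted and audited",
    "reporters watched votes being tallied by voting machines",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(misleading + commentary, [1, 1, 0, 0])

# Ordinary commentary that reuses the same election vocabulary.
post = "a thread about ballots, voting machines, and counted votes"
prob = model.predict_proba([post])[0][1]
print(f"P(misleading) = {prob:.2f}")
# Typically near 0.5: tf-idf features carry the topic, not the falsity,
# so the model has little signal with which to separate the classes.
```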
For several years, academic researchers such as University of Southern California Professor Emilio Ferrara have been using AI, sometimes with government funding, to study election-related disinformation, despite limited data available from platforms other than Twitter. In one recent study, focused on Twitter and the 2020 Presidential election, the results implied that platform efforts to limit malicious groups were not effective against those groups’ evasive actions, such that “rethinking effective platform interventions is needed.”201 Another recent study involving Twitter and the 2020 election found that bots were still responsible for significant manipulation but that, as compared to the 2016 election, a shift had occurred from foreign to domestic sources.202 Other recent studies propose platform-agnostic techniques to detect coordinated accounts or operations based on social media content or behavior.203
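The cited detection work relies on far richer machinery (models of hidden influence, network structure, and learned content features). As a hedged illustration only, the sketch below shows the simplest form of one behavioral signal such systems draw on: many distinct accounts posting near-identical text within a tight time window. All names and thresholds are hypothetical.

```python
# Minimal, platform-agnostic coordination heuristic: flag groups of
# accounts that post the same (normalized) message close together in time.
from collections import defaultdict
from typing import NamedTuple


class Post(NamedTuple):
    account: str
    timestamp: float  # seconds since epoch
    text: str


def normalize(text: str) -> str:
    # Collapse case and whitespace so trivial edits don't hide duplication.
    return " ".join(text.lower().split())


def coordinated_clusters(posts: list[Post], window_s: float = 600.0,
                         min_accounts: int = 5) -> list[list[Post]]:
    """Group identical messages, then flag windows with many distinct accounts."""
    by_text = defaultdict(list)
    for p in posts:
        by_text[normalize(p.text)].append(p)

    flagged = []
    for group in by_text.values():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        for end in range(len(group)):
            # Shrink the window until it spans at most window_s seconds.
            while group[end].timestamp - group[start].timestamp > window_s:
                start += 1
            accounts = {p.account for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(group[start:end + 1])
                break  # one flag per distinct message is enough here
    return flagged
```

Exact-duplicate matching is, of course, trivially evaded by paraphrasing, which echoes the evasive behavior the Sharma study describes; that is why the cited approaches pair behavioral signals like this with network and content models.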
195 Miriam Fernandez and Harith Alani, Artificial Intelligence and Online Extremism: Challenges and Opportunities, in Predictive Policing and Artificial Intelligence 131-62 (John McDaniel and Ken Pease, eds.) (2021) (also noting biases involving geographical location, language, and terminology), https://oro.open.ac.uk/69799/1/Fernandez%20Alani%20final%20pdf.pdf. The definitional problem and other issues were raised in a 2020 joint letter from human rights groups to GIFCT. See https://www.hrw.org/news/2020/07/30/joint-letter-new-executive-director-global-internet-forum-counter-terrorism#.

196 See, e.g., United Nations Office of Counter-Terrorism, supra note 179; Saltman, Lawfare, supra note 193; Jonathan Schnader, The Implementation of Artificial Intelligence in Hard and Soft Counterterrorism Efforts on Social Media, Santa Clara High Tech. L.J. 36:1 (Feb. 2, 2020), https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1647&context=chtlj.

197 See https://www.state.gov/bureaus-offices/under-secretary-for-public-diplomacy-and-public-affairs/global-engagement-center/technology-engagement-team; https://www.state.gov/programs-technology-engagement-team/.

198 Center for an Informed Public, Digital Forensic Research Lab, Graphika, & Stanford Internet Observatory, The Long Fuse: Misinformation and the 2020 Election, Stanford Digital Repository: Election Integrity Partnership v1.2.0 at 212 (2021), https://purl.stanford.edu/tr171zs0069. Further, to the extent that election-related disinformation often involves bots or deepfakes, the same detection problems exist in this context as they do for bots and deepfakes generally.

199 See Craig Silverman, et al., Facebook groups topped 10,000 daily attacks on election before Jan. 6, analysis shows, The Washington Post (Jan. 4, 2022), https://www.washingtonpost.com/technology/2022/01/04/facebook-election-misinformation-capitol-riot/; Jeremy B. Merrill, How ProPublica and The Post researched posts of Facebook groups, The Washington Post (Jan. 4, 2022), https://www.washingtonpost.com/technology/2022/01/04/facebook-propublica-post-jan6-methodology/. See also Tech Transparency Project, A Year After Capitol Riot, Facebook Remains an Extremist Breeding Ground (Jan. 4, 2022), https://www.techtransparencyproject.org/articles/year-after-capitol-riot-facebook-remains-extremist-breeding-ground.

200 See Nemr and Gangware, supra note 79.

201 Karishma Sharma, et al., Characterizing Online Engagement with Disinformation and Conspiracies in the 2020 U.S. Presidential Election (Oct. 20, 2021), https://arxiv.org/pdf/2107.08319.pdf.

202 See Ho-Chun Herbert Chang, et al., Social Bots and Social Media Manipulation in 2020: The Year in Review (Feb. 16, 2021), https://arxiv.org/pdf/2102.08436.pdf. See also William Marcellino, et al., Human–machine detection of online-based malign information, RAND Corporation (2020), https://www.rand.org/pubs/research_reports/RRA519-1.html.

203 Karishma Sharma, et al., Identifying Coordinated Accounts on Social Media through Hidden Influence and Group Behaviours (Aug. 2021), https://dl.acm.org/doi/pdf/10.1145/3447548.3467391; Steven T. Smith, et al., Automatic detection of influential actors in disinformation networks, PNAS 118 (4) (Jan. 26, 2021), https://www.pnas.org/content/118/4/e2011216118; Meysam Alizadeh, et al., Content-based features predict social media influence operations, Sci. Adv. 6: eabb5824 (Jul. 2020), https://www.science.org/doi/10.1126/sciadv.abb5824.