Combatting Online Harms Through Innovation
A recent study noted similar concerns and added the lack of a commonly accepted definition of TVEC, the constant evolution of extremist behavior, and the need for ethical guidelines. 195 Considering that the same extremist group may use multiple types of platforms to recruit and radicalize, that terrorist methods change, and that definitions and datasets are problematic, automated tools clearly have a long way to go in this area. Per the broader discussion below, they must be coupled with appropriate collaboration, human oversight, and a nuanced understanding of contextual and cultural differences, all while somehow striking the right balance among free speech, privacy, and safety. 196
F. Disinformation campaigns coordinated by inauthentic accounts or individuals to influence United States elections

The Technology Engagement Team (TET) of the State Department’s Global Engagement Center (GEC) defends against foreign disinformation and propaganda by leading efforts to address the problem via technological innovation. In cooperation with foreign partners, private industry, and academia, its goal is to identify, assess, and test such technologies, which often involve AI and efforts to address election-related disinformation. 197 Further, the Cybersecurity and Infrastructure Security Agency of DHS is responsible for the security of domestic elections and engages in substantial work against election-related disinformation. The Commission suggests that these agencies are best positioned to advise Congress on federal agency efforts in this area.

Several substantial reports have addressed inadequate platform efforts to address election-related disinformation, including the limited assistance of AI tools. In 2021, the Election Integrity Partnership published a lengthy report on misinformation and the 2020 election, concluding, among other things, that platform attempts to use AI to label content were flawed because the AI tools could not “distinguish false or misleading content from general election-related
(noting bias in terms of which ideologies, events, or organizations are included in datasets), https://doi.org/10.1109/ACCESS.2021.3068313. See also Sara M. Abdulla, Terrorism, AI, and Social Media Research Clusters, Center for Security and Emerging Technology (Nov. 2021), https://cset.georgetown.edu/publication/terrorism-ai-and-social-media-research-clusters/.

195 Miriam Fernandez and Harith Alani, Artificial Intelligence and Online Extremism: Challenges and Opportunities, in Predictive Policing and Artificial Intelligence 131-62 (John McDaniel and Ken Pease, eds.) (2021) (also noting biases involving geographical location, language, and terminology), https://oro.open.ac.uk/69799/1/Fernandez Alani final pdf.pdf. The definitional problem and other issues were raised in a 2020 joint letter from human rights groups to GIFCT. See https://www.hrw.org/news/2020/07/30/joint-letter-new-executive-director-global-internet-forum-counter-terrorism#.

196 See, e.g., United Nations Office of Counter-Terrorism, supra note 179; Saltman, Lawfare, supra note 193; Jonathan Schnader, The Implementation of Artificial Intelligence in Hard and Soft Counterterrorism Efforts on Social Media, Santa Clara High Tech. L. J. 36:1 (Feb. 2, 2020), https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1647&context=chtlj.

197 See https://www.state.gov/bureaus-offices/under-secretary-for-public-diplomacy-and-public-affairs/global-engagement-center/technology-engagement-team; https://www.state.gov/programs-technology-engagement-team/.
FEDERAL TRADE COMMISSION • FTC.GOV