still supporting explainable and transparent AI systems, including AIAs and grievance mechanisms. 332 Privacy is also one aspect of trustworthy AI that NIST will study and incorporate into a Congressionally mandated Risk Management Framework. 333 The many challenges of transparency and accountability — and the fact that they do not by themselves prevent harm — highlight the importance of focusing on the entire AI lifecycle, from design through implementation. Some scholars have argued that transparency might be less important if algorithms could be designed not to discriminate in the first place. 334 Nonetheless, both designers and users of AI tools must continue to monitor the impact of those tools, since fair design does not guarantee fair outcomes. In its Online Harms White Paper, the United Kingdom government indicated that it would work with industry and civil society to develop a Safety by Design framework for online services, possibly including guidance on effective systems for addressing illegal or harmful content via AI and trained moderators. 335 Algorithmic design is not within the scope of this report, though it is referred to again in the discussion below on platform interventions.
D. Responsible data science

Those building AI systems, including tools to combat online harms, should take responsibility for both inputs and outputs. Such responsibility includes the need to avoid unintentionally biased or unfair results derived from problems with the training data, classifications, or algorithmic design. In their call for an AI bill of rights, White House OSTP officials note that some AI failings that disproportionately affect already marginalized groups “often result from AI developers not using appropriate data sets and not auditing systems comprehensively, as well as not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that can’t be fixed).” 336 Further, the 2021 DHS report on deepfakes stated that scientists
Young, et al., Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing, Proc. of ACM (FAT’19) (Jan. 29, 2019) (advocating for use of synthetic data and a third-party public-private data trust), https://par.nsf.gov/servlets/purl/10111608.

332 United Nations High Commissioner for Human Rights (UNHCHR), The right to privacy in the digital age (Sep. 13, 2021), https://www.ohchr.org/EN/Issues/DigitalAge/Pages/cfi-digital-age.aspx.

333 See National Defense Authorization Act for Fiscal Year 2021, H.R. 116-617, § 5301, at 2768-2775, https://www.congress.gov/congressional-report/116th-congress/house-report/617/1?overview=closed; see also https://hai.stanford.edu/policy/policy-resources/summary-ai-provisions-national-defense-authorization-act-2021.

334 See, e.g., Joshua A. Kroll, et al., Accountable Algorithms, 165 U. Penn. L. Rev. 633 (2017), https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/.

335 See United Kingdom Department for Digital, Culture, Media, and Sport, and Home Office, Online Harms White Paper at 8.14 (Dec. 15, 2020), https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper. The White Paper informed the pending Online Safety Bill, first introduced in May 2021. See https://www.gov.uk/government/publications/draft-online-safety-bill.

336 Lander and Nelson, supra note 246. See also NIST Special Publication 1270, supra note 249, at 36-37, 45 (noting benefits of diversity within teams training and deploying AI systems, that “the AI field noticeably lacks diversity,” and that team supervisors should be responsible for risks and associated harms of these systems); Color of Change, Beyond the Statement: Tech Framework (also recommending that decision-makers be held responsible for discriminatory outcomes), https://beyondthestatement.com/tech-framework/.