
humans in the loop doesn’t correct for harms caused by flawed AI systems; it also shouldn’t serve as a way to legitimize such systems or for their operators to avoid accountability.282

C. Transparency and accountability

Calls have increased for more transparency by, and accountability for, those deploying automated decision systems, particularly when those systems impact people’s rights. While these two terms are now mentioned regularly in legal and policy debates about AI, it is not always clear what they mean or how they are distinguished from each other.283 For our purposes, transparency involves measures that provide more and meaningful information about these systems and that, ideally, enable accountability, which involves measures that make companies more responsible for outcomes and impact.284 That ideal for transparency will not always be attainable, such as when consumers cannot consent to or opt out of corporate use of these systems. Many proposals exist for how to attain these intertwined goals, which platforms certainly won’t reach on their own; these proposals often cover the use of AI tools to address online harms. Below is a brief overview of these goals, with possible legislation discussed later. A major caveat is that even substantial success on these goals would not actually prevent the harms discussed herein. But it would provide information on the efficacy and impact of these tools, which would help to prevent over-reliance on them, assess whether and when a given tool is appropriate to use, determine the most needed safeguards for such use, and point to the measures the public and private sectors should prioritize to address those harms.285

In Algorithms and Economic Justice, Commissioner Slaughter identified fairness, transparency, and accountability as the critical principles for systems designed to address algorithmic harms.286 Meaningful transparency would mean disclosure of intelligible information sufficient to allow third parties to test for discriminatory and harmful outcomes and for consumers to “vote with their feet.”287 Real accountability would mean “that companies—the same ones that benefit from

282 See Austin Clyde, Human-in-the-Loop Systems Are No Panacea for AI Accountability, Tech Policy Press (Dec. 1, 2021), https://techpolicy.press/human-in-the-loop-systems-are-no-panacea-for-ai-accountability/; Green, supra note 280; Green and Kak, supra note 278; Madeleine Clare Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, Engaging Science, Technology, and Society 5 (2019), https://doi.org/10.17351/ests2019.260.

283 See, e.g., Heidi Tworek and Alicia Wanless, Time for Transparency From Digital Platforms, But What Does That Really Mean?, Lawfare (Jan. 20, 2022), https://www.lawfareblog.com/time-transparency-digital-platforms-what-does-really-mean.

284 See, e.g., Mike Ananny and Kate Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, New Media and Society 20:3, 973-89 (2018), http://mike.ananny.org/papers/anannyCrawford_seeingWithoutKnowing_2016.pdf.

285 See, e.g., Shenkman, supra note 224 at 35-36; Daphne Keller and Paddy Leerssen, Facts and Where to Find Them: Empirical Research on Internet Platforms and Content Moderation, in Social Media and Democracy: The State of the Field and Prospects for Reform (Nathan Persily and Joshua A. Tucker, eds.) (Aug. 2020), https://doi.org/10.1017/9781108890960.

286 Slaughter, supra note 13 at 48.

287 Id. at 49. See also https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.


