

Combatting Online Harms Through Innovation

of particular content and whether it has been altered. These tools — which could involve blockchain, among other things — can be especially helpful in dealing with the provenance of audio and video materials (a simplified sketch of such a check appears at the end of this discussion). Like detection tools, however, authentication measures have limits and are not helpful for every online harm.

Finally, in the context of AI and online harms, any laws or regulations require careful consideration. Given the various limits of and concerns with AI, explicitly or effectively mandating its use to address harmful content — such as overly quick takedown requirements imposed on platforms — can be highly problematic. The suggestion or imposition of such mandates has been the subject of major controversy and litigation globally. Among other concerns, such mandates can lead to overblocking and put smaller platforms at a disadvantage. Further, in the United States, such mandates would likely run into First Amendment issues, at least to the extent that the requirements impact legally protected speech. Another hurdle for any regulation in this area is the need to develop accepted definitions and norms not just for what types of automated tools and systems are covered but for the harms such regulation is designed to address. Putting aside laws or regulations that would require more fundamental changes to platform business models, the most valuable direction in this area — at least as an initial step — may be in the realm of transparency and accountability. Seeing and allowing for research behind platforms’ opaque screens (in a manner that takes user privacy into account) may be crucial for determining the best courses for further public and private action.24 It is hard to craft the right solutions when key aspects of the problems are obscured from view.
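To make the authentication idea concrete, here is a minimal sketch, assuming a simple hash-and-registry design: a media file is hashed when published, the digest is recorded in a tamper-evident registry (a blockchain is one option), and the file is re-hashed later to check for alteration. The registry, names, and flow below are illustrative assumptions, not a description of any particular tool.

    import hashlib
    from pathlib import Path

    # A plain dict stands in for whatever tamper-evident ledger
    # (e.g., a blockchain) a real provenance system would use.
    registry: dict[str, str] = {}

    def fingerprint(path: str) -> str:
        # SHA-256 digest of the file's raw bytes.
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def register(content_id: str, path: str) -> None:
        # Run once by the creator at publication time.
        registry[content_id] = fingerprint(path)

    def verify(content_id: str, path: str) -> bool:
        # True only if this copy matches the digest recorded at publication.
        recorded = registry.get(content_id)
        return recorded is not None and recorded == fingerprint(path)

Any edit to the file changes its digest, so a mismatch signals alteration. Consistent with the limits noted above, such a check cannot say whether the registered original was itself truthful or harmless.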

III. USING ARTIFICIAL INTELLIGENCE TO COMBAT ONLINE HARMS

A. Deceptive and fraudulent content intended to scam or otherwise harm individuals

Of the harms specified by Congress, deception is the most central to the Commission’s consumer protection mission. Public and private sector use of AI tools to combat online scams is still in its relative infancy, and such tools may be hard to develop. While some scams may be detected by relatively clear and objective markers, many are context-dependent and not obvious on their face. After all, the nature of a scam is to deceive people into thinking it’s not a scam. For example, the initial part of a scheme may involve a seemingly legitimate online ad, with key fraud indicators hidden offline and revealed only later (a toy illustration of this contrast follows this paragraph). These factors may make it difficult for
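As a toy illustration of that contrast (not any agency’s or platform’s actual system), the sketch below flags ads containing clear, objective red-flag phrases. It catches a scam that announces itself but passes a context-dependent scheme whose fraud indicators surface only later, offline; the phrase list and sample ads are invented for illustration.

    # Illustrative red-flag phrases; a real detector would use far richer signals.
    RED_FLAGS = {"wire transfer only", "guaranteed returns", "act now"}

    def flag_ad(text: str) -> bool:
        # Flag an ad if it contains any known red-flag phrase.
        lowered = text.lower()
        return any(phrase in lowered for phrase in RED_FLAGS)

    print(flag_ad("Guaranteed returns! Act now!"))  # True: obvious markers
    # False below: the ad looks legitimate, and the fraud indicators
    # would only emerge offline, after the victim responds.
    print(flag_ad("Certified pre-owned cars, easy financing"))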

24 Commission staff is currently analyzing data collected from several large social media and video streaming companies about their collection and use of personal information as well as their advertising and user engagement practices. See https://www.ftc.gov/reports/6b-orders-file-special-reports-social-media-video-streaming-service-providers. In a 2020 public statement about this project, Commissioners Rebecca Kelly Slaughter and Christine S. Wilson remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains dangerously opaque.” https://www.ftc.gov/system/files/documents/public_statements/1584150/joint_statement_of_ftc_commissioners_chopra_slaughter_and_wilson_regarding_social_media_and_video.pdf.



