Should internet platforms be liable for recommending harmful content? Pro vs. con
By Cate Sauri, Staff Writer | An opinion
By Nora Piecre, Staff Writer | An opinion
From ‘90s dial-ups to the modern World Wide Web, the internet has undergone a drastic transformation over the past three decades. While online platforms have evolved with it, the 26-word law that first nurtured them has yet to do the same.
In February, the Supreme Court heard arguments in the high-profile Gonzalez v. Google case, which targets Section 230 of the Communications Decency Act of 1996, a law designed shortly after the commercialization of the internet to protect online companies from being held legally liable for content posted by their users. The family of Nohemi Gonzalez, a 23-year-old American student killed during a 2015 ISIS attack in Paris, claimed that the content recommended by YouTube’s algorithms promoted the terrorism that ultimately took their daughter’s life. Their case rightly argues that courts should not interpret Section 230 to protect companies when their algorithms recommend extremist content to users.
The role of the Supreme Court is to interpret the Communications Decency Act as Congress originally intended it to be understood. However, Section 230 far predates the algorithms that exist online today, so Congress could not have intended to protect technology that did not yet exist. The portion of the law at issue does not explicitly mention algorithms, which have become far more complex over the past two decades, and states only that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The sentence does not mention recommendations or promotion of certain online content, and, therefore, the law cannot protect such actions by internet companies.
Furthermore, Section 230 only protects internet companies from being held legally accountable for content uploaded by third parties, but the act of collecting data to promote or recommend content is a choice by the companies that goes beyond the protection of the law. “What Google did isn’t covered by Section 230 immunity because they’re acting as a data collector… to categorize different individuals into groups,” Jolina Cuaresma, Senior Counsel for Privacy and Technology Policy for Common Sense Media, said. “Then they push out content to the particular user based on the algorithms.”
Removing Section 230 protections for content-recommending algorithms would thus incentivize media companies to monitor extremism in content posted to their platforms, ultimately making the internet a safer place for users. “When [companies are] on the hook for liability, it puts pressure on profits, which then puts pressure on your shareholder value. When you hold them accountable, the interests are aligned; users and shareholders all want the same thing,” Cuaresma said. “Right now, there’s a divergence: executives are focused on stock price without worrying about the harms that may be happening on the consumer side.”
Even beyond the legal weaknesses of Google’s case, it is not acceptable to allow internet companies to host content that encourages terrorism and extremism without facing consequences. According to a 2021 study by the Counter Extremism Project, 71 percent of extremist videos flagged on YouTube were recommended by YouTube’s own content recommendation algorithms. Given this harmful status quo, the Supreme Court must take action to rein in online platforms’ dissemination of dangerous posts.
When a newspaper or magazine reports factually erroneous information, it can be sued for libel, but the newsstand selling the publication cannot. The same protections for non-publishers should exist in the online world, where social media platforms house the writing, photos, and videos of billions of third-party internet users.
“In the offline world, there is a distinction between the legal liability that publishers of information face and the legal liability that distributors of information face,” HWG LLP law firm partner Adrienne Fowler said in an interview with Silver Chips. “We are going to essentially make it clear to the courts that they should be treating online service providers… like the newsstand operators and not like the magazine publishers.” Fowler represents a group of amici curiae, former U.S. national security officials, in the Supreme Court case Gonzalez v. Google.
Today, the challenge to the law brought by Gonzalez v. Google threatens the modern internet. YouTube as we know it does not exist without algorithms that decide which videos to place on a user’s home page or suggest which to play next. The same goes for most widely-used social media platforms such as Instagram, Facebook, and TikTok. If online service providers were no longer protected by Section 230, they would be opened up to lawsuits and would likely choose to eliminate recommendation algorithms altogether to avoid legal fees and lengthy civil cases.
Recommendation algorithms and platforms’ decisions regarding how to display content are essential to the function of the platforms themselves. “The only way for this type of online service to operate is [by] presenting subsequent [content] much in the way that a newsstand operator is ordering which magazines go at the front of his or her stand and which ones go at the back,” Fowler said.
Platforms that cannot arrange content using recommendations are ineffective at connecting communities of users across the world.
A ruling in favor of Google in this case would not provide any motivation for internet companies to stop promoting extreme or harmful content. Although such algorithms may seem harmless to users, recommending posts they find amusing, the same online architecture that is used to push these postings can be used to promote much more dangerous content.
“[Media companies] track what we search for and show us more because they think that’s what we want. And that keeps us on the site because we keep browsing and we keep looking,” Steve Freeman, Vice President of Civil Rights at the Anti-Defamation League, said. “But if you take that into the world of hate and extremism, it just shows potential, it radicalizes, it foments that sort of hate and it just stirs it up and makes it much worse.”
The ongoing case considers whether Section 230 of the Communications Decency Act protects interactive online platforms when they implement algorithms that recommend harmful content provided by third-party users. Gonzalez petitioned the Supreme Court to review the reach of Section 230 after the U.S. Court of Appeals for the Ninth Circuit affirmed the district court’s ruling that the law protects algorithmic recommendations.
Since 1996, Section 230 of the Communications Decency Act has protected internet platforms from being treated as the publishers or speakers of content posted by third-party users. Section 230(c)(1) protects companies from being liable for such content, while Section 230(c)(2), known as the “Good Samaritan” provision, allows platforms to moderate content without having to fear lawsuits.
The “Good Samaritan” provision gives internet platforms greater incentives to seek out and remove content that promotes harmful messaging or inappropriate behavior. “[The purpose of Section 230] is to encourage people to give help even when they’re not legally obligated to do so in exchange for providing assistance to be immunized from civil liability,” Dr. Mary Anne Franks, University of Miami Professor of Law and President and Legislative and Tech Policy Director of the Cyber Civil Rights Initiative (CCRI), said in an interview with Silver Chips. Franks wrote the amicus brief on behalf of the CCRI and legal scholars in support of the plaintiff.
For the past 27 years, recommendation algorithms have facilitated the development of online communities and movements, including Black Lives Matter and the Arab Spring. Section 230 shields platforms from liability for content contributed by users, thereby limiting incentives for strict moderation. As stated in the amicus brief filed in support of Google by the American Civil Liberties Union and Daphne Keller, Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “The internet has democratized speech, creating a forum for public self-expression and connecting billions of speakers and listeners who never could have found each other before.” Algorithmic recommendations allow users both to learn about those who are different from them and to find communities of like-minded people.
The Supreme Court should leave decisions about what to moderate, and how, to the internet platforms themselves. Protecting online service providers from lawsuits over their algorithms supports the foundational goal of the legislation: to promote a positive environment and discourage harmful content.
Voicebox
“Internet companies… can ban certain things… If they don’t take any action, it makes it seem like they promote [harmful content].”
“I feel like the people should be held accountable instead of the algorithm, because it’s not really the algorithm’s fault.”