
Retweets Are Not Endorsements? The Debate Around the US ‘Section 230’ Law

Bernardo Amaro Monteiro, MA Middle Eastern Studies and Intensive Language

‘Retweets are not endorsements’ is a piece of Twitter jargon that has, in legal terms, been redundant since 1996, ten years before the platform was even created in 2006. Now, this might be about to change, and Daesh (the self-proclaimed Islamic State) played a crucial role in it. The US Supreme Court is currently questioning the responsibility of social media platforms in hosting and recommending harmful speech. Section 230 of the Communications Decency Act of 1996 helps protect internet platforms from being legally liable for the content on their services, or as the law puts it, ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’ This is the logic that has been powering the internet, from search engines to social media platforms, algorithms and machine learning software. It is not hard to understand, but much harder to put into practice, particularly when tech-savvy terrorists use it to their own advantage. Recruitment and ideological dissemination promoted through online marketing campaigns were key to Daesh’s success. As a result, the Supreme Court is now trying to find the balance between protecting free speech and preventing the spread of dangerous ideologies online.


‘Twitter, Inc. v. Taamneh’ and ‘Gonzalez v. Google LLC’ are two cases that are rocking the foundations of the Internet. Section 230 has been the ‘one solution for all problems’ law for defending big tech firms’ immunity from the consequences their products create. The case against Twitter, however, is based on the allegation that the platform provides a key service to terrorist organizations, meaning that Twitter has been aware of how terrorists use the platform and failed to kick them off. The accusation against Google, meanwhile, claims that YouTube’s video recommendation algorithm promotes terrorist content, implying that the platform endorses this sort of discourse. The cases are being heard together, as the outcome of one affects the other. The accusations came after Daesh’s attacks in Istanbul in 2017 and Paris in 2015, and assign indirect responsibility to Twitter and Google under US antiterrorism law.

Judges are trying to understand how social media works and how algorithms help users find the content and conversations they are looking for. Automated tools have proved efficient at filtering unimaginable quantities of data and are essential to how we use the Internet. Despite these advantages, the internet’s underbelly also contains undesirable content used for exploitation and criminality.

The scale of human interaction on the internet brings additional complexity to the decision-making process. Proving that viewing harmful content leads to an increase in violent activity is an investigation that enters an abstract level of human social relations. Both cases will have to argue that there is a provable pattern between the people who saw certain content and those among them who went on to carry out an attack. If so, service providers will be legally liable for hosting and promoting harmful speech. Making this case will require everyone to agree on who gets to determine the definition of ‘harmful speech’, and this won’t come free of complications. Until now, Section 230 has allowed service providers to decide independently what speech they allow through their terms of service. A legal outcome that establishes the boundaries of acceptable speech is, indeed, a conversation about free speech.

What about amending Section 230?

The reason why the Supreme Court is revisiting Section 230 is that it has come to realize the need for laws as advanced as the technologies we use. This process will require the court to decide whether the service providers’ immunity helps establish equal footing for everyone to raise their voice in a free environment, or whether its main effect is the creation of misinformation and hate speech bubbles. Another element the court is considering is the behaviour of algorithms and how they interact with human social behaviour. For example, the Cambridge Analytica scandal exposed how targeted advertisements could polarize opinions and shape election results. Acknowledging that the Trump campaign profited from Facebook’s recommendation algorithm also implies recognising that it can efficiently surveil and manipulate users by promoting certain discourses and excluding others. It is very hard to know how the algorithm works, and whether there is truly a difference between recommending what it thinks we want to see and telling us how to think. This is an important question to ask, since it is here that the difference lies between a tool for efficiency and a tool for mass control.

If the Supreme Court rules against Big Brother, service providers will have to come up with much tighter policies for how we use social media. On the one hand, this could mean that users start choosing platforms according to their content moderation policies, leading to a more diverse range of platforms. However, it is more likely that the need for regulation will empower the platforms’ legal departments, as they will have the first say in establishing the rules for moderating content. It is evident that the big platforms will try to stay out of the courtroom, but they are in much better shape to contest cases, while smaller platforms may find it hard to defend themselves.

A much scarier consequence of changing Section 230 is that retweets may become endorsements, changing the way we interact online.
