Why Tech Corporations Should Start Muzzling Hate
Jake Kornmehl ‘24
Currently, tech companies are classified as platforms rather than publishers and therefore are not legally responsible for the content users post. However, many observers believe they should be held responsible for what is posted, since a post's placement could be interpreted as an endorsement of its ideas. These companies should therefore be held accountable for what they choose to allow on their feeds.
Section 230 of the Communications Decency Act allows tech companies to engage in “Good Samaritan” moderation in order to maintain a safe environment on their platforms without fear of being sued for inhibiting free speech. Although the word “moderation” introduces subjectivity into the law, there is still no reason why large tech companies should allow accounts with large followings to post racist, hateful, or antisemitic content. The Supreme Court has recently agreed to hear Gonzalez v. Google and Twitter, Inc. v. Taamneh, two cases that put the scope of Section 230 in question. The Gonzalez case alleges that Google “recommended ISIS videos to users” and “was critical to the growth and activity of ISIS,” and is therefore legally accountable. The Taamneh case seeks to hold Twitter, Facebook, and YouTube liable for a terrorist attack in Turkey.
Unfortunately, this subjective standard of moderation has created a grey area that allows large tech companies to suppress even moderate political posts that disagree with their platforms' preferred views. These technology companies are expected to balance freedom of speech against the filtering of hateful content that could be emotionally damaging to viewers or incite violent acts.
Republican politicians across America believe that Section 230 has allowed companies to “muzzle” conservative voices, while Democrats argue that the law allows the spread of false information. Although this dichotomy is a consequence of large tech companies' ability to “moderate” the information on their platforms, it does not excuse YouTube, Twitter, and Facebook from maintaining a baseline of safety on their sites. With a record increase in young people on social media in 2023, it is vital that we create an environment where the rising generation can learn about current events without being exposed to or manipulated by hate speech. Hate groups have become more prominent and widespread throughout the world in part because these extremist groups can impose their views on impressionable young people through the internet. A 2022 ADL study that surveyed youths ages 13-17 found that 65% of respondents from marginalized groups experienced harassment, with LGBTQ+ respondents more likely to be targeted (66%) than non-LGBTQ+ respondents (38%). Harassment of Asian American respondents increased significantly, from 21% in 2021 to 39% in 2022. Women (14%) were harassed nearly three times as often as men (5%), and Jewish respondents were far more likely to attribute harassment to their religion (37%) than non-Jewish respondents (14%). Harassment was most common on Facebook (68%), followed by Instagram (26%) and Twitter (23%). Sadly, 47% of the young people in this survey experienced some form of harassment on these social media platforms.
In order to hinder the growth of organizations such as the Ku Klux Klan (KKK), the National Socialist Movement (NSM), and QAnon, these influential tech companies must be held responsible for the content on their platforms. Without clear, tangible guidelines in the law, large social media corporations will easily find loopholes in order to maximize profits. Congress would have less reason to regulate these companies if they defined clear consequences for violating hate speech and harassment guidelines, regularly evaluated and publicly reported accurate statistics on hate speech on their platforms and removed it quickly, worked with communities targeted by harassment to modify their algorithms, and provided data to academic researchers for critical analysis, with the goal of better understanding and therefore mitigating online hate.
Yes, freedom of speech is an integral part of the American identity. But these tech companies have the power either to allow the spread of extreme political views and terrorist ideology or to prevent it. With important issues such as global warming and the Ukraine crisis already plaguing our lives, politicians should do everything in their power to promote a safer, more moderate environment. With that in mind, politicians should indeed ensure that large tech corporations are held accountable for the information they make available to the public.