Q&A with HATE SPEECH author Caitlin Ring Carlson


You open your book by highlighting the ubiquity of hate speech, yet despite the problems it causes, the term is expansive and often contested. What about the term makes it so hard for scholars to agree upon a clear definition?

What makes hate speech so difficult to define is its subjective nature. Phrases, images, and terms that I may see as maligning an individual based on fixed characteristics, such as race, gender identity, or sexual orientation, may not be seen the same way by others. Intent plays a role as well, and intent is difficult to determine. When slurs are used by members of the group they were originally meant to harm, that use is rightly considered a reclamation or reappropriation of the term and, thus, not hate speech, since the intent is not to malign anyone.

Some may think that hate speech is a symptom of racism and sexism. How do you demonstrate that it is in fact a driving force behind issues like bias-motivated violence and genocide?

I don’t think it’s possible to empirically demonstrate that hate speech causes bias-motivated violence. However, a historical analysis of genocide and bias-motivated violence clearly illuminates the relationship between hate speech and these atrocities. There has not been an incident of genocide in recorded history that was not accompanied by discourse seeking to dehumanize and “other” the targeted groups. Hate speech thus creates the ideological conditions for people to act out against members of another ethnic or religious group, for example.

How has the US reliance on freedom of expression informed legal responses to hate speech? How is this similar to or different from the way other countries approach the issue?

In the United States, we tend to place the right to free expression above other rights. We consider the harm caused by hate speech to be less costly to society than the harm associated with restrictions on our right to free expression, particularly as it relates to political dissent. Only when hate speech crosses the line and becomes a true threat or incitement to violence can it be punished.


Interestingly, we have other categories of speech that are exempt from First Amendment protection, such as obscene speech or speech that is injurious to another’s reputation. Hate speech is just not one of those categories. This approach is vastly different from that of most other Western democracies, which prohibit hate speech and punish it with fines or jail time. From Canada to the European Union, many countries have laws against expression that incites hatred based on a person’s race, gender, ethnicity, religion, and so on. Citizens of these countries tend to place the right to human dignity above the right to free expression.

In addition to being hurtful and laying a foundation for physical threats, how else does hate speech create personal, even political, barriers?

Several scholars, including Danielle Citron and Helen Norton, have argued that the proliferation of hate speech, particularly online, makes it difficult for those targeted to engage in the political process. For example, let’s say there’s a discussion happening on a neighborhood Facebook page about a new City Council ordinance to reduce police funding. It’s easy to imagine how, after posting her opinion, a Muslim woman might be met with a barrage of hate speech calling her names and encouraging her to “go back to her country.” To protect herself from this abuse, the woman leaves the discussion. A week later, when a spokesperson from the neighborhood is invited to speak at a City Council meeting, the Muslim woman’s perspective on the issue is not included or represented in the testimony because she was driven from the page by the vitriolic hate speech she encountered. In the future, she may be far less likely to engage in any online civic discourse for fear of similar attacks.

In terms of personal barriers, Mari Matsuda has for decades warned us about what she sees as the most significant potential harm caused by hate speech: that those targeted will come to believe in their own inferiority. If children are raised in a world where the public discourse tells them they’re subhuman because they are Black, transgender, or Jewish, they may come to believe that they are less worthy of dignity than other people.


Facebook has been a significant subject in conversations regarding the proliferation of hate speech through social media. What responsibility do social media companies legally have to address these issues, and how could we see these responsibilities change in the coming years?

Legally, social media companies have no responsibility to address these issues. As private virtual spaces, the platforms are free to create whatever community standards they want. As users, we agree to these rules when we accept the terms of service that allow us to access the site. From an ethical perspective, social media companies have an essential role to play in decreasing hate speech in public discourse. However, as publicly traded companies, their first responsibility is often to their shareholders, and it seems unrealistic to think that they will take any action detrimental to their bottom line. If, as I suspect, hate speech and other offensive content lead to greater engagement on a platform, these companies are unlikely to act differently unless users or advertisers demand it or the government steps in to regulate it. Keep in mind, though, that hate speech is protected in the United States. So even if there were a change to government regulation, and to Section 230 in particular, the law that shields social media and other computer service providers from liability for what third parties post on their sites, it would not affect the way social media organizations regulate hate speech on their platforms.

Given its history, Germany’s stringent laws restricting hate speech are not surprising, and the country has become an international leader in shaping standards for online communication and content. What of its efforts, if any, would be the greatest takeaway for countries like the US in their own regulation of hate speech?

While the German law NetzDG, which requires social media platforms to remove illegal hate speech quickly or risk substantial fines, is not perfect, there are several lessons we can take from this approach.


First is transparency. Part of this law requires large social media companies to create and disseminate reports regarding which content and accounts were removed and why. In addition, this approach serves as a reminder that regulation can, and perhaps should, be used to motivate social media and other computer services to act not only in the best interest of their shareholders but also in the interest of the public.

Historically, bias-motivated violence and political dissent have both flourished on college campuses. What complicates higher education institutions’ ability to address hate speech?

It is difficult for colleges and universities to address hate speech because of the tension between their dual goals of being places where new ideas are considered and places where people live and work. For centuries, students at universities have been asked to wrestle with concepts they disagree with in order to form their own opinions and, eventually, their broader worldview. Professors have been given academic freedom and tenure to explore alternative perspectives, test hypotheses, and speak out on critical public issues without interference from administrators. In so many ways, free expression is integral to higher education. However, problems arise when that expression, whether from faculty or outside speakers, threatens the physical and emotional safety of students, who in many instances are a captive audience that cannot simply “look away” when a professor or speaker uses an offensive slur or claims one race or gender is inferior to another. Therefore, colleges and universities must engage in the hard work of finding a balance between exposure to new ideas and creating a community where people feel safe and supported enough to engage with those ideas.

Greg Lukianoff and Jonathan Haidt, authors of THE CODDLING OF THE AMERICAN MIND, argue that higher education has incorrectly taught students that they are fragile, emotional beings, creating a culture of extreme safety that leads to intimidation and violence. What do such claims miss about the existence of trigger warnings and safe spaces?

What’s missing from the argument in THE CODDLING OF THE AMERICAN MIND is the students’ perspective, particularly that of students with historically marginalized identities.


In the book, I include a great quote from Mary Blair, a Black woman who was a student at the University of Chicago. CBS News interviewed her and several of her fellow students. In response to another student’s comment about the real world not being a safe space, she said, “I can assure you, all people of color who have existed in a white space know that the real world is not a safe space.” In my experience, students are not “fragile, emotional beings”; they are mindful, empathetic people who want to be able to engage with controversial ideas in a meaningful and productive way.

Along those lines, there is a fundamental misunderstanding regarding the term “safe spaces.” These are not intellectual safe spaces but rather environments where everyone feels comfortable expressing themselves and participating fully without fear of attack, ridicule, or denial of experience. No one is suggesting that students in the classroom avoid or ignore ideas they disagree with. Instead, these tools allow students to engage with those concepts in a respectful and constructive way. Finally, content or trigger warnings are simply tools that some instructors use to let students know that a sensitive topic or issue is about to be discussed so that students are not caught off guard. Rather than avoiding certain topics, such as sexual assault or bias-motivated violence, altogether, content warnings allow professors to communicate with students about the nature of the upcoming material.

As social media organizations seek to more aggressively remove hate speech from their platforms, in what ways can content moderation be improved algorithmically and logistically?

The algorithms and artificial intelligence used by social media companies to remove hate speech from their platforms have improved a great deal in recent years. Natural language processing allows companies to identify and remove all instances of particular words. However, the algorithms still struggle to identify hate speech when the meaning of a comment or post depends on its context.


For example, the phrase “go home b*tch” would not be considered hate speech if posted as a comment on a news story about a home team beating the visiting team. However, it would be considered hate speech if posted to a story about Representative Ilhan Omar’s most recent bill in Congress.

In terms of the logistics of content moderation, moving the process in-house, rather than outsourcing it to firms that offer low wages and problematic working conditions, would improve human content moderators’ efficacy. Dedicating more resources to identifying and removing hate speech (along with disinformation, harassing speech, and nonconsensual pornography) should be a top priority for social media organizations.

What don’t we understand about the phenomenon of hate speech? What should future research focus on?

Right now, we don’t fully understand the various impacts, big and small, that hate speech has on individuals and on society as a whole. I would love to see future research that unpacks the psychological, emotional, and physiological impacts hate speech has on individuals. For example, how are people influenced by hate speech about a group they belong to, compared with hate speech directed at them personally? From a structural perspective, we should investigate the role hate speech plays in establishing and maintaining racial and other forms of discrimination and inequality. Future research should also examine the relationship between hate speech and extremism, particularly online. We need to know how, specifically, hateful rhetoric translates into offline violence, and whether there are interventions that have been or could be successful at stemming the tide of hate online.

ISBN formats: TR: 9780262539906; E: 9780262361293

For more information on HATE SPEECH, request an eGalley on Edelweiss or NetGalley.

