Fifth World VII




Letter from the Editors

Dear Reader,

Dr. Cantrell always says that good writing takes time and leisure. Good thought — that which sits at the crux of the humanities — is something that needs to be facilitated and trained in an environment that allows for it. The past two years have thus been a time of immense introspection for all of us. At the beginning of the pandemic, we were all forced into near-total isolation, followed by a long and shaky return to form. Not only were we forced to spend time alone with our thoughts, but many of us had the opportunity to watch the world burn from our couches. We weren't just thinking about ourselves; we were given the time and freedom to think about our relation to everything around us, and so 2020 marked a distinct surge in political consciousness. Generational problems of institutional racism and classism were brought to light, and the real experiences of people were finally made visible.

This quest for visibility is reminiscent of the idea of "The Undercommons" as Fred Moten describes it. The undercommons is the world of life that exists beneath the "commons" – our collective idea of the world. It's a gothic creature hidden from sight. It's historical narratives that don't get included in lectures. It's domestic laborers hidden from view to maintain conditions of luxury. It's the physical stratification of the neoliberal city. It's the bodies of people of color that the institutions around us are built upon. It's the lived experiences of marginalized individuals. The undercommons is that which exists beneath the surface of the world, and our mission has been to understand it by challenging overarching societal narratives and delving into that which we cannot see.

In that sense, one might call the work of this journal fundamentally surrealist. D. Scot Miller, self-proclaimed founder of the Afrosurreal Arts Movement, said "[the] Afrosurreal presupposes that beyond this visible world, there is an invisible world striving to manifest, and it is our job to uncover it." Miller identifies afrosurrealism as a multiracial movement of radical art that seeks, fundamentally, to challenge the visible world. Afrosurrealists sit in diametric opposition to prevailing literary and artistic canons by bringing form to the unseen. Frida Kahlo, whom Miller considers a staple of the Afrosurrealist canon, said "I'm not a surrealist. I just paint what I see." That is, the world in which she exists, as a woman of color, is fundamentally surreal. To be marginalized as an individual means to be pushed to the furthest edges of society. To be marginalized as a group means that your experiences and lives shall forever be considered separate from the world's prevailing narratives of growth and progress.

According to the Hopi creation myth, which is the namesake of this journal, humanity currently resides in the fourth world. In each of the previous three worlds, humanity eventually went berserk, tearing apart the fabric of the world with its own actions. These cycles of creation and destruction are meant to eventually give way to the Fifth World, allowing us to emerge into a new land where knowledge is abundant and harmony has been restored.

In theoretical terms, at least, we can find commonality in the language of the undercommons, of the afrosurreal, and of the fifth world. All of it implies a version of existence beyond what is available to us today. All of it implies life beyond this — a life that is attainable, a life that is out of sight, a life that has merely been obscured, but a world that can be revealed to us. The fourth world will eventually fall. The issues that plague us cannot last forever, and the current devastation of the world allows us to glimpse into the Fifth World. That's what this publication seems to be about: our collective understanding that a world exists beyond this one — that beneath its magical surface, there's an insidious, gothic underbelly that threatens the current state of things. It can feel anywhere from difficult to horrifying to self-actualize through global strife. The sudden awareness of multiple realms, the ways in which you move through them, and the ways in which you maintain them challenges your preconceived notions of peace, of normativity, of the natural arc of your life. But you cannot unsee the fifth world. Once you've seen it, you must know that it's coming. You must know that it's imminent. You must feel it shaking the foundations of the world around you. The fifth world is sitting just below the surface; the tension between it and its precursor can be felt by us all. The hope is that the writings of this journal really get to something – that the thought of our brilliant authors reaches into new realms. The hope is that through our writing, we can reach into the Fifth World.

Our most sincere thanks go to Dr. Cantrell for his endless compassion and effort in guiding us through our research journeys. Fifth World, now in its seventh iteration, could not exist without him. His passion for and immense knowledge of the humanities have shaped each of his students as writers and individuals. We extend our heartfelt gratitude towards the faculty of the NCSSM Humanities Department and Dean Moose, who have continued to support the growth of the Research in the Humanities program. To Dr. Roberts, Dr. O'Connor, and the NCSSM Foundation, thank you for providing the resources to pursue the questions that matter most to us. To Alicia Bao, thank you for your artistic talents: you so beautifully translated our thoughts into something tangible – the cover. To Taylor Nguyen, we give our most genuine thanks for your guidance in helping us learn the ropes of Fifth World, and for your limitless patience and grace in fixing our mistakes. We wish especially to thank our manuscript editor, Sree Elayaperumal, for her wonderful dedication to our journal and its meticulous formatting. Lastly, to our family, friends, and peers, thank you for ever inspiring and supporting us. This journal is a product of your kindness.

Sincerely,
The Editorial Collective
Katie Beard, Juan Castillo, Sree Elayaperumal, Ella Evans, Zenith Jarrett, Pearl Maguma, Sewoe Mortoo, Sanjana Nalla, Evelyn Ong, Catherine Vu, and Winnie Wang


Contents

Vaccine Hesitancy and The History of Drug Regulation in the U.S. (Nadia Bishop) ... 4
Suffering in Silence: Women and the Western Medical Institution (Sreenidhi Elayaperumal) ... 12
Modern Minstrelsy: Transfiguration at the Hands of Whiteness (Zenith Jarrett) ... 20
Oriental Femininity — Fears, Fantasies, and American Realities (Winnie Wang) ... 32
Establishing the Perception of Home Amidst Violence for Indian-American Immigrants through Heritage, Household, and Permanence (Pranet Sharma) ... 39
Nature Words and Children's Dictionaries: The Necessity of Their Interaction (Noell Boling) ... 47
Religion in Chinese-American Communities During the Exclusion Act Era: Understanding Historical Assimilationism to Contextualize Modern-Day Struggles for Asian American Identity (Jonathan Song) ... 53
When Water Isn't Life: The Flint Water Crisis as a Case Study in Slow Violence (Josie Barboriak) ... 60
Less than Human: How the Perception of Autism is Negatively Affected by the Media and Medical Field (Harper Callahan) ... 68
The Common Addict (Elisa Kim) ... 74
The Commodification of Aloha: An Analysis of the Progression of Colonialism in Hawaii (Juan Castillo) ... 82
A Driving Force in American Politics: The Emergence of the Christian Nationalist Movement and its Roots in Puritanism (Ella Evans) ... 88
Politics and Polity: America in the Shadow of the Supreme Court (Mark Muchane) ... 95
Historical Importance of Indigenous Irrigation Systems on the Peruvian Coast (Katie Beard) ... 104
Understanding the Future of Virtual Worlds in the Environment of MMORPGs (Julien Cox) ... 112
The Effects of the Incarceration Cycle on Socially Excluded Groups (Ekwueme Eleogu) ... 117
Class Narratives in Media (Maddie Beard) ... 122
Cartography and Culture: Romance and Rationalism in Early Modern Mapmaking (Frank Ladd) ... 129
Classicism's Influence on the Fall of the Empire of Liberty: How American Democracy Can Survive (Riley Holland) ... 136
The Social Consequences of the Portrayal of Older People in Media (Catherine Vu) ... 144
In Fear of Paganism: Exploring Evangelical Appropriation of Yoga and Meditation (Sophia Lavigne) ... 152
Lifespan: A Scientific Analysis and Global Application (Isabella Larson) ... 159
Prominence and Popularity of Chinese American Cuisine (Alicia Bao) ... 167
The Nature of Museum Neutrality (Adriana JimenezWillis) ... 171
The Global Implications of Gender-Based Violence on Politics and Society: An Analysis (Sydney Mason) ... 177
Unequal Treatment: Impact of Implicit Racial Bias on United States Healthcare (Ethelyn Ofei) ... 183
Existentialism and its Implications on Society (Sanjana Nalla) ... 190
Investigation of the Influences of Christianity on Economic, Medical, and Social Reformation in China (Gracie Lin) ... 197
Social Media in Modern Christianity (Evelyn Ong) ... 205
The Evolution of Queer Representation in Media in the United States (Pearl Maguma) ... 211
Misogynoir in Film and Television: Where Does this Leave Black Women Today? (Della Crawford) ... 215
Examining the Relationship and Consequences of Hypersexuality in Sexual Assault Victims (Kendall Esque) ... 221
Feminism and Science: A Historical and Contemporary Analysis of Seemingly Paradoxical Institutions (Meghana Chamarty) ... 227
Tribal Influence and How the Culture of a People Can Change Over Time: A Case Study (Sewoe Mortoo) ... 234
An Afterword to Volume Seven (David Cantrell) ... 240



Vaccine Hesitancy and The History of Drug Regulation in the US
Nadia Bishop

In January of 2021, many of us saw a very different future than the one we are currently living. With the vaccine beginning its rollout, many believed (myself included) that once we had enough vaccines to distribute, we would significantly lessen the pandemic in the US in a matter of months. However, like many people, I had significantly underestimated vaccine hesitancy. As vaccination rates began to slow far short of herd immunity, I realized over the course of 2021 that there had to be more to this issue than a few online groups and one paper about vaccines and autism. Many people think that anti-vax sentiment is new, brought about by the age of 'holistic' medicine and far-left leaning online groups; however, there is actually a long history behind the views of those who do not trust the government and its regulatory agencies enough to take the vaccine. Looking back at the history of drug regulation in the US, we see that the majority of the laws were passed as a response to disasters, and that mistakes with drug regulation happened repeatedly for the same reasons to the same people: disadvantaged groups within the US were constantly injured because of decisions informed by corporate profit, not collective well-being. This similarity in different disasters throughout US history has led me to ask whether a pattern of neglect could be established and potentially interpreted. Inspecting this aspect of history will allow us to understand how certain people are disproportionately affected by the structure of the American healthcare system, much of which is built for commercial profit, and how this creates a vicious cycle of distrust of science caused by the failures of the FDA to protect these people.

During the pandemic, I have found it very hard to have sympathy for those who might not trust the government when it is promoting policies backed by scientific research. Even with this frustrating situation, though, I also don't want to disregard those who legitimately have trouble trusting a system that historically has done a lot of harm to them. When the desire "for the greater good" benefits companies and never your community, how would that affect your trust of that "good," even when it's not only in favor of the corporations?

In order to understand how various drug regulation disasters fit within the larger history of the FDA as a regulatory body, I also intend to look back at the previous scandals of the 1890s and 1900s, which ended in the Food and Drugs Act (or Dr. Wiley's Law) being passed in 1906. The fight for these regulations (which would eventually become the FDA!) proceeded in a similar way, with the lack of regulation causing significant problems until a number of incidents had occurred. Once the public opinion shifted significantly due to these scandals, the government had to regulate the companies regardless of their monetary influence or risk losing the people's trust completely. Examining the events that preceded the FDA will presumably help contextualize the issues with the FDA in the later 1900s. Analyzing media from this time period is also an important part of understanding the similarities in today's news and media. Looking at government documents and newspapers from the time may shed some light not only on how the government failed to support the FDA, but also on how people nationwide perceived those failures and how that affected their trust of the FDA and the government in this case. The newspapers will indicate what information the general public had access to, which will provide insights into how people of the time reacted to the FDA's handling of these disasters. As I examine the way in which the FDA was reported on in the media, I hope to learn more about how this disaster was perceived, but also to explore the patterns that begin to emerge in food and drug related disasters. Comparing these two time periods to the present will help me to understand and interpret the patterns that emerge, and hopefully better understand the cause of the current situation surrounding the FDA and vaccine hesitancy in specific groups of people. I hope to also examine instances of successful resistance to the broken system, finding these moments in history where people pushed the FDA to take steps to protect the population in spite of the corporate pushback. Looking at these remarkable moments and what they have in common with one another should then lead us to the beginnings of a potential solution to our current predicament.

In order to talk about the FDA in the present, we must first go back to the very beginning and consider what prompted the creation of the FDA. As a result of industrialization in America in the late 1800s, people moved away from the farms that provided their food, and large food companies formed to sell food to those in cities. In this period, America was effectively the "wild west" when it came to food safety and additives. Manufacturers were discovering and using many different chemical additives, and there were no regulations forcing them to disclose these additives or test whether they were safe. The food industry was purely for profit, and there were no agencies for public health to protect the consumer. This lack of regulation primarily hurt the poor, because food additives were used to make production cheaper, which made adulterated food cheaper, and the cheapest food the least safe. Harvey Wiley, a government chemist at the time, saw how this lack of regulation was affecting the public and felt the need to do something about it. However, financial lobbying by food companies at the USDA meant that his attempts to get the word out about these dangerous additives were blocked or prevented. After he was prevented from publishing more installments of Bulletin 13, his first attempt to spread the word on what was happening, he realized that his current approach wasn't going to work.

Luckily (in some ways) for him, the importance of food regulation became very clear to the public in 1898 with the Spanish-American War and the ensuing 'embalmed beef' debacle. Several large meatpacking companies won large contracts with the US military to send beef to the soldiers on the front lines. What arrived, however, was rancid and reportedly smelled of toxic chemicals. This well-publicized incident raised public interest in food safety, and also made it clear to Wiley that if the public understood the dangers of food additives, it would be able to pressure Congress into passing legislation to regulate the food industry for the first time. In order to raise awareness, he decided to perform human trials to definitively determine whether food additives were making Americans sick. Twelve young men volunteered to be fed increasing doses of borax to determine how borax affected the body. While Wiley's studies showed exactly what was expected (consumption of borax resulted in serious intestinal illness), the more important result of these trials was the media coverage. At first, Wiley did not want his studies to be given publicity in the newspapers, instead aiming for more 'serious' coverage. However, as George Rothwell Brown began to publish stories in the Washington Post about the experiments, the American public became invested in the mission of the Poison Squad, and consequently started to become more aware of food additives and their dangers.

When Upton Sinclair's novel The Jungle came out in 1906, the public suddenly had a reason to be much more aware of the importance of industrial food production and the lack of food legislation. The conditions Sinclair described seeing in the meatpacking plant were disgusting, ranging from moldy meat being sold, to rats and human fingers accidentally going into the mix, and these descriptions had a distinct impact on how readers perceived the American food industry. Sinclair himself said of the reaction to the book, "I aimed for the public's heart, but by accident I hit it in the stomach." Americans, now uncomfortably aware of what was going into their food, wanted this problem fixed as quickly as possible. This gave Wiley the backing he needed: while several bills fizzled out in the lobbyist-filled Congress, eventually the Pure Food and Drug Act and the Meat Inspection Act were both passed within a year. As a result of this scandal, laws were passed to finally give Wiley and the chemistry division the go-ahead to push back against companies manufacturing adulterated food. This new power to create and enforce regulations being given to the chemistry division would lead it to eventually grow into what is now the Food and Drug Administration (Blum).

In 1937, a new form of a patent drug called Elixir Sulfanilamide was developed and shipped out to hundreds of pharmacies and doctors around the country. It quickly became clear, however, that something had gone horribly wrong, as deaths began to occur around the country as a result of the elixir. At the time, sulfanilamide was a well-known and widely used drug for a variety of illnesses, from throat issues to pneumonia. It came in a powder form and a tablet form, but in 1937, the drug company producing it (S.E. Massengill) discovered that this powder could dissolve in diethylene glycol, and this became Elixir Sulfanilamide. However, diethylene glycol is what's used in antifreeze and is actually poisonous when consumed. This rather significant problem wasn't caught because of the very lax regulations of the time, which did not require companies to test whether drugs were toxic before sending them out. This oversight killed over a hundred people, sparking an inquiry into how drugs were (or weren't, more realistically) regulated at the time. Reading articles from this time period after the disaster, we see laments for the lack of regulation, with the Evening Star saying that "authorities agree that much of it was sold as a home remedy over drugstore counters across the country to people who had absolutely no competent medical advice about how it should be used." Dr. Campbell, who was chief of the FDA at the time, also noted that this poor regulation wasn't an isolated incident, saying "we still have deaths and blindness resulting from dinitrophenol, which was placed recklessly on the market a few years ago by manufacturers of reducing agents… from cinchophen poisoning, a drug often recommended for rheumatism and arthritis… from thyroid and radium preparations improperly administered to the public." This great range of issues is very similar to the scandals surrounding food additives in the previous century: even after the aforementioned Poison Squad studies proved that it wasn't safe to consume formaldehyde, borax or salicylic acid, the food producers had so much financial power in Washington that the government refrained from regulating the food industry until many different scandals had come out in the form of the embalmed beef story and Sinclair's The Jungle. Similarly, it took the sulfanilamide disaster and a public outcry over it to get drugs regulated more effectively.

Looking at these events, we can establish a cycle of greed, mistakes and deaths, and regulation, all documented by the media. In the case of Elixir Sulfanilamide, the media reported not only on the widespread effects of the mistake, but also on the shocking lack of power that the FDA had over the situation. Everything S.E. Massengill had done was technically legal at the time, other than their chosen name for the drug. Although elixirs were legally supposed to contain alcohol, Elixir Sulfanilamide didn't; if, however, Massengill had instead called it a "solution," the FDA would have had no recourse to sue. At the time, there were no regulations in place that would require that drugs be tested in any way for toxicity before they were sent out. A year later in 1938, the Federal Food, Drug and Cosmetic Act was passed, which added a requirement that drugs be tested for safety and proven to not be poisonous before being sent out. With the passage of this new law, the FDA began to change. Instead of being an agency that was primarily responsible for investigating and cleaning up after disasters, the FDA became a regulatory agency, whose job it was to oversee the production and release of drugs and ensure drug safety before they hit the market. This move from being a disaster investigation bureau to a group with a largely preventative role marks the true beginning of the FDA as we know it today.

Looking at these two crucial turning points in the founding of the FDA, we can see a pattern emerging. Giving the FDA the power to keep Americans safe is rarely a significant concern in Washington until mistakes are made and people are injured as a result. When mistakes are made, they generally disproportionately affect certain groups of people, especially people living in rural areas and people of color. This is clearly shown in the coverage of the sulfanilamide disaster: many of the deaths occurred in rural mining towns like Rocky Mount, NC, and a story was run repeatedly in newspapers about an African-American woman who lost her brother, but kept the bottle of medicine that killed him on his grave, allowing FDA investigators to definitively connect his death to the dangerous medicine. Looking towards the present, we can see how the unequal share of the damage has not gone away. People in rural/underserved areas and people of color are still disproportionately left behind or hurt by the current American healthcare system. For example, the ongoing opioid crisis affects rural communities of blue-collar workers much more severely than most other populations in the US. According to a study by the Farm Bureau, three out of four farmers say that they could acquire large amounts of opioids or prescription painkillers without a prescription in their community, but only one out of three say that they could access treatment for addiction to painkillers or heroin in their community.

Even when treatment is available, it may also be out of reach for those in rural communities: the survey revealed that under half the respondents felt that they would be able to get treatment that worked and was also covered by their insurance or otherwise affordable for them. This is likely due to the lack of healthcare resources in rural communities: it's much easier to prescribe painkillers (especially ones being aggressively advertised by pharmaceutical companies) than it is to provide long-term addiction treatment or therapy for a price people in those places can afford. It's also much easier to restrict access to opioids than to provide addiction treatment, so it's also much more difficult for these underserved communities to recover. As a result of the failure of the FDA to prevent Purdue Pharma from getting oxycodone approved for general use instead of specific conditions, prescriptions for powerful opioids were aggressively promoted across the US, especially in rural areas. When the addictive properties of those drugs became clear, the already overburdened healthcare systems there could not keep up with the disaster that was unfolding. Many rural communities felt betrayed, having been left unprotected from the greed of the pharmaceutical companies. The way in which the FDA dealt with the opioid crisis didn't help with this feeling of betrayal, either. As the FDA cracked down on the prescriptions of opioid pain medications, former opiate addicts were forced to turn to stronger, illegal drugs like cocaine and fentanyl, and overdose deaths increased significantly in rural areas (CDC). This left many rural communities vulnerable due to the increasing number of addicts, with very few addiction resources to help them, and an unsympathetic and under-supported healthcare system. The FDA was more interested in preventing addicts from obtaining prescriptions than it was in helping those in the community who were addicted avoid making the step to hard drugs, and this more severely hurt rural communities that lacked the resources to deal with the fallout. The effects of the opioid crisis show that our regulatory systems are still far more likely to allow rural people and people of color to slip through the cracks and be injured by a lack of regulation and corporate greed.

It is impossible to speak fairly about vaccine hesitancy and trust in the FDA without also considering the ways in which regulatory agencies, much like the law, have historically failed to adequately protect people of color and still do not. One of the most famous examples of this is the Tuskegee Study of Untreated Syphilis in the Negro Male, a deeply unethical and racist study that passively monitored the condition of hundreds of black men with syphilis for 40 years without ever treating them, despite the fact that effective treatments for syphilis like penicillin were readily available by the 1940s. The study ran from 1932 to 1972, finally ending after being exposed by Jean Heller and the Associated Press in 1972. This article revealed many horrifying aspects of the study, from participants being lied to about their diagnosis of "bad blood," to unnecessarily invasive tests being performed by the primarily white staff, to the 'compensation' for participating in the study being only hot meals and burial plans. It quickly became "a symbol of their mistreatment by the medical establishment, a metaphor for deceit, conspiracy, malpractice, and neglect, if not outright genocide" (Corbie-Smith). According to Marcella Alsan, the life expectancy of black men, the tendency of black men to seek medical treatment or participate in clinical trials, and their trust in medical institutions all dropped significantly after the exposé in 1972. Alsan does specify that her work only proves a correlation in the reactions of black men and that "the effects we measure are specific to black men and to geographic proximity to central Alabama, not to a post-1972 condition affecting the South overall." I would argue, however, that even if the Tuskegee study somehow only made black men in the South feel unsafe seeking medical treatment, there are many other instances of similar mistreatment affecting black women, from the erasure of Henrietta Lacks to the horrifying experiments run by James Marion Sims, the "father of modern gynecology." While the FDA was technically not involved with the Tuskegee Study (which was instead run by the CDC), many still do not trust the FDA as a result. The question is not necessarily what the FDA did, but what it didn't do to stop the experiments. That "fact check" articles exist on sources like USA Today for the claim that the FDA was involved in the Tuskegee Study makes it clear that many black people feel (and rightly so!) that they have been betrayed by an organization intended to protect the American people, and as a result they are hesitant to trust the FDA to protect them in the future.

One important way to gauge the public perception of and trust in the FDA is to examine media coverage from the time. While muckraking journalism by the likes of Upton Sinclair and public outcry in the news was what birthed the FDA, it soon became a double-edged sword. Soon after the passage of the Food and Drugs Act of 1906, articles about Dr. Wiley and the FDA (at the time, the Chemistry Bureau) began to paint him as in favor of overbearing regulation, eventually pushing him out of the Chemistry Bureau altogether after his failed attempt to get Coca-Cola to lower the amount of caffeine in their products. Wiley, aware of the power of the media and lobbyists, joined the staff of Good Housekeeping and wrote columns and articles about food safety and regulation, trying to keep a hold on the media coverage of the FDA. As time went on, media coverage of the FDA began to falter, for a government organization intended to keep the populace safe holds little interest for the corporate media unless it does something particularly notable–in particular, when it fails at its main job.

Looking at the newspapers from the 50s and 60s, we can see this fact pretty clearly. Using the Library of Congress's Chronicling America project, I have read 100 of the articles that mention the FDA, most appearing from the early 50s to the mid 60s in papers like the Evening Star. Of those 100 articles, 85 portrayed the FDA as less than competent. Many mentioned drug regulation, talking extensively about disasters and using these stories to criticize the FDA. Others spoke about battles over regulation and budget cuts to the FDA, repeatedly mentioning the staffing issues, with even the president of the Pharmaceutical Manufacturers Association saying in 1960 that "It's a national calamity that the FDA, with its current staff, budget and facilities, is unable to perform adequately its existing statutory responsibilities." One paper even went as far as to assert that "The FDA says that there apparently is nothing which can prevent the public from becoming a guinea pig in the testing of new drugs, because there are some things about them which cannot be learned until after extensive use" (Evening Star, 1952), making it clear that the FDA wasn't powerful enough to protect the people from the long term effects of some drugs. Corruption was also a common topic. The "revolving door" between the FDA leadership and high-up positions in industry was mentioned repeatedly, especially around the time of the Henry Welch scandal. This occurred in 1960, when it was revealed that Welch had accepted $287,000 from firms that the FDA regulated and had lied about the source of this income to his superiors (Evening Star). Even in instances where the FDA succeeded, like the thalidomide tragedy, papers downplayed the heroism of Dr. Frances Kelsey, ignoring her refusal to give in to the impatience of industry and instead saying that a letter she "chanced" to read was the only thing standing between the newborns in the US and thalidomide. The papers also took the opportunity to point out issues with the FDA, saying that "there comes a time when a drug like thalidomide - the sleeping pill that once seemed so promising - must be tried on humans … is also a moment over which the FDA has little control, even when a drug may be as potentially dangerous as thalidomide." People even began to recognize the cycle of disasters, regulation, then relaxation that I pointed out earlier. One senator wrote in a 1962 issue of the Evening Star, "I wish some day the country would be able to act without some sort of crisis or tragedy," and a congressman lamented in 1962 to the East Cleveland Leader, "Unhappily, from the earliest beginnings, it has required tragedy to spur the strengthening of the FDA." This clear bias towards negative coverage of the FDA in the 1950s and 1960s makes it clear not only that most news about the FDA was written in a deeply unflattering and alarming light, but also that people were starting to recognize that disasters often had to happen before real change was made.

This also leads into our current virtual landscape, where misinformation, disinformation and fact are being presented side by side. One unfortunate effect of the internet is that it is now a simple matter to create a somewhat official looking website or document containing whatever one needs to pass as information, and that it is even easier to propagate this (mis)information on social media. When added to the general way in which the FDA is and was reported on, this makes the current prominence of vaccine hesitancy in the media unsurprising. Since most of the mainstream coverage of the FDA comes in the form of exposés and disaster analyses, it's not hard to accidentally believe a fake story about the FDA or the covid vaccine because of its similarity to mainstream news coverage. The media affects how we think about current events, but it also represents and exaggerates ideas that already existed. From this, we can tell that this type of continuous media coverage makes people less likely to trust the FDA, and that the willingness to believe misinformation comes from somewhere: from preexisting fears rooted in the historical damage caused by the lack of food and drug regulation in America.
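A note on replicating this survey: the Chronicling America collection also exposes a public search API, so a comparable sample of mid-century FDA coverage could in principle be gathered programmatically rather than by hand. The short Python sketch below is a hypothetical illustration of that approach, not the procedure actually used for the 100 articles above; the endpoint, the query parameters (andtext, date1/date2 with dateFilterType=yearRange, rows, format=json), and the title, date, and id response fields are assumptions drawn from the Library of Congress's published API documentation and should be verified against it before use.

# Hypothetical sketch (not the method used in this essay): pull newspaper pages
# that mention the FDA from the Library of Congress's Chronicling America search
# API, to assemble a reading list like the one surveyed above. Endpoint,
# parameters, and response fields are assumptions based on the public API
# documentation and should be checked against it before use.
import requests

SEARCH_URL = "https://chroniclingamerica.loc.gov/search/pages/results/"

def fetch_fda_pages(start_year=1950, end_year=1965, rows=100):
    """Return basic metadata for digitized newspaper pages mentioning the FDA."""
    params = {
        "andtext": "Food and Drug Administration",  # full-text search terms
        "date1": str(start_year),
        "date2": str(end_year),
        "dateFilterType": "yearRange",
        "rows": rows,               # number of results to return
        "format": "json",
    }
    response = requests.get(SEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    items = response.json().get("items", [])
    # Keep only the fields needed to locate and read each page later.
    return [
        {
            "paper": item.get("title"),   # newspaper title, e.g. "Evening star"
            "date": item.get("date"),     # publication date string
            "page_id": item.get("id"),    # path to the digitized page
        }
        for item in items
    ]

if __name__ == "__main__":
    for page in fetch_fda_pages()[:10]:
        print(page["date"], page["paper"])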

When talking about the internet, media and the FDA nowadays, we cannot avoid mentioning the most obvious and influential child of medical misinformation, the anti-vax movement. As long as there have been vaccines, there have been people who believed that they were dangerous, with a minister in 1722 even declaring the practice of inoculation "sinful" and "diabolical". After laws were passed in the early 1900s that required parents to vaccinate their children, the Anti-Vaccination League was chartered in Britain to "protect the liberties of the people." In recent years, the anti-vax movement has been given a boost in the form of a research paper written by Andrew Wakefield, a doctor whose license has since been revoked, that was published in the Lancet and attempted to draw a link between autism, inflammatory bowel disease and the MMR vaccine. The paper was refuted many times and was eventually retracted by the Lancet; however, by that time the damage had already been done. Wakefield's work has been publicized extensively, influencing a whole generation of new parents. In the UK, the MMR vaccination rate dropped from 92% in 1996 to 84% in 2002, and in the US, the decline was around 2%. As the vaccination rates for MMR and other vaccines dropped, pockets appeared where the vaccination rate had fallen below the threshold for herd immunity, and with them, old diseases began to resurface; measles outbreaks began to occur in the UK and the US as a result of the loss of herd immunity (Cureus). The movement hit a fever pitch when the coronavirus pandemic started and covid vaccines hit the developmental stage, with many people claiming that the vaccines were created too quickly and couldn't be trusted, especially while they were still under emergency use authorization. Anti-vaxxers as a group have grown from a fringe collective to a group that makes mainstream news every day, with political figures suddenly campaigning on the "right to choose" (one's vaccination status) and the Covid-19 vaccination rate in the US hitting a plateau far short of herd immunity. History and the news account for part of the reason that anti-vax sentiment is so prevalent now and spreads as quickly as it does, and anti-vaxxers as a group have gained significantly more traction over recent years. These two facts, though, when taken separately, don't completely explain the sheer magnitude of vaccine hesitancy in the US right now, especially in the face of a global pandemic and another set of potential lockdowns.

The combination of rural people who are skeptical of the abilities of the healthcare system and people of color with a fundamental distrust of the government's ability to protect them results in a large percentage of the population who are susceptible to anti-vax propaganda. With the notable shift in the media view to accommodate anti-vaxxers as a voting bloc and the growth of the internet as a news source, misinformation and bad science share the same sites as real science. Even if someone doesn't believe all of the conspiracy theories about vaccines or agree with Wakefield's conclusions, seeing ideas like his given such a spotlight regularly makes them seem more legitimate. This knowledge, plus the fact that many Americans are hesitant to get vaccinated, points to the wrong events coinciding at exactly the wrong time. Just as the anti-vax movement was becoming more common in the mainstream media and online due to Wakefield's paper and the extensive coverage it received in the news, the coronavirus pandemic hit and people watched the scientific method unfold in real time. The tendency for scientists to get things wrong the first few times, then to adjust the hypothesis to match the new scientific evidence, resulted in confusing and at times contradictory messaging about the actual dangers of Covid-19 and the proper safety precautions to take during the pandemic.

One example of this is the guidance on masks in the early pandemic. In January and February, the CDC stated in a press conference that it "does not currently recommend the use of face masks for the general public" (newsroom), and the US surgeon general at the time (Jerome Adams) added further confusion by tweeting misleading information, saying "Seriously people- STOP BUYING MASKS! They are not effective in preventing general public from catching #Coronavirus, but if healthcare providers can't get them to care for sick patients, it puts them and our communities at risk!" While public health agencies never said masks didn't work, they didn't recommend them in the early pandemic. Dr. Anthony Fauci later explained in an interview with TheStreet that "the reason for that is that we were concerned ... that it was at a time when personal protective equipment, including the N95 masks and the surgical masks, were in very short supply." However, many people (like the surgeon general) assumed that masks were not recommended because they didn't work for the general public, rather than because healthcare professionals weren't sure how effective they were and healthcare workers needed them. By its very nature, the way that scientific writing works led to confusion: in science, the "default" or "null" hypothesis is always that there is no link between the metrics being studied, and the studies and experiments are intended to test that null hypothesis. This does not work well, however, when writing to the general public, for when most people hear "don't wear masks because we don't know if they work to stop community transmission of this specific virus," they think that means that there is evidence against masks working for Covid-19, rather than no specific evidence for this situation.

This confusion around what many perceived to be rapidly changing guidelines and recommendations laid the perfect groundwork for conspiracy theories about the pandemic and the vaccine to spread online. Once the vaccine was approved, the fight ensued across social media about who was and wasn't getting vaccinated and why. Many political leaders began to campaign and legislate about the right to not get vaccinated, irreversibly politicizing the vaccine while attempting to appeal to those who didn't trust the FDA after its pandemic response. The debate was reframed from a matter of public health and science to one of personal choice, with many claiming that the vaccine was experimental and they wanted to wait for long-term studies before they got it. For someone who already has a historically justified level of distrust in the ability of the government to safely regulate medicine, this polarizing debate being all over the mainstream media and internet makes the choice whether or not to get vaccinated seem like a pretty obvious one. If given the option to either A) get vaccinated and place their trust in the FDA's emergency use authorization or B) wait and see what happens before taking any action, the average person's response would be to do nothing and wait. Taking the current state of news about the FDA in combination with the rising anti-vax movement and using that as a lens to look at vulnerable populations, we start to understand how vaccine hesitancy in the US spread so quickly.

When we look at these crises, it's also important that we examine the crucial role that marginalized groups also had in bringing about reforms to protect those in society who are most vulnerable. While the Wileys and Theodore Roosevelts of the time deserve their share (but not more) of the credit, we also must think about the groups of ordinary people who brought about change by standing up for themselves and what they believed in, making the necessary societal changes that preceded legal change. During the Poison Squad era, public awareness of the dangers of food additives was increasing due to articles written about Wiley's experiments. However, the groups that took action to protest food additives were the primary purchasers of food: housewives. Articles appeared in the Ladies' Home Journal, Good Housekeeping and in Fannie Farmer's cookbooks exposing the dangers of food additives. Farmer, a pure food evangelist herself, even advised women to go and get prussic acid and test their food themselves for dangerous additives. Suffragettes also picked up the Pure Food cause, with Alice Lakey (a prominent suffragette at the time) introducing Dr. Wiley to many sympathetic women's groups across America. The willingness of these women to push back against food retailers in everyday life attracted the eye of corporate leaders, who realized that pure food and a lack of food additives could easily be a selling point. It also caught the notice of Theodore Roosevelt, who knew not only that these women cared a great deal about food safety, but that many of them lived in states where they could vote and/or would influence others to vote for candidates that promised better regulation, effectively paving the way for the legal changes that needed to be enacted (Blum).

Another example of this resistance comes in the form of the HIV/AIDS protests in the 80s. During the AIDS crisis, the issues with the FDA's drug approval process became very clear as the experimental drugs for treating HIV and AIDS were not approved fast enough, and the government was largely indifferent to speeding that process up. On October 11, 1988, members of ACT UP participated in the Seize the FDA protest and spoke with the FDA about potential ways to fix the system in order to protect queer people and prevent future deaths from slow drug approval. Their protest led to the introduction of 'parallel tracking', or expanded access to experimental drugs that had been proven to be safe, even if the effectiveness of these drugs was not yet clear: a massive change in how clinical trials and drug approval happened (Specter). This unwillingness to be passive victims of poor regulation, instead demanding that changes be made to the fundamental legal structures that were killing them, is truly extraordinary, and deeply necessary if further change is to be made.

The knowledge that both of these groups exist–privileged white anti-vaxxers and those who are hesitant because of their historical background of mistreatment–leads to real difficulties. How do we combat misinformation and those looking to benefit from it without disregarding the voices of the marginalized? When is criticism of a government agency legitimate, and when is it using this history of failures in bad faith to discredit real science for profit and political power? One key place to look is at the movement itself–who is encouraging and organizing the resistance? If the person pushing back against a type of medicine or vaccine has an economic motive to do so, or already has access to good healthcare and other social privileges, they likely aren't resisting in good faith.

This test works very well when applied to Andrew Wakefield's work. Wakefield was already a doctor with a license and the ability to get his work published in the Lancet, a respected medical journal. His work on the MMR vaccine was not only bad science (he worked with a small and biased sample of 12 children who already had autism), but was also done initially to make money. Shortly before publishing the paper in 1998, he had patented a "composition that may be used as a measles virus vaccine and for the treatment of inflammatory bowel disease and regressive behavioral disorder," which was effectively an alternative vaccine intended to somehow help treat inflammatory bowel disease and autism while also vaccinating against the measles virus. This cure was based upon the very connection that he was hoping to prove the existence of in his paper (Jackson). Looking at this, Wakefield and his supporters clearly don't pass the established criteria. Prior to publication, Wakefield was respected professionally, which he clearly intended to exploit for financial benefit in his attempt to contradict the FDA.

If, however, we examine the Pure Food Movement or the ACT UP protests, we can clearly see that their criticisms were legitimate. At the time of the Pure Food Movement, women could not vote, and thus did not have a real voice in Washington. Furthermore, their goal was to gain the protection of the government against food adulteration. They wanted to have the knowledge that if they bought food in the store, it would be safe to consume; thus, their goal was safety and not financial gain. When we examine the Seize the FDA protests and the AIDS crisis, we also discern a similar situation. The AIDS crisis, despite the thousands of deaths attributed to it, was being ignored by the Reagan administration because it was seen as a problem for primarily gay men and those who used drugs. When ACT UP stormed the FDA in 1988, all the protesters wanted was to change FDA policy to increase access to experimental drugs that could save the lives of those affected by AIDS. Yet again, we see a case of a marginalized group organizing for themselves and their safety, and through their resistance effectively changing the way drug approval and regulation occurs in America for good. Understanding who is organizing protests and why is a crucial step not only to understanding the roots and goals of a movement, but also to understanding why people would join or believe in their cause.

Even with this method of determining the validity of the organizers of a movement, we still don't have a way to work with those who are susceptible to bad-faith vaccine criticism because of their historical knowledge. It's easy enough for someone who has not been systematically affected by the mistakes of the FDA to see the difference between Wakefield's criticisms and past disasters with the FDA, but for many, it's not so easy to disregard these warnings. How, then, can we most effectively help those people who have been mistreated by the American healthcare system? As we've shown so far, this problem didn't appear overnight, and it will take time and a concerted effort to begin reversing the historical damage that has been done.

In one study conducted by RCORP International that examined the barriers between those in rural areas hit hard by the opioid crisis and vaccination, the main issues mentioned were accessibility and vaccine hesitancy. Many of the vaccination efforts in the US rely on mass vaccination clinics and chain pharmacies, both of which are far more common in areas with higher population density. This makes getting the vaccine harder for those who live in smaller towns and rural areas, as they have to go further out of their way to get vaccinated. Also, many vaccine appointments are mainly available to book online, which requires knowing how to use the internet to sign up, something that can be difficult for those who are older or who live in areas with poor cell service, not to mention those who lack access to the internet. As is generally true, rural areas end up being underserved and forgotten about when it comes to important medical supplies and procedures.

The other issue, unsurprisingly, is vaccine hesitancy, especially among those who have had previous experience with the healthcare system during the opioid crisis. According to the findings of RCORP's study, "Addressing SUD/OUD (substance use disorder and opioid use disorder) stigma within the rural healthcare community will be critical to improve immunization among the rural SUD/OUD population as some patients (especially those involved with the legal system) were reluctant to seek vaccinations from a system of care within which they have had negative encounters." People of color in these areas also had reservations based on historical trauma, as the study mentions: "Additionally, mistrust of the government, political affiliations, and colonialism and historical trauma were reported to further alienate a portion of rural residents and individuals with SUD/OUD from vaccination efforts." It adds that it "is vital to attend to the specific needs and concerns of subpopulations (e.g., individuals with SUD/OUD, people of color) who are consistently and frequently overrepresented among those experiencing poor health outcomes, health disparities and inequities due to stigma."

In rural areas that generally trust organized medicine less and that would be affected by online anti-vax propaganda more, an estimated 8 out of 10 patients will still go, however, to a local healthcare professional to ask about the safety of something like a vaccine. This sounds promising, but many rural hospitals are underfunded and under-supported; as a result, many healthcare professionals will not be able to or want to convince patients to get vaccinated. One major point addressed by RCORP was the role of healthcare providers in perpetuating stigma for those suffering from opioid addiction, which made those patients less likely to speak with healthcare workers and more likely to get information about the vaccine from social media or the internet. Educating rural professionals on opioid stigma and providing them with better resources to work with those patients would generally help out communities suffering from the effects of the opioid crisis; it would also allow those who have had or still have addiction problems to feel more comfortable talking to doctors about decisions like getting vaccinated. Some healthcare professionals even have their doubts about the ability of the FDA to keep them safe. Provider hesitancy is a big issue, since many front-line medical workers in rural areas choose to not get vaccinated, often for the historical reasons given above. If a person asks a nurse who isn't vaccinated about the vaccine, and that nurse refuses to recommend it, then the person will likely be confirmed in their resistance by the nurse's hesitancy. This shows us that it's even more crucial to work within the actual communities affected to get people vaccinated, since the issue is with the community as a whole, which includes healthcare professionals, and not just with those outside the healthcare system.

Current efforts to curb online misinformation are important, but personal interactions are more likely to persuade people to get vaccinated, as they occur outside of the polarizing space of the internet. The study by RCORP showed that in-person interactions and the presence of community health workers to answer questions and book appointments were the most likely ways to improve vaccination rates. Working with other community services and groups like the NAACP, soup kitchens, recovery homes and homeless shelters was also a good way to reach more people in a community, especially when it came to improving access to vaccines for people of color. If those in a community are most willing to listen to and engage in dialogue with those who live and work in and for their communities, then most attempts to convince rural communities of the vaccine's safety must occur in person, in those communities. There also must be efforts to provide rural hospitals and healthcare providers with more resources and help, as one of the root causes of the doubt people have in the healthcare system is the lack of funding and assistance that rural areas receive. In order to help those who have been historically mistreated and left unprotected by the American healthcare system, we must listen to their fears and work with them, allowing them to be at the head of making decisions about what their community needs to recover instead of preaching at them over the internet.

This is not a problem that occurred overnight, and it cannot be fixed quickly or easily. Generations of distrust not only take time to heal, but require fundamental changes to how the FDA creates new laws and regulations, as well as work on a smaller and more personal scale. For underserved communities to trust the healthcare system, they must first be protected from the lobbying of corporations, and they must also be provided with help specific to the healthcare providers in their area. There absolutely must be systemic change to prevent more disasters from occurring that disproportionately affect rural communities and people of color; moreover, hospitals in those areas must also be better staffed and funded so that communities are not adversely affected by the decisions of distant FDA officials unfamiliar with their situation. Further research and studies are also necessary in order to understand the specific nuances of the distrust that exists in different communities and how to tackle it. This will not be an easy or fast problem to solve, but it has become increasingly clear over the course of the pandemic how crucial it is that we don't allow the current system to continue operating unchanged.

Adams, Jerome [U.S. Surgeon General]. "Seriously people- STOP BUYING MASKS! They are not effective in preventing general public from catching #Coronavirus, but if healthcare providers can't get them to care for sick patients, it puts them and our communities at risk!" Twitter, 29 February 2020.
Alsan, Marcella, and Marianne Wanamaker. "Tuskegee and the Health of Black Men." PMC, 2019.
Blum, Deborah. "The Poison Squad: The American People Had No Idea What They Were Eating." PBS, 2020.
CDC. "Transcript for CDC Telebriefing." 12 February 2020.
Evening Star (Washington, D.C.), 20 July 1952. Chronicling America: Historic American Newspapers, Library of Congress.
Evening Star (Washington, D.C.), 29 July 1962. Chronicling America: Historic American Newspapers, Library of Congress.
Evening Star (Washington, D.C.), 6 Jan. 1963. Chronicling America: Historic American Newspapers, Library of Congress.
Hedegaard, H., A.M. Miniño, and M. Warner. "Drug overdose deaths in the United States, 1999–2018." NCHS Data Brief, no. 356. Hyattsville, MD: National Center for Health Statistics, 2020.
Hussain, Azhar, et al. "The Anti-vaccination Movement: A Regression in Modern Medicine." Cureus.
Jackson, Trevor. "TV programme raises fresh allegations about MMR doctor." PMC, 2004.
Miller, Lloyd, and Fred Escondido. "Oral History of the U.S. Food and Drug Organization." 1981.
RCORP. "From the Front Lines: COVID-19 Vaccination Efforts in Rural Communities Hit by the Opioid Epidemic." 2021.
Ross, Katherine. "Why weren't we wearing masks from the beginning? Dr. Fauci explains." TheStreet, 2020.
"Rural Opioid Epidemic." American Farm Bureau Federation, 2021.
Specter, Michael. "How ACT UP Changed America." The New Yorker, 2021.
The People's Voice (Helena, Mont.), 5 Aug. 1955. Chronicling America: Historic American Newspapers, Library of Congress.
The People's Voice (Helena, Mont.), 12 Aug. 1960. Chronicling America: Historic American Newspapers, Library of Congress.
Wax, P.M. "Elixirs, diluents, and the passage of the 1938 Federal Food, Drug and Cosmetic Act." Annals of Internal Medicine, 15 March 1995.



Suffering in Silence: Women and the Western Medical Institution
Sreenidhi Elayaperumal

Prior to the mid-nineteenth century, the spread of disease was explained in Western medicine by Miasma Theory–the idea that all ailments, from cholera to chlamydia, were a result of inhaling wandering clouds of poison. With the discovery of microorganisms using early microscopes, a minority of practitioners attempted to challenge this age-old explanation of contagion, only to be met with ridicule by the medical community. It was through persistent data collection that scientists like John Snow, Robert Koch, and Louis Pasteur were able to verify Germ Theory. Their revolutionary work marked a shift toward modern scientific medicine, characterized by controlled experimentation, the implementation of sanitation procedures, and a more comprehensive understanding of human anatomy and physiology. Today, given the evolution of medicine into an evidence-based practice, we expect our doctors to act in our best interest, objectively diagnosing and treating our ailments, regardless of who we are. At its core, however, medicine is about solving human problems, and humanity does not always align neatly with empirical practice. Although institutionalized medicine does not always reveal this, societal understandings of our humanness are reflected in and often reproduced by the medical knowledge of each era. Indeed, medicine is innately tied to the social histories of the bodies and lives of the people it encompasses–physicians, patients, and the relationships between the two. Since doctors were primarily white middle- or upper-class males during the emergence of modern medicine in the Western world, ideologies of race, class, and gender, specifically the subordination of women and people of color to white masculine authority, were intricately stitched into the fabric of knowledge on how to treat them. My interest in the disparities in the treatment of women in medicine began on social media. Several of the content creators I followed were journeying through pregnancy, and their discussions about reproductive health and accounts of their personal experiences with doctors revealed to me how their care was complicated by the healthcare system. These women frequently spoke about how they were not listened to by medical professionals, especially regarding their physical pain. After taking an interest in the subject, I found many blogs and forums where women shared similar stories
dealing with pregnancy and reproductive health. They talked about how their medical care was tainted by misogyny: many went through the experience of being denied a tubal ligation procedure, which permanently prevents pregnancy, because they lacked their husband’s consent or, even more shockingly, because their nonexistent partners could want children in the future. Being “young, childless, only [having] one child, not [being] married, and [being] married to someone with a risky job” are all reasons a woman may be restricted from having this procedure, in addition to certain insurance barriers (Cunha). To me, these experiences, especially those where women pursued such procedures for years to no avail, clearly indicated that many doctors do not trust women to make decisions about their own bodies. These accounts made me aware of the struggles faced by female patients in the present day, but I discovered where these issues originated long ago by reading Mary Poovey’s book, Uneven Developments. In her discussion of the medical treatment of Victorian women, Poovey describes how religious institutions’ authority over women’s bodies and their position in society was challenged by the rising medical establishment. During this era, “women’s social dependence on men was increasingly justified by reference not to woman’s fallen nature, but to this biological difference…”—the difference, that is, in reproductive anatomy between the sexes (25). In one of the most influential midwifery textbooks of the nineteenth century, the Manual of Obstetrics, William Tyler Smith detailed the concept that he and many other doctors called “reflex action” (36). It was supposed to parallel developments in chemistry and physics at the time, and stated that the human body consisted of a closed system of energy. Depletion in one part would lead to excitation in another, and vice versa. This idea was used to explain how, in the female body, the flow of energy resulted in an “economy that was perceived to be continuously internally unstable,” even though there was no similar instability for males. Instability lent itself to sensitive nerves and an easily imbalanced nervous system, which rationalized the “hysteria” diagnosis that was pushed upon many women in Victorian times (36). Dr. Isaac Ray, one of the founders of forensic psychiatry, summarizes this view of the nature of women with these words: “With
women, it is but a step from extreme nervous susceptibility to downright hysteria, and from that to overt insanity. In the sexual evolution, in pregnancy, in the parturient period, in lactation, strange thoughts, extraordinary feelings, unseasonable appetites, criminal impulses, may haunt a mind at other times innocent and pure” (37). Poovey explains how this portrays hysteria as a “norm of the female body,” yet this norm was defined as “inherently abnormal.” Such normative abnormality demonstrated the need for the medical authority of men over unstable female bodies (37-38). False ideas about the existence of hysteria date back to the ancient world, where Plato’s conception of the “Wandering Womb” said that the uterus traveled throughout the body, causing disease (“A History”). Like Miasma Theory, hysteria is now regarded as a ludicrous misconception. However, the archaic perspective that female organs are inherently pathological persisted in subsequent study of the female condition in the Victorian Era, where these fallacies were relayed as fact based on “scientific study.” Even today, sexist misconceptions linger in our medical knowledge, manifesting as misunderstandings about female biology and unfair treatment of female patients. The widespread mishandling of female patients for so long raises many questions: How did these falsehoods come to be accepted as fact in the medical community, and how have they evolved over time? What are other ways in which the female condition has been misrepresented in medicine? How has the treatment of female patients, as well as medical discourse and research, affected their lives and contributed to their oppression? Lastly, how can the long-lasting effects of gender bias in medicine be undone? In what follows, I hope to answer these questions by exploring several of the ideologies that took root at the origins of modern medicine, how they have found their way into present practices, and how they have contributed to the silencing and suffering of women. Additionally, I hope to explain how female representation in various aspects of medicine can work to reverse the gender biases that exist today.

Before exploring how specific medical practices and ideologies have impacted women, we must survey how the female body has been studied and represented throughout time. Beginning with the fourth century B.C., Aristotle described a model of conception that spoke to the supposed brutish and inferior nature of woman. He claimed that semen provided the “seed” from which life originated, while females contributed a lesser substance, the raw matter that housed the embryo (Witt 46). This idea, along with many other similar theories, subsequently informed the actual study of female anatomy, moralizing any differences that were found. In the second century A.D., Galen wrote that female anatomy was an inversion of male anatomy, upholding the idea that women are deviations from men. When dissections became more common, this idea was
proven false, but the perspective that the male body is the anatomical standard, and that the female body is erroneous or impure, persisted (“A History”). Poovey’s work illustrates this idea through an analysis of the anesthesia debate in the nineteenth century, in which ministers and doctors argued over whether or not chloroform should be administered to women during childbirth (25). At this time, religious narratives informed discussion of women’s bodies and reproductive systems, with the position of the church in Victorian England challenging the emergence of modern medicine (Poovey 25). Although it is unclear whether the incorporation of religious ideologies into medical explanations was an attempt by physicians to find common ground with the prevalent beliefs to garner trust for the medical field or simply a testament to how normalized the gender divisions created by religion were for all members of society, Christianity’s impacts on arguments on both sides were clear. For example, opponents of anesthesia usage during childbirth, such as the American Dr. Meigs, claimed that “there is no element of disease” in natural labor and that it should not be meddled with to mitigate pain (27). This was essentially a reframing of the clergy’s position that God’s jurisdiction over labor pain, which was Eve’s punishment, should not be interfered with. Such arguments were presented by medical men in a perspective that attempted to subdue “biblical imagery” by using the “physiological dimension of nature” (26). Proponents of chloroform, such as Edinburgh University’s professor of midwifery in 1847, Dr. James Simpson, employed similar tactics. He claimed that Meigs’s view that labor pain was a healthy and normal experience associated with childbirth was false, and that it was instead a force psychologically induced by the woman’s mind, due to her own stresses about the birth, that interfered with her capacity to deliver the child (28). By using chloroform, a medical man could sever a woman’s “feelings or sensations of pain” from the “severe muscular efforts and struggles” of her body. This allowed doctors to supposedly take control of a woman’s mute, physical body, while leaving consciousness and her pain to the jurisdiction of the church. This explanation attempts to transfer authority over women’s bodies to medicine while also appeasing the ideologies of religion. Counterarguments to Simpson’s assertions go even further into describing “the nature of woman.” Dr. Smith cited instances where a woman, under the influence of chloroform, exhibited behaviors that paralleled sexual excitation (30). Poovey summarizes the implications of his descriptions as “the fear that, under ether, women would regress to brute animals, a state in which they would be beyond the doctor’s control” (32). Smith claimed that because women are inherently sexual beings, anesthesia removes the “only check to which [female sexuality] would submit,” which would be harmful to their moral status (32) . At the same time that moral status was elevated to an idealized abstraction of domestic virtue which excluded
sexual passion. Women were constructed as fundamentally sexual and asexual, deeply embodied and disembodied. While Poovey uses the anesthesia debate to exemplify how the “professionalization of medicine” was inhibited by the inability of doctors to decide whether women’s deviation from men was based on biology or morality, the accounts presented in her work point to the general impact of how pervasive societal narratives about gender (in this case religious) affected medical education and treatment in a critical period of development for the field (25). Such lines of thought were prevalent in medical lectures and textbooks on obstetrics in the Victorian era, involving a similar degradation of women into mute objects whose suffering was because of their nature, where that nature was determined by God or their physiological differences to men. This intense focus on whether women are to be understood as inherently sexual or moral beings cannot be discovered by examination of the physical body: there is in this argument a notable lack of scientific methodology used to justify its conclusions. These theories were not only tainted by their reliance on religious doctrine, but by a lack of supporting evidence based on controlled experimentation. Most evidence for or against the use of anesthesia was based on anecdotal evidence. Proponents relied on word of mouth marketing to encourage the use of anesthesia as “the relationship between empirical practice and scientific theory” had not been established or accepted at the time (Poovey 47). However, even when women’s testimonials were cited, they were presented as “unsophisticated” compared to the “scientific knowledge” of the medical expert through quotes of women saying they wanted more of “the stuff” (44). Even when experimentation was performed, it was usually done on animals, not female, human subjects, and it often was done with an audience. Women could watch as a guinea pig died under the influence of chloroform, which served to make them question whether these drugs should be applied to them (47). Empiricism as a whole was in fact actively discouraged by the medical community. Prior to the formation of modern obstetrics, midwives governed the body of knowledge surrounding female reproductive health and birth, and relied heavily on practical experience (Poovey 40). Medical men who lacked this experience saw female practitioners as a “threat” to their “prestige,” so many arguments in the anesthesia debate incorporated the discrediting of midwives (40). While Simpson and Smith were on opposite sides of the debate, they both claimed that “principles” took precedence over “practice,” because scientific education and philosophical discussion were only accessible to medical men (40). Medical experts used the “superiority” of scientific knowledge over practical experience to bar female practitioners from entering the debate, with their understanding of scientific knowledge being founded on discussion on the study of morality and physiology rather than the experiences of women or the observed effects of
medical practices on women, which would be closer to our understanding of scientific theory today (43). By keeping the authority to write about female bodies away from women, they not only prevented midwives and female patients from having control over their own experiences, but also distanced themselves from more accurate knowledge of the female body, disdaining the repeated observations that would have been consistent with current understandings of the scientific method. This meant that the claims arising from the discussions of medical men, such as the idea that women are built around the uterus and that women have no value outside of their reproductive capacity, became the pervasive narratives found in textbooks and lectures that conveyed understanding of the female body (Rosenberg 6). Even when evidence-based practice became commonplace, the biases associated with these myths lingered. One of the main preconceptions informing the modern study of anatomy was that a woman’s value was derived from her reproductive capacity. This meant that aside from the reproductive organs, the rest of the female body remained largely understudied. This is illustrated by the fact that women experience adverse drug reactions “nearly twice as often as men,” but the reasons for these reactions are mostly unknown (Zucker). This lack of knowledge on sex differences in drug pharmacokinetics is due to the underrepresentation of female subjects in not only human research, but animal research as well (Beery). Most FDA-approved drugs are approved on the basis of clinical trials conducted only on male subjects (Zucker). Corresponding with the myth of the inherent instability of the female body explored in Poovey’s essay, the unpredictability of fluctuating female hormones was used as justification for exclusion of female subjects. Essentially, women were deemed too “biologically erratic” to produce meaningful scientific knowledge (Cleghorn 9). Although physiological functions of the female body can be affected by the phase of the menstrual cycle, and matching phases between participants in a clinical trial can be costly, these differences were transformed into a justification for the complete exclusion of females from clinical trials. In addition, an empirical analysis of many rodent studies showed that for most traits, inclusion of female subjects does not greatly increase variability (Beery). While this conclusion can only be generalized to rodents, the unfounded assumption that women are too variable to produce meaningful (and profitable) clinical results is harmful. What’s more shocking is that between 1977 and 1993, women of child-bearing age were outright banned from participating in early phase research in the U.S. except for life-threatening conditions because they were deemed a “vulnerable group” (Liu). The idea that the cost of research or the potential loss of childbearing capacity of women outweighs the harms associated with a lack of information on how critical drugs or treatments interact with the female body reveals the extent to which even seemingly strictly regulated clinical trials are
not objective. Both of these justifications were used in a time where therapeutic studies had moved beyond simple observation and trial and error. After a drug disaster in 1961, in the U.S., the FDA enacted the 1962 Drug Amendments, which required clinical trials to prove both a drug’s safety and efficacy, and guidelines have been continually improved ever since. Bob Temple, a senior advisor of the FDA, described the merits of regulated clinical trials in discovering new medical knowledge as such: “It is under the investigator’s control, subject not to data availability or chance but his ability to ask good questions and design means of answering them” (Junod). Yet, while the FDA mandated that safety and efficacy data must be analyzed appropriately by sex, and there has been more discussion about women’s health research since then, there are no specific requirements for inclusion of women or other minority groups in clinical trials (Liu). “His ability” is indeed appropriately gendered. If women are not included in clinical trials because of the difficulty or the cost, are clinical trials really free from the constraints of data availability? If women are excluded based on hormone fluctuations or potential harm to reproductive capacity, but there is the chance that a lack of study will adversely affect women, are investigators truly asking good questions? The consistent exclusion of women in clinical research calls into question whether or not medical research and study is objective, and shows that systematic bias is harmful to women because medical knowledge remains underdeveloped. The exclusion of women from the institutional and professional study of “women” extends not only to research but to medical education as well. As the female body, in a medical perspective, was long viewed only through the eyes of male practitioners, medical education has been crafted through this lens. The ways in which female bodies were seen and understood as deviations from the male body persist in education not only because males are still portrayed as the anatomical standard, but because female bodies are made visible by this standard. A 2013 study showed that in textbooks commonly used in medical schools, male patients are represented as the norm in case studies, with women mainly being represented only for reproductive diseases specifically and conditions typically associated with females, such as varicose veins (Morgan). When female presentations of disease and reactions to treatments are not represented in both medical education and research, conditions are frequently misdiagnosed in women. For example, heart attacks are more frequently misdiagnosed in women than men because women experience a greater variety of symptoms that present themselves in different combinations. This not only presents an issue in how males symptoms are the ones primarily taught for heart attacks, but that the diagnostic process as a whole fails to consider how cases may present themselves in an individual when that individual is not the anatomical standard (Brush). Even though there is a greater incidence of coronary heart disease
(CHD) and associated afflictions in men, CHD is still the greatest cause of death for both men and women in the United States (Maserejian). In such critical conditions, a lack of knowledge on female presentation of disease could be life threatening for all patients. Biases also present themselves in medical education through personification of bodily functions to reinforce stereotypes. For example, depictions of the egg and sperm usually describe the former as having a passive role, being transported through the fallopian tube, whereas the latter is described as actively swimming (Martin). In reality, the egg uses adhesive molecules to compensate for the limited motility of the sperm’s tail, but the stereotypical fantasy still persists in common conceptions of fertilization (Martin). Beyond misinformation on a large scale, the gender bias that permeates medicine is undoubtedly present in individual patient-physician interactions. Healthcare providers regularly misdiagnose and mistreat women not only as a result of a lack of knowledge of their bodies, but because of a gender bias which compounds the ignorance. A major example of this relates to differences in pain management between female and male patients. Even though women make up 80% of chronic pain patients, the pain women face is frequently “dismissed as psychological – a physical manifestation of stress, anxiety, or depression.” In situations where men receive prompt recognition and treatment of their pain, the stereotype that women are emotional and dramatic impedes their care and makes it less likely that they will even be referred to diagnostic investigations (Kiesel). When they do receive treatment for their pain, women are more likely to be prescribed sedatives instead of analgesic pain medication (Weisse). These experiences, shared by many women in the present day, show how their pain is treated similarly to the now-archaic “hysteria” diagnosis.

While existing medical practice has consistently misrepresented the female experience and caused its own series of suffering, the systematic development of the medicalized assertion that women do not have autonomy over their bodies has been and continues to be used indirectly as well as purposefully to oppress women through treatment, language, and policy. Returning to the era of the anesthesia debate, medical men not only sought to exclude female practitioners and patients from discussions about their own bodies, but to uphold the barriers between women and education, prohibiting them from the production of knowledge. They attempted to justify this by citing the divergence of women from men in the process of evolution: women, they opined, had developed superior moral qualities but had a fraction of the intellectual power that male brains ostensibly possessed (Rosenberg 8). Phrenologists used “facial angles, cranial capacities, and brain weights” to show the mental inferiority of women (Dempsey-Jones). Physicians presented this evidence from a science that has long been obsolete to
ask why a “good mother” should be sacrificed to make an “ordinary grammarian” (Rosenberg 8). After phrenology was debunked, differences in grey and white matter between brains were used to explain the “dimorphism” between the brains of the two genders, even though aside from size, there is no significant variation (Eliot). Dr. Edward Clarke, a prominent board member of Harvard Medical School in the late nineteenth century, claimed to use thermodynamics to explain why education would interfere with a woman’s child-bearing responsibilities. He and several other doctors made assertions similar to the idea that due to the conservation of energy, women exerting their force on higher education meant that their reproductive organs would undergo damage (Rosenberg 9). Ideas of female irritability were used to claim that females’ nerves were too high strung and sensitive to study, and many families kept their daughters from an education for fear it would make her unmarriageable (Rosenberg). Dr. Clarke’s recommendation was for women to study “one-third less than young men” if they did study and to not study “at all during menstruation” (Rosenberg 10). This prescription of rest not only applied to education, but was a solution for many supposed ailments of the female body, especially nervousness. Charlotte Perkins Gilman was a recipient of the rest cure, a treatment created by Dr. Weir Mitchell, a prominent neurologist of the nineteenth century. In her story, “The Yellow Wallpaper,” Gilman illustrates a woman’s physical and psychological decline as she endures the rest cure (Stiles). To treat her nervous condition, the narrator’s husband, who is also a doctor, forces her to stay secluded in their home. He disapproves of her when he finds her writing and tells her to stop. Against her wishes, he does not allow her to do any form of work, even though she believes the change would help her (Gilman 648). He makes her lie down after each meal and sleep more, which maintains her “subdued and quiet” disposition during the day (653). However, this makes her active at night, when she obsesses over the yellow wallpaper in her room. Her strange descriptions of the wallpaper are her means of understanding her trapped feelings, since she is deprived of any other form of physical or intellectual stimulation. Since she cannot interact with others or write, she is stripped of the authority she has over her condition except for what her husband tells her, and this leads to her breakdown (656). While her husband’s attempts to silence her are justified as a solution to her insanity, her insanity stems from being silenced and controlled in all aspects of her life. Denied the right to represent her own experience, she read the experience of that denial off from the wallpaper, discovering in its “pointless patterns” the madness of her situation. Gilman’s account of this oppressive treatment of women anticipates my later argument that knowledge of the female experience is necessary for addressing their medical conditions. Although the rest cure now seems cruel and unscientific, procedures that serve to convenience husbands instead of
actually aiding their wives still exist today. A significant example is the “husband’s stitch,” which is an extra stitch placed in a woman’s vagina after childbirth to supposedly make intercourse more pleasurable for the husband. Although there is no official documentation of this procedure, the proof of its existence is in the birth stories of countless women and is embedded in their bodies. Awareness of the husband stitch rose after the publication of Carmen Maria Machado’s short story of the same name, which prompted more women to be vocal about experiences where the decision to be “tightened” was placed in the hands of their husbands and doctors against their will. The stitch is not actually thought to have a noticeable effect on pleasure; rather, it causes long-lasting physical and psychological pain for the women who experience this betrayal from those who they trust (Murphy). The very concept of the “husband’s stitch” exemplifies how the objectification and sexualization of women’s bodies is still innately tied to obstetrics. The belief that women are incapable of making decisions about their own bodies is the basis for the present-day struggle over reproductive rights. After the introduction of the birth control pill in the U.S., many states banned contraceptives due to pushback from religious institutions. This mirrors how, in the Victorian era, theological ideologies were often interwoven in discussions on female anatomy. Between 1960 and 1980, a series of Supreme Court cases, including Griswold v. Connecticut, Eisenstadt v. Baird, and Roe v. Wade, ruled that bans on contraceptives and abortions were unconstitutional; yet many male legislators continue to impose policies that withhold access to these tools and procedures for women, especially poor women who cannot afford them (“A Brief History of Civil Rights”). As of December 2021, the Supreme Court allowed Texas’s abortion ban to remain in effect, backtracking on Roe v. Wade (Harper). States like Alabama attempted to institute near-total abortion bans, without exceptions for rape or incest (Chandler). These attempts by male legislators to undermine progress of the reproductive rights movement demonstrate the lengths to which institutions will go to control women’s bodies.

Decades of systematic gender imbalances within the medical institution pose the challenge of addressing these disparities. The beginnings of a solution lie in the exclusion of midwifery from obstetrics discussed in Poovey’s analysis of the anesthesia debate and in the perspective presented in Gilman’s story. Both scenarios demonstrate how women are cut off from the discourses that govern the treatment of their own bodies–female practitioners are looked down upon for only having practical experience and the narrator of “The Yellow Wallpaper” is not allowed to write or read–and are rendered as mute objects of study. But by telling the story of the rest cure at a time when it was still prevalent, Gilman speaks to a mostly nonexistent but nonetheless emerging group that had the potential to address the misogyny of medicine: female physicians. Most of the suffering female
patients faced was due to the fact that their experiences were not represented or understood, so in being treated by someone who can empathize with them and has similarly been excluded from medical knowledge, their issues are more likely to be addressed. This idea was supported by a 2017 study that compared mortality and readmission rates for patients treated by female and male physicians. The results showed that in the patients studied, those cared for by female internists had consistently “better outcomes for inpatient care” compared to those cared for by their male counterparts, and were less likely to have emergency department visits after treatment. These results were thought to be associated with the fact that female physicians are more likely to “practice evidence based medicine, perform as well or better on standardized examinations, and provide more patient-centered care.” Based on other studies, they are thought to be more deliberate in their approach than male doctors (Tsugawa). In addition, female primary care physicians (PCPs), on average, spend “more time in direct patient care per visit, per day, and per year” compared to male PCPs, despite being paid less (Ganguli). They are said to place a focus on the “psychological and communicative” side of medicine. In radiological services, some say the “feminisation” of the field is “good news for patients” because of this increased communication with patients and colleagues as well as reduced risk taking, while others claim that “efficiency and the ability to live with risk” are essential skills (Bleakley). This analysis of the situation implies that female practitioners are strictly associated with a more emotion-based approach to medicine, whereas men are more logical and efficient. Although certain characteristics may be traditionally considered more “masculine” or “feminine”–there is a discussion to be had on distinguishing society’s essentialism of men and women from their performance in these settings–the true value of female physicians comes from their willingness to consider what has been overlooked in their own experience in a system that has long revoked their authority over their own bodies. Authority over this discourse–authority not only to speak from a position of medical knowledge, but to alter the terms of such knowledge–allows women physicians to listen more carefully and to respond more effectively to patients who have been left unheard. Informed by their exclusion, they are more able to reframe the ideals of quality patient care. The general trends in patient outcomes associated with female doctors point to the inadequacy of traditionally male interpretations of medical care. With an influx of female physicians, the overall culture of medicine is beginning to shift. In the 2020 academic year, 52.4% of medical students were female, and the percentage of the physician workforce that is female grew from 5% to 35% in the last 50 years (Stewart). However, as the representation of female bodies in the medical textbooks discussed earlier shows, medical education continues to be taught from a male perspective. These issues can be addressed by
building on the insights of poststructuralist feminism, which resists generalization of women under patriarchal norms, to redesign curriculum (Staats). This reframing would include more empathy based consultation and a greater emphasis on connection to patients during training, as well as more cognizance on the interplay of social dynamics in patient care. Being aware of how gender, race, and class affect the positions of both the doctor and the patient are important to fully understanding and addressing individual situations (Bleakley). Greater representation of female cases and symptoms in medical literature would also serve to improve outcomes for female patients by making medical students more adjusted to how both female and male bodies present symptoms of disease or respond to treatments. In order for these changes to take place, greater representation of females in higher level leadership positions is necessary. Although women make up a majority of medical school students and will eventually make a majority of the physician workforce, women doctors are “less likely to take up academic research and teaching.” Female faculty members are typically the ones who incorporate gender issues into the medical curriculum, so without them, there is not only less representation for new generations of female medical students, but less change being made in what is taught to all students. Even though “women medical students tend to make more effective facilitators than their male counterparts,” the underrepresentation of women in research means it is more difficult to design studies that actually incorporate female subjects (Bleakley). With more support for females in these positions, patients may be able to experience the outcomes associated with more study of their bodies instead of clinical research only done on male subjects.

Through the perspectives presented in the work of Poovey, Gilman, and Rosenberg, as well as in many studies and reviews, the evolution and existence of the systematic gender inequalities within the Western medical institution can be better understood and more effectively addressed. These works have shown that the exclusion of the female experience in modern scientific medicine leads to the misdiagnosis and mistreatment of women in medical settings and exacerbates oppression of women in other aspects of society, but the presence of physicians and researchers who are women in these fields can help lead to a better understanding of the female condition. When women emerge as producers of medical knowledge about women after a history of being mute sufferers at the hands of medical misogyny, they are able to positively transform treatment for female patients. However, this solution brings into question the logistics of bringing women into positions of leadership within the medical institution and ensuring that women are equally represented in clinical research. For the former issue, implicit gender biases, limited allowance of maternity leave, and the pay gap mean women are often
overrepresented in family care practice and pediatrics while they struggle to become hospital CEOs, department chairs, senior authors at prestigious medical journals, and other positions that control resources and determine the values of a medical center or academic space (Mangurian). Bias training and better policies on childcare leave regardless of gender are steps in the right direction, but further action is necessary for systemic change. As for increasing the representation of women in clinical research, the financial and time-related costs associated with phase changes are not the only worry. Finding willing female study participants is difficult due to potential expenses associated with a treatment or mistrust. This is mostly a concern for people of color, who fear discrimination and exploitation due to the U.S.’s “history of unethical medical testing” (“Clinical Trials”). Just in the last century, studies such as the Tuskegee Experiment purposefully did not treat Black men with syphilis and lied to them about their condition. These barriers mean that most clinical research subjects remain as white males, since there are no concrete guidelines that would enforce inclusivity. Without them, researcher directors, who are predominantly male, use cost and availability to justify the lack of diversity within their studies. This need for diversity in medical trials brings up an important discussion of intersectionality implicit but not directly raised in this work. Those most affected by medical misogyny are women of color. For instance, the maternal mortality rate for Black women is three times as high as it is for white women. This is due to the structural racism and implicit bias that I have explored here, as well as in the continuing underrepresentation of people of color in clinical research and medical education, as mentioned earlier (“Working Together to Reduce Black Maternal Mortality”). The effect of these issues is only compounded with women of color, because of the limited study of female bodies and bodies of people of color. Kimberle Crenshaw’s Demarginalizing the Intersection of Race and Sex demonstrates that addressing and redressing the situation of those who are multiply burdened–here, by race and gender, but we can add class, among other determinants–is the way to create systemic change. Her important work prompted my interest in study that focuses on how to combat implicit biases and systemic inequalities for both women and people of color (140). In the future, I want to explore the impact of intersectionality on medical treatment for not only underrepresented minorities but for LGBTQ+ populations as well. Moreover, while my current work focuses on the treatment and study of those who are assigned female at birth and identify as female, I also hope to understand how gender bias presents itself for those who are not biologically female, or those who identify as women but have limited reproductive capacity (in an institution that defines women based on this capacity). I also am curious as to how
representations of sexualities and gender identities beyond cis heterosexuality in medical practice would affect patient care, considering the effect female doctors have on patient outcomes. Although some aspects of my work have made me more hopeful for the future of medicine, including the general trend of increasing representation of women in the physician workforce and the improved outcomes associated with female doctors, it also serves to remind me how fragile such progress can be. Just as the Supreme Court ruling of Roe v. Wade has not stopped many states in the present day from attempting to create strict regulations on women’s reproductive rights, there is no guarantee that the future will continue to move in a progressive direction. Continued study of the social dynamics of physician-patient relationships of the past and present is necessary to maintain this progress. By pushing to place those from marginalized groups in places of power within the medical institution, we ensure that knowledge of our bodies is created by the people who have experienced life through those bodies. However, this raises the question of how to bring these groups into medical positions when there is a certain level of privilege associated with obtaining a medical education to begin with: those who attend medical school have access to resources that allow them to rise above rigorous admissions processes, and are financially supported in some capacity throughout their education. Greater structural inequalities must be addressed before we can truly heal the system that should be healing us.


Beery, Annaliese K. “Inclusion of Females Does Not Increase Variability in Rodent Research Studies.” Current Opinion in Behavioral Sciences, Elsevier, 2 Aug. 2018.
Bleakley, Alan. “Gender Matters in Medical Education.” Patient-Centred Medicine in Transition, Springer, Cham, 2014, pp. 111–126.
“A Brief History of Civil Rights in the United States: Women’s Reproductive Rights.” HUSL Library.
Britannica, The Editors of Encyclopaedia. “Germ Theory.” Encyclopedia Britannica, 9 May 2021. Accessed 11 April 2022.
Brush, John E., et al. “Sex Differences in Symptom Phenotypes among Patients with Acute Myocardial Infarction.” Circulation: Cardiovascular Quality and Outcomes, 17 Feb. 2020.
Chandler, Kim. “Alabama Senate Passes Ban on Abortion, with Few Exceptions.” AP News, Associated Press, 15 May 2019.
Cleghorn, Elinor. Unwell Women: Misdiagnosis and Myth in a Man-Made World. Dutton, 2021.
“Clinical Trials Have Far Too Little Racial and Ethnic Diversity.” Scientific American, 1 Sept. 2018.
Crenshaw, Kimberlé. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” U. Chi. Legal F., 1989, p. 139.
Cunha, Darlena. “Tubal Ligation Requirements.” VICE, 7 May 2019.
Dempsey-Jones, Harriet. “Neuroscientists Put the Dubious Theory of ‘Phrenology’ through Rigorous Testing for the First Time.” The Conversation, 6 Oct. 2021.
Eliot, Lise, et al. “Dump the ‘Dimorphism’: Comprehensive Synthesis of Human Brain Studies Reveals Few Male-Female Differences beyond Size.” Neuroscience & Biobehavioral Reviews, vol. 125, 2021, pp. 667–697.
Ganguli, Ishani, et al. “Physician Work Hours and the Gender Pay Gap - Evidence from Primary Care.” New England Journal of Medicine, 31 Dec. 2020.
Gilman, Charlotte Perkins. The Yellow Wallpaper. Virago Press, 1981.
Harper, Karen Brooks. “‘This Is a Dark Day’: For Texas Abortion Providers, U.S. Supreme Court Ruling Feels Apocalyptic.” The Texas Tribune, 10 Dec. 2021.
“A History of the Male and Female Genitalia.” The History of the Female Reproductive System.
Kiesel, Laura. “Women and Pain: Disparities in Experience and Treatment.” Harvard Health, 9 Oct. 2017.
Liu, Katherine A., and Natalie A. Dipietro Mager. “Women’s Involvement in Clinical Trials: Historical Perspective and Future Implications.” Pharmacy Practice, Centro De Investigaciones y Publicaciones Farmacéuticas, 2016.
Mangurian, Christina, et al. “What’s Holding Women in Medicine Back from Leadership.” Harvard Business Review, 7 Nov. 2018.
Martin, Emily. “The Egg and the Sperm: How Science Has Constructed a Romance Based on Stereotypical Male-Female Roles.” Signs, vol. 16, no. 3, University of Chicago Press, 1991, pp. 485–501.
Maserejian, Nancy N., et al. “Disparities in Physicians’ Interpretations of Heart Disease Symptoms by Patient Gender: Results of a Video Vignette Factorial Experiment.” Journal of Women’s Health, vol. 18, no. 10, 2009.
Morgan, Susan, et al. “Sexism and Anatomy, as Discerned in Textbooks and as Perceived by Medical Students at Cardiff University and University of Paris Descartes.” Wiley Online Library, John Wiley & Sons, Ltd, 19 June 2013.
Murphy, Carrie. “The Husband Stitch Isn’t Just a Horrifying Childbirth Myth.” Healthline, Healthline Media, 28 Sept. 2018.
Poovey, Mary. “Scenes of an Indelicate Character: The Medical Treatment of Victorian Women.” Uneven Developments: The Ideological Work of Gender in Mid-Victorian England, Univ. of Chicago Press, Chicago, Illinois, 1998, pp. 24–50.
Rosenberg, Rosalind. “In the Shadow of Dr. Clarke.” Beyond Separate Spheres: Intellectual Roots of Modern Feminism, Yale Univ. Press, New Haven, Connecticut, 1993, pp. 1–27.
Staats, Marian. “Poststructuralist and Queer Feminist Theory and Practice.” Poststructural Feminism, 2012.
Stewart, Ada. “Women Close Med School Enrollment Gap, but Others Remain.” 28 Feb. 2020.
Stiles, Anna. “Go Rest, Young Man.” Monitor on Psychology, American Psychological Association, 2012.
Tsugawa, Yusuke. “Outcomes of Hospitalized Medicare Beneficiaries Treated by Male vs Female Physicians.” JAMA Internal Medicine, JAMA Network, 1 Feb. 2017.
Underwood, E. Ashworth, et al. “History of Medicine.” Encyclopedia Britannica, 27 Aug. 2020. Accessed 13 December 2021.
Weisse, C. S., et al. “Do Gender and Race Affect Decisions about Pain Management?” Journal of General Internal Medicine, Blackwell Science Inc., Apr. 2001.
Witt, Charlotte. “Form, Reproduction, and Inherited Characteristics in Aristotle’s ‘Generation of Animals.’” Phronesis, vol. 30, no. 1, Brill, 1985, pp. 46–57.
“Working Together to Reduce Black Maternal Mortality.” Centers for Disease Control and Prevention, 9 Apr. 2021.
Zucker, Irving, and Brian J. Prendergast. “Sex Differences in Pharmacokinetics Predict Adverse Drug Reactions in Women.” Biology of Sex Differences, BioMed Central, 5 June 2020.



Modern Minstrelsy: Transfiguration at the Hands of Whiteness
Zenith Jarrett

“One of the most promising of the young Negro poets said to me once, “I want to be a poet—not a Negro poet,” meaning, I believe, “I want to write like a white poet”; meaning subconsciously, “I would like to be a white poet”; meaning behind that, “I would like to be white.” And I was sorry the young man said that, for no great poet has ever been afraid of being himself. And I doubted then that, with his desire to run away spiritually from his race, this boy would ever be a great poet. But this is the mountain standing in the way of any true Negro art in America-this urge within the race toward whiteness, the desire to pour racial individuality into the mold of American standardization, and to be as little Negro and as much American as possible.” - Langston Hughes, The Negro Artist and the Racial Mountain

Black art is beautiful, and black self-expression routinely sits at the forefront of pop culture. To what end, though, do black artists have control over their creations? What obligations are black artists given by their presence in the public sphere? At what threshold does the consumption of black culture and art transition from free expression towards minstrelization? I ask all these questions, not because of a theoretical desire to understand the black artist, but because the very ability of black artists to be is predicated upon their answers. Black artists do, after all, fight a very particular battle unlike that of any other group. The black artist must discover a way to be both black and produce art in America — a place that violently punishes all which defies its conventions. The question at the crux of this paper is: how can the black artist survive? It is impossible to write about black art, though, without first understanding black art as a category, and it is impossible to understand black art as a category without first understanding black as a means of grouping people. Instinctually, I do not want to do this. Categories have, historically, been nothing more than a means of oppression. Categorization — the aggressive simplification of multifaceted beings into labels — exists to divide people into an in-group and an out-group. Straight and Gay, Cisgender and Genderqueer, male and female, and of course, white and black exist as binary oppositions to justify the mistreatment of the out-group or rather the love of the
in-group. Race exists as a grouping precisely because racial divides were the only way to ensure that “white” victims of global imperialism and colonialism could not ever ally themselves with “black” victims of “whiteness.” Of course, the bigoted roots of legal and social definition still pervade our society’s culture, but there is no easy way to erase their existence. No amount of denial or rejection can change the reality that race has become a real social category. Emily Bernard said of her race (more specifically, her blackness) that “its significance as an experience emerges sometimes randomly and unpredictably in flux, and yet also a constant condition that I carry in and on my body” (Bernard xiv). While we can imagine race as a social construct, all social constructs tangibly affect our social organization. Race, despite being an artificial invention, has real, material impacts. Therefore, while blackness and whiteness alike are reductive as categories, they still shape the lived experiences of all black folks. Police brutality exists; microaggressions abound; hiring biases exist. Blackness is artificial so far as any intrinsic physiological or psychological differences go, but it is very real in the ways in which it shapes our lives. With that being said, blackness is not something rigid. It is not something immutable. “Blackness is an art, not a science.” It is a social condition, a political affliction, an intangible bond, an inexplicable connection. Blackness is the ability to say nigga, and I do not believe any further explanation is necessary. In the American tradition, one drop of black blood (be that a grandparent, a great-grandparent or otherwise) has been enough to make someone black, with “blood” referring to ancestry, not biology. I’ve seen black people with a tendency to deny Kamala Harris’s Asian ancestry on the grounds that she is, in fact, half black. Their claim to Kamala as a purely black symbol (an icon for black girls and black girls alone) is an attempt to reclaim the tools of our oppression. However, the one-drop rule gives us no functional advantage. It enforces a rigid binary between those who are purely white and everyone else, a binary that served, for instance, to allow sexual violence against enslaved women. Furthermore, that binary crumples rather rapidly because there are many “white” people with at least one drop of black blood, and they are not black. Logic (the rapper) is the example that comes to mind first. Logic is very clearly white passing and white presenting. He is one of the most fair-skinned biracial people in the public eye, but he is still
biracial. By the one-drop rule, he is black, but a handful of people still view him as white. However, if Logic’s blackness is not logical, it is unquestionable. No amount of fair skin changes his black lived experiences. He has a black father, a black family, and a black upbringing. Society, for the most part, engages with him as a black person (in so far as we all acknowledge his blackness), so he is black. Being “less black” (numerically) doesn’t make your skin any lighter. I’m at least 20% white (technically), but no one would ever look at me and make me verify my blackness. I am black, and that is what is real. The reality is that there is no genetic basis for blackness. There is no precise hair texture or skin tone that all black people have or must have. There is no darkness threshold you must pass (or many non-black people would pass it). Blackness is an art, not a science. You cannot craft a rigid formulation of blackness because blackness has no scientific basis; it has a sociological and historical existence. The historical roots of blackness can be traced back to Greece. The Homeric and Orphic creation myths assert that “darkness (night) retreats from the light, so that the world and life could begin.” Over the course of the next several millennia, blackness continued to appear as antithetical to good. Black bile, black death, and the black poplars of Tartarus are contrasted with the white robes of angels and the white garments of kings. Black is a label for evil. White is a label for good. This simple social dichotomy unfurled until it was broad enough to be applied to humans: “Black skin was regarded as “damned” and as one of the reasons of enslavement since the launch of the slave trade in 1441” (Hrabovský). All that is to say, blackness has no scientific backing and collapses when you attempt to apply rigid rules to it. However, blackness is a social condition. Some people are black, and some people are not, and that is immutable, usually. As horrible as it is, blackness is antithetical to self-determination because blackness is shaped by your relation to a greater society which is historically racist. Blackness is a difficult category to categorize because it is not something that should be gatekept, but a simple, encompassing definition is: blackness is a social condition and a political affliction that makes the bearer an outsider in a white society. Whiteness can roughly be identified as benefiting from white supremacist social structures, and blackness, conversely, is being harmed by those very same racist (specifically, anti-black) ones. All black folks in the African diaspora are tied together by the fact that we are black, and we are black because the world has made us so. That is to say, blackness is socially prescribed. When white people look at my hair and call it undesirable, that’s how I know I’m black. When conservative pundits tell me that my culture is inherently violent, that’s how I know I’m black. When a racist man looks at me and calls me a nigger, that’s how I know I’m black, and that’s how I know I can say nigga. Therefore, one idea will underwrite the rest of this paper: to be black is to struggle, and black art is the manifestation of that struggle in its many facets. Black art (i.e. music,
paintings, sculpture, dance, literature, etc.) is any art made by black people, and black art, as with all art, is the physical manifestation of black culture. Hiphop, Jazz, and the Blues are not just genres, but they are the result of centuries of an inexplicable cultural development. The “duende” sits at the core of art that is “black”. There’s a soul that evades intelligibility that rests beneath all art that is black. That’s why Nathaniel Mackey says Lorca sits within the canon of the black artistic tradition. Black art is a long movement made up of the inexplicable, the “fugitive” (as Fred Moten calls it), the suffering, and the history of blackness. Mackey might even go so far as to argue that the black artistic tradition has an international element to it that extends beyond American conceptions of the black and the white — because a binary opposition cannot account for a universal type of artistic expression. After all, where would Langston Hughes and Bob Kaufman be without Lorca? Black art is the expression of blackness in ways that cannot be expressed through language that exists precisely because blackness is not something that can be easily articulated. “Frank Wilderson describes the black as the ‘antithesis of the human subject’ and ‘a paradigmatic impossibility in the Western Hemisphere.’ For Saidiya Hartman, black is also the mark of objectness, property, fungibility—the mark of the slave as the structurally ‘unthought’ foundation of the national order. Black is a mark developed in New World slavery as an ontological exception, expressed via the relegation of the black to outside (or underneath) the realm of ontology, indeed what Fred Moten calls ontology’s ‘anoriginal displacement.’ - Alessandra Raengo, Black Matters Blackness is not something that should be able to exist at the forefront of culture because blackness is a mark that moves you away from the mainstream. It moves you away from the norm. It disqualifies you from existing in society as it is supposed to be. It must be necessarily understood, then, that black art has an incomprehensible blackness that underwrites its expression — blackness that rejects and is rejected by the white society it exists in. This produces the tension W.E.B. DuBois talks about in the first chapter of . There’s an undeniable “double consciousness” within black people. On one hand, our blackness makes us opposed to society. In our blackness, we are invited to live in ways that reject “western” norms, but no matter how much we try, we cannot simply reject whiteness. We can’t escape it so easily. As Du Bois writes, It is a peculiar sensation, this double-consciousness, this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity. One ever



feels his twoness,—an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body. (14) No matter how black we are, we will never be able to perceive ourselves as wholly natural because we are trained from childhood to perceive ourselves as other to a “white world.” Black people have a special sort of second sight. Even while we live every day as black, the world insists that we look at it as a white person would. We’re taught the white man’s history, we wear the white man’s clothes, we watch the white man’s news, and in that, we are encouraged to view the world as white. However, we are still black. Our double consciousness, then, is the splitting sensation of viewing two truths. The significance of our blackness “emerges sometimes randomly and unpredictably in flux.” It shapes how we perceive the world at times, but we are more than capable of seeing from the same point of view as everyone else. We can see things as both Black and American. However, the same can’t be said for everyone. White people have never been forced to experience blackness. No one is ever taught to experience that which cannot be explained. The full expanse of black experiences shaping black identity is never something enforced in a classroom. No amount of hypothesizing could ever teach a white person about the precise feeling of an abrasive whiteness closing in on you from all sides. That is to say, white America has a sort of single-sightedness that produces different perceptions of blackness than black people have. While black folks have a sort of dialectical perception of ourselves, the white man is stuck at “amused contempt and pity.” At the same time, however, black culture rests at the forefront of pop culture. How can this be, though, if blackness is that which is othered by society? If society is established such that black people will struggle directly against it, then why does that same society love black performance? This contradiction or dissonance might only be explained as minstrelsy. Some aspects of the black experience that should be ineffable and inexplicable to white people instead make the expression of that experience impossible to ignore, but white viewers will inevitably interpret black expression differently than we do. So, when performing for a white audience, what of the black artist is lost? His dignity? Is he being made into a minstrel show? His intent? In what terms do white viewers understand his message? His authority? Is the very work he produces being colonized and transfigured as someone rebrands it for white people? Entering the mainstream poses a serious threat to black artists, then — existing as an object of white consumption. I believe that the mission of any black artist is to discover how to exist as both black and an artist. This mission has manifested repeatedly throughout history. The Harlem Renaissance, the Black Arts Movement, and various other social crusades attempted, sometimes rather explicitly, to produce a new black artist. The purpose of this paper is to produce a newer black artist, a roadmap for the black artist

of the modern day. The black artistic tradition is attempting to establish an artistic identity while black — attempting to affirm and establish black arts as a concept. The conditions artists once operated under are different now. With the rise of the internet, there are fewer and fewer spaces in the world without any white presence. It follows that today’s black artist has two choices: find ways to exist in a white space or cease making art. Art necessitates a certain degree of freedom and leisure to create. Mastering the technical aspects of art takes time and the comfort to focus on your craft. Under capitalism, leisure is a luxury. Living paycheck-to-paycheck produces a sort of anxiety inhibiting artistic expression. However, art is also a way to make money. By selling your art, you can bring yourself profits, and in doing so, art stops being an exercise in self-expression and becomes a job, a way to make money and survive. The beauty of this is that some artists might be rightfully compensated for their work, but the damage done is immense. The very notion of “rightful” compensation implies that there is, in fact, some way to objectively quantify the value of art. As Antoon Van den Braembussche points out in “The Value of Art,” there is an intensely complicated relationship between art and economics. Art is assigned a quantifiable value based on how much people desire the piece. Whether it be based on views, streams, or records sold, Capitalism gives us a host of ways of understanding “profits.” Everywhere we look, we see new metrics of quality being thrust upon us. And so, there is an easy way to point at art and say “this piece is the best because it has been given the most value!” The Beatles must be the greatest musicians of all time because their music has the most record sales; “Salvator Mundi” must be the greatest painting to ever be made because it sold for more than any other. However, this becomes massively problematic when we look at artists that lie beneath the mainstream. Independent artists, underground artists, and experimental artists are more than capable of producing work with artistic value, but because some aspect of their work doesn’t appeal to society at large, they are quantifiably worse than their competition. In reality, though, all this does is reinforce strict societal norms. Art that can be easily consumed by the masses is art that will be popular. That is to say, art that can be easily consumed by white people is art that will sell better. And, if blackness can be understood as something that eludes the grasp of white folks, then some degree of blackness must be compromised to make profits. When a black person makes art for profits, the primary audience must necessarily be white people because white people are the majority, the standard, and the perspective that our collective paradigm is born from. When record companies sign new artists, they look for artists who will move the most records. The black artist is, essentially, made into a commodity. The only way for your art to make enough money to support your artistic lifestyle is to have a degree of mass appeal that will



get you signed. Record labels have the funds to distribute and advertise your music. Corporate interests and teams of people keep celebrities in the public eye. Companies can then buy and resell your entire artistic persona — your very ability to continue living for your art becomes reliant on a company’s assistance. Companies then turn around and profit off of their investments, naturally, in order to fund the lifestyles of their CEOs. When a black artist performs a song, there’s a white person somewhere who owns it, and when a white person listens to a song, they reinterpret and alter the nature of the music. Capitalism places black artists in a dangerous predicament, then, where, in order to survive and make profits, we must sacrifice a degree of artistic liberation for money — we must sacrifice our blackness to maintain the conditions under which we can make art. The best art is the art that sells the most. Money is the determinant of artistic value. Money is what you need to survive, and black artists are continually forced to perform for whiteness. It so happens, then, that black artists in the public eye are what Karl Marx would call “propertyless workers”. In his Economic and Philosophic Manuscripts of 1844, Marx explained: Labor produces not only commodities; it produces itself and the worker as a commodity… the worker is related to the product of labor as to an alien object. For on this premise it is clear that the more the worker spends himself, the more powerful becomes the alien world of objects which he creates over and against himself, the poorer he himself – his inner world – becomes, the less belongs to him as his own. It is the same in religion. The more man puts into God, the less he retains in himself. The worker puts his life into the object; but now his life no longer belongs to him but to the object. While Marx was speaking from a massively different political and economic landscape, a note of what he says rings true and relevant to the argument. As black artists struggle to produce their art, and as white people consume, possess, and profit from their work, their work is itself lost to them and claimed by another party. Marx goes on to remark that labor itself becomes something that exists entirely outside of the worker. All of that individual’s time, effort, and energy is put into something that the worker can never see, for a handful of money from their boss, and the very energy of their life becomes someone else’s. In that, a degree of humanity and the self are lost every day under Capitalist production. In the case of art – black art, specifically – the artist is alienated from their work and their blackness is forcibly removed from it. While art and artist are commodified, the deeply spiritual, personal, and intimate act of creating art is turned into a competition to produce objects with the most value. That’s why jazz and hip-hop are released in albums and singles rather than as a constant flow of artistic experiences, each one informing the next, and each new one innovating on the past. Jazz, in particular, is a musical form that ought to resist

objectification in every fashion. The genre is built upon, amongst many things, the culture of improvisation around it. Jazz, the Blues, and Hip-hop weren’t always necessarily written genres. The blues needed no sheet music because its 12- or 16-bar structure is to be followed and built upon. Jazz might have sheet music, but a good jazz musician will imbue a new soul into his horn with every key. Hip-hop might have lyrics and repeating beats, but hip-hop’s onset was grounded in innovation. All these genres came up from the streets. People performing in cafes and on street corners are responsible for what we know as black music today, and many of them were forgotten or left unrecorded entirely because the process of improvisation is a spiritual and personal one. When two artists play the same jazz piece, they will inevitably play it in different ways, almost artificially creating souls for each. With no explicit intent being required, each artist puts their personhood into their improvisation, and an entirely new piece is created. Even listening to different recordings of the same artist playing the same piece will result in different sounds. “And the social consciousness displayed in that music. Pharoah Sanders will say MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM. Which is more radical than sit-ins. We get to Feel-Ins, Know-Ins, Be-Ins.” - Amiri Baraka, Black Music Jazz is a special type of genre, and improvisation presents a new way of understanding music. In the infinite possibilities of improvisation, there is room for human expression to reach new horizons. And, in that expression, there is an implicit resistance against commodification and objectification. Therefore, there is something implicitly radical about black music and improvised sound. Jazz derives its power from the continued wave of innovation, happening at all times. While, of course, jazz records exist, they are never treated as the official or exclusive recording. As with the Blues, improv is expected. Jazz is an experience, as all art could be. It’s about continued expression and the celebration of the moment. Jazz is improvised, so in authentically black settings, no two performances should be the same. However, the insistence of Capitalism is that once a piece of art is ready to be released, the piece is complete — a conclusion is found once the world is ready to buy it. The artist loses control of their piece when their art is bought by those around them. Therefore, perfection and technical skill are prioritized above the duende because perfection and technical skill are more desirable and more easily marketable. After all, the duende is not something that can be easily quantified. As Mackey says, it “draws near places where forms fuse together into a yearning superior to their



visible expression.” The duende is to be found in the breaks of a singer’s voice, the stutter of a trumpeter’s fingers, the rubato of a pianist during their solo. The duende can be found in the quivering of live performance, and therefore, when art is reduced to an object, the marvel of its implicit blackness is lost. The duende is altered as those with the power to produce art en masse choose to do so. The very soul of the black artist is robbed in the process of capitalistic alienation from art. When people listen to black music, then, what they are listening to is a type of “black musical commodity”. They’re hearing a simplified, more rudimentary version of black music made to be mass-produced and redistributed. The fugitive nature of the art is forcibly removed as it is made to appeal to society at large. However, “black art is any art made for black people.” Appealing to a mostly white audience doesn’t make an artist’s work any less black, but it does necessarily complicate the nature of that art. The artist’s work and the blackness that underwrites their expression is lost, but the shell of it all remains. Black art in pop culture might be understood as the remains of a missing blackness — the black art white people wish to and are capable of perceiving. That is to say, as white audiences listen to black music, they are being entertained by a vision of blackness as it really isn’t. It’s blackness as the white gaze wants it to be. The White Gaze is bell hooks’s formulation of the relationship between cinema, pop culture, and colonialism. The mainstream is established with the normative whiteness of society: it is made visible by a gaze inseparable from consumption and appropriation. The white gaze has a way of transforming even black bodies into something white people can tolerate: Despite the increasing presence of black celebrities, the white aesthetic still strongly defines beauty and worth in today’s racist culture. Many of the contemporary black celebrities, such as Halle Berry, Mariah Carey, Beyonce, Vanessa Williams, are whitewashed to appeal to white audiences, thereby denying the black body. Famous black women are often anglicized on covers of magazines: their hair and skin lightened and curls straightened. (Wallowitz) Therefore, even in our success, our blackness is robbed from us. That’s why Zendaya and Rihanna are every white man’s favorite black women. They’re beautiful, of course, but their beauty conforms to colonial standards of consumption more than anything. Blackness is marginalized, and black commodities that meet the impossible standards of whiteness are further fetishized as something exotic or different. While these black artists are fully capable of thriving, the white gaze transforms their work from something radical into something sanctioned by colonial desire. The implicit radicalism of black art is lost by the white gaze granting it visibility. The nature of the gaze, then, is something that must be deeply interrogated. When white people engage

with black art, if not for the duende, then what is it for? What is the nature of the entertainment derived from listening to a xerox of blackness? I would go so far as to call the relationship between the white gaze and black culture inherently minstrelized, a matter of seeing, not hearing. In engaging with black art as a source of entertainment rather than a living cultural mode or a piece of art with punctum, white audiences enact further colonial violence onto black people. Our blackness is reduced, then consumed for a sort of gruesome pleasure. Minstrelsy is, after all, the process of viewing caricatures of blackness for entertainment. While the precise terms of minstrelsy have changed over the past few decades and centuries, America’s tradition of minstrelization is far from dead. This tradition can be traced back centuries and includes the likes of George Washington Johnson’s “The Laughing Song” and Louis Armstrong’s “You Rascal, You” as striking examples of minstrel performance. In both of these pieces, black artists perform what white people want to see in order to secure financial positions for themselves. However, in Johnson’s case, this proved to be ultimately fruitless. Johnson died alone and broke despite “The Laughing Song” being one of the top-selling songs of its time. In the song, Johnson simulates a street performance. I say simulate because, while Johnson was originally a street performer, what we have here is an inauthentic reproduction. In recording and reselling his performance, the implicit radicalism of his song is lost. The art stopped belonging to him as soon as it was engraved into a record, and his art became a commodity. While “The Laughing Song” is, originally, a song about putting on a mask in the face of racial violence, it became a piece of pop culture and white entertainment. Therefore, his street art stopped being the art of a black man performing for whoever would listen and became a minstrel scene. He was no longer making art for himself, but he was expressing his blackness for white people. By looking at “The Laughing Song,” we can see, precisely, what minstrelization does. It radically transforms black art into something sickening and disingenuous. Johnson sang: As I was comin’ round the corner, I heard some people say Here comes the dandy darkie here he comes this way His heel is like a snow plow, his mouth is like a trap and when he opens it gently you will see a fearful gap and then I laughed . . . There are several potential readings for this song, admittedly. Tim Brooks observes that a big part of the appeal of Johnson’s songs is that they, essentially, give in to the mockery of blacks. By repeating anti-black language and derogatory terms, Johnson reinforces preconceived notions of blackness in the eyes of his white audience. However, I would argue that hidden within this song is a striking resistance to discrimination. In the sounds of his laughter,




Johnson puts up a sort of shield against racism. When he is called slurs, and when he is discriminated against, he laughs and keeps singing. In that, there’s a note of radical potential buried within this piece; it’s a commentary on the necessary masking of trauma black people must do to survive. However, that laughter is still the most captivating part of the piece to audiences. That laughter distinguishes it from all other music: in his laughter, white people found joy. Yet, its repetitive and constrained nature comments on their joy. Minstrelization should be understood as a violent and harmful process, then, where black trauma becomes exported and exploited for the profits of white corporations. It isn’t just shucking and jiving, but the very consumption of the black experience for pleasure–black artists made to laugh their way through the derision of American culture. It’s the fetishization of black trauma. However, the control enforced on black artists by the white gaze runs so much deeper than that. Even Frederick Douglass couldn’t escape the grip of white publishers. When he wrote his first personal narrative, it was distributed by William Lloyd Garrison. Garrison was a respected abolitionist, but when publishing Douglass’s work, he insisted he was “confident that it is essentially true in all its statements; that nothing has been set down in malice” (Douglass X). In my essay, “Making Black Art for a White Audience,” I observed that: To a point, I have to agree with him, of course. Everything Douglass said was true. Everything he said was correct and valid, but the insistence that none of it was written in malice is a constriction. Frederick Douglass is talking about violent, gruesome oppression. When he writes, he speaks of his disgusting traumas and his rightful hatred for those that inflicted it, so what Garrison does is control the context under which white readers will engage with the text. He effectively shifts the meaning for the sake of accommodating his audience. (Jarrett 6) What I am trying to get at here is that framing controls the message and authenticity is lost by that frame. When black art is understood on white terms, the magic is lost. Therefore, even the most well-intentioned of white people can fall into the patterns of minstrelization and reduction. Garrison dedicated his life to the abolition of slavery, but he couldn’t escape from implicit biases born from his whiteness. It’s not like Douglass’s book is pleasurable reading, though. No one is reading slave narratives for fun, and the events of the book are too gruesome to be entertaining; however, that’s not to say a different type of pleasure isn’t derived from the book. In reading sentimental literature, white audiences are invited to pat themselves on the back. For example, while reading Uncle Tom’s Cabin, we are all invited to exclaim and retch in the face of the violence of slavery. Thus, white readers are validated in their physical reaction to the violence of the world. In our physical reactions to racism – the gut-wrenching horror, the nausea, the racing of our hearts – our anti-racism is effectively affirmed. “Look

at how anti-racist I am!” white readers are invited to remark. And in that way, minstrelsy can take many different forms. Without any malice or hostility, the voyeuristic insistence on viewing black suffering further reduces the radicalism of black literature into something else. Engaging with black suffering begets a sort of satisfaction. Through his framing, William Lloyd Garrison fundamentally alters Douglass’s work, and in turn, white audiences are invited to view it as a spectacle. It becomes an affirmation rather than a complicated, layered, and radical critique. Minstrelsy can take many forms that diverge from its historical roots in minstrel shows, and minstrel performances are not even always explicitly so. Therefore, the presence of modern minstrelsy – of white consumption of pseudo-blackness all around us – can take ever newer forms that account for changing mediums and shifting cultural landscapes. The example that comes to mind first is King Bach’s vines. As YouTuber Robert Tolppi has pointed out in his video “How Vine Revitalized Minstrelsy,” King Bach built a career by posting six-second videos of anti-black stereotypes on Vine. Each video would have some punchline that pulled from a very short list of content: black people like watermelons, black people like chicken, black people have no fathers, etc. Many minstrel shows in the past featured white artists in blackface performing racial stereotypes, but this is a play on the same idea. What King Bach did in his time on the platform was peddle anti-blackness in order to appease a predominantly white fanbase. This was no commentary on racism, and it feels difficult to call this black art, but King Bach expressed his blackness in ways that satisfied white people’s desires. His performance of white conceptualizations of black people was a minstrel performance. By parroting those stereotypes, he sought to bring white people pleasure. And the satisfaction of the white gaze might be what sits at the crux of modern minstrelsy. When your art is made with a white audience in mind, the integrity of the art is corrupted. It stops being about you and the world, and it starts being about what your audience wants to see. For example, in 2012, Tyler, the Creator said the point of his music was “to piss old white people off” (NME). Now, to be fair, this was almost certainly a joke, but in making that statement, his art was transformed. The context surrounding its creation and perception was changed. All of a sudden, “Yonkers” stops being just an expression of who Tyler is, but it becomes a case of Tyler posturing himself into something for white people to fear. By the very presence of white eyes on Tyler’s work, his art was transfigured into something it otherwise would not have been. So, while this relationship isn’t necessarily minstrel, it decenters the blackness of black art and makes the white audience a priority. Tyler becomes the object of the white gaze when he makes himself into something that they can detest. In that way, he gives the white man control over his art and its message, and he becomes a little black performer



for a sea of angry whites. It’s not pleasure that these “old white people” get from listening to Tyler, but they are still his crowd. He’s performing blackness. This was in 2012, though. Since then, Tyler’s proximity to whiteness has only grown closer, and his quotations have only increased. In 2013, he said white people should be able to say the N-word. In 2017, he said “I don’t like Black dudes at all. I’m into White guys” (Thorpe). And with the release of his latest album, Call Me If You Get Lost, he has moved even further away from aesthetics of blackness. As Pitchfork’s Matthew Ismael Ruiz points out: Wes Anderson’s influence looms large here, with wide-angle shots on diorama sets, and vintage luxury suitcases shot with a low-contrast, brown-and-pastel color palette. It’s hard to separate Anderson’s manicured aesthetic from whiteness. There has been a definite movement in Tyler’s career as his artistic aesthetics have shifted and developed. He’s shed the hostility of his youth, and as he’s matured, he’s begun to find real comfort in himself. With that being said, his music, however black, is never simply “for us.” He has a problematic track record with whiteness, and a recurring motif of his musical journey has been the notion of being “too white for the blacks, and too black for the whites.” Things become complicated here because, while Tyler’s art is inherently black, making it fugitive to his white fans, Tyler is fighting to be something “more than black.” It’s an internal conflict that all black artists must consider. Speaking at the Grammys in 2020, Tyler went so far as to say: It sucks that whenever we- and I mean guys that look like me- do anything that’s genre-bending or that’s anything they always put it in a rap or urban category... So when I hear that, I’m just like, why can’t we be in pop? (Recording Academy / GRAMMYs) He feels constrained by the limits of blackness in the public eye, and in a sense, he is striving to shatter the glass ceiling keeping black art and white art in separate categories. It harkens back to Langston Hughes’s “The Negro Artist and the Racial Mountain” (which the epigraph of this paper comes from). For Hughes, Tyler’s desire to be a musician and not just a black musician represents a desire to be something other than yourself. Tyler’s career has been marred by what appears to be a desire to be white and an overall disdain for blackness. However, he can never not make black art because Tyler will always be black. Therefore, I’m actually extremely hesitant to call the relationship between Tyler and his viewers minstrelized. White people are consuming his art for pleasure, of course. His art is an expression of who he is, and following the changes and shifts in his public persona with his albums presents a sort of dialectical conflict within him. But there are white Tyler fans who will never understand the struggle to find an artistic identity as a black artist. Instead, they derive a message from his art. There’s still something to take away from it for them because they keep coming back, but the tumult of blackness is lost.

There’s so much confusion and pain tucked away in the back of his art that makes it inaccessible without explicit effort. In 2012, it might have been because he was performing with a white audience in mind, but now he’s just performing for white audiences, and I think he’s found himself at a loss. In his song “Manifesto,” Tyler even acknowledges all of this with a response, saying: Black bodies hanging from trees, I cannot make sense of this (Uh) Hit some protest up, retweeted positive messages (Uh) Donated some funds then I went and copped me a necklace I’m probably a coon to your standards based on this evidence Am I doin’ enough or not doin’ enough? I’m tryna run with the baton, but see, my shoe’s in the mud I feel like anything I say, dawg, I’m screwin’ shit up (Sorry) So I just tell these black babies, they should do what they want. Tyler doesn’t know what he’s doing. He doesn’t know what to say. There is no way for him to become a white artist no matter how much he tries, and his efforts to do so always result in aesthetic movements away from blackness. But, at the same time, this is just the art he wants to make. Some of it is genre-defying, some of it adopts white aesthetics, some of it is a rejection of everything white. This is the “too black for the whites and too white for the blacks” that I was talking about earlier. To a point, it has allowed him to foray further into the mainstream than most other black artists could ever hope to. He has white approval now. Call Me If You Get Lost was nominated for more Grammys, and his tours are always sold out, all without making explicit attempts at performing blackness. Once again, Tyler is not white, and his art will forever be black. Something is lost on his white fans: the duende of familiar confusion that I can feel won’t be felt by everyone. But, his art is just not for us either. In his desire to be more than a black artist, Tyler rejects the duende that could be there. All of this context surrounding his art effectively changes our readings of it. It effectively changes his position in our cultural landscape. He makes art for white people, and that art also has aesthetic attempts at distinguishing itself from blackness within it. As with “Yonkers,” Tyler is still making art with a white audience in mind. The relationship between artist and audience is different now because ten years have radically changed who Tyler is, but his primary audience is still inescapably white. His work is fundamentally black, but the aesthetics he chooses to represent it with are white: Tyler is putting what is black in white terms. In a sort of reverse of common conceptions of cultural appropriation, Tyler is translating what he knows into terms that white people will want to view. Minstrelization, therefore, places



another burden on the modern black artist. It almost feels as if any artistic freedom and creative liberty separate from black radical politics only serve to separate us further from our culture and invite whiteness into our arms. This is incidental minstrelsy — it’s a minstrelsy that simply cannot be escaped because it avoids the politics of blackness. Thus, Tyler creates a space that is welcoming to white people, a white sonic space without dissonance or dissent. Tyler has found himself in a position of performing for white people. White people love his music, and that is what it is. Whether he would like it differently or not, his music employs white aesthetics and draws white fans towards it, and in that, we find the scene of a black man performing his black art for an ocean of white faces. But, that still isn’t necessarily his fault. It’s a two-way street. A minstrel scene takes two parties to initiate, and white people can minstrelize anyone. JPEGMAFIA (henceforth referred to as Peggy), for example, is a radical black artist who creates music meant to challenge structures of the establishment. The most striking example of Peggy’s radicalism – his utter unpalatability – would be his 2016 song, “I Just Killed a Cop Now I’m Horny.” The first three minutes of this six-minute song are dedicated to a recording of the 1998 murder of Deputy Kyle Dinkheller. Dinkheller was killed by Andrew Howard Brannan, a Vietnam War veteran with PTSD, in a traffic stop gone horribly wrong. While watching the footage, you’ll see Brannan dancing and acting irrationally, and eventually, the altercation escalates to murder. A lot is going on in this video. Due to a host of different factors, including the United States’s gross mistreatment of veterans and police training that does not adequately prepare officers to de-escalate tense situations, Dinkheller died. However, the main thing most police departments have taken away from this is to shoot first and ask questions later because “cop killers lurk around every corner” (Beauchamp). The sociopolitical relevance of this recording is massive and its impacts on police culture run deep. By placing this at the beginning of the song, Peggy puts us in an uncomfortable situation, with the intent of challenging conceptions of the video. By introducing us to this horrific event before he raps, he establishes something that he proceeds to reframe later in the song. In his second verse, Peggy dedicates time to exploring the point of view of civilians, rapping: Oh my god This pig want to take my life 26 no job And now they wanna take my license ... He came closer Grabbed my toaster Put the gun to the dome of a dead cop Now, who’s the owner? To clarify, Peggy isn’t expressing a deep-rooted desire


to kill the police; rather, he is exploring the context under which police murders happen. They don’t happen in a vacuum, and the police have a long history of physically abusing people, particularly those having mental health crises. While he has expressed sympathy for Dinkheller in the past, mainstream perceptions (reinforced as they are by the establishment/police departments) of the murder tend to be strikingly one-sided. Therefore, “I Just Killed a Cop Now I’m Horny” invokes a sort of cultural dissonance within the listener, fundamentally challenging our perceptions of police violence, with Peggy singing during the chorus: When your mama dies, yeah (I just killed a cop) When your daddy dies, yeah When your sister cries Who gonna weep when these coppers die, yeah? Needless to say, that song should not be palatable. Neither should the rest of his music be easily heard. Other notable Peggy lyrics include: “Trump Era, I’ll be killing the feds” (from “I Might Vote 4 Donald Trump”), “Kill Trump, do ‘em like Floyd did Gatti” (from “Germs”), and “Pull up on a c*****r, bumpin’ Lil Peep” (from “I Cannot F****** Wait Til Morrissey Dies”). There’s a violent hostility underwriting his music, but it’s not an artificial one. JPEGMAFIA is angry because, as he would say, “Black people got shit to actually be mad about.” He was raised in a now-gentrified Brooklyn; he moved to Alabama in middle school; he was financially coerced into entering the Air Force and exploited to fight another man’s war (releasing his first songs while on tour). He was also in Baltimore when Freddie Gray was killed. JPEGMAFIA recalls admiring the anger of the people in Baltimore at the time. It was genuine, it was raw, and it was determined to destroy racist structures around them. “I respected that Baltimore was like, ‘We’re not going to march. We’re going to break shit.’ Because that’s how shit gets done. That’s how America was started. No one is so nice that they’re going to listen to your empathetic views, especially not police that shoot 12-year-olds for having play [toy] guns. So there’s no reason to not turn up on people like that.” - Sheldon Pearce, “Radical Rapper JPEGMAFIA: ‘Black People Have Things to Be Mad About’” The anger that Peggy expresses in his music is the raw, unbridled anger of black America. It’s a product of his blackness and his struggle against a white supremacist society. The constant current of hostility bearing down on black America is painful and tiring, and in his anger, Peggy is lashing out. Essentially, the sonic space Peggy tries to curate is an explicitly black one, made for black people. In fact, Peggy commented on the nature of his music, stating: A lot of these dudes in metal, they’re just mad at the world



because, like … who even knows? But I want to create a space for these invisible black people – where niggas who have genuine shit to be mad about can come and be actually mad about it. I just want it to be communal: we’re all here and we’re all weird and we’re all, like, fucked up and depressed. And despite all this, JPEGMAFIA’s fanbase is predominantly white! Lacking statistics to prove his audience is overwhelmingly white, all I can do is suggest watching a video of his performance at the Pitchfork Music Festival in 2019. As he crowd surfs and raps, he is surrounded by seas of white faces and hands. The explicit blackness of his music is turned into a commodity, a means of entertainment for those who are “just mad at the world because, like … who even knows?” The unbridled anger of black America is turned into something that white people listen to for enjoyment and pleasure–a commodity, like the scars on Douglass’s back. All that is to say, even in his intentional political provocation, even in his hostility towards the music industry, even in his aggression towards whiteness and his attempts at curating an explicitly black space, JPEGMAFIA still manages to be minstrelized. And, if he can be minstrelized, then anyone can. That’s why Kendrick Lamar’s “Money Trees” became a TikTok dance. A song about poverty, crime, and trauma was reduced to a TikTok fad. That’s why those same white people will go to a Kendrick concert, get on stage, and proceed to say the n-word with no hesitation. It’s not about the art because the art is lost on them. However, the aesthetics produced by black trauma fascinate white people. They’re really, really into it. And, it’s nothing new. Even in 1992, The Phoenix New Times ran an article about the overwhelming whiteness of Public Enemy’s fans (Koen). Contrary to what titles like “Pollywanacraka” might indicate, white people love Public Enemy! Again, that’s not necessarily the artists’ fault. Anyone can be stolen. However, it does mean one thing: the white gaze is not something that can be escaped. With the rise of the digital era, there is no space on the planet without white people and their cameras. And, you can’t sustain a musical career with no audience. We live in a Capitalistic society, and you need to make profits to survive. Your music cannot be your own, and therefore, you cannot avoid the complications that come from a white audience, so the question of how to deal with minstrel relations is necessarily in order. Thus, the central question of this paper: “How can we exist as both black and artists in America?” As I mentioned earlier, with the rise of the digital era, it is more difficult now than ever to escape from the white gaze. Every space you put your art into, every post, every blog, every forum will inevitably be crowded by white people. Therefore, the choice of some artists is to accept this inevitable consumption by white fans. In March of 2020, JPEGMAFIA was asked if the whiteness of his audience bothered him at all, and his immediate response was “no, I don’t give a f*** at all”. He went on to say:

Whoever likes my music likes my music. I have intentions with certain themes and stuff that are aimed at people like me but... It comes with the territory… I’ve watched these kinds of questions be asked to artists like Public Enemy in the 80s and it was the same context. It’s just like “why aren’t there more black people here” type sh*t, and it’s just like cuz we don’t have money and there’s not a lot of us in the states… Every artist I’ve ever been to that’s big, the whole goddamn place is white… I don’t really think about it because it’s never been any other way. Like, they asked Miles Davis the same shit in the 50s… This shit’s been like this. It’s never gonna change, probably. (Cambridge Union) The reality of America is that white people have a larger population, more disposable income, and more access to these concerts and artists. Tyler, the Creator’s concerts are predominantly white, too. As are Kendrick concerts. In the wake of the Top Dawg Entertainment Championship Tour show in Washington, D.C. (one of the blackest cities in America), Taylor Crumpton wrote an article for Paper (the magazine) about her experience. The audience was so overwhelmingly white that the space was transformed into something almost hostile to her. SZA, Kendrick, and Schoolboy Q performed in that show, and Kendrick even performed unreleased freestyles, but despite that, she was left thinking: I now have to ask, ‘What’s the cost of me attending this concert?’ Is it going to be a continuous struggle of young white boys groping my ass and white girls asking me to twerk while I have to remain calm as white people scream ‘nigga’ for hours, just out of pure privilege? The reality of being a black entertainer is, quite tragically, that you cannot escape having your art commodified and your culture fetishized by white eyes. Peggy simply accepts this and keeps moving. He can’t stop it. He can’t necessarily change the conditions under which his art is produced, and therefore it seems fruitless to try. To say, “I don’t want white fans” would be akin to saying “I don’t want to profit from my art” because you would be cutting off a major portion of your income. Concerts and festivals are inaccessible to black people because they are expensive and located outside of black communities. Capitalism doesn’t care about the race of those who buy tickets. It demands that tickets are sold for whatever price produces the most profits. If it happens that white people will buy tickets for more, then white people will be the ones who get tickets. It is what it is. Therefore, some artists can’t afford to worry about how white people perceive them because their art is intrinsically restricted by Capitalism. After all, nobody wants to be broke. Of course, there are other options. You can always do what Noname did and stop performing for white people entirely. The challenge of minstrelization that artists face is only reinforced by capitalism, and so, Noname has dedicated her life to deconstructing capitalism



and white supremacy. The movement of black liberation is difficult, and therefore she has made sacrifices, sidelining her music to focus, in part, on the Noname Book Club — a national program dedicated to uplifting black voices and supporting black-owned bookstores. As the website describes it: Noname Book Club is a Black-led worker cooperative connecting community members both inside and outside carceral facilities with radical books. Each month, we uplift two books written by Black, indigenous, and other people of color. We believe building community through political education is crucial for our liberation. We also believe everyone (especially racialized and colonized people) should have access to unlimited educational materials. This is why we make sure all the resources we offer are free. (“About – Noname Book Club”) Noname acknowledges that while she’s a rapper and an artist, she’s also a human being. She exists as a political actor and a colonized person at all times because no matter what, she is a black woman in America. Therefore, she acts with the interests of black liberation in mind, going so far as to reject an offer to be on the Judas and the Black Messiah soundtrack because the movie wasn’t radical enough (TIGG). Politics before profits. All artists are given a choice to prioritize money or their freedom and dignity. If you want to make art to support your ability to make more art, then okay. If you want to entertain as many people as possible, then okay. But, you cannot pretend that you’re not performing for the white gaze above all else. Black liberation and the fall of capitalism (and, further, of the commodification of art) are the only way you’ll ever be free. Money can’t buy you happiness anyway. Money is important to survive, but it shouldn’t necessarily have to be. If economic forces are enticing artists into minstrelization, then you always have the option to reject those economic forces entirely. The late poet Bob Kaufman said: ABOMUNISTS DO NOT WRITE FOR MONEY; THEY WRITE THE MONEY ITSELF. (Kaufman) From 1958 until, essentially, his death, Kaufman was a vocal and active public artist, and as such, he was involved in numerous literary movements in the Bay Area and New York City (Garner). He was arrested over 100 times for disturbing the peace, and lived most of his life in poverty or jail because of his fringe political and philosophical beliefs. He read Lorca, Jack Kerouac, Walt Whitman, Langston Hughes, and he grew up in New Orleans. That is to say, he had a long list of poetic influences, and his work reflects those ideologically. The idea of Abomunism comes from Kaufman’s 1959 work, the Abomunist Manifesto. In the Abomunist Manifesto, Kaufman effectively lays out the framework for a satirical political party based on his own ideals, and one of the first things he mentions is the idea of writing money.

For Abomunists, writing poetry is producing something of value because our art has value to us no matter what. Even if it’s not palatable to an audience or the world, our art is our everything. This would be where the tension between the Capitalists and the Abomunists lies. As it stands, everything has a monetary value applied to it. Therefore, the art and the artist are turned into a commodity: something with visible, quantifiable, and tangible value. Abomunism completely rejects this. They reject money, they reject value, they reject the middleman in between. Their poems are their life, and so their poems are their everything. This produces a sort of conflict though, of course, because Kaufman lived in America, and you need money to function, but he champions the cause anyway, proclaiming: ABOMUNIST POETS, CONFIDENT THAT THE NEW LITERARY FORM “FOOT-PRINTISM” HAS FREED THE ARTIST OF OUTMODED RESTRICTIONS, SUCH AS: THE ABILITY TO READ AND WRITE, OR THE DESIRE TO COMMUNICATE, MUST BE PREPARED TO READ THEIR WORK AT DENTAL COLLEGES, EMBALMING SCHOOLS, HOMES FOR UNWED MOTHERS, HOMES FOR WED MOTHERS, INSANE ASYLUMS, USO CANTEENS, KINDERGARTENS, AND COUNTY JAILS. ABOMUNISTS NEVER COMPROMISE THEIR REJECTIONARY PHILOSOPHY. “Foot-Printism” is the literary form of the Abomunist. It’s all about performing your art for yourself. Part of the mission of Abomunism is the production of poetry with more value than money, and to do that, you have to first make sure that the art has value. Palatability should not be your priority. The things your mind can create and think and feel are infinitely complex and beautiful, and you should not have to tailor your art to people who are not you. Kaufman performed a large chunk of his poetry in the streets of San Francisco. He would walk the crowded streets reading poetry to anyone who would listen, and he was arrested for public disturbance countless times. His art was rejected by the public, and he made no money from it, but his art had value to him. This stanza of the Abomunist Manifesto was a proud proclamation of the career that was to come. He’s an Abomunist, and he knows what he feels is true, so he is fully prepared to perform at dental colleges, kindergartens, and so on. Essentially, he’s ready to fail to gain wealth because he’ll successfully produce poetry that nourishes him instead. In essence, it’s an insistence that his poetry has real value, and as such, he will continue to produce it, no matter where that might be.



Kaufman presents us with a new model for the black artist: one that resists and opposes social convention entirely. Kaufman didn’t sell his poems – he barely wrote any down – and he spent most of his life in poverty, but he was an Abomunist until the end, and in that, there was some degree of liberation. Kaufman’s art was no one’s entertainment, but rather, an extension of his life into the world. It had no monetary value; it was just him and his poetry. In that vacuum – outside of economic pressures – is where art reigns free. When the production cycle doesn’t force your art onto your audience, you’re free to give it time to stew, or release it unfinished, or finish it when everyone else thinks it’s not done. Making art with no demands from the audience affords you a great deal of creative freedom, effectively allowing a more genuine piece to be created. This is where the duende lies. This is “where forms fuse together into a yearning superior to their visible expression.” Where imprecision, ugliness, discourse, and stutters are welcome as part of the human sound. I said earlier that the duende sits at the crux of all that is black, and this is why. The duende can be identified in the margins of society — deep in the crevices and cracks of our normative walls, and the blackest of art can be produced from its depths. Of course, these are depths that take sacrifice to reach. Kaufman was arrested well over 100 times in his life, and he made several visits to psychiatric hospitals. There is so much of his soul laughing and dancing in his poems, but that soul didn’t get there without carrying immense pain first. And who wants to do that? In order to make art that is not actively commodified and transformed by white audiences – art that is honest to ourselves – we have to suffer, and what sort of existence is that? This returns me to the question I asked at the beginning of this paper: how can the black artist survive? The black artist exists, of course, by being black and producing art, but satisfying those two conditions is a difficult thing because art is always transformed by the world around us. One challenge the black artist inevitably comes across is the reality of minstrelsy; white audiences and the white gaze are capable of turning a protest into a minstrel show. Those audiences cannot be escaped or avoided either; all black artists must tackle the question of minstrelsy. How do you make enough money to live without sacrificing a dimension of your art, while refusing to be the white man’s commodity? Some artists have found freedom in economic security, using the comfort of wealth to refine their skills. Tyler, the Creator has released one amazing piece after another for years now, and they only seem to improve in production quality with each iteration. The benefit of money is, evidently, that with money comes leisure and more technically refined art. Kendrick Lamar and Charles Chesnutt expand upon this idea, creating art that critiques the relationship between the black performer and the white audience. Abomunism isn’t for everyone. If the black artist must be minstrelized, then they ought to

at least control the conditions under which that happens. However, Abomunism should certainly be for some people, and it offers a sort of escape from normativity, tradition, and the pressures of cultural expectations. Reject money, reject wealth, reject comfort, reject signifiers of a Capitalistic life. Produce art that will transform your soul, and never let anything hinder that freedom. This diametric choice is the conflict of the black artist. It’s not really so diametric, of course. Nothing ever is. However, it does present a sort of dialectical relationship. The thesis: the black artist as they have always been — Kendrick, Tyler, Chesnutt. The antithesis: the rejectionary poet. Abomunism rose from opposition to convention. It exists for those outside the norm. The synthesis is a third position outside of these two but produced from within their conditions: the Noname Book Club. The Noname Book Club functions on two levels here. It was founded to promote black radicalism and literature across the nation, and in doing so, it chose to support small black-owned bookstores. This two-pronged approach supports people within our current system while educating them about its failings. America has never been good to black artists, let alone black people, and acknowledging this, we cannot hope to be liberated as artists before we are liberated as people. If we want our artists to be able to live free and do art without being consumed by the all-encompassing white gaze, then we have to deconstruct whiteness and Capitalism within our society. At the same time, there are things we can do now. Opposing the system doesn’t have to mean rejecting it entirely. You can support independent black artists every day, online or in your local community — all it takes is making a conscious effort towards redirecting your gaze.




“An Act Concerning Servants and Slaves” (1705). Encyclopedia Virginia, 1705.

Wallowitz, Laraine. “Chapter 9: Resisting the White Gaze: Critical Literacy and Toni Morrison’s The Bluest Eye.” Counterpoints, vol. 326, 2022, pp. 151-164. JSTOR. Accessed 6 Jan. 2022.

Baraka, Amiri. Black Music. 1st ed., Da Capo Press, 1998, p. 210.

Bernard, Emily. Black Is the Body. Knopf Doubleday Publishing Group, 2019, p. xiv.

Braembussche, Antoon Van den. The Value of Culture. Amsterdam University Press, 1996, pp. 31-43.

Brooks, Tim. “‘The Laughing Song’—George Washington Johnson (c. 1896).” 2022, pp. 1-3. Accessed 7 Jan. 2022.

Cambridge Union. JPEGMAFIA | Q&A | Cambridge Union (2/2). 2020. Accessed 9 Jan. 2022.

Chesnutt, Charles W. Dave’s Neckliss. University of Virginia Library; NetLibrary, 1996.

Crumpton, Taylor. “Have White People Stolen Rap Concerts, Too?” PAPER, 2018.

Douglass, Frederick. Narrative of the Life of Frederick Douglass, an American Slave. Bedford/St. Martin’s, 2003.

Du Bois, W. E. B. The Souls of Black Folk. A. C. McClurg and Co., 1903, p. 14. Accessed 6 Jan. 2022.

Garner, Dwight. “Two Poets with Alert and Nimble Eyes on American Life.” NYTimes.com, 2019.

Grater, Tom. “‘Song of the South’ Won’t Be Added to Disney+, Even with Disclaimer.” Deadline.com, 2020.

Hrabovský, Milan. “The Concept of ‘Blackness’ in Theories of Race.” Asian and African Studies, vol. 22, no. 1, 2022, pp. 65-88. Accessed 6 Jan. 2022.

Jarrett, Jacob. Making Black Art for White People. 2021, pp. 6-7. Accessed 7 Jan. 2022.

Johnson, George W. “The Laughing Song.” Victor, 1890. Accessed 7 Jan. 2022.

Kaufman, Bob. The Abomunist Manifesto. Beatitude, 1959, p. 1. Accessed 9 Jan. 2022.

Koen, David. “White the Power: Public Enemy’s Colorblind When It Comes to Making Money.” Phoenix New Times, 1992.

Mackey, Nathaniel. “Nathaniel Mackey | Cante Moro.” BLACKOUT ((Poetry & Politics)).

Marx, Karl. Economic and Philosophic Manuscripts of 1844. Dover Publications, 2007.

NME. “Rapper Says He Like to ‘Piss off Old White People.’” NME, uploaded by NME, 11 Apr. 2012.

Pearce, Sheldon. “Radical Rapper JPEGMAFIA: ‘Black People Have Things to Be Mad About.’” The Guardian, 2019.

Raengo, Alessandra. “Black Matters.” Discourse, vol. 38, no. 2, 2006, pp. 246-264. JSTOR. Accessed 6 Jan. 2022.

Recording Academy / GRAMMYs. Tyler, the Creator TV/Radio Room Interview | 2020 GRAMMYs. 2020. Accessed 9 Jan. 2022.

Ruiz, Matthew Ismael. “5 Takeaways from Tyler, the Creator’s New Album Call Me If You Get Lost.” Pitchfork, 2021.

Thorpe, Isha. “iHeartRadio Unsupported Country.” iHeart.com, 2017.

Tigg, FNR. “Noname Passed on ‘Judas and the Black Messiah’ Soundtrack After Seeing the Film.” Complex, 2021.

Tolppi, Robert. How Vine Revitalized Minstrelsy. 2021. Accessed 7 Jan. 2022.


Wheatley, Phillis. “On Being Brought from Africa to America.” Gleeditions, 17 Apr. 2011. Originally published in Memoir and Poems of Phillis Wheatley, Geo. W. Light, 1834, p. 42.



Oriental Femininity — Fears, Fantasies, and American Realities
Winnie Wang

“From invisible girlhood, the Asian American woman will blossom into a fetish object. When she is at last visible—at last desired—she realizes much to her chagrin that this desire for her is treated like a perversion…” - Cathy Park Hong, Reckoning


From the beginning, Chinese women in America have been perceived as a very specific sexual threat. Historically, though they worked alongside women of many races and backgrounds in the mining communities scattered throughout California, they were often singled out from their white counterparts as having contributed to the spread of sexually transmitted diseases. They were shaped to be scapegoats; even in today’s society, the sexualized nature of stereotypes of Chinese women is apparent in popular media, “yellow fever,” and heightened Western obsession with Eastern beauty standards. Throughout American history, different periods of immigration policy directed towards controlling and limiting Chinese immigration often included clauses that specifically condemned the presence of Chinese women in America. Despite the magnitude of their long presence in America, the history of Chinese Americans has been seemingly overshadowed by a tragic American history of economic depressions, global and national wars, and socioeconomic developments. Contemporary discussion of Asian American history as part of American history continues to be largely limited, especially the history of Chinese-American women. The manner of compliance and silence of Chinese prostitutes and immigrants in general is not only an emblem of their omission from American history, but part of the continuing violence of the history of Asians in America. In our time, the COVID-19 pandemic has not only given way to a biological virus, but a virality of hate and danger. It brought a wave of seemingly unprecedented anti-Asian sentiment that swept the nation. Suddenly, those who shared my eyes, my hair, my skin, were being killed — simply for looking the way they did. Chalked up to a “sex addiction” and a “bad day,” the murder of Asian women was the occasion for a brief wave of sympathy which swept the nation on social media, as users showed their temporary concern for the issue by posting yellow squares on their

For most Asian Americans, however, the feeling lingered: the paranoia of leaving residences and becoming infected with COVID-19 was only strengthened by the fear that they could be the next victim. Was this truly a novel feeling, though? For Chinese women, being associated with sickness and disease is historical. To be fetishized and demonized simultaneously is possible when one's identity is defined by a history of prostitution that was necessary for the function of a segregated American society dependent upon Asian labor, yet hostile to Asian economic, political, or social aspirations, and disgraced by that same society. Xenophobic slurs directed towards Asian Americans in the midst of the COVID-19 pandemic, including but not limited to the "China virus" and other variations of "coronavirus" with an inexplicable tie to the Chinese population, draw from roots in the early waves of Chinese immigration and the diseased and heathen stereotypes imposed in the mid-nineteenth century. It is interesting, then, that despite the persistence of xenophobic sentiments towards Chinese Americans and the contemporary manifestations of historical stereotypes, Chinese Americans are also associated with expectations of success, often expressed as the "model-minority" myth. The submission and passivity associated with Asian Americans, combined with restrictive and often racist immigration policies, have created a cycle that continues to degrade and diminish the values of hard work and achievement in Asian communities while denying them access to political power and cultural capital. Though there are a multitude of specific ethnic studies that dissect the history of Chinese immigration and immigration policy in relation to modern yellow peril, this paper aims to fill the void of intersectional research that focuses on the impact of anti-Oriental immigration policy tied to Chinese prostitution and the modern demonization of Chinese women. At the same time, it recognizes that American society is indifferent not only towards the actual heterogeneity that exists within Chinese American communities, but to the differences among Asian Americans. American society continues to aggressively aggregate all Asian Americans into a general demographic, destroying the significance of cultural and ethnic signifiers that generate diversity within the Asian population. It is then difficult, yet necessary, in studying the history of Chinese immigration, to try to separate this work from a generalization to all Asian immigrants.


Thus, in considering the modern demonization of Chinese culture and women, it is crucial to consider the contexts in which historical immigration policy has informed the contemporary yellow peril, which combines sexual and social contagion in the figure of the Asian woman. But if this woman is, for white America, an orientalized fantasy of an Asian female sameness, for Asian American women she becomes a figure of political solidarity across differences.

"The dead body of a Chinese woman was found last Tuesday morning lying across the sidewalk in a very uncomfortable position. The cause of her death could not be accurately ascertained, but as the top of her head was caved in, it is thought by some physicians that she died of galloping Christianity of the malignant California type." - Ambrose Bierce, 1870

In 1832, 16-year-old Afong Moy arrived in New York Harbor from Guangzhou. She was accompanied by the traders Nathaniel and Frederick Carne, and is to date the best-documented, and therefore proclaimed the first, Chinese woman to arrive in the United States. A petite Celestial lady, she was the first public introduction of Oriental exoticism, history, and dignity (Kong 2020). The public response was curious and resoundingly positive: in this glimpse into patrician Orientalism, the "Oriental" figure was revered. The Carnes merchants sought to take advantage of this public exhibition, using her presence to market Chinese goods as exotic and rare. Her feet were bound — both in the traditional Chinese practice of lotus feet and in the figurative sense of puppeteering, the "foreign" female body manipulated for the purpose of promotion, playing on the strings of public consciousness and the sensory stimulation her presence provided. From the very beginning, an icon of cultural convergence and conciliation was tainted with Western fantasies. Her femininity became a commodifiable exhibit (commonly known as the Chinese Lady) that shaped American perception of China for decades to come (Ling 1998). Beyond the actual woman lost to this exhibition of Orientalism, we see the beginnings of a dehumanizing construction that exploits Chinese cultural values to belittle the Eastern influence in the Western world. Up to fifty Chinese are commonly estimated to have stayed in the United States for varying reasons and lengths of time leading up to the Gold Rush. Most were temporary workers, leaving minimal meaningful impression on American society and little documentation for future research. As one of the very first long-term residents, Afong Moy received extensive news coverage during her time in the United States. She was both an advertisement and a spectacle — under her second manager, an American


promoter and circus founder, P. T. Barnum, Moy was commonly on stage against a scenic Orientalist background, highlighting aspects of her cultural exceptionalism (Ling 1998). Her bound feet, Chinese clothing, and accessories served as merely props to a constructed reality of China. Though her exhibits received waves of positive feedback in the beginning, audiences often responded arrogantly to the contrasts between her cultural signifiers and the opposing American ones. Her adornments and bound feet seemed antagonistic to the concept of American femininity, and her signification of the absolute power within Chinese dynastic government opposed republican ideals of self-government and free labor. Afong’s arrival during a period of significant upheaval in American society placed her between American slavery and the civil war, Native American removal, the moral reform movement, and an era of ambivalent feelings towards women and a rising first wave of feminism. Though little primary documentation of her performances exist, the context of American society in which her performances were popularized serves as a means to understand Americans’ initial orientalized impressions of Chinese femininity, and the shifts that occurred throughout her career that parallel well-known eras in American history that we are familiar with today, such as the Gold Rush, the emergence of immigration policy, and even slavery (Zhang 2014). The first Chinese immigrants began arriving in the Americas starting in the mid-nineteenth century. The economic consequences of the Opium wars, coupled by environmental disasters that decimated the livelihoods of lower classes in many parts of China, drove families and communities to seek opportunities elsewhere while escaping their poor fortunes. When rumors of prosperity in the Gold Rush gained traction in China, waves of Chinese immigrants flocked to California to join the frenzy for gold. At this point, placer mining, which was the earliest form of gold mining typically defined by individual panning and other tedious techniques, was being largely phased out. It was replaced by heavily capitalized mining industries, which pitted “white” workers against the multi-racial working classes that began to reside in California. Though they moved for the intent of working in the Gold Rush, most Asian immigrants ended up as laborers: some worked on American farms or in the textile industry, and others were employed by the transcontinental railroad companies, building railroads that would later be central to Westward expansion (Takaki 1990). Few women were a part of the first waves of immigration — in fact, an 1855 census showed that of the total Chinese population in America, only 2% were women (Takaki 1990). Mining was a male-dominated activity. Chinese cultural values and financial roadblocks accounted for these low numbers, as it was difficult for women to travel alone. The racial violence that was associated with American immigration also deterred most men from bringing their wives to establish families in the Americas. Most immigrant



laborers intended to return to China after they had labored for a few years. The dangerous sentiments directed towards Asian immigrants were not a constant: in a January 1852 address, California governor John McDougal proclaimed that the Chinese were "one of the most worthy classes of our newly adopted citizens — to whom the climate and the character of these lands are peculiarly suited," paying tribute to the immigrants for their significant contributions to the quickly expanding infrastructure and industrialization of the United States (Takaki 1990). The governor's address failed to account for the rapidly shifting political climate that soon began to question the legitimacy of Chinese immigration to the United States. A nativist movement, spurred by racially violent nationalism and a crowding job market, cried "California for Americans," blaming the competition in the mining industry job market on immigrants, especially Asians. A later committee of the California Assembly decried the growing presence of Chinese miners, citing concerns about the corruption of the well-being of mining districts due to the distinct customs and languages of the Asian miners, the degradation of existing white workers, and the discouragement of other American workers from migrating to a crowding state. Like the backlash that Afong Moy had faced for her non-Westernized views, Asian laborers were seen as heathens who posed a political and moral threat to a white, Christian America. Furthermore, most Asian laborers did not intend to seek American citizenship, and were seen and described as "servile contract laborers." The tension and racially motivated violent sentiments directed towards Asian miners discouraged Chinese men from planning to permanently settle in the Americas, meaning that few women would be included in the first waves of immigration (Yung 1995). Back in China, families continued to live on the brink of subsistence, plagued by natural disasters that threatened their communities and livelihoods. Further victimized by population pressure and the consequences of foreign imperialism on the Chinese government, many peasant families with the means to send men to emigrate in search of employment were driven to do so. Given that few men brought families along with them to the mining sites, many of the few women who did live as a part of these mining communities were prostitutes (Hirata 1979). Due to the tremendous gender imbalance, and the relative lack of alternative employment opportunities for women in mining towns, prostitution became a major occurrence. Driven by economic impoverishment from natural disasters and war, many families back in China resorted to selling their children, alongside infanticide, abandonment, and mortgaging. When it came to female children in China's patriarchal and patrilineal society, families received little benefit from a daughter's labor, and she would never carry on the ancestral line. Daughters were often seen as a financial and familial burden, and families began to relieve themselves of their daughters by selling them into prostitution.

Not only would they benefit financially from the sale and her earnings, but the costs and burdens of her upkeep were no longer their responsibility. These unwanted daughters became the supply for an unsatiated demand in San Francisco and other cities where miners would congregate, exclusively male communities where a business such as prostitution prospered off the sexualization of exotic Chinese women (Hirata 1979). In mid-nineteenth-century San Francisco, the overwhelming population of male laborers traveling alone, the lack of employment opportunities for women, and the demand for sex work in developing towns created conditions that catalyzed the prominence of prostitution. In societies that are undergoing rapid industrialization, there is generally the assumption that men make up the majority of the workforce and that few women accompany them to settle families. Thus, prostitution serves a dual economic function: it helps to maintain labor forces that consist of younger bachelors, as laborers with families would likely require higher wages, and it enables businessmen to exploit women for a huge profit that can then be invested in their other interests (Hirata 1979). Even as slavery began to ebb, Chinese prostitution in California was seen as a continued form of involuntary labor. While white prostitution in California quickly transitioned into a capitalist organization that was characterized by manager/wageworker relationships, Chinese prostitution, fueled by sexual and racial prejudice, remained a semifeudal organization for most of the decades in which it was prominent. In the years following Afong Moy's presence in the United States, women slowly began to immigrate from China. Few women left searching for employment opportunity in prostitution of their own will, as the journey was financially difficult and Chinese culture discouraged the lone travel of women. Most women had their sales arranged by their families involuntarily, and others accepted their families' offer out of filial loyalty. Because most women in China were also not educated, it was easy for them to be coerced or tricked into signing contracts (Zhang 2014). The patriarchal society also placed an emphasis on family preservation, especially when it came to male migrants who went to work in the United States and the wives they left behind. If they had the means to, men would return periodically for the sake of starting a family, and with luck, their wife would bear a son who would later migrate to join the workforce in America. This system continued to reinforce cycles of infanticide and abandonment of female children. Furthermore, it reinforced the idea that "decent" women were discouraged from traveling to the United States, as they would be expected to remain and carry on their filial duties. Only the "indecent" women, or prostitutes, were the ones who would migrate (Hu 2014).



Though Chinese Americans faced sizable racial prejudice due to the assumptions that most were temporary laborers not seeking American citizenship, business owners contributed to the conditions under which this was the case. As was the case with African Americans and their families, Asian workers were paid extremely low wages to ensure that they would not bring their families when they migrated, as they would not have the wages to provide for a growing family. Typically, when the prostitutes reached the United States, they were separated into one of three groups. The top candidates would be selected to serve as concubines or mistresses for affluent Chinese men in San Francisco (many of whom were brothel owners themselves; racial prejudice towards the Chinese at this time made it so that prostitution was one of the only avenues in which Chinese laborers had significant success). The best of the rest went to higher-class brothels that served only Chinese men, whereas the rest went to serve in lowly houses that served a racially mixed audience. At the time, Chinese culture strongly disapproved of interracial relationships, especially sexual ones for women. Thus, the lower-class brothels had both a classist and racist meaning, as Chinese men and white men alike would visit the dens for their lower fees (Hirata 1979).

Figure 1. A woman peers through the wired mesh of a lower-class brothel in 1880 San Francisco.

The presence of the Chinese prostitutes was soon recognized by the general American public. Stories about the prostitution trade made their way into newspapers and other media, sparking widespread shock. Though many were outraged by the cruelty of the system, others took it to signify the immorality of Chinese women and Chinese laborers in general. Prostitution was seen as further proof that the Chinese were heathens, and that the way they oppressed their women through their patriarchal cultural values was connected to how they would plague the United States with their immorality (Ling 1998). Given that the majority of Chinese women in the United States at the time served as prostitutes, generalizations were made about all Chinese women and about the patriarchal societal values that the Chinese brought with them as they migrated. In San Francisco and other mining towns, tensions grew and xenophobic sentiments towards Asian laborers became dangerous. Though it is not commonly acknowledged, the immorality associated with Chinese women, and with the men to whom they were inextricably linked, later led to anti-Chinese immigration policies such as the Page Act and the Chinese Exclusion Act.


Many states along the West Coast also passed legislation that banned interracial marriages. Though these laws did not specifically target Asian women, they can be seen as a consequence of Chinese prostitution. The acts served to further degrade the presence of Chinese women in American society and to leave only minimal avenues through which Chinese women could migrate and settle in American communities. The development of Chinese prostitution as a prominent venture in the nineteenth century thus calls upon the ideological and economic conditions of the two realms that the enterprise connected: California's need for labor and China's economic shortcomings; Chinese patriarchy and white xenophobia.

Since the efforts of local authorities were often futile in diminishing the presence of Chinese prostitutes, and many were bribed into being complicit in the trade, it was not long before the federal government stepped in to restrict the prospering brothels. Though the Chinese Exclusion Act is the most commonly mentioned anti-Asian legislation, it was hardly the beginning of a history of xenophobic sentiment in American government: it just happened to be the most explicit example. In the 1870s, Protestant American women organized campaigns that fought against prostitution on the West Coast. Their efforts culminated in the passing of laws to ban prostitution, notably the Page Act of 1875, which states: "That in determining whether the immigration of any subject of China, Japan, or any Oriental country, to the United States, is free and voluntary . . . it shall be the duty of the consul-general or consul of the United States residing at the port from which it is proposed to convey such subjects . . . to ascertain whether such immigrant has entered into a contract or agreement for a term of service within the United States, for lewd and immoral purposes; and if there be such contract or agreement, the said consul-general or consul shall not deliver the required permit or certificate." (Page Act) The act specifically applied to laborers from "China, Japan, or any Oriental country." Explicitly, the act forbade the importation of women for the purpose of prostitution. In practice, the act was wielded as a weapon to prevent the further immigration of Chinese prostitutes, "undesirable" laborers, into the United States. This law was also an early effort to discourage Chinese immigration without explicitly making prohibitions on the basis of race or ethnicity. Instead, it sought to generalize Chinese immigrants as immoral or coerced laborers, which only furthered the racial tensions already present. The impact of this act targeted Chinese women, in particular,



as people believed they would transmit “Chinese diseases” to clientele. The Orientalist misogyny imperative in this act is central to Chinese and Chinese American history. It is through these gendered and violent exclusions that the United States was able to simultaneously exclude Chinese families from settling while also maintaining the image of the American Dream, as if the country had open borders to anyone seeking opportunity. This act is an important landmark in the start of Orientalism in immigration policies, and served as the end of the United States’ open borders. Even today, the concept of Orientalism is central to the perception of Chinese Americans and Asian Americans as a whole in American society. Later on, the evils of Chinese prostitution were cited to pass the Chinese Exclusion Act of 1882. The passing of this explicitly racist act signified an important shift in the history of American immigration: one from a generally “open-door” policy to one that was growing to be increasingly restrictive. The Chinese were the first targets, characterized by race and class, to be affected by severely limited entry. The 1882 law affirmed the 1790 Naturalization Act in barring Chinese naturalization, and further prohibited the immigration of Chinese Americans. The act declared that the government had decided that Chinese Americans posed a threat to the United States that “endangers the good order of certain localities within the territory”. This language serves to blame Chinese laborers for creating competition within the labor market and lowering the overall wages that were offered to laborers, and disenfranchising white businesses and the power and wealth generated from those that would directly benefit some portion of American citizens. Though the class dimension of the Chinese Exclusion Act is often overshadowed, it is important to note that the context of the act only bans the eligibility of lowly, unskilled laborers. Exceptions still existed for merchants, diplomats, and students. Thus, the Act declared “That for the purpose of properly identifying Chinese laborers who were in the United States . . . the collector of customs of the district from which any such Chinese laborer shall depart from the United States shall, in person or by deputy, go on board each vessel having on board any such Chinese laborers and cleared or about to sail from his district for a foreign port, and on such vessel make a list of all such Chinese laborers, which shall be entered in registry-books to be kept for that purpose, in which shall be stated the name, age, occupation, last place of residence, physical marks of peculiarities, and all facts necessary for the identification of each of such Chinese laborers . . . .”(Chinese Exclusion Act) When this law was passed, the United States had few measures of immigrant control infrastructure that might be used to enforce it. Modern immigration officials, passports, green cards, and deportation policies were all measures that initially existed to control Chinese immigration. The

Bureau of Immigration implemented the Bertillon system to document Chinese immigrants — the system consisted of a series of extremely invasive and degrading examinations that would seek to classify the identity and race of each individual attempting to pass through. “First, the person’s picture is taken, full body and from the waist up. Then the face, frontal view; and then from the back of the head, and facing left and right. Afterwards, a machine is used to measure the width of the skull. The distances between the eyes, ears, nose, and mouth are measured as well as one’s height and the length of one’s hands and feet. The distance between the shoulder, elbow, and wrist are measured, as are the distances between the hips, knee, and calf. The arms are measured out-stretched and bent as are the legs measured while standing and in-step. All of these measurements are taken while the person is nude. The length of the fingers and toes between each joint is also recorded. There is nothing that is not recorded in great detail.” (Desnoyers 1991) Though minimal documentation of the Bertillon system exist, there are a few primary accounts from immigrants that underwent the examination. As can be seen in this excerpt from Liang Qichao’s journal, measurements of limbs, heads, ears, teeth, and genitalia were taken to document each individual.

Figure 2. A photograph from Alphonse Bertillon’s photo album from his exhibition at the 1893 World’s Columbian Exposition in Chicago, showcasing the technology with which measurements were recorded and how the process might appear.

The Chinese Exclusion Act is the first to explicitly argue on racist foundations that "Oriental culture" is unsuitable and incompatible with American values and degrades the general moral fabric of the United States. However, though it is the most well-known of its kind, the Chinese Exclusion Act is only a brief snapshot into the history of legislation that has shaped and continues to construct immigrant dynamics in modern society. Throughout this work, we have examined and investigated the emergence of Orientalism in American legislation and policy, and how the role that Chinese women played in the gold rush communities of mid-nineteenth-century California


still molds their identities today. We see this phenomenon in popular media — the implementation of Eastern beauty standards in Western ones, fashion trends, and continued fetishization have demonstrated that though Orientalism has ceased to exist as a political form of oppression, it is still exploited and misinterpreted. In contemporary American society, Chinese immigrants continue to fight against the language historically weaponized against their existence in the United States. Beyond the adoptance of Orientalism, the modern acceptance of immigrants has largely become contingent on the labor value that they can contribute to the capitalist economy. From the first exclusion acts to the naturalization acts later that sought to characterize immigration by education level and technical skill, the place of immigrants relies heavily on their ability to be an economic asset. For example, even today, the liberal argument in “support” of undocumented immigrants falls into narratives that undocumented immigrants are central to the economy of the United States, citing the different industries that exist that are built upon exploiting cheap, unskilled labor. The sentiment that immigrants are immoral exists in the deeply rooted feelings of distrust and the belief in the criminality of immigrants. The identity of the immigrants continues to be constructed by criminalization — imagery throughout the nineteenth and twentieth centuries characterized Chinese men as predators that would prey on white women (Lyman 2000).

Figure 3. Yellow Terror in All His Glory, 1899.


Today, similar narratives construct Chinese immigrants as a juxtaposition between submissive and aggressive, a taint upon the purity of innocent white women. In considering the modern day COVID-19 pandemic, the sentiments directed towards Asian-Americans seem like a novel wave of growing xenophobia. However, the coarse generalizations about Asian American immigrants have been reproduced by the recent history of immigration policy, specifically by the language within these policies that continues to define immigrants racially. Chinese immigrants, in particular, are associated with a history of disease and immorality that has darkened the moral fabric of the United States. The modern concept of the “illegal immigrant” was built upon this concept of the Chinese immigrant, one that was inherently detrimental to the American society as a whole as it carried values of Chinese patriarchy and filial loyalty. Even the simultaneous sexualization and demonization of Chinese women in modern society can be found to have roots in the first waves of Chinese prostitution in San Francisco and the layers of legislation that were created in the decades to come to bar the insatiable thirst that a capitalist economy built on exploiting the labor of single men has for prostitution. The conditions of modern Chinese immigrants and Chinese women can be seen as a dangerous consequence of the neglected American ideals of imperialism, military intervention, and economic exploitation that create the material conditions in which movements such as yellow peril thrive.



Desnoyers, C. A. (1991). Self-Strengthening in the New World: A Chinese Envoy's Travels in America. (2), 195–219.

Hirata, L. C. (1979). Free, Indentured, Enslaved: Chinese Prostitutes in Nineteenth-Century America. (1), 3–29.

Hu, Y., & Scott, J. (2014). Family and Gender Values in China. (9), 1267–1293.

Kong, X. (2020). The Chinese Lady: Afong Moy in Early America. By Nancy E. Davis. New York: Oxford University Press, 2019. xi, 331 pp. ISBN: 9780190645236 (paper). 79(4), 1072–1074.

Ling, H. (1998). State University of New York Press.

Lyman, S. M. (2000). International Journal of Politics, Culture, and Society, 13(4), 683–747.

Takaki, R. (1990). Strangers from a Different Shore: A History of Asian Americans. Penguin Books.

The University of Texas at Austin Department of History. (2019, July 18). Page Law (1875). Immigration History.

The University of Texas at Austin Department of History. (2020, January 31). Chinese Exclusion Act. Immigration History.

Bertillon System. (1893).

Yellow Terror in All His Glory. (1899).

Yung, J. (1995). Unbound Feet: A Social History of Chinese Women in San Francisco. University of California Press.

Zhang, T. (2014). The Start of American Accommodation of the Chinese: Afong Moy's Experience from 1834 to 1850. (3), 475–503.


Establishing the Perception of Home Amidst Violence for Indian-American Immigrants through Heritage, Household, and Permanence
Pranet Sharma

What is the definition of home? What does home mean to people? Can the word "home" have multiple definitions? For some, a home is synonymous with a house, the physical building in which they live, in which they spend most of their time. For others, there is not as clear a connection. Instead, home for them is a construct, or an idea. It is a place where they feel comfortable, a place where they can express themselves. This particular definition of home makes the distinction between a "house" and a "home" even more apparent. In a house, people may not be able to express themselves; in a home, they always can. The definition of home, perhaps, can be extended. If people do not feel at home within their house, is there another place where they feel at home? What factors influence this? Introducing the confounding variable of nationality complicates the question further. Is a country truly home for anyone? What about a state? What about a city? There are people who do not feel at home in any structures but feel at home in a land. There are people who do not feel at home in a land, but feel at home in structures within that land. The question of immigrants and their definition of home must take all of this into account with another twist: there could, perhaps, be multiple homes. For some immigrants, these might not exist at the same time, with a definitive "before" and "after" in one dwelling place or another. On the other hand, some immigrants exist in a state of superposition; they can consider multiple places to be their home depending on which state they collapse into. The idea of immigrants forming their home in a completely new place is also fraught with complications. In our exploration of this idea, we will be discussing ideas endemic to South Asians—particularly Indian-Americans—mapping their journeys and exploring what factors cause them to perceive a place to be their home and which particular aspects they consider in cementing their identity. Looking at the depth and scope of Indian immigration throughout the history of the United States, in all the cases in which they have faced racism, the common theme uniting Indian immigrants has been their tenacity to remain in the face of that racism,


their optimism that the racism would vanish over time. They have fought against the discrimination, maintaining their roots, living through generations who believed that the United States could be a conceivable home for them, that they could and did belong. The journey of Indians in the United States has been a journey surrounded by violence, a journey in which adaptation required a highly specific knowledge of home in order to survive, a journey in which the surroundings changed but the core definition of home remained. This definition has been shaped over time by the collective experiences of Indian-Americans across the nation into a combination of the two roots of home, a syncretism of palpability and impalpability. From the early 20th century, when Indian-Americans worked primarily in agrarian labor, to the late 20th century, when blue-collar employment gave way to white, the Indian-American has sojourned through almost constant racism and stereotyping. The strength of culture, the intense value of familial ties, and the gradual growth of stability, however, served as three cornerstones allowing Indian-Americans to establish a home amidst the violence. As time progresses, and racism evolves, the definition of home remains the same, providing a foundation for the growth of the Indian-American community in the future. The passing of generations carves out a niche in a country that once turned on their ancestors, transforming distaste into development and bringing a part of their soul to color a rich land.

The United States in the early 20th century was a tumultuous place. The Gilded Age had reached a close, but the tremendous materialism, consumerism, and excess wealth that accompanied it maintained a remarkably tight hold on the economy. Cities in the North began to expand, driven by industries such as oil and steel; new technologies surged across the nation. The remnants of the Civil War could still be felt, despite the passage of almost four decades of peace; violently supplanted by Jim Crow, the Reconstruction period in the South had failed to achieve



much of its objectives, and the South remained a broken, mostly agrarian society. Despite this dichotomy, the United States’ reputation as the Land of Opportunity endured, even as the nation pursued an increasingly aggressive policy of expansion into Asia and America, making manifest its destiny as a global ideal. And within this ideal of Manifest Destiny, the American Dream began to take hold. America was a place where people could go to get a job, where they could pull themselves up by their bootstraps, where social mobility reigned supreme, and where “social support for the promotion of merit” was the goal (Taussig). This idealism and romanticism permeated not only the people who lived in the United States but people worldwide as well. America was seen as an escape, as a place where they could leave behind the struggles of their past and be embraced into the arms of liberty, freedom, and opportunity. With this background of economic stagnation and growth, rampant corruption, and the hold of organizations over government, the principles that fall under modern conservatism were at their peak. Individual worker rights were limited; labor strikes were met with the force of— termed detective but essentially mercenary—paramilitary groups such as the Pinkertons (O’Hara). Taxation was minimal, and the rich accumulated staggering amounts of wealth, levels of which had never been seen before. In spite of its imperial expansion in 1898, political isolationism was the primary philosophy, an ideal of America being for Americans. The repercussions of this isolationism can be seen throughout the course of history; the majority of America’s involvement—or lack thereof—in the First World War was due to this principle. Born of isolationism, the roots of anti-immigration discourse can be traced back to the turn of the century; as boats began to come across oceans to Ellis and Angel Islands, immigrants were often treated with rancor, sometimes “admitted then deported,” sometimes “excluded completely” (Cannato). The United States’ attitude towards immigrants has always been an exercise in hypocrisy: the country was established through immigration—none of the establishing population of this country was native to the continent—and yet these selfsame people detest immigrants and immigration practices. A major contributor to this anti-immigrant isolationist attitude can be found in fears of unemployment. The pervasive belief was that immigrants would “steal” jobs from Americans, that the economy would be overwhelmed by people who were not American. The deep racialization that had originated from the Civil War and the heritage of slavery remained pervasive. And consequently, immigrants from Europe were treated better than immigrants from Asia. The fear perpetuated as corporate conservatism valued Asian labor for its affordability; Asian immigrants were far cheaper and required less to sustain than immigrants from Europe. The roots of the modern exploitation of Asian Americans can be traced here, with employers taking

advantage of a lack of citizenship and deprecating working conditions to generate a larger profit. Asian American labor was exploited almost ruthlessly, with corporations hiring Asian Americans who were then taxed without having representation or say in government. It is interesting to note how one of the founding rallying cries, one of the fundamental values that America is built on—“no taxation without representation!”—was so mercilessly ignored when corporations wanted a cheap source of labor; the relentless drive of capitalism gradually shaved away multiple tenets that were once considered the core of national identity. It may seem like a counterintuitive decision to pursue work here in such an atmosphere, a place where workers were abused—even more so if they were immigrants, and double that if they were Asian. But for many people, despite the hostile socio-racial continuum of America at the turn of the century, the lure of the American Dream proved to be stronger (Taussig). America was still a land of opportunity, a land where dreams would come true, a land where anyone could achieve anything.

Home Through Heritage

For many Indian immigrants, America was a land where they could secure an economically viable future, a future distanced from the horrors of the present in which they were mired. India was deep within British colonial rule at the turn of the century, a grip that only tightened with time, with "rebellions being put down with violence" and the "exploitation that accompanied imperialism" sucking the subcontinent dry (Tharoor). For many people, economic opportunities were few and far between; the agriculture-driven economy that had been built was drained by the British, with many farms being forced to convert to producing cash crops such as indigo and poppy (which was then processed into opium and sold to China—the long arm of British imperialism at work). The farmers never saw a penny from the profits that these cash crops yielded, and the decimation of fields of grain such as wheat and rice in favor of cash crops caused "widespread famines to sweep the subcontinent" (Srivastava). Against a backdrop of restricted freedom, increasing governmental violence, and rampant racism, it seemed an obvious choice for many workers to travel to the United States—a country that had thrown off British shackles in the past—in the pursuit of employment. It was to the shock of most that the environment in America, a country with the reputation of being the land of the free, could be so hostile to them; escaping the maws of racism in colonial India, they fell into another beast across the Pacific. Sikh farmers initially sought to work in Canada, only to be met with racist attacks, prompting a mass exodus of Sikhs south of the border. But even there, respite could not be found. Organizations in the vein of the Ku Klux Klan were formed, with hatred directed towards Indian immigrants.


On September 4, 1907, in Bellingham, Washington, a mob of four or five hundred White men—members of the Asiatic Exclusion League—drove out Indian immigrants who had worked in the textile mills of the area (League). The mob beat the workers, corralling them into the Town Hall, and stole most of their valuables. By the next day, hundreds of immigrants had departed Bellingham and searched for work in other Washington towns. However, they were often met with the same treatment. Riot after riot drove the populace from place to place, the environment thick with racial charge—Indian-American immigrants were painted as "Hindoo menaces to American society" (Sohi). The press furthered this hostility. Newspaper publications throughout the United States published deeply racist articles about Indian-American immigrants (see Appendix I). It became extremely difficult for Indian-Americans to travel anywhere without facing violence or danger. However, the public perception of them shaped by the media implied that they were the danger (Puget Sound American). New immigrants were instantly thrust into this environment, where the people they encountered were disgusted upon seeing them, and the newspapers that they read actively spread misinformation about their actions, shifting the violence wreaked against them from physical to mental. The racial antagonization that Indian-Americans faced was not limited to personal experience; the legislative machinery of the United States swiftly worked to restrict the rights that Indian-Americans had, and racism spread from localization among the public to systemic discrimination. Anti-miscegenation laws in several states prevented Indian men from marrying White women and vice versa. In 1913, the Alien Land Act of California prevented non-citizens from owning land, restricting the stability that immigrants could establish in the United States (Webb-Haney). Much more widespread legislation impacted immigration four years later; in 1917, the Asiatic Barred Zone Act was signed into law (Asiatic). Under the provisions of the new law, immigrants from the Asiatic-Pacific region were subjected to literacy tests, were placed into new categories of inadmissibility, and were restricted from immigrating to the United States. It was the most sweeping immigration act since the Chinese Exclusion Act of 1882, and it barred admission to the United States almost entirely for immigrants from Asia. Immigration from India decreased substantially, and for several years, Punjabi farmers would travel over the border from Mexico to circumvent the law. Political groups formed in California to aid this influx of immigrants; the Ghadar Party notably facilitated over-the-border crossings and campaigned for Indian independence and freedom. Between 1920 and 1935, "almost two thousand Indian immigrants crossed over the border from Mexico", searching for economic sustenance (Chakravorty). With the rising systemic pressure on Indian-Americans, it became harder for them to perceive a home in the United States. Citizenship laws—notably, the Naturalization Act of 1906—were signed to prevent anyone who was not a "free white person" or among the "aliens of African nativity and persons of African descent" from gaining American citizenship.

The first Indian-American citizen, therefore, was a Parsi, who was ruled to be completely white (Naturalization). Colorism was rampant in the treatment of Indian-American immigrants; people with lighter-colored skin were favored, while people with darker-colored skin were often discriminated against more. The closer someone's skin color was to being white, the more likely it was that the long arm of systemic racism would avoid them. The Indian-American experience became fraught with even more violence, not only from the communities that surrounded them, but from the government as well. The most significant example of racial injustice perpetuated by the law—particularly the judicial branch—can be seen in the landmark Supreme Court case United States vs. Bhagat Singh Thind. Thind had filed for naturalization under the Naturalization Act, claiming that as he was from the north of India, he had Aryan blood, thereby including him within the provision of being a free white person. In 1923, the Supreme Court unanimously rejected Thind's argument, stating that it was "common sense" that Thind could not be considered to be White, meaning that he could not become a naturalized citizen. Further, they argued that "the great body of our people" would never accept Indian immigrants as American (U.S. vs. Thind). It was the first example of the judicial system upholding unconstitutional legislation that discriminated against Indian-Americans, and it set a precedent for the future. It served as a blatant example of systemic racism within the country as a whole, with the perception of Indians negated to such an extent that they were considered to be outcasts and aliens who would never find a place in American society and would always be rejected by the populace. Despite being surrounded by violence on the personal, community, and national fronts, Indian-Americans fought to form a home, drawing on their culture as a source of stability. In 1912, the first Gurudwara in the United States was built by Sikh immigrants in California (Stockton). Religion is often one of the strongest bonds a community can form, and by establishing a foundation of religion within the United States—founded on their freedom to practice religion secured by the First Amendment—a home could begin to be defined. As these roots of home started to be placed down, the influx of Indian-American immigrants swelled—more and more people searched for employment opportunities that were better than those they could find in India. Surrounded by hostility, Indian-Americans sought refuge through a creative relation to their heritage. Drawing on their roots and on their ancestry, they were able to find a place in which they felt secure, a place where, no matter what the dangers of the world around them were, they could remain optimistic. Indian culture is often tied deeply with



faith, and it was this faith that helped sustain so many early immigrants. Not only were they able to find a home in this faith, but they were also able to use this faith to establish a home in the United States; heritage served dually as a home and as a mechanism to create a home. Their heritage served as a source of comfort for them, a much-needed respite from the violence that surrounded them. Thus rooted, a more permanent place could be formed.

Progresses: Home Through Household

The socio-racial continuum of the United States shifted considerably as the 20th century progressed; the home that Indian-Americans had built in the midst of violence endured as the violence decreased. Indian-Americans started to experience upward social mobility, taking advantage of education in the United States to secure higher levels of schooling. This period became a definitive time of transition, in which the perception of Indian-Americans began to change and much of the United States began to adjust its prejudices. Although it was a long journey to get there, it involved several sources of optimism for a future in which prejudice would be minimal and racial discrimination insubstantial. As the 20th century progressed, the primary employment of Indian-Americans began to diversify. Agriculture gave way to higher-skilled, higher-paying jobs as education permeated throughout the community. Notable Indian-Americans began to gain national recognition. After attending Harvard Medical School, Indian-American biochemist Yellapragada Subbarao discovered the use of adenosine triphosphate in cells for energy, further isolating several essential nucleotides within the human genome (American Chemical Society). Author Dhan Gopal Mukerji became recognized as the first successful Indian man of letters in the United States. Graduating from the University of California, Berkeley and Stanford University, Mukerji went on to win the John Newbery Medal in 1928 for his work Gay-Neck: The Story of a Pigeon. The semi-autobiographical narrative explores the relationship between humans and animals, the nature of exploitation, and the grandeur and spiritual power of the Indian countryside (Mukerji). It was the first major American literary prize to be awarded to an Indian-American author, and a landmark moment of the developing transition in the attitude towards Indian-Americans. Literature can often be seen as a platform for expression, a canvas on which the colorant of experiences can paint a vibrant landscape of identity; a work by an Indian-American and set in India, therefore, was a pure form of expression, syncretizing intellectualism with whimsy, reinforcing the changing role of Indian immigrants in American society. The press that had so negatively denigrated Indian immigrants began to shift, too. The most notable example

of this transitioning can be seen in the appointment of Gobind Behari Lal to the position of editor at the San Francisco Examiner, making him the first Indian-American to hold a major post in an American media organization. For his coverage of science at the Harvard University tercentenary, Lal earned the 1937 Pulitzer Prize for Reporting (Pulitzer). Contextualizing this win makes it even more apparent how perceptions of Indian-Americans were shifting throughout the United States; from the publication of racially charged articles about the dangers that Indian immigrants posed to white American society to the selfsame immigrants winning major prizes in publication and serving on the editorial staffs of impactful newspapers, the perspective on Indian-Americans can definitively be seen to have shifted beneficially. It was a sign that the "racial discrimination that had surrounded them upon their first arrival into the United States had the potential of reducing" (Chakravorty), of becoming less virulent and venomous, of possibly dissipating. This time period was accompanied by tremendous changes within India, too. In 1947, India achieved independence from the British colonial rule that had constrained it for two centuries. This was paralleled by a wave of change across the subcontinent, not simply politically but socioeconomically as well; the general trend of the grip of technology rising globally, coupled with the first real freedom that people in India had experienced in centuries, led to a "changing landscape of employment" in the country (Zamindar). Although the economy—and resultant employment—had been dominated by agriculture, educational reforms began sweeping the country, changing the development of labor. A focus was placed on the establishment of strong higher-education systems, particularly in the form of the Indian Institutes of Technology (IITs), public technical universities under the jurisdiction of the national government dedicated to scientific research, technological career development, and economic revamping. Universities in the vein of the IITs were established throughout India, and it swiftly became the norm for people to "complete their education at such institutions, move to cities, drive trends of urbanization, and gain employment in white-collar labor" (Tiwari). The profile of immigrants from India began to shift with the shift in the socioeconomic landscape; as the workforce in India modernized and blue-collar work transitioned to white-collar work, the makeup of Indian American immigrants changed accordingly, reflecting the Indian environment. Legislation in the United States began to shift as the century progressed; the draconian immigration laws that restricted based on race began to loosen. In 1946, the Luce-Celler Act was passed, which allowed one hundred Indians and one hundred Filipinos to immigrate legally to the United States every calendar year (Luce-Celler). Further, the passage of the act allowed Indian-Americans to become naturalized citizens for the first time. In 1952, the Immigration and Nationality Act was passed, allowing Indian-Americans to obtain permanent residency in the United States and removing all racialization within immigration legislation (McCarran-Walter).



The reforms in the legislation were unprecedented, marking a complete paradigm shift away from the systemic racism that had been established within the annals of American law. As the years progressed, open immigration encouraged more and more Indians to travel to the United States. More notably, the demographic of the Indians who arrived in the United States began to shift further. Rather than the individual workers who had characterized much of the immigration in the early 20th century, families would move from India to the United States, as the socioeconomic status of many of the immigrants was now enough to support familial obligations and necessities. The motivation of Indians within the United States began to change because of this as well. It was no longer the case that immigrants simply wished to find a source of employment that was more desirable and more stable than employment they could get elsewhere. The motivation of families moving to the United States was that of permanence—a concept that we will discuss later in this paper—and stability, of "establishing a life in a new place" that could ideally be sustained over generations (Chakravorty). It was a more prospective motivation than it had ever been, and it was a clear sign of the shifting experience among the people. While the violence that Indian-Americans faced abated, allowing the families that moved to the United States to consider the establishment of home under more secure terms than their progenitors had, it did not dissipate completely. It began to surface in new, insidious ways that would have generational impact on the communities. The legislative violence became something more psychological, something more implied. It became something that would affect the perception of the Indian-American community and have repercussions on other minorities throughout the United States, changing the way that the socioracial continuum of America was seen. It became one of the clearest ways that systemic racism would take hold of the people in this country. This violence arrived in the form of the Model Minority myth (Saran). The Model Minority myth is a concept which describes the phenomenon in which a "particular minority group within a country is perceived as achieving a higher degree of socioeconomic success than the population average, therefore serving as a point of reference to other minority groups that are considered to be worse by comparison" (Saran). It is an especially insidious phenomenon, as it negatively affects both the "model minority", by imposing unrealistic expectations and punishing nonconformity while removing individualism, and the other minority groups, by penning them as worse and less deserving than the "model minority". Further, model minority discourse, as it is conducted in the United States, causes enmity between the minorities that it pits against each

other. It creates racial tension between two groups that have solidarity, preventing the creation of unison and hampering the drive towards shared equality that the groups would benefit from partaking in. In the United States, the Model Minority myth particularly considers Asian Americans and Jewish Americans to be the model minority—for the course of this paper, Indian-Americans will be the “model minority” focused on—and BIPOC as well as Hispanic Americans to be outgroups that are compared to the “model minority”, and therefore freely vilified. As described by theory, this not only put unrealistic expectations on how Indian-Americans began to be perceived in the United States but caused a rift between them and other minority groups. Racism began to spread twofold between minority groups, a horrifying concept when considering the fact that they were both fighting against racism. An almost Machiavellian construct, the Model Minority myth caused Indian-Americans to face racism from other minority groups within the country. In such an environment that applied new forms of racism on them, Indian-Americans found refuge both within their culture and then a more physical form of home. As IndianAmericans climbed up the socioeconomic ladder, they began to purchase and build houses of their own, bringing their family and developing strong households. While in the past, heritage had been a common link among several individuals scattered throughout the United States, the introduction of family introduced another link, another bond that resisted breakage. Therefore, Indian-Americans’ second perception of home is defined through household, through family, through filial ties. It was essential for Indian-American immigrants to shield themselves from this new form of violence. And they were able to by building upon their heritage with their household. Indian culture values family and familial ties immensely, focusing on forming bonds that transcend formality. The concept of joint families—or large extended families all living underneath the same roof—had to be adapted for an American lifestyle. But the addition of those ties with the ties that had already been formed by heritage strengthened the Indian community. It allowed them to sustain more against the new forms of violence that had appeared in the American socioracial continuum; by forming the unity and cohesion that a familial unit had, a deeply human and personal refuge and support against racism, Indian-Americans were able to sustain their lives combating violence. Households formed both a tangible and an intangible way to perceive home for Indian-American immigrants. It was a physical refuge, and a physical reliance on humans, but it served as an intangible representation of culture and heritage that transcended anything that had been brought to the United States in the past. Familial ties introduced the concept of generationality within Indian-Americans, with the overarching ideal being to build a future that would sustain not only themselves but their progeny, establishing



The experiences of first-generation and second-generation immigrants further highlighted this. Indian-Americans born in the United States started developing a new definition of home within themselves; often following in their parents' footsteps in worldview, education, and employment, the United States was often the only home that some of these Indian-Americans knew. And the perception of home among the immigrants began to strengthen, as it was not only limited to their experiences in making a place in the United States, but extended to their children, who knew the United States with an unprecedented exclusivity. Throughout the journey of Indian-Americans in the 20th century and their transition into better relations with the American populace, a general trend of better employment countered by insidious violence unlike any seen before, the establishment of household was the strongest tie among them. An almost scientific ideal, the strength of the bond that Indian-American families were able to forge among themselves, coupled with the reliance on and power of their heritage, allowed them to endure through the turn of the century and into a new millennium, forming a niche and sustaining themselves for the long term.

As the century began to turn, the Indian-American experience shifted even more drastically. Restrictive immigration laws were struck down; visas were established that allowed skilled laborers to extend their stays in the United States by multiple years, encouraging even more immigration of white-collar workers (Vasic). The familial dynamics that had begun to be established developed further. The fusion of heritage and household had begun to put down roots in the socioracial continuum of the United States, developing the ideal that Indian-Americans would be a part of the United States and its history as firmly as any other minority. In this stage, the violence begins to subside. The most prominent racism experienced by communities at large is the extension of the Model Minority myth, perpetuated across generations. First-, second-, and third-generation immigrants inherit the pressures of the past, a constant barrage of stereotyping and erasure of individuality that causes psychological damage often unseen by the public. But the abject violence that once surrounded Indian-American communities throughout the nation—the continuous and deadly attacks that would be wrought against them—began to diminish. It would be inaccurate to say that racism against Indian-Americans has disappeared completely. As long as systemic problems exist, and the perception of white supremacy permeates America, some form of violence will remain. But over the years, the "reduction in the violence" has created an environment in which Indian-Americans have been able to "thrive and establish their identity" (Chakravorty).

Through this shift, through the turbulence of the 20th century and into the plateaus that the 21st century offered, the final perception of home for Indian-Americans can be defined through permanence. The United States was considered, in the past, a place where Indian-Americans would simply find greater economic opportunities that would uplift themselves and their families, providing a source of income that could not be found elsewhere. But this swiftly transitioned into a place where Indian-Americans could start an entirely new life, bringing their families—and often starting new ones—commencing a trend of generationality that would continue for decades. The pursuit of the American Dream began to mellow; multi-generational Indian-Americans stayed in the United States not simply to grow economically, but to remain in the place that they considered to be their home due to the prodigious length of their—and their progenitors'—stay. Most Indian immigrants today come to the United States as an extension of their work; in the rapidly changing technological world, software projects cross cultural and national boundaries alike. The H1B visa, a visa dedicated to skilled labor, has become dominated almost entirely by Indian immigrants from software companies, with 70% of H1B visas being issued to Indians (Ali). This pipeline has become a way not simply to achieve greater economic success, but to put down roots and gain permanence in the United States. The concept of permanence is the capstone of the ways that Indian-American immigrants define home. It can only be established after withstanding violence, and it develops through a period of growth, through greater acceptance and more comfort. Permanence ties heritage and household together, as the stability required to achieve permanence can only be created by securing heritage and household. In the permanence that home reaches in the United States, it is easy to forget the violence that Indian-Americans faced throughout history, as well as the microaggressions and generational repercussions of systemic issues that they face today. But permanence can act as a shield against this, too, as an oasis surrounded by the violence; permanence forms a final structure of security for Indian-American immigrants, a construct that brings the concept of home further into the tangible, marrying the force of the impalpable with the strength of the palpable. Permanence further serves dually, not only as a way that Indian-American immigrants perceive their home, but as a new motivator for them; establishing permanence in the United States serves as a bridge from a developing country into a developed country. It is in this permanence that modern Indian-Americans find comfort and plant their souls.

The story of Indian-American immigrants throughout the United States is a fascinating one, rich with complexity and diversity. A mere 20 pages cannot do justice to the expanse of the Indian-American experience, or elaborate on the experiences of every Indian-American.



The concept of home in general has always intrigued me, and, as an Indian-American, it has served as a cornerstone of extensive soul-searching and introspection, of existentialist thoughts and late nights spent considering identity. This concept has similarly been at the forefront of the minds of Indian-American immigrants throughout the course of the past century. The most fascinating facet to explore along this journey has been the concept of motivation. What motivated Indian-American immigrants to establish home in a place where they were surrounded by violence? The ways in which they perceived home, and the mechanisms they used to combat the violence, have been explored extensively in this paper. But implicit within the perception of home—implicit within the use of the word "home" itself—is a deep-set motivation to live in the United States, to live amidst a culture that historically turned on Indian-Americans. Establishing that motivation is similarly essential to understanding the trajectory of Indian-Americans and the homes that they establish within the United States. In the earliest stages of Indian-American immigration, the motivation was purely economic, driven by the desire for money and employment amid economic decline within India. The lure of the United States and the American Dream was strongest in this stage, and was enough motivation to establish a home amidst violence. But as the century progressed, the motivation became more intangible. Defining home through household and permanence contributed to the desire to live in the United States for generations. For many Indian-Americans, the reason has transcended a higher economic status or familial stability: establishing permanence in the United States has become a mechanism to uplift India. Indian-American immigrants have the highest median income of any ethnic group within the United States (Dmekouar). Much of that money flows to India, affecting the economy at a micro level but making tangible change. By supporting family members, by providing a launchpad for their economic independence and their own development, by creating a strong support system for socioeconomic advancement, the motivation for modern Indian-American immigrants to establish a home in the United States is the strength of the impact that they can have at home. The influx of money from the United States has been a source of economic growth and of the creation of safety nets that allow risk-taking, in essence a way for families to catalyze their maturation into an uncertain future. The three-fold definition of home yields an end to the split between a "before" and an "after", a conclusion to the limbo that many immigrants find themselves in. Through the strength of the motivation of Indian-American immigrants to establish a home in the United States, the perception of Indian-Americans changed over time. Racism began to wane and acceptance and inclusivity swelled, catalyzing this definition.

The quantum states of immigrants collapsed definitively into a new home, a place where they could return to their families and support the country that bore them. The story of the Indian-American immigrant is not a story often told. But it is a story that is essential to tell. Ignorance about Indian-Americans abounds in the United States, from the education system to general public knowledge. It is essential for people to learn this story and understand it within themselves. It is even more important for Indian-Americans to learn this story, to internalize it and never forget it, to remember the sacrifices made towards establishing a home while they establish a home for themselves. For home is at the center of the Indian-American story. Home forms a core, a foundation, something the story cannot be told without, something that shapes the progression of the story. Exploring the concept of home from the Indian-American perspective provides insight into Indian-American immigrants themselves; understanding their perception of home creates an understanding of their psyche, of their culture, of the strength of their bonds. Heritage, household, and permanence form the trifecta of this definition, and form the core of the Indian-American story. The importance of heritage and culture, as well as the strength of the ties to it. The value of familial bonds, their superposition with heritage, and the depth of their impact on individuals. The development of permanence, of remaining, of securing generations into the future not just in the United States, but in India as well. And as this story hurtles towards the future, we can only watch and see what comes next. By perceiving home definitively, Indian-Americans carve out a niche for themselves in the world, a bridge between countries and cultures, a syncretism of innovation and tradition, a transformative state that belongs uniquely to them and will shade the future with the color of their strength.



"Adenosine Triphosphate." American Chemical Society.

Ali, Aran. "The Data behind America's H-1B Visa Program." 26 May 2021.

Asiatic Barred Zone Act.

Asiatic Exclusion League. Arno Press, 1977.

Cannato, Vincent J. Harper Perennial, 2010.

Chakravorty, Sanjoy, et al. Oxford University Press, 2019.

Dmekouar. "This US Ethnic Group Makes the Most Money – All about America." 7 Sept. 2015.

"Have We a Dusky Peril?" Puget Sound American, 1906.

Luce-Celler Act of 1946.

McCarran-Walter Act of 1952.

Mukerji, Dhan Gopal. Gay Neck. Dutton, 1928.

Naturalization Act of 1906.

O'Hara, S. Paul. Johns Hopkins University Press, 2016.

"Pulitzer Prize for Reporting." The Pulitzer Prizes.

Saran, Rupam. Routledge, 2017.

Sohi, Seema. Oxford University Press, 2014.

Srivastava, Hari Shanker. ASR Publications, 2014.

"Stockton Gurudwara."

Supreme Court. United States vs. Bhagat Singh Thind. 19 Feb. 1923.

Taussig, Doron. ILR Press, 2021.

Tharoor, Shashi. Penguin Books Ltd, 2018.

Tiwari, Piyush. India's Reluctant Urbanization: Thinking Beyond. Palgrave Macmillan, 2015.

Vasic, Ivan. Citizenship. McFarland & Co., 2009.

Webb-Haney Alien Land Law. California, 1913.

Zamindar, Vazira Fazila-Yacoobali. The Long Partition and the Making of Modern South Asia: Refugees, Boundaries, Histories. Columbia University Press, 2010.

Appendix I

Figure 1. Clipping from the Puget Sound American representing the American attitude towards Indian-Americans in the early 20th century.




Nature Words and Children's Dictionaries: The Necessity of Their Interaction

Noell Boling

"Acorn," "dandelion," "moss," "fern," "bluebell," "pasture": These words and around 45 others related to nature and the countryside were removed from the Oxford Junior Dictionary in 2007 (Flood). The exclusion of these words sparked an outcry from a community of writers, artists, and naturalists. In 2015, in a letter to the Oxford University Press, this community, headed by Margaret Atwood, protested that these nature words should be reinstated because of the implications their absence could have for children and society. This letter insisted that children's dictionaries have the ability to shape children's cultural lives through the words they choose to include and emphasize. Such changes to the dictionary, it argued, contribute to a larger issue of the growing disconnection between children and the natural world, which is proving harmful to the population (Atwood et al.). The removal of these nature words also inspired a book, The Lost Words, which has grown to encompass concerns about the larger problem of a growing distance from nature. Mingling romantic and scientific concerns about the separation of people from nature in a time when climate change is becoming irrevocable, The Lost Words seeks to emphasize the importance of keeping children and future generations connected to nature through words. Since its creation, this book and its resulting album of Spell Songs have been used in thousands of schools across Britain to reintroduce these nature words and their beauty to the next generation (The Lost Words). These efforts raise the questions of why this removal of nature words from the dictionary occurred, of whether the removal is justified based on the purpose of the dictionary, and of the extent to which such words are necessary in dictionaries, given their perceived significance to children's lives. The protest movement gained power through a Change.org petition that reiterated the concerns of the writers, demanding that these nature words be reinstated into the dictionary and objecting to the "replacement" of nature words with words associated with technology (Terry). As a result of the Change.org petition, Oxford University Press released a statement rationalizing the elimination of the nature words from their Junior Dictionary. In this statement, Oxford denied a cultural motivation in the removal of nature-related words from the dictionary. Instead, their rebuttal of the protests was primarily founded in the creation of the dictionary using an online database, or "corpus," that evaluates which words are most frequently used and encountered by children.

According to Oxford, this made the dictionary entirely empirical: Oxford University Press had no agency in deciding which words composed the dictionary. It was not that the words excluded from the dictionary were irrelevant to modern-day childhood, but that the words simply weren't used as frequently as before. This corpus-based argument was also used to explain that Oxford did not replace nature words with technology words, but that all words in the dictionary are reevaluated as a whole rather than one category being substituted for another. Through both of these arguments, Oxford University Press suggested that the ethical and cultural argument of the outcry against the removal of nature words was irrelevant to its practice. To further rationalize the dictionary change, Oxford argued the removal of nature words in 2007 was insignificant because not all words pertaining to nature were removed from the dictionary and a few additional nature words were added simultaneously. In fact, the words cited as removed in the Change.org petition were included in the Oxford Primary Dictionary, a larger dictionary intended for slightly older children. Through this, they stated that the protests against the words' removal overly romanticized nature words, which could simply be researched online if truly needed (Oxford Education). Regardless of Oxford University Press's intent in the removal of these 50 nature words, the dictionary's revision exposes a shifting of societal priorities away from nature. This shifting of priorities is ostensibly justified by what is coined as progress–understood here as technological advancement–but this progress has greatly altered the proximity of childhood to the natural world. The removal of words associated with nature from a children's dictionary unearths ethical debates about how to educate children–about the nature of their education. Nature words were removed from the Oxford Junior Dictionary because it is meant to reflect modern children's language usage, but its mission as a teaching tool necessitates agency in word selection: to teach is not merely to reflect "progress," but to reflect upon the relation of progress to social life and the natural world. Therefore, it is essential that nature words are prioritized because their presence encourages connection with nature, resisting the growing rift between childhood and nature and cultivating a relation which is essential for wellbeing and teaching.

In order to examine the significance of the removal of words associated with nature from the Oxford Junior Dictionary, the mechanics of the dictionary itself must be understood. In their rebuttal of objections to the 2007 updates, Oxford University Press insisted their dictionaries were purely objective because the dictionary's word list was created using the Oxford Children's Corpus (Oxford Education). This corpus is an online database of works written for and by children that, as of 2019, contained over 300 million words. These words have been collected from websites and books intended for children and from submissions written by children for BBC Radio 2's 500 Words Competition. According to Oxford, these sources offer access to the topics about which children talk, allowing the creation of dictionaries that are relevant to their lives, thus fulfilling the dictionary's purpose as a reference. Oxford argues that its dictionaries use the children's corpus to spot children's language trends and to close the gap between writers of varying skill levels (Oxford Owl). Because this corpus is data-driven and based on the frequency of words within the corpus, Oxford University Press asserts that their dictionaries are free from individual biases. This means the Oxford Junior Dictionary is an Informative Dictionary, an academic dictionary created for the purpose of referencing spellings, pronunciations, definitions, and examples of the use of words, or for finding words that are unknown to the reader (Shcherba 11). Specifically, this dictionary includes words children ages 7 to 11 would encounter most often and words used in curriculum designed for this age group. Because of this, Oxford explains, the dictionary is fashioned to improve the reading capabilities of this age group (Oxford Owl). This unearths debates over the use of dictionaries intended for designated age groups of children. Should these dictionaries be used simply as factual references for spelling and word meaning, to improve vocabulary and reading comprehension, to expose children to new words, or a combination of these? Scott Huler argues that the dictionary should be a reference for words that children encounter and that reintroducing these nature words will not fix the problem of their less frequent use. His argument revolves around dictionaries as reflections of linguistic changes and not manipulators of those changes (Huler). Stefan Fatsis compares the outcry around the removal of nature words to the early 1960s outcry around slang words moving into dictionaries and "replacing" proper English words, a change Fatsis believes was essential to representing current language use (Fatsis). Both of these arguments describe and support the Oxford Junior Dictionary's design around reflecting the words children encounter most in their lives.

The situation is more complicated, however, for these dictionaries are intended not only to be Informative Dictionaries, but teaching tools. In order to teach children, the dictionary must be created with agency and intention, including the act of deciding what needs to be taught and for whom. This necessary intention means the Oxford Junior Dictionary cannot be an entirely empirically constructed reference for children, as Oxford claims it is. Oxford University Press's insistence on the dictionary's objectivity makes its decisions appear as a neutral reflection of cultural values rather than, say, an assertion of some values over others—the values of "progress" over "nature," for instance. This claim contradicts the dictionary's actual work of choosing the words necessary to the values and topics most important to teach. If this is true, then Oxford's removal of words associated with nature would imply that those words are not seen as worthy of teaching. The same argument then suggests that if there is intention in the selection of words to remove from and add to the dictionary, then the replacement of nature words by technology words reflects and reproduces specific values. How, then, does a dictionary delineate the values it prioritizes while remaining universally applicable to children's lives? How can a dictionary of limited words serve as a useful reference of common words and yet prioritize cultural values that deviate from frequent word usage? Fatsis stresses that these lines would be very difficult, if not impossible, to draw (Fatsis). Central to Oxford University Press's argument is its assertion that the removal of nature words from the dictionary was innocent of cultural values (Oxford Education). Historically, however, dictionaries cannot be considered unbiased representations of language because they reflect social views and anxieties, although this is complicated by modern corpora (Coleman). This assertion was developed by Julie Coleman in a chapter published by none other than Oxford University Press itself. Coleman speaks to the practice of lexicography, the study of words and their compilation into dictionaries. Historically, lexicography has been layered with ethnic, cultural, and other biases based on social anxieties and the priorities of daily life (Coleman 101-102). The funding of dictionaries and the targeting of dictionaries towards certain groups of people also reflect (and perpetuate) the biases ingrained in lexicography (Coleman 98-99). Not only the definitions and word selection themselves, but the examples created by lexicographers to explain words are biased. Sidney Landau examined the evolution of a children's dictionary used for curriculum and found that over time, the alignment of examples to gender roles and gender norms shifted to reflect changing societal values (Landau). These studies demonstrate how dictionaries have reflected the linguistic and social values of their time periods, reproducing those values as "natural" rather than historical. Although data-driven corpora do reduce certain definitional and selection biases, the historical biases ingrained in lexicography cannot be ignored: like the "child," words come to us with histories of their changing use.




Then again, cultural values could also guide how the data for this corpus is sourced, and Oxford is extremely vague in articulating how these sources are selected. They simply state that entries from the 500 Words Competition are used, along with other websites and books intended for children, without describing how these books and websites are chosen and whether this selection could possibly be free of bias (Oxford Owl). Oxford explains that it does not intentionally shape or reflect cultural attitudes and simply reflects frequently used vocabulary over time due to its use of a corpus; therefore, it argues, the dictionary cannot and should not be used to prioritize certain values (Oxford Education). However, even the data collected from the corpus will reflect cultural values, because the corpus's sources, children's writing and media, both reflect cultural values. The very discrepancies between "children" as an object of study and literary representation and actual children who live out these discrepancies are historical. Even the changes in frequency of word use away from nature words and towards technology words demonstrated by the corpus reflect cultural values. Because of this, individual biases may not be directly present in the dictionaries or may not be intentional, but cultural values will still be reflected in the selection of words for these dictionaries, and Oxford University Press cannot rightfully deny this, nor should it ignore the "unintentional" guidance of cultural values in the formation of its dictionaries. This is an essential discussion in relation to children's dictionaries because children are particularly receptive to suggestions and learning presented to them. Children's receptiveness to naming, particularly names for species, was explored through a study of children's knowledge of native animals compared to Pokémon. Eight-year-old children could identify 80% of 150 selected species in the study, demonstrating how easily they absorbed and retained names (Balmford et al.). This knowledge of specific names is provided by dictionaries. Naming is particularly important for nature words because specificity is essential to understanding and connecting with nature. In her book Braiding Sweetgrass, Robin Wall Kimmerer describes the importance of the names of nuts and trees to the memory of her Potawatomi culture (Kimmerer). Kimmerer also describes her wonder at words in the Potawatomi language that describe detailed natural processes, such as the force of a mushroom pushing up through the ground, for which English has no word. Although she speaks of the limits of English, she also speaks of the power of words. The specificity of these nature words encourages intimate observation of nature by exposing its small details. Kimmerer speaks of how these words bring awareness of elements of nature that may not have been noticed before their names were shared. The awareness facilitated by these words inspires wonder about these natural processes, therefore forming a connection of wonder between her and nature (Kimmerer 48-59).

This demonstrates how the inclusion of words associated with nature in dictionaries is important: the specificity of these words is essential to facilitating a connection to nature. In a language-driven society, dictionaries can help to encourage and cultivate that bond.

Strengthening the bond between children and nature is vital to lessening the growing social distance between children and the larger world. The demands of the writers, naturalists, and citizens protesting the removal of nature words from the Oxford Junior Dictionary point to a larger problem reflected in the dictionary. At the same time, this dictionary not only reaches, but is tailored to, a subset of children whose experience of this problem is determined by class privileges. Western capitalist cultural values facilitate the conditions in which consumption of and dominion over nature are naturalized. This denies the intimacies and reciprocities practiced by many cultures, including many Native American groups. Robin Wall Kimmerer describes how these attitudes towards nature could have been greatly affected by religions and the language they use. In Christianity's creation story, the first humans are provided with the Garden of Eden to use as they desire to fulfill their needs, and some interpretations of the Bible even say that the human role is to have "dominion" over nature. However, in Robin Wall Kimmerer's Potawatomi culture, the first human, Skywoman, was saved from drowning in the seas of the Earth by the animals living there, many of whom sacrificed themselves for her. To thank them, she planted the seeds that became all the plants of the Earth. Skywoman provided the Potawatomi people with Original Instructions for how to treat nature with reciprocity, in thanks for all that the animals have always given humans, instead of purely taking from an impersonal world (Kimmerer). The language of dominion versus reciprocity immediately sets a precedent for treatment of the natural world. Even the mechanics of the English language encourage this distance. For example, through pronouns, English provides diversity to the human identity (however gendered) and merely acknowledges the vast and heterogeneous category of animals as the impersonal "it." The Potawatomi language acknowledges only a distinction between the animate and inanimate, giving animals the same term of respect as humans and forming a much closer connection to nature through language (Kimmerer 48-59). These different attitudes mean that the demonstrated distance between children and nature would be experienced differently by varying cultures. Because this is an English dictionary published by Oxford University Press, the audience of this dictionary would have been raised in a society with many of the Western consumerist and colonial attitudes towards nature that exacerbate the division between children and nature. Some arguments depict the desire to reinstate nature words into the Oxford Junior Dictionary as wishful thinking based on a romanticization of nature, but arguments for reinstatement are founded in the importance of the actual, growing division between children and nature.



The distancing between children and nature truly occurs because urbanized development of, and cultural attitudes towards, land encourage childhood to rely more on electronics and less on outdoor experiences. Over time, interactions with nature have transitioned from utilitarianism to frontier-driven romanticization to electronic distancing (Louv 16). Modern capitalist urbanization diminishes the presence of nature, leading to a lack of experience of nature and little time spent within it (Bratman et al.). In addition to the increasing prevalence of urban environments organized by efforts to maximize profit, children are losing a familial link to farming and are being distanced from the origins of their food. This separates children from the human reliance on nature for survival and the knowledge of how ecological systems impact basic human needs (Louv 19). As the world urbanizes, children also have less direct access to natural play spaces, and are therefore encouraged to play indoors as an alternative (Charles et al.). The formation of suburbs confines nature to groomed parks and distant national parks, creating a distance between children's everyday lives and the natural world (Louv 19). This is compounded because parents are no longer spending time outdoors with their children or, in their own activities, offering examples for their children, thereby discouraging regular outdoor play (Louv 11). In the classroom, children are being introduced to nature from a primarily scientific perspective, which encourages an impersonal relationship with the outdoors. For example, children are gaining an increasingly abstract relationship to animals, which frames their relationship with non-domestic animals as one of study instead of emotional connection (Louv 24). Children are being taught how nature is becoming more developed and controlled by humans, for instance through the genetic modification of plants, which also drives an impersonal and purely academic investigation of nature (Louv 19). These relationships don't foster regular outdoor play. The time children spend in school and childcare increased by almost 9 hours a week from 1981 to 2007, leaving less time in children's lives for outdoor play and more time in structured, academic settings. Of the free time children do have, the amount spent on various forms of electronic media continues to increase (Charles et al.). Richard Louv describes how cultural attitudes have shifted so that time in nature is seen as unproductive. In an increasingly product-based society, parents are choosing activities that prepare kids academically over activities based on creativity and wonder (Louv 120-122). This means that parents, influenced by cultural attitudes and the urbanizing world, tend to discourage children from coming into meaningful contact with nature. Thus, concerns about an increasing distance from nature are not rooted primarily in romanticization, but are supported by documented evidence, legitimizing the importance and urgency of these concerns.

The attitudes that drive a separation from nature present an anxious and fearful approach to nature, or "the wild," that makes the outdoors seem an unsuitable place for unsupervised nature play, discouraging children's curiosity about nature. Homeowners' associations and governments restrict forms of nature play such as building natural structures, constructing treehouses, and damming rivers because of fears of damage to property and public images. Conservationists also discourage many older forms of play because of new knowledge of human effects on the environment, restricting play in certain areas or types of play that may directly harm wildlife (Louv 28-31). Whether or not these groups are justified in their discouragement, they increase fear of the impacts of interacting with nature. News and other media also amplify fears through what Joel Best describes as "Bogeyman Syndrome," an assumption of much higher "stranger danger" than reality justifies (Louv 126-127). The fear created by this syndrome builds into an overall social anxiety about unsupervised kids in an uncontrollable world. Media also increase awareness of violence and crime at parks and of occurrences of animal attacks and disease outbreaks. As with the Bogeyman Syndrome, the increased awareness of these incidents leads to exaggerations of their frequency (Louv 130-132). As this fear decreases children's curiosity about and knowledge of nature, they lose exposure to it. Oxford University Press argued that its removal of nature words from the Junior Dictionary was insignificant because the removed words were included in a dictionary for another age group or could be found online; but if children come into less contact with these elements of nature, they will not search for the language to understand this contact. The dictionary could, however, introduce children to the words of a possible experience, thereby bringing awareness of these rich wonders to them. The growing distance between childhood and nature, demonstrated through changes in society and growing fears around the natural world, causes dictionaries to no longer consider nature words as relevant as they once were. The dictionaries themselves don't cause the problem, but as symptoms of the changes, they reinforce and perpetuate them. Children are spending less time in nature, so their lack of experiences in nature means they write less about nature in the sources that will be used to create the children's corpus. With less time spent in nature, children aren't even learning the words to describe the environment around them. A study investigating children's ability to identify Pokémon versus common animal species found that the children could identify less than 50% of the common species overall. Four-year-olds could identify 7% of Pokémon and 32% of wildlife, whereas eight-year-olds could identify 78% of Pokémon and only 53% of the common species of wildlife (Balmford et al.).



Even advanced biology students in high school were unable to identify 10 common wildflowers in the UK; 86% of the students tested in this study could not name more than three species (Charles et al.). Because children no longer know names for the nature around them and these words aren't making it into the Oxford Children's Corpus, Oxford University Press considered these words no longer relevant for their dictionaries, disregarding the historical circumstances of their abandonment. However, these words and nature experiences are still just as important to children's lives, even as children are increasingly distanced from them.

Exposure to nature is essential for children due to its therapeutic effects on physical wellbeing and its role in the prevention of mental illness. Beliefs that nature is beneficial to health are extremely widespread, including Chinese Taoist beliefs in the improvement of health through time spent in gardens (Louv 45). The Harvard scientist Edward O. Wilson has proposed the biophilia hypothesis, the idea that humans have an instinctual affinity for nature, implying that humans need nature to function (Grinde et al.). While this hypothesis has been affirmed by some scientists and doubted by others, the beneficial effects of nature on people have been demonstrated time and time again. The CDC's National Center for Environmental Health shows that children are far more active when they spend time and exercise outside than when they are indoors (Louv 48). Because of the U.S.'s national obesity epidemic and the increasingly sedentary lifestyles of English-speaking countries around the world, the encouragement of time spent outside is extremely important. With decreased time outdoors, an estimated 61% of individuals ages 1 to 21 in the US are vitamin D insufficient. This vitamin, which is primarily produced through contact with sunlight, is important for many body processes; a deficiency can cause cardiovascular issues (Charles et al.). Even just seeing nature frequently has been shown to improve bodily healing, build tolerance to illness, and lower overall stress response (Grinde et al.). The study of how ecology interacts with the human psyche, ecopsychology, has provided much evidence for the therapeutic benefits of time spent with pets and in gardens (Louv). Frequent time in nature has been demonstrated to reduce symptoms of Attention Deficit Hyperactivity Disorder, and regular experience with nature can improve resistance to negative stress and depression (Louv 35). A 2003 Cornell study demonstrated that children living in closer proximity to nature had higher self-worth, less anxiety, and less depression overall (Louv 51). While many factors weigh into this study, such as variation in lifestyles due to proximity to nature, it is clear that nature has a positive effect on mental health regardless. Therefore, dictionaries should emphasize the importance of facilitating a beneficial connection with nature, and, as part of their educational mission, should actively work to shrink the growing distance from nature that plagues them.

In fact, increased exposure to nature has a positive influence on children's grasp of language and overall academics, which the teaching dictionaries aim to foster in the first place. The constant stimulation of the electronics that shape society encourages a narrow focus and concentration, dulling the use of all senses. Nature exercises these senses actively, and this practiced use of the senses helps develop focusing skills (Louv 55-70). Harvard psychology professor Howard Gardner even coined "naturalist intelligence," the ability and sensory awareness allowing the recognition of specificity in the natural environment, such as identifying species, as one of his eight forms of human intelligence. He theorizes that this form of intelligence has manifested itself through collectibles such as cars and clothing accessories, but the categorization of naturalist intelligence emphasizes the psychological importance of natural knowledge to the human brain (Louv 72). Although this theory hasn't been concretely proven, Professor Leslie Owen Wilson has demonstrated that children who exhibit this naturalist intelligence have much higher sensory awareness and observational ability for nature than children who exhibit less of it (Louv 73). Just as sensory improvement due to nature encourages a different type of thinking than an urban electronic environment, so play in nature requires more thinking than structured play. Investigations of the social hierarchies of children playing in built human structures compared to natural spaces demonstrated that physical capability was the most important factor in human-structured play spaces. In nature, however, the leaders who emerged through play were the more inventive and creative thinkers (Louv 88). Exposure to and play in nature encourage more thought and creativity than structured urban play spaces, developing children intellectually rather than purely physically. Furthermore, not only creativity and focus, but attentiveness and memory are improved by interaction with nature. A Spanish study in 2015 demonstrated that as the amount of green and natural space at primary schools and along the commutes to and from school increased, the memory and attentiveness of the children in the study increased (Dadvand et al.). Because the Oxford children's dictionaries aim to help children improve their reading and writing skills, an encouraged connection to nature would help children build the skills necessary to fulfill this mission. An improved connection to nature would help children better learn not only nature words, but all of the curriculum being taught to them.

The true root of the problem of the removal of nature-related words from the Oxford Junior Dictionary is the overall trend of the growing distance between children and nature. This leads to a decrease in the knowledge and use of nature words. Dictionaries themselves may not be intentionally at fault for children's less frequent use of nature words, but they reflect and extend the problem of the transition of childhood away from nature.



As for Oxford University Press's argument that, because its dictionaries are simply an empirical reflection of word use, it has no agency in this problem: the historical role of dictionaries and the role of the dictionary in teaching children require agency and intention. This intention is critical for evaluating what is worthy and necessary to teach, instead of merely introducing children to words with which they are already familiar. Therefore, it is important for dictionaries to prioritize words about nature in order to acknowledge and resist the widespread problem of childhood losing contact with nature. It is essential that dictionaries refuse to sit idle as a symptom of this issue, because contact with nature improves the health and learning capabilities of the children the dictionary seeks to teach. This argument extends far beyond the Oxford Junior Dictionary because it questions what the role of a children's dictionary should be and how much of an impact a dictionary can have on us. Is it the dictionary's place to try to improve society through what it teaches children? In reality, how impactful can a print dictionary be on a child's worldview in a digitally driven world? Should the prioritization of issues like separation from nature be reserved for thematic dictionaries instead of informative dictionaries, given the difficulty of drawing the line between what is important to emphasize and what is needed for a universal reference? Would a thematic dictionary be more applicable to children's lives in the digital age, since the internet is a more comprehensive reference tool and the teaching aspect of a print dictionary would therefore matter most? Could a dictionary ever be used as an unbiased reference tool for language, when it may not even be possible to achieve an unbiased representation of language? Research into the removal of words from the Oxford Junior Dictionary prompts many more questions about the actual impact of this removal, but the significance of including nature words is certain. The specificity of nature words shines a light on the beauty, complexity, and importance of the outdoors. In a world of human detachment from nature, as the author of The Lost Words, Robert Macfarlane, says, "We find it hard to love what we cannot give a name to. And what we do not love we will not save."

Atwood, Margaret, et al. "Reconnecting Kids with Nature Is Vital, and Needs Cultural Leadership." Open letter to Oxford University Press, 12 Jan. 2015. Accessed 18 Nov. 2021.

Balmford, Andrew, et al. "Why Conservationists Should Heed Pokémon." Science, 29 Mar. 2002. Accessed 18 Nov. 2021.

Bratman, Gregory N., et al. "Nature and Mental Health: An Ecosystem Service Perspective." Science Advances, vol. 5, 2019. Accessed 18 Nov. 2021.

Charles, Cheryl, and Richard Louv. Children's Nature Deficit: What We Know – and Don't Know. Children and Nature Network, Sept. 2009.

Coleman, Julie. "Using Dictionaries and Thesauruses as Evidence." Edited by Terttu Nevalainen and Elizabeth Closs Traugott, Oxford University Press, 2012, pp. 98-110. Accessed 18 Nov. 2021.

Dadvand, Payam, et al. "Green Spaces and Cognitive Development in Primary Schoolchildren." PNAS, 15 June 2015. Accessed 18 Nov. 2021.

Fatsis, Stefan. "Panic at the Dictionary." The New Yorker, 30 Jan. 2015. Accessed 18 Nov. 2021.

Flood, Allison. "Oxford Junior Dictionary's Replacement of 'Natural' Words with 21st-Century Terms Sparks Outcry." The Guardian, 13 Jan. 2015. Accessed 18 Nov. 2021.

Grinde, Bjørn, and Grete Grindal Patil. "Biophilia: Does Visual Contact with Nature Impact on Health and Well-Being?" Vol. 6, no. 9, 2009, pp. 2332-43.

Huler, Scott. "What's a Dictionary's Job? To Tell Us How to Use Words or to Show Us How We're Using Them?" The Washington Post, 25 Jan. 2018. Accessed 18 Nov. 2021.

Kimmerer, Robin Wall. Braiding Sweetgrass. Milkweed Editions, 2013.

Landau, Sidney I. "The Expression of Changing Social Values in Dictionaries." Vol. 7, 1985, pp. 261-269. Project MUSE.

Louv, Richard. Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder. Algonquin Books, 2008.

Oxford Education. "Nature Words and the Oxford Junior Dictionary." Oxford University Press, 12 Jan. 2018.

Oxford Owl. "Blog: Why Does Your Child Need a Dictionary?" Oxford University Press, 23 Jan. 2020.

"Oxford Primary Dictionary: Age 8+." Oxford University Press, 22 Sept. 2020.

"Spell Songs." The Lost Words, Heritage Creative.

Terry, Jackson. "Nature Related Words Should Be Reinstated in the Junior Oxford English Dictionary." Change.org.

"The Lost Words: A Spell Book." The Lost Words, Heritage Creative.




Religion in Chinese-American Communities During the Exclusion Act Era: Understanding Historical Assimilationism to Contextualize Modern-Day Struggles for Asian American Identity

Jonathan Song

The onset of the COVID-19 pandemic has seen a disturbing rise in hate crime attacks against Asian Americans. In just the year and a half after the United States went into a general state of pandemic lockdown, organizations such as Stop AAPI Hate received over 10,000 reports of incidents involving attacks on Asian Americans (Stop AAPI Hate). An overwhelming plurality of these cases are attacks directed specifically towards Chinese Americans. There are a number of possible explanations for this disturbing trend. One of the most prominent rationales is the blame that many in Western nations have placed on China for failing to contain the spread of the SARS-CoV-2 virus when it was first reported in the city of Wuhan, Hubei, in late 2019. Another is the common perception among Western nations that China's authoritarian government poses a general threat to the welfare of the world as a whole. These two ideas have been combined to create the seed for multiple conspiracy theories that have become widespread despite having no basis in factual evidence. These theories include, but are not limited to, the suggestion that the SARS-CoV-2 virus did not arise naturally but was accidentally spread from a laboratory environment in Wuhan in a breach of containment, or, more pointedly, that the SARS-CoV-2 virus was deliberately manufactured and engineered by scientists working for the Chinese government to use as a biological weapon. The transmission of these theories by prominent American figures, including former U.S. Secretary of State Mike Pompeo, has bred inflammatory language that is often xenophobic, especially Sinophobic, in nature. One such phrase, "Kung Flu," has gained traction among the American populace, due in no small part to its use by politicians including former U.S. President Donald Trump. One unresolved question is why attacks on Asian countries such as China have resulted in attacks on Asian American and Chinese American communities. After all, many victims of recent anti-Asian hate crimes were citizens of the United States, and a large portion of the Asian American community consists of second-, third-, and higher-generation immigrants who were born in the United States and may never even have been to Asia.

Why, then, do Asian Americans, and especially Chinese Americans, experience growing amounts of xenophobia targeting them solely for their perhaps distant relation to a country located halfway around the world? It seems that no matter how physically or historically removed an Asian American person is from Asia, they are categorized by the general American populace as "belonging" to Asia. Are Asian Americans inseparable from their heritage, even if this heritage includes more than a century in the United States? Claire Kim, in her 1999 theory of racial triangulation, claims that a process of "'civic ostracism,' whereby dominant group A (Whites) constructs subordinate group B (Asian Americans) as immutably foreign and unassimilable with Whites on cultural and/or racial grounds," is put in place by the United States social power structure in order to exclude Asian Americans from the body politic and uphold them as a "model minority" for other oppressed minority groups such as African-Americans (Kim). This is why, in the eyes of groups considered "insiders" by the triangulation model (such as blacks and whites), Asian Americans will always be inextricably intertwined with their nations of heritage. Thus, as the COVID-19 pandemic has demonstrated, Asian Americans will never be seen as a part of "America," rendering them an easy target for scapegoating and hate. Despite these difficulties, Asian Americans have made noticeable strides in attempting to assimilate into the culture of the United States. One phenomenon in which attempts at assimilation are particularly visible is the religious composition of Chinese Americans. According to the Pew Research Center, 22% of Chinese people in America profess Protestant Christianity. This is a significantly higher proportion than in Mainland China, where only 2% of the population are Protestants, largely due to state-sponsored atheism in the People's Republic of China and ongoing persecution of minority faiths in the country. Even when compared to pre-PRC data from Nationalist China, which espoused nominal freedom of religion, this figure of less than 2% remains more or less the same.



In fact, the ratio of the proportion of Chinese-Americans practicing Protestantism to the proportion of Mainland Chinese practicing Protestantism is one of the highest among Asian American immigrant groups, higher than that of Vietnamese Americans, Korean Americans, Indian Americans, and Filipino Americans (Pew Research Center). What drives this high rate of conversion to Christianity among Chinese-Americans? In "Chinese Conversion to Evangelical Christianity: The Importance of Social and Cultural Contexts," Fenggang Yang makes a surprising proposal about the nature of religion among Chinese-Americans post-1965. While many religious groups would like to believe that conversion is primarily motivated by the self, Yang rejects this as an "individualistic approach" that is "inadequate to understand the phenomenon of convert groups - collectivities with similar characteristics, such as ethnicity or national origin, converting at a high rate in the same time period." Traditional understanding holds that en masse conversions of an immigrant group to Christianity are primarily out of assimilationist motive, or for personal socioeconomic gains. Yang disputes this, however, pointing out that many Chinese converts to Christianity are professionals who do not need the church for material benefits, and that Chinese churches in North America are often self-contained entities that would not provide any assistance towards assimilation. Yang instead attributes this phenomenon among Chinese Americans to a "process of coerced modernization" that is a response to "social and cultural changes in China" such as the Cultural Revolution. The adoption of a Western religion by Chinese people is an "identity reconstruction of immigrant Chinese in a pluralist modern society" (Yang, 1998). While conclusions regarding the adoption of Christianity as a means of coming to terms with China's recent tumultuous history may be adequate to explain post-1965 trends in Chinese American culture, the Chinese have a much longer and richer history in the United States. Beyond the key role that Chinese people have played in American development since at least the California Gold Rush in the mid-19th century, the Chinese American community during the Exclusion Act Era from 1882 to 1965 is particularly distinctive in that the absence of in-migration due to federal immigration law led to its cultural principles being driven by the Chinese already in America, with little influence from China itself. The paradigm shift in Chinese American attitudes towards their position "in-between" two sharply contrasting societies can be viewed most aptly through the lens of religion. Preceding the passage of the Exclusion Act in 1882, an insignificant proportion of the Chinese-American populace had converted to Christianity, according to the City University of New York (Drabik). However, Iris Chang notes that by the early 20th century, a substantial number of Chinese families were attending church every Sunday (Chang 183). What drove this change? This paper will examine the complex dynamic between traditional values and Christianity among the Exclusion Act Era Chinese American population.

Firstly, this paper will contextualize the religious landscape of the Exclusion Act Era by demonstrating how the clash of Eastern and Western value systems before the Exclusion Act unsurprisingly led to conflict between white Americans and Chinese immigrants. Secondly, it will be shown that, surprisingly, the adoption of Christianity by Chinese people en masse during the Exclusion Act Era was not out of a sense of patriotism or duty to a new country, nor an attempt at assimilation to American values, but instead a logical and necessary extension of Eastern Confucian values in the context of Western American society. Finally, this paper will analyze the continued violence against Chinese Christians, even after conversion, in order to illustrate the continuity of the perpetual struggle for identity, acknowledgement, and acceptance by the Chinese American population, from the 19th century to the present day.

The first wave of Chinese immigrants arriving in the United States, during the approximately forty-year period preceding the passage of the Exclusion Act in 1882, was primarily motivated by economic opportunities that could enable them to return to China and live a more prosperous lifestyle in their homeland. For a significant proportion of migrants, "getting to America served as a means to an end," Chang writes. Thus, the bulk of the people arriving in this first wave were generally unconcerned with assimilation into American society. Speaking on Chinese American participation in civic processes in the mid-19th century, Chang states that "the right to suffrage or election to public office were the last things on their minds: their ambition lay not in becoming part of the governing class, but in earning a living" (Chang 66). This is not to say that the Chinese did not advocate for themselves at all: one of the era's largest labor strikes was the Central Pacific Railroad Strike of 1867, when thousands of Chinese laborers refused to work until better wages and conditions, equal to those of white laborers, were implemented. Ultimately, though, the majority of these actions were grounded in economic rationales. This indifference towards assimilation among early Chinese immigrants also manifested itself in an apathy towards the adoption of Christianity. Instead of converting, Chinese immigrants, upon arriving in America, primarily continued practicing beliefs in accordance with traditional Chinese Confucian or Daoist values. These value systems were non-theistic, as opposed to Christianity, and emphasized social harmony and filial piety. Adherence to the Confucian principles of respect for social hierarchy generally resulted in the Chinese presence in America being undisruptive to existing white societal systems during this era. However, white people often violently antagonized Chinese Americans, no doubt stoked by the virulent racism published in the major mass media outlets of the day.



Hannis’s survey of the Alta, a prominent San Francisco newspaper, over the three years from 1850 to 1853 reveals the dramatic changes in public opinion of Chinese immigrants in America over a relatively short period of time. In the first few years of the gold rush, the editor of the Alta was a man who was sympathetic towards the cause of the Chinese immigrants. Similarly, during the first few years of the gold rush, public opinion of whites towards the Chinese was, if not positive, fairly neutral and indifferent. However, the death of this Alta editor resulted in his replacement with outright racists who drastically shifted the editorial stance of the newspaper to one that emphasized the criminality and inferiority of the Chinese. Hannis’s analysis shows that this time period also corresponded with a sudden rise of anti-Chinese sentiment in San Francisco. Why were these anti-immigrant attacks so pointedly directed at Chinese people? Part of the reason is certainly economic: Chang explains that “many California businessmen, eager to cut costs in hard times, hired the Chinese because they were usually willing to work longer hours for less than half the pay” (Chang 117). As these hirings were usually at the expense of the white American labor force, this economic dynamic within California certainly drew the ire of recently laid-off white Americans. However, other immigrant groups were contemporarily seen with similar disdain in regard to their economic habits. In Bret Harte’s 1870 poem, “Plain Language from Truthful James,” more popularly known as “The Heathen Chinee,” both an Irish and a Chinese immigrant are depicted as cheating in a game of cards. What is interesting is that the Irishman is blatantly cheating; the narrator of the poem observes: “And my feelings were shocked / At the state of Nye’s [the Irishman’s] sleeve, / Which was stuffed full of aces and bowers, / And the same with intent to deceive” (Harte). And indeed, in 1870s America, white immigrant groups such as the Irish were much more often “forced to resort to violence” and comprised a much more significant economic burden on society than the Chinese, who “shouldered more than their share of the total tax burden” (Chang 123). If Chinese immigrants were not singled out by anti-immigrant critics on economic grounds, what were they targeted for? Analysis of print media of the day shows that white Americans primarily took issue with the presence of Chinese belief systems in a predominantly Christian country. One of the most popular magazines in the 1870s and 1880s, Harper’s Weekly, ran a series of editorials attacking the Chinese presence in the United States. Take the editorial entitled “A Breach of National Faith,” published on March 9, 1879, following Congressional efforts to supersede the Burlingame Treaty, which established formal relations between the United States and the Qing Empire. The writer of the editorial declares that “our civilization is overwhelmed by barbarism” and that the “Chinese and the Anglo-Saxon do not readily assimilate.” They distrust the “inevitable” tide of Chinese immigrants because “with Eastern blood would come Eastern thoughts and Eastern habits” (Harper’s Weekly).

Sinophobic rhetoric of the day exhibits an anxiety over the influx of a foreign belief system into the fairly rigid Christian thought structure of the 19th-century United States. Even more pointed attacks at specific Chinese belief systems such as Confucianism can be found in other publications such as the New York Times, which published sermons from pastors who frequently distorted Confucian principles in order to paint a portrait of the Chinese as barbaric. On January 12, 1854, a lecture by a “Reverend Mr. Syle” was printed in a series of sermons from the Protestant Episcopal Mutual Benefit Society. This Mr. Syle claims that Confucianism is “a kind of Stoicism - cold and heartless. It tends to foster pride and great hardness at heart” (New York Times). It is questionable on what grounds Syle bases his claim. Confucianism, in actuality, places high importance on social harmony and filial piety, preaching that humans, who are fundamentally good, are capable of finding moral virtue in what appears to be an immoral world. However, Syle is not concerned with true Confucian principles, and continues to preach that “it [Confucianism] has now degenerated into juggling and chicanery… It is impossible to express the degradation of mind consequent upon this belief” (New York Times). In his position as a pastor, Syle is using the religious system of the Chinese people to attack them, thus justifying the white people’s imposition of Christianity upon the Chinese. The contrast between traditional Chinese belief systems and Christianity allowed anti-immigrant rhetoric to specifically target the presence of the Chinese in America as opposed to other immigrant groups. If immigrants from Europe primarily practiced the same Christianity as people already in America, their value systems could not be attacked without condemning one’s own. However, Chinese people, with their completely foreign value systems, could safely be called “barbarians” with “degraded minds.” It is important to keep in mind that this conflict was not a one-sided affair. Just as white people used religious arguments to attack Chinese Americans, Chinese people used their own religious arguments in order to defend themselves from the onslaught of rhetoric against their people. Written accounts of these defenses are few, especially given the second-language nature of English for the first wave of Chinese immigrants. One surviving example is an 1879 letter from the merchant Wong Ar Chong to the activist William Lloyd Garrison, written in response to Senator James Blaine’s advocacy of a precursor to the 1882 Chinese Exclusion Act, an act that Garrison opposed. Wong repeatedly uses arguments from Christian ethics to rebut Blaine’s points. Responding to Blaine’s incessant usage of the word “heathen” for Chinese people, Wong claims that he “should judge from the tone of his letter that he was somewhat lacking in Christian charity” (Wong).



To Blaine’s claim “that China people pay no taxes in this country,” Wong counters that the government does “not allow Chinamen to become citizens in California, where they pay $200,000 in taxes,” and questions why politicians like Blaine do not follow “the fruits of your [Christian] Bible teachings, when you talk about doing unto others as you would have them do unto you” (Wong). In this way, Wong takes Christian teachings and spins them against Christians who assault Chinese people for their own belief systems. Wong uses dogmas such as the so-called “golden rule” to denounce the hypocrisies of whites, pointing out that their exclusionary and anti-immigrant discriminatory actions do not conform with the actions commanded of them by the Christian Bible. Such was the contentious relationship many Chinese immigrants had with religion pre-Exclusion Act: both being scapegoated for national problems because of their own religious practices, and using religion as a counterpoint to rebut this scapegoating.

It is evident that Christianity was eschewed among the Chinese in America before the Chinese Exclusion Act; however, after the passage of the Chinese Exclusion Act, those Chinese living in America were forced to construct for themselves a new identity, separate from that of their countrymen still in China. The cohorts of Chinese Americans that grew up during this period were often American-born Chinese (ABCs), generations removed from the homeland of their ancestors. The shift in identity during the Exclusion Act era can be seen through the lens of Chinese American approaches to Christianity: in the seven decades following the passage of the Exclusion Act, the number of Chinese-led Protestant churches in the United States increased dramatically, from seven in 1890 to sixty-six in 1952 (Yang, 2002). But if Chinese families were still predominantly practicing traditional Chinese religions and adhering to traditional Chinese values when the Exclusion Act was passed, what drove this change? Surprisingly, to answer this question we must return to the core tenets of Confucianism. In fact, Confucian values were the inroads for Chinese adoption of Christianity as a means of adapting their beliefs to their environment. In traditional Chinese culture, education was highly valued as a symbol of status and prestige, and was widely regarded as the key to social mobility. For millennia, the gentry social class of China was dominated by those known as “scholar-bureaucrats,” men who passed the imperial examinations and were highly educated in Confucian texts. These “scholar-bureaucrats” were given administrative roles in China’s highly meritocratic government and wielded immense amounts of power. Since the examination system did not formally discriminate on the basis of economic standing, passing these tests nominally created a path for families to be lifted out of poverty. Thus, a highly education-focused culture evolved among the Chinese people, and American-born Chinese during the Exclusion Act Era were taught to value the Eastern esteem for education by their Chinese parents.

Iris Chang posits that the initial waves of conversion to Christianity were primarily driven by devout Confucian parents who strongly emphasized the need for their children to gain an adequate education. Public schools were often not an option for Chinese children: “as early as the mid-nineteenth century, state authorities tried to exclude Chinese American children from attending white public schools,” to the point that, upon the passage of California law granting public education to blacks and Indians, Asians were still barred, so that “for fourteen years, from 1871 to 1885, Chinese children were the only racial group to be denied a state-funded education” (Chang 176). Even after Chinese Americans were allowed into white schools, Chinese American children were often the targets of vehement racism and stereotyping. Thus, Chinese parents were forced to search for other methods to school their children. Many parents found a solution to their problems in missionary-run schools, which did not discriminate on the basis of race. In fact, Chinese children began patronizing Chinatown churches for an education to the point that “by 1920, almost all of the Chinese American children in San Francisco— close to a thousand of them— were attending Sunday school” (Chang 183). Chang’s assertion can be confirmed by first-hand missionary accounts of those working among the Chinese Americans in California. In Gospel Pioneering: Reminiscences, the missionary William C. Pond recollects his experiences establishing Chinese churches in the late 19th century, during the early years of the Exclusion Act era. An examination of his accounts reveals that the strategy of his contemporary missionaries towards the Chinese in California was widely focused on using education to reach out to Chinese families. Pond writes of a “Rev. William Speer” and a “Rev. S. V. Blakeslee” who, in an effort to establish a church within San Francisco’s Chinatown, “had gathered a small class whom [they] hoped to reach with Christian influences through teaching them the English language” (Pond). This approach was widely emulated, including by a “Rev. Otis Gibson,” who proposed “to organize Chinese Sunday schools, and after that, Chinese evening schools, at which the Chinese could learn English…” (Pond). Thus, multiple missionaries throughout California utilized the offering of schooling and education as an opportunity from which they could begin the process of converting Chinese Americans to Christianity. This approach was effective in drawing Chinese Americans to churches: Pond observed that the establishment of Chinese Sunday schools and evening schools “called forth an immediate response. Indeed these Sunday schools became a real fad” (Pond). While the Chinese American experience with the church began with the purpose of receiving education, Chinese interaction with the church afterwards was not limited to schooling. Arrival at the church instead became a first step in the process toward conversion.



Pond at first did not believe the conversion efforts would be fruitful, drawing on accounts he himself had heard from missionaries in China. He writes that “I [Pond] had no expectation of immediate results… I had heard for so many years of the very slow progress… that I supposed that it would take two or three years before Chinese conservatism could be overcome.” Instead, Chinese Americans began converting to Christianity “after not more, I am sure, than three months,” when “our teacher [of a San Francisco church] came to me and said that eight of her pupils seemed to have given themselves to Christ” (Pond). This quick period of conversion aligns with a greater trend towards assimilationism in the Chinese American populace during the Exclusion Act era. As American-born Chinese began to constitute a significant demographic of the Chinese American population, they increasingly turned to Western thought systems and ideas beyond the grasp of the culture of their parents and grandparents. Consumption of English-language media and involvement in American civic and societal groups such as YMCAs and Boy Scouts opened American-born Chinese to distinctly American ideas. Chang notes that consumerist values transformed Chinatown such that “‘Take it all in all, the Chinatown of today is not the Chinatown of the bygone days,’ Dr. Ng Poon Chew [a prominent San Francisco publisher] wrote in 1922” (Chang 184). Confucian values began to erode: instead of the filial piety of their ancestors, who left marriage up to educated matchmakers, “Chinese American women… dated whomever they pleased and selected their own husbands instead of leaving these decisions to their parents,” a scandalous decision for a person brought up in a traditional Chinese household (Chang 184).

The Exclusion Act era was a period of flux within Chinese American approaches towards their identities, with a new generation of American-born Chinese making strides towards assimilating into Western society. However, one would be mistaken to conclude that Americans reacted to these assimilatory efforts with all-around tolerance and acceptance. Especially in a religious context, even after conversion to Christianity, Chinese Americans often faced attacks on the basis of preconceived notions about their ethnicity. These attacks can be contextualized within a broader trend of Asian American struggle for identity, acknowledgement, and acceptance that forms a continuity from the beginning of the Exclusion Act era to the present day, over fifty years after the abolition of the National Origins Formula and the end of federally mandated Chinese Exclusion. This phenomenon of racial discrimination against Chinese Americans regardless of their assimilation status did not go unnoticed by Chinese immigrants even during the early years of the Exclusion Act era. Yew Fun Tan, a student at Yale College, argued in a letter to the Evening Post, later addressed in an 1882 Harper’s Weekly editorial, that the Chinese-American population should not be blamed for the slow process of assimilation.

Tan claims “that it is not the Chinese, but the Americans, who refuse to assimilate” and that “it is unfair to assert that the Chinese will not remain and become good citizens, for the good reason that [the Americans] will not allow them to do so” (Harper’s Weekly). Essentially, Chinese American efforts towards assimilating were blocked both by legal obstructions, such as the inability of Chinese to become naturalized American citizens, and by systemic racism entrenched within society at a fundamental level. Pond’s missionary narrative also records the vehement attacks and discrimination directed at Chinese Christians even after their conversion. Pond acknowledges that many of the missions he founded were not successes, and that a significant portion of them were discontinued after a year or two. He gives a few reasons for the failures of certain missions: “some because the business which gathered Chinese in that locality was discontinued; some because through lack of funds we were compelled to let them die.” However, the most interesting reason he gives is that “the mob drove all the Chinese from the town” (Pond). Why did systemic violence against Chinese Americans continue even after the establishment of churches and nominal attempts by Chinese immigrants to assimilate into American society through conversion? This question may be answered through Kim’s aforementioned theory of racial triangulation. Kim proposes that the Exclusion Act era, or in terms of Asian American history, the time period before the passage of the Civil Rights Act, was an era of “open racial triangulation.” During this era, it was in the economic interest of whites to keep Asian immigrants unassimilated from the common populace. Kim, citing a 19th-century clergyman who opined “There is nothing in human character, on the face of the whole earth so stable, so fixed, and so sure and changeless, as the character of a Chinaman,” argues that the Eastern belief systems practiced by the Chinese made them seem more attractive and docile in the eyes of white employers. Kim writes that “racial triangulation reconciled the urgent need for labor with the imperative of continuing White dominance” (Kim). Thus, racial triangulation had two main benefits: one was to render Asian and Chinese American labor plentiful in supply through relative valorization; the other was to block Asian and Chinese Americans from entering society and thus render them valuable as scapegoats for economic issues through civic ostracization. Kim also asserts that the dismantling of de jure racial discrimination within legal statute through Civil Rights era legislation such as the Immigration and Nationality Act is not equivalent to the dismantling of racial triangulation as a means of preserving the white power structure, a relationship that continues, although coded in non-racial terms, to this day.



In Kim’s words, “today, it [racial triangulation] allows them [whites] to conscript Asian Americans into the war of racial retrenchment while denying them genuine equality with Whites” (Kim). The continued ostracization of Asian Americans from American society, coding them as fundamentally “foreign” and “Asian,” may explain contemporary trends in Chinese-American approaches to Christianity. Yang’s rejection of Chinese conversion to Christianity as assimilation, due to the self-contained nature of Chinese churches, is rooted in what Timothy Tseng terms “an indigenous form of evangelicalism” that has become the standard for modern-day Chinese-American Protestantism (Tseng). Observing that Chinese-American evangelical Christians predominantly rejected mainline Protestant denominational labels, instead focusing “on building congregations and organizations that reinforced a generic evangelical identity,” Tseng postulates that this “separatist stance” was adopted for “its ability to bypass the constraints of geopolitical realities and construct an alternative Chinese identity” (Tseng). The geopolitical realities Tseng discusses are those of the Cold War, in which the loyalty of millions of Chinese Americans to the United States was called into question. During the 1950s and the 1960s, Chinese Americans were once again confronted with the harsh reality that their place in American society was in constant turbulence due to a socially-constructed identifier of them as “Chinese,” no matter how long their personal history in America. To avoid being entangled in the contentious politics of Cold War Sino-American relations, Chinese Christians severed ties with “mainline Protestant institutions such as the United Methodist Church” which “aroused suspicion… because of their ties of social justice activities in China” (Tseng). Thus, the distancing of Chinese American Christians from American Christians counterintuitively assisted Chinese American efforts to distance themselves from a Chinese national identity, and aided in the formation of present-day Chinese American identities: “a transposition from a nation-state centered identity to one that was culturally centered” (Tseng).

The history of the Chinese in America is a long and tumultuous one, filled with complexities and contradictions that may appear tangled at first glance. This turbulent history is perhaps best exemplified through the lens of the complicated Chinese American relationship with religion. Religion has always been a source of conflict in the Chinese American experience, whether through a cultural confrontation between Chinese immigrants and white Americans rooted in a broader disconnect between Western and Eastern belief systems, or through a distinct American attempt at blocking the Chinese American endeavor to modernize and adapt through religious conversion in order to enforce a rigid racial dichotomy between “insider” and “outsider.” At the same time, this endeavor at modernization and adaptation does not always sprout from a simple desire to assimilate, as a reductionist approach would tend to favor, but is often a confluence of traditional beliefs with a changing environment.

The experience of the Chinese in America has molded religion into a tool to both commemorate one’s heritage and stretch the boundaries of one’s identity. How has this dynamic changed in the years after the lifting of the National Origins Formula in 1965? There are certainly similarities between the use of religion to reconstruct identity pre-1965 and the post-1965 observations made by Yang, Tseng, and others. However, there are major differences resulting from changes to the composition of the Chinese American community itself. The biggest paradigm shift between these two eras is the re-opening of the Chinese American community to outside influences in the form of in-migration. While developments within Chinese America during the pre-1965 Exclusion Act Era could be interpreted as uniquely Chinese American, this assumption breaks down once we consider today’s dynamics of large-scale immigration from Asia. Further work is needed to analyze what effects this new wave of migration to America has had on the social dynamic of Chinese Americans. How have members of the existing pre-1965 Chinese American community been affected by a large-scale influx of immigrants hailing from a Communist, predominantly atheist Mainland China? Yang’s described “process of coerced modernization” undergone by these immigrants is very much interconnected with sociopolitical upheavals in China such as the Cultural Revolution and, even more notably, the Tiananmen Square Massacre, which marked the start of what is now known as the “third wave” of migration from China to the United States. These events did not only serve as push factors for migration, but also posed ideological challenges to the Chinese cultural web that those migrants brought with them to America. Tseng’s analysis of the creation of a brand of self-sufficient Chinese-American Christianity is grounded in the process by which Chinese-American evangelicals attempted to separate themselves from a Communist Chinese national identity. But what of those migrants who come bringing that patriotism and fervor with them? To what extent is the cultural dynamic of China intertwined with the cultural dynamic of Chinese-Americans? This returns us to our original problem of the position Asian Americans occupy as perceived “foreigners” to American society. While we have laid out an analysis of how religion historically affected Chinese-American society, the Asian American experience as a whole is much more nuanced and complex. Aggregations and contrived categorizations have resulted in a common false perception of homogeneity among the Asian American community. How do demographic factors such as religion play into the civic ostracization of an entire, multi-faceted racial group of Asian Americans? How have the different subsets of the Asian American community evolved over time to respond to conflict between white America and the Asian American community as a whole?



These are difficult questions to answer precisely because of the extremely heterogeneous nature of what is socially constructed as a singular “Asian American” identity. It would be unwise to equate, for example, the experiences of an evangelical Korean American raised in a Protestant household in Seoul with those of an Indian American raised in a Hindu household who later converted to Christianity. Lisa Lowe warns against describing Asian American politics in terms of a single homogenous bloc, which “essentializes Asian American culture, obscuring the particularities and incommensurabilities of class, gender, and national diversities among Asians” (Lowe). This essentialization of Asian America “also reproduces oppositions that subsume other nondominant terms in the same way that Asians and other groups are disenfranchised by the dominant culture” (Lowe). Indeed, Kim states that “it is precisely the reality of [Asian American diversity] that is effectively obscured through persistent patterns of triangulating discourse” (Kim). That being said, while Lowe cautions against reducing the Asian American community to static categorizations, she does not discount the importance of unity under the term “Asian American,” being careful, of course, to take into consideration the heterogeneity encompassed by that term. She writes that “it is possible to utilize specific signifiers of ethnic identity, such as Asian American, for the purpose of contesting and disrupting the discourses that exclude Asian Americans” (Lowe). And so, with that in mind, we return to the problem of Asian American relations with opposing Asian and American cultural influences, a problem that is more important to grapple with now than ever. The circumstances arising from the COVID-19 pandemic have pushed Asian American issues to the forefront of national conversation and debate like never before. How do those of us who are a part of the Asian American community come to terms with our place in a society eager to label us as outsiders? This question is so overwhelming that we may never adequately answer it. However, in examining our history, we may find solace in the stories of those before us who resisted, adapted, and survived, and hope that we may follow in their footsteps.


“A Breach of National Faith.” Harper’s Weekly, 9 March 1879, p. 182. Web. Accessed 13 December 2021.
Chang, Iris. The Chinese in America. Viking, 2003.
Drabik, Grazyna. “Religious Practices of Chinese Immigrants.” Macaulay Honors College of the City University of New York, 12 May 2009. Web. Accessed 13 December 2021.
Harte, Bret. “Plain Language from Truthful James.” 1870. Mark Twain in His Times. Web. Accessed 13 December 2021.
Kim, Claire Jean. “The Racial Triangulation of Asian Americans.” Politics & Society, vol. 27, no. 1, 1999, pp. 105-138. Web. Accessed 13 December 2021.
Lowe, Lisa. “Heterogeneity, Hybridity, Multiplicity: Marking Asian American Differences.” Diaspora, vol. 1, no. 1, 1991, pp. 24-44. Web. Accessed 13 December 2021.
“Mr. Yew Fun Tan.” Harper’s Weekly, 22 April 1882, p. 243. Web. Accessed 13 December 2021.
Pew Research Center. 4 April 2013. Web. Accessed 13 December 2021.
Pond, William C. Excerpt from Gospel Pioneering: Reminiscences, 1833-1920. Welch, pp. 4-10.
“Protestant Episcopal Mutual Benefit Society Lectures.” New York Times, 12 January 1854. Welch, pp. 20-22.
Stop AAPI Hate National Report. Stop AAPI Hate, 30 September 2021. Web. Accessed 13 December 2021.
Tseng, Timothy. “Protestantism in Twentieth-Century Chinese America: The Impact of Transnationalism on the Chinese Diaspora.” Vol. 13, Brill, 2004, pp. 121-48. Web. Accessed 13 December 2021.
Welch, Ian, editor. The Chinese and the Episcopal Church in mid-19th century America. Australian National University, Canberra, 2013.
Wong, Ar Chong. “Letter to William Lloyd Garrison.” 1879. Slate. Web. Accessed 13 December 2021.
Yang, Fenggang. “Chinese Conversion to Evangelical Christianity: The Importance of Social and Cultural Contexts.” Sociology of Religion, vol. 59, no. 3, 1998, pp. 237-257. Web. Accessed 13 December 2021.
Yang, Fenggang. “Religious Diversity Among the Chinese in America.” Religions in Asian America, edited by Pyong Gap Min and Jung Ha Kim, AltaMira Press, 2002, pp. 71-98.



When Water Isn’t Life: The Flint Water Crisis as a Case Study in Slow Violence Josie Barboriak

The water crisis in Flint is one of those examples of injustice which first became known in the wider United States slowly, then all at once. In modern-day America, such a calamity was virtually unheard of— 100,000 people exposed over the course of 18 months to an insidious poison in their drinking water (Denchak 2018). By the time most people were learning of the crisis, it was too late for Flint’s residents. Lead had already done irreparable damage to their bodies and minds. Flint is regarded as a tragedy, a one-in-a-million collection of errors committed by well-meaning people in power; whatever the intentions of those in power, however, the crisis is the product of their actions and of their refusals to act. It’s at first difficult to see the violence in Flint, but in the use of scientific and governmental power to deny citizens safety, what looks like hapless inaction on the part of the state nonetheless served to concentrate harm in already-disadvantaged communities. Flint’s mighty industrial past seems distant from its current state of lagging disrepair. When General Motors alighted on the town in 1908, workers flocked to the “Vehicle City”— 150,000 in thirty years, a population explosion (Bach 2019). Most of these new citizens were Black Americans, seeking to escape the racism of the South for the economic opportunities of the industrial North. Auto workers in midcentury Flint thrived, and industry’s promises were fulfilled as families filled two-story houses and department stores sprang up downtown. Economically, Flint relied completely on General Motors. In 1950, 90% of Flint’s earnings came from General Motors, and with a high median income, a middle class grew (Bach 2019). Back then, the risks of a city depending on one industry were overshadowed by the undeniable excitement of the massive boom in automobile production, so no other large businesses came to Flint. And in response to the growth, there was little economic incentive to allow poorer and richer communities to consolidate: competition, not collaboration, was the rule (Adams 2017). Wealthier residents of the Flint area moved outwards, forming new townships, and these wealthy suburbs hoarded resources for themselves. “Flint resident” became an identity repudiated by wealthy whites in flight to the suburbs. Black Americans, under racist housing policies from General Motors, were concentrated in Flint proper (Bach 2019). This was the 1970s.

Flint had yet to reach its population peak or get to the point in 1980 where it boasted a higher median income than San Francisco (Thompson 2015). But Flint was set up to fail through economic isolation as money and power consolidated outside of its borders. After General Motors abandoned Flint around 1990, the city’s revenue declined, and so did its value to the state. Demolition of some factories and the removal of employees from others soon depleted the base of people spending money within the town of Flint, and the township problems that had been simmering below the surface came to light as moneyed residents abandoned the town (Adams 2019). Flint’s majority-Black population was left with underfunded facilities, high unemployment, and low wages— even those who were able to work earned meager pay for their labor. That survival-of-the-fittest competition between the towns surrounding Flint, framed as an even contest between municipalities with equal resources, had disempowered thousands of people living in Flint, placing their lives at the mercy of state leaders. Between 2003 and 2014, the state cut off around 55 million dollars in revenue to Flint, starving the city and leaving it further vulnerable to abuse of power and mismanagement of the coming environmental crisis (Bach 2019). For as Flint had grown strong from industry’s nourishment, industry had been poisoning its water. Used as the city’s water source until 1967, the Flint River had suffered a host of issues. Downstream dumping from General Motors, lumber, paper, and meatpacking industries depleted the river’s oxygen levels and led to thousands of fish dying. New regulations in the midcentury forced GM to dilute its waste before dumping it in the Flint River, which led to the push to switch to water from the DWSD (Detroit Water and Sewerage Department)— not for the purpose of cleanliness and safety, but rather for increased water capacity with which to dilute waste and decreased costs (Carmody 2016). That 1967 switch to a new source of water failed to improve the Flint River’s quality. Illegally dumped industrial waste, both treated and untreated, along with raw sewage, filled the river and caused bacterial blooms, and road salt and landfill water seeped in. The water was about as safe and sanitary as that of medieval Europe, and the same refrain from Flint leaders followed each spill.



“As far as we know, no community uses the Flint River for a drinking water source,” the county’s director of environmental health assured constituents in 1999 (Carmody 2016). This statement paved the way for the lax enforcement of environmental regulations that would cause one side of the dual harm— environmental and economic— inflicted on Flint. It did not help that Flint was often referred to as the “Crime Capital of America.” Poverty and pollution had plagued the town for half a century, driven by GM’s waste dumping and racist housing practices and the state of Michigan’s lax enforcement of environmental regulations and encouragement of competition for resources among communities. Flint became known as a failure of a city, a headache for its state government, a problem seemingly too complex to solve with the necessary policies. Flint’s is a distinctly American story. The 1950s dream of the two-story house perishes when the industry leaves, and the money leaves, and the wealthy leave, and who do they leave behind? The people of Flint were left to find out what happens to an American town no longer viable, underprovided for by economic Darwinism and the unchecked competition of capitalism. It was abandoned by the industry that had become its god and, unable to attract new ones, was left with its river transformed into a source of danger rather than a source of life. Those in power who coated their prejudices in smiles, insisting that the State knows best, and those now on trial as main actors in the crisis, plead that they were working to the best of their ability, and that they always meant well. But what does it mean, that all these people meant well? Power that means well, that elicits gratitude, power that decides who lives and who dies– any power outside of and unresponsive to a community will often fail to act in the community’s best interests. Meaning well, or even having vaguely equitable intentions, comes up short when decisions made by those in power, their actions or inactions, have massively harmful and inequitable consequences upon those excluded from the decisions that shape their lives. Slow violence is a phrase used to describe environmental harm that is gradual, strategic, and incapacitating to a group of (often already-disadvantaged) people. It’s helpful to remove violence from its typical association with the overt exercise of the police power of the state. But the incremental killing of Flint’s citizens, the consequences of decisions made by those in power to harm those removed from it, is a slow war against the citizens whom American authorities deemed culturally expendable. The subjecting of their bodies to symptoms such as anemia, kidney failure, and permanent brain damage (in children!) is all the more infuriating when considering how easily this could have been prevented. The Flint water crisis was allowed to happen because no one in power cared about Flint, a town stereotyped as the Murder Capital past its prime. No one outside of Flint wanted to grapple with the complaints of Flint’s citizens or to admit that the state of Michigan had failed it long before the crisis.

The issues behind the Flint Water Crisis are complex and related to racial and wealth inequalities in the US that placed Flint in a Twilight Zone of inconvenient importance, ignored in favor of more privileged communities nearby. Racist and classist conceptions of Flint informed the misuse of governmental, scientific, and media authority that prolonged the slow violence of the crisis, proving the importance of the proximity— or distance— of people in pain to power. Those who are most affected by environmental disasters are the most knowledgeable about them, and therefore must be placed in the positions of institutional power where decisions regarding their own safety are made. When denied institutional power, these people must instead create their own coalitions, alliances, and movements that are less influenced by institutional power structures. Since their knowledge is often opposed by media that frames environmental harm from the state’s point of view, these people must also find their own ways to publicize and speak about their experiences in order to reduce future environmental harm caused by prejudice and slow violence.

What could explain the decision to abandon the waters of Lake Huron, a clean, safe water source, in favor of a new pipeline that would leave Flint temporarily relying on the contaminated Flint River? The answer is the key to power relations in Flint leading up to the crisis, the key to this puzzling decision at the heart of the crisis— Michigan state emergency manager law, which paved the way for those outside the Flint community to abuse their power and harm those within it. Receivership is the process through which a party unable to manage its own finances cedes control to another managing party. You may be familiar with conservatorship, and receivership is the same process— it is usually applied to companies which have failed: outside management steps in and guides the organization away from bankruptcy (Chapman et al. 2020). It’s an opportunity ripe for abuse of power and misunderstanding of a situation, especially when applied to larger entities with life-and-death stakes, such as governing bodies of a group of people. Michigan is the only state in which locally-elected school boards, mayors, and city councils can legally be replaced with an emergency manager during a financial crisis. These positions are the backbone of local government, which ensures that citizens are being heard and that big decisions within a community are made by members of their own. But in Michigan, if your economic situation gets bad enough, the local government can be replaced with one state employee. This law had actually already been repealed through public uproar and a state referendum process— only to be passed again by a lame-duck legislature under Republican governor Rick Snyder! This new and improved discriminatory law was exactly the same, with the added provision that it can’t be repealed by referendum (Hammer 2017).



The use of this emergency-manager law reveals the prejudices at work behind the Flint water crisis. It’s been widely criticized for its racial implications: 50 percent of Black Michigan residents have been placed under emergency manager rule; only 2 percent of white residents have (“Law That Led to Poisoned Flint Water Racially Discriminatory” 2018). The law operates with the logic that what local governments can’t do, the state can do better. The local governments which the law has been used to replace have been those of communities that are majority low-income and Black. The United States has rid itself of explicitly racialized laws, but in their place are laws like these, convenient tools to disenfranchise and harm impoverished and Black constituencies, with poor African-American communities bearing the greatest intersectional burden. Prejudice is the spirit of the emergency-manager law’s paternalism, and Flint was in danger. In March 2013, Flint, facing growing debt (remember that 55-million-dollar revenue cutoff?), was placed under the control of a state-appointed emergency manager, Ed Kurtz. At this time, the state of Michigan was attempting to finance a new water project, the Karegnondi Water Authority (KWA). The stated goal was to save money by stopping the purchase of processed Detroit water in favor of a new, closer pipeline. There was only one hurdle— the KWA pipeline was, at the time, nothing but a stack of blueprints (Hammer 2017). The engineering firm TYJT (Tucker, Young, Jackson, Tull) was hired to assess the viability of Flint’s future water options: joining the KWA and treating the water at an upgraded Flint water treatment plant, continuing use of DWSD water, or blending Flint River water with DWSD water to reduce costs. They concluded not only that the blended water option had the lowest projected cost, but that KWA-provided cost estimates of their own project were likely incorrect (Tucker, Young, Jackson, Tull Inc 2013). Their report expressed concerns about the cost to Flint and Flint’s ability to treat the water, and pointed out that stated issues with DWSD water, such as Flint’s autonomy and the presence of backup power, would likely worsen under the KWA plan. In short, the state proposal that Flint join the KWA pipeline was an idea of dubious merit, even before taking into account the dangers of temporary use of Flint River water. In response to this TYJT study, the emergency manager, seemingly annoyed at the answer he’d been given, commissioned further studies from other firms. Emergency managers, the DEQ (Department of Environmental Quality), and the KWA failed to act in the best interests of Flint. In particular, the emergency manager Kurtz continued tilting the scales in favor of a more expensive option and ultimately succeeded in getting the KWA plan approved (Forrer et al. 2019). The DEQ also upheld this narrative— even after a cheaper offer from the DWSD that would save Flint money, the DEQ insisted that the KWA option was better. In the meantime, the local Flint water treatment plant, which hadn’t been in regular use for half a century, would have to be upgraded.

Notably, money for building the pipeline came from outside vendors, but Flint was provided no money for the expensive upgrades to its water treatment plant— instead, the nearly-bankrupt city of Flint would have to raise its water costs further (Hammer 2017). One has to wonder what amazing foresight from an emergency financial manager, seeing beyond the predictions of engineering firms, could justify such a decision. Maybe, just maybe, it had to do with state pressure to build the pipeline. The push to switch water sources didn’t go unchallenged by forces within Flint. Wantwaz Davis, a city councilman serving under emergency management, had won a surprise victory for the position. As a 17-year-old, he’d shot and killed his mother’s suspected assailant in self-defense, but was incarcerated nonetheless. The causes that drive an individual to crime are complex, most often the result not of moral failing but of desperation to keep one’s family safe, housed, and fed in a city lacking opportunities. It’s not hard to feel hopeless in a city where one’s elected government can be disempowered through forces one can’t control. After serving 19 years in prison, Davis began attending city council meetings, going door-to-door campaigning, and telling his life story. His election prompted a public apology from the editor of the local paper for failing to warn the public of his candidacy (Goodwin 2016). Davis’s treatment displays the dismissal of Flint as a whole due to its problems of poverty and crime that were the creation of decades-old state policy. Considering that many of the causes of crime were the state’s indirect creation, dismissing a concerned citizen in Flint for having a criminal past is like leaving a vegetarian in a room full of meat and then berating them for choosing not to starve. After the water switch in April 2014, as a lifelong Flint resident, Davis was suspicious. Among other things, he didn’t want to drink water from a river which he’d spent his life seeing used as a dump for industrial waste. Davis was one of the first to request a federal investigation, calling the emergency manager law a “dictatorship” in letters to the Attorney General (Goodwin 2016). These letters would go unanswered for months. Even holding a position of elected office in Flint, displaying leadership and the trust of the community, Davis was unable to exercise power because of the emergency manager law. Most reporting on the Flint water crisis fails to identify opposition to harmful decisions, but Flint residents were sounding alarms from their positions of disenfranchisement at every step of the process. Those who knew the Flint River best were ignored, and Flint homes got their first taste of river water— and a unique blend of treatment chemicals— in April 2014. The iron and lead pipes making up Flint’s infrastructure are surprisingly not uncommon in the United States. Water that’s been treated properly, using chemicals called orthophosphates that form a protective coating, can pass through them without issue. But in Flint, water management authorities added no orthophosphates to the water.



Was this omission a result of too-high costs, as estimated by the engineering report? Regardless, the heavily-chlorinated water began to carry with it iron chunks and traces of lead, piped straight into Flint’s homes (Banks et al. 2018). The application of emergency manager law, the state’s push to switch water sources, and the treatment of elected officials in opposition display the prejudice that enabled and extended the state overreach behind the water crisis. The decision shows that those who had been living in Flint and knew the history of the river’s toxicity had a better instinct for a safe course of action than the empowered state manager. The Flint River’s use as an interim water source was ill-advised and would never have begun without the supposition that a state emergency manager was better equipped to deal with Flint’s financial issues than its own citizens. This supposition was predicated on racist and classist ideas of Flint. Despite opposition from Flint’s elected officials and quantitative evidence that the plan necessitating the water switch was poorly thought out even from a purely fiscal perspective, the breadth of power given to the emergency manager, and the failure of the Governor’s office to question this decision or answer local elected officials’ qualms, resulted in the decision that set the public health crisis in motion.

The power that structures an American city is twofold: that which the government enforces and that of public opinion. Michel Foucault argues that the institutions which govern the production and distribution of knowledge are key to understanding power relationships. In the city, this plays out through the government and the press. The state’s power is that of decision making, which entails a monopoly on violence (albeit subtle violence), while the press’s power is a monopoly on knowledge, though these powers overlap and intersect. We’ve discussed already the state’s impositions on Flint’s city government and the violence of the decision to switch water sources. As problems began to arise with the water, these impositions continued: violence was enacted through the control of narrative, exemplified by government denial and refusal to act in the face of complaints. Now that the issue of water in Flint had begun to affect the public, the media of Flint came into play— mainly in the reporting strategies used by the local press throughout the process of complaint and denial. Just as government positions of power represent the most privileged parts of their constituencies, so too do media positions represent managerial and corporate interests. Scrolling through an article introducing the local reporting team, tasked with determining and presenting important information to a city that’s 57% Black, one finds it almost comical to witness the onslaught of white faces. Twenty-three members of the reporting team are white, two are Black (Eng 2019). To make an obvious statement, media representation, both in front of and behind the screen or page, matters.

Without it, the prejudices of wider society only play out more subtly and harmfully through the flawed representations we consume. Light-brown water, like a child’s analog of beer sipped from muddy puddles, spews from your American taps, and what do you do? You complain, you protest, you lean on your American government, on those who are supposed to protect you and provide clean water to drink. From as early as June 2014, complaints about water quality were documented by the local paper. The water smelled and tasted awful, residents said. They were calling it “poop water” (Erb 2015). The emergency management discounted these complaints, with the subtle help of the press. There is a multitude of articles written about the latest updates in water complaints. The majority follow the same rough structure— a middling statement about slight issues with the water, an impassioned quote from a Flint citizen who doesn’t trust it, and, finally, a quote from a Flint official reassuring readers that the water is safe. A newspaper can’t avoid giving voice to complaints, but the human brain is hardwired to interpret the final information presented as the most important, and in Flint, this was always the government’s word on the issue.1 It’s important to look at these responses, since the Flint emergency management was nothing if not consistent in its treatment of complaints: it denied the complaints completely and attempted to avoid an investigation. This phenomenon can be seen in three articles: one about water hardness in June 2014, another about boil water advisories in September 2014, and the third about a protest stemming from trihalomethane contamination in January 2015 (Fonger 2014, 2015). Each ends with a statement from an authority: “people are wasting their money buying bottled water” and “I wouldn’t panic.” Most chillingly, the last article’s final statement appeals to the economic argument that Detroit’s water would be too expensive, costing the city around $12 million. We know now that the choice to pursue the KWA plan came in the face of a steep cost reduction offer from Detroit, one that would have “saved 50% today and 20% over the next 30 years” when compared with the KWA (Hammer 2017). The state’s maintaining of the economic argument leaves one considering how the Flint emergency management could possibly have justified switching water sources if that line of reasoning had (accurately) disappeared, and whether Flint would have quickly returned to safer water that was cheaper for both the city government and its residents.

1 Judith Butler argues in her essay “Photography, War and Outrage” that the phenomenon of embedded reporting, in which journalists are attached to military units involved in armed conflicts, serves to reinforce the perspective of its home government. Applying that argument to a slow war such as Flint, the dynamics of reporting in Flint also reinforced the perspective of Flint’s interim government. By sandwiching dissenting views between affirming ones, Flint reporting aided in the discounting of complaints and the public view that complaints were irrational.



Amid denial from emergency management, Flint’s city council hadn’t taken long to get on board with the petitioners and activists. It was March of 2015 when they voted to “return to Detroit water through whatever means necessary” (Fonger 2019). This vote would be the beginning of national attention on Flint, even before concerns about lead, but the disempowered city council clashed with the mayor and the emergency management, who were still asserting that the water was safe. And the initial national media coverage would discount the city council further, treating and enforcing the mayor’s and the emergency management’s position as the seat of power in Flint. The story of stereotypes of Flint is a sad one. Until now, I’ve focused mainly on the issues inside Flint. Now, seeing the city from the outside, one can identify the race- and class-driven forces, combined with decades of underinvestment, that led to a fear of Flint. A 2013 Business Insider article about Flint being “the most dangerous city in America” treated effects of Flint’s numerous issues as causes— unemployment, poverty, drugs, and crime— without questioning the causes behind them (Sterbenz and Fuchs 2013). The biggest danger this posed to Flint’s efforts towards getting help was the racially-motivated conception of Flint as a crime-ridden city. That first national coverage of the crisis was an article by the New York Times about Flint’s water woes. It begins with a description of the water by activist Melissa Mays, and it spends the most time on the emergency manager and mayor, disputing the city council’s vote. The emergency manager at the time, Gerald Ambrose, asserted that Detroit water would be no safer than Flint water, and Mayor Dayne Walling claimed that he was still drinking the water. The article ends with a set of complaints from Flint residents about their water quality (Smith 2015). The article subtly frames the issue of water in Flint as a gulf between science and the Black public, giving most of its attention to Ambrose and Mayor Dayne Walling and framing the crisis as a conflict between educated officials and irrational residents. Left out were the protests that Flint residents had held for months and the city council members who opposed the switch. Rising water costs were shrugged away, an inevitable byproduct of living in a dying city with a decreased number of water system users (Smith 2015). News plays a vital role in the national perception of a crisis. This article was most Americans’ first exposure to Flint, and here, a reader would probably come off relatively indifferent to the issue due to the power structure inherent in its framing. The racist and classist assumption that people are incapable of governing themselves or understanding and representing their own experience acted throughout. This NYT article is also careful and subtle about who is given authority, which robbed the Flint issue of its urgency. Mayor Dayne Walling is literally credentialed— the article mentioned his education as a Rhodes scholar. In contrast, the opposing voice on whom the article spends the next-largest amount of words is Tony Palladeno Jr., a man described and discredited as wearing a red Flint baseball cap, who was escorted by police out of a water advisory committee for frequent outbursts about the bad quality of the water and the need for a federal intervention (Smith 2015).

Comparing these two single descriptors, a subtle dichotomy emerges— the state is educated, rational, well-meaning; the citizens are emotional blue-collar workers. These parallel characterizations of the two sides of the fight for Flint’s water would slow the effort to make Flint heard on the national stage. The issue of urgency is one of slow violence. Stories that play out over months, that appear less urgent, stories that are less flashy or dramatic or not stories at all— these stories lack the excitement of those which urge publication. Yet Flint displays that it is all the more important that these stories are publicized. It wasn’t until late 2015, eighteen months after the switch, that national media gave sustained attention to the issue. The national media’s late response to the crisis displays another facet of disregard for Flint, caused in part by the initial article but also by the discounting of those like Davis who’d tried to get the issue to the national media for months. Flint’s longtime portrayal as a problem, a perpetrator of its own harm, meant that the national media was left blindsided and slow to act when Flint’s citizens were in need of protection from government mistreatment. Around October, the tide finally turned in Flint, spurred on by NPR broadcasting a Michigan Radio segment quoting the activist LeeAnne Walters and Dr. Mona Hanna-Attisha. Genesee County declared a health emergency, asking Flint’s residents to use filters and promising further study. Notably missing was a promise to switch back to Detroit water (Jackson 2017). For all its prior reinforcement of the state’s perspective, the local paper’s editorial board began to fight fiercely for its citizens, writing, “Every day officials wait to reconnect is another day of Russian Roulette with the health of this community’s most precious asset— our children” (Editorial Board 2015). Drawn to the conflict between residents and the mayor and the state, as Snyder still neglected to act dramatically or call the situation an emergency, national media finally descended upon Flint. CBS, the New York Times, NPR, and The Guardian all reported on the fact that Flint residents were paying for water that was poisoning their children, most memorably in the story of Walters, whose 4-year-old immunodeficient son Gavin had lost weight and struggled to pronounce words after drinking Flint’s water (Davey 2015). Throughout the crisis, local and national reporting had served a vital role in Flint— first for reinforcing the state’s claims that the water was safe, then for making known the atrocities committed against Flint’s children. Media coverage will often serve to incorrectly speak for or represent the disadvantaged, often using its narrative power to further existing powers. It’s a vital tool, one that very rarely serves poor or Black populations, but that can lead to quick institutional response when it does.



institutional response when it does. When mainstream reporting ceases to treat disadvantaged populations as a mute social body and begins to embrace these populations’ perspectives, it serves its true purpose— to bring light and truth to issues of injustice and oppression.

After Flint’s local interim government had used flawed science to justify ignoring protests and complaints about Flint’s water quality, the Environmental Protection Agency acted as an enabler. Science is often thought of as empirical truth, and it’s said that “numbers don’t lie.” Numbers, however, omit the conditions that produce them. At the time of the switch, a press release from the city stated that the river had “a proven track record of providing perfectly good water for Flint” and that the Michigan Department of Environmental Quality (DEQ) was committed to continually testing the water, which was clean and pure (“Office of Governor Rick Snyder…” 2016). Further investigation into these DEQ state testing protocols by the EPA would later conclude that the DEQ gave the Flint water treatment plant instructions to use testing procedures that weren’t thorough enough, failing to comply with federal rules regarding lead testing (EPA doc). So, while Flint’s government failed to test the river water for lead, residents were suffering hair loss and body rashes, symptoms of lead poisoning. Dr. Mona Hanna-Attisha, a pediatrician in Flint, was one of the first to raise the alarm about lead after hearing about the lack of corrosion control. The Michigan Department of Health and Human Services (MDHHS) had withheld wider lead data from even Hanna-Attisha, a respected doctor, so she focused on data from her own patients. She found that blood lead levels of children in Flint had increased since the switch. At a hospital press conference in September, Dr. Hanna-Attisha shared her data, only to have the state call her an “unfortunate researcher” who was attempting to spread hysteria (Gross 2018). Officials took issue with her smaller sample size after denying her access to a larger one. There’s inherent power within the authority of science, and misusing that authority can have disastrous results. Authorities can speak of parts per billion and chemical tests, but if those drinking the water are suspicious of it, communication may prove unfruitful. The language of science can be blind to the language of experience, and to discredit a concerned populace who could see that a long-polluted water source hadn’t been properly treated is dismissive at best and dangerous at worst. A protestor had summed it up in January of 2015: “Why do we have to drink brown water? No one else has to drink brown water” (Erb 2015). Policing knowledge and refusing to let scientists outside the state’s control verify results about lead poisoning displayed a disrespect for Flint’s citizens. Dr. Hanna-Attisha’s experience was a turning point in the saga of Flint’s water, with science beginning to shift to the


protestors’ side. Her study was accompanied by independent lead testing results. LeeAnne Walters, the mother of an immunodeficient son, had reached out to an MDHHS nurse about her child’s high lead level result. The nurse’s response? “He is barely lead poisoned… It is just a few IQ points. It is not the end of the world” (Reynolds 2015). Going to the EPA back in late July, Walters had been referred to Dr. Marc Edwards, a Virginia Tech researcher with a reputation for protecting the public. Throughout August, citizens collected 265 sample kits of the water. Testing revealed that it was four times more corrosive than Detroit’s water—and that 20% of samples were above the federal action level (Guyette 2015). Protests in Flint had failed to garner national attention, despite the lack of local response to the crisis. Wantwaz Davis, the councilman who’d opposed the switch, had led a multi-issue protest in July campaigning for lower water bills (Hedden 2019). There’d been multiple protests regarding Flint’s brown water, and they’d been followed by local action: a collection of pastors visiting Lansing, bottled water giveaways from businesses and an organization for ex-cons, and the formation of an environmental coalition including the Michigan ACLU (Jackson 2017). Faced with denial of the water cleanliness issues from the MDEQ, these citizens petitioned the EPA in September (NRDC 2015). Looking at the EPA’s track record on environmental racism issues, it’s not hard to predict what would happen to Flint. Nine out of ten cases brought to the EPA are dismissed, mostly for procedural reasons (Lombardi et al. 2015). These cases, most of which involve the question of racial discrimination through adverse environmental conditions, have often been failed by the EPA. In short, the EPA is often slow to act. The EPA’s own document auditing the mismanagement of the crisis mainly offers pleas of ignorance. Flint lies under the jurisdiction of EPA Region 5. In Flint, its action was hampered by the incorrect sampling data and false assurances that the DEQ was using corrosion control treatment (CCT). But after finding in April 2015 that CCT wasn’t being performed as needed, the EPA did nothing until July (Flint Water Advisory Task Force Final Report 2016). Behind this reluctance to act, there’s a general indifference toward Flint. “I’m not so sure Flint is the community we want to go out on a limb for,” wrote one Region 5 chief in a September email (Akin 2016). The term “slow violence” comes to mind again, but with a different spin—the violence of inaction. Government agencies far removed from environmental harm can hesitate to act, especially when the target of harm is a poor and Black city. Abandoned by the state, Flint turned to citizen science, which was essential to garnering national attention. By combining the power of on-the-ground concerned citizens and accredited scientists, it began to be possible to redress the wrongs done to Flint. The government hadn’t made it easy for them. Using flawed science as a tool for continued harm and oppression had



given an undeserved force and authority to reassurances that the water was safe. After the failures of the MDEQ and MDHHS, citizens and scientists together could finally succeed in spurring action. Citizen science, which allows those who are most knowledgeable about a crisis to do the majority of the work to solve it, can be a double-edged sword. Flint’s citizens shouldn’t have shouldered the responsibility of testing their own water for lead, and lauding citizen science too highly can risk deemphasizing the importance of institutionally empowering those harmed. Nevertheless, citizen science, though a byproduct of historically devaluing the perspectives that best understand a harmful situation, was vital in Flint. It speaks to the resilience of Flint’s citizens in consolidating a form of authoritative, powerful knowledge completely separate from the state.

The roots and mechanisms of environmental racism in the Flint, Michigan water crisis show the importance of listening to narratives that come from those most directly impacted by environmental degradation. The anger, the pain, the discomfort, all of the hurt demands to be felt, known, and understood by the wider world. Violence accumulated in Flint through each neglectful decision, dismissive email, or drop of contaminated water. That violence, added to the circumstances that had already aligned to place Black residents of Flint at a position of lowest priority for the state government, has harmed and is still harming Flint, and will continue to harm disadvantaged communities throughout the world. In situations of structural injustice, power needs to be placed in the hands of those who suffer. For it is they who understand the situation and its urgency best. It’s apparent that local political and media leadership needs to include those who are in a position to be the most violated and that national regulations need to protect those people. This isn’t just a shallow diversity issue or a quota that needs to be filled—uplifting people who aren’t protected by wealth and status, those who are directly hurt by pollution, will make the work of activists more relevant and more efficient, especially when those in power have intentionally concentrated harmful environmental situations in places where residents are less empowered to fight back. And institutionalizing the knowledge these people have regarding the crises that affect them will make crisis management more equitable and more effective. The existence of prejudice within a certain situation is near-impossible to empirically prove after the fact. We know, though, that this would have never happened in Ann Arbor, Michigan, for example. The governmental overreach, the negative stereotyping, the denial of concerned citizens, the reluctance to let science do its job—it all displays a general disrespect for the experience of the mostly poor and Black population of Flint, including their knowledge of the environment in which they live. This

disrespect has resulted in loss of life, and lead poisoning will have its impacts on Flint for decades to come. The gradual nature of this structural harm makes it harder to garner sympathy from outside viewers or make front-page news: unlike wars, bombings, or genocides, which are often and justly reported upon, the more subtle violence that occurs more often in the United States isn’t a compelling narrative to those removed from it until after a disaster like Flint has already irrevocably occurred. This was the reason that water quality complaints in Flint failed to garner national attention—Flint had already been discounted in the American mind. Sickness caused by a lifetime of drinking polluted water, sickness that disproportionately affects certain communities: slow violence is the most effective way in which a group of people can be incapacitated. The locus of fault shifts and spins and dissipates until the cause behind every problem seems sheer circumstance, and the people who had the power to gradually harm another group can be absolved from public blame. How does fearing one’s taps for years change a person? A 2018 study found that nearly half of Flint’s residents were considering leaving and that an individual’s inclination towards staying in Flint depended mainly upon their impression of the water quality. The neurotoxic problems caused by lead can be hard to attribute definitively, leaving parents to wonder for years if any quirk of their child’s behavior is actually a result of the eighteen months spent drinking lead-contaminated water. The fact that most environmental crises today are caused by either neglectful or directly harmful government policies leads to a plausible mistrust of the government within the communities in which harm is concentrated. The poor are alienated from environmentalism, since the government’s utility as a tool for positive change is decreased when disadvantaged communities lose trust in a government which is simultaneously their oppressor and a necessary avenue through which systemic change will occur. A few months ago, seven years after the water switch, a $641 million settlement was passed in Flint to financially compensate the families who were harmed. The settlement isn’t a perfect solution, especially considering that the state of Michigan’s attorneys are attempting to procure approximately 30% of the settlement in fees for themselves and not for the children of Flint (Fleming 2021). This money isn’t going to undo the damage that lead poisoning has caused, and it’s not going to bring back the Flint residents’ trust in their government, but it’s a step in the right direction. It’s probably unrealistic to hope that allowing the privileged to take a backseat in the power enacted by government and media storytelling will completely resolve the deep-seated prejudices in our systems. There’s a balance to be found, where members of the community that’s been harmed are listened to and gain power, but people outside




those communities who are privileged enough not to face the emotional turmoil that comes with being disregarded also continue working for change and against the policies that allow these disparities in power to continue. So, how do we prevent and resist more Flint, Michigans? We have to question the predominant narrative around each situation: its origins, its characters, which stories it leaves out. We must search for counter-narratives, search for expressions of the pain and the emotion that are central to our human experience but pronounced in those repeatedly hurt by the systems that were supposed to protect them, and amplify those narratives. In the case of Flint, these counter-narratives can be found through empowering those who are the most vulnerable, the Black and poor citizens of Flint, with strategic policy change. The story of Flint isn’t over; it’s repeating on a smaller scale every day, and it’s the responsibility of each of us to change the plot before it ends in more disaster. Listening, we must learn from those who know most; learning from them, we must act with them for justice.

Adams, Dominic. “Flint voters elect two convicted felons, two others with bankruptcies to city council.” 2013.
Adams, Dominic. “Here’s how Flint went from boom town to nation’s highest poverty rate.” 2017.
Akin, Stephanie. “Was EPA Unwilling to ‘Go Out on a Limb’ for Flint?” 2016.
Bach, Trevor. “What Will It Take to Save Flint, Michigan?” US News, 2019.
Banks, Stacey, Charles Brunton, Kathlene Butler, Allison Dutton, Tiffine Johnson-Davis, Fred Light, Jayne Lilienfeld-Jones, Tim Roach, Luke Stolz, Danielle Tesch, and Khadija Walker. “Management Weaknesses Delayed Response to Flint Water Crisis.” EPA, Environmental Protection Agency, 2018.
Butler, Judith. “Photography, War, Outrage.” PMLA, vol. 120, no. 3, Modern Language Association, 2005, pp. 822–27.
Carmody, Tim. “How the Flint River Got So Toxic.” 26 Feb. 2016.
Chapman, Jeff, Adrienne Lu, and Logan Timmerhoff. “By the Numbers: A Look at Municipal Bankruptcies Over the Past 20 Years.” PEW Research, 2020.
Denchak, Melissa. “Flint Water Crisis: Everything You Need to Know.” 2018.
Eng, Bernie. “Introducing The Flint Journal | MLive Media Group newsroom staff.” 2012.
Erb, Robin. “Who wants to drink Flint’s water?” 2015.
Fleming, Leonard N. “Flint residents press for more money in $641M water settlement.” 2021.
Flint Journal Editorial Board. “State water plan falls short of protecting Flint without Detroit connection.” 2015.
Flint Water Advisory Task Force Final Report, March 2016.
Fonger, Ron. “City adding more lime to Flint River water as resident complaints pour in.” 2014.
Fonger, Ron. “Flint flushes out latest water contamination, but repeat boil advisories show system is vulnerable.” 2014.
Fonger, Ron. “Officials say Flint water is getting better, but many residents unsatisfied.” The Flint Journal, 2015.
Forrer, D. A., K. McKenzie, T. Milano, S. Davada, M. G. O. McSheehy, F. Harrington, D. Breakenridge, S. W. Hill, and E. D. Anderson. “Water Crisis in Flint Michigan – A Case Study.” Journal of Business Case Studies (JBCS), vol. 15, no. 1, May 2019, pp. 29-44.
Foucault, Michel. “Discipline and Punish.” Edited by Julie Rivkin and Michael Ryan, Blackwell, 2004, pp. 549-565.
Goodin-Smith, Oona. “Flint’s history of emergency management and how it got to financial freedom.” 2018.
Goodwin, Liz. “Meet Wantwaz Davis, the ex-con who tried to save Flint.” 2016.
Gross, Terry. “Pediatrician Who Exposed Flint Water Crisis Shares Her ‘Story of Resistance’.” National Public Radio, 2018.
Guyette, Curt. “Independent water tests show lead problems far worse than Flint claims.” 2015.
Hammer, Peter J. “The Flint Water Crisis, the Karegnondi Water Authority and Strategic–Structural Racism.” Critical Sociology, vol. 45, no. 1, 2017, pp. 103–119.
Hedden, Adrian. “Councilman leads protest at Flint City Hall, addresses police chases, water rates.” 2014.
“Law That Led to Poisoned Flint Water Racially Discriminatory, Civil Rights Attorneys Say.” Center, 2018.
Lombardi, Kristen, Talia Buford, and Ronnie Green. “Environmental Racism Persists, and the EPA Is One Reason Why.” 2016.
Nixon, Rob. Slow Violence and the Environmentalism of the Poor. Harvard University Press, 2013.
Office of Governor Rick Snyder. “Gov. Rick Snyder releases departmental emails produced regarding Flint water crisis” [Press release]. Michigan.gov, 12 Feb. 2016.
Reynolds, Dean. “Public health emergency declared in Flint, Michigan, due to contaminated water.” CBS News, 2015.
Smith, Mitch. “A Water Dilemma in Michigan: Cloudy or Costly?” The New York Times, 2015.
Sterbenz, Christina, and Erin Fuchs. “How Flint, Michigan Became The Most Dangerous City In America.” Business Insider, 2013.
Thompson, Derek. “The Richest Cities for Young People: 1980 vs. Today.” The Atlantic, 2015.
Tucker, Young, Jackson, Tull Inc. City of Flint Water Supply Assessment: For Submittal to State of Michigan, Department of Treasury, February 2013.




Less than Human: How the Perception of Autism is Negatively Affected by the Media and Medical Field
Harper Callahan

I was first introduced to my research about a year ago, when I learned of a paper published by the Yale School of Medicine, entitled “Attend Less, Fear More: Elevated Distress to Social Threat in Toddlers With Autism Spectrum Disorder,” which had caused much outcry within the autistic community because of its methodology. Looking at the paper in more detail, I understood why. In its research, both autistic and non-autistic toddlers were subjected to extremely stressful scenarios in which their responses were measured and recorded. It seemed inane to me: what was the point of a paper that offered no real insight into psychology, and only served to terrorize toddlers? How did the ethics board within Yale approve an experiment that subjected 64 autistic and neurotypical toddlers to traumatic experiences (3)? After some reflection, I came to a conclusion which revolted me: the tactics this research utilized were only possible because the research was focused on autistic children. This left me with more questions than answers. Why was this research acceptable when autistic children were being studied? Why was the research focused towards a vague goal of diagnosing autism at an earlier age, and was that difference even necessary to know? My revulsion, coupled with my bemusement about the research’s purpose, compelled me to investigate further. This compulsion was brought about by a simple truth: within those toddlers, I saw myself. I have been diagnosed with autism for most of my life. This is a fact I hold with shame every day, terrified that the people around me will figure out this secret. However, upon reading the paper presented by Yale, new questions began to arise amidst my shame. Why did I feel such shame about a fundamental aspect of myself? More importantly, why was a fundamental aspect of myself treated in such a clinically cruel manner for such small insights? As these questions were stewing in my head, I observed multiple instances of autism represented within the media. This was by no means intentional, as part of my internal shame dictated that I stay away from such representation: I was not looking for examples; rather, I was thinking about my affective response to the Yale study. However, as I watched, I grew increasingly curious about how my identity was represented in popular media no less than in academia. This was not how I acted within the world, and this was not

how any other autistic people I knew acted. No, this was a perversion of autism presented as genuine. Was this truly how society at large regarded autistic people? My research quickly expanded past my original understanding of the topic. This paper was originally intended to talk about the ableism within media and medicine. As I pursued my research, however, I found a truth that became impossible to ignore: media and medicine worked not only to suppress, but to erase autistic voices. In popular media, this erasure was accomplished through creating a stereotype that autistic people are unable to think and act intelligently for themselves, functionally smothering voices from the autistic community with a veneer of pity, while medical research sought to remove autism from humanity through genetic sequencing. For those already born with autism, medicine provided therapy that did nothing but smother it. These truths left me terrified of my work. Yet, I could not leave it unfinished. Others have said what I say in this paper, but this work is an admission to myself of these truths. Hopefully, they provide the same clarity to the reader. I feel that it is important to mention my own place within the autistic community. I do not speak for every autistic person, nor do I make an attempt at this. Like neurotypical people, the lives and experiences of autistic people are vastly varied, and no two experiences will be alike. I am fortunate that I underwent more ethical means of therapy, and can function entirely independently. This independence undoubtedly shaped my views. Others, with similar or completely different backgrounds than myself, have every right to disagree. I firmly believe that my arguments showcase a widespread ableism, but it would be hypocritical to assume that all autistic people would feel the same. Indeed, the insistence that the broad spectrum of neurodivergence must fall under a finite set of similarities is a large hindrance for the neurodivergent community. It would be useless to say this work was not mentally taxing. Every time I began working, I had to plunge myself into a world where it seemed everyone wished I did not exist. Outside of strict research, I was trapped within my work. Every aspect of my life became defined by my autism, and because of the shame of my identity, I could not confide in my peers also working in research. Besides a handful of




exceptions, no one knew of my connection to my work. And without knowing that connection, none of my passion and drive for my research could be understood. I was alone, plunging ever deeper into a cold, hateful world, and the only escape was to keep plunging. I say all of this not to be melodramatic, but to emphasize a point: this research is written out of anger at the perceptions of autism within the media, and the treatment of autistic people by medicine. This anger drove me to this research, and without it my research would never have finished. It is necessary to the truth of my work. The media has created a false perception of autism, a perception so twisted that the medical field feels obligated to cure minds that it does not wish to understand, or that it obstinately misunderstands as anomalous and deficient. In other words, the autistic community is undergoing the violence of eugenics because of the perceptions media has created, and these perceptions have been perpetuated by medical study. It is my hope that I can convince you of this truth. To discuss anti-autistic behavior, the media must be examined first. It nearly universally creates an understanding of autism that is based on almost completely incorrect notions, which serve only to further the misunderstanding of autism among the wider public. It may seem judgmental to imply that the entirety of media is inherently anti-autistic. A word more specific than media could be used to categorize those areas of entertainment that create autistic characters in a less than flattering manner. However, the choice to categorize anti-autistic media under the conglomerate of ‘media’ is a specific one. It is done to highlight the scale and all-encompassing nature of this ableism, and to highlight that the current form of autistic portrayal in the media is almost universally ableist. A more specific term would only serve to understate this enormity, which I will attempt to convey throughout this paper. Before talking about the tropes and stereotypes used to convey autism, I think it is prudent to understand why autism is even portrayed so specifically within the media. This was not always the case. Characters throughout literature have been represented with mild to severe autistic tendencies, which were often associated with profound artistic or scientific gifts. However, the first popular media that explicitly featured an autistic character was Rain Man, produced in the late 1980s. Rain Man’s central character, Raymond Babbitt, is an autistic man with an extraordinary ability for numbers, statistics, and calculations. He was based on a real person named Kim Peek, who had by all accounts an almost superhuman ability to recall facts. However, Kim Peek was not autistic, but rather had FG syndrome. Rain Man, the public’s first great introduction to autism, based its portrayal of autism upon a man who was not autistic (Opitz et al. 146). However, the inaccuracy made little difference, as Rain Man went on to win 26 awards, including four Academy Awards, and was the highest-grossing U.S. film in 1988 (IMDb). This was the first time autism had been explicitly portrayed, and people

were enamored by it. Of course, given its monstrous success, it would not be the last. The next iteration of autism in the media was the 2003 book The Curious Incident of the Dog in the Night-Time. Like Rain Man, this sold unfathomably well, selling over 2 million copies and inspiring a play adaptation that won five Tony Awards. However, unlike Rain Man, the book’s author, Mark Haddon, has publicly stated that the book is not about autism, but rather ‘self-discovery’: a ‘self-discovery’ that included the words “Asperger’s syndrome” on the cover of the first edition while remaining ignorant of autism and its symptoms. During a 2003 interview with NPR’s Fresh Air, Haddon highlighted his apathy towards true autistic representation: “I have to say honestly that I did more research about the London Underground and the inside of Swindon Railway Station … than I did about Asperger’s syndrome” (Fresh Air). In my opinion, these works started the craze of creating autistic characters in the media. Both Rain Man and Curious Incident proved that representations of autism don’t need to be accurate; rather, they only need to confirm the biases of the public. If that is done, it sells. Autism is now a trope, a stereotype used by the media to produce cheap and profitable characters that have no relation to the actual diagnosis: characters who are lauded as underdeveloped but strangely human, much like animals on display. This tokenization of autistic people uses the incorrect notions which were created and developed throughout the past decades and have culminated in stereotypes that consider people with autism to be largely unable to care for themselves and broken in some way. This creates and perpetuates negative and harmful understandings of autism. Sometimes, autism is peddled as a reduction of humanity, a reduction into something unfeeling and unable to survive in a human world. This stereotype usually places the autistic character as a supporting character to the protagonist, the autism being a character flaw that the protagonist either works around or learns to live with: a means of “self-discovery” predicated upon the negation of the seemingly unimaginable autistic self. This treatment of autistic characters is harmful and reductionist, and only serves to further ableism within a greater society by separating autistic people into the category of ‘other’: a category that places them squarely outside the bounds of humanity. This has very real effects on the perception of autism, especially within medical spheres, which will be explained later within the essay. One of the most telling examples of this stereotype is the portrayal of Sam Gardner from the Netflix show Atypical. Atypical aired in 2017, was nominated for a Peabody Award in 2018, and had middling success with a non-autistic audience. Autistic people, however, criticized the show for having no autistic voices on set. No one in the original cast is autistic, and while some members of the crew had autistic family members, no one within Atypical had a diagnosis. The choice to name a show about the life of an



autistic person Atypical indicates the direction this show takes, implying that autism is a deviation from the normality or typicality of thought. Of course, this implication is not flattering, as the rest of the show confirms. Sam is portrayed in an almost sociopathic manner, with no understanding of any social cues, be they verbal or non-verbal, and a complete lack of moral understanding. The autism presented within this show is one that is utterly incapable of independence, a fact that is hammered in by the supporting cast explicitly controlling his actions and how he is perceived by the outside world. His mother restricts him and even states that her child can’t function. This blatant ableism is met with sympathy from the other characters, unsurprisingly. After all, this is a confirmation of the general stereotype. His sister is portrayed as a driving force for his own growth, but actively works to suppress parts of his personality to make him more ‘presentable.’ This is another form of ableism, as it seizes control of Sam’s life from him; moreover, it also carries with it the implicit assumption that the personality traits related to Sam’s autism are traits not desirable to the world, reinforcing to the audience that autism is something that is separable from normal, neurotypical society. The central conflict between the parents is over difficulties with Sam. While the parents of autistic children do undoubtedly have unique struggles in childcare, promoting Atypical as a show about autistic people, then conspicuously showing how a central struggle for the lives of others is based solely on the otherness of autism, only furthers the stereotypes that portray autism as a disease for the neurodivergent and a problem for the neurotypical. However, what truly confirms the ableism of the show is the story’s treatment of Sam’s autism. Sam’s autistic tendencies become a laughing stock. From strange mannerisms to his coping mechanisms for dealing with situations he cannot function within, every part of his autism is mocked by the show for the audience’s enjoyment. This treatment of Sam’s autism is blatantly ableist, yet is completely typical of autistic portrayals. However, in cases where the autistic character takes the role of the protagonist, the basic concept of autism must be augmented somewhat. No one would relate to something less than human. Instead, characters are given a great intellectual gift that serves to counteract their regression from humanity. Autistic people are often deeply interested in certain topics, and tend to have a much higher skill set in those topics. For myself and many others, that speciality is rooted in STEM. The issue with savantist stereotypes is that the character’s ability is their only positive strength, and every other aspect of their personality is severely stunted. The autistic people that I know, myself included, have interests and specialities, but rarely do these interests replace other skills. The insistence upon somehow ‘balancing’ an autistic character’s abilities with severe disabilities only serves to cast an autistic character as something less than human, something closer to a machine than a person. This is done

explicitly, to create a character from whom the audience feels utterly alienated. This savant mythos leaves the audience unsure of what the protagonist will do, and thus tension is generated directly from the perceived inhumanity of autism. A telling example can be seen in The Curious Incident of the Dog in the Night-Time, specifically the theater production. The stereotypes used within the play are a great example of how savantist ableism operates. Christopher Boone is portrayed as unable to function due to his autism, and has tendencies that are so severe as to restrict his daily living. These tendencies are almost universally incorrect representations of autism, but as mentioned previously, accuracy was not the point of Curious Incident. However, these suffocating restrictions on his humanhood are supposedly counteracted by his incredible ability to perform mathematics, and this savantism is the driving force throughout the play: the many obstacles in Christopher’s story are directly related to his interest in completing a mathematical test. This narrative motivation effectively centers Christopher’s personality almost entirely on mathematics, and thus the plot is organized almost entirely around his savantism. This duality, of his savantism driving him forward only for his autism to drive him back, perfectly reflects the understanding the media has of autism: autism creates a character that is less than human who is nevertheless narratable. While their differences cannot be corrected, their savantism can be used to move the character into a path or narrative adjacent to, but not a part of, typical stories of a “typical” humanity. Stereotypes such as these create an ableist understanding of autism, portraying autistic people either as unable to understand and survive in normal society or as geniuses who have an almost unparalleled skill in a specific field, but who are hindered by their inability to function because of autism. In both cases, autism is perceived as a disease, and an obvious one to be recognized by popular audiences. This perception is the primary reason for the mistreatment of autistic people. The stereotypical characters are clearly unable to survive and understand the world around them, and this is because of their autism. However, this perception is ignorant of the actual lives of autistic people, an ignorance it reproduces. In the broadest strokes, autistic mannerisms are overplayed to such an extent that they create a mockery of autism. As an autistic person, I frequently find myself relating more to the neurotypical characters than to the so-called autistic ones. The stereotypes do not represent the neurodivergence they claim to represent. In this lies the true danger of these stereotypes: they create an image of autism that is so far removed from reality that autistic people are shamed into silence. And when autistic people do advocate against these stereotypes, the public perception is such that they can never truly be taken seriously. After all, if the media is to be believed, the average autistic person can barely brush their own teeth, much less advocate for themselves. This inability of autistic people to represent themselves, and the perception that autism is a disease,



leads to medical experts who are trained to be deeply ableist, and whose ableism affects the autistic community in deeply harmful ways.

Autism studies are almost entirely based on the implicit assumption that autism is a disease to be cured. This can be easily noted in the scientific terminology surrounding autism. Within the vast majority of autism studies, the term used to describe autism is Autism Spectrum Disorder, a term deliberately chosen to separate the diagnosis of autism from what is considered healthy and right by neurotypical society. Under this framing, it is obvious that the correct way to treat a disorder is to cure it. It naturally follows that people with autism must be studied and analyzed, not to understand how autism functions, but rather to find its causes, and to eliminate them. This is why the Yale experiments occurred. They were not conducted to help autistic children; rather, they existed to discover how and when autism forms, to work towards its removal. The goal of removing autism is not just centered on diagnosis and intervention. Applied Behavior Analysis is a form of therapy designed not to remove autism, but to ‘fix’ an autistic child. The ideals of this therapy are abhorrent. ABA’s entire purpose is to transform an autistic individual into someone neurotypical, removing their individuality and identity. ABA, like many other medical treatments of autism, seeks to eliminate the traits that are seemingly ‘most autistic’ through reinforcement learning. Reinforcement learning is by no means a bad method in teaching children, but the extent to which and the purpose for which it is used in ABA are the issue. As mentioned, it is used to curtail anything that can be perceived as autistic behavior, and in doing so, it forms a child that is primarily driven by stimulus and response. A child undergoing ABA therapy loses their agency as they are required to respond in specific ways to certain cues. And while ABA is known to be effective, what does effectiveness mean? An effective culling of an autistic child’s interests? A suppression of what made a child unique? The aspects of the child’s personality that must be stripped away are determined by their parents. Effectiveness is determined by how well the therapy converts the child into what their parents see as normal, even if this means creating a mask for the child to smother their identity. It is necessary to mention an important trend regarding the medical treatment of autism: namely, the clinical treatment of autistic people in medical studies. Studies of autism purposefully remove the human elements of their autistic subjects, and only discuss the actions of autistic people, especially autistic children, as symptoms of a deeper disease. ABA treats autism as a series of bad habits, and works to remove them. In both cases, the ideal is to strip the person with autism of their individuality, implying that the actions of autistic people are not their own, but are automatic responses to internal stimuli, devoid of autonomy or direction. It is needless to say that this is deeply ableist,

but, considering the general stereotypes regarding autism and the ableism from experts previously discussed, it is hardly surprising. Not only the existing stereotype in popular culture, but the governing assumption in scientific study, is that those with autism cannot think or act for themselves, and the consensus of experts is that autism is a disease. If these points are combined, it is easy to see why the actions of autistic people are treated in this particular manner. This ignorance of autism perpetuated by the media might explain public perception, but it does not excuse the conduct of autism experts. Experts in any field should not rely on popular culture’s perception of an issue; rather, in order to effectively understand autism, experts must understand the people who have it by working with their own self-representations. Talking with autistic people of diverse backgrounds, understandings, and ages, and approaching these conversations without the assumption that autism is something that needs curing, would go far in establishing a better foundation for autism studies, one that wouldn’t work towards diagnosing and removing autism from individuals, but rather towards working with autistic people to help them function and become independent in society. However, at this point, independence is not the purpose of these studies, but rather the removal of autism from society. This effort is only strengthened by the media’s perception of autism, leading the public to largely support these actions to ‘cure’ autism, actions that are undoubtedly eugenic in nature. Accusing the industry of medical research of eugenics is enormous in its implications. However, under the assumption that medical research is actively seeking the erasure of autistic people, by curing or cleansing their traits, many aspects of autism research begin to make sense. In the paper “Attend Less, Fear More,” mentioned in the introduction of this essay, the stated point of the research was to “inform about novel treatment targets and mechanisms of change in the early stages of ASD” (Macari et al. 1). This paper was never meant to improve the lives of those with autism, or to help autistic people better understand themselves. Instead, it subjected both autistic and neurotypical toddlers to expressly stressful stimuli to find a method of earlier discovery and treatment, whatever that may entail. Medical research does not intend to help autistic children, but rather to detect autism at a sufficiently early age to remove it. Autism studies are focused on discovering when autism starts, pushing the trials to a younger and younger age. While this research is from a variety of eugenics-based organizations, none have as much money or influence as Autism Speaks. Autism Speaks is the largest autism research organization, and while its official treatment of autism is one of acceptance, their past and current actions create a different narrative. Until 2016, part of Autism Speaks’ mission statement was to find a cure for autism; it was only altered when autism advocacy groups publicly decried the organization. While their mission statement has changed,



the goal of curing autism has not. This goal has been pursued through a multifaceted approach: working to stigmatize autism as a disability, and to remove autism from the wider population. For eugenic solutions to be acceptable, it must be widely considered that the alternative to eugenics is a life that would be filled with trauma. Autism Speaks has worked towards creating this stereotype through one-sided propaganda. For example, in 2006, Autism Speaks produced a short film which portrayed autistic children not as children, but as burdens and scars that parents must suffer. In one child’s case, she asks her mother how she is doing, and tells her that she loves her, while the mother talks about wanting to kill both her daughter and herself because of autism. The message presented by this film is clear: autism kills a child just as severely as any disease or accident would. It is better to prevent or treat autism than to have to experience autism as a parent. Autism Speaks’ work follows this mantra. Their official goals of research are painfully based on eugenics: categorizing and finding the sources of autism, diagnosing autism as early as possible, and sequencing the genes of autistic people. However, I would like to focus on Autism Speaks’ MSSNG program. The official goal of MSSNG is to provide more personalized treatment towards autistic children. However, given the past and current actions of Autism Speaks, it is clear what the purpose of MSSNG is. A fundamental aspect of genetic sequencing is that it allows parents to view a child’s genetic makeup before birth, giving them the option not to go through with childbirth. Because of the hostile perception of autism, MSSNG will prevent autistic children from being born. Other groups who present themselves as autism support groups exhibit similarly abhorrent behavior. The Autism Research Institute is one such example. Unlike Autism Speaks, the ARI is quite upfront about its goal in its mission statement: to treat autism. Indeed, their official slogan is that “autism is treatable.” As could be expected from a group trying to treat an aspect of humanity, their methods have been met with considerable controversy and skepticism. Until 2011, the ARI funded a program called Defeat Autism Now! (DAN!), a name that is, without a doubt, based upon eugenics. The program itself followed in its name, promoting unsafe and unfounded ideas about the causes and ‘treatment’ of autism, including the claim that vaccines, heavy metals, and dietary imbalances cause autism. Coincidentally, one of the founders of DAN! had positions of power within one company that provided heavy metal testing in urine, and another company that sold nutritional supplements for autism (Quackwatch). As has been seen before, the stereotype of autism was twisted and exploited for money. While DAN! was defunded in 2011, the current research in the ARI reflects many of the same ideals that were seen in DAN!, namely, trying to find the cause of autism through genetics, diet, or brain inflammation. The goal of this research is clear; the ARI even proudly states it as its slogan.

The ARI believes that autism is treatable, and from their previous work, it is clear that their ‘treatment’ involves the removal of autism. Autism Speaks and the ARI were only chosen on account of their prevalence within autism studies: Autism Speaks is the largest autism research organization in the USA, and the ARI provides over $200,000 in funding each year (IACC). The goal of eugenics, however, is a constant throughout most, if not all, autism research groups. The eugenic study of autistic people is not just enacted through discovering a ‘cure’ for autism. There are two ways to remove a group of people from society: physically and forcibly removing the group, or creating a society fundamentally hostile to its existence. Societal norms are a key means by which autistic people are shown that they are unwelcome within neurotypical society. As an example, eye contact is a social nicety that causes many autistic people discomfort. However, in order to be considered part of society, and not outside of it, eye contact is a requirement. Thus, autistic people are forced to sacrifice either their place within society or their own comfort, over something as trivial as a nicety. This trend continues throughout society, ranging from sensory overload to certain particularities, forming a clear and depressing picture: autistic people are, by definition, outside of society, and can never be fully accepted. In addition to a banishment from society based on specific cues of social identity, the rhetoric the media presents has been used to grave effect. The anti-vax movement began because of a fear that vaccines cause autism, and this fear has remained a core belief throughout the movement’s existence. The anti-vax movement assumes that death from measles, polio, or any number of brutal diseases is preferable to a life of autism. In other words, it is better to suffer and die than to be autistic. And people do suffer and die. The stereotype of autism has led to the deaths of countless people, and that number seems to only be increasing. As an autistic person, I can’t help but feel a deep shame towards this. My identity, a fundamental part of me, is twisted in a way to convince parents that their children should rather die than suffer my fate. This thinking contributes greatly to the stigma surrounding autism. If people are willing to die rather than become autistic, what does that say about how autism is perceived? It is important to reiterate that throughout all of this, autistic people are regarded as unable to advocate or even think for themselves. Autistic people are not allowed to participate fully in society. Any work in advocacy is met with the implicit assumption that autistic people are not able to think independently, and thus are unreliable sources for information, much less explanations, of their own experience. Thus, the majority of representation of autistic people comes from the parents of autistic children, which is unreliable. Parents of autistic children have a secondhand understanding of autism, and this understanding is not infallible. Ableist sources such as Autism Speaks can



convince parents of incorrect stereotypes, and persuade them into believing that autism is a thing to be cured. Like an invasive weed within a garden, ableism cannot be removed from humanity, at least not within a single generation. However, it may be possible to remove the worst aspects of ableism. Medical studies that seek to diagnose autism through genetics, or attempts to cure autism, have no place in society and should be pruned away as quickly as possible. These tests do nothing but work towards the goal of eradicating autistic people, and thus must be halted. It is important to mention that not all autism studies aim to cure autism. Studies that seek to help those with autism, or to understand how aspects of autism work, should be lauded. A better understanding of autism allows autistic people to be better understood within society. Medical research on autism should not end, but it must be changed drastically. Within the media, I doubt that the perception of autism will change. The current stereotype of portraying autistic people as not quite human is simply too popular to truly be removed. Autism as a disease, and savantism as a compensation for the disease, will remain a staple of media representations of neurodivergent people for a long time. However, if change were to occur, it would begin through positive, accurate depictions of autistic people. This is a complicated issue, as autistic people are varied, and a well-portrayed autistic character would not be relatable to many autistic people. The assumption that a single portrayal of autism is universally applicable is the reason for the harmful stereotypes that exist today. In order for autistic people to be truly represented in the media, there must be many different people portrayed. Most importantly, however, these characters must be written by those with autism. Anything less would be tokenization, and would only be a repetition of the current stereotypes.

I must take a step back from the current topic. What I have written is intended to be a systematic breakdown of ableism within society. This was done in an overarching sense, as there simply is no other way to approach a topic of such enormity. An explanation of the rationale and aspirations of these groups explains ableism in an academic sense, but not a personal one. The true oppression and shame comes not from large systems of power, but from everyday encounters with peers, strangers, and the autism community itself. Every day, I live with a mask: an aspect of my life that explains my ended friendships, unassigned work, and inability to do tasks that ‘normal’ people do regularly. However, I cannot explain it, because I know what the stereotype is. I know how people will look at me if I were open about my autism: the judgment for the way I talk, the way I act, the things I am interested in, even the way I carry myself. Either they would fit my actions and personality traits within the lump idea of ‘autistic traits’, or they would place them outside it. Both would be a deliberate choice, and both would be a judgment of my identity. Back in my hometown, where my closest friends knew about my diagnosis, I felt the difference in treatment, the poignant exclusion and implicit understanding that, while I was respected, they did not consider me able to understand the finer points of interpersonal relationships. It is the same at NCSSM, where I have heard the usual slurs. As far as I am aware, such comments have not been made towards me; I am fortunate enough to hide my identity, and pass for a neurotypical person. So, instead of facing outright ableism for my autism, I live in fear of discovery; in fear that every minor mistake I make, every mispronunciation of a word, every bad joke, and every interest of mine will expose my autism, and change me from a student, or a friend, or the million other ways that I could identify myself, to solely an autistic person. With this paper, I am giving up my ability to live as a neurotypical person in society. A secret that has ended friendships, caused heartbreak, and fueled my self-hatred is now being shared with the public. While I am terrified of the consequences, at least I can finally remove the mask. Autism is not a disease. Autism is a fundamental part of the human experience, and working to suppress and remove autism from humanity is nothing short of monstrous. The medical treatment of autism transforms a group of people into disorders, and seeks to cure them. Above all of this, the search to find the genetic sequence that creates autism aims to remove a group of people from existence. This systematic culling only occurs because we are considered not fully human.


“The Autism Research Institute.” Autism Research Institute, 3 Dec. 2021.
Gross, Terry, and Mark Haddon. “Children’s Book Writer and Illustrator Mark Haddon.” Fresh Air, NPR, Philadelphia, Pennsylvania, 26 June 2003.
Kreidler, Marc. “A Critical Look at Defeat Autism Now! and the ‘DAN! Protocol.’” Quackwatch, 1 June 2015.
Macari, Suzanne L., et al. “Attend Less, Fear More: Elevated Distress to Social Threat in Toddlers with Autism Spectrum Disorder.” Autism Research, vol. 14, no. 5, 2020, pp. 1025–1036.
MSSNG. Autism Speaks.
Opitz, John M., et al. “The FG Syndromes (Online Mendelian Inheritance in Man 305450): Perspective in 2008.” Advances in Pediatrics, vol. 55, no. 1, 2008, pp. 123–170.
“Portfolio Analysis Report.” IACC.
“Rain Man.” IMDb.



The Common Addict
Elisa Kim

The complexities and nuances of addiction have received notably increased attention as concern with mental health has come to pervade seemingly all sectors. The notion of addiction as a disease was not widely believed in the past; addiction in itself was presented and treated in simpler terms. In media, addiction was often formulated as a trait of the “addict” who fit one of a few popular tropes: the cool druggie, the self-destructing alcoholic, the homeless crackhead, and the romanticized tragic hero being some examples. However, as the mental health dimensions and more complex, personal aspects of addiction have been more heavily emphasized in recent years, the portrayal of addiction and its various displays and implications has expanded. With current popular teen television shows such as Euphoria and All American, the portrayal of addiction as it relates to substance abuse in younger people has expanded past the parties and social benefits into the complexities and personal struggles of identification, de-glamorizing the experiences of the addict. High-school characters appear as AA or Narcotics Anonymous members, grapple with the guilt and shame of their disease, and experience extreme withdrawal symptoms while simultaneously managing their daily commitments and circumstances. The expansion of addiction, however, reaches beyond the commonly referenced cases of substance abuse. Addiction through behaviors (e.g., sex or gambling), and even as it relates to more common habits, is being further explored through media. Addiction through mental illnesses such as eating disorders is yet another branch expanding and complicating the portrayal of addiction in media. The increasingly normalized narrative of addiction in media correlates with a more casual definition of addiction. Addiction has come to be casually constructed as an undesirable, difficult-to-control or difficult-to-satiate want for something, whether it be an object or a behavior, including shopaholism, workaholism, caffeine addiction, and beyond. Difficulties arise more commonly in the dialogue over what constitutes addiction and whether the definition of the addict is applicable to everyone, rather than only to sufferers of a certain disease. The disease model, previously used to a partial extent to destigmatize addiction as a personal flaw, is being revisited and revised by popular use into a matter of personality differences. If everyone has something they are addicted to, it only becomes a problematic, “real” addiction when it completely overtakes and starts to destroy one’s life. Some of these addictions, then, are able to be prescribed

self-help books or intentional mindfulness. If self-help is enough to treat “addiction,” this raises the possibility that it would be more accurate to say that this “addiction” is not an addiction at all, but rather an undesirable desire or craving. An addiction, after all, is an uncontrollable desire, which then hinges on proof of an absence of control over one’s behavior. The expanding umbrella of applications of addiction as an idea then necessitates a continued study of the basic concept of the “addict,” bringing the term’s existence into question. Rather than a personal signifier, the term can almost be applied as a segment of the personality. Then, the “addict” even within a socially recognizable “addict” would function as simply a more advertised part of one’s self-identity. This would suggest that everyone inhabits an inner “addict,” which is necessary to the completion of one’s self-image, however this identity fragment is projected publicly and privately. One interpretation of this fragmentation of identity operates within a defined multiplicity of self-identities, as one chooses, consciously or unconsciously, which to display and to what extent at any given instance of self-portrayal or expression. In this research, I intend to apply the multiplicity of self-identities to the debate over addictions as desires versus addiction as a disease. I am interested in habits and desires in contrast to addiction, and in the ways in which they are related to one’s unique multiplicity of identities, because I wish to know how identity affects and is affected by the action-taker/user/addict’s experiences and treatment, in order to understand how these multiple identities can be better incorporated into addiction prevention and treatment practices. First, I will argue for the existence of the multiplicity of identities, creating the identity of the action-taker/user and that of the addict, which I will distinguish from each other. I will then discuss the creation of the previously described self-identities using processes such as romanticization and self-ideation, focusing on the creation of the addict identity in comparison to that of the healthy, desiring identity. Building upon these processes (i.e., romanticization), I will then argue for the effect of one’s current circumstances on one’s identities, circling back to the constructed identity of the addict in comparison to that of the healthy. Lastly, I will draw on these self-identities, as created and built, to describe


the experiences within addiction treatment and self-help, comparing the two in order to better establish the relationship between the addict and the healthy and to better address the treatment of both: one as a disease, the other as personal, undesirable desires.

In order to build an argument upon the separation of identities, one must first argue for the existence of multiple identities within one self. Self-identity is often centered around the question of “Who am I?”, defined by the characteristics one finds in oneself and displays in day-to-day life. Joseph A. Bailey, II, M.D. describes this self-identification process as creating a “complex multidimensional concept with several components...an integrated image of [oneself] as a unique person,” tracing the origins of the word “identity” to the sixteenth century. This denotation of identity “originally referred to a set of definitive characteristics that made a person a ‘natural self’ --a ‘real self’ preserved over time.” Bailey makes the argument that self-identity is how a person reacts to life, regardless of current circumstances. Using this conception of self-identity, I’d like to argue for a continual process of building one’s identity, based on the experiences one lives through, as suggested by Walter F. Kuentzel of the University of Vermont. Kuentzel uses previous research on identity within leisure to build upon the question of “Who am I?” in self-identification and to further the importance of solidifying one’s self-identity. What he proposes is that self-identification, rather than merely shaping how we react to situations, propels certain reactions and behaviors from the self. People react in order to confirm how they see themselves, in an assurance and order he describes as “ontological security,” ontology being the metaphysical study of being (Kuentzel). Building upon this growth-based model of self-identity creation, I assert the multiplicity of selves. Though Bailey states that self-identity is unrelated to one’s circumstances and environment, he defines an “environmental self” which builds upon one’s mental/emotional health, growth, and experiences throughout a lifetime. Several psychological movements support this model, as Salgado and Hermans point out in their study of multiplicity within the dialogical self. Cognitive psychology builds upon the idea of the “multifacetedness” of the self, psychodynamic psychology utilizes a broader outlook on the selves within the self (i.e., id, ego, superego), and more modern social constructionists see a “multiphrenic and relational self.” Salgado and Hermans believe that the “I” one uses to denote oneself in conversation allows for both multiplicity and unity of one’s self-identity across these dimensions (Salgado). In addition to the existence of these multiple self-identities, there exists relational interaction between them, which may often involve conflict. As Carla Cunha outlines in her research regarding organization within the dialogue of self, there is a constant reorganization of the fragments of

one’s self, a constant evolving towards the “future self.” In the process of imagining the future self one desires, there often exists an “I” in relation, in conflict, to an “other” even within one person, thus confirming the co-existence of multiple selves (Cunha). Further, these selves may conflict over a power struggle even within one situation, one dialogue. This is exemplified in speeches, as studied by Dorien Van De Mieroop in the International Pragmatics Association. There exists a comparison between the differing identities, comparative to the process of standardized relational pairs. In a speech, for example, the speaker may switch from expert to equal or advertiser to non-advertiser, whether explicitly or implicitly (Van De Mieroop). In a more basic application, in a study of Asian Canadians and their value systems. When one identified as Asian, their ranking of values were different in comparison to their rankings when identifying as Canadian, suggesting that “distinct value systems can exist within an individual as a result of different contexts and even different self-states” (Stelzl). Utilizing these relationally defined identities, one is able to create the identity of the addict in addition and contrast to that of the healthy user who desires, rather than obsesses over. The addict identity has dialogical roots in the early nineteenth century, as alcoholism was seen as having causes outside of medical disease. One attribution of alcoholism was “a product of inherited traits,” and another was “the result of interactions between persons and immoral or ‘degenerate influences,’” alcoholics often being seen in discourse as “degenerates” (May). The shared discourse of medicine, desire, and identity continues within the topic of sexuality as well. In building the identity of the healthy user/ action-taker, I refer to the research conducted by Kristin S. Scherrer on the asexual identity. A lack of interest in sex was what over-half of participants in one part of the study attributed to their asexuality. Within many participants’ descriptions of their asexuality, they feel the pressure of the desire for sex being “natural and essential” (Scherrer). The existence of the desire often correlated with the selfprescription of the identity. A fine line is drawn where desire can be seen as purely desire, and when desire confines oneself to an identity which is seen as diseased. Obvious differences exist between sexuality and addiction, with the latter obviously being the only one medically distinguished. Here, the two are compared only within the comparisons of often undesired desire (undesired as defined by current social norms) in each scenario. The differences between the societal connotations of the addict identity and the recreational identity are highlighted in one study by Daniel Alan Crutchfield Jr. and Dominik Güss. This research has shown that achievement i.e. vocational and self-clarity are both associated with the non-addict, who can beat addiction. The addict identity alters to become, or at least surrender to, the non-addict identity. The struggle, Crutchfield Jr. and Güss argue, lies in identity commitment to the non-addict identity, against risky behaviors. Similarly, the mental



change from user identity to recovery identity “accounted for 49% of the variance in life satisfaction,” further emphasizing the fundamental differences between the two conflicting identities (Crutchfield).

The manners in which these identities are created need to be further explored in order to advance the argument regarding the use of these identities within unwanted desires, whether of an addictive nature or not, starting with romanticization and glamorization. One way in which one figures a part of oneself and one’s private and/or public image is through fashioning one’s behaviors, habits, and lifestyle after an idealized model. Two instances of this fashioning are exemplified here. In the first case, fanfiction authors, who are often younger girls, are given the ability through fanfiction to deepen and complicate sexual identity. In fanfiction, the author can insert themself into a popular TV series, a video game, or even a celebrity’s life, often as a romantic interest. Within this insertion, the authors often show a questioning and subversion of gender and sexual norms on the foundation of altering the original story/reality, often “creating spaces for multiple subject positions.” These unique, multiple subject positions then alter the author’s romantic and sexual realities as well, as they have literally written their imagined destinies. In the second case, female psychiatric patients will seek to present, or even alternatively present, themselves in a “virtual social identity” as societally idealized and privileged images of women, and their identity and the traits which make up that identity change through these idealizations and the behaviors that follow from them. The researchers of this study conclude that “Voicing an ideal may be a way of attempting to resolve (or conceal) the many contradictions of socially negative traits…for some of these women, voicing these idolizations... may be part of the process of shifting from patient to person and re-engaging in the performance of different gender identities” (Caldas-Coulthard). Considering the conclusions of these two cases, the processes of glamorization and romanticization can be shown to effectively change the ways in which one consciously works to alter their private and public identities, affecting their behaviors and lifestyle as well. The fantasized processes of romanticization and glamorization applied to one’s identities, as outlined by the two cases above, can further be applied to the ideals figured in popular media portrayals, especially those relating to mental disorders and drug use. With the glamorization of police work in modern television and movie programs, one study showed that several officers attributed their career choice to these romanticized media portrayals. However, once on the job, many officers admit to a disappointingly unglamorous reality. According to the study, “Realising that the work is often far from the glamorous images portrayed, the investigators in this study found alternate ways in which to derive satisfaction from their jobs...However, in the Heinsler study

detectives found clerical tasks to be unexciting and thus drew upon the glamorous images of their profession portrayed in the media to reframe clerical duties” (Huey). As applied to mental disorders, another study illustrated that the glamorized projections of mental disorders on social media have increased the normalization of these disorders, which in turn increases the behaviors characteristic of them, as viewers “...now see mental disorders as relatable, normal and desirable, while people actually diagnosed with any mental health disorder might get a false impression that what they are experiencing is normal and common,” pointing out the disparities within this romanticization process while recognizing its evident effects (El). Similarly, a varied response was seen in the glamorization of drug use in media in one study, suggesting that societal assumptions about young people’s perceptions of drug use may be “over-simplified and exaggerated.” Participants in a study involving the romanticism of lyrics related to cannabis in French rap debated whether drug use was even sensationalized, or rather normalized, through this glamorization. For example, in the case of headlines regarding the celebrity Amy Winehouse’s use of “crack” cocaine, young people often recognized and pointed out the negative aspects of her drug use, seemingly deglamorizing and disapproving of sensationalized substance abuse (Shaw). Another study focused on heroin use in movies produced in the 1990s, however, found that the increased, glamorized portrayal of heroin use in film correlated with an overall increase in heroin use beginning in that decade (Tonkovich). These studies highlight the complexities of glamorization and romanticization affecting the behaviors and lifestyles of media consumers and participants within society in dramatically different ways, while nevertheless confirming the effects of these idealizations projected onto and within modern society. The glamorization and romanticization processes affecting self-ideation, as previously outlined, can also be applied to the process of speaking or thinking one’s identities into existence, which can catalyze the creation of an addict, a healthy self, or a combination of the two. One study found that one of the three key parts of an addict’s narrative of recovery was the reconstruction of their identity. As Patrick Biernacki and Dan Waldorf extensively studied the process of addiction recovery, they concluded that recovery is often framed within the dialogue of the “management of a spoiled identity,” where the decision to end drug use arises from the negative interference of the addict/drug abuser identity into other seemingly unrelated identities (i.e., the parent, romantic partner, or employee identities). Recovery begins with “The individual coming to an understanding that his or her damaged sense of self has to be restored together with a reawakening of the individual’s old identity and/or the establishment of a new one.” James McIntosh and Neil McKeganey built upon Biernacki and Waldorf’s findings with their own study of addicts’ narratives of drug use recovery. McIntosh and McKeganey discovered that many


of the participants felt that their reframing of their drug use as negative often stemmed from a seeming reawakening to the “‘true’ nature of the drug using lifestyle and its ability to distort reality.” There was much discussion of the discrepancies between the self “at heart” and the self who did these awful things, and between the self “at heart” and the self reflected back by others and society (McIntosh). The recovery of drug addicts showed a distinct separation between the faulty, addicted identity and the true, renewed recovery-based identity.

Similarly to these applications of romanticizing processes to the self-ideation and creation of one’s identities, the traits and circumstances which are the foundation for or subject of this glamorization/romanticization often influence, and are influenced by, one’s proposed self-identities more directly. The continual effects of one’s past, collective self, and current situational circumstances on habits and behaviors in turn create these multiple selves, which serve different purposes in different settings. In a study investigating “The tension between the private self of the person with an ileostomy and their public social identity as an ileostomist,” there is a focus on the social perception of a lack of bodily autonomy, especially with younger, more socially active patients. In an ileostomy, performed when the large bowel and rectum must be removed, a patient receives a surgically created opening fitted with an appliance which collects waste. Though the appliance is often hidden under everyday clothing, narrative evidence showed an awareness among subjects of being undesirable and a longing for their past selves, which they had taken for granted. They ate differently to avoid the complications of having the device. One subject felt that her whole world was crashing. Many of the narratives described “psychological barriers in sexual relationships,” where criticism of the appliance and sexual attractiveness took on a more personal, emotional meaning. Patients began to doubt seemingly unrelated and nonphysical aspects of themselves. Even when the ileostomy patient may appear “normal,” they may have to decide whether or not to present as normal. According to the study, “one of the tensions in self presentation is between revelation and secrecy.” The knowledge of the change in one’s physical appearance, even when unseen by the public, was shown to effectively enact change in one’s self-image overall (often inducing a shameful, confidence-lacking, and incompetent image), which often altered the way in which one presented oneself as a social identity. Rather than simply inducing processes of victimizing oneself as experiencing tragedy, the mere knowledge of the change was able to produce conflict in how one manages to present oneself by altering the way in which one behaved (Kelly). And in the opposite direction, the multiple selves are also able to enact change on one’s behavior and individual

traits. In one study researching how self-identity affects purchase intention, the self-identity was shown to predict the purchase intention dependently and independently of other “behavioural determinants” which help define one’s self. Theory of Planned Behavior, which states that in addition to attitudes towards a behavior and subjective norm (a factor of social pressure), perceived behavioral control, defined as “the person’s beliefs as to how easy or difficult performance of the behavior is likely to be”, is also a factor in predicting behavior) explains some of these “behavioural determinants.” Especially in products consumed in a social context i.e. “being trendy,” these outside determinants were highly useful in prediction of purchase intention. The most significant in predicting the intention in this study, however, was perceived behavioral control “by far.” And though the study concluded a “significant, but not very strong” relationship between self-identity and purchase intention and a “weak” relationship between self-identity and attitudes, it was also stated that “self-identity represents a useful predictor of purchase intention.” This relationship must be viewed not just directly or with the existence of a “mediating effect of attitudes,” but rather as a “dichotomous” “relationship between self-identity and purchase intention” (Puntoni). More strongly supporting this relationship, another study focusing on TPB (Theory of Planned Behavior) and planned behavior with entrepreneurial self-identity found that “Self-identity predicted founding intentions, above and beyond the effect of the TPB variables. Moreover, self-identity showed a characteristic moderating effect with TPB-intention predictors. Their effect was weaker or even zero at low levels of self-identity,” and that “Self-identity was predictable by past behavior, personality structure, recalled adolescent competencies, and early parental role models. Moreover, an engagement in entrepreneurial activity led to an increase in self-identity over time.” The TPB-intention predictors ie attitudes, identity, and behavior ie purchase intention enabled and increased each other’s effects. In this research, self-identity’s correlations with “attitudes, norms, and perceived behavioral control” were high, between 0.44 and 0.61 (Obschonka). The self-identities’ abilities to affect each other has similarly been found to be of high effect through recent research. Returning to consumer-based applications, research done at the University of Groningen in The Netherlands found supporting evidence for their prediction “that environmental self-identity is related to one’s obligationbased intrinsic motivation, feelings of moral obligation, to act pro-environmentally, which in turn affects proenvironmental actions.” This motivation strengthened the relationship between the environmental-based self-identity one claimed and the pro-environmental actions/purchases one took to bolster this identity. The “personal norm” of the environmental attitudes one had were constructed under the influence of the created environmental self-identity and in turn predicted pro-environmental actions, such as



the use of green energy. The personal norm was relating to buying a more expensive, sustainable option. However, it should be noted that the study also concluded that there was not a significant causal relationship of past behavior on current choices (van der Werff). Similarly, another study on self-identity in environmental consumerism focused on fashion marketing showed that the “38 percent of consumers who found used organic cotton content salient had positive attitudes toward organic and sustainable agriculture, preferred to ‘buy locally’ and had a strong selfidentity as environmental, organic, and socially responsible consumers.” Agreement with statements concerning being ethical ie ‘I think of myself as someone who is concerned with ethical issues’ was “correlated with behavioral intention to purchase fair trade groceries (r=0.25),” (Hustvedt) further confirming the multidirectional relationships between self-identity, attitudes, and actions/behavioral intentions. Another study on bilingual Chinese college students found that in the context of Chinese EFL (English as a Foreign Language), similarly, learning a new language affected participants’ self perceptions as well as how they culturally identified. Their participants seemed to gain behaviors and attitudes, in addition to the maintained native beliefs and cultural identities they came in with. Thirty-seven percent of subjects agreed with the statement: “As my ability of appreciating English literature and art increases, I have become more interested in Chinese literature and art.” The study also found that for this research, the English majors’ self identity changes were higher than those of non-English majors and that those of females were higher than those of males, suggesting nuanced effects of the fragments of these cultural self-identities (Yihong). The Chinese and English sub-identities could further be affected by gendered and academic differences in how these participants identified themselves. Within a larger context of organizational identities, which applies to a group of people, rather than the individual, a similar theme was found in research conducted by Timothy Kuhn and Natalie Nelson of the University of Colorado at Boulder. Kuhn and Nelson investigated a reengineering of individuals’ self-identities based upon a set organization-wide change which affected the group identity. The results showed that the individual group members more centralized to the organization had more similar prioritized social identities, that many of the group members’ individual identities were influenced by the group identity, and that following the organizational change, members were more alike in their other sub-identities (Kuhn). Within the context of addiction and the modeling of the addict and healthy/recovering identities in relation to each other and other socially categorized identities, a review of the drug abuse and identity in Mexican Americans proposed a four-stage model for the creation of the addict and later the recovering identities. The first stage presents the “casual user,” the second names the “drug addict” at the start of addiction treatment, the third alters the addict through

treatment into a “recovering addict,” and the fourth stage takes this “recovering addict” identity outside of treatment into the outside world, where it is met with more contention and conflict within sober societal contexts. Beginning with the first stage, three notable subsets of the drug use identity were found: “the vato loco (crazy guy), the pinto (prison inmate), and the tecato (drug addict, primarily addicted to heroin).” As the user ages or reaches life events like marriage, they will often “change identity and behavior.” Age tended to affect how drugs were used (i.e., needle sharing in younger populations). In adolescents, social identity within the school popularity hierarchy complicated drug use further, tying each social category to a certain substance: “‘cowboys’ use chew, ‘stoners’ smoke pot, ‘jocks’ drink beer, ‘cool dudes’ smoke cigarettes, while ‘waste bags’ use the ‘harder stuff.’” The popular identity one belonged to, or rather ascribed to oneself, needed to fit the idealized image of that identity, which was romanticized in a way by that group. These identities would pressure an individual to fit in, thereby selecting the substance to be abused. Long-term, this substance abuse, whose type and pattern were to an extent chosen by the group in which the individual fit, would lead to different levels of addiction and addiction-based identities. Similarly, race and ethnicity had an effect on substance preference, confirming the variability in identities which could affect the addiction-based identity and the behaviors and lifestyle associated with it. As the identity staged towards recovery, the review found a strengthening of Mexican ethnic identity alongside the correlated measures of increased social responsibility, personal health, and maturation. The other identities, of ethnic, social, and physical-health-related bases, were, then, being affected by the “addict” and “recovering addict” identities (Castro).

As the “addict,” “recovering addict,” and “healthy” identities have now been investigated, the next logical step is to utilize these identities within treatment methods, working towards better discerning addiction treatment from self-help and finding a more accurate working relationship between addiction and healthy, if undesirable, desires. In other words, what must come next is the application of these defined identities to treatment better suited to each situation. Within the multiplicity of identities, the addict’s experience can be studied in comparison to the self-help experience. As evidenced in a study on food addiction in particular, the addiction label could either reduce or increase stigma regarding obesity, framing it as an external cause or a “behavioral causal attribution,” respectively. This study found that the food addict label increased stigma surrounding obesity: “In the context of attribution theory, the food addict label may have increased blame toward obese individuals by attributing weight to eating



behavior, where food addiction may be interpreted as a euphemism for overeating.” In addition, this effect may be seen vice versa, from obesity to food addiction, possibly extending the reach of the stigmatization from a societal viewpoint to “perceptions of competency for more solitary acts” (DePierre). As with the ileostomy patients, those dealing with food addiction may experience a significant decrease in confidence in other aspects of their lives, leading to differing behaviors and attitudes in seemingly unrelated areas. In contrast, self-help movements have been seen to alter the individual’s identity by focusing more on the collective identity, working to “translate negative and stigmatized emotions and identities imposed by dominant groups and classificatory schemes embedded in modern institutions, such as medicine, psychiatry, and the criminal justice system, into positively valued self-definitions” (Taylor). David Gauntlett, in an article on the “pursuit of a happy identity” within self-help, suggests that self-help books present “one of three challenges to the readers’ own narrative”: 1) making the narrative of oneself stronger, increasing the sense of personal power and control; 2) transformatively rewriting oneself “to become a new, strong, positive person”; or 3) amending one’s narratives so that one can accept one’s world and life more happily. Often, the reader is positioned as a “life manager” or something along those lines (Gauntlett). Though there exist strong similarities between the addict’s experience within a multiplicity of identities and the self-help experience within one, the differences between the two suggest a more negative, harder-to-control experience for the addict. As the client of the self-help movement works to pursue a happy identity, the addict struggles to pursue any identity which overshadows the demanding addict identity. What comes next is utilizing these experiences to better, perhaps more descriptively, find a relationship between desires/habits and obsessions/compulsions/addictions, in other words, between healthy and unhealthy behaviors and lifestyles. In a study focused on group membership and social identity within addiction recovery, identity preference was found to be correlated with increased self-efficacy, which correlated with fewer addiction-based behaviors (i.e., relapse) and increased sobriety, emphasizing the utility of identity labels, which here included terms such as “recovery” or “ex,” to further secure societal positioning as healthy rather than diseased. The recovering addict identity is created in recovery to oppose the identity of the addict. Similarly to the self-help movement groups, the group identity of recovering addicts as an organization was also helpful in allowing for more peer support in recovery. The importance of “relative (not absolute) levels of identity” was highlighted, as the relation-based identities correlated with higher rates of recovery and increased health overall, though the definition of recovery as created by those in recovery remains highly subjective (Buckingham). The term “addiction” itself is derived “from the now obsolete English verb


‘to addict’, which meant ‘to bind, attach, or devote oneself or another as a servant, disciple, or adherent, to some person or cause.’” However, the term wasn’t connected to substance use and abuse until the twentieth century. As the umbrella of potential addictions increases in size, it has been argued that the concept of free will is “endangered” and essentially complicated as addiction takes on a moral position of “condemnation -- about supposed excess,” even with activities as necessary and simplistic as work or buying goods (shopping) (Bailey). A Deleuzian theory frames addiction more simply as a product of biopsychosocial desire, which may be problematic and controversial in portraying the possibility of becoming addicted to anything. However, it separates addiction into a search for pleasure versus a search for an effect; since the desire is simply defined as desire, addiction is then made complex by the circumstances, i.e., “alcohol at parties makes it possible for a shy girl to talk with boys” (Oksanen). Highlighting parallels between addiction (substance or behavioral), “normative compulsive behaviors...falling in love,” and the malleability of the prefrontal cortex, where these processes occur, Marc Lewis proposes to confront this complexity by identifying “at least three specific mechanisms that accelerate our attraction to addictive rewards and entrench addictive activities-without making it a disease”: 1) a “tendency toward delay discounting...narrowed beam of attention toward imminent rewards,” 2) “motivational amplification by addictive rewards,” and 3) a “fusion between personality development and the consolidation of addictive habits...crystallization of depressive or anxious personality traits.” Lewis’s working definition of addiction is “a habit that grows and self-perpetuates relatively quickly, when we repeatedly pursue the same highly attractive goal” (Lewis). With these three highly differentiated definitions of addiction, the addict identity is forced to work upon an unclear foundation of what it is working against. Possible implications of the current concepts of the multiplicity of selves within addiction and undesired-behavior treatment and prevention must then hinge upon the variance of experiences of the multiple self-identities. As Scott Kellogg wrote in his article “Identity and Recovery,” “These identities may have varying levels of salience in different situations and they may at times conflict because they each offer alternate and possibly incompatible ways of both defining a situation and guiding behavior (Shibutani, 1955/1968). However, for the stability of the personality, identities will be organized into a hierarchy that will reflect their overall impact on behavior and perception.” Kellogg suggests three paths to identity restructuring: 1) reversion, 2) extension, and 3) emergence, which all typically lead to a single identity transformation and a reinforcement of this identity, though there is a chance of denial (Kellogg). In a study focused on the Elaborated Intrusion (EI) Theory, other researchers propose training attention away from cravings via visuospatial tasks and visual imagery, as the EI Theory “defines cravings or desires as affectively laden cognitive



effects, where an object or activity and associated pleasure or relief are in focal attention,” with these cravings being differentiated from intentions or otherwise controllable behaviors (May).

The findings of this research support the model of the multiplicity of identities presented. The self can be fractured and split into a myriad of sub-identities which may at times conflict with and at times support each other. Two of high value in this research are the identities of the addict and of the healthy person with habits, behaviors, and objects of favor. Finding the differences between the two, which essentially contrast and diverge, offers unique insight into the deconstruction of how one chooses to identify oneself and how this affects and is affected by the identities which are continually built over time, impressed by the experiences one lives. These experiences can be influenced by cognitive processes such as romanticization and imagination, which create altered realities for the person in question. The subjectivity of health with regard to possible dependencies and addictions is often based on other identities and can affect the treatment of these undesirable behaviors and traits. Much of this is also due to the discourse of addiction, which necessitates a differentiation from normality. Addiction in contrast to desire presented interesting findings in this study. One aspect of high value lies in treatment through ideas such as self-help and self-help books, or medication and psychiatric treatment. The boundaries of the human experience and of what is considered “normal” differ by person and are continually altered for each individual through their unique experiences and the environmental factors which influence the ways in which they interpret such concepts in society. Questions which may arise from this study range from philosophical to pharmaceutical. If one finds oneself in the position of the addict, how may treatment be made more accessible through cognitive pathways and self-introspection into how one views oneself? Evidence here suggests that an identity in which one can change and no longer be an addict, as well as an identity which supports self-responsibility and the ability to control oneself, can be helpful in treatment. These findings work against theories which support a more environment-based blame for unwanted desires such as those often found in addiction. The language we utilize to describe our unwanted desires could also prove helpful (i.e., not so casually comparing them to addictions) as the question of whether addiction is a desire becomes less contested. Some habits and desires must essentially be labeled and categorized as controllable, as the resources available must be used conservatively so as not to produce stronger fragmentations within one’s self-image and self-identity(ies). Further exploration of these topics would certainly influence the therapeutic and pharmaceutical treatments for

addiction as currently defined. In addition, more detailed inspection of the lines between desire and addiction, and of how those lines darken or blur, could potentially prove helpful in determining the most beneficial treatments for these undesirable habits and desires, whether or not they are seen as products of mental illness. A question I would particularly like to raise is to what extent these undesirable behaviors and longings for objects and people would extend to a certain personality. As addict-prone traits have been identified and postulated upon in recent years, a certain weight is placed upon the identification and prevention of these behaviors, whether classified as mental illness or not, so as to allow for more control of the human experience within one’s own life. How, for example, does one account for one’s fascination with danger and “risk-taking”?


Bailey, Joseph A. “Self-Image, Self-Concept, and Self-Identity Revisited.” Journal of the National Medical Association, May 2003.

Salgado, João, and Hubert J M Hermans. “The Return of Subjectivity: From a Multiplicity of Selves ...” ResearchGate, E-Journal of Applied Psychology, July 2005.

Bailey, Lucy. “Control and Desire: The Issue of Identity in Popular Discourses of Addiction.” ResearchGate, July 2009

Scherrer, Kristin S. “Coming to an Asexual Identity: Negotiating Identity, Negotiating Desire.” NCBI, U.S. National Library of Medicine, 1 Oct. 2008.

Buckingham, Sarah, et al. “Group Membership and Social Identity in Addiction Recovery.” ResearchGate, Jan. 2013

Shaw, Rachel L., et al. “‘Crack down on the Celebrity Junkies’: Does Media Coverage of Celebrity Drug Use Pose a Risk to Young People?” Aston Publications, Aston University, 2010.

Caldas-Coulthard, Carmen Rosa, and Rick Iedema. Identity Trouble: Critical Discourse and Contested Identities. Palgrave Macmillan, 2008.

Stelzl, Monika, and Clive Seligman. “Multiplicity Across Cultures: Multiple National Identities and Multiple Value Systems.” Sept. 2009.

Castro, Felipe G., et al. “Drug Abuse and Identity in Mexican Americans: Theoretical and Empirical Considerations.” ResearchGate, 1991

Taylor, Verta, and Lisa A. Leitz. “From Infanticide to Activism: The Transformation of Emotions and Identity in Self-Help Movements.” Chapman University, 2010.

Crutchfield, Daniel Alan, and Dominik Güss. “Achievement Linked to Recovery from Addiction: Discussing Education, Vocation, and Non-Addict Identity.” Alcoholism Treatment Quarterly, Taylor & Francis, 14 Nov. 2018.

Cunha, Carla. “Constructing Organization Through Multiplicity: A Microgenetic Analysis of Self-Organization in the Dialogical Self.” International Journal for Dialogical Science, 2007.

DePierre, Jenny A., et al. “A New Stigmatized Identity? Comparisons of a ‘Food Addict’ Label with Other Stigmatized Health Conditions.” APA PsycNet, American Psychological Association, 2013.

El Habbal Jadayel, Rola, et al. “Mental Disorders: A Glamorous Attraction on Social Media?” ResearchGate, 15 Jan. 2018.

Gauntlett, David. “Self-Help Books and the Pursuit of a Happy Identity.” 2002.

Huey, Laura, and Ryan Broll. “‘I Don’t Find It Sexy at All’: Criminal Investigators’ Views of Media Glamorization of Police ‘Dirty Work.’” 3 Dec. 2013.

Hustvedt, Gwendolyn, and Marsha A. Dickson. “Consumer Likelihood of Purchasing Organic Cotton Apparel: Influence of Attitudes and Self-Identity.” ResearchGate, Feb. 2009.

Johnson, Ella R. “The Romanticization of Violent Male Offenders: How Trauma and Internalized Sexism Can Explain Women’s Fascination with Serial Killers.” 2020.

Kellogg, Scott. “Identity and Recovery.” ResearchGate, 1993.

Kelly, Michael. “Self, Identity and Radical Surgery.”

Kuentzel, Walter F. “Self-Identity, Modernity, and the Rational Actor in Leisure Research.” Journal of Leisure Research, Taylor & Francis, 13 Dec. 2017.

Kuhn, Timothy, and Natalie Nelson. “Reengineering Identity: A Case Study of Multiplicity and Duality in Organizational Identification.” ResearchGate, 2002.

Lewis, Marc. “Addiction and the Brain: Development, Not Disease.” Springer Link, 11 Jan. 2017.

May, Carl. “Pathology, Identity and the Social Construction of Alcohol Dependence.” BSA Publications Limited, 2001.

May, Jon, et al. “The Elaborated Intrusion Theory of Desire: A 10-Year Retrospective and Implications for Addiction Treatments.” May 2015.

McIntosh, James, and Neil McKeganey. “Addicts’ Narratives of Recovery from Drug Use: Constructing a Non-Addict Identity.” 21 Feb. 2000.

Obschonka, Martin, et al. “Entrepreneurial Self-Identity: Predictors and Effects Within the Theory of Planned Behavior Framework.” ResearchGate, 4 Nov. 2014.

Oksanen, Atte. “Deleuze and the Theory of Addiction.” Ethics Workshop, Taylor & Francis Group, 2013.

Puntoni, Stefano. “Self-Identity and Purchase Intention: An Extension of the Theory of Planned Behaviour.” Jan. 2001.

Reppel, Alex, and Olga Kravets. “The Glamorization of Information Technology.” Macromarketing Society, Inc., 25 Jun. 2015, pp. 475-476.


Tonkovich, Russell. “Glamorization or Condemnation: The Accuracy of Hollywood’s Portrayal of Heroin Use in Motion Pictures in the 1990’s.” 2004.

Van De Mieroop, Dorien. “Co-Constructing Identities in Speeches: How the Construction of an ‘Other’ Identity Is Defining for the ‘Self’ Identity and Vice Versa.” John Benjamins Publishing Company, 1 Sept. 2008.

van der Werff, Ellen, et al. “It Is a Moral Issue: The Relationship between Environmental Self-Identity, Obligation-Based Intrinsic Motivation and Pro-Environmental Behaviour.” 27 Aug. 2013.

, John Wiley & Sons, Ltd, 21 Feb. 2005.



The Commodification of Aloha: An Analysis of the Progression of Colonialism in Hawaii
Juan Castillo

The Hawaiian archipelago was first settled by Polynesians around 400 C.E., after they traveled over 2000 miles to the islands in canoes. The islands were populated by small communities who remained distinct, battling one another for territory. The first instance of European contact came in 1778, when Captain James Cook of Great Britain landed on the island of Kauai. There are claims that upon the arrival of Cook and his crew, Hawaiians “attached religious significance” to them. If so, this was due to the timing and location of their arrival: the Europeans arrived at the sacred harbor of Lono, the fertility god of the Native Hawaiians, while the Hawaiian people near the harbor were holding a festival to the same god. This coincidence led to Cook and his men being regarded as gods, and of course, the Europeans took advantage of this “immortal” status. Cook and his crew, however, were exposed to the islanders as mortals when one of the crewmen died, leading to tense relations between the islanders and the Europeans. Cook eventually decided to leave, but was forced to return to the islands because of a broken mast. Hawaiians did not want Cook to return, and numerous disputes followed, including thefts committed against the British crew, among them the theft of a cutter, which led Cook to retaliate (Hawaii - history and heritage). Following these disputes, Cook, out of anger, attempted to kidnap King Kalei’opu’u to force the return of the stolen boat. This tactic had previously worked for Cook; however, islanders surrounding the king saw through it and feared for the king’s life. Following news that a Hawaiian official had been killed by Europeans, Hawaiians attacked Cook and his crew, killing Cook along with four marines. It is at this point that sources diverge, with some claiming that the remaining crew simply left for Britain again, and others stating that peace was reached following the incident and both sides of the small battle were apologetic for the misunderstanding (The Death of Captain James Cook, 14 February 1779). Following the discord with the first European explorers to reach Hawaii, King Kamehameha became Hawaii’s first king, uniting all of the islands into one kingdom under his control between 1791 and 1810 (Hawaii - history and heritage). The arrival of colonizers in 1820, under the guise of Christian missionary work, brought diseases that reduced

the Hawaiian population from around 300,000 to 70,000 by 1853 (Hawaii - history and heritage). By 1893, Americans had seized control of the Hawaiian economy, effectively taking power over the islands and overthrowing the Kingdom of Hawaii; they then established the short-lived Republic of Hawaii, which ceased to exist after the annexation of Hawaii as a United States territory in 1898 (Hawaii - history and heritage). The establishment of colonialism in Hawaii was relatively quick, but no less violent than any other instance of American colonialism. The arrival of Christian missionaries can be considered the establishment of colonialism in the Hawaiian Kingdom. The common misunderstanding is that colonialism ended there, once there was no new land left to discover and use. This misconception could not be farther from the truth, as the influx of foreigners seeking to settle in an occupied state and consume the commodities of Hawaii has never ceased, with one exception: while World War II forced a halt to tourist operations, it greatly encouraged an unprecedented level of growth in the economic sector of Hawaii. Modern-day tourism was driven by the heavy military occupation during World War II, and the effects ran deeper than more people buying and selling products on the islands. It encouraged the appropriation of Hawaiian symbols for the protection of groups whose loyalty to the American Armed Forces was under question. Through the institution of white supremacy, minority groups are not only automatically positioned to live at a disadvantage compared to White people through social injustice, but are placed in a suffocating environment when forced to seek liberation simultaneously with all other minority groups. Because of this inescapable gridlock minority groups found and still find themselves in, White people created a narrative through which they could become the liberators of the oppressed groups, allowing their original colonizer narrative to be distorted. Frequent patterns have suggested that the main form of destruction is linguistic. In present-day conditions, this is not a true “destruction” of the language, per se, but rather a destruction of the significance attached to multiple words and expressions. This is not to say that the language itself was not under direct attack throughout history, as it has been


multiple times, especially and most significantly through the ban placed on it by colonizers for nearly 100 years. The methods by which the language has been desecrated are mirrored in the tactics used to sexualize the hula, as well as other imagery sold through the production of “aloha wear.” Hawaiian culture is not only destroyed but used for the benefit of the economic growth of the American occupation of Hawaii. Through an analysis of different facets of Hawaiian colonialism, I seek to examine the ways in which colonialist ideals have shaped the commercialization, exploitation, and commodification of Hawaiian resources, especially through the tourism industry.

The process, or rather institution, of colonialism is a collection of measures taken to exploit a foreign group of people and their resources for the greater economic advantage of a so-called “mother” country. To rob a people of their resources while dismantling their communication and identity, mainly through the desecration of their language, effectively removes stability from that group. Most pertinent to the United States, the processes involved in the eradication and near-eradication of a group of people have been most prominent in the seemingly interminable and gruesome history of Indigenous peoples surviving colonizers in the Americas, or, more plainly put, the white man and his dogma. To speak of reclaiming a language suggests that the colonizer has appropriated this foundational asset for their own use, but to what end? The reclamation of a language does not solely signify a recuperation of the language itself, but of the power associated with that cultural foundation. Article 13 of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), passed in 2007, outlines the “right to revitalize, use, develop and transmit to future generations their histories, languages, oral traditions, philosophies, writing systems, and literature, and to designate and retain their own names for communities, places, and persons.” This almost seems to contradict the notion of reclamation, for this “right” was voted on rather than returned to Indigenous peoples, in a document that they themselves did not write. With four nations – Australia, New Zealand, Canada, and the United States – refusing to endorse the declaration, the processes of colonialism can be seen as a cycle, with four former British colonies asserting their unwillingness to reinstate the power of an indigenous language. After their dissent to the UNDRIP in 2007, the United States cited their reasons for voting “no,” saying that nations that did sign the agreement did “not appear to uphold these minimum standards,” standards which the United States claimed, in its own defense, to uphold. By the 1980s the native Hawaiian language had been reduced to fewer than 50 speakers under the age of 18, signifying a near extinction of future generations using the


language (Kawaiÿaeÿa et al.). This can largely be attributed to the 1896 ban on Hawaiian language instruction in “every school and public function” (Kawaiÿaeÿa et al.). At the time of the institution of the language ban, UCLA’s Language Materials Project estimates that around 400,000 to 800,000 native Hawaiians spoke the native language fluently, compared to the less than 9,000 in 2019. This destruction is not just a statistical marvel and tragedy, but a cultural disaster conveyed through its near extermination of a people. The simple institution of a law degraded one of the building blocks essential to a culture. This law not only demonstrated that language is essential to a people, but that it also depended on its youth to persist. The preservation of language is an essential factor in the longevity and power of culture, as demonstrated through the effects that actions such as the establishment of the UNDRIP and the 1896 ban on the Hawaiian language in schools. The weaponization of education is a common tactic used throughout colonialist actions. During the European seizure of Africa, new systems of education were introduced into the continent. An article published in the Mediterranean states, “education had been accepted world wide as the gate way to the development but for it to achieve its aim, the content must be tailored to the needs of the society it has to be internally driven” (Nwanosike and Onyije). This explains the quick turnaround from the overthrow of the Hawaiian Kingdom to the ban on Hawaiian-language instruction in 1896. In order to keep the youth from rebelling, their education had to limit the thinking of the students so that they served solely the capitalistic goals of the colonizer in its stolen land, not their own interests and desires. This disregard for the Hawaiian language throughout history led to a resurgence of “pride in the Hawaiian culture and language.” A journal published by students at the University of Hawaii refers to this period in the 1960s and early 1970s as the Hawaiian Renaissance. This period led to a renewed interest in the preservation and further cultivation of Hawaiian culture through its language. In 1978, the Hawaiian language was reinstated as an official language of Hawaii, along with English. In addition to the recognition of the language as official, programs were introduced in order to restore the use of the language to the youth, which can be considered the most important group to target when renewing the use of a language. In 1987, Papahana Kaiapuni, a K-12 public school program in which the Hawaiian language was the form of instruction, was opened on the Hawaiian islands of Hawaii, O’ahu, Kaua’i, Maui, and Moloka’i. The program, created to educate younger generations of native Hawaiians, proved to be more effective than expected. Unexpectedly, the program performed not just as a functional school for young Hawaiians, but also as a catalyst for the rebirth of the culture in older generations (Yamauchi and Lunning).




Figure 1. “Aloha” sign on Terminal 2 at the Daniel K. Inouye International Airport in Honolulu.


This rebirth in older generations can be at least partially attributed to Kaiapuni students and alumni, who were identified as “carriers of culture and language,” signifying their importance not only to succeeding generations but also to those who missed the opportunity to experience the virtues of Hawaiian culture in their youth (Yamauchi and Lunning). Of course, the lack of substantial preservation of Hawaiian culture for nearly 100 years allowed for the further colonization of the Hawaiian Kingdom. It would not, however, be true exploitation of this society if at least part of it weren’t used to the advantage of the colonizer. More so than the abuse of resources, manipulating Indigenous spirituality, materials, and words into marketable products has proven to be the most effective continuation of colonization. The most obvious and significant evidence of this lies in the commodification of the Hawaiian word Aloha. This is evident not just in the United States but globally, promoted by the marketing of the language as an aesthetically pleasing facet of Hawaiian culture. To be clear, Aloha is not the mere shallow “hello” and “goodbye” of the American understanding, or rather lack thereof (Miller-Davenport). The Hawaii Law of the Aloha Spirit, though provided by an oppressive institution, offers a comprehensive definition of the Aloha Spirit. The law states: “Aloha” is more than a word of greeting or farewell or a salutation. “Aloha” means mutual regard and affection and extends warmth in caring with no obligation in return. “Aloha” is the essence of relationships in which each person is important to every other person for collective existence. “Aloha” means to hear what is not said, to see what cannot be seen and to know the unknowable. It can be argued that the commercialization of Aloha is the most obvious form of commodification, as the appropriation of language allows multiple other forms of exploitation to stem from it. At the Daniel K. Inouye International Airport in Honolulu, Aloha is displayed simply as a sign on the terminal. Handling nearly 22 million passengers in 2019, the airport serves as a bridge between the appropriation of culture and the physical exploitation of resources (State of Hawaii DOT). Tourism, enabled greatly by air travel and cruises, can be pointed to as the modern-day pathway to colonialism, or as colonialism in and of itself.

While the Kingdom of Hawaii was overthrown in 1893, the massive explosion in tourism to the islands began after World War II. Dr. Christen Sasaki of the University of California at San Diego argues that tourism is directly linked to the extensive militarization of Hawaii. After the attack on Pearl Harbor by Japanese forces on December 7th, 1941, all civilian tourism was discontinued. However, due to Hawaii’s importance both in terms of strategic location and evident appeal to foreign attackers, military influence only grew during this time. Dr. Sasaki claims that “the sheer volume of military personnel’s consumption drastically transformed Hawaii’s economy and the nature of tourism in the territory,” and also explains that “Between December 1941 and May 1944 more than eight thousand new businesses opened across the territory.” The islands’ location between the mainland United States and the Pacific theater during the war provided Hawaii with the perfect conditions for economic expansion. The military occupation and use of the islands did not just allow for continued utilization of the islands for their economic abundance; it encouraged and fed this seemingly endless growth for the territory (Sasaki). This military influence on tourism is referred to as “militourism.” The way in which the over-militarization of Hawaii led to its development as a tourism hub can be considered more or less indirect. Insofar as the occupation of Hawaii by the United States Armed Forces led to militourism, the direct consumption by people stationed there contributed greatly to the development. The indirect influence underlies a more fear-driven motive, particularly the racial tension between Japanese Americans and White Americans following the attack on Pearl Harbor. Dr. Sasaki further explains that the nature of this tourism “opened up economic pathways for the Japanese American community.” This, however, was more forced than encouraged, despite what the claim suggests. Japanese Americans became targets of racially motivated hate as a result of the war in the Pacific. In a mass effort to assimilate to American culture while simultaneously attempting to eliminate any trace of Japanese culture, Issei (first-generation Japanese Americans) and Nisei (second-generation Japanese Americans) on the islands were forced to demonstrate a sense of patriotism not just to America, but to Americanized Hawaii. Sasaki says Japanese Americans were “trapped in this environment of fear,” causing them to display excessive amounts of loyalty to the American colonialist ideology, amid its disregard for extreme racial tensions. This fear-driven assimilation to American culture for the sake of preserving families and lives amplified an experience unique to immigrant families, in which the second generation takes on a protective role much earlier than expected in protecting their parents. Similar to how


young students of the Papahana Kaiapuni reintroduced Hawaiian culture to their senior generations, young Japanese Americans instructed their older family members to “discard all things Japanese, including their language and clothing items,” again displaying the importance of language in the stability of a community (Sasaki). In an effort to protect themselves and their communities, Japanese American families, developed the industry surrounding “aloha wear.” Aloha wear, also referred to as Aloha attire, has complicated roots in its development as a staple of the Americanized Hawaiian image. While British and American colonialism is at the root of the continued abuse of the occupied Kingdom and its resources, Japanese Americans were unintentionally instrumental in the commercialization of Hawaiian culture. Issei and Nisei did more than take part in this industry: they largely drove it in a type of forced servitude to the American-Hawaiian economy. As Japanese American workers continued their performative yet necessary patriotism to America, they, along with migrant workers of other nationalities, “designed and printed hula girls, flowers, ‘ulu (breadfruit), and other ‘aloha-themed’ imagery onto reams of fabric, [contributing] to the cultural prostitution and aestheticization of the Native presence” (Sasaki) (Gomes). This labor forced upon Japanese Americans defined Hawaii’s significance on a global scale, transitioning from a military stronghold to a tourism capital. The coerced patriotism of Japanese Americans during and following World War II provided evidence that the United States protected citizens and non-citizens alike from foreign threats, especially within America’s self-proclaimed borders. By pitting minority groups against one another and not the actual perpetrators of abuse, the United States government and the tourism industries established an atmosphere where white people could be seen as mediators of ethnic struggles, enabling the white savior complex.

The media has served as a talking point for this feeling, recently through a debate surrounding Disney's 2016 film Moana. Though the Hawaiian experience under colonialism has been unique to Native Hawaiian islanders, the debate surrounding cultural appropriation in such films is the same elsewhere throughout the Pacific islands impacted by colonialism. In an examination of the "trope of paradise," a study published by the University of Hawaii Press analyzes the mixed reactions to the movie. The article outlines an instance at the National University of Samoa where one of the authors of the article showed Samoan graduate students a film created by students at Auckland University, in which students were interviewed on topics surrounding cultural appropriation in Moana. Rather than feeling appalled by the scenes accused of cultural appropriation and theft, the Samoan students believed that the film was "to be celebrated as a rare moment of cultural pride on a global stage," which highlights the colonial mindset ingrained in culture outside of the Pacific island nations (Alexeyeff and McDonnell). The same article identifies this rift between diasporic and territorial notions of a "homeland," claiming that "such homelands/diaspora dualism relies on nation-state cartography that emphasizes insularity and occludes the movement of people and ideas between and within Oceania." This emphasizes not just the physical difference and borders between the Pacific diaspora and those living in their homelands, but also the cultural bounds encouraged and further divided by the colonial ideology fundamental to Western imperialism. In a desperate attempt to paint itself as a savior, the West continues to hamper the liberation of Pacific islanders in a much less obvious manner.

Western culture as a whole misconstrues Hawaiian and other Native Pacific cultures as something that should be tamed. Too often throughout the gruesome history between the white man and Indigenous peoples, language far past "divisive" is used to convince not just Western media consumers but also the Indigenous people living under colonialist societies that the latter are "savages," "wild," and "untamed." Western ideologies have made religious symbols in Hawaiian culture a symbol of desire and even of lust. This can be seen in the sexualization of the hula. The hula is, to Hawaiian culture, an expression of the relationship between the people and the land (Schmidt). Tourist economic demands have transformed this culturally significant dance into an erotic spectacle (Trask). Native Hawaiian political activist Haunani-Kay Trask criticizes the sexual nature in which the hula is presented to tourists seeking pleasure. She describes hula as a product, bought and sold for the pleasure of the white man (Trask). Trask names the commercialized version of the hula the "hotel version," in which "the sacredness of the dance has completely evaporated while the athleticism and sexual expression have been packaged like ornaments" (Trask). Hula and the Aloha spirit are the most easily bought and sold "products" of Hawaiian culture because of their appeal to Western culture through over-sexualization and easy-to-market images. The use of people as products in such fashion inevitably leads to the exhaustion of these resources and the traumatization of those who are forced to provide them. Tourism has significantly exacerbated Hawaii's levels of homelessness and has raised the cost of living to unsustainable levels (Trask).

The threat to Hawaiian natives extends past the use of language and cultural symbols into health disparities and consistently rising suicide rates among Native Hawaiians. A study performed through multiple institutions, including Portland State University, the University of Washington, and the Trans Justice Funding Project, investigates how the treatment of Native Hawaiians is best understood through Historical Trauma theory, which explains the rising suicide rates in Indigenous communities (Alvarez et al.). The study argues that "colonization is an example of a community-level trauma that was originally situated in and therefore continues to negatively impact Indigenous communities" (Alvarez et al.). Community-level trauma as a result of colonialism remains unresolved, not least because of the difficulty of returning to pre-colonial standards of life. Colonial trauma is "the culmination of the complex interactions between these historical and ongoing losses rooted in colonization, and Indigenous health and mental health disparities" (Alvarez et al.). Opening the pathway to healing for these communities begins with open acknowledgment of and investigation into the relationships between the causes and rippling effects of these losses. This is difficult to accomplish through Western means of investigation and perspectives on suicide, so Indigenous scholars seek to bring about awareness of cultural and historical understandings of suicide to gain insight into risks for suicide (Alvarez et al.). This argument itself reinforces the fact that all understandings of humanity, whether medical, cultural, or spiritual, take root in the desire to satisfy the white male experience, as described by Toni Morrison3, unless Indigenous explanations are made central to such understanding. What this study implies is that trauma is not a one-time event that renders one unable to continue to practice their culture, but rather an ongoing event continuously recreated through the incessant destructive practices of the tourism industry in Hawaii. This is all inherently tied back to the white supremacist need to satisfy the colonial desire of consuming the otherness of exploited peoples. The root of the issue of trauma and the resulting suicide rates is the use of colonialist/Western "solutions" for a non-Western issue.

This rift between the colonizer and the colonized does much more than create disputes within and among colonized peoples; it divides the meaning of what the culture should be and how it should be honored. This was most evidently displayed by the discourse surrounding the usage of Moana to represent and honor the cultures of Pacific island nations, specifically Samoan culture. White supremacy does more than justify the radical exploitation and domination of Indigenous peoples; it completely ingrains itself in a society, becoming a part of even non-whites within that society, occupying their languages and cultural practices. Through the dismantling of language, it is evident that a society can fall apart easily. This is recognized not just at a local Indigenous level, but at a global level, emphasized by the passing of the UNDRIP to allow for free expression in Indigenous communities through the use of their languages, while simultaneously reclaiming other aspects of their culture. A more subtle nod to the sheer importance of language in a society occurs in the Hawaiian Renaissance, which sought to restore the prominence of Hawaiian culture to the islands by revitalizing the use of the Hawaiian language. Colonizers are aware, of course, that language is the foundation of a people's self-understanding; thus, the appropriation and commercialization of language is inseparable from the theft of land and the subjugation of labor. The Aloha Spirit in Hawaiian culture is defaced everywhere that Hawaii is marketed as a paradise for the Western population to consume, market, and use for pleasure. Through creating internal conflicts between and within minority groups, the white supremacist ideology persists, continuing to uphold the colonialist institution. Through the commodification, exploitation, and commercialization of Hawaiian resources, people, and culture, Hawaii is an active site of colonization, rather than a former colony. While not a direct attack on Native Hawaiians, the tourism industry is nonetheless responsible for the deaths of the peoples whose cultures it seeks to kill. Rising suicide rates in Hawaiian and Indigenous communities throughout the United States show that the American Empire continues to colonize and abuse weaker societies for economic gain at a terrible cost. At the root of every issue previously discussed are the insatiable desires of white supremacy for non-white cultures, encouraging the distortion of Native imagery into something that must be controlled and seen as a spectacle.

3 American Literature


Alexeyeff, Kalissa, and Siobhan McDonnell. "Whose Paradise? Encounter, Exchange, and Exploitation." Vol. 30, no. 2, 2018, pp. 269–294.
"Historical and on-Going Losses in Hawai'i." Genealogy, vol. 4, no. 4, 2020, p. 116. ProQuest.
"Captain Cook Killed in Hawaii." History.com, A&E Television Networks, 9 Feb. 2010.
Cheung, Alexis. "The Origins and Appropriations of the Aloha Shirt." Racked, 23 Feb. 2018.
"The Death of Captain James Cook, 14 February 1779."
Gomes, Andrew. "Patriarch of Watumull's Helped Develop Aloha Wear Industry." TCA Regional News, 30 May 2020. ProQuest.
"Hawaii - History and Heritage." Smithsonian.com, Smithsonian Institution, 6 Nov. 2007.
"Hawaii Law of The Aloha Spirit."
"Image of Terminal 2 at Daniel K. Inouye International Airport." Hawaii's Honolulu Airport Will Run, Condé Nast Traveler, 25 July 2018.
Kawaiʻaeʻa, Keiki K. C., et al. "Pūʻā i Ka ʻŌlelo, Ola Ka ʻOhana: Three Generations of Hawaiian Language Revitalization." 2007.
Miller-Davenport, Sarah. "A 'Montage of Minorities': Hawai'i Tourism and the Commodification of Racial Tolerance 1959-1978." The Historical Journal, vol. 60, no. 3, 2017, pp. 817–842. ProQuest.
Morrison, Toni. "Unspeakable Things Unspoken: The Afro-American Presence in American Literature." Within the Circle, 2020, pp. 368–398.
Nwanosike, Oba F., and Liverpool Eboh Onyije. "Colonialism and Education." Mediterranean Journal of Social Sciences, vol. 2, no. 4, Sept. 2011, pp. 41–47.
United States Census Bureau.
Sasaki, Christen Tsuyuko. "Threads of Empire: Militourism and the Aloha Wear Industry in Hawai'i." American Quarterly, vol. 68, no. 3, 2016, pp. 643–663.
Schmidt, Olivia. "The Exploitation of Hawai'i by the Tourism Industry." University Wire, 9 May 2019. ProQuest.
Trask, Haunani-Kay. "Tourism and the Prostitution of Hawaiian Culture." Cultural Survival Quarterly, vol. 24, no. 1, 30 Apr. 2000, p. 21. ProQuest.
Yamauchi, Lois, and Rebeca J. I. Lunning. "The Influences of Indigenous Heritage Language Education on Students and Families in a Hawaiian Language Immersion Program." Heritage Language Journal, vol. 7, no. 2, 2010, pp. 46–65.
"2019 Air Traffic Report at Daniel K. Inouye International Airport." State of Hawaii Department of Transportation, 2020.


A Driving Force in American Politics: The Emergence of the Christian Nationalist Movement and its Roots in Puritanism
Ella Evans

American Exceptionalism is founded in Christianity, specifically, in the Puritan errand into the wilderness of New England. When the Puritan settlers encountered Native American populations, they immediately identified these people as inferior, or "savage" and "heathen," and initially attempted to convert many to Christianity while taking their lands. This resulted in many violent territorial and cultural conflicts. During King Philip's War, which almost destroyed New England, many of the recently converted "praying Indians" took up arms against the English colonies. This and other destructive native wars against the settlers, along with the general decline in piety, resulted in the formation of the American Puritan Jeremiad rhetoric (Miller, 79). The Jeremiad was an "immemorial mode of lament over the corrupt ways of the world" (Miller, 79). Through this sermonic form, the Puritans and other settler groups viewed native peoples as Satan's agents whom God had allowed to test his people. Within this narrative, Native peoples become not political actors struggling against territorial invasion but, again, agents of Satan whose actions were understood as part of God's plan to bring His people closer to Him. This belief is exemplified in colonial works of literature, such as Mary Rowlandson's captivity narrative, which seeks to translate her private experience of loss into the public language of redemption. Through the Jeremiad, the Puritan theocracy attempted to transform a political struggle, caused by the encroachments of the colonies upon Native American lands, into a narrative of God's triumph over Satan. This divine narrative allowed the Puritans to transform political disasters into evidence of their divine destiny, for God, they argued, only chastised those whom he loved, in order to return them to his holy community. The Jeremiad works through absolute oppositions between God's people and the enemies of God; it is an allegory which converts historical and political agency into religious conflict between good and evil. Throughout American history, this language of the jeremiad has been replicated, replacing native people with various other groups who were seen as inhibiting America from fulfilling its purpose as a "chosen nation," and whose resistance was transformed into an occasion of national redemption through violence.

America was founded under the premise of religious tolerance and freedom. The Puritans and many other settler groups traveled to America to freely practice their faiths. However, tolerance of religions outside of their own was almost nonexistent. At first, the colonists undertook numerous efforts to "save" the natives from their traditional ways through missionary efforts that worked to assimilate native populations into European religions and cultures, stripping native populations of their traditions, identity, and religion. These European settlers saw themselves as superior to the natives; soon, they would translate this superiority into the opposition between God's and Satan's people. What allowed them to do so was the belief that they were God's chosen people destined to fulfill God's will in the new world and the refusal of indigenous peoples to accept their destiny. This belief has resulted in the modern viewpoint of American Exceptionalism: the belief that America is a morally superior nation with the duty of bringing freedom and liberation to the world. Even as its religious ideals have waned, Puritanism in New England has exercised a disproportionate influence on American political ideals and its domestic and foreign policies. The United States' mission throughout the 20th century to serve as a policing nation, involving itself in other countries' domestic affairs, shows that our country believes that it is our responsibility, as Woodrow Wilson proclaimed, to "make the world safe for democracy." This view has been refurbished in modern times through the vilification of immigrants and xenophobia towards "foreigners" and others who have taken the place of the "savages" whom the Puritans despised in new narratives of the American nation.

American Exceptionalism is the belief that the United States is morally superior to other nations, and that it is the duty of the United States to bring democracy to the world. The Puritan notion of duty would be inflated in the twentieth century into "American Greatness," which the Puritans never asserted. The discourse of "American Greatness" draws, however, upon the Puritan Jeremiad, whose narrative of national redemption begins with a fall from a previous unity with God. The modern version of the Jeremiad describes America's fall from its previous status as a morally exceptional nation, thus asserting that there was a time in which America was great. This sentiment of a lost state of morality allied to prosperity and national purpose has resulted in cries of "Make America Great Again" (MAGA). MAGA has become a rallying call for Conservatives, but in order to understand this political phenomenon, the origins of American Greatness must also be understood. The slogan "Make America Great Again" was first used by Ronald Reagan during his 1980 presidential campaign. Reagan repeatedly referenced John Winthrop's ideal of America as a "city upon a hill"; however, Reagan distorted the original intentions of this phrase. Winthrop used the phrase to emphasize that the Puritan settlers must model themselves as a proper and godly civilization, as the eyes of the world would be upon them. He insisted upon the obligations of the wealthy to the poor even as he urged the poor to respect the division of wealth. Reagan misused this speech, citing a grand sense of American destiny set forth by their predecessors and narrating this "city" of America as "a tall, proud city" "teeming with people of all kinds," a "light" and a "beacon," a "magnet" for people "from all the lost places who are hurtling through the darkness, toward home" (Rodgers, 2018). In doing so, Reagan turned the message into one proclaiming American Exceptionalism, a patriotic trumpeting call far different from Winthrop's insistence upon "charity" as the basis for political community. Reagan ignored the historical context of the speech and the concept Winthrop originally articulated of the Puritan errand into the wilderness, instead touting America as a "shining city upon a hill" threatened, however, by darkness.

To those sporting MAGA hats and t-shirts, the time in which America was great was (as defined by the leader of this movement) during periods of military and industrial expansion at the onset of the 20th century and again in the late 40s and 50s, in the years following World War II (Krieg, 2016). Following World War II, the US entered a period of massive economic growth and established itself as the dominant capitalist nation on the global political scene. This period of national prosperity in the wake of a victorious overseas war marks what followers of the MAGA movement identify as the period of American greatness. Religious participation, church memberships, and religious funding in America also greatly increased immediately after WWII (Beckman, 2000). The 1950s also saw a return to the traditional family model, in response to women's entry into the workforce during the war; in response to increasing demands for civil rights, this family model was racialized into an image of middle-class white thriving. These "good old days" are marked by images of men mowing the lawn, women cooking in high heels and long skirts, and children playing in the yard with the family dog (Williams, 2010). A woman's value was again seen in her quality as a housewife, cooking and raising children. A man's value was seen in his ability to provide for his family through working a full-time job.
Americans also felt a significant sense of moral superiority to the rest of the world in this era, feeling as if their defeat of Germany and Japan had made the world safe for democracy yet again. If this is the mark of American "greatness," then American "greatness" is imagined as a society which created the perfect environment for straight, working, Christian, white men to economically prosper and raise large families, with disregard for those from all other cultures, races, religious groups, gender identities, sexual orientations, and those with ideals varying from the norm. The 1950s were also plagued by blatant racism, sexism, and homophobia, but Conservative leaders romanticizing the "Good Old Days" tend to conveniently forget to mention events of social dissent during this time. The Civil Rights movement, feminist movements, and the growing involvement in France's colonial war in Vietnam are completely overlooked in the discourse surrounding prior American greatness in the 1950s. Such a view of this golden era of American greatness is exemplified by Conservative critic Ilana Mercer, in an article written for American Greatness entitled "For The West To Revive, Christians Must Toughen Up." This title alone demonstrates the sense of lost American identity and the desire to return to a time when America was "great" because religious faith was stronger. Mercer places this duty on American Christians, writing that they must stop focusing on "loving thy neighbor," instead placing a greater emphasis upon protecting themselves and their individual interests from the "other," who is no longer a neighbor, but a threat. She also writes that American Christians should not succumb to "white guilt" (Mercer, 2021). By this logic, Mercer is concluding that to revive American Greatness, everyday American Christians must work to promote their own self-interests, instead of considering the interests of marginalized groups. This essentially involves ignoring institutionalized racism, which has long benefitted white people, and actively working to dismantle institutional policies that would allow the "other" to prosper, something Mercer believes has made America "weak." It also upholds the narrative of American Exceptionalism by erasing the counter-histories, social and political, of those most injured by the continuing "errand into the wilderness," which was of course once their land. This belief is a common one held among Conservatives, and plays into the American ideal of hard work and the notion that one should be able to "pull themselves up by their own bootstraps." Many Conservative Christians starkly oppose social welfare efforts, even while a clear Biblical message is to care for the poor, as indeed Winthrop explicitly argued. In opposition to the Affordable Care Act and its attempts to expand healthcare coverage, Kansas Republican Representative Roger Marshall attempted to use the Bible to oppose this effort, stating in a Stat News interview: "Just like Jesus said, 'The poor will always be with us'...There is a group of people that just don't want health care and aren't going to take care of themselves."
This stance demonstrates a belief held among many Conservative Christians that those who are poor want and deserve to be poor, and that it is not the duty of working, financially secure Americans to take care of these people. Christians should not be burdened by trying to care for others, and should instead focus on their own self-interests, which coincide with the nation's interests. In the mid-20th century, the American religious scene saw the emergence of evangelical Christian capitalism in response to more progressive religious movements. Up until this point, the church had been relatively progressive, or at least socially tolerant. Bettering the lives of the poor, keeping children out of factories, and working towards ensuring all children received an education were rallying cries and priorities of the church following the Industrial Revolution. Protestant America was actively involved in "the Social Gospel," with advocacy for women's rights and against racial injustice and income inequality. The Southern Baptist Convention (SBC) even publicly supported women's right to abortion until 1979 (Roach, 2015). Increasingly, American Evangelicalism emerged as a cultural and political response to the growing scientific pressure upon elements of the Christian faith (such as the creation of Earth) and social pressure upon traditional family structures. Radio gave evangelical preachers such as Billy Graham a huge platform through which they could reach a larger audience than ever before. Prominent evangelical figures rose to not only cultural fame but also political importance. Graham regularly provided counsel to US Presidents, spoke at official White House events, and was known as the "Pastor to the Presidents" (Billy Graham Library, 2020). During this time, religion was once again used as justification for racial segregation. Many white Christian private schools were formed to resist integration, arguing that it was their "God-given American right to exclude African Americans" from their institutions. While racial segregation was not a particularly appealing rallying call, framing the older language of "purity" in the terms of "religious liberty" created a movement that white Conservatives could more easily support (Morris, 2019). The political rise of Conservative Evangelicalism came in a time period when Democrats were starting to embrace issues of feminism and LGBTQ rights, two issues that challenge the "traditional" heterosexual family. Peter Montgomery, known for his extensive scholarly research on the Religious Right, identifies the conservative political response to societal shifts: "conservative operatives looked at evangelical churches that had traditional ideas about the role of women and sexuality, and saw those churches as places where they could convince people that voting conservative was part of their religious duty". Fears of communism and authoritarian governments also fueled concerns among conservatives that values of Christianity and family were at risk. Polarizing news outlets capitalized on these fears, framing AIDS as a plague sent by God and abortion as the "national sin," and seeing the devil appear in the form of "demonic gay Teletubbies" (Morris, 2019).
This mentality of the people of God imperiled by the enemies of God mobilized the Christian voting base for political issues of both fiscal and social importance. The idea that one's faith and way of life is under direct attack was an incredibly strong fundraising pitch that allowed evangelical activists and nonprofits to blossom in political power while simultaneously experiencing significant economic growth. American Christianity saw a shift in the 1970s, with the re-emergence of the concept of the "undeserving poor" taking center stage in religious politics. In his book The Undeserving Poor: America's Confrontation with Poverty, Michael Katz, a social historian, argues that Conservative Christian leaders targeted welfare programs because they "believed [the system] weakened families by encouraging out-of-wedlock births, sex outside of marriage, and the ability of men to escape the responsibilities of fatherhood" (Katz). Sociologist Paul Froese of Baylor University (a Baptist-affiliated university) described this "new religious-economic idealism" in terms of predestination, writing that Conservative Christians believe "that the free-market works because God is guiding it" (Froese, 2012). This allowed believers in this ideology to ardently support free-market capitalism, as they saw a period of economic prosperity which only deepened their belief that they were "chosen by God" to prosper, and allowed them to write off those who suffer from economic disadvantage as deserving of their situation (Jenkins, 2017). Analyzing this movement is even more interesting when taking into account the idolization of Donald Trump by the Christian Right. On the surface, one would imagine that Trump embodies everything the Church denounces: greed, gluttony, materialism, profanity, and infidelity in marriage. Infamous for his statement that when it comes to women, he could just "grab 'em by the pussy," the numerous sexual assault and harassment claims against him, and a widely publicized affair with porn star Stormy Daniels, Trump is not representative of the messages of faith, love, and selflessness proclaimed by the Church. Yet no group has been more vocal at MAGA rallies (or in storming the Capitol) in support of Trump than the Christian Right. Why is it that this group so strongly supports a political character who clearly does not exemplify the beliefs that they profess as a hugely important aspect of their identity? The context of America's "culture war" is paramount in understanding how Trump was capable of recruiting the highest number of white evangelicals to turn out to the polls in his support (Morris, 2019). Here we find the persistence of the jeremiad, from the Puritans to the age of evangelicalism and into our own time. Many Trump supporters cite their fear of "liberalism" taking over the country and threatening the "American family" as a driving factor in their support of Trump. In this way, they see American Greatness as undeniably tied to Christianity, which in a secular nation legally cannot be the driving force of politics; yet Christianity seems to find its way into almost all aspects of American politics, especially those whose narratives require an absolute opposition between good and evil, God and the devil.
Christianity is a driving force in the American political landscape. While the US is officially and legally a secular nation, Christianity reaches almost every area of influence in America. In Church of the Holy Trinity v. United States (1892), the Supreme Court even declared that the US was a Christian nation. Religion has been used as justification for extensive American involvement in foreign affairs. American foreign goals are frequently based on three Protestant themes that can be traced to colonial times, specifically the narrative supplied by the jeremiad, which is capable of refurbishing the story that America is a morally exceptional nation chosen by God to prosper, and that such prosperity is threatened from within and from without.

The belief that America is "God's Chosen Nation" has long been pronounced by American presidents, especially in setting their foreign policy goals. In his inaugural address, John Adams thanked an "overruling Providence which had so signally protected this country" (Adams, 1797). Religion is commonly cited in Presidential addresses: in his address supporting the League of Nations, Woodrow Wilson appealed to this idea of America being a divine nation chosen by God when he spoke of the "moral obligation" the United States held, and promised to "lead in the redemption...liberation, and salvation of the world" (Wilson, 1919); and Franklin Delano Roosevelt declared that America had a "divine heritage" in his 1942 declaration to Congress. George W. Bush's presidency marked the beginning of an era in which religion was specifically used as justification for American foreign involvement. This rhetoric clearly articulates the notion of America as a "divine, chosen nation," with Islam now assuming the role of "evil" after the Cold War, and has heavily influenced how these leaders view America's role in the world, specifically regarding American involvement in foreign affairs. The specific force that America is fighting against has changed throughout history, but the common theme has been one of good vs. evil, with America always being the force of good, fighting to defend a certain "mission" against an adversary. A reflection of the oppositional mentality of the jeremiad prevalent in American society, this viewpoint is rooted in Protestant millennialism. Millennialism is the belief, expressed in the book of Revelation, that Christ will establish a thousand-year reign of the saints on earth (the millennium) before the Final Judgement (Landes, 2015). This period is expected to be a time of supernatural peace and abundance on earth. Prior to the collapse of Oliver Cromwell's revolution in 1658, Puritans believed that England was to be this "new Israel," the origin of the land of saints set to reign during this millennium (Johnson, 2020). Following this failed attempt to develop a Puritan theocracy in England, Puritan leaders shifted their hopes of creating this "new Israel" to Puritan New England.


In Colonial America, the mission of New England settlers was to establish this millennium, with the adversary identified as the "papal antichrist," or the growing influence of Pre-Reformation Catholicism in Europe, which was filled with corrupt leaders abusing religious systems and creating practices such as indulgences. During this period, the "City on the Hill" was used as the means to accomplish the goal of establishing a "new Israel". Following the Revolution, American leaders saw themselves as creating an "empire of liberty" (as stated by Thomas Jefferson) against Old World tyranny, focusing on expanding America's continental land. Jacksonian Democrats also prioritized expanding America's continental land during the Manifest Destiny era, with "manifest destiny" replacing the Puritans' "visible sainthood," and also held the goal of firmly establishing a Christian nation that provided rights for the common (white) man. This movement identified its opposition as both the "savages" (Native Americans) and a political aristocracy "threatening America" (University of Maryland Law). While the events of the American Civil War complicated this narrative, for the people struggling to be free had their own stories of national destiny, during the period of American Imperialism, which was also that of Jim Crow, Theodore Roosevelt and those who succeeded him envisioned overseas expansion and the spread of Christian civilization internationally, against the opposition of natives who had their own religions and languages that differed from Western ideals (even if they were Christian). They referred to them as "barbarians and savages" (Judis, 2005). Following World War II and the obvious enemy of Nazi Germany, the US turned to the language of the Cold War, addressing the "godless" and "evil empire" of the USSR, as later termed by Ronald Reagan, in its efforts to "protect the free world", which in reality was only an attempt to maintain America's military and economic position of power as the leader of the global political scene (History.com editors, 2005). With the rise of Conservatism during the Bush era, the new goal was to "spread freedom" and, according to Bush, to "protect free nations" through the War on Terror, identifying the Taliban and other terrorist groups as the latest direct threat to this "freedom" (Procon.org). While the subject of the opposition has changed throughout history, the theme of fighting to preserve "good" against an outside force of "evil" has been preserved throughout American history. In this sense, American Exceptionalism has been a driving force in the way that the United States conducts its foreign affairs. The religious values set forth by the Puritans at America's founding have also had a strong impact on domestic policy, for, as the Puritans discovered in Salem, there were enemies, "witches," within. This plays into political issues regarding LGBTQ+ rights, the right to an abortion, social welfare systems, economic justice, race, and environmental justice. The implications of religious values are apparent in Congress, as Congress is and historically has been overrepresented by Christians. Almost 90% of current representatives identify as
Christian, compared to 65% of the US population. In modern America, “it’s almost impossible to win the presidency without some show of serious religious commitment,” says Dr. Bruce Schulman, director of the Institute for American Political History at Boston University. Many attribute Mitt Romney’s failure to win the 2012 Presidential Election in part to his Mormonism. Presidential candidates didn’t really make faith part of their political campaigns until the 1970s, when opposition to secularism galvanized the Religious Right, boosting the campaigns of Ronald Reagan, George H. W. Bush, and George Bush. Today, most Americans want a president who has a strong personal faith (Pew Research). Schulman attributes this new expectation for political candidates to overtly display their religion to the rise of the Religious Right in America. Recent Democrats such as Barack Obama and Hillary Clinton have shifted their campaign approaches by talking about their Christianity on the campaign trail (Butters, 2016). Strangely enough, their faith did not prevent them from being cast on the side of the “enemy” in the narrative of cultural warfare. A prime example of how religious discourse permeates American politics is seen in the argument over abortion rights. Abortion is not naturally a controversial topic, but it has been constructed as such. For the first half-century of America’s existence, “abortion was neither illegal, rare, or controversial” (Munson, 2018). Historically, the national movement to oppose legalized abortion was not organized until 1967, when the National Conference of Catholic Bishops formed the Right to Life League (Munson, 2018). The Roman Catholic Church created the “right to life” movement, which was rebranded as the pro-life movement after Roe v. Wade gained huge international attention. Abortion controversy centers around three points of view: the religious perspective, attitudes towards women’s employment, and societal views towards women’s sexual freedom (Kelley, 1993). The religious opposition to abortion is the loudest and most influential voice against abortion; its advocates believe that life begins at conception, stating that abortion violates the sanctity of life. Abortion had been one of the key rallying cries of the Religious Right. Religious leaders and conservative politicians have successfully turned their opposition to abortion into a matter not only of political debate but of religious involvement as well. Again, abortion has only recently become a deeply partisan issue. As recently as 1984, the percent of Republicans and Democrats who identified as pro-life only varied by 5 points (40% of Democrats and 45% of Republicans). In 2020, 17% of Democrats identified as pro-life while 60% of Republicans took this position. During the 1970s and 80s when the Conservative Christian movement was emerging and beginning to gain traction, its leaders wanted to expand its base beyond those who opposed the Civil Rights Movement. The argument that “liberals were killing babies” was publicly a less shameful cause for many moderate Republicans than fighting to
prevent children of color from receiving the same quality of education and opportunities as white children. This has been key to unifying the Republican party. About 40% of Americans state that abortion politics is one of the most important issues impacting their vote, making abortion politics a central issue in the American political scene (Pew Research Center, 2020). By making the issue of abortion religiously charged, the Republican party has been able to strengthen its influence over religious conservatives, articulating the endangered unborn child to the danger faced by both the family and the nation, or the national family. Political leaders who oppose abortion regularly cite religion in justification for this viewpoint. In this way, while the Constitution states that America is a secular nation, religion, especially Christianity, is still a driving force in American politics.

While founded on ideals of "religious tolerance and religious freedom," America has been intolerant towards certain religious minority groups. From the American colonists' initial treatment of the "heathen" natives to the banning of "Papist" Catholics and other non-Puritans from many colonies, this ideal was displayed only in its violation in early American history. Although such movements have waxed and waned, pinpointing different religious groups throughout American history, they have always been present. Catholics, Jews, Native Americans, Mormons, and Muslims are some of the religious groups that have been specifically attacked for their religious beliefs. The notion of America as an exceptional nation chosen by a Christian God has fueled this xenophobia. As I have been arguing, the narrative of a former American Greatness threatened by decline implies that the rise of diverse religious groups threatens an America conceived as a single community of faith. Such a community, or family, is historically indebted to racial fears and hatreds, expressed in the support of southern evangelical churches for slavery in the antebellum period, for racial segregation in Jim Crow, and, afterwards, for segregated educational institutions with curricula hostile to historical truth. Thus, members of the Christian church in America are often hesitant to condemn racism in America. In 2019, Robert P. Jones, the head of the Public Religion Research Institute (a nonpartisan polling and research organization), found that 86% of white evangelical Protestants and 70% of both white mainline Protestants and white Catholics said that the "Confederate flag is more a symbol of Southern pride than of racism"; about two-thirds of white Christians said that killings of African-American men by the police are isolated incidents rather than part of a broader pattern of mistreatment, and more than 60% of white Christians disagreed with the statement that "generations of slavery and discrimination have created conditions that make it difficult for blacks to work their way out of the lower class."
This same research concluded that the more likely a person was to hold racist attitudes, the more likely they were to identify as Christian (Luo, 2020). These are sentiments which can be traced back to the Puritan narrative of themselves as God's "chosen people." Believing that they were divinely chosen led the Puritans to see everything that denied or contradicted their beliefs as a test of their faith, and therefore something that must be quelled. In colonial times, this resulted in the harsh treatment of Native Americans, or as the Puritans saw them, "heathen savages" animated by Satanic energies. In modern times this sentiment manifests itself through sexism, homophobia, xenophobia, and racism. This fear of the "other" or the "outside" has caused modern American Christianity to strongly value the immediate family and homogenous community; anything that threatened the traditional heterosexual family model of the father working and the mother tending to the house and children, a model racialized as the white family and governed by the privilege and power of the white man, was seen as a threat to a way of life imagined as fundamentally American. During the rise of the Christian Nationalist movement in the 80s, religious conservative leaders such as Phyllis Schlafly and James Dobson capitalized on these fears. Schlafly was a key anti-feminist female voice fighting against the Equal Rights Amendment who advocated to women that they should not desire equality and that the inequality they faced was really not that bad. She preyed on fears that female equality would force women away from their babies and homes, leading to the breakdown of the all-important traditional white American household (Kennedy, 2020). James Dobson, a highly influential American evangelical figure, established the James Dobson Family Institute, an influential Christian Right organization with the mission to "help preserve and promote the institution of the family and the biblical principles on which it is based" (James Dobson Family Institute). While Winthrop's "model of Christian charity" was the church and the state, Dobson's is the white Christian family, upon which the church and state should be modeled. Dobson directly opposed any efforts of diversity or inclusion, stating that "tolerance and its first cousin, diversity, are almost always buzzwords for homosexual advocacy" (Dobson, 1997). Furthermore, Dobson was a loud voice advocating against abortion rights and LGBTQ+ rights, citing the threat that these held to the "American family." In its essence, the argument that feminism works to undermine "family values" is simply an attempt to control women and limit their bodily autonomy. An extremely relevant example of Conservative opposition to the "other" is seen in Donald Trump's presidential campaigns as well as in his presidency. Trump has regularly referred to immigrants as "invaders" throughout his campaigns. This term villainizes immigrants by making the border a division between "us" and "them," much as the Puritans believed that Satan's agents were "outside" the community of the elect, in the "wilderness."
Such exteriority to "us" was extended to the Islamic world, as Trump has played a key role in the rise of Islamophobia in America, perpetuating the belief that those of Middle Eastern heritage have connections to terrorist attacks and leading to the creation of harmful policies (such as travel and refugee bans for those originating from many Muslim nations) that target these people. Trump stated that "Islam hates us," that Muslims harbor "unbelievable hatred," and that it is "very hard" to separate "radical Islam" from the religion as a whole (Muslim Advocates, 2018). Creating a villain is necessary to the narrative work of the Jeremiad, as the tests and trials of the faithful are crucial to the promise of redemption. In this constantly renovated narrative of the absolute struggle between good and evil, God tests his Chosen People with "savages," communists, liberals, feminists, terrorists, and all others who advocate for a wider and more generous definition of "family."

Puritanism played a key role in America's founding, and its effects persist in modern American Christianity. American Exceptionalism, an element of American identity that has been present since colonial times, is founded in Christianity, specifically, in the Puritan errand into the wilderness of New England. However, this notion that America is an Exceptional nation only holds true for those who assert a "purity" of which the actual Puritans never dreamed, that is, those who fit into the "traditional" and racialized heterosexual family structure. Opposition to this societal standard is seen as opposition to God's will and a threat to America's destiny. Thus, deviation from these traditional values (that is, the values associated with national "greatness"), occasioned by immigration, the rise of minority influence, diverse political voices, feminism, movements towards LGBTQ+ rights, and greater inclusion in America, has created a countermovement of the Christian Right. This group has been growing in influence since the late 1970s and currently plays a huge role in American politics. A major rallying cry for Conservative Christians is that America has fallen from its values and thus is losing sight of the "mission" the Puritans believed they were called by God to fulfill. This idea requires the belief that America was at one time great, and that such greatness must be reclaimed in righteous struggle against those who are without and those who are within the redeemed community. Recreating a narrative of America as a nation that has been led astray by "outside" factors that threaten its Christian greatness and "inside" forces that compromise its familial "model" has served as a very effective political cry that brings Christian nationalists out to the polls in overwhelmingly high numbers. The support that Conservative Christian America has thrown behind Donald Trump demonstrates that this group's interest often lies not within the scripture of the Bible, but in upholding a societal status quo that has provided them with great privilege.
While America is legally a secular nation, religion, especially that of the Christian Right, has a great hold over American policy. As American Exceptionalism is rooted in the Puritan belief that they were God's Chosen People, so the constant renovation of Puritan values has played a defining role in American politics both domestically and internationally. This sense that America must bring liberation to the world has significantly influenced America's foreign policy, leading the US to insert itself (often through military force) into the politics of other nations. America's foreign policy initiatives have followed this "us vs. them," "good vs. evil" mission sense established by the Puritans, with the various "them" or "evil forces" shifting throughout history, while history itself is rewritten as theological allegory. What remains constant, however, is that we are identified as God's people and our actions understood as central to God's plan for the world. Religion is also deeply influential in American domestic policy, as seen in political battles over integration, abortion rights, and the use of religion in public schools. Throughout the 20th century, religion became less a part of American public schools, the judiciary, and societal expectations. With this came a deviation from the traditional family, especially after World War II, when women began experiencing what it was like to be the breadwinner of the family. In response to many progressive movements, the Christian Right preyed on fears that the decline of family values was leading America down a path of irreparable damage. This group also targeted immigrants and minorities as threats to American "values," which were identified with a traditional family unknown (for instance) to the Bible, but located instead in an imaginary past of a white America. This extends the American "mission" backwards into the home, with a similar opposition between a Christian "us" and a secular "them" based upon conventional gendered and sexual relations, with the "wilderness" imagined now as a society of sexual deviance and gendered transgression. So goes the new jeremiad: only if we turn from such practices to the past will God make America great again.

Adams, John. "Inaugural Addresses of the Presidents of the United States: From George Washington 1789 to George Bush 1989." Avalon Project - Documents in Law, History and Diplomacy, 1797.
Beckman, Joanne. "Religion in Post-World War II America." Divining America: Religion in American History, TeacherServe, National Humanities Center, 2000.
Butters, Julie. "Why America Can't Separate Religion & Politics." Religion and Politics, 2016.
"Current Exhibit: Pastor to Presidents." The Billy Graham Library, 26 Oct. 2020.
Froese, Paul. "How Your View of God Shapes Your View of the Economy." Religion & Politics, 19 July 2012.
History.com Editors. "Reagan Refers to U.S.S.R. as 'Evil Empire' Again." History.com, A&E Television Networks, 16 Nov. 2009.
"Important Issues in the 2020 Election." Pew Research Center - U.S. Politics & Policy, Pew Research Center, 28 Apr. 2021.
Jenkins, Jack. "The Strange Origins of the GOP Ideology That Rejects Caring for the Poor." ThinkProgress, 9 June 2017.
Johnson, Daniel, and Paul Lay. "Cromwell's Revolution." Law & Liberty, 20 Sept. 2020.
Judis, John. The Chosen Nation: The Influence of Religion on U ... Carnegie Endowment for International Peace, 2005.
Kelley, J. "Moral Reasoning and Political Conflict: The Abortion Controversy." The British Journal of Sociology, U.S. National Library of Medicine, 1993.
Kennedy, Lesley. "How Phyllis Schlafly Derailed the Equal Rights Amendment." History.com, A&E Television Networks, 19 Mar. 2020.
Krieg, Gregory. "Donald Trump Reveals When He Thinks America Was Great | CNN Politics." CNN, Cable News Network, 28 Mar. 2016.
Landes, Richard. "Millennialism." Encyclopædia Britannica, Philosophy and Religion, 2005.
Luo, Michael, and Eliza Griswold. "American Christianity's White-Supremacy Problem." The New Yorker, 2 Sept. 2020.
Matte, G. "Opinion | Still Puritan After All These Years (Published 2012)." Nytimes.com, 2021.
Mercer, Ilana. "For the West to Revive, Christians Must Toughen Up." American Greatness, 25 Oct. 2021.
Morris, Alex. "False Idol: Why the Christian Right Worships Donald Trump." Rolling Stone, 23 Dec. 2019.
Munson, Ziad W. Abortion Politics. Polity, 2018.
Rainey, Jane G. "Church of the Holy Trinity v. United States." 2009.
Roach, David. "How Southern Baptists Became Pro-Life." Baptist Press, 18 June 2021.
"Timeline of Record of Bigotry." Muslim Advocates, 2018.
Williams, Amy, et al. "What Was So Great about the 1950s?" Ms. Magazine, 21 Feb. 2019.
Wilson, Woodrow. "Final Address in Support of the League of Nations." American Rhetoric, 1919.
Zernike, Kate. "Buzzwords; Hello, Synergy, Begone, Crisis." The New York Times, 30 Jan. 2005.


Politics and Polity: America in the Shadow of the Supreme Court
Mark Muchane

The Present Crisis and Constructing Americanism in the Shadow

The Supreme Court is facing a unique challenge in its history, dominated by a conservative wing for the first time since the Progressive Era. The Court is top of mind for anyone concerned with the very basic nature of the United States of America. The Supreme Court sits at the center of public life, governing the lives and livelihoods of millions, creating America's image on the global stage, and shaping the perceived moral center of the nation. I'd like to introduce my analysis not with one of the more high-profile cases the Supreme Court has taken up in the past few years, but with a smaller case: Cedar Point Nursery v. Hassid. The case is primarily concerned with the Takings Clause of the 5th Amendment, in its relation to labor organizers. The Takings Clause says that private property shall not "be taken for public use, without just compensation." In American jurisprudence, this has been interpreted to mean that any state action that could change the value of private property can constitute a "taking" that must be justly compensated. The critical holding in this case was that an unconstitutional taking occurs when the government (here, the State of California) passes laws that require employers to allow labor organizers to access company property for union recruitment without compensation, greatly broadening the precedent on what constitutes a taking that must be compensated. Just on its face, this decision could have catastrophic consequences, effectively voiding, for instance, antidiscrimination law, which "takes" employers' right to exclude workers of color, pregnant workers, and queer workers. Fair housing laws? Well, those take landlords' "right to exclude" renters of color, families, and renters with vouchers. Any exercise of state power without compensation becomes an unconstitutional taking; the power of private citizens to halt state activity and exclude others becomes absolute (Bowie 2020). Understanding this as a form of personal law (law that applies differently to certain persons or classes) reveals two important, inseparable threads that we must trace through American history to understand the jurisprudence and nature of the court: first, America's strong tradition of property rights; second, America's strong tradition of hierarchical anti-democracy.
themselves, who sanctioned property in persons justified by the ideology of personal law, nor from our traditions of antidemocracy. Beyond the larger implications, the present crisis for the Supreme Court must be understood as a fight for civil liberties; one which is mediated by the unique structures and ideologies of the Court and how that shapes and has shaped America. But still intentionally analyzed through the frame of that which is material, the basic rights of all. The Supreme Court very recently handed down a 5-4 decision over Texas Senate Bill 8, a piece of legislation allowing any private individual, or state health official, to issue suit over any abortion occurring after the 6-week point in pregnancy when many do not even know that they are even pregnant. The complaint is very narrowly about whether abortion clinics are allowed to bring federal lawsuits to prevent the enforcement of the law, and the holding is even more narrow. Gorsuch, writing for the majority, holds that only state health officials can federally be sued to stop the law and that state judges and clerks cannot be held responsible for allowing cases under SB8 to proceed (The Supreme Court of the United States 2021). It’s the same basic pattern as the decision in Cedar Point Nursery v. Hassid, what I’d call “small scale evil,” which is to say, small decisions advancing personal law that undermine the fabric of civil society with large implications. The Court inserts itself into matters of civil liberties and settled law (often settled through explicitly democratic processes) as the ultimate lawmaker to uphold personal law, while simultaneously protecting the power of the judicial elite to enforce these impingements; reviving the ideology of the Waite court. By envisioning clerks and judges not as an extension of the law, but as completely independent actors who are protected but not bound by the law, it allows for the same doctrine of absolute nullification of state power, absolute exclusion, and ultimate political agenda-setting power. These ideas don’t just undermine civil liberties but create the nation by immortalizing the structural power and ideologies of certain groups, and effectively birthing the politics of the day. In fact, by understanding the urgency of this threat in terms of a few key groups and ideologies (most importantly here the anti-democratic power of the Court), we find yet another first principle of how ideological and



political the Court is. Because the Court plays such a key role in immortalizing ideology into the nation, the institutional ideology itself has to become at least a minor subject of consideration. And this must be carefully, even if briefly, considered because the ideology of the Court is not simple; it is filtered through the Senate nomination process and the “feeder” elite law schools; when we understand the ideology of the Court, we can’t simply engage in a left vs right analysis, but must analyze the bourgeois nature of the Court, and the sources from which its ideological trends arise. To understand the forces at play here, we must examine how the great political powers of our time are responding to this crisis. For instance, the Biden administration has created a “Supreme Court Commission” tasked with providing “an analysis of the principal arguments in the contemporary public debate for and against Supreme Court reform” (The White House 2021). The problem with the creation of this commission is, however, almost directly stated in its mission. It only seeks to discover what we already know, an analysis of the contemporary arguments. That shields us from anything deeper, from the intellectual production necessary to a new intellectual moment of crisis. Fundamentally, this is the same problem as the overturning of the 2000 election. Even as mainstream liberalism opposes the conclusions of conservative ideology, many of the same projected first principles about the nature of the Court and our institutions have been internalized and reproduced. The Courts cannot be challenged, because the Courts are just; the process must be trusted because the process must be just. This prevents us from grappling with the idea that the status quo of both the Court and process are both not only deeply unjust, but deeply incompatible with multiracial democracy, and allows those cynical enough to harness democratic processes to deliver anti-democratic results. On the other hand, the conservative side of this is almost easier to understand, but should not be neglected because it is so wide in scope. The conservative position is basically one of support of the Court, but that support should only be understood as a function of the Court’s ability to deliver decisions in line with conservative ideology; there is no mirror version of the same “process liberalism”. Nearly a quarter of Republicans voted for Trump because of Supreme Court nominees and the appointment of Justice Kavanaugh was a primary reason for many red-state Democrat Senate losses in 2018 (Bump 2021). The conservative movement in America has been focused on creating institutions that can shape and create the Court (which cannot be understood as static) for more than half a century. Organizations like the Federalist Society create career advancement and opportunities for young conservatives interested in a role in shaping the nation. These institutions are critical because they reveal a fundamental nature of conservatism and the Court; conservatives aren’t simply seeking to aid the Court, they are not, as they may claim, stewards of the Constitution, they intend to use these

processes and institutions to their advantage. The Court cannot be understood as it is in the popular imagination; simply the impartial arbiter of justice, it is a coequal political branch, something that must be campaigned for and won just as much as the Presidency or any seat in Congress. This pattern of behavior and understanding doesn’t just extend to the conservative movement. Even as the conservative movement has the explicit organizations to create the judicial system they want, every Senate nomination of a Supreme Court justice brings controversy. Because, fundamentally speaking, politicians implicitly understand that the just-so stories about the Courts are just that, stories, and the Court must be shaped by them if they wish to secure the political future of their party. This understanding is really important to the rest of the analysis that follows in this essay; both in form and conclusion. By using an analysis of the form that analyzes not the stated outcome, but the actual outcome (political parties devoted to shaping the Court), we reach an important conclusion about the nature of the Court as a coequal political branch. And because mainstream liberalism is so helpless in the face of such a great threat, and mainstream conservatism actively aids and abets the threat, much of the analysis that follows will be framed in terms of left-wing ideas far outside the typical political discourse. Fundamentally, I think such a radical moment demands radical politics and radical analysis to be able to truly understand the crises we face. America faces a uniquely pressing crisis of civil liberties. But that crisis doesn’t just emerge from the happenstance of a majority of 6 conservative Supreme Court Justices. It’s been created by the unique ideologies and pathologies of the Court and the legal elite that serve it; the Waite Court, the libertarianism and personal law inherent in the takings clause, and our conceptualizations and material grants of judicial power. And this crisis isn’t just birthed from abstract ideology, it is birthed from the very Americanism I argue the Court helped create. Analyzing the present crisis isn’t just analyzing a specific injustice, but rather the wide arc of American injustice as it is created through the Court.

But, before we can truly begin to return to the questions of the creation of Americanism and the Court, we need a few more tools of analysis, tools of ideology and of politics. By positioning two landmark Supreme Court cases about marriage equality, both interracial and gay, in conversation with each other, we reveal a few new key pieces of analysis. In Loving v. Virginia, the Supreme Court holds, or rather, Chief Justice Earl Warren in the majority opinion states, that “Virginia’s statutory scheme to prevent marriages between persons solely on the basis of racial classifications [are] held to violate the Equal Protection and Due Process Clauses of the Fourteenth Amendment.” The Court’s reasoning is fairly straightforward. Warren states that the clear and central purpose of the Fourteenth Amendment



was to eliminate all official sources of invidious racial discrimination. The opinion goes even further in its analysis and argues that all racial classifications and separations by the government should be subject to “the most rigid scrutiny” (effectively making race a suspect class in legal terminology).

In Obergefell v. Hodges, Justice Anthony Kennedy holds for the majority that “The Fourteenth Amendment requires a State to license a marriage between two people of the same sex and to recognize a marriage between two people of the same sex when their marriage was lawfully licensed and performed out-of-State.” From the second you read the holding in this case, the differences in legal doctrine, even though the two cases deal with fundamentally the same issue, become clear. Kennedy’s decision begins with a long history of marriage and gay rights in the country, situating this not as a simple issue of invidious discrimination, but establishing a new fundamental “right to marriage.” Kennedy was here trying to prevent a broad and liberatory understanding of the 14th Amendment (arguably, Gorsuch expanded this in Bostock v. Clayton County, but still not nearly as liberatory an interpretation as in Loving). As a conservative justice, he was opposed to any expansion of state protection, but, unlike the other conservatives on the Court, he was more moderate, and more interested in the priorities of liberal society as they pertained to the Court than the other justices were. (It does become slightly ironic, or perhaps saddening, that this role has been taken up in the Roberts Court by Chief Justice Roberts himself, who likened gay marriage to bestiality in his dissent in this very case.) The only way to protect those dual priorities is a long-running opinion (many times longer than the Loving opinion) that paints a narrative story of groups, rather than a strict understanding of discrimination, justice, and the “freedoms” we must restrict, or perhaps “take” (to use the language of Cedar Point Nursery), to preserve civil society and the dignity of all. This motivation is clear in the legacy of Loving. For ideological reasons, the Court that decided Obergefell could not support a decision which would cause the inevitable expansion of all rights (housing, workplace discrimination, public policy, and more) to the queer community.

We must understand the politics of the Court and how it comes to consensus decisions as a function of ideology: not, however, as just any set of ideology and politics, but rather as the ideology and politics of the legal elite. Justices don’t directly answer to political parties; they answer to the institutional sites of discourse from which they derive their authority, which is to say, to elite law schools and legal circles. Judges cannot be disciplined by their parties; thus, they effectively become monarchs, only removable through coup (impeachment and removal, which has never happened to a Supreme Court Justice) or death. Senate confirmation as the method of appointment further insulates the ideologies of the Court from the politics of the day, encouraging a philosophical conservatism that prioritizes the interests of Senators and elite law over democratic rulemaking. Just as the politics of monarchs often do not map

cleanly onto the constitutional governments over which they preside, so SCOTUS institutional ideology tends to prioritize self-preservation and class interest over any particular philosophical goal. These powerful institutional structures of ideological practice are likely why models from mathematics and computer science perform extraordinarily well at predicting the behavior of the Court (Katz 2017), but that explains far less than we might want to know. All it affirms for us is the efficacy of analysis rooted not in law but in politics. We know the Court is formulaic, but beyond just the structures that I’ve begun to explore, what are the political outcomes and more complex institutional ideologies created and reaffirmed through this structure?

Throughout this paper, I use a Gramscian political reasoning around the Court. And I think that is key because of what it unlocks. We lose so much when we understand the Court only as a legal body. For instance, in our analysis of Loving and Obergefell, a strictly legal analysis would have led to a conclusion such as this: “the Court uses different doctrines in these different situations for reasons that are inexplicable.” The moment you interrogate the jurisprudence of individual justices, and why they might use different reasoning for very similar cases, you once again arrive at questions of politics. Treating those questions and considerations as central, rather than as a function of interrogation, is extremely powerful. The Supreme Court may very well be the place where truly material politics is most directly alive in America. Insisting on an understanding of this ideology through material injustice, an outcomes-focused approach in which the process is judged by what it births rather than valorized in itself, shapes a material and intensely political understanding. Kennedy in the Obergefell holding aligns the protection of the material structures that uphold his class position with the ideology of his political movement: strong “States’ Rights,” nominal social but not broad-reaching economic rights, and so on, which is to say, the bourgeois classical liberalism of the legal elite. We must consider the Court only within the shadow of class politics.

We should also not neglect to situate the Court in its global context. So often, analysis of the Court refuses to understand the global context because it reveals that absolutist visions about the “goodness” of our system might not be so valid. This is important because it not only establishes America as a historical anomaly, but also introduces yet another frame of reference through which we should be examining the Court. The United States is the only country (except, perhaps, India) that has strict or strong judicial review (Tushnet 2009). This means that the Supreme Court, unlike most other high courts, can overturn a law simply for not fitting its interpretation of the nation’s constituting documents. This is important to understand because it gives the Court an anti-democratic power that is unique in the world. The Court sits at the center of society, governing whether social safety net expansions can pass (e.g. the ACA



or the New Deal), granting or restricting rights far beyond what political majorities in Congress can amass for large-scale change, and creating national identities and cultures (e.g. the valorization of justices both liberal and conservative). It’s the power to say that, even though a law may be a reasonable, if narrowly disagreeable, interpretation of the Constitution, it nonetheless cannot stand because of how the justices envision the Constitution as operating. The Supreme Court becomes the Supreme Political Body, a Council of Elders, a House of Lords that can obstruct or overturn the will of the people at any moment. This is why we might approach analysis through the lens of the Supreme Court creating America, creating American politics, creating Americanism.

Before we can begin with the first landmark case of the Supreme Court, we must begin with the ideology of the Framers, most importantly Hamilton and Madison. They both must be understood within the often aristocratic ideology that they supported. As democratic as they may appear, they only exist within their milieu of power and class and how it restricts their possibility. As power must be secured for their class, so must the institutions of such power be created: the Senate (including the original state appointment system), the Supreme Court, non-expansive voting rights, and more. Hamilton at the Constitutional Convention supported an executive that would be almost monarchical in its power, and of course, the Madison presidency saw some of the first expansions of executive power and restrictions of civil liberties.

Nonetheless, we begin with Federalist No. 78, where Hamilton almost directly gives his view of the Supreme Court. It can exercise “neither force nor will but merely judgment.” Hamilton had a vision of the Court that was impartial, a body not influenced by partisanship that simply keeps the other branches between the relatively wide marks set out by the Constitution, resolving disputes through “merely judgment” with neither “force nor will” to create America. This can be viewed as an absolutism of democratic lawmaking, and Madison’s later writing supports this view. His faith in the power of democracy is strengthened after the Nullification Crisis. He writes that “While the Constitution is in force, the power created by it, whether a popular minority or majority, must be the legitimate power, and obeyed as the only alternative to the dissolution of all government” (Madison 1834). The Constitutional majority must be obeyed, even if in opposition to the very specific and narrow interpretation of the Court. Madison’s view contests both the Court’s current position as an ultimate lawmaker and its new doctrine of absolute nullification in the SB8 decision, in which the Court effectively allows the nullification of democratic will, as the majority of Americans consistently support abortion rights (Hartig 2021). The internal contradictions and tightly balanced systems of governance, all supported by the ideological

propositions of Montesquieu, suggest that the Court and the ideology behind it are not absolute, that it doesn’t fall into a single anti-democratic tradition as the 1619 view of America might suggest.

Given an understanding of the ideology of Madison and Hamilton, we can understand Marbury v. Madison as fundamentally opposed to these shockingly democratic principles. Chief Justice Marshall for the majority argues that “It is emphatically the province and duty of the Judicial Department to say what the law is.” This is almost indescribably anti-Hamiltonian and anti-Madisonian; at the same time, we should ask ourselves why we are even framing Americanism in terms of Hamilton and Madison. Even as Hamilton and Madison are upheld throughout our systems of education and politics as great American theorists, understanding American political acts only as “federalist” or “anti-federalist” forces us to conceptualize America narrowly in the personified terms of Hamilton and Madison, even when America may have been created by other forces. It disables a global analysis, it disables a Marxist analysis, and it only enables an essentialist analysis that builds America around who can best contrive these ideals. And it is dangerous for our political futures too. When we frame our political futures in terms of narrowly democratic principles in which the demos is a colonial elite, we limit how we can conceptualize, for instance, property or economic rights. To accept Madison’s framing of the power of a Constitutional majority, we inherently select a kind of majority to empower; for Madison, this may be a “popular minority or majority.” To frame Americanism in terms of kinds of majorities, not the popular majority, is anti-democratic Americanism; it is the empowerment of the well-distributed white majorities that elect the Senate and Electoral College.

But, more importantly, this is crucial to how we understand the Court as creating America, because it introduces the possibility that the Court didn’t hand down Marbury v. Madison and the doctrine of strong judicial review because of its acceptance of federalist aristocracy or its rejection of their narrow democratic principles; rather, the Court is an independent actor that issued the decision out of its own institutional ideology. This ideology has its origins in and is mediated by the structure of the institution itself; however, it is meaningfully independent of its framers. This is the correct way to understand this decision, and it has to support the old leftist view, because if we only analyze America in the shadow of Hamilton and Madison, we get an inexplicable America, an America that cannot be understood except through essentialist narratives that don’t even try to “discover” fundamental truth, but rather warp history around themselves.

The next moment, that of the Reconstruction Amendments, is important for identifying modern threads of American political thought. Beyond that, the Amendments create the conditions for decisions central to the Court’s unique ideology. To frame modern jurisprudence in the shadow



of Reconstruction, a useful thought experiment is to apply originalism, as formulated, to the Reconstruction Amendments. This is important because it reveals to us the deeply liberatory nature of the Amendments and the deeply hypocritical, or rather, political nature of originalism. The 13th, 14th, and 15th Amendments are a view of a truly equal America, a radically equal America held by the Radical Republicans. And the Court has acted to deny that possibility. If applied correctly, the Amendments would arguably open the door to expansive economic rights as well as the prosecution of the anti-democratic tendencies of the modern GOP (3rd section of the 14th Amendment); however, the Court’s specious appeal to original intent furthers the narrative of the Court as an institution in the old leftist vision that oppresses working-class victory, like any other institution. The ideology of originalism is deployed against the claims of marginalized people. Originalism shouldn’t be cast as a doctrine of trying to discover what the framers of any given piece of American history were thinking, but yet another example of the contrived framing of America around specific Framers’ principles designed to create the conclusions that justices who follow it want. However, the Court’s denial of the Amendments and the missed opportunity of Reconstruction itself also draws us nearer to a more mainstream view of the Court and America. Reconstruction was the opportunity for an equality that was not realized; on a wide-reaching scale: there was, for a moment, a chance for true women’s equality and the opportunity for Frederick Douglass’s view of a multicultural America. Jamelle Bouie’s “The Equality That Wasn’t Enough” reveals to us the tragedy of this moment (Bouie 2020). A version of the 15th Amendment introduced by Representative Samuel Shellabarger would not only have expanded the right to vote to all former slaves but would have ended the conceptualization of voting as a negative right, essentially eliminating the possibility of poll taxes or literacy tests employed by white supremacist during Jim Crow. It would’ve prevented the rise of Southern Redeemer governments through the simple, yet powerful, right to vote. It would have achieved a complete reframing of American history, as Black political thought would have informed new conceptions of government, and enabled true multiracial democracy. But the frame from which we first began this investigation of the political side of Reconstruction (the denial of Reconstruction by Congress and the Court after the moment of Reconstruction itself), reveals the answer. There would’ve been another, different Compromise of 1877, even if we got Reconstruction right. The bedeviled, conservative nature of the institutions that thwarted this movement would have stopped any like it down the road of history, effectively sealing our fate. This is precisely the danger of the political liberalism that tries to respond to our current moment. Even as it considers present and individual injustice, it cannot link it back to larger processes and structural failure.


This moment is lost not only with the military end of Reconstruction but with the Civil Rights Cases. In this set of five cases under the Waite Court, the majority holds that “The Thirteenth and Fourteenth Amendments did not empower Congress to safeguard blacks against the actions of private individuals. To decide otherwise would afford blacks a special status under the law that whites did not enjoy.” It turns the ideology of Reconstruction on its head and re-enables the doctrine of absolute nullification. The state and the Courts may enable invidious discrimination in every way, and offload the enforcement of these racial doctrines to private individuals, escaping the law (i.e. personal law). It is the same formulation as SB8. And in fact, the Court never overturned this understanding: when the Court upheld the Civil Rights Act of 1964’s restrictions on private businesses, it did so under the Commerce Clause, not the 14th Amendment, and I think that is no mistake. If we understand that even an extremely liberal court cannot radically expand rights because of the ideological conditions in which it exists, we understand that the Court never could return us to the liberatory moment of Reconstruction, even as it upheld the Civil Rights Act. This draws us to a conclusion about the Court that goes beyond the 1619 view of America as inherently marred by slaveholder ideology. The Court is not only conservative, hindering progress, but is reactionary, actively reversing progress. It is a uniquely deleterious force in American history, the bastion of not only conservatism but fascism, enabled by the archaic and aristocratic views of the Framers.

This moment lasts until the creation of the Warren Court under FDR, LBJ, and Democratic Senate leadership in the 40s, 50s, and 60s. While the various cases decided by the Court here deserve their own comprehensive legal analysis, our understanding of this moment must once again be carefully framed in the shadow of politics. The Court here acts as a historical anomaly: for the first time in history it acts, arguably sometimes against pure majorities of public opinion, to greatly expand civil rights and our conceptualization of what we are owed as citizens. But this moment is uniquely dangerous as well, for it creates complacency and misunderstanding of the Court. It creates the situation we are in today, where the Democratic Party simply has no response to the slow, methodical takeover of the Judicial branch, because the people who lead those organs have never had a personal and political relationship with the Court as anything but liberatory (not helped by many members of Congress being graduates of many of the same institutions that instill the ideology of the legal elite). And this is the fundamental danger of not only liberal ideology about the Court but the ideology of the liberals who serve on the Court. By acting not structurally but individually, they imagine that the Court can be recast as liberatory through good enough doctrine, through good enough legal machinations. This is the almost monarchical



ideology central to the Court’s operations. But even as they may create fleeting moments of liberation, important moments of liberation, they ignore the very anti-democratic and, as revealed in the analysis of the Civil Rights Cases, reactionary nature of the Court. Even more broadly, as most lawyers who graduate from top law schools are Democratic voters or personally liberal, by acting individually, they, like the liberal SCOTUS justices, reaffirm the structural conservatism of the system, and often directly believe in the very structural conservatism that stymies their political orientation (arguably in part because of direct class interest). This prevents the true re-creation and reform of the Courts; our possibilities for the vision of the future of the Court are limited as they are filtered through elite legal opinion, opinion that preserves itself, class interest, elite prestige, and the individual judge over the larger goals of an ideological movement.

The best place to enter the modern era of the Court is through the landmark case San Antonio Independent School District v. Rodriguez. The majority holds that funding public schools through local property taxes does not violate the Equal Protection Clause. Education is not a fundamental right, nor does the system facially discriminate, so the laws are not subject to strict scrutiny, and encouraging local control is a rational state interest. This almost directly contradicts the landmark Brown v. Board decision that education “must be made available to all on equal terms,” reviving the doctrine of separate but equal without, however, even the appearance of equality. The Court uses facial equality to empower devolved forms of power to capture key state functions. Understanding the complex legal and racial dynamics, particularly for the Latine community, requires the tools of Critical Race Theory; for my analysis, however, the important conclusion has to do with the conceptualization of state power and local control as a specific interest: as, again, a form of personal law which is not necessarily applied to a person, but to a larger community constituted through socioeconomic power. We see this pattern emerging in the Court over and again in the cases brought before it not because America is a function of the Court, but rather because America is a function of the anti-democratic ideology that necessitates the Court as a key institution.

But Justice Thurgood Marshall’s dissent in this case is deeply important as well, because it suggests the less essentialist interpretation. By exposing some of the deeper pathologies of the Court, we understand modern anti-state power views as constituted uniquely by the Court. Marshall identifies that the Court is abdicating its role in solving unequal education, writing: “I, for one, am unsatisfied with the hope of an ultimate ‘political’ solution sometime in the indefinite future while, in the meantime, countless children unjustifiably receive inferior educations.” In my view, this is a key conceptualization created by the Court; it is not inherent to American ideological foundations. SB8, the Civil Rights Cases, and more: all

are examples where the Court creates a just-so story in which invidious discrimination can occur, often even state-sponsored; however, due to political restrictions that never seem to exist when a matter of pressing urgency to justices is brought to the Court, the Court has no choice but to turn a blind eye. Thus, a crucial element of modern Americanism, personal responsibility as a shield against societal ill, is created by the Court. This can ultimately be viewed as an evolved version of the ideology of the Waite Court, protecting judges from a role in improving society, whether by intervening to enable private discriminatory actors or by failing to intervene to protect the sanctity of state power and state interests. Marshall’s other key argument in the dissent follows from this understanding as well. He argues, “Hence, even before this Court recognized its duty to tear down the barriers of state-enforced racial segregation in public education, it acknowledged that inequality in the educational facilities provided to students may be discriminatory state action as contemplated by the Equal Protection Clause.” The majority refuses to take Brown, and by extension the 14th Amendment, to their logical conclusions because of the wide-sweeping rights it would create: rights not to be discriminated against on face and not to be discriminated against by class, not only to have equal access but to have true parity and equity in distribution, almost a socialist vision.

But we must expand our understanding of the intersection of the Court, state power, and ideology beyond modern anti-regulatory takings. To do this, I’ll place two cases, both decided by conservative-majority Courts, in conversation with each other: Employment Division v. Smith and Fulton v. City of Philadelphia. In Employment Division v. Smith, Scalia holds for the conservative majority that Oregon may deny unemployment benefits to Native Americans who use peyote for purely religious purposes under standard drug testing laws. Generally, the holding is that facially neutral laws do not violate the Free Exercise Clause of the First Amendment. However, the conservative majority on the Court reverses this in their concurring opinions in the recently decided Fulton. They argue that adoption agencies should not be subject to facially neutral state power requiring that they give children to queer couples. In one case, the Court refuses to elevate themselves (judges) and belief to stymie state operations (this is of course granting the Court’s assumption that drug tests for unemployment assistance are a compelling state interest). They prioritize the state over the individual for the first time in this writing. However, in the other, the conservative justices de-emphasize the state and emphasize themselves; a simple belief system is important enough to override state interests. Or, to reframe in the language of takings, the exercise of neutral state power necessitates takings against those who do not believe in the legitimacy or judgment of that power.



We have, then, another set of contradictions impossible to understand simply in terms of jurisprudence, just as in the marriage equality cases. The contradictions themselves are of course necessitated by the politics of homophobia and race central to conservatism. To understand the contradictions in a broader sense, however, we must once again develop a political and structural understanding starting, somewhat interestingly, with the Waite Court ideology. Even though Waite Court ideology is most centrally about protecting judges and Courts, that’s a function of pragmatism. The Waite Court ideology was created in the context of a Court more reactionary than the Congress which had just passed the Civil Rights Act of 1875; it isn’t absolute. Political outcomes and institutional ideologies only exist insofar as they align two powerful forces: 1) the structurally capitalist/conservative discursive power of elite lawyers and SCOTUS appointment, and 2) the goals of a conservative political movement with many of the same class interests as elite lawyers. Conservatives are only prioritizing structure and judicial power because it creates that key alignment. State power can be valid, as long as the state is conservative; judges can then be sidelined. On the other hand, liberals fail to act and insert their ideology into the latter part of that alignment; they do not have a political movement that prioritizes capturing the Court. Or, most precisely, they do not have the ideological strength. The Court can only act to strengthen a political position insofar as the structure and ideology align; when they do not, a political group must be willing to rebuild the structure to align with the ideology.

Most powerfully, we can therefore understand the Court as a set of feedback loops, multiply determined by structure, sites of institutional definition, and institutional ideology that necessitate and align with political movement ideology to deliver certain, often necessarily conservative, political outcomes. Political outcomes are framed in terms of the unique structure and institutional definitions that reaffirm and create institutional ideology with both discursive power (e.g. the valorization of the Founders, devotion to the process itself as good instead of outcomes, or the radical self-empowerment/self-protection of the Waite Court) and material power (e.g. a Court which pushes broadly conservative politics, even in liberatory moments like Obergefell, or even when counter to political opinion). In fact, this alignment and these feedback loops give conservatives another significant advantage. This analysis highlights why it’s so important for left-wing politics to contest not just specific political outcomes but also underlying assumptions and institutions; important, in other words, to engage in definitional, discursive, and structural struggle. Because the Court’s structure lends itself to conservative movements, and conservative movements are aligned with the material interests of judicial institutional ideology, movement politics becomes easier. The movement can be safely centered around blind devotion to institutional ideology (valorizing the Framers) and structure because those


ideologies and structures have been so thoroughly captured. That capture opens opportunities for deeply pragmatic rulings, even those that contradict purported institutional ideology, like the phony originalism of how the Court interprets the Reconstruction Amendments or the pro-state ruling in Employment Division v. Smith, through the tightness of integration between structure, institutional ideology, class interests, and movement politics. By insisting on a conceptualization of the Court that centers outcomes and fights for structure (i.e., the means of discursive production), we gain profound analytical insight.

The last case key to understanding the modern Supreme Court is National Federation of Independent Business (NFIB) v. Sebelius. The landmark decision upheld President Barack Obama’s Affordable Care Act in parts and struck down others. This case is important because it demonstrates both the political negotiations present in a court that purports only to be interested in discovering the “true” nature of the law and the degree to which the Court is entangled in the creation of modern politics, strengthening the case that the Court is not the product of a damned nation, but a superstructural force subjugating and creating politics as they exist today. The political nature of the Court can be fairly easily interrogated through the incredibly complex five-part holding of the Court. First, justices of the Roberts Court unanimously decided that the case could not be stayed under the Anti-Injunction Act of 1867. Second, Roberts, writing for a liberal majority, holds that the Individual Mandate (requiring that all carry health insurance) is a tax, and therefore a valid exercise of congressional authority. Third, Roberts, writing for a conservative majority, holds that the Individual Mandate was not a valid use of Congress’s power to regulate commerce. Fourth, Roberts, writing for a conservative majority, holds that states could not be compelled to expand Medicaid under threat of losing all federal funding. Fifth, Roberts, writing for a liberal majority, holds that states could be induced to expand Medicaid through the new funding appropriated under the ACA (NFIB v. Sebelius 2012).

Without considering the complex legal contradictions this case poses (of which there are many), Roberts essentially acts here as the personification of the Median Voter Theorem. The Theorem holds that politics will gravitate around the median voter (between ideological poles), rather than toward particular parties’ bases, because the median voter casts the tie-breaking vote. Effectively, Roberts demonstrates that the Court acts like any other system of elections. In the context of the Court, he acts to both express and moderate movement conservative and liberal ideology as mediated through the elite legal, structurally conservative ideology of the Court. There was a deal-making process of concessions from the liberals to secure his vote on larger priorities (Roy 2014). This explicit demonstration of the nature of the Court is so important because it reemphasizes the need to understand it primarily politically, even as interesting as the particular



legal issues at hand may be. That political understanding still must be mediated by an understanding of the politics of the justices and the ideology of their milieu. Roberts acts as the median not because he loves moderation as an end in itself but because that’s simply where his ideology is situated with respect to the other justices. When a case like Shelby County v. Holder comes in front of the Court, he breaks the tie with his ideology, overturning part of the Voting Rights Act. Enacting the median can be understood either cynically as a play by Roberts to shape decisions as much as possible or simply as the result of his positioning, but it is still fundamentally about ideological commitments, not broad principles. But within the larger context, the decision shaped politics, essentially allowing the political parties to set the agenda and to have the agenda set through the Court (Highsmith 2019).

In general, there are four ways through which parties and the Court are highly intertwined, creating each other’s agenda. First, parties use their institutional resources to create legal challenges. Organizations like the National Federation of Independent Business are not only subsidized by party infrastructure but also subsidized by the state through “non-partisan” tax breaks and advantages. Second, parties often act as a signal to the judiciary. One of the most presently relevant examples is the pressure the Republican Party put on the courts during the 2000 election. The Court plays a central role in the creation of the politics of the moment by acting as the venue through which political disputes are resolved, and its inherently conservative structure allows those interests to win more often. Similarly, the Tea Party insisted to the judiciary that the ACA was of great importance. Third, parties and the Court amplify certain constitutional claims. The Court often considers complex Constitutional claims, and that agenda must be translated from the discourse of elite law to the discourse of politics. So, heavy connections between political parties and the Court (for instance, all the Bush appointees who decided NFIB had heavy connections to the Bush administration or previous Republican administrations) enable the Court to effectively translate issues into the public sphere, shaping politics. And this is why liberal abdication of this important conduit is such a large failure; strong courts like America’s, with ultimate agenda-setting power (because of their ultimate power to overturn or reshape any law), must be treated explicitly politically with an outcome-focused approach. Fourth and finally, with translated, highlighted issues in the public square, parties can start selling constituent engagement in the Court. As the issues of the day and the Court become intrinsically linked, voters respond, voting for Senate candidates and Presidents based on their promises to shape and create the Court. In this way, the voters are not creating the Court so much as the Court has effectively created a large bloc of voters devoted to shaping and stewarding its future; devoted to protecting not just the outcome but the conservative process.

Where Do We Go from Here?

The Court is intertwined in America’s DNA: you don’t have America without the Court. But even in its intertwined role, it acts pre-discursively, as a neutral surface on which politics and the issues of the day are projected. The structure of the institution itself is uniquely American; it is situated governmentally as the means through which national narratives (e.g. ones of equality, justice, and property) and national ideology are sociopolitically established. The structure of the Court, as created by the founders (lifetime appointment, Senate confirmation, and difficult impeachment and removal), creates the ideology through which our national narratives are realized. These are national narratives always framed through the bourgeois professional-class milieu of elite law: reaffirming the beliefs and structures of the less liberatory Framers, and delivering a unique flavor of classical and neoliberal conservatism and liberalism. These are not quite explicit movement politics in the way the Court prioritizes minoritarian propertied interests and institutional preservation and commitment to the process itself, but they are distinctly political in the way that issues of the day are mediated by the Court’s unique agenda-setting power. To remove the Court from American history would fundamentally alter it: no other institution can act in the conservative and reactionary way that the Court has.

But it’s important to note that the Court still is fundamentally a product of Americanism. It is integral to the American superstructure, uniquely influenced by the aristocratic, but also narrowly democratic, ideology of Hamilton, Madison, and even Jefferson, while also simultaneously generating at every point new politics, mediated by that institutional and class ideology, that can shape the nation and state in new and unexpected directions. And the power of the Framers goes beyond their role in creating the structure of the discourse; they and the Court have created a new form of legitimacy. Politicians invoke the Framers and the institutions they created as great and fundamentally American, strengthening their discursive power to have all conceptualizations of the nation flow through them.

This of course leads to questions of how we might even resolve the situation in which we find ourselves. Understanding this as purely an academic exercise would be a disservice. We need to understand the broader political implications. Ultimately, fights about how we might reform the Court are fights about how we frame America into the future, how we might reclaim progressive and even socialist Americanism. This begins with addressing the abdication of duty on the part of Congress. Congress still has the power Hamilton lays out in Federalist 78 to regulate the Court. Its more opaque practices could be done away with by simple legislation. Arguably, Congress could even coerce the Court into a different doctrine of judicial review. But this is also a battle of public opinion. The Court draws its legitimacy



both from its acceptance within academic circles (that is why Kennedy and Roberts are so careful to pay attention to the ideology of elite lawyers) and from mass public opinion. A public focused not on serving the Court with its votes, as now, but on fundamentally transforming the nature of the Court is a public perfectly positioned for the biggest reform of our time. The Court cannot move until one of the three great levers of power moves: Congress, elite public opinion, or mass opinion. But if we focus our energies on flipping even one, we can more easily flip the others. We have to dream of an Americanism liberated from the shadow of the Court and created in the cleansing sunlight of pluralistic, multiracial democracy.

Boudin, Louis B. Government by Judiciary. Russell & Russell, 1968.

Bourne, Randolph. “The Doctrine of the Rights of Man as Formulated by Thomas Paine.” 1916.

Bouie, Jamelle. “The Equality That Wasn’t Enough.” 15 Feb. 2020.

Bowie, Nikolas. “The Deregulatory Takings Are Coming!” LPE Project, 17 June 2020.

Bump, Philip. “Analysis | A Quarter of Republicans Voted for Trump to Get Supreme Court Picks and It Paid Off.” The Washington Post, 8 Oct. 2021.

Hamilton, Alexander. “The Judiciary Department” (Federalist No. 78). 28 May 1788.

Hartig, Hannah. “About Six-in-Ten Americans Say Abortion Should Be Legal in All or Most Cases.” Pew Research Center, 3 Sept. 2021.

Highsmith, Brian. “Partisan Constitutionalism: Reconsidering the Role of Political Parties in Popular Constitutional Change.” 4 Wisconsin Law Review 101 (2019).

Katz, D. M., M. J. Bommarito, and J. Blackman. “A General Approach for Predicting the Behavior of the Supreme Court of the United States.” PLOS ONE, vol. 12, no. 4, 2017.

Madison, James. “Majority Government.” 1834.

Supreme Court of the United States. “Marbury v. Madison.” 5 U.S. 137 (1803), 24 Feb. 1803.

Supreme Court of the United States. “United States v. Stanley; United States v. Ryan; United States v. Nichols; United States v. Singleton; Robinson et ux. v. Memphis & Charleston R.R. Co. (the Civil Rights Cases).” 109 U.S. 3 (1883), 15 Oct. 1883.

Supreme Court of the United States. “Loving v. Virginia.” 388 U.S. 1 (1967), 12 June 1967.

Supreme Court of the United States. “San Antonio Indep. Sch. Dist. v. Rodriguez.” 411 U.S. 1 (1973), 21 Mar. 1973.

Supreme Court of the United States. “Employment Division, Department of Human Resources of the State of Oregon v. Smith.” 485 U.S. 660 (1988), 27 Apr. 1988.

Supreme Court of the United States. “National Federation of Independent Business v. Sebelius.” 567 U.S. (2012), 28 June 2012.

Supreme Court of the United States. “Obergefell v. Hodges.” 576 U.S. (2015), 26 June 2015.

Supreme Court of the United States. “Bostock v. Clayton County.” 590 U.S. (2020), 15 June 2020.

Supreme Court of the United States. “Fulton v. City of Philadelphia.” 593 U.S. (2021), 17 June 2021.

Supreme Court of the United States. “Cedar Point Nursery v. Hassid.” 594 U.S. (2021), 23 June 2021.

Supreme Court of the United States. “Whole Woman’s Health v. Jackson.” No. 21, 10 Dec. 2021.

The White House. “President Biden to Sign Executive Order Creating the Presidential Commission on the Supreme Court of the United States.” 9 Apr. 2021.

Tushnet, Mark V. Weak Courts, Strong Rights: Judicial Review and Social Welfare Rights in Comparative Constitutional Law. Princeton University Press, 2009.

Roy, Avik. “The Inside Story on How Roberts Changed His Supreme Court Vote on Obamacare.” Forbes, 13 Aug. 2014.




Historical Importance of Indigenous Irrigation Systems on the Peruvian Coast Katie Beard

Poco a poco se anda lejos - Peruvian Proverb

This phrase, meaning “little by little, one walks far,” holds true for Peru’s indigenous irrigation systems. From utilizing floods for irrigation to constructing one of the most intricate irrigation systems in the Americas before 1100 AD, indigenous engineers in Peru were able to use the principles of hydraulic engineering to develop a strong empire on the coast. From a historical perspective, the outcome of this empire should not be surprising, as “some anthropologists and historians point to the development of irrigation as the catalyst for the interaction of engineering, organizational, political, and related creative or entrepreneurial skills and activities which produced the outcome referred to as ‘civilization’” (Sojka 745).

The principles of hydraulic engineering applied to the coast of Peru can best be understood by looking at the anthropological site of Chan Chan. Chan Chan, built in 850 AD, was the capital city of the Chimu empire. It is located in what is now the La Libertad region, about 3 miles west of Trujillo. Chan Chan was surrounded by fields which relied on the Chicama-Moche Intervalley Canal and the Vichansao Canal to draw water from the Chicama and Moche Rivers for irrigation. The water from these irrigation systems also ran to underground wells within the city of Chan Chan, where residents could be supplied with water separated from agricultural use. For the time, this practice was revolutionary, and it further demonstrated the strength of Chan Chan during the period. These canals supplied Chan Chan with a strong source of profit through their agricultural products. This allowed Chan Chan to create a clear socio-economic structure in which control could be maintained through rural administrative units. This demonstrated the advancement that irrigation systems could afford to the cultures around them.

By applying this principle to today’s cultural landscape and restoring some of the pre-Columbian irrigation practices, the indigenous communities around these practices would benefit. By tapping into the rising agro-tourism industry, as well as consuming and selling the food products of the cultivated fields, these cultures would not only make a

profit but also combat food insecurity. If governments invested in the redevelopment of indigenous agricultural practices, they would achieve deeper engagement with these communities and forge a stronger relationship with their country’s indigenous cultures. Lastly, not only would indigenous cultures benefit from the restoration of their agricultural practices, but their knowledge would also serve as a key to mitigating climate change.

Before the construction of the Chicama-Moche Intervalley Canal, residents of the Chan Chan area likely used floodwater farming techniques to irrigate crops. Chilca sites, located in the Lima district, are frequently studied for information about prehistoric agricultural techniques. The sites were thought to use sunken fields to farm; however, the frequent flooding of the area (suggested by soil samples and the current climate) would have been unconducive to this agricultural technique, since sunken fields may overflow. The sunken field hypothesis assumes that floods were an inconvenience, rather than a key resource for the land’s irrigation. The raised portions of the fields were level with the basins, which contributes to the idea that the fields were embanked for protection. However, these embankments, constructed from heaps of soil around the field, had gaps. These gaps were likely constructed purposefully, as a major archaeological site (IV-G6-A) was built next to them. If intentional, the gaps would likely have served as flood control, diverting water onto the crops. Similarly embanked fields were constructed in flood-prone areas across South America, further leading away from the conclusion of “sunken fields.” Embanked fields at Lake Titicaca and in the Viru Valley were both found to guide waters, rather than to protect areas from the floods.

In more arid regions, indigenous farmers currently use similar techniques. In Navajo country to the north, the following technique is described: “Along the flood plains of the larger washes, the practice is to plant corn at intermediate spaced holes 12 to 6 inches deep. The grain germinates in the sand and rises a foot or more above the surface before July begins. With the coming of the flood the field is wholly or partially submerged. After the water has receded parts of the field are found to have been stripped bare of vegetation and




other parts to have been deeply buried by silt; the portion of the seeded ground remaining constitutes the irrigated field from which a crop is harvested” (Gregory 104). This technique is mainly in use by indigenous communities in the southwestern United States, but groups in Oaxaca, Mexico also still practice this type of farming. Systems in this area combine floodwater farming with water table farming. This combination may have been used in Chan Chan, and it eventually contributed to the construction of the Chicama-Moche Intervalley Canal.

As Chan Chan began to evolve, the Chicama-Moche Intervalley Canal was likely constructed to compensate for a water shortage. Around the year 1000, Chan Chan faced both a drought and tectonic changes, leading to a drop in the water table. To supplement their water resources, Chimu leaders decided to divert water from the Chicama River, which was substantially larger than the Moche River. The Chicama-Moche Intervalley Canal was constructed, providing water directly into fields lying north of Chan Chan. The construction of the Chicama-Moche Intervalley Canal served as a testament to the civilization’s engineers, who worked with an arid climate and mountainous terrain. The result was the world’s first canal constructed on an uphill slope. Modifications were made minimally, but were eventually required for the completion of a stable channel. These modifications included deep cuts, embankments, and aqueducts, which meant cuts into various materials. “These canals characteristically have a bed of alluvial silt, which in some places is stone-lined for added stability; and whilst the canals have unlined banks in many places, certain sections exhibit stone lining; again for stability and to prevent the collapse or undercutting of sandy banks” (Park 162). Though Manning’s roughness coefficient remained consistent among the various materials, the limiting velocities varied greatly (Manning’s relation is sketched in the note at the end of this section). This illustrates that more methods of control would be needed to ensure an effective delivery of water. A moveable sluice gate was added to the canal intake, which drew water from streams draining from the Andes. Furthermore, to ensure that the crops were allowed suitable amounts of water, the main feeder canal was purposefully diverted into the distribution canals for a few periods throughout the year. Within the fields themselves, some forms of control were exercised, likely in the form of furrows.

Though the Chicama-Moche Intervalley Canal was the largest point of focus, the Vichansao Canal was also created to service the area. The Vichansao Canal was constructed in the Middle Moche Phase (300-400 AD), while the Chicama-Moche Intervalley Canal was constructed in the Late Nazca Period (550-750 AD). This suggests that the Vichansao Canal either was unable to take in enough water from the Moche River or ran to ineffective points of entry in the fields. The former was likely true, as the Moche River not only had a smaller water supply but also placed more limitations on the Vichansao Canal’s effectiveness. When comparing the

cross-sections of the canals, the Vichansao Canal had a more restrictive limit of excavation than the Chicama-Moche Intervalley Canal, and had higher levels of silt in the channel. Though the Vichansao Canal and the Chicama-Moche Intervalley Canal were far better off than previous irrigation methods, they were still liable to environmental threats. The canal systems along the Moche River, to the north of Chan Chan, were destroyed around 1100 AD due to flooding from an El Niño event. They may have been further damaged by a larger El Niño event in the fourteenth century, suggested by geomorphological research in the Casma Valley. Radiocarbon samples dating to 1325 AD (+/- 45) and 1380 AD (+/- 140) seemed to represent the same El Niño event, suggesting that the event occurred around 1330 AD. In addition, raised fields were located in the Casma Valley. The function of raised fields relates to two ideas: passive irrigation and reclaiming waterlogged land. Given the arid climate of the Chimu region, the function would have been the former, further suggesting an El Niño event. This El Niño event would have aligned with the one recorded in the fourteenth century, as agricultural workers’ tools were located outside of the raised fields.

The Chimu’s irrigation system allowed them to grow a more diverse range of crops, including beans, sweet potatoes, and cotton. Beans and sweet potatoes naturally grow better in silt or clay loams than in Peru’s more infertile native soil types. Though flooding may have allowed these crops to be grown in certain areas seasonally, the irrigation system allowed the cultivated area to be expanded and these crops to be grown more consistently. The newly efficient cultivation of crops allowed the Chimu to grow economically, even fostering an intricate socio-economic system.
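A brief note on the hydraulic terms used above. The relationship between channel roughness and flow velocity invoked in the discussion of the canal linings is conventionally expressed by Manning’s equation, a standard open-channel flow formula; it is not drawn from the sources cited here and is included only as an illustrative sketch:

\[ V = \frac{1}{n}\, R^{2/3}\, S^{1/2} \]

Here V is the mean flow velocity (in metric units), n is Manning’s roughness coefficient for the bed and bank material, R is the hydraulic radius of the channel cross-section, and S is the channel slope. If n stays roughly constant across lining materials, as the text describes, velocity is governed largely by cross-section and slope, which is consistent with the need for additional controls such as sluice gates and periodic diversions to keep velocities below the limits each material could withstand.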

The importance of these irrigation practices extends beyond the agricultural advantages into the development of Chimu society in pre-Columbian Peru. By looking into the design of Chan Chan, it can be understood how these agricultural advantages became the center of the city's wealth and society. "Adams (1996) notes that irrigation systems tend to concentrate the control of the surplus wealth and productive resources in the hands of a limited segment of society and thus, at least in some cases, hydraulic systems may be seen as stimulating the development of non-egalitarian or hierarchical society" (Keatinge, Day 275). By this principle, the rise of Chimu society can likely be attributed to these early, sophisticated irrigation systems. The residents of Chan Chan, being the main beneficiaries of the region's rapid agricultural development, were then able to put themselves in a position of higher authority than their rural counterparts. Chan Chan itself was separated into distinctive categories of architecture corresponding to social classes, illustrating the formation of a hierarchical society from newfound wealth.



The first of these, "slum architecture," was destroyed by modern-day farming. "Slum architecture" was the largest portion of the city, displaying no formal planning. The next largest piece, "intermediate architecture," included housing and passageways, but also courts, signifying a greater level of power for the residents of "intermediate architecture" housing. "Intermediate architecture" displayed clear urban planning principles and was located next to "monumental architecture." "Monumental architecture" referred to ten rectangular enclosures in Chan Chan, which were well preserved throughout the years compared to the "less dignified" forms of architecture. These rectangular enclosures were likely reserved for elite groups to reside in. Large walls barred entry by commoners, and corridors within the enclosures would have further limited access to various sites once inside. Even among these formal enclosures, a select few were distinguished by an even more formal plan. These enclosures were named "ciudadelas." Ciudadelas were clearly divided into three sections and included only one point of entry. Ciudadelas included a walk-in-well, likely so that residents could access residential-use and drinking water separated from agricultural-use water. Citizens of Chan Chan outside the ciudadelas were also provided access to drinking water (likewise separated from agricultural-use water). However, given the climate conditions that prompted the construction of the Chicama-Moche Intervalley Canal, it would be reasonable to assume that citizens remained conscious of their drinking water usage. A residential walk-in-well would have allowed free access to water at any time, signifying a hierarchy in which drinking water restrictions were lifted for these residents. The lower sector of the ciudadelas, the "canchones," also contained a walk-in-well in addition to fewer barriers to movement throughout. Canchones were likely used by inhabitants as a social space, which further illustrates a power divide, as residents and their guests were able to access water freely. The function of the ciudadelas has been highly contested among anthropologists. The ciudadelas contained kitchens, storerooms, burial platforms, and structures called "audiencias." Audiencias were structures shaped like a "U." The intention of the audiencias remains ambiguous to anthropologists, but most theories, from shrines to funerals, contain some symbol of status. Furthermore, there existed a clear relationship between the audiencias and storerooms. Across both sectors, audiencias were usually located next to the storerooms. Both audiencias and storerooms remained difficult to enter, with a singular point of entry to each structure. Navigating to these structures required movement through a system of corridors, which clearly connected the audiencias and storerooms. "As noted elsewhere, the relationship of audiencias to storage rooms through an integrated system of corridors is the architectural expression of Chimu administrative control over the storage and distribution of goods" (Keatinge, Day 284).

However, the north sector had far more audiencias and fewer storerooms than the central sector. An explanation may exist in Ciudadela Rivero. Ciudadela Rivero's central sector featured only one audiencia. The single audiencia was located next to storerooms and a unique burial platform. Though the sectors had many burial platforms, this burial platform remained distinct, and it had been looted. This section of the ciudadela was more secluded than the rest. "If class separation can be consistently followed in architectural layout, then the highest status resident of the ciudadela would have lived in the most isolated part of the enclosure. One audiencia, one "king"; one kitchen, one occhocalo (the royal cook)" (Keatinge, Day 284). Furthermore, the king was often accompanied by a "fonga," who lined the king's path with seashell dust. Along the sides of the looted burial platform, archeologists found a sealed deposit of crushed shell. Thus, the king likely resided at Ciudadela Rivero, and high-status residents may have received similar distinction across the other ciudadelas. From the structures, it can be concluded that the ciudadelas were the residences of Chimu leaders, who worked on the administration of Chan Chan and its rural constituents. The power of the region was consolidated into these ciudadelas, whose residents enjoyed and controlled the profits from the increased rural productivity. Moving away from the ciudadelas, the power and wealth held by residents diminished. The rural areas were largely planned for and controlled by the Chimu elite. Very little evidence of the rural residential areas remains, illustrating the minimal investment in their construction. However, three villages were identified at the site, each of which included a "rural administrative center." These centers were named El Milagro de San José, Quebrada del Oso, and Quebrada Katuay. They were isolated from the rest of the villages, with no evidence for a nearby population. Each center was surrounded by fields and located near one of the irrigation canals, thus illustrating an agricultural purpose. The structures of the centers were similar to the ciudadelas, with the same entry courts and passageways. At El Milagro de San José, evidence for a kitchen identical to the ciudadelas' style was found. In addition, each center had a "focal point." Each of these focal points was identified as an audiencia, but with a more rural architectural style. Despite the stylistic differences, audiencias were usually reserved for the parts of Chan Chan with monumental architecture, signifying that the residents or guests of these centers had a higher status than the agricultural workers. Outside of the centers, planned houses and temporary housing were located, suggesting that officials from Chan Chan were using the administrative centers. The isolation from the population illustrates that these officials would not have visited to control the rural population, but to analyze and control the fields and irrigation systems. Combined with the highly protected storerooms, this solidifies the point that Chan Chan's main concern was its advanced agricultural practices.



Furthermore, audiencias were found only in places of high socioeconomic status: at the ciudadelas and in the fields. This suggests that the Chimu elite were not only highly focused on their agricultural profits, but that they also respected the systems that produced them. Though Chimu society had a clear hierarchy, the Chimu empire as a whole prospered as a result of the construction of the irrigation system. Chan Chan headed the rise of the Chimu, building a trade hub where the Chimu both stored and traded their crops. Their agricultural profits allowed the Chimu to invest more into their military, expanding outward. Under the rule of Guacricaur (960-1020 AD), around the same time as the irrigation system's construction, the Chimu were able to expand into the Moche, Santa, and Zana valleys. He was succeeded by Nancempinco, who expanded the Chimu empire south, taking over the Lambayeque. Both the height of the Chimu empire and its end were reached under the rule of Minchancaman (1440s-1476 AD). During the height of the empire, Chimu influence stretched "1300 km along the coast of northern Peru" (Cartwright). Despite the magnitude of the empire, the Inca, headed by their ruler Tupac Inca Yupanqui, were able to conquer the Chimu. As the Inca moved through Peru, they were likely led to the Chimu by an ally in the highlands, in Cajamarca. Upon the conquest of Chan Chan, the Inca held Minchancaman as a prisoner, but treated him as somewhat of a guest. He and his court were kept as informants on the land and on the administration of Chan Chan and its surrounding areas. In a testament to the strength of these systems, the Inca maintained the agricultural works, as well as other principles of the Chimu empire. Keatinge writes, "Although some of the similarities between the Chimu and Inca socioeconomic organization may well derive from the shared Andean heritage, the Inca organization of state lands, use of tribute labor, concern with storage, and organization of production and redistribution of goods strongly suggest a Chimu origin for their development" (Keatinge, Day 292). Anthropologists were able to make a clear connection between the Chimu and Incan empires, especially as the conquest occurred roughly 50 years before the Spaniards colonized Peru. When the Spaniards arrived, they were able to meet and record stories from Chimu citizens who had lived through the Incan conquest. Furthermore, a path of conquest could be traced, as the Inca kept more records of their highland conquests, which would have included the Chimu's aforementioned allies.

After the Inca conquered the Chimu, very little changed with respect to the irrigation systems. The Chimu's irrigation systems were the most advanced of their time and yielded very effective results. However, the purpose of the water drawn from the canals may have changed slightly. The Chimu placed a high value on their walk-in-wells, where residents of Chan Chan had access to residential-use water separated from agricultural-use water via water tables recharged by the canal.


The Inca, by comparison, would have preferred to reserve water for religious uses. However, both empires used the canals primarily for irrigation. It is possible that the Inca added onto the irrigation system. When the Chimu built the Chicama-Moche Intervalley Canal, they used aqueducts where necessary. The Inca were said to have an affinity for aqueducts, as modeled by their use at sites like Cuzco, Pisac, and Tipon. They usually created aqueducts along the sides of mountains, which would have worked well in conjunction with the route of the Chicama-Moche Intervalley Canal. Evidence for these Inca characteristics in the irrigation system was found at Huanuco Pampa. Huanuco Pampa may have been a small administrative center during the Chimu empire, but it reached its height under the Incan Empire. The water management of the site contains clear influences from both the Chimu and the Inca. The canal system, featuring both open and covered canals, fed baths and a pool. The baths were found on the eastern side of the site and were decorated with an Incan design. In addition, the pool was likely an Incan touch rather than a Chimu characteristic. However, the hydraulic system characteristically had niched walls, which were used throughout Chan Chan's "monumental architecture." Audiencias, in particular, had niched walls, with the niches roughly corresponding to the esteem of the structure. The discovery of these niched walls could have multiple meanings: either Huanuco Pampa was originally a Chimu site, taken over early in its development by the Inca, or the Inca adopted a large amount of Chimu culture, even architecture. Should it be the former, the Chimu were likely looking to expand further inland, rather than focusing their expansion north and south. Soon after the Incan conquest of the Chimu, the Spanish arrived in Peru. Forcing the Inca off the land, the Spaniards turned away from the native crops to crops of their choice. On the coast, sugar cane was planted. Sugar cane required three times as much water as the crops the Chimu and Inca had cultivated on the land. In addition, the Spanish population, horses, and livestock in Peru at the time would have demanded much more than the land had originally sustained. In Trujillo, which was once under Chimu rule, the Spanish modified the canal systems to provide water into each home, further increasing demand. Though the Spanish usually did not greatly modify the irrigation systems in place upon their initial conquest, the growing water demand would have required a change. Of these modifications, the most notable would have been the filtration gallery. The filtration gallery, known as the "puquio" in Peru, was a good choice for irrigation in the Andes, as it took advantage of hillside terrain. In Latin American Antiquity, Barnes describes the system's form in Peru. She writes, "A filtration gallery is a tunnel cut on a gentle gradient into the



water table within a hillside. The aquifer is found by digging a series of vertical shafts in a straight line until striking water, and then linking the bottoms of the shafts with the tunnel...Where the tunnel emerges into open air, there is often a reservoir from which water is distributed, either to fields or for household use" (Barnes 49). These systems are ideal both for providing fresh water and for draining mines. They could transport water through Peru's arid climate without losing much of it between locations. The origin of the filtration gallery dates back to its earliest record in the Assyrian empire in 772-705 BC. Islamic conquest would have spread the invention across Western Asia, and the idea would eventually have made its way to Spain with the Arabs. The filtration gallery was introduced to the Spanish cities of Crevillente, Cordoba, and Madrid. The invention allowed these cities to expand and support their growing populations. Although the filtration galleries in Peru are mostly held to be of Spanish origin, the filtration galleries built in the Nazca drainage add some ambiguity to the question of origins. Accounts of the Nazca filter galleries dated them to Inca Roca, rather than to the Spaniards. The date of construction is unknown; however, the settlements in the Nazca drainage were able to thrive before Hispanic rule, due to agricultural profits stimulating the economy. When the Spanish arrived in the area, they noted that it was well irrigated. Furthermore, the same climate events that may have led to the construction of the Chicama-Moche Intervalley Canal may have led the Nazca to build their filtration galleries. Though the debate may seem trivial, the distinction mattered under Hispanic water-rights law, where water from an indigenous technology was public property, and water from Hispanic technology was private property. Though there is no consensus, most believe that the galleries were built by indigenous engineers rather than Spanish ones. However, other possibilities remain, including the hypothesis that they were the Spaniards' idea but carried out by indigenous workers in the area. The Spanish quickly implemented filtration galleries into Latin America's irrigation systems. By 1526, the Spanish had already established their first filtration galleries in Mexico. Once they explored Peru, they made modifications to infrastructure, allowing filtration galleries to be built. Peru's location in the Andes mountains made it a good area for the Spaniards to add filtration galleries, as they had established mines and needed a method to drain them. The first filtration galleries in Peru were built in the Santa Valley around 1590, then in the Chillon Valley. The focus of the first filtration galleries was to irrigate Lima, where the Spanish had chosen to build their capital city. Their filtration gallery systems were expanded so quickly in Peru that by the 1800s, the system extended from the Santa Valley to Puquio Nunez, Chile (around 2000 km). Filtration gallery systems were used for a wider range of purposes than the irrigation systems were.

The filtration gallery systems in Lima, where the Spaniards placed the greatest focus, supplied water not only to fields, but also to orchards, fountains, and even individual houses. Water from the filtration galleries was stored underground in cisterns, where it was easily accessed for residential use. At Pisco, the water's use differed, extending to religious purposes. One canal from the filtration galleries was used for the monastery, and the other split water between the Spaniards and the local indigenous communities, which was unusual given the Hispanic water-rights law stating that water from Hispanic-built waterworks was the property of a single Hispanic landowner. The Belen filtration gallery system used the water for a vineyard, another example of the Spanish changing the typical crops grown in the region. The Spanish use of these irrigation systems may also have contributed to the formation of a hierarchy, as the water was most typically used to support hacienda systems in both agriculture and mining. In these systems, Spanish elites owned large estates that were worked by various groups, most typically indigenous laborers. These laborers received little compensation for their work and were bound to the Spanish estates. These systems were enforced throughout Latin America and solidified the Spanish racial hierarchy into Latin American culture. As the Spaniards forced themselves into a higher position in Peruvian society, Peruvian agriculture was inclined to follow Western European trends rather than indigenous techniques. The crops farmed suited Spanish needs rather than the native foods of the land. Among these were sugarcane, grapes, and citrus fruits, all of which were better suited to Mediterranean-style farming than to Andean conditions. For example, grapes require looser and better-drained soil than what the coast of Peru could provide. They also require deeper digging into the ground, which may have been difficult given the mountainous and rocky terrain. Many of these foods required more water than the indigenous crops, which further transformed the irrigation systems of the region.

Currently, Peru and other states around the world have started to revive some indigenous agricultural practices. In coordination with the Food and Agriculture Organization of the United Nations (FAO), the New Zealand Aid Programme developed a project to aid indigenous cultures in reviving their old agricultural practices. This program, the Strengthening of High-Andean Indigenous Organizations and Recovery of their Traditional Products project (FORSANDINO), was implemented from 2007 to 2011 in Huancavelica. The goal of this project was to provide better resources for indigenous agriculture, which in turn could alleviate food insecurity for indigenous communities in Huancavelica. To oversee the results of the project, FAO conducted surveys and field visits in the final stages of the project.




A team met with community members to hold a workshop detailing their goals. From there, the communities created Development Plans suited to their needs. These Development Plans were then brought before the General Assembly, allowing the communities to ensure that they were feasible. Being involved in the General Assembly via the Development Plans created a desire within the communities to become engaged in the Participatory Government. Community members were also trained in agricultural techniques by officials from the overseeing programs. After the workshop, members of the community even requested more training and information, demonstrating that indigenous peoples are eager to take their own place in the agricultural world. Contact with experts in rural development also suited the desires of the communities. Through these interactions, each indigenous community was able to create a more interconnected network. This allowed community members to broaden their knowledge of development and agricultural techniques. The contact between these representatives and the indigenous communities also nurtured an interest among community members in interacting more with the Peruvian government. Through this interest, the communities were able to accomplish more of their goals after the project was completed. Another section of the project aimed to increase the production of traditional foods in the communities. Urbanization had made many of the traditional products unavailable, but through the project, the communities were able to put them back into production. The cultivation of traditional crops and products also benefited the diets of the communities. Pregnant women benefited the most, which will likely allow for better nourishment of children as well. By cultivating traditional crops, the Andean indigenous communities combatted food insecurity, saving money on vegetables through the use of greenhouses. Community members enjoyed recognition as well; for example, "in Angaraes they expressed pride in having won first place at the fairs in which they participated for the quality and variety of their Andean products such as tarwi, wheat, barley, and nashua" (FAO 14). As a result of this recognition, the communities were able to raise money through the sales of these traditional products. The cultivation of traditional crops was typically reserved for self-consumption, which allowed community members to save money to use on other food products. Any excess produce was subsequently sold in a small circuit, generating extra profit for the community. At the end of the project, statistics were generated assessing the impact of the project on poverty in the area. In the case of extreme poverty, 23.3% of control-group families lived above the extreme poverty line, compared to 41.7% of participating families. Breaking down the statistics into the gross economic value of production (GEV), the GEV total for control-group families totalled 628 new soles, while the participating families totalled 898.4 new soles.

Though little change was noted in livestock or sub-livestock production, there was a 44% increase in GEV forestry production and a 101% increase in GEV agricultural production between control-group families and participating families. Family economic assets were monitored as well, showing a 14% change in assets for households with a male head and a 15% change for households with a female head. For per capita family income, only a 0.2% change was observed in households with a female head, while a 64.5% change was recorded in households with a male head. Food insecurity was also alleviated through the implementation of the project. Comparing control-group families and participating families in per capita levels of food consumption, a 12% increase was observed in female-led households and a 35% increase was observed in male-led households. From this project we can conclude that indigenous cultures are willing and ready to restore their agricultural practices. In supporting that readiness, we can remediate some of the poverty placed upon these communities through colonization. Furthermore, the communities will remain more engaged in a political world that actively includes them and their needs. By remaining engaged, they can meet more of their own needs. Should a larger connection between the state and indigenous communities be restored, indigenous communities could have a larger voice within the government to bring awareness to the inequalities they regularly face. Alongside indigenous crops and farming techniques, parts of Peru are also actively reviving their pre-Columbian irrigation systems. As a result of climate change, Peru's already scarce water supply has been depleted. In fact, "a 2019 World Bank report evaluating drought risk in Peru concluded that the capital's current strategies to manage drought - dams, reservoirs, storage under the city - will be inadequate as early as 2030" (Gies BBC). Desperate to save their water supply, the Peruvian government began requiring water utilities to direct a portion of their funds into "Mecanismos de Retribución por Servicios Ecosistémicos" (MRSE). MRSE funds sustainable water storage and supply practices, including old indigenous techniques. North of Lima, in the highlands, members of an agricultural collective, known as "comuneros," practice pre-Columbian irrigation techniques similar to those the Chilca and Chimu used. Water canals called "amunas" are put into use, guiding water from mountain streams into infiltration basins. The water permeates the ground, moving underground until it arrives in streams weeks to months later. The water is then collected and used to water crops during drier seasons. This particular method is of interest to Peru's government, as the routes the amunas take could supply water for Lima as well. By restoring these waterways, water use in Peru could be made sustainable again.
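To put the GEV totals reported above on the same footing as the percentage figures, the gap between the two groups can be expressed as a percent difference, a simple check using only the numbers quoted from the FAO evaluation:

\frac{898.4 - 628}{628} \approx 0.43

that is, participating families reported a total GEV roughly 43% higher than control-group families.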



While the process of restoring the amunas has been slowed by the COVID-19 pandemic, the principle remains the same. Indigenous communities around the world hold untapped knowledge about sustainable agriculture, which could be the key to combating agriculture's contribution to climate change. Most indigenous agricultural techniques use the principles of agroecology to cultivate crops while maintaining minimal interference with the land. For example, many of the key ideas of regenerative agriculture have been practiced by indigenous cultures for centuries. Regenerative agriculture values crop diversification, where intercropping can be applied. Intercropping in the Iroquois cultures of the eastern United States includes the "three sisters": winter squash, corn, and beans. Each of these crops has a growing quality that proves beneficial to another, allowing each to flourish. When intercropping is applied to cover crops in a similar manner, carbon emissions could be reduced, further mitigating climate change. Relating back to Peru's pre-Columbian irrigation systems, regenerative agriculture also values good water management practices. Rather than large reservoirs, which could prove too invasive to be sustainable, waterways like the Intervalley Canal could be restored. For dry areas, relying on water from other places adds transportation issues that make irrigation less sustainable. The amunas are great alternatives, allowing water from wetter seasons and areas to slowly drip down until it reaches the drier areas when it is needed most. As for agroforestry, another principle of regenerative agriculture, controlled forest fires have started to enter mainstream use, but they have been practiced by indigenous cultures for generations. Tribes in the Midwestern and Southwestern United States used swidden agriculture, selectively burning forests to help them regenerate. Tribes in the Midwestern and Southeastern United States also planted legumes, which are nitrogen-fixing crops. This allows for less use of fertilizers in the soil and accompanies the regenerative agriculture principle of permaculture. Through the practice of permaculture, agricultural patterns mimic nature. For example, the use of embanked fields by the Chilca recreated a natural flood-control system. The same practice is currently in use in some areas of Navajo Country, as stated previously. Many of these regenerative principles are applied because of cultural practices important to a tribe. For example, among the Khasis of Meghalaya, when each child is born, their umbilical cord is attached to a tree. This symbolizes each person's connection to nature. Because of this connection, the tribe tries not to use fertilizers or chemicals on any piece of land, including fields. Crop rotation techniques are deployed, as the tribe believes that the land should be worked minimally; instead, good management techniques are used. To recreate these indigenous practices, governments should maintain little involvement in choosing how indigenous cultures farm.

For example, in Northeastern India, tribes farm their land as a community with little involvement from the government. This minimal involvement also allows native practices to thrive without pressures from Westernization and urbanization. Furthermore, by keeping these pressures out of native communities, those communities will also be able to return to farming indigenous crops. By farming crops native to the area, water use, as well as the impact invasive species have on the soil, will decrease. By allowing indigenous cultures to have some control over their own agriculture, their economies will benefit. As project FORSANDINO demonstrated, when communities are given the tools to jump-start their own agriculture, they are able not only to make a profit, but also to combat food insecurity. In relation to Peru's irrigation systems, a restoration of these canals could lead to more agro-tourism, especially when linked to Peru's rich history and diverse culinary practices. This principle has been demonstrated by Italy, which uses its culinary repertoire to increase tourism to its fields and vineyards. Peru could restore the indigenous irrigation systems and market them for their anthropological, environmental, and culinary value, increasing tourism in the area. Should the invention of these canals be attributed fully to the native cultures of the area, as it should be, those communities would be able to use the agro-tourism profits to further develop themselves. Lastly, project FORSANDINO illustrated that when governments are invested in developing and protecting the sanctity of an indigenous community's food sovereignty by listening to that community, they will also be able to develop a stronger engagement with the community. "Food sovereignty is an affirmation of who we are as indigenous peoples and a way, one of the most sure-footed ways, to restore our relationship with the world around us." - Winona LaDuke








Understanding the Future of Virtual Worlds in the Environment of MMORPGs
Julien Cox

T

stake. A once-thriving city is now reduced to a barren wasteland. This description may match many images of the worst human conditions: war, famine, natural disaster. Here, however, it depicts only the after-effects of a plague. A group of hunters outside of the city had set out with their dogs in search of an animal that would feed them and their families and offer opportunities for selling the remaining parts. After a long search, the hunters found the animal and killed it. They dressed the animal, covering themselves in the remains. Washing themselves off, the hunters returned home clean and refreshed, but their animal companions had unfortunately ingested some of the remains, unbeknownst to the hunters. The hunters returned to their town, greeted by their families and friends, who congratulated them on a successful hunt. The pets returned to mingle amongst the people of the town as well. A delivery man petted one of the hunters' dogs and got into his car to travel on to the next town. The next morning, people got out of bed feeling queasy and coughing. That coughing quickly turned into blood and vomit, until eventually everyone in the town was incapacitated with a horrible sickness. The townspeople had been unaware of the quickly mutating disease in the corpse of the animal the hunters had slain. Spread by the pets, the disease quickly moved from person to person until all were infected. The young and elderly developed symptoms and passed quickly, while it took only a little longer for adults to fall to the disease. A town away, the delivery man was quickly spreading the disease as well, and it took several more towns before news about the disease spread and residents quarantined. While this may seem like a fairly common story about the progression of new animal-borne diseases, there is one small catch: this is not a real story, or at least not a story that is commonly thought of as real. This was the Blood Plague, a virtual outbreak that occurred in a Massively Multiplayer Online Role-Playing Game (MMORPG). In 2005, the MMORPG World of Warcraft (WoW) had just received an update to its gameplay, otherwise known as a "patch." This patch created a new boss fight, an encounter with a more formidable and complicated enemy. This particular boss featured a new effect that would drain the health of a player, spread to other players through close contact, and kill them if not correctly removed.

Normally, this effect would not have been able to move outside of that specific fight, but due to a bug in which pets were able to contract and carry the effect, players were rapidly contracting this deadly debuff outside of the boss fight. Lower-level players were especially affected by this debuff-turned-disease, as they did not have the health or necessary means to survive it. In response, large in-game hubs, traditionally filled with players, were abandoned (Lofgren and Fefferman). Such a dramatic response from players drove researchers to learn how this particular phenomenon unfolded. Their research focused on the process through which real-life decisions could be visualized or confirmed through the mechanics of a virtual world. While computers and the models they can produce are powerful, the advantage of an MMORPG is that the mechanics and technology of the game can be manipulated to help create the scenario or concept being researched. For example, there have been mathematical models of air-borne diseases or diseases that spread by close contact, but there have not been controlled, ethical environments where researchers can specifically look at the choices human beings will make while the disease spreads (Lofgren and Fefferman). From this, we now know that an MMORPG can have components inside of itself that create visualizations. But to take this idea a step further, in what ways can the virtual be imagined through the virtual? While MMORPGs can be used to understand the real world, the virtual world is becoming more prevalent. Virtual societies have begun to emerge with the creation of virtual communities. Virtual realities could create people who have never experienced anything other than the virtual world. We need to know how those virtual worlds might be imagined and what will cause them to evolve. Just as they can be used to understand the real world, so MMORPGs can become the first step to understanding how the mechanics of a virtual world will be able to form social, cultural, and lingual constructions in the wake of a completely virtual environment.
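The "mathematical models" mentioned above are typically compartmental models. As a rough, hypothetical illustration of that style of modeling, written in Python, not the actual model used by Lofgren and Fefferman, and with purely made-up parameter values, a minimal SIR simulation might look like this:

# A minimal, hypothetical sketch of an SIR-style compartmental model, the
# general kind of epidemic model referred to above. Parameter values are
# illustrative assumptions, not figures from Lofgren and Fefferman's study.
def simulate_sir(population=10_000, initial_infected=10,
                 beta=0.30, gamma=0.10, days=120):
    """Step a simple SIR model forward one day at a time."""
    s = population - initial_infected   # susceptible players
    i = initial_infected                # currently infected players
    r = 0.0                             # recovered (or removed) players
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population   # close-contact spread
        new_recoveries = gamma * i                   # removal from the infected pool
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    peak = max(i for _, i, _ in simulate_sir())
    print(f"Peak simultaneous infections: {peak:.0f}")

A fixed model like this bakes human behavior into its parameters; the contrast drawn above is that an outbreak inside an MMORPG lets researchers watch players actually change their behavior, fleeing hubs or quarantining, rather than assuming it in advance.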

MMORPGs and video games alike are often criticized for being fantasies. To some, the limitations of a screen cannot overcome whatever holds a video game back from being recognized as a construction of a world.




In actuality, the physical restraints of MMORPGs can be loosened by simply understanding how the mechanics implemented inside of the video game can create the same social constructions we experience every day. One of the first things that draws MMORPGs into the space of reality is the communities that exist inside of every game. At smaller levels, MMORPGs are similar to internet chat groups, or simply private messages between two people. This one-on-one communication is an important feature of MMORPGs, allowing for independent and private relationships between different players. On a larger level, though, an MMORPG, and more specifically the individual servers players inhabit, are little compressed nation-states. After Benedict Anderson, I know as a player that MMORPGs create real intimacy among their users, bringing them into a tight-knit "imagined community." Although MMORPGs are not strictly tied to print capitalism and the emergence of a common language, the virtual media of an MMORPG does create a common ground for an emergent kind of pride similar to nationalism (Anderson, 39, 41). United under the common standard of a server, many of which are named, people under the guise of avatars can visualize a connection between their avatar and the avatars of every other person in the world. While many MMORPG servers can hold a maximum of 5000 players at a time, some large-scale servers for games like EVE Online can hold hundreds of thousands of players. Both of these ranges make it very hard to intimately know every player in a given server state or, in EVE Online's case, an entire server galaxy. What separates this from a very large group chat, of course, is the embodiment of character. The mechanic inside of these games that allows you to move a body within virtual space is important to constructing the idea of Imagined Communities. Without it, no imagination is needed: the avatar would simply exist in a real community where everyone is in proximity to each other, or the embodiment of the player would exist in the same way we do when we text a friend or family member. This space shows how the mechanics in a game can construct something that would not ordinarily be associated with the virtual. Imagined Communities have always been something directly tied to the idea of space in the real world, connecting people under a united community when they really have no other connection. Instead, virtual Imagined Communities are something created through the virtual media of an MMORPG. Their presence is a clear sign that something other than polygons rendered on a screen can be created inside the virtual. Another mechanic of MMORPGs that crafts a social construction is simply the character creation screen. How players choose to represent themselves inside of a world creates unique ways to understand different components of themselves, such as race and gender. Race, in particular, is constructed uniquely because of the inherent way race is formed through the idea of whiteness and the "other."

Some games choose to have color sliders, allowing each player to pick any color on the visible spectrum for their skin, lips, eyes, and so on, while other games assign a predesigned color palette for these features. While this does not inherently release any kind of predetermined agency assigned to colors from the real world, it assigns a choice to something none of us have any choice in. This may not be unique to those who exist on the internet, where people can lie and catfish about their gender, race, or age; however, the uniqueness comes with how different notions of race can be constructed inside of these games. Inside an MMORPG, racial constructions are not necessarily rooted in the histories of colonialism, slavery, or other kinds of social injustice that traditionally give birth to the different notions of race. Instead, MMORPGs find different ways. Often the lore, histories, or even the cost of playing a certain type of character redefines some of the predetermined notions different people will have for different types of races and colors. Consider, for example, the "High Elf" race inside of the MMORPG Lord of the Rings Online. As a player, I have witnessed High Elves be characterized as higher-than-thou and aloof. Such preconceived notions inherently start to craft the beginnings of racial ideas into the workings of race choices at the very beginning of the game. Concerning gender, MMORPGs in general have done very little to move away from the binary idea of gender, but there is hope for non-binary and genderfluid options (Petrovich). Most character customization screens have the option for either male or female character designs, with body types already predesigned. No one has predetermined notions about a character based upon their player's in-real-life appearance. Players can simply reimagine their characters through whatever gender identity they want and not have to worry about the assumptions that come with their flesh in real life. Pronouns, body type, and every hindrance to a gender identity that is not your own can simply be done away with in an MMORPG setting if the player wishes. The power of a character creation screen is simply endless when it comes to reimagining the self. While this kind of self-creation and choice is available in the real world, the accessibility and ease of use of this kind of virtual reimagination are unique to the worlds of MMORPGs. Through the exploration of the various components of an MMORPG, we can start to see how the mechanics of server-based interaction and character creation can create different social constructions. Community, race, gender: all of these things have the potential to be drastically changed by a virtual environment. An important aspect of this re-creation, though, is understanding how real-world notions of community, race, and gender also influence these constructions, and being aware and thoughtful of that influence.



As with imagined communities, race, and gender, the environment of a virtual world starts to create its own culture as well. All of the previous components had to do with the interaction of one player with a specific interface, i.e., the character creation screen. This was the beginning of understanding how the mechanics of an MMORPG will create an environment in which new things will be born. In this section, I will start to ask how the technical constructions inside of an MMORPG give rise to a culture of remembrance and a competitive culture between groups of people. The first construction that we should take notice of is housing. Many, if not all, MMORPGs have some kind of personal housing system. This system gives space to every avatar to be used and crafted into whatever they wish it to be. More importantly, though, this system creates a way for every person to be embodied in the files of the game itself. Friends of a dead player can visit the housing of their character(s) and remember the times they had with them. With this, a history and a culture of remembrance start to emerge in a world that has no physical presence but exists only on the screen. A better way of understanding this may be the practice of in-game monuments for player loss. There are many accounts of developers choosing to implement a small statue, plaque, or another type of visitable location to honor the memory of a deceased player (Felix). These memorials are ways for players to honor and grieve their fallen compatriots, and they often require only the cooperation of a developer to implement a monument into the virtual space (Felix). In their creation, these memorials and other acts of remembrance provide a conduit for MMORPG players to digest their grief and form one of the most basic kinds of culture: honoring the dead. This kind of cultural creation is infinitely more accessible simply because of the ease of creation inside virtual worlds. There is no need for a building permit to clear land and erect a statue, or to pay for the building materials. The memorial simply comes into being with a few lines of code and the proper models incorporated into the game's files. Expanding on this idea, it is very easy to see how different cultures of interaction have evolved in MMORPGs. Some histories show these developments, sometimes in dramatic ways. This is perhaps best exemplified by the interactions in the massive sandbox MMORPG EVE Online. EVE Online is notorious for its steep learning curve and its brutal policy on player-versus-player interactions, which allows conflict between players at all times, even at the beginning of the game (Drain). This often deters new players, but the payoff is the complicated interactions among EVE Online's user base (Drain). Combined with the massive size of EVE Online's server, its development company, Crowd Control Productions (CCP Games), was able to create a massive world where every player can interact with every other. The culmination of these components created a birthing ground for a culture of conflict and grabs for power and digital land.

One of the best examples of this kind of culture is the Bloodbath of B-R5RB (Battle of B-R5RB). Due to a single missed payment to retain control of an outpost, rival coalitions of players were able to build up an army and rush to quickly overcome the outpost's defenses and push into the surrounding area. The opposing coalition was met with heavy resistance, and the size of this battle was incomparable to any before it, with 7,548 participants overall and thousands of ships actively participating (Battle of B-R5RB). Simply put, this battle was born through a collection of in-game social and economic activities between various military powers vying for a new chunk of land. Such components of the battle start to reveal how EVE has crafted a culture of battle and scheming among its users. The origin of this culture, of course, is directly tied to the technical construction of the game. As mentioned above, EVE Online was specifically developed with free-for-all player-versus-player gameplay in mind. While this has been controversial, with some players wanting to opt for a more peaceful style of play, test servers have shown that without this free-for-all conflict, the dynamic play style of EVE Online vanishes instantly. The technology that exists inside the virtual environment of EVE Online heavily affects the culture and actions of people inside of the game. Without the massive single server, there would not be nearly as much collaboration and interaction among groups of different players, merely because of the difference in size. With the assistance of technological constructions inside of MMORPGs, we can start to see how these different games can create culture and society inside of themselves. Instead of drawing from the natural environment, people inside of MMORPGs have to rely on the environment presented to them through the game, an environment created by the game developers.

MMORPGs have a long history of players using lingual constructions of their own in coordination with their native tongues. Of course, these lingual constructions did not emerge from the void, but rather through the mixture of the physical and social environments that players exist in (Fill and Mühlhäusler 13). The gameplay of any given MMORPG creates the physical environment, while the culture of interactions between player groups creates the social environment. Inside an MMORPG, there are many different kinds of gameplay, each with different features. While doing content that is mainly meant for the solo player, the need for communication is low, with the player rarely having to interact with anyone but themself. This changes drastically during group content, or content where you need the assistance of another player. Given how much time some MMORPG players put into these games, many players want to waste as little time as possible, and so searching for other players to group up with is often kept brief. In the MMORPG Lord of the Rings Online, searching for a group of players is usually expressed through a simple message containing a large amount of information.



For example, the message "Remmo t2 6/12 need guard DPS PST" contains a series of contractions, acronyms, and group information which, to anyone who has not played an MMORPG before, appears as gibberish. But for those who understand it, it reads something along the lines of "Looking for a Guardian and more DPS classes for the Remmorchant Instance at Tier 2. We currently have 6 out of 12 players needed for completion. Please send a tell (instant message) to me if you want to join." Such a small message contains information about the name of the group content, the difficulty the content is being played at, the classes the group is looking for, the size of the group so far, and how to contact the group leader. Through the needs of the gaming environment, the player base of Lord of the Rings Online, and of many MMORPGs like it, has created this language for finding content groups. Without it, the capacity for finding people for groups would be greatly diminished. Inside of group content, the language of players also changes to match the environment of what happens around them in a more literal way. For example, across all levels of content, there are virtual entities known to players as "MOBs," which stands for Mobile Object Blocks (Kaelin). MOBs are the enemies you encounter in MMORPGs, and they can perform various actions against your character, dealing damage and various effects (Kaelin). Encountering these enemies has given birth to one of the most common terms in the lexicon of an MMORPG player: aggro. While being a shortened version of "aggravation," aggro has become its own word in the language of MMORPG players. To have "aggro" means the MOB or MOBs in question have their attention directed towards you (Kaelin). Simply put, it is the concept of using the mechanics of a MOB's targeting system and directing that targeting system onto a player or group of players. Through this, the mechanics of the game have acted as the physical environment and created an entirely new word with its own meaning. From the concept of "aggro," players have also come up with the idea of a "pull." Using the same mechanics as aggro, a pull is when a player, usually one with more health or avoidance skills, will draw aggro from a group of enemies, causing them to attack or follow the designated player (Kaelin). Often this is a form of strategy during group content, and it has become its own term in direct response to the physical environment created through the game mechanics. In turn, the language created through the physical environment has also further developed social language. Drawing on the concept of a "pull," such in-game tactics have also created opportunities for interesting interactions between players. A character who has pulled a large number of MOBs can run around with these MOBs following behind to prevent taking damage. This process is usually known as "kiting" and is another example of language that has come from the physical environment of an MMORPG (Kaelin). The trail of MOBs produced by such a technique allows players to maneuver through other players, usually causing them to be overwhelmed and killed.

This process in repetition is considered by most as "griefing" (Kaelin; Kash). In general, griefing is the name given to acts that cause other players to lose enjoyment in a game or, in general, cause them grief. This is a great example of how the physical landscape in turn affects the social environment inside of an MMORPG. While "aggro," "pulling," and "kiting" are all words formed through the physical environment of the game, "griefing" is a purely social word that is assisted by the physical landscape. Another example of a word like these is "ganking," which is when a player is overwhelmed by a group in player-versus-player combat (Kaelin). Generally, ganking is not looked upon kindly, especially if it is repetitive, and it could be considered a type of griefing as well (Kash). While ganking can be assisted by the mechanics of the game, the essence of the word is emphasized by player-to-player interaction. This means it is more associated with the social environment of an MMORPG and is merely assisted by the physical environment. Through the combination of the physical and social environment, MMORPGs start to form the language that every gamer knows how to speak once they have invested enough time into it. This shows how easily the virtual can start to create things on its own. Without the physical environment of the game's mechanics and the social environment of player-to-player interaction, the words discussed above would simply not exist. Virtual worlds have more than the capability to create their own language, and this will be an important part of understanding how virtual environments can change the way people interact inside of them.
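To make concrete how much structured information a shorthand message like "Remmo t2 6/12 need guard DPS PST" packs into a few characters, here is a small, hypothetical Python sketch of a parser. The field layout it assumes (instance, tier, group size, roles, "PST") is only one pattern among many; real player messages vary far more freely.

import re

# Hypothetical sketch of parsing a looking-for-group message of the form
# quoted earlier. The field layout is an assumption for illustration.
def parse_lfg(message):
    """Extract instance, tier, group size, and wanted roles from shorthand."""
    pattern = re.compile(
        r"(?P<instance>\w+)\s+"            # short name of the group content
        r"t(?P<tier>\d+)\s+"               # difficulty tier
        r"(?P<have>\d+)/(?P<need>\d+)\s+"  # current / required players
        r"need\s+(?P<roles>.+?)\s+PST",    # roles wanted, then "please send tell"
        re.IGNORECASE,
    )
    match = pattern.match(message)
    if not match:
        return None
    return {
        "instance": match["instance"],
        "tier": int(match["tier"]),
        "players": (int(match["have"]), int(match["need"])),
        "roles_wanted": match["roles"].split(),
    }

print(parse_lfg("Remmo t2 6/12 need guard DPS PST"))
# {'instance': 'Remmo', 'tier': 2, 'players': (6, 12), 'roles_wanted': ['guard', 'DPS']}

The point of the sketch is simply that the shorthand is regular enough to be machine-readable: the "language" players evolved is compact, but it carries well-defined fields.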

In conclusion, MMORPGs are an excellent way of understanding how virtual worlds may be expressed through the mechanics of the technology they are defined by. With the rise of increasingly realistic virtual reality from Oculus, Valve, and other companies, there is an increasingly likely chance that people will start to form communities and environments that are completely virtual. Virtual treadmills will lead to the creation of movement inside of virtual worlds while staying in one place inside of your house (Robertson). Avatar creation will become more and more realistic until you start to look down at yourself and wonder which one is more the body you believe you belong in: the one you were born in or the one that you have created inside of this virtual world. The questions of race and gender that we ask ourselves now may become completely different when we start to inhabit these virtual worlds. Similarly, once we can create and craft with the wave of a hand and just a few lines of code, how will the creation of culture change? Already in MMORPGs people have created memorials and ceremonies to honor their dead, but often those are restricted from their fullest creative capacity by developers. Once these players have full control over the world that they exist in, what will be created? The same kind of thought process can also be applied to the culture and social contracts that emerge from these virtual communities.



With the threat of mortality gone, what kinds of practices, behaviors, and cultural events will become acceptable in virtual worlds? As of right now, we only have the histories and events from MMORPGs to give us any kind of guidance when it comes to theorizing about those kinds of activities. The counter-argument to this idea is, of course, that these games are not real and are merely being heavily influenced by the culture of the real world. Why should anything in the real world be based upon the imaginary happenings inside a world made of code? There are certainly concerns that the individuals playing these games will not care about the decisions they make at all, and will do whatever they think suits their current desires. The problem with this line of thinking is that it does not take into account the social contract that exists even inside of a video game. MMORPGs in particular are filled with interactions between people where there are consequences if the proper procedure is not followed. So for those who think that MMORPGs and the decisions that take place in them are imaginary, the answer is, of course, that these games are not imaginary. The events that take place in an MMORPG are as real to the players inside of the game as a talk with a friend in a cafe would be. With the rise of voice chatting applications, this has become even more visible due to the presence of speech that provides a human-to-human connection while playing. The avatars inside of the games are not merely lines of code, but embodiments of the people who play them, since they actively reflect the choices made by the player. Since all MMORPGs function as a sort of intersection between the real world, the imagined community, and the virtual universe, there is a unique possibility for pushing the boundaries of the virtual and the "real." What I am proposing is the beginning of a long-needed intersectionality between the understanding of the virtual and the real. While the actions of people in a virtual setting may not have direct impacts on the reality outside of the screen, they are still making decisions that will affect something one way or another, which means there will be a consequence or an effect. Does this not mean that the actions inside of virtual worlds are in a way "real" as well (or as real as any decision that can be made)? When do the decisions made in the virtual world start to matter as much as the ones in the real world? The line is further blurred when the rise of virtual reality is taken into consideration, and how our physical bodies can influence the weight of our lives. In summary, there is a very real possibility that the virtual may become more and more what we consider "real." With the proper understanding of MMORPGs and both cultural and social constructions, we may be able to envision what the future of social interaction will be like.

Anderson, Benedict. “The Origins of National Consciousness.” Imagined Communities, Verso Books, Brooklyn, NY, 2016, pp. 39–25.
“Battle of B-R5RB.” Wikipedia, Wikimedia Foundation, 1 Dec. 2021.
“CCP Games.” Wikipedia, Wikimedia Foundation, 13 Sept. 2021.
Drain, Brendan. “Eve Evolved: Eve Online’s Server Model.” Engadget, 10 Feb. 2020.
Felix, Howard. “‘Sad When a Game Outlives a Player’: Memorials and Monuments in MMO Gaming.” Playthepast.org, 10 Apr. 2017.
Fill, Alwin, and Peter Mühlhäusler. “Language and the Environment.” The Ecolinguistics Reader, Continuum, London, 2006, pp. 13–17.
Kaelin, Mark. “Playing an MMORPG Is Not All Fun and Games, You Better Have the Right Vocabulary.” TechRepublic, 3 May 2006.
Kash, Sam. “Sam Kash.” 22 Jan. 2020.
Lofgren, Eric T., and Nina H. Fefferman. “The Untapped Potential of Virtual Game Worlds to Shed Light on Real World Epidemics.” Elsevier, 20 Aug. 2007.
Nölle, Jonas, et al. “Language as Shaped by the Environment: Linguistic Construal in a Collaborative Spatial Task.” Nature News, Nature Publishing Group, 25 Feb. 2020.
Petrovich, Erik. “Palia MMO Has Unique Approach to Character Creation and Gender.” Game Rant, 8 June 2021.
Robertson, Adi. “This Extremely Slippery VR Treadmill Could Be Your Next Home Gym.” The Verge, 7 Oct. 2020.


The Effects of the Incarceration Cycle on Socially Excluded Groups Ekwueme Eleogu

Despite being known as the “leader of the free world,” the United States has the largest incarcerated population in the world (Initiative). Incarceration has changed its face in the last two decades, straying farther from the ideology of rehabilitation; incarceration is now seen as a market whose targets reap none of the rewards. The market is targeted toward socially excluded groups (minorities and members of the lower class), and its effects have been proven not only to set back these targeted groups individually but also to wreak havoc upon their communities. Historically, the United States is no stranger to the mass incarceration of socially excluded groups. In the 1920s, a large number of African Americans migrated North and out West (the Great Migration) to avoid the unfair treatment they faced within the South. Over 6 million African Americans moved North in search of new freedoms and, overall, a better life; however, this movement to urban areas encountered an unforeseen obstacle. In “Moving North and into Jail?,” Katherine Eriksson writes, “I find that black men who migrate to the North are 2 percentage points more likely to be incarcerated in 1940, roughly double the probability than if they stayed in the South” (Eriksson). This higher incarceration rate of black men in the North following the migration took away the freedoms that they came seeking, warning the African American community to stay in the South and live in fear or move to the North and be jailed. With more and more African American men going to prison over the decades, it was only a matter of time before corporate America and the United States government began to exploit this situation. The 1980s brought about a duo that would pillage these lower-class communities: privatized incarceration and the war on drugs. Ronald Reagan’s presidency brought about the resurgence of the war on drugs, pushing harsher legislation to put an end to the drug epidemic. However, the majority of this legislation negatively affected lower-income communities due to racial bias. Infamous mandatory minimums took away the power of judges to lessen the extent of punishments, mandating that courts give out certain sentences regardless of unique circumstances and causing harsher sentencing for less heinous drug offenses (Recovery). Mandatory minimums were the cause of overcrowding in the prison system, forcing the government to find a solution.


Privatized incarceration was just the answer they sought. Privatized incarceration is the incarceration of inmates by a third-party company contracted by the government. The new market of private prisons was centered around the need for greater prison capacity; the more people held within these prisons, the larger the contracts and budgets of private prisons became. The mandatory minimums established in the Anti-Drug Abuse Act of 1986 caused the incarcerated population and incarceration rates to spike. In data collected by The Sentencing Project, it is reported that in 1986 the U.S. prison population was 522,084 people and only a decade later the population was 1,137,722 people, over a 100 percent increase. The main culprit for this increase was the “tough on crime” approach newly seen in criminal sentencing and law enforcement (John). To be “tough on crime” was to act as harshly toward criminals as the law would allow. For example, judges gave full-penalty sentences and law enforcement sought more drug-related arrests; this consequently caused more people to be brought into prison for longer periods of time. Lower-class communities were impacted the most by this dual assault on “crime.” Harsher legislation caused more civilians to be picked off the streets for storage in private prisons. The mandatory minimums drew a controversial distinction between crack cocaine and powder cocaine. Crack cocaine and cocaine have similar chemical compositions except that crack is sold in a solid form and cocaine is sold as a powder; the minimum sentence for the possession of five grams of crack was five years with no parole. To warrant the same sentence for possession of powder cocaine, one had to be caught with 500 grams. This ratio, 1:100 grams of crack to cocaine, was seen to be racially biased because crack is commonly used by poor Americans, particularly African Americans, while cocaine is more expensive and used by wealthier Americans (Recovery). This caused poorer Americans to be more susceptible to longer and more frequent sentences, flooding prison systems with lower-class Americans, producing racial disparities in the prison system, and ripping impoverished families apart. The mass removal of citizens from their communities plays a large role in the success or failure of the community as a whole, especially in terms of re-entry into the community.



A 2011 RAND Corporation study by Davis et al. states, “Our analyses also showed that African American and Latino parolees, in particular, tend to return to disadvantaged neighborhoods and communities, defined by high poverty rates, high unemployment rates, and low educational attainment. This suggests that reentry will be especially challenging for these groups” (Davis et al.). The scarcity of jobs within impoverished areas affects not only the people already residing in the area, but also the people who will be returning to the community from incarceration. The attainment of a job after institutionalization is difficult enough due to the requirement of stating on job applications whether you have been convicted of a felony. Attaining a job in an area without many job opportunities makes this near impossible. This is what prompts the reincarceration of individuals: many felons return to their previous crimes in order to make ends meet. This cycle of reincarceration takes a toll on the youth of their communities as well. “Children of incarcerated parents are more likely to exhibit low self-esteem, depression, emotional withdrawal from friends and family, and inappropriate and disruptive behavior at home and in school, and they are at increased risk of future delinquency and/or criminal behavior” (Ibid). Children with incarcerated parents are already put at a disadvantage; since the private prison market is centered around keeping people incarcerated, this statistic ensures the market will be around for years to come. This is the incarceration cycle: lower-class communities are targeted by harsher sentencing and legislation, creating a market for private prisons and causing individuals to be permanently marked as felons, thus reducing their ability to secure employment after reentry. This not only leads to reincarceration but also increases the likelihood of their children’s incarceration.

Incarceration has proven to be disastrous to these socially excluded groups. In this paper, I will be doing an in-depth analysis of the effects of the incarceration cycle on these socially excluded groups. The incarceration cycle has proven not only the failure of the prison system in rehabilitating felons but also the negative impact it has upon the communities who are subject to this cycle. Rather than just imprisoning the individual, this cycle prompts the imprisonment of communities, causing them to remain constrained by poverty and crime. Incarceration was designed to act not only as a deterrent from crime but also as a rehabilitative institution to aid in decreasing further crime from previously incarcerated individuals. Known as rehabilitative penology, this ideology boomed within prison systems in the 20th century. It was designed to “help occupants of penal institutions adjust to society through educational and vocational training, good behavior incentives, and other prison-based programming. The central aim was to reduce recidivism by facilitating personal improvement” (Grasso). This ideology seems useful in returning convicts to society.

Convicts who fail certain rehabilitative measures, however, face harsher punishment. As Anthony Grasso observes, “this rehabilitative model incorporated elements of harsh justice from its inception, as progressive era penological scholars and practitioners imbued rehabilitation with a dual purpose. Offenders were given rehabilitative opportunities, but those who failed to reform were deemed incorrigible and punished harshly.” Deeming convicts incorrigible was the prison system’s systemic justification for harsher sentencing, keeping inmates in the system for longer. Select convicts were seen as “worthy,” and thus had access to better resources during their time of incarceration, which would help them in reentry; incorrigible felons, however, were denied the tools necessary for successful re-integration, thus returning them to the system which profited by their incarceration. Not only has the prison system failed the people it claims to heal, but it has ruined their chance of securing economic freedom and social advancement by stripping some of the chance to receive the resources necessary for successful reentry. The two extremes of rehabilitative penology leave large room for interpretation and manipulation of the initial structure of the ideology. Politicians in the 20th century noticed this and used it to advance their own political agendas. The rehabilitative penology ideal led to the development of indeterminate sentencing, a tool that allowed inmates to be released closer to their minimum sentence if the parole board believed they had been rehabilitated. Although this type of sentencing has its faults, it does allow inmates the chance of leaving prison earlier. When indeterminate sentencing is applied in the context of the private prison market, however, it is seen as a disadvantage; rather than inmates being sentenced to ten hard years, they are given a range between five and ten, which can potentially take away from the revenue generated by these private prisons. When the prison population began to surge, politicians began to discredit the idea of indeterminate sentencing. They did so by blaming criminality on personal failures, which allowed the public to distance themselves from the problems plaguing disadvantaged communities and support conservative claims that crime was a personal choice (Grasso). This discrediting paved the way for the Anti-Drug Abuse Act, which allowed states to construct systems that would reduce the number of early release opportunities and abolish parole. This served further to discredit the idea of rehabilitation as a whole within the prison system. Another injurious development was the three-strikes law, which warranted significantly harsher punishments for offenders who had committed three offenses. Oftentimes when offenders hit their third strike they were sentenced to life. The United States Sentencing Commission justified the law in this way: “To protect the public… the likelihood of recidivism and future criminal behavior must be considered. Repeated criminal behavior is an indicator of a limited likelihood of successful rehabilitation” (Grasso).



According to this logic, some felons were unable to be rehabilitated. More exactly, they could not exhibit or perform their capacity for rehabilitation to a system that did not in any case offer the opportunity for proper rehabilitation. Rehabilitative penology provided the prison system with ideas and proper measures for rehabilitation, but punished individuals for failing certain measures of rehabilitation. It is also not surprising that individuals who experienced the fullest extent of rehabilitation still recidivated. This is because social and economic factors are not taken into account when dealing with rehabilitation. Previously incarcerated inmates return to communities with limited economic opportunities, which are not taken into account within rehabilitative efforts, thus leading to failure. To achieve economic freedom one first must find an income. This task is one of the most difficult aspects of life post-incarceration. The label of felon already makes joining the workforce difficult as is, but adding race into the equation makes it near impossible. Furthermore, criminal stigma and network support play a major role in re-entry. The stigma deters employers from offering positions to felons; furthermore, it appears to be greater for Black job seekers (Western and Sirois). Network support for Black felons is typically worse than that which White felons receive. This is because African Americans are less likely to receive job recommendations from family and friends, for fear that they will be unsuccessful or unreliable (Western and Sirois). This heavy stigmatization and weak network support cultivate “racialized re-entry,” which limits the economic opportunities for minorities. Bruce Western and Catherine Sirois used data from the Boston Re-entry Study (BRS) to test their hypothesis of racialized re-entry. Following men and women released from prisons in Massachusetts for a year, they observed their transition back into the community and the employment that followed. Monthly employment rates were collected for each race within the study (Caucasians, African Americans, and Hispanics). The reported employment rates for African American respondents never consistently exceeded 50% within the year, which implies that the median income for the data set is near zero. Hispanic employment rates were slightly higher than the African American rates; however, they typically hovered near 50%. The employment rates of Caucasian respondents were significantly higher than those of their two counterparts; over 60% were employed for eight months of the year (Western and Sirois). The employment rates reported for the different groups within the study support the hypothesis of racialized re-entry.


Figure 1. The twelve months following release: (a) monthly employment rates by race; (b) mean monthly earnings by race (N = 116) (Western and Sirois).

The first panel of Figure 1 highlights a crucial but depressing factor in the discussion of privatized incarceration: re-incarceration. When studying the graphs, there is a noticeable decrease after the peak at month four within the White respondents’ employment rates. This decline is noticeable within both African American and Hispanic employment rates as well; however, it is not as severe within the Hispanic community. I believe that reincarceration is responsible for the decline within the employment rates. As reincarceration rates increase, employment rates for the respective groups will fall. Another notable point about the graphs is that the decline in the Caucasian employment rate was the most severe, yet this group still holds much higher employment rates than the other two. This confirms once again the presence of racialized re-entry within the study. Other data collected from the study were the monthly earning rates of each group. The lower panel of Figure 1 shows, for each group, one trend including only the employed and one trend including the unemployed. This was done in an effort to eliminate the bias introduced when certain groups have a higher unemployed population, which would reduce the earnings for the group as a whole. When the unemployed were included, the average earnings of African Americans trended near five hundred dollars a month. This is annually equivalent to half the poverty income for an individual. The line that displayed only the average of employed African Americans was around $1,300 a month. This is around half of the median earnings for African Americans in the US labor market as a whole.



The employed Hispanic average was slightly higher, at around $1,500 a month. This is equal to the monthly earnings of about 60% of Hispanics within the US labor market (Western and Sirois). The employed White average was $2,500 a month. Something to take into account when studying the graphs is the trend of the earnings of White respondents during the second half of the year. The average earnings of these respondents increase significantly compared to the other groups. This rise in earnings represents the average earnings of nearly 80% of Whites within the US labor market (Western and Sirois). It can be interpreted that White felons who remain employed are more likely to receive raises and promotions within their jobs, which makes them less likely to return to crime. African American earnings, however, declined within the second half of the year; this decline can be attributed to either reincarceration or dismissal from employment. Dismissal from employment thus makes African Americans more susceptible to returning to crime. This is how the cycle entraps its victims; as victims return to prison, the cycle strengthens its grip upon them. Eventually, this fosters the belief that successful re-entry is unattainable for some felons, forcing them to accept a life of crime and give up on efforts to rehabilitate themselves. The cycle of incarceration entraps not only the individual, but their community as well. When looking at their communities, the effects of this missing social link are evident. Recycling between their communities and back into prison takes a toll on the structure of the community and the family dynamic. These communities lose family members to the jaws of the incarceration cycle. The ones who seem to suffer the most from these family members’ absence are their children. “Formerly incarcerated men and women average low levels of schooling and work experience. Deficits of human capital are indicated by high rates of high-school dropout and backgrounds of instability and poverty in childhood.” This reveals a prominent part of the cycle (Western and Sirois). Incarcerated adults faced years of instability within their own family dynamics due to family issues, drug abuse, and incarceration. Becoming incarcerated later in life causes this same instability within their own homes. This instability causes children either to be forced to find ways to provide for their families (turning to crime) or to believe that crime and substance abuse are an acceptable way of life. This is the sad reality forced upon children of individuals stuck in the cycle, and it makes these children more susceptible to becoming a part of the cycle later in life. In 2000, a study was done on the effects of the incarceration cycle on different aspects of life for California inmates. When studying the effects of incarceration on family life, researchers found a total of 856,000 California children with a parent involved in the criminal justice system, approximately 1 in 9 (Davis et al.).

The incarceration of a parent during a child’s life makes that child vulnerable when they reach young adulthood. Children who have incarcerated parents are more likely to exhibit “low self-esteem, depression, emotional withdrawal from friends and family, and inappropriate or disruptive behavior at home and in school, and they are at increased risk of future delinquency and/or criminal behavior” (Davis et al.). This cycle is like an infection: once one family member suffers from it, the rest of the family is at greater odds of falling into it. The effects of incarceration on children accrue as the children mature; long-term effects of incarceration on children include “questioning of parental authority, negative perceptions of the police and the legal system, impaired ability to cope with future stress or trauma, disruption of development, and intergenerational patterns of criminal behavior” (Davis et al.). The cycle attempts to trap these individuals before they even enter society as adults. Black men are incarcerated more than any other group (Clear). The constant cycling of these men in and out of the community takes a toll on the members of society. The recycling of these men through the community has caused incarceration to become woven into the community structure, as if it were an expectation, if not a rite of passage, for black men to go to jail at least once in their lives (Clear).

The main culprit behind putting black men behind bars is, however, the drug laws that deliberately target them, not the cultural effects of such laws. In Imprisoning Communities: How Mass Incarceration Makes Disadvantaged Neighborhoods Worse, we read: “The main vehicle for the different rate of incarceration for black males is the drug laws. This situation is not because black males are more likely to use drugs. Black high school seniors report using drugs at a rate that is [lower than that of white students], and white students have about three times the number of emergency room visits for drug overdose; this discrepancy in rates has remained steady (or grown) for over a decade (Western 2006, citing Johnston et al. 2004). Blacks, however, are much more likely than whites to be arrested for drug crimes” (Clear). This statistic shows the deliberate targeting of black men by the criminal justice system. This targeting works to keep jails full by enslaving communities through unjust laws and sentencing. Being a black male in a society where privatized incarceration is a daunting reality, I have experienced firsthand the effects of this incarceration cycle. Being told to “never wear a hoodie in public,” “never hang out in stores for too long,” and “always be as still as possible when interacting with police” are all instructions that have been given to me to avoid being thrown into this cycle that plagues my community. My mother lives in fear when I go out with my friends because she knows that I could get picked up and thrown into this cycle every time I leave the house.


The pain and the stigmas created by this cycle have ruined individuals, families, and communities and have caused major distrust throughout the nation. The incarceration cycle has failed in the rehabilitation of incarcerated individuals, causing the likelihood of reincarceration to increase to historic heights. Not only has this failure of the prison systems increased the likelihood of incarceration, but concentrated incarceration plays one of the biggest roles in fueling the incarceration cycle. The claims presented in this paper highlight the effects of the incarceration cycle on these socially excluded communities. A massive reconstruction of both the prison system and the legislation working against these socially excluded groups is needed to end racial disparity in prison systems and racial inequality in the labor market after reentry. This carceral marketing of communities that have been systematically denied access to proper education and are subject to poor home lives should be eliminated.

The claims presented in my paper have proved not only the failure of rehabilitation but the enormous profits made by private corporations from this failure, a failure inseparable from social and economic injury and psychological pain. These corporations do not care about the increased prison population because it continues to put money in their pockets. The financial incentives to house criminals reward these big corporations for failing at their job. Not only are they being paid to overcrowd these prison systems, but they are not using their profits to better these facilities, causing health complications. Not being able to go out into the job field ready and healthy takes a toll on felons’ goals post-reentry (Western and Sirois). This lack of readiness stacks another obstacle in the way of proper rehabilitation and steering away from previous criminal activities. However, this cycle and its effects are not going unnoticed. In 2010, President Obama signed the Fair Sentencing Act, which eliminated the five-year minimum for crack cocaine possession and changed the 100:1 cocaine-to-crack ratio to 18:1. This act aided in slowing down incarceration rates and paving the way for the removal or revision of drug policies. These steps are crucial in the reform of the incarceration system, but many more efforts need to be made to end the incarceration cycle and eliminate the targeting of these socially excluded people to fill these private prisons. This trap is still a large, looming threat to socially excluded groups; over the next decades, a full reform of the prison systems in the United States is necessary to properly rehabilitate felons and begin healing in the targeted communities.


“Criminal Justice Facts.” The Sentencing Project.
Clear, Todd R. Imprisoning Communities: How Mass Incarceration Makes Disadvantaged Neighborhoods Worse. Oxford University Press.
Davis, Lois M., et al. RAND Corporation, 2011.
Eisen, Lauren-Brooke. Columbia University Press, 2018.
Eriksson, Katherine. “Moving North and into Jail? The Great Migration and Black Incarceration.” Vol. 159, Mar. 2019, pp. 526–38.
Grasso, Anthony. “Broken Beyond Repair: Rehabilitative Penology and American Political Development.” Political Research Quarterly, vol. 70, no. 2, University of Utah / Sage Publications, Inc., 2017, pp. 394–407.
Initiative, Prison Policy.
John, Arit. “A Timeline of the Rise and Fall of ‘Tough on Crime’ Drug Sentencing.” The Atlantic, 22 Apr. 2014.
Moore, Lisa D., and Amy Elkavich. “Who’s Using and Who’s Doing Time: Incarceration, the War on Drugs, and Public Health.” Vol. 98, American Public Health Association, Sept. 2008, pp. S176–80.
Recovery, Landmark. “The History of The War on Drugs: Reagan Era and Beyond.” Landmark Recovery, 13 Feb. 2019.
Travis, Jeremy, et al. National Academies Press, 2014.
Western, Bruce, and Catherine Sirois. “Racialized Re-Entry: Labor Market Inequality After Incarceration.” Vol. 97, no. 4, Oxford University Press, 2019, pp. 1517–42.



Class Narratives in Media Maddie Beard

My memories of childhood are fickle, constantly changing as I make discoveries about myself. This is not a unique experience; I imagine many people have debated the events of their youth with parents, guardians, siblings, or childhood friends. However, one thing that remains clear in my ever-changing childhood narrative is the Friday night ritual my mother, sister, and I had. We got into the habit of having a “kid’s night” after moving to rural North Carolina, where idle children find no entertainment in town on Friday night, or any night, or any day, for that matter. After playing card games every evening post-homework, and with the promise of board games, novels, and a couple of chores to fill our weekend, my siblings and I would be quite rambunctious and almost impossible to please on Fridays. My father worked late, leaving my mother to deal with our impish tendencies on her own. So, on Friday nights, we would collaborate to cook a light meal from my mother’s plentiful cookbooks, and after eating, we’d head upstairs to read for a while, then watch a movie. We’d make big bags of popcorn and all cram into my parents’ bed for the event. My brother, being three years older than my sister, Katie, and me, participated in this Friday night ritual for a while, but after some time found himself to be too mature to climb into my parents’ bed and eat popcorn. As he began to spend these nights in his room, Katie and I had more freedom to choose movies that we enjoyed. We watched documentaries, Disney adaptations of fairy tales, and coming-of-age movies, but our favorite genre was always romantic comedies. The crude jokes in these romantic comedies went quite far over our heads, but the idea of some idealized romance where Adam Sandler and Jennifer Aniston (or some similar couple) somehow lived happily ever after, despite being very obviously mismatched in both attractiveness and quality of personality, was appealing to us. It wasn’t until later that I realized the impact these stories had on my perception of the world. First, it is (very mildly, and only comedically) important to recognize that romantic comedies (rom coms) promote an unachievable relationship, which leads anyone who watches them to be immediately disappointed upon interacting with a couple from reality. Much more importantly, rom coms refuse to break from heteronormative culture, which has impacted the perceptions of many LGBT+ people, myself included.

The most popular rom coms in America also portray almost exclusively white couples and characters, including racial or ethnic diversity only among extras, rather than in any roles with significance to the plot. Overall, these exclusions enforce an idealistic perception of heterosexual white couples as the subjects of romance, at the expense of everyone else who watches these comedies. These faults are all easy to spot if you’re willing to look, but romantic comedies have yet another weakness, one that’s widely recognized but never deeply analyzed. One of the common clichéd storylines in Hallmark-type movies involves a character who is high strung and wealthy who, for some odd reason, is forced to move to a small town and is then awed by the close-knit community in that town, as well as some strapping flannel-clad lad or stunning doesn’t-know-she’s-gorgeous gal. Superficially, this storyline seems cheesy but does not seem to play into any stereotypes or tropes. However, it is a perfect example of the tendency of media to appeal to the middle class. The supposedly working-class small-town character often works a blue-collar job, but somehow always makes a middle-class income, and therefore lives a very comfortable life on that income. Thus, the character is perceived as middle class, and, as the more humble and principled character, enforces the class narrative that people from the middle class are grounded, presentable, sophisticated, and pleasant to be around, as natural as the countryside in which they are found. The wealthy character must be humbled by this middle-class character, as they tend to be more uptight and work themselves too hard. This portrays the upper class as hard-working, appealing to the Bootstrap Myth, which suggests that hard work, not prior advantage, results in success, and that all other people who have not achieved economic success are lazy or, like the simple country people by whom they are surrounded, lacking ambition. Additionally, the blunt and uptight nature of the wealthy character paints a specific image of the personalities of economically privileged individuals, continuing to play into the idea that hard workers always become successful, but that such success must be wedded to rural ideals. The aspects of romantic comedies that fail to portray a modern world in a realistic light may seem insignificant, but representation in media is an incredibly important topic. If the majority of media is angled towards a specific demographic, that demographic’s biases will spread both from the population and to the population.


This is especially significant in relation to children, as they will form a worldview partially based on the media that they consume. If children grow up learning only the conventional social identities of romantic comedy, thus limiting the expression of their narratives and unique experiences, the world will become, and perhaps already is, a very bland and oppressive place. Since class narratives are tailored to the middle class and tend to portray the middle and upper classes affirmatively, they also stigmatize the experience of the working and lower classes, which often causes people to ignore or to misunderstand issues that affect them, invalidating their lives and the barriers they have faced: even as the countryside is transformed into the empty space of middle-class romance, so the actual transformation of rural America, and the rural suffering it has created, is erased. This creates a lack of sympathy for, not to mention a lack of larger commitment to, issues that are truly very relevant and significant to working people. Therefore, it is easy to see that bias in media is incredibly important to research, recognize, and rectify, as the enforcement of class narratives in media targeted towards the middle class ends up hurting lower socioeconomic classes. We consume media every day. Television shows, movies, books, newspapers, news broadcasting, podcasts, music, and advertisements surround us constantly. The implications of this existence so consistently peppered with media, with the incessant opinions, explanations, and narratives of others, have not remained unexplored. Bias in news is brought up by children as young as 8 years old and by adults of all ages. Violence in video games has been debated by students and parents countless times. Censorship has been advocated, enforced, or rejected by schools, families, governments, and citizens. Strangely accurate targeted ads have even raised concerns about privacy within the digital world. However, at the end of the day, most people brush off these concerns and settle in at night to watch a TV show or perhaps read a book before going to sleep, seeking escape from such a mediated world through media. Casual media, such as novels, television, movies, and advertisements, are not expected to completely mirror reality. In fact, embellishment is key to producing successful media, as people spend their working and social lives in reality, and often wish to find an escape at some point. However, with embellishment often comes uniformity, not imagination. Archetypes, stereotypes, stock characters, and clichés tie together seemingly unrelated stories: for example, the lover and the hero show up as protagonists in stories from all genres. This type of consistency continues to impose and enforce uniformity, especially in romantic comedy, which is concerned with a union, sexual and social, relentlessly imagined in heteronormative and bourgeois terms. Social stereotypes remain prevalent because they are reproduced in media, especially when embodied by stock characters or archetypes.

This concept of media conformity has been marginally explored concerning race, gender, and sexuality, to the point where discoveries are made more accessible to the public using tools such as the Bechdel, Ko, Russo, DuVernay, Waithe, Villalobos, and Landau tests (Waters). The representation of socioeconomic classes in media, however, continues to be overlooked during these movements, despite ties between and among socioeconomic status, gender, race, and national origin due to systemic racism, xenophobia, and generational poverty. These actual ties, or intersections, of the various aspects of socioeconomic injustice are perpetuated both in reality and, through the use of stereotypes, in media (Feagin; Ethnic and Racial Minorities and Socioeconomic Status).

Socioeconomic class stereotypes have been deeply woven into all forms and genres of media, even news broadcasting, which portray specific people in a certain light because of their social class. Oddly enough, however, characters that fit these stereotypes in a specific way have not been acknowledged widely enough to be recognized as stock characters, archetypes, or tropes. Therefore, one may take it upon themselves to sort these class narratives into a couple of different groups. Some common class-related themes among modern media are the Cinderella rags-to-riches characters; the Bootstrappers who work hard to obtain a middle-class lifestyle, derived from the bootstrap myth that plays deeply into the concept of the American dream; the moral middle-class knights, who do not always have to be knights, but display characteristics associated with chivalric fantasies; the pureblood snobs, exemplified by the rich elite in Harry Potter, who derive their wealth from their prominent family status and separate themselves from others by enforcing superiority narratives; the rich benefactors like James Laurence, the kind grandfather of Theodore Laurence in Little Women; the billionaire playboys who spend their money with abandon and feel no societal responsibility, such as Jay Gatsby in The Great Gatsby; the impoverished yet worthy family who receives sympathy from others based on their socioeconomic suffering; the thugs/gang members who are derived from stereotypes of the inner-city working and lower classes; and the salt-of-the-earth workers, content with their situation in life. Some of these character types sound quite familiar; in fact, by combining two or more, you may be able to create an entire story. When looking closely at popular culture and the media that accompanies it, it becomes clear that these stereotypes appear in almost every form of media. Classic novels, classic movies, modern movies, sitcoms, new television shows, fantasy novels, and fairy tales all contain these characters in their narratives. They are almost inescapable, as inescapable as the life incessantly imagined by these narratives to the exclusion of other lives and other ways of living. For instance, Little Women is a classic novel that follows the lives of four sisters from their childhood years to marriage and beyond (Alcott).



The main characters, the March family, are all part of the middle class. All of the sisters grow up with strong moral compasses. Beth, the second-to-youngest child, most exemplifies this conscience, volunteering to do charity work and happily serving others while staying fairly quiet and demure. Jo, the second-oldest, seems to fall into the Bootstrapper category, as she ends up getting a job rather than marrying rich and becoming a housewife, and takes pride in her independence and hard work. She can travel to pursue her career, and any of her monetary struggles are typical of any middle-class young adult, occurring at the beginning of her employment and not being too severe (Alcott). Amy and Meg, the youngest and oldest sisters respectively, are both honorable and sophisticated. They both get married, but take separate paths, with Amy marrying up through her union with Laurie, the rich neighbor of the March family, and Meg marrying Laurie’s tutor, therefore marrying down a social class (Alcott). This creates an interesting parallel between the two while highlighting the effect of class divergence on their lifestyles. Amy was always a bit vain, attempting to fit in more with the upper class through her proper manners. She spent quite a bit of time with her rich great-aunt and was even invited to go to Europe with her. Here, she shows her highest level of personal growth by choosing love over her desire to marry rich, as she decides to marry Laurie rather than her wealthier suitor (although Laurie is rather rich as well), whom she did not love (Alcott). This progression, in association with a rejection of extreme wealth, continues to enforce the idea that the middle class most embodies virtuous principles, and that this virtue is most effectively portrayed through heterosexual romance. Meg was also incredibly well-mannered, even as a child, but more humbly so, following the example of their mother, Marmee (Margaret March). She marries John Brooke, Laurie’s tutor, despite her great-aunt’s disapproval of Brooke’s lower socioeconomic status. Meg began as a very upstanding young woman and was always fairly intent on marrying for love, so her personal growth occurred in a different way than Amy’s. After marrying John, she struggled a bit with financial constraints, as she was accustomed to a comfortable if modest middle-class lifestyle. However, she and John live fairly well in a small home, simply unable to afford expensive luxuries, such as the lavish dresses she was able to wear as a child. John is educated and has tutored for rich families, so one may argue that their access to literary culture allows the Brookes to remain genteel rather than becoming part of the working or lower class. This continues to enforce the narrative of the middle class as humble, hardworking, and principled, ennobling itself through virtuous cultural performances. The lower class in Little Women fits perfectly into the charity-inducing trope. The March women are all very charitable, even sharing their Christmas meal with people who cannot afford such luxurious food items.

However, this manner of addressing the lower class tends to flatten them, developing a middle-class savior in the novel rather than allowing lower-class characters to shine. While one may argue that this blueprint is less harmful than the stereotype that working- or lower-class families are lazy, it does shift focus onto the good deeds of the middle class, continuing to enforce this class narrative. In Little Women specifically, there is some attempt to address the problems that the lower class faced at this time, especially as a family that the Marches interact with contracts scarlet fever. Nevertheless, some of this work is undone when the beloved Beth contracts scarlet fever from this family as she attempts to help them. When Beth falls ill (and eventually dies), she makes it clear that the family is not to blame, but the event itself is rather harmful in its perpetuation of the notion that lower-class families are dirty or diseased in some way (Alcott). It is also important to note that the working-class characters in this novel are incapable of or unwilling to address their situation through political action, and none are the objects of romantic interest. The upper class in Little Women is mostly portrayed in a positive light, through Laurie and his grandfather, who both reflect the rich benefactor stereotype. James Laurence is incredibly generous, even gifting Beth a grand piano simply because she loves to play. He and Beth have an incredibly special relationship, but he is generous to all of the Marches; he even sent them a Christmas dinner after they shared their Christmas meal with those less fortunate than them. Laurie took a bit longer to grow up, as many privileged children do, and reflected the trope of the wealthy dilettante who only requires the love of a worthy woman to fulfill his social destiny. He persisted in college only at Jo’s request and went through a period of time after her rejection where he simply spent money with abandon and lived a quite rambunctious lifestyle. However, deep down, Laurie is a kindhearted boy. Through the Laurences in Little Women, the upper class is displayed in a three-dimensional manner, with kindness at its core, although Laurie’s vices enforce the class narrative of selfishness among the upper class as well (Alcott). Such selfishness, as I have suggested, must be ameliorated by the firm values of the middle-class woman. The elite upper class also makes an appearance in Little Women. This class is best defined as people who not only have a superfluous amount of money, but who also possess some kind of political power, influence, or fame due to their family or whatever endeavor supplied them with so much money. Examples of this class include Jeff Bezos, Elon Musk, the British royal family, Bruce Wayne in Batman, and Fred Vaughn in Little Women. Fred is not very interesting, being mostly characterized by his romantic pursuit of Amy. One may argue that his only purpose is to provide Amy with an opportunity to choose love over wealth (Alcott). The portrayal of the elite upper class in Little Women therefore also remains mostly insignificant, as a character with no personality cannot enforce a narrative, although one may argue that the absence of personality is a quality in itself.



However, while the narrative around the elite typically displays abuse of power and a sense of overwhelming greed, there is also a stereotype that people with such a significant amount of privilege are boring, as they never truly feel a need to learn how to interact with others in a polite or engaging manner. This is especially true of people who grew up with their excessive wealth, which means that Fred Vaughn does, in some way, enforce the narrative exclusion of an “aristocratic” class from the novel. As a bourgeois form, the novel is not interested in the “extremes” of wealth and poverty: its narratives construct the world as a space for the consolidation of middle-class values, most often through romance. The novel Little Women by Louisa May Alcott is written from a middle-class perspective and enforces particular stereotypes about different socioeconomic classes. The lower class is portrayed as two-dimensional, serving mostly as a tool to prove the benevolence of the Marches. The middle class, both upper and lower, is depicted as hardworking with a high moral standard. The upper class is characterized as kind, although with some vices susceptible to amelioration (with the right woman), and the elite upper class is illustrated as two-dimensional and boring. These class narratives are not restricted to one novel, one genre of novel, or even one form of media, though. Similar narratives, derived from stereotypes, emphasized and forced into uniformity among media, appear in other classic novels, such as The Great Gatsby. The Great Gatsby is a novel about lost love, vice, and hopeless situations, but it is also a novel that centers around class narratives. In The Great Gatsby, the upper class is portrayed in two different ways, and its members are even physically separated into West Egg and East Egg based on these distinctions. Gatsby, the hopelessly romantic and debonair millionaire from humble beginnings, portrays “new money,” obtained through shady and socially rebuked means, while the snobbish and shallow Buchanans represent “old money,” which was also obtained through illegal means, but, as these means are buried in the past, their social and economic status is assured (Fitzgerald). In the novel, Gatsby falls in love with Daisy, who marries a man named Tom Buchanan while Gatsby builds his wealth in order to give her the upper-class lifestyle she desires. Tom is racist, aggressive, and sneaky, while Daisy is unwilling to stand up to him or leave their excessive wealth. Gatsby is charismatic and hopelessly romantic; he is portrayed positively, though he is fixated on his dream of Daisy rather than on Daisy herself, much less on others around him, including Nick Carraway, whom he uses for access to Daisy (Fitzgerald). As Daisy’s cousin living next door to Gatsby, Nick is portrayed much differently than any of the upper-class characters, or, at least, he claims such difference for himself. Although he belongs to an older family (or “clan,” as he calls it), Carraway sees himself as someone relatively simple, hardworking, and rational (Fitzgerald).

He builds upon the “moral middle-class” narrative found in so many other forms of media, observing the extremes of wealth and poverty in the world around him. In a typical manner for novels such as these, the lower-class characters die. Two characters, in particular, stand out: George and Myrtle Wilson. Myrtle has been having an affair with Tom, who feels entitled to her due to his comparatively elevated financial standing. When her husband George finds out that she’s having an affair, he locks her in a room and plans for them to move out west in order to save their marriage. However, Myrtle is killed in a car accident when she runs out to Gatsby’s car (which Daisy is driving), thinking Tom is in it. Gatsby takes the fall for the accident, and Tom tells George that Gatsby was the one having the affair with Myrtle. Therefore, George ends up murdering Gatsby in his home as a form of revenge, then killing himself promptly afterward (Fitzgerald). The deaths of Myrtle and George Wilson are related to the death of Jimmy Gatz: all are excluded finally from the “family,” which is to say, from marriage with a person from the upper class, because they do not have a “family,” which is also to say, a “past.” For Carraway, however, Gatsby differs from Myrtle because his “dream” to found a family is superior to her vulgar, material desires for “class.” For both Gatsby and Myrtle, however, class origins are as inescapable as “race,” as Tom Buchanan defines it in his diatribe against immigration. Gatsby is not a comedy but a tragedy; its “romance” is of a desire that cannot be fulfilled, and therefore is endless. It’s a Wonderful Life treats class a bit differently than these classic novels, but retains some of the stereotypes. The main character, George Bailey, is from a successful, hardworking family; while the family lives humbly, they also have access to some luxuries, such as funds for George to travel to Europe and to go to college (although he ends up giving these funds to his brother, Harry). George is the epitome of the moral middle-class stereotype. As a child, he saves his brother from drowning, sacrificing his hearing in one ear to do so. During his teenage years, he also stops his employer from accidentally poisoning a child due to an accident at the pharmacy he works at, though he is initially beaten for it. He gives up his dreams of being a successful architect to take over the family banking business and refuses a financially favorable offer for the family business in order to preserve the town he lives in. Almost everything he does focuses on the moral outcome of his renunciation rather than its financial outcome, which results in his financial standing dropping and his family business falling into jeopardy (Capra). This movie does a good job of steering away from the death of lower-class characters, although George does almost commit suicide due to his economic distress and social pain. Otherwise, the lower class is portrayed mostly as part of a larger community, the town George lives in. Characters of this town with money to spare end up rallying money to save George’s family business, which had helped them make ends meet during tough times using loans that posed a risk to the bank’s livelihood (Capra).



This narrative is less harmful, as it does address the benefits of collective action and concern, but it does seem a bit odd that all of these characters improved their financial standing so greatly, and, of course, only the intervention of an angel saved George Bailey from jumping off the bridge. The antagonist of It’s a Wonderful Life provides the most detailed portrayal of the upper class in the film. He wishes to turn the town into a tool for his economic gain, rather than letting it remain a place for people to live happily (Capra). His portrayal is an example of the negative narratives of the upper class, centered around greed and the constant desire for economic growth conceived as individual profit. While it is easy to find some pleasure in this narrative due to the actual greed of the upper class, it is important to remain aware of the ramifications of these portrayals upon other classes. This narrative can only imagine the lower class of Bedford Falls as victims of capitalist greed, incapable of acting against its power without the help of the middle-class hero. It is the middle class, however, as both characters and audience, who most benefit from this Christmas story, as they are able to unite in order to “defeat” the upper-class antagonist. If the comedy lies in this unity, then the romance appears as divine intervention in the form of a thoroughly middle-class angel.

The concept of an upper-class antagonist is mirrored in many modern movies, including Knives Out and Crazy Rich Asians. These stories are different, with lower- and middle-class protagonists, respectively. In Knives Out, the protagonist is a caretaker for the patriarch of a very wealthy family (Johnson). In Crazy Rich Asians, the protagonist is a middle-class woman who realizes that her boyfriend comes from an incredibly rich family (Chu). Knives Out must first be praised for its inclusion of a Hispanic protagonist, although her job as a caretaker is a bit of a turn-off when trying to assign praise for diversity. She works for an upper-class family and is framed for the murder of the patriarch of the family. The entire family is, to be frank, pretty horrible. They are entitled, backstabbing, and greedy. This, again, plays into the qualities typically assigned to the upper class. For once, however, the lower-class character is not dehumanized; instead, she fights against the abuse of the wealthy family. The moral (upper) middle-class character is, of course, still present. Benoit Blanc is the detective on the murder case, and he is simply searching for the truth, refusing to give in to the wealthy family’s manipulations. Moreover, the movie ends with its working-class character inheriting the wealth of the family (Johnson). This ending is what viewers cheer for throughout the movie, as money makes everyone a bit more comfortable! However, when viewed from a critical perspective, it leaves something to be desired, as Marta (the protagonist) ends her arc with money, rather than with substantial character growth. She was a moral character all along, so she must have internally been middle or upper class anyway, right?

The question of class struggle, along with its implications of economic restitution or redistribution, is lost in the triumph of virtue over vice. Crazy Rich Asians must also be recognized for its insistence upon the representation of racial diversity in movies. Almost the whole cast is Asian and the movie is directed by an Asian American, one of the first of its kind in American films. The negative side of this is that almost every character is upper middle class or extremely wealthy, which seems to enforce the model minority myth and fails to portray the class struggles many immigrants face. The main character, yet again, is a middle-class woman, a college professor, who is economically stable and simply wants to get engaged to her boyfriend for love. The antagonist is the boyfriend’s mother, who does not believe that the protagonist is good enough to marry into such a wealthy and influential family (Chu). The lower class is hardly portrayed at all, save as servants, but there is some diversity in the personalities of the upper-class characters in the movie: some are more likable and less greedy than others, although this yet again fails to benefit the lower classes. As in Jane Austen’s novels, so here the bourgeois heroine wins the aristocratic husband through the display of her virtues, thus consolidating the upper class while assuring middle-class audiences that their values remain the basis for social life within global capitalism. By now, the repetitive use of socioeconomic stereotypes in almost all of the media consumed by Americans is nearly sickening. For instance, yet another mass-consumed sitcom refuses to acknowledge class struggles, unless doing so is comedically beneficial to the all-white cast (Burrows). Gilmore Girls enforces the Bootstrap Myth and the moral middle-class stereotype, offering an image of generosity amongst the vapidity of the upper class, again with an almost all-white cast (Sherman-Palladino). Another sitcom at least contains mostly black cast members and addresses issues related to class and race, but the narratives remain the same: the lower-class characters find themselves climbing the socioeconomic ladder by the end of the series, the middle-class characters are doing their best to follow their morals, and the upper-class characters are vain, although sometimes generous (Brock). Among all of this media, the most worrisome type that contains stereotypes is fairy tales, as these are often geared towards children. In Sleeping Beauty, Cinderella, and similar Disney films, the characters all find their struggles alleviated by marrying a rich prince that they have only just met (Disney). Among these Disney fairy tales, The Princess and the Frog is definitely the best of the worst. The characters are racially diverse, and the prince is actually in a bit of trouble for his reckless spending. The future princess is a working-class woman who just wishes to honor her father by owning a successful restaurant, which she achieves after finding love and becoming a little bit less pessimistically stubborn, although no less strong-willed.



the rich benefactor, with the princess, Tiana’s best friend, offering Tiana money on a regular basis. Thankfully, Tiana does not accept the money, so the benefactor narrative is less harmful, as it does not dehumanize her (Disney). So, it is an honor to commend this one gem in a mess of mostly racist, classist, sexist, and heteronormative fairy tales. It may now be asked why these class narratives are so dangerous, especially if they’re so common. No one can avoid entertainment abounding with stereotypes like these, so who does it even help if you try? Anyone who has consumed media has developed a biased view of the world, preventing them from easily recognizing its wrongs. In a vicious cycle, the biases of the creators of media influence the consumers of media, who then influence the creation of media since consumer appeal drives profit, and profit, the creation of new media, even if its representations of social life are not new. This cycle allows bias to remain significant, both in fiction and in reality. When bias is created and directed against people of lower socioeconomic status, it is incredibly destructive. This is not to say that popular media is uncritical of the upper classes. As I have shown, narratives about these classes include vice, greed, and hunger for power. Wealthy and powerful characters are sympathetic only insofar as they exhibit middle-class values. Thus, although the upper class does hold a good amount of power and influence in real life, media is one area in which the middle class holds power; media is geared towards the middle class, which it incessantly reproduces as consumers of its products, including its narratives. This is also rational: in reality, one can suppose that the middle class, at least for media producers, is a compromise between higher and lower classes, making it economically advantageous to target media towards this “median” income value, where the population is large and has disposable income for entertainment. If media is designed for the middle class, and therefore centers the “average” citizen as a bourgeois individual replete with all of the middle-class virtues, then it is logical to assume that the lower classes are put at a disadvantage, both in media and in reality. The working and lower classes are typically used as tools to portray a middle-class character in a certain light, allowing them to demonstrate their strong moral fiber. If they are not used for this purpose, they are simply two-dimensional, and their character arcs center solely around their social class, ignoring the rich diversity in experiences among these individuals. In some stories, such as Knives Out, these characters even find their arcs ending simply in economic or status gain, rather than in the cultivation of their freedom to become something other than another middle- or upper-class character by the end–a marriageable character, as romantic comedies construct them, with marriage as the (heteronormative) sign of social acceptance and incorporation. Criticisms of the specific characteristics of the upper and elite upper classes in media may make it seem as if there

is prejudice against these classes, but in fact, most of the stereotypes surrounding these classes are most harmful to the working and lower classes. For example, the stereotype that most upper-class individuals are uptight workaholics enforces the Bootstrap Myth and makes it seem as if anyone could be part of the upper class if they simply worked hard enough. Additionally, the vices of the upper class are portrayed negatively, but, despite being similar to the vices of any other social class, they are treated differently. Among the lower classes, vices, especially those related to substance abuse or addiction, serve as proof that people in these classes are simply not working hard enough, or that they are distracted. In contrast, vices among the upper class are viewed as inevitable: with privilege comes substance abuse and other issues. These issues are viewed with more sympathy when they affect the upper classes, especially the upper elite, and are framed as a negative result of the possession of money and influence. The middle-class narrative is perhaps the most diverse in media, but typically highlights the morality, honesty, dependability, and principled nature of characters. This is harmful to the lower class because it offers no criticism of middle-class privilege; instead, it sustains the belief that such privilege is natural and right even as it excludes other perspectives. Such exclusion further creates the perception that working-class people are socially intelligible only insofar as they exhibit middle-class values, even though they do not enjoy the conditions in which such values are possible. Even when, like Myrtle Wilson, they do aspire to middle-class privilege, they are scorned as coarse, vulgar, crudely materialist–incapable, in any case, of joining “the family.” The privilege media has granted the middle class through its diverse and positive narratives has existed for a long time, and it has yet to be addressed. If the working and lower classes are viewed negatively, they will face more barriers to a higher socioeconomic standing. For example, if someone of lower socioeconomic status is unable to get dress clothes for an interview, they may be deemed unprofessional and lazy simply because of (unprofessional and lazy) assumptions about their social class. Therefore, the stigma against the lower classes needs to be eradicated to allow for any hope that everyone can have a high objective quality of life, regardless of income. It may seem mind-bending that to give a high objective quality of life to the lower class, much less to support their effort to obtain it, one must show sympathy towards this class and recognize that money is not the only determinant of quality of life. However, ignoring the colorful narratives among individuals of lower socioeconomic standing in favor of highly stereotypical class-based cultural narratives reproduces the oppression this group experiences in its everyday life. Marriage with someone of a higher class does not end poverty; exclusion from the possibility of marriage or social union perpetuates injustice. Although it may seem impossible for us as a society



to push towards the just representation of the lower socioeconomic classes, analyzing and critiquing the media we consume as well as understanding the implications of bias and stereotypes in that media is a great place to start. The internalization of social and economic injustice as cultural bias reinforces and extends societal bias, so even small changes in one’s personal life can make an impact on a larger scale. Portraying diverse narratives across all socioeconomic backgrounds is truly vital to this process; class-centered and class-ignorant narratives increase discrimination through prejudices against the lower and working classes, making middle-class audiences hostile to struggles for economic and social justice. It is important to portray class struggles without subscribing to stereotypical socioeconomic narratives or cliches that end up disproportionately and negatively affecting the lower classes.

Alcott, Louisa May. Little Women. Roberts Brothers, 1868.
Brock, Mara. Girlfriends. CBS; Paramount Network Television, September 2000 to February 2008.
Burrows, James. Friends. NBC; Warner Bros. Television Distribution, September 1994 to May 2004.
Capra, Frank. It’s a Wonderful Life. RKO Radio Pictures, December 1946.
Chu, Jon M. Crazy Rich Asians. Warner Bros. Pictures, August 2018.
Clements, Ron, and John Musker. The Princess and the Frog. Walt Disney Studios Motion Pictures, November 2009.
Disney, Walt. Cinderella. Walt Disney Productions, February 1950.
Disney, Walt. Sleeping Beauty. Walt Disney Productions, January 1959.
Disney, Walt. Snow White and the Seven Dwarfs. Walt Disney Productions, February 1938.
“Ethnic and Racial Minorities and Socioeconomic Status.”
Feagin, Joe. Systemic Racism: A Theory of Oppression. Routledge, 2006. Google Books.
Fitzgerald, F. Scott. The Great Gatsby. Charles Scribner’s Sons, April 1925.
Johnson, Rian. Knives Out. Lionsgate; MRC, September 2019.
“Little Women.” Wikipedia, May 2021.
Maas, Sarah J. Throne of Glass. Bloomsbury Publishing, August 2013.
O’Malley, Daniel. The Rook. Hachette Book Group, January 2012.
Rowling, J.K. Harry Potter and the Sorcerer’s Stone, Harry Potter. Bloomsbury Publishing; Scholastic Corporation, June 1997 to July 2007.
Scott, Frank and Allan Scott. The Queen’s Gambit. Flitcraft Ltd, Wonderful Films, and Netflix, October 2020.
Sherman-Palladino, Amy. Gilmore Girls. Warner Bros. Television Studios, October 2000 to May 2007.
Waters, Terri. “7 Tests That Measure Movies For Gender Equality And Representation.” The Unedit, August 2020.




Cartography and Culture: Romance and Rationalism in Early Modern Mapmaking Frank Ladd

M

aps are a site of culture. They are an art and a science, an expression of self and of a world so wide that it can only be experienced as a whole in depictions, on paper. They are an expression of personal and cultural ideals. As the introduction of Donald Wigal’s Historic Maritime Maps explains, “besides its utilitarian function, every single map symbolizes the period of time in which it was created” (Wigal 1). Maps as cultural artifacts are harder to read than literature; they are acts of imagining the real and as such the ideas which they express are almost never direct. They require a reading that is difficult and investigative, but the knowledge they can hold of cultural contexts, ideas, and worldviews is deep. In this essay, I will be exploring the role of cartography as a lens into the England of the late 17th and early 18th centuries, in the midst of the enlightenment. These maps show the unusual mix of romanticism and rational scientific inquiry which characterized the English Enlightenment from a new angle, and argue for a conception of these two ideas as fundamentally intertwined in the culture of the era. In order to show this, though, it is first necessary to examine the ways in which cultural influences make themselves felt in mapmaking. Central to this work is the late 17th- early 18th century cartographer Herman Moll, whose extraordinary technical skill as an engraver and geographical imagination made his influence felt in London intellectual circles to a degree unique for his occupation. Moll was not originally English–Peter Barber places his birth in Holland, while Dennis Reinhartz places it in Bremen–but he would become throughout his life a Londoner in all of the ways relevant to his work (Reinhartz 1, Barber 190). His cartography is essential to this essay because in it can be found illustrated the attitudes and ideas of an England at a crossroads: a London recently rebuilt from an inflamed wreck into a commercial center, an English overseas presence moving out from the shadow of the Spanish (the original empire over which the sun never set), and an intellectual society thriving under the auspices of the Royal Society of London and of the Restoration London coffeehouses in which the Enlightenment found its home. What is most clearly seen in the contemporary works of cartographers such as Moll is Empire: the lifeblood of cartography was in the explorers and traders who traveled the world. The development of England’s ability to project


her power across the world directly affected the ability of her navigators to gather the knowledge Moll needed. Outside of the New World, England’s traders found plentiful markets in lands which would become the subjects of more of Moll’s maps. The East Indies, for example, appear in Moll’s maps from a very European perspective, reflecting the growing scope of that worldview: the Indies are abstract yet concrete places that can be marked down in detail. Perhaps more interestingly, though, Moll’s maps and others like his inscribe European interests and goals as well as perspectives upon the worlds they represent.

Figure 1. The New World of 1715.

In what is possibly Moll’s most famous piece, “A New and Exact Map of the Dominions of the KING of GREAT BRITAIN on ye continent of NORTH AMERICA,” or, as it is often called as a result of its engraving, the Beaver Map, North America is depicted not just as the English saw it but as they wanted it to be seen. The blankness of the interior,



so often in earlier maps a result of incomplete knowledge, was by Moll’s time in the 18th century a deliberate choice: the subjects of the map are as much human as they are geographical. Natural features, like forests and mountains, are painted in relatively broad strokes, rarely even getting named, while human constructions like borders and towns, still crucially excluding Native Americans except in relatively broad terms, merit detailed depiction and names, rivaled only by the detail in the waterways on which the lifeblood of empire flows. The emptiness of the interior is not a forbidding one; rather, it is an invitation, offering to its readers the picture of a land ripe for European “improvement,” to echo its title, the “Map of the Improved Part of Carolina” written in the bottom corner (Moll). One interesting way in which Moll represents the New World is in his portrayal of New Albion, Sir Francis Drake’s attempt at colonizing California. Moll cites Drake’s 1577 circumnavigation of the world, undertaken by accident, in which his crewmate Francis Pretty documents an interaction–the accuracy of which is questionable at best–that ends in the natives granting him the kingship of the region. He defers this title to his employer and liege lord Elizabeth I, leaving behind, supposedly, “a monument of our being there, as also of her majesty’s right and title to the same” (Pretty 161). Moll transcribes this claim more or less faithfully to Drake’s account, despite the lack of interest from the English Crown or the facts that Drake had left no colonists to actually secure the claim and that the claim conflicted directly with that of the Spanish, who exerted a much greater influence in the region. In Drake’s account, the New Albion claim is essentially a side trip; Drake is much more concerned with the extensive pillaging that led him to California in the first place. The prestige of such a wide-ranging claim, right in the teeth of the Spanish with whom the English were rivals, led Moll to depict it as a geopolitical fact regardless of its largely fictive circumstances. Moll’s connection to Drake is interesting in itself, as Drake represents one of the first attempts of the English people to turn beyond Europe in explorations sponsored by Queen Elizabeth I. Drake is a consummate explorer, as well as a privateer, and Pretty’s account records in detail the geographical features they encounter, from the island which “is fair and large, and, as it seemeth, rich and fruitful,” on which they first encounter the Portuguese, to the detailed descriptions of South America, especially in its harbors, waterways, and ecology. In all instances, he eyes the land he is exploring from the perspective of a colonizer. The colonial desire implicit in Drake’s explorations and depredations of the New World is shared by Moll, and his role as one of the earliest of England’s explorers ties that exploration directly to the conflict with Spain. Drake’s adventure–and it is an adventure in every sense of the word–is pivotal in creating the worldview which Moll inscribes in his maps over a century and several successful colonies later.

Moll is heavily tied to explorers beyond just Drake, though. His The World Described opens with a detailed world map that emphatically depicts the route of Captain William Dampier’s circumnavigation of the world. Alongside the trade winds upon which explorers like Drake and Dampier relied, it is one of the few of his characteristic notes on this most important of maps that go beyond simple topography. Dampier’s explorations were roughly contemporary with Moll’s cartographic work. Dampier’s records of his travels are incredibly detailed, and they mirror the interest in the foreign which Moll’s diverse side notes reveal, taking note of the same sort of anthropological and natural histories which Moll records. One can see in Dampier’s remarks on “manatees,” “maho trees,” and “manchaneels” the same impulse that leads Moll to emphasize that “in ye desarts [of Sumatra] they have elephants, tygers, rhinoceroses, boars, porcupines, serpents & monkies” in his map of the East Indies (Dampier, Moll 6). The East Indies are a crucial site for England’s overseas sights; thus, Dampier spends much of his circumnavigation in the Indies, and the unparalleled riches which the Europeans of the time saw in them were essential to their expansion into and exploitation of the broader world–the world depicted in Moll’s maps.

Figure 2. The European view of this region can be summarized by the name “Spice Islands.”

One of the more interesting maps in Moll’s crowning collection is the “Map of the East Indies,” which he dedicates to the East India Company and which was, like much of the collection, published separately on its own. This map reflects many ideas about the East Indies that stem not just from explorers like Dampier, whose interest in the East Indies is primarily one of curiosity, but from traders as well, like the East India Company to whom he dedicates the map. Moll puts exquisite detail into his elaborate descriptions of which of the Spice Islands are known for what exports. These islands were at this time the site of contention among colonial powers, including “England, Spain, France, Holland, Denmark, and Portugal” (Moll 6). His mercantile focus mirrors that of the England for whom he is mapping these islands; thus, it offers a visual analogy to the diary of Samuel Pepys, which is one of the more detailed insights




into general life in Restoration London. Pepys is very rarely concerned with the East Indies, but the luxury goods and the trade they bring are of note to him. His descriptions of “seeing the East India ships at dock” are uninterested in Southeast Asia as a region to be studied; rather, it is a place from which ships sail with goods for England (Pepys). In fact, many of Moll’s maps emphasize details relevant to commercial interests. Although probably included to make his set of maps more useful to merchants and commercial enterprises, who would have been among his primary customers, notes such as “excellent wine and turpentine” in Liguria or “Here ye best white marbel(sic) is dug and send(sic) all over Europe” in Modena reveal a focus on the many and various goods that made such regions relevant to European interests, thus visible to Europe. He is directly involved with such commercial ventures, depicting a map of “ye Limits of the South Sea Company,” a venture by the British Crown to gain an advantage in the South American slave trade. The company was most famous for the South Sea bubble scandal a few years after Moll’s mapping, which Britannica describes as “the speculation mania that ruined many British investors in 1720.” He is noticeably silent about the attempt at British control over Tangier, involving and recorded by the aforementioned Samuel Pepys: an early bid for an English presence in Africa, granted by the Portuguese, which ended quietly with the Moroccan annexation of the city. This history does not appear on his maps.

T

his commercial focus makes itself clear in Moll’s depictions of Africa in general, too, in which he marks out the “Grain,” “Ivory,” “Gold,” and “Slave” Coasts, each of which is defined by its exports to Europe (Moll 8). In the very center of the continent, bounded on the west by “Congo,” named of course after a body of water, and on the east by “Zanguebar,” an entire region of the Swahili coast which he names after the one trading city with which the Europeans were familiar, is the country of “Ethiopia: this country is wholly unknown to the Europeans” (Moll 8). At least in this case, he is willing to admit what he doesn’t know, rather than simply placing what rumor has described in “Negroland,” into which region he puts less detail than he does the Indian Ocean trade winds. His sources, after all, were primarily sailors, as were many of his customers: he was familiar with their needs, as when he places a beautiful engraving of the harbor at the Cape of Good Hope in the bottom right corner. Moll’s work is often defined as much by what he doesn’t know as what he does, especially in inland regions outside of Europe: here, too, his reliance on navigators and on explorers is crucial to an understanding of his work. In lands not directly accessible by sea, and even in some that are, he depicts primarily a view defined by his own externality: the assorted Coasts in West Africa, or his description of Suez as the place “by which ye Turk makes himself master of all the ports of the Red Sea &c,” are defined, much like the regions

in which he notes the primary exports, by their meaning to a European audience which encounters them only indirectly through the effects they have on the regions which matter to Europe. In these cases, it is Moll’s perspective as an English cartographer, or at least a cartographer working in England, that defines the relative accuracy of his claims, by defining the sources he has to work with. His English perspective also makes itself known in more overt ways, as when Moll begins depicting French or Spanish territories. Rivalries with these two Continental and colonial powers determined England’s perceptions of the foreign; for instance, Moll’s map of the East Indies lists France and Spain after England, despite the fact that Holland and Portugal were both much more dominant in the region at the time, and both Drake’s and Dampier’s voyages were initiated primarily with the goal of foiling the Spanish. On the high seas, and in the world outside of their island, one of the primary goals of the English Crown was to compete with the continental monarchies. Moll doesn’t engage much in these rivalries in his depictions of Europe, for the geography of these regions was not much in dispute, although his depictions of France and Spain lack much of their usual flavor, with very few side notes. Italy is depicted in mercantile terms, while Germany includes an outline of the workings of the Holy Roman Empire, with these regions getting detailed engravings of Mount Vesuvius and the Imperial Diet respectively, but France and Spain get none save the elaborate and varied coats of arms which accompany all of his maps. Here his main depiction of bias appears, or doesn’t, in his omissions. Moll’s engagement in these rivalries is much more direct in the New World, where the boundaries of the global empires that were dominant on the continent were vaguely defined at best. In a time of burgeoning nationalism, these borders were subject to consistent alterations in their depictions, according to the interests of whoever was mapping at the time. One of the most egregious examples of this is in the Carte de la Louisiane of Guillaume Delisle, a French cartographer who stretched the borders of French Louisiana to the Rio Grande and encompassed most of the Carolinas, which he claimed to be named “en l’honneur de Charles par les François,” or in honor of Charles of France (Delisle). Geographically, his map is excellent, with incredible detail about the coastlines, rivers, and mountains; and the detail with which he depicts the general areas of the Native American tribes in the region reflects the much greater French focus on coexistence with the Natives than that of the English for whom Moll drew–or, if not coexistence, then at least an exploitative model much more accepting of their presence and importance. It is the political details that Delisle is more willing to misrepresent in the name of France, which Moll decries in an atlas of his own. In this atlas, Moll devotes a print to “ye North parts of America claimed by France,” which copies the borders set down by Delisle almost exactly, and notably differs from the



map of North America which Moll claims to be “according to ye newest and most exact observations” in the borders between England’s Thirteen Colonies and Louisiana, although it leaves the exaggeration which takes land from Spain intact in his preferred map. In “Ye North parts of America claimed by France,” he describes how “the yellow colour is what they allow the English,” while claiming that, because of the wording of Charles II’s grant of Carolina, “any body may see, how much they would Incroach[sic] &c.” He bases his refutation on evidence, crucially, and turns the power of reason towards his contestation of France. The titular Beavers of the Beaver Map are a point of interest which, along with the impressive accuracy of the coastline and the geographical features and the depictions of Newfoundland fishing, make it, in the words of Dennis Reinhartz, possibly Moll’s only biographer, “Moll’s most famous map” (Reinhartz 37). The New World was the site or symbol of most of the cultural trends relevant to this work, in fact, and it is incredibly important that Moll found significance in his depiction of the region. The New World was the frontier of the European world of the 18th century, a playground for expeditions of empire and, inseparably, for intellectual discovery essential to the Enlightenment ideals of rational improvement of the world. Moll’s cartography is a cartography of expanding frontiers, of scientific and political boundaries expanded (sometimes literally, in the case of his rivalry with Delisle), and of the application of the sciences to the improvement of artistic imaginations of the world. It is a cartography of the Enlightenment, and his focus on the New World is essential to that characterization. Moll’s empiricism, like many aspects of his work, stems from the necessity of exploration, made available to him through travelers’ accounts: he has no easy access to the objective truth he is seeking; he has only the ability to strive towards it. He is, in fact, highly defensive of this search for empirical truth, claiming, in one of his prodigious notes, in this case on a map of South America in The World Described, that “the world is in nothing more scandalously imposed upon, than by maps put out by ignorant pretenders” (Moll 12). His approach to cartography relies on a pursuit of accuracy, and one can directly see the regions to which he has more data by the accuracy of his depictions. The southwestern coast of Africa, for instance, is distorted on his map up until the Cape of Good Hope, the precise location which he acknowledges to be hotly contested and on which he writes a lengthy note. This distortion occurs because there is no major European trade in the region, and routes going past Africa lead from West Africa straight to the Cape, cutting across the open sea as depicted in his note of “a good course of sailing from Great Britain to the East Indies in the Spring and Fall” (Moll 7). He is in the intermediate stage of an empirical process, as he is caught up within so many incomplete processes of partial discoveries. His geographic uncertainty is readily apparent in his maps, as is his desire for knowledge. The map which Moll claims to be an accurate depiction

of North America is also interesting for its depiction of England’s other colonial rival, the Kingdom of Spain. In this map, Moll depicts Spain’s treasure routes. In a time when these treasure routes, essential to the transport of the silver upon which the Spanish economy had become reliant, were key targets of English and French privateers, he is essentially providing a direct treasure map to those pirates, making it easier for England’s primary rival to be damaged economically and England to be enriched. Moll engages with the French purely in the field of cartography, contesting the accuracy of their claims, but his work engages the Spanish on the high seas, at least indirectly through his support of the buccaneers who preyed upon Spain. According to Reinhartz, “these additions helped to make Moll’s map a ‘buccaneer map’ not only for British privateers like Dampier…but also most importantly for its perusers back home” (Reinhartz). Indeed, it is in Moll’s engagement with the economic and imaginative interests of the “perusers back home” that his maps serve as the greatest cultural lens. He was a great frequenter of the coffeehouses in London, those public spaces where the intellectuals of the day were in the process of creating the Enlightenment. This put him in a unique position to be influenced by and to influence Enlightenment ideas, beyond those which directly concerned the world he was depicting. As a cartographer, he himself was of course affected by the scientific developments of the Enlightenment, taking advantage of new advances in astronomy and navigation, including those of his friend Robert Hooke, to make his maps more accurate. In his work, though, he himself engages in the empirical process through his gathering of information with which to make his maps: his treatment of travelers’ and explorers’ accounts as scientific evidence to be developed into a coherent truth of the way the world looked is scientific at its core. The London of the early 18th century was, however, not just defined by its scientists but also by its literature and the spirit of romantic adventure which the sensational exploits of explorers like Drake had fostered and people like Dampier continued to foster. This romanticism found a home in the works of authors like Defoe and Swift, both of whom were acquainted with Moll; Moll actually engraved a map for the fourth edition of Robinson Crusoe, and this romanticized view of the world was at least as important in Moll’s works as his pursuit of scientific truth. Thus, an examination of the ways in which the Enlightenment found him is essential to understanding Moll’s Enlightenment ideals. Moll was, according to Reinhartz, a regular of the London coffeehouses in which the Enlightenment took place as part of what Reinhartz terms the “Intellectual Revolution,” a growth and dispersion of scientific attitudes and knowledge which defines Moll’s cartography. These coffeehouses, as spaces created by the increased interaction with global trade networks and thus global luxury goods which Moll’s maps depict, were gathering places for the urban intellectuals who drove the



Enlightenment, and at their peak the coffeehouses had become what Lawrence Klein terms “an essential institution of urban life in England, at least for males of the upper and middling classes” (Klein 1). This site of a new public sphere was a realm that Moll entered after it had already developed, and so the ideas that he expresses through his work are informed by it in foundational ways. His constant notes, for instance, ranging across topics from natural history to relevant geopolitics and denunciations of his competitors, represent an individual convinced of the absolute importance, as well as the duty, of intellectual self-expression, a core element of Enlightenment thinking.

O

ne can see more concretely the influence of this emergent public sphere, and the contrasting values which were its foundation, in the individuals who were Moll’s personal friends. People like Robert Hooke, for instance, whose diaries are a crucial element of documenting the coffeehouse phenomenon and whose surveying talent helped lay the foundation for London’s Restoration period, are an essential part of even documenting Moll’s existence beyond his maps and his part in the Enlightenment circles. Hooke was a scientist, a central member of the Royal Society and developer of Hooke’s law of spring dynamics, among many other discoveries, and his rational inquiry was influential on Moll. Hooke was also a prolific writer, and his works outline the process he uses to make his discoveries. This was a process mirrored by Moll, and when Hooke speaks of those “who have confined their imaginations & fancies only within the compass and pale of their own walk and prospect,” one can almost imagine Moll’s notes in his maps. The belief in the enlightening power of a knowledge expanded methodically through the study of as many pieces of information as possible is one which Moll carries with him as he makes his maps (Hooke 2). In a practical sense, the Enlightenment also made its mark on Moll’s work: instruments of navigation essential to the discoveries Moll relied on were becoming more advanced, and the fields of geography and natural science which are the core content of his side notes were experiencing great shifts in the time he made his maps; for instance, Hooke’s Micrographia, in which is outlined the discovery of the cell, was published in 1665, a mere 43 years before The World Described and even closer chronologically to Moll’s earlier work. Coming towards the end of the Enlightenment, Moll’s work represents a remarkable shift from that of earlier cartographers like Coronelli, whose America Settentrionale, published in 1688, is much less accurate in its depiction of the coastline. Moll had better data and better tools with which to use such data, and his work reflects it. He engages in trends which were just emerging during his career, and brings to them his characteristic engraving skill; one of his last maps, dated to 1732, is a road map which uses as reference the first road atlas published, John Ogilby’s Britannia, which was at that point a mere 60 years old. The

future of commercial mapping, which would eventually give rise to the Google Maps with which we are all intimately familiar, was one in which Herman Moll dabbled. Ogilby himself is a figure of the early Enlightenment. Although he worked as a translator, his primary legacy lay in his creation of the aforementioned Britannia, an atlas of England with a set of 100 plates depicting relevant routes between English towns. The work is immensely utilitarian, containing no clever notes, no detailed engravings on the side, only routes. It is the embodiment of rational cartography, and the fact that Moll’s primary work differs so much from Britannia is essential to understanding Moll’s role as a man of culture. Ogilby is committed to his mission of depicting objective truth, and the way this contrasts with the tangents into subjectivity which characterize Moll’s notation draws attention to the attitudes these notations record. Moll even engages in a bit of history in his maps, from his frequent historical notes in his general maps, like his highlighting of the Battle of Lepanto in the Mediterranean, to the work he dedicated to his friend Rev. Dr. William Stukeley, an influential if often mistaken antiquarian. Stukeley himself is an incredibly interesting figure of the English Enlightenment: pivotal to the shaping of archeology as a field, he was also an ordained priest and engaged in his studies of antiquity with the goal of advancing Christianity. In the preface to his lecture on the Philosophy of Earthquakes, he expounds upon his belief “that, from speculation of material causes, we may become adepts in that wisdom which is from above,” and he comes to the conclusion that earthquakes are an instrument of divine wrath (Stukeley). His beliefs veer into the bizarre when he encounters Britain’s pagan past and chooses to ascribe the construction of Stonehenge to what he describes in his Stonehenge: a Temple Restored to the British Druids as “Patriarchal Christianity” (Stukeley). He explains that “the Druids were of Abraham’s religion intirely, at least in the earliest times, and worshipp’d the supreme Being in the same manner as he did” (Stukeley). In these writings, although he works in the direction of rational inquiry, his work is animated, like Moll’s, by more than just pure science. In the case of Stukeley, this is his own religious fervor, but the animating force of Moll’s cartography is another force active in Restoration England: romanticism. The world, to Moll, is more than just a place to be investigated and set down accurately; it is a place to be explored and marveled at, a place that inspires detailed engravings of a beaver dam. His notes, when not dealing with trade goods, expound on an eclectic range of subjects, from insulting his competition to what wonderful animals there are on Borneo. His maps paint a picture of a world not just to be studied but to be found, to be experienced in a way reminiscent of the novels which were just emerging; he was in fact an acquaintance of Daniel Defoe. Moll’s colonialism is an echo of this romanticism as well; his colonial tendencies lie in his



ability to make colonization into a romantic vision, into a scene of adventure and conquest redolent of legend. The Beaver Map works as an argument for colonization because it paints the New World as a romanticized site, a site where the English identity can be developed and where English glory can be advanced. Nowhere is this cultural intersection of romanticism and science more clear than in Moll’s friend Captain William Dampier. Dampier’s career falls between that of Sir Francis Drake, pirate and adventurer whose exploits made him a legend, and James Cook, an explorer hired by the Royal Society of London and whose accomplishments lay in the scientific realm. Dampier did both. His New Voyage Round the World, compiled from notes taken on a privateering expedition turned circumnavigation of the world, includes observations on topics from anthropology to natural science, all recorded while alternately pillaging and running from the Spanish. In his introduction to the New Voyage, Sir Albert Gray remarks that “The stories of the buccaneers are on the verge of romance,” and yet Dampier is engaged with the scientific evolution of the day, credited with the introduction of hundreds of words to the English language. The complexity of Dampier as a figure is emblematic of the complexity of the era into which he sailed.

I

ndeed, Dampier is a romantic figure par excellence. He was a natural pirate, trekking across Panama to steal some boats from the town of Santa Maria at the opening of his first circumnavigation, which he recorded in one of the earliest instances of widely successful travel writing in the English language, his New Voyage Round the World. He spends much of the voyage alternately dodging and pillaging Spaniards in the South Sea, and when he finally gets to the East Indies his records depict a world essentially foreign and exciting, one in which the spirit of adventure can yet find ample space for excitement and freedom. He is a figure of a sea that is to be explored, to be conquered, and of a world that is open to adventure. The bulk of his writing, however, is not in the nature of action or adventure, despite the overflowing surplus of such which he records. The privateering and daring exploits are almost a side note, a background to his real writing, which primarily covers descriptions of the lands he encounters. While these descriptions are romanticized, the bulk of them are mundane; the only thing notable about the “vast cockles” or “shy turtles” of Celebes is that they are in the East Indies, and thus different from what a European might expect, but generally his remarks are primarily descriptive (Dampier). His observations are not just those of a romantic keen on painting a picture of great adventure; they are also those of a naturalist looking to record the knowledge he sees for the benefit of his readers, for “that zeal for the advancement of knowledge” which he describes in the introduction to his account (Dampier). His interests are aligned with those of the Enlightenment scientists. Dampier is a reflection of his era, in the same way Moll

is, although with a far more active role in its movements. In a time in which science was still developing its centuries-and-counting ascendancy over the realms of intellect and, crucially for Dampier, the high seas, he was an early example of the trends that would give rise to the surveyor, the anthropologist, the Captain Cook. Like Moll, he would have benefited from the advances made by people like Robert Hooke in the fields of chronology, geography, and nautical instrumentation. Dampier might in fact have seen more value in Hooke’s claim that, “I have tryed, not without good success of improving Clocks and Watches, and adapting them for various uses…[including] discovery of Longitude, regulating Navigation and Geography, detecting the proprieties and effects of motions for promoting secret and swift conveyance and correspondence, and many other considerable Scrutinies of nature,” as these varied uses are all directly applicable, and indeed applied, to his voyages. His profession is one which is in his lifetime being absorbed by the community of science, but it retains the sense of adventure that made it so appealing to people like Defoe, and which Dampier inherits from the cultural impact of people like Sir Francis Drake. Drake is a pivotal explorer, but in the eyes of the English of the 17th century, he was first and foremost a pirate. His famed circumnavigation of the world, the first successfully undertaken by one captain throughout, is not a mission of exploration or of colonization: he sets out as a pirate. From his initial encounters with the Portuguese, in which he “espied two ships under sail, to the one of which we gave chase, and in the end boarded her with a ship-boat without resistance,” to his chase of the Spanish treasure galleon Cacafuego, which drives him to his so-called New Albion (mentioned in my earlier discussion of Drake as an explorer), he fulfills the role magnificently, accumulating a great deal of silver and, somewhat surprisingly, killing very few people in his privateering effort (Pretty 148). As a privateer, Drake is a larger-than-life figure of adventure and glory, and it is the air of romance he instills in exploration, along with the exoticism of the foreign that he represents, that forms the cultural backdrop to the changes of the Restoration. This romantic view of the world is evident in the works of the novelists who were contemporary with the scientific advancement of the Enlightenment. Herman Moll’s acquaintance Daniel Defoe, for example, is most famous for his adventure novel Robinson Crusoe, a tale, inspired by Dampier’s exploits, of a young man cast into a life at sea which is as terrifying as it is thrilling. Defoe echoes the colonial viewpoints of Moll’s maps, a viewpoint which Dampier shows a surprising lack of, though he still holds the biases of his era. To almost all the English writers of the era, and Defoe, whose protagonist takes part in the slave trade, is certainly no exception, the world at large is a world to be conquered and brought under a European yoke. Moll shares this perspective, as evidenced by his engagement in the colonial viewpoints of his time, but



to him these ideas were not mutually exclusive with the romance or the rationalism of his worldview. These views are in fact inseparable, to Moll and to England at large, from the colonialism that they accompany. In short, Herman Moll is a cartographer of his time. The delighted fascinations he showcases in his notes on an eclectic variety of topics, the loving detail he puts into his geographical renderings, and the obvious, blatant biases he exercises in his mapmaking all serve to make him a lens for the Enlightenment as it was felt by the people experiencing it, as its effects were making themselves known on both perceptions of the world and on the broader world itself. Cartography is an expression of a worldview in the most literal possible sense, and the worldview that Herman Moll expresses is one in which science and legend are equally beloved, in which the pursuit of truth and knowledge is one which can be exercised from the prow of a ship, braving the waves of a new world. It is a world in which science is romantic, because it is being born into a world already viewed through the romantic’s eyes. The admixture of the spirit of adventure with the pursuits of science which Moll embodies along with many of his contemporaries, like Stukeley, to a limited degree Hooke, and paramountly William Dampier, is essential to the optimism and eye towards the future which lay at the core of the Enlightenment. Be it in Herman Moll’s New World, ripe to be civilized and molded into the shape of a better world (with all of the obvious problems in that worldview conveniently absent from his work), or in Hooke’s quest to find some better glass for his lenses so that he might come closer to the stars, the figures of the Enlightenment held a passion for the pursuits of science that was heavily romantic, and this view can be seen clearly in their maps.


Barber, Peter. The Map Book. London, Weidenfeld & Nicolson, 2005.
Britannica, The Editors of Encyclopaedia. “South Sea Bubble.” Encyclopedia Britannica, 5 Nov. 2008.
Conzen, Michael P. Mapping Manifest Destiny: Chicago and the American West. Chicago, Newberry Library, 2007.
Dampier, William. A New Voyage Round the World. Project Gutenberg, 2005.
Defoe, Daniel. Robinson Crusoe. Planet PDF, 2011.
Delisle, Guillaume. Carte de la Louisiane et du Cours du Mississippi: Dressée sur un Grand Nombre de Mémoires Entrautres sur Ceux de Mr. le Maire. Library of Congress.
Hooke, Robert. Lectiones Cutlerianae. Internet Archive, 2013.
Klein, Lawrence E. Coffeehouse Civility, 1660-1714: An Aspect of Post-Courtly Culture in England. JSTOR.
Moll, Herman. The World Described; or, a New and Correct Sett of Maps. Antique Maps Inc.
Ogilby, John. Britannia Depicta, The Visual Telling of Stories.
Pepys, Samuel. The Diary of Samuel Pepys. The Diary of Samuel Pepys, 2003.
Pretty, Francis. “Drake’s Famous Voyage.” Voyages of the Elizabethan Seamen to America: Thirteen Original Narratives from the Collection of Hakluyt, Internet Archive, 2006, pp. 145-176.
Reinhartz, Dennis. The Cartographer and the Literati: Herman Moll and his Intellectual Circle. Lewiston, New York, E. Mellen Press, 1997.
Stukeley, William. The Philosophy of Earthquakes, Natural and Religious. Project Gutenberg.
Stukeley, William. Stonehenge: a Temple Restored to the British Druids. Project Gutenberg, 2020.
Wigal, Donald. Historic Maritime Maps. New York, Parkstone Press International, 2014.



Classicism’s Influence on the Fall of the Empire of Liberty: How American Democracy Can Survive Riley Jo Holland

L

ooking at foundational American documents such as The Declaration of Independence, it may seem that, in arguing “they are endowed by their Creator with certain unalienable Rights,” the Founding Fathers felt a connection between their freedom and the Christian faith; however, the Founding Fathers did not look only toward God’s will in founding what is now the American empire (Jefferson 1). They instead turned their focus to the classical influence of Greece and Rome, an influence in which they had been steeped since early childhood. As Thomas Ricks says in his book First Principles, “These men did not study Locke as much as they did the writings of the ancient world, Greek and Roman philosophy and literature: the Iliad, Plutarch’s Lives; the philosophical explorations of Xenophon, Epicurus, Aristotle; and the political speeches and commentaries of Cato and Cicero” (15). These texts became key documents in the shaping of political leaders as Greek and Roman men became the idols of eighteenth-century politicians. These classical men were educated, creative, rational, and most importantly powerful. The Founding Fathers craved to be just as impactful as ancient heroes, so they followed the examples Greek and Roman leaders set for them as they began the American Experiment with a template made hundreds of years prior to their births. Jefferson even said himself in a letter to Colonel Duane, “It is true that I am tired of practical politics, and happy while reading the history of ancient, than of modern times. I … take refuge in… everlasting infamy” (Sowerby 27). Everlasting infamy was the goal of revolutionaries, and infamy is what they gained. Unfortunately, history has proven time and again that the only way to gain everlasting power and recognition is to obscure the ideals of right and wrong. The Founding Fathers formed a republic based on ancient classical forms of rationalism and imperialism at the cost of morality. The Founding Fathers began their republic with the resurrection of the ideals of Lucius Junius Brutus. Like Brutus, they disassembled a monarchy and reinstated a government for the people. Not all people were represented, of course, just white land-owning males such as themselves. Once they gained a government for themselves, they took on the face of Alexander the Great. The Founding Fathers saw their culture and way of life as superior. There was a general consensus among the American public that all

inferior people must crave culture and were not tied to the land or beliefs of their ancestors. So through force, land was taken and heritage was destroyed. With as much land and influence as they craved, the Founding Fathers were boastful of their accomplishments. With pride they followed in the footsteps of Philip II of Macedon and gave themselves the title of gods. They demanded worship and inspired imagery that continues into today’s history books which shape both media and culture. Consequently, power brings responsibility, and with even greater power, perhaps even a godly power, comes corruption. If political corruption and unwillingness to cooperate with one another continue, then the leaders of America will give themselves over to the spirit of Brutus, where personal gain takes precedence over integrity and loyalty. In Greek and Roman history, this lack of moral character inevitably leads back to singular authoritarian rule. The regression of democracy to absolute rule is a cycle, perpetuated through the desire for power. While looking toward a classical influence might have been an effective way to gain independence, continuing to follow the examples that classical Greece and Rome hold will only produce the same result as before. The influences of classicism planted the roots of demise for America, and without changing the path that America is on, the Empire of Liberty could be destined to become the next great fallen nation.

D

uring the fifteenth and sixteenth centuries, the ideals of the Renaissance began shaping the ideas that would underlie the founding of America. Even though Colonial America was not yet established, the ideas brought forth from the Renaissance created a domino effect on American culture that can be seen in the problems America is currently facing. The Renaissance idea of humanism stressed the morality of the individual, and the Founding Fathers needed this belief to create the basis of a democratic republic. Taking importance away from religion, and giving each individual mind its own reason and rationality, allowed the Enlightenment to take shape, which in turn allowed the biggest experiment in democracy, America, to commence. To fully understand the mindset of the Founding Fathers, one must first examine their foundational education. When observing the teachers of those who founded America, one finds that they




almost all trace their roots to Scotland. This raises the question: what impact did Scottish history and culture have on the shaping of their educations, which in turn shaped America? Today, the Scottish Enlightenment is viewed by many scholars “as the nursery of … social sciences” (Kidd 1). Across the globe, the Scottish Enlightenment nursed revolutions and political leaders in the late eighteenth century. The Scottish Enlightenment arguably provides the intellectual founding of the United States of America. So what prompted this moment of enlightenment for the Scottish people? Most of Scotland during the American colonial period was poor and uneducated. However, the Scottish people shared a common goal: the ability to worship freely without oppression from the Church of England or the British government. For the Scottish radicals to find a healthy mix of apprehension and influence, the only place to look was toward classical Greece and Rome, due to the ancient world having the most readily available literature and texts. The mantra of Scottish Enlightenment thinkers became: “have the courage to use your own reason” (Berry 2). Using one’s own reason then became the basis of political ideology and overall social thought. Some of the most influential Scottish philosophers who rose to prominence during this time period include Francis Hutcheson, Adam Smith, Thomas Reid, and Adam Ferguson. Hutcheson’s “thoughts and theories were always connected to the ancient traditions, especially those of Aristotle and Cicero” (Vandenberg 1). Although self-educated, Smith drew from classical influences to write on how political economies should function and adapt through different ages of society. Reid took classical rhetoric and transformed it to be adapted into a modern setting. Ferguson taught a classical way of applying common sense to daily interactions and decisions. All of these men heavily relied on classical influences to create their philosophies while also expanding the classics to be able to fit and justify a people who wanted to become self-reliant and free. As these ideas began to take hold, Scotland began to have a reputation as a land where “politics… shape the universities” and “politics pervaded life” (Emerson 4). This politically shaped life created a cycle of enlightened thinkers teaching politicians how to influence other world leaders. With politics shaping all stages of education in Scotland, the main political focus became the religion of the Scottish people. Politically, the Enlightenment fought against the Anglican Church. This is particularly specific to the Scottish Enlightenment, as most other Enlightenment thinkers from different regions tended to completely denounce the existence of a God or say that a God has no real impact on what is happening in the world. The Scottish Enlightenment, however, was heavily tied to the Protestant Church. While most Scots during this time were Protestant, their complaints against England were mostly centered around what they perceived to be a law too

focused on tradition. Scottish philosophers such as Thomas Reid “long had maintained that it is natural and right for there to be limits on the power of monarchs” and “that kings must earn and retain the consent of the governed” (Ricks 68). These themes of limited power and checks and balances of a consensual government ring through what would later become the hallmarks of the American republic. With a church controlling both religion and politics in Scotland–a church that had been forced upon them since May 1st, 1707, when Scotland became a part of Great Britain–the Scottish people quickly became unhappy. The beliefs of the Scottish Enlightenment are best simplified by the poem of a Scotsman from the 18th century, which says, “Of pow’r THE PEOPLE are the source, / The fountain-head of human force; / Spurn’d by their Subjects, WHAT ARE KINGS, / But useless, helpless, haughty things?” (Ricks 69). These Scottish men, influencers of the American Founding Fathers, fully embraced the “intellectual skepticism and dynamism of the Scottish Enlightenment,” and this way of looking at government becomes foundational in their teaching (Ricks 18). These enlightened educators from Scotland began turning to the opportunities of the New World to find religious freedom from the Anglican Church as they became more oppressed for their outspoken calls against monarchy, specifically a monarchy which dictates religion. As Scotsmen came to colonial America, their enlightened ideas became popular as the American colonists felt as if the British monarchy did not have their best interest at heart either. With Scottish philosophies came Scottish inspiration to fight against oppression. The main impact the Scottish Enlightenment had on America was its introduction of an education grounded in readings on classical Greek and Roman rule and life. These readings began taking their place in the American education system when such an education was offered “mainly to colonial Americans preparing to be clergymen” (Ricks 20). Clergymen in early Colonial America had the power to shape all of the education in the colonies as they could then tell their congregations to read specific texts or view the world in specific ways. However, the clergy’s influence began to wane as Colonial America sought to move away from religious ideas in leading politics. The Greek and Roman histories began to reclaim their importance when eventually “it was used to train members of the elite, especially in law and oratory” (Ricks 20). These elite members had the means to make real political change. Some of these elite members included the Founding Fathers Thomas Jefferson, John Adams, George Washington, and James Madison. Jefferson’s knowledge of classical Greek culture cannot be doubted. In his library catalog, there are a known one hundred thirty books specifically written on ancient history. This is still a small portion of all the books on Greece and Rome that he studied in his lifetime. For a child of the wealthy South, these readings began at an early age. At



the age of nine, Jefferson began learning Latin, Greek, and French. After losing his father, classical ideals of morality became appealing and offered Jefferson a world in which he could become a great intellectual power. These classical influences learned early in life “most notably in … emphasis on testing ideas against observation through one’s own senses” (Ricks 78) would mold Jefferson into the leader he became; a leader in which all political ideologies were based on a strong sense of rationalism. Having leadership skills, Jefferson then learned even more from Scottish philosophers to a point in which it can be said that “The main author of the Declaration of Independence relied not on Locke but on Hutcheson” (Tanaka 27). The 18th century philosopher Fransis Hutcheson’s moral philosophies taught six senses in which Jefferson pulled from to create his works: beauty, grandeur, harmony, novelty, order and design. Jefferson followed in Hutcheson’s footsteps by believing “human nature contained all it needed to make moral decisions, along with inclinations to be moral” (Vandenberg 1). His belief in the common man became a structural tenant in forming a new republic; a republic where all people would be able to do the right thing in selecting leaders and making policy. However, this belief deeply relies on a common belief of morality or humanism. Jefferson did not see different versions of morality that were contrary from his own. To create Jefferson’s perfect democracy, all citizens must be able to make the best choice for the common good; however, as America developed and supported the ideas of the individual, it becomes clear that the moral compasses of the people vary greatly. Even during his own time period, Jefferson was an elitist who did not value the rights of lower members of society. Jefferson relied on his higher education to push his own moral values into the government he was helping to create. While Jefferson’s education may have given him the leadership skills needed to become a political powerhouse, it completely contradicted and stifled true democracy and choice. While Jefferson’s education had a strong Greek influence, John Adams’s education was bound deeper in Roman history and teachings. Adams learned to read at an early age, then went to school to learn Latin. Despite his quick ability to learn, he despised most of his educators which put a stop to his love of learning until he studied under Joseph Mayhew. While studying under Mayhew, he had the chance to first read Conyers Middleton’s . Adams described reading of the by saying, “I was destined to a Course of Life in which these Sciences have been of little Use, and the Classicks would have been of great importance” (Adams 262). Because Adams related to Cicero’s classical, liberal arts view, Cicero became Adams’s inspiration as a politician, lawyer, philosopher, scholar, and skeptic of higher education. Like Cicero, Adams became a revolutionary that was increasingly egotistic, thinking about himself first in almost all situations. Adams said that his “reputation ought to be the

perpetual subject of my thoughts, and aim of my behaviour” (Ricks 72). He wanted to grasp the power of liberty for himself while becoming the American hero he aspired to be. To gain this power, he used his classical education with “books about government, politics, and law” to pave “the road to reputation, honor, and power” (Ricks 73). Adams’s Roman influence changed the way he saw himself as well as how he led America as a powerful revolutionary. Unlike Jefferson and Adams, George Washington did not have an impeccable childhood education. He struggled to understand classical influences as he “spoke only English, and was not widely read even in that language” (Ricks 31). Some even began to think that Washington was illiterate. Washington saw his own education as flawed and was intimidated by the vast knowledge of his social peers. However, his success as a political leader is made only more impressive when considering this early disadvantage. Washington finally found his connection to the classical world through Cato. Washington related to the tragedy of what Plutarch described as a person “that even from his infancy, in his speech, his countenance, and all his childish pastimes, he discovered an inflexible temper, unmoved by any passion, and firm in everything. He was resolute in his purposes, much beyond the strength of his age, to go through with whatever he undertook” (Ricks 36). Washington identified himself as the American Cato, as he overcame the deficiencies of his upbringing. Instead of the accolades of traditional elitist higher education, Washington sought praise and validation through the pursuit of virtue. Washington took the classical idea of knowing oneself and thinking rationally as an inspiration to become the virtuous hero America was yearning for. Compared to Jefferson, Adams, or Washington, James Madison was most influenced by Scottish Enlightenment thinking. He studied under professor Donald Robertson, an alumnus of the University of Edinburgh who had moved to Virginia in 1750. Under the influence of Robertson, Madison read Montesquieu, who was considered “a bridge between the Enlightenment and the classical world” (Ricks 93). Madison learned of the great successes of policy in ancient history as well as its great catastrophes. Montesquieu argued that for a great republic to succeed it needed to be small. He blamed the fall of the Roman Republic on expansionism. Similarly, Madison supported a republican government in America in which the federal government had limited power and small, individually functioning republican states were more influential. With a personality shaped around history and liberty, Madison soon became “the white son Jefferson never had” (Ricks 105). Jefferson took Madison under his wing in a sense, giving Madison the ability to gain from both his own background and the friends and allies of Jefferson. Together, they formed a powerhouse that was representative of their classical heroes. They craved power and would do anything to gain it. Using his Scottish Enlightenment ideas mixed with the teachings



of Robertson, Madison became the leader in revolutionary classicism in America. While the education of these four men shaped America in astounding ways, their next test was to see if the knowledge of their youth could be turned into the wisdom of their collective futures. Putting their education into action was the only reason these men became American political powerhouses. The Founding Fathers’ educations alone held no real value without a basic education existing in the American public because the American people would not have understood the classical thought of the Founding Fathers without their Protestant upbringing. John Adams sensed this democratic need for an educated populace when he stated, “Liberty cannot be preserved without a general knowledge among the people” (Anderberg 1). It was imperative that the American public understood the classical ideas of humanism from the Renaissance to be able to value the ideals of democracy. Like most monumental events, the introduction of Greek and Roman literature into all of American society only happened because of a perfect storm of timings. In early America, the Protestant Church had significant influence over society. With generations of people feeling as if they had been cheated by the Catholic Church, as they had not been able to interpret the Bible for themselves, the ability to read was vastly coveted in society. Early America became a place with one of the highest rates of literacy due to how many families were Protestant who believed in basic rights of education. Children of all ages would have been sent to some type of schooling where they would learn to read the Bible. With the ability to read, people were able to pick up texts from ancient Greece and Rome and read the works of Socrates, Plato, or Aristotle. One of the things that also caused the American public to hold Greek and Roman literature in high esteem is that by the time of the American Revolution there were already a whole host of what is now referred to as Ivy League Universities that had been founded in the Colonies. These schools of higher learning were completely entrenched in Scottish Enlightenment theory. If the literacy rate would have been lower in America, the hold that ancient classical literature held would be lost, as even though the highest members of society held its knowledge, the ideas of Scottish Enlightenment would not be passed down to lower classes, making the American revolution hold no real motivational factors for the vast majority of the population. America never expected that by the “later parts of the eighteenth century, the thoughts and stories of the ancient Greeks and Romans stood front and center in American political and intellectual life as the founders grappled with the questions of how to gain independence and then how to form a new nation” (Ricks 20). The classical influence of Greece and Rome had officially claimed their spots as the founders of what is now the American empire.


Inspired by their classical education, the first step for the Founding Fathers to create a republic was to learn from the Roman leader Lucius Junius Brutus. Brutus worked to overthrow the last Roman king, his uncle, and then establish the Roman Republic in 509 B.C. However, this proclamation of freedom did not come without a price. In establishing a government for the people, he had to make sure no threats came to that government. Brutus had to become a stoic symbol of a new republic in which he, “according to a decree of the senate, proposed to the people, that all who belonged to the family of the Tarquins should be banished from Rome: in the assembly of centuries he elected Publius Valerius, with whose assistance he had expelled the kings, as his colleague” (Livius 2). By this decree, his own family was exiled; as illustrated in Guillaume Lethière’s painting Brutus Condemning His Sons to Death, he had to take his stance on liberty one step further when his own sons tried to reclaim the Roman throne. At the execution of his own blood, the fight for freedom is shown in its truest form. Knowing the dedication that would have to be made to secure a republic, the American Founding Fathers would have looked toward Brutus with great honor and respect. They would soon come to a deep understanding of Brutus’s personal sacrifice, as the Founding Fathers had to go against their family and closest friends during the American Revolution to create an American republic. Washington was not just a political leader but also a military leader. In the American Revolution, he constantly saw men he cared for, men he led, die in the fight for liberty. Some of Washington’s closest friends and peers would have also been loyalists, the people he was actively fighting a war against. The fight for freedom took away parents, friends, and brothers before there was ever a chance of winning. Similar to the transition of power in Brutus’s Roman Republic, after Washington and America as a whole won the fight for independence, smooth sailing for the republic was not guaranteed. After the Revolutionary War, the Founding Fathers, along with socially influential members of society, spoke of reconstructing a Greek form of democracy combined with a Roman form of republic, yet the American people were not oblivious to the fact that the type of government they were forming had never been constructed with lasting success. Some Americans wanted to give Washington the ability to rule as a monarch. He turned this down, as his education gave him a fear of becoming similar to a Roman emperor, and instead he developed what America calls the Presidency. However, there were no guidelines for how long a president could serve or how much power a president could truly have. Some of Washington’s closest friends and allies were still strongly against him leaving office, but a successful turnover of the government to the next head of state had to occur to protect the right of the public to choose their own leader. Washington did not even know if he was leaving the office in good hands, with factions



fighting and refusing to compromise, but he had to trust in the power of democracy to push American liberty forward at a time when it would have been easy for an all-controlling government to step back into play.

T

he year 1797 proved an interesting transition. With Washington stepping down from the Presidency, both the Federalists and Anti-Federalist had to decide which way to lead America next. They could not compromise on economics, state power, or constitutional interpretation. However they did agree, at least for a while, that America should expand. The ideas of a citizen-representing government shifted focus as the Founding Fathers looked toward a Greek view of imperialism. Both the new American nation and Ancient Greece “see themselves as chosen people and both see their national character as exceptional” (Murphy). In having a viewpoint of exceptionalism, Jefferson created the theory of an Empire of Liberty, giving responsibility to the United States to spread freedom across a confined world. The Founding Fathers took this responsibility and led by example. The idea of creating an Empire of Liberty, however, contradicts much of what Jefferson claimed. He consistently argued for a small, limited government, yet during the time of his presidency, he doubled the physical size of the continental United States of America, pushing the need for a large, powerful government. In the spirit of Alexander the Great, Jefferson began to expand this new empire across the continent. During his thirteen year reign, Alexander the Great conquered and controlled over three thousand miles of land. Alexander the Great single-handedly created the biggest empire in all of history; he gained power by overthrowing existing governmental leaders, killing civilians, and forcing international marriages. Alexander the Great took control not just militarily, but also culturally, making his empire Greek in rule and in mind. However, not all citizens were treated equal to those who were Greek by birth. In a similar fashion to the ancient empire’s rules on citizenship, early America was not ready to let anyone other than white, landowning, high-social status, males vote and truly hold citizenship. The people whose land was taken in the pursuit of such an “Empire of Liberty” were not allowed any genuine voice in the government “for the people ‘’ that they now had to answer to. In addition, the American leaders fully followed in the footsteps of Alexander the Great by imposing both soft and hard power on the people in which they were oppressing; for example, land was taken from Native people groups by force. According to David Micheal Smith, “the total number of Native inhabitants living in the entire Western Hemisphere had declined to 4-4.5 million. In 1800, only about 600,000 Indigenous people remained in the coterminous United States” (Smith 1). This mass genocide falls on the hands of colonial America as well as the Founding Fathers all because native people were seen as savages who had no culture but masses of land that could be

beneficial to the new empire. Genocide and taking land by force was not the only means by which the Empire of Liberty was formed. Soft power worked to wipe out heteronormity in the United States. With voting, laws, and policy being only in English, the government pushed for a white, European discourse in America, disallowing voices of anything other. Culture, commerce, technology, and ideas were those of white men, forced onto anyone who was not. It allowed these men to hold superiority over women and people of color into current times. Women only gained the right to vote in the 1920s, meaning the systems put in place by the Founding Fathers discriminated against women for close to one hundred and fifty years. This tradition of discrimination, which was started at America’s founding, has continued today where the right to vote still does not mean that there is equality between men and women. Even in twenty-first century America, the gender wage gap shows that women make eighty two cents to every dollar their male coworkers make (Gender 1). The same system of inequality has black men making eighty seven cents to every dollar their white male coworkers make (Miller 1). And despite being a citizen of a democratic republic where equality is guaranteed, Native American women make sixty cents to every dollar their white male counterparts make (Native 1). Culture, commerce, technology, and ideas in America are still those of white men who crave power close to two hundred years after the Founding Fathers put into place the ideas of American imperialism.

With the creation of the American empire, the Founding Fathers began viewing themselves as emperors. Even John Adams said, “tyranny can scarcely be practiced upon a virtuous and wise people” (Anderberg 1). The Founding Fathers deceived the American people by claiming their own virtue while constantly reiterating the ideas of American humanism. This manipulation consequently convinced the entire nation to view the Founding Fathers as emperors as well. To glorify the great leaders of the empire, buildings, statues, and paintings were created in their honor, worshipping them the same way as the heroes of ancient Greece and Rome. Governmental buildings and self-honoring depictions became symbols of both the foundations of and problems in the American empire. Early national art, in its glorifying and holy representations, contains the seeds of its own disaster. The US Supreme Court building mirrors a Roman temple where law is the religion being worshiped, and the Jefferson Memorial is a Pantheon to worship the American god who wrote the Declaration of Independence. Standing in the heart of the Capitol Building, turning to the heavens, one will only find an imperfect man. “The Apotheosis of Washington,” a painting that to most Americans represents accomplishments, liberty, and freedom, only illustrates the problems with American leadership. Apotheosis, the ascension of a man to the



holiness of a god, is fundamentally flawed. Why does this American democratic republic give men the power to call themselves worthy? Should worth not be the consequence of action? In the Greek tradition, apotheosis need not be fully earned; it need only be proclaimed loudly enough to drown out opposing voices. This is the same kind of apotheosis claimed by Greek leaders such as Philip II of Macedon. He was the first leader to gain divine honors after he transformed a weak nation into a strong one using a formidable army. His image was placed beside the images of the gods. He expected all people to worship him despite using bribery and fear to gain his position and power. Most of those he demanded worship from were people his own army had captured and enslaved. He also demanded political marriages in which he used women as objects of blackmail and the promise of peace. His policies and leadership only helped a small number of people in his society, those who looked and acted like himself. Philip II of Macedon is the archetype of political corruption; yet because of a few good acts, achieved at the cost of lives, he earned the spot he gave himself among the gods. Similarly, “The Apotheosis of Washington” glorifies the figurehead of America’s Founding Fathers, George Washington; however, he was a known slave owner, having held enslaved people from the age of eleven. This contradicts the painting’s story and history, which holds Washington at the peak of American morality, virtue, knowledge, and reason. Prejudice was in George Washington’s vocabulary and in his everyday life. Washington held views indicative of racial superiority. He in no way thought much differently than his peers. He did not live out the ideal that all men are created equal and entitled to liberty. The American government he helped create sees evidence of the continuation of his actions today, as minority members of society cannot gain the protections and liberties granted to those who are majority members of the population. Additionally, despite the powerful female goddesses depicted, the painting also indicates that a woman’s position is to sit prettily next to a man, which is contrary to the founding humanist belief that all people are individual, capable beings. In a letter to Sally Cary Fairfax, Washington mentions how he is anxious to “possess… Mrs. Custis” (George 1). Possess her, as if she were an inanimate object. According to Mercy Otis Warren, his wife was there to “sweeten the care of the Hero and smooth the rugged scenes of war” (George 1). She was to be simply a shadow. The equality that Washington prescribed was not represented in his daily life or vocabulary. The thirteen virgins sitting patiently at his side symbolize the struggle of women in society today to be seen as equals, as even thirteen strong, independent women could never possess the power given to Washington. Images such as “The Apotheosis of Washington” only serve to illustrate the number of microaggressions that exist in America’s capital and that may, over time, lead to

slow violence. The painting may seem inconsequential; however, the ideas depicted work to deny equal rights and liberties to marginalized communities. Slow violence grows and corrupts the government of the United States, the government for all people, which counters the ideals that gave the Founding Fathers their education and inspiration. This is the image that America gives off to both the world and its citizens; it is an image of willing acceptance of imperfect men as gods, an image of a place where prejudice thrives but is drowned out by the voice of white men claiming that there is liberty, freedom, justice, and equality. Many of America’s most important decisions continue to be conducted under a painting that erases freedoms and liberties for women and minority Americans. America is far too quick to accept Washington’s apotheosis. Celebrating and idolizing an unfinished story of America leads to overlooking the dents and cracks in Freedom’s shield. Instead, such paintings should point toward emancipation from the constructs that plague American society.

T

he Greek and Roman histories should have prepared America for what was to come next. Despite the separation of powers put into place by America’s Founding Fathers, corrupt politicians have become the legacy of Washington. Who does not want to be a god? When building a nation from scratch, the Founding Fathers intended to act in the best interest of the American people; however, without strong checks and balances in America, it can be easy to take power, easy to hold power in a single palm, and easy to use power to stab one another in the back, just like a Greek tragedy. American politics have increasingly become the story of multiple Marcus Junius Brutus characters. With idealized leaders comes political corruption and loss of political compromise as those who obtain power desire more. Buying and selling political positions are increasingly popular with money becoming power. Through bribes, blackmail, and boldness, corrupt political leaders steal righteousness to become untouchable gods. The American government is plagued with these political corruptions while claiming their decisions are done in the name of justice, liberty, and virtue. The best way to ruin democracy is by claiming that deceitful power comes from democracy. America is not the first place this has happened either. In the early to mid-twentieth century, Europe was particularly struggling with this same attack on democracy. Italy, Russia, and Germany all faced similar demises caused by power. A corrupt democracy is the most clearly illustrated in Germany. After slowly rising through the ranks of government, gaining power through a valid democratic election, Hitler slowly deceived the German people and gained more power through fear and small softpower law changes. To keep the power he had obtained, he granted leadership under him to the most loyal, influential, wealthy people he could find and legally, through laws he



passed with a democratic majority, imprisoned his political opponents. The majority of his power came from attacking minorities, those already too weak to fight back, to further his plan of a total global rule. Hitler’s political rise to power in a democratic government is an example of how morality can become lost, as power takes over. Those who argue that America has not already taken the first steps toward this same overthrowing of morality are wrong. America has attacked minorities in the same way by sending those who do not fit into the perfect American image to detention camps. Currently, on the border between America and Mexico, one can find thousands of people imprisoned. Of these, just last year, close to seventy percent of the people imprisoned had no criminal record whatsoever (Decline 1). Families are separated, put in unlivable conditions, and starved. Native citizens of America might argue that these people are not citizens; however, America has treated its own citizens this way as well. Not granting inalienable rights and treating people inhumanly on American soil happened during World War II. After the bombing of Pearl Harbor and while American troops were fighting against the deceitful democracy of Nazi Germany, President Rooosevelt made an executive order that allowed the forcible removal of close to 120,000 Japanese Americans from their own homes. Before this order was even placed, the American Navy had already been taking Japanese Americans into custody. The American government even “rounded-up 1,291 Japanese American community and religious leaders, arresting them without evidence and freezing their assets” just hours after the bombing (History 1). Already in America’s short history, there are evidences of corrupted democratic power not living up to the Founding Fathers’ ideals of justice, liberty, and virtue. Additionally, American politics have time and time again told the story of multiple Marcus Junius Brutus, looking to hold power in their own hands while being second to someone who they see as the step above them. It became particularly notable in the early sixties when the Kennedy administration went head to head with one another. The Kennedy Administration was plagued with jealousy, rivalry, and hatred. One of the most well known American conspiracy theories is a theory that the Kennedy shooting was a government-planned assasination and not an individual shooter incident. Ever since, the drama that comes out of Capitol Hill is filled with bad blood and downright grasps for power. In more current history, the Trump administration became famous for backstabbing as politicians have gone straight to the media to find their popularity while tearing down their opponents. A telling sign comes simply from an online search of books about the Trump administration where one of the first books linked is entitled . Ego and personality created an environment in which the only way to shine was screaming over the voice that was slightly louder than your own. The Trump Administration, similar to ones prior

to it, demonstrates that America cannot rely on morality to govern, as morality clearly does not exist in the highest offices of power, or, if it appears to exist, is just a facade. With four hundred thirty-five members of the House of Representatives, one hundred senators, fifteen Cabinet members, a Vice President, the whole State Department, a President, and hundreds of other members of an administration, Capitol Hill can get too loud to distinguish truth from screams for attention. The reverberations of these political screams for attention and power have only grown louder with the consistent use of social media as a new tool of politics. The Brutus influence in American politics is potentially detrimental to democracy.

W

ith Greek and Roman influences being so intertwined with the foundation of America, it is hard to see if there is a new nation that is formed or there is just a repeat of two empires in which history has already seen come to a conclusion. In Rome’s case, its empire reign lasted over one thousand years then dispersed into smaller regions plagued with war and rivalry. In Greece, there was only a classical period of two hundred years until a monarchical system returned while some regions were conquered by other monarchical powers. The collapse of these two empires begs the question of if America’s reign will be similar, ending in war, disputes, and a nondemocratic government. As politicians crave more power, factions fight as if they have no common ground, and American policy looks more toward changing global issues instead of domestic issues, America is playing its role as the new classical empire. America was and is the great experiment in political philosophy, and as it approaches its two hundred and fifty years of existence, America is already seeing signs of the same destruction as Greek and Roman empires. To avoid the path of destruction, there are certain changes that need to be made in America. If America is Greek and Roman history simply repeated, the nation can learn from the mistakes of those cultures instead of being doomed to the same historical failures. America needs to begin by changing its singular heroic view of itself. The egotistical nature of American policy affects how America views itself in relation to other nations. It is vital for America to realize individual truth does not correlate to universal truth, meaning American culture, policy, and overall mindset is not the way in which all countries or people groups need to think. Monotony of thought is draining both to creativity and knowledge. Diversity of peoples is the only way the world can be beautiful and full of life; ironically, this was the idea of Humanism that the American Founding Fathers gravitated toward initially. America secondly needs to learn the value of not only diversity of nations, but also diversity of its own citizens. As a nation of immigrants, Americans have different cultures, languages, and beliefs. There is no one American ideal or Fifth World



one American perspective. Condensing all Americans into one big collective mind is draining to the possibility of individualism, which again is one of the tenets of the Scottish Enlightenment on which America was founded. A cultural shift, created through respecting individual voices, is needed for change to occur in how the American society values diversity. In an opposite and equal fashion, policy needs to be expanded from the simple explanation of “left” or “right.” The American political belief system is a spectrum with many shades of middle between the two extremes. While being so diverse, Americans have many similarities; instead, the media paints two drastically different sides fighting to obtain ultimate power. For the majority of Americans, viewpoints on American policy are less dramatic than the media portrays. America and its leaders need to become more accustomed to communication and compromise; because without seeing the value of individual human thought, America disallows for agency of the common man. Without this agency, which is a vital piece of the framework of democracy, America can not exist. Consequently, creating a democracy that does not value the individual will end similarly to the Greek and Roman empires before it; America will fall, not because of an outside power, but instead from its own infighting from straying away from the ideals of democracy. The framing of America began with educated men following in the footsteps of classical heroes. However, while beginning the American experiment, the virtue of their ideals was overshadowed by the actions taken to create and secure an empire. Similarly, American leaders today are challenged to take the ideals outlined in America’s founding and put them into action. History illustrates through the collapse of Greek and Roman empires that the task of creating and maintaining a truly democratic government is impossible, unless America takes action to forge a new path, not seen before in history. With close attention paid to the value of individual human thought and exhibiting this value through actions and policy, America can begin breaking the cycle that history has predicted. Through actions that demonstrate the merit of diversity and compromise, a new tone can be set for the Empire of Liberty in which liberty and prosperity can actually be granted to all people of America.

Adams, John. Belknap Press, 1962.
Anderberg, Jeremy. “The Best John Adams Quotes.”
“Apotheosis of Washington.” 17 June 2021.
Berry, Christopher J. Edinburgh University Press, 2001.
“Brutus Condemning His Sons to Death.” Guillaume Lethière. “Collections Online: British Museum.”
“Decline in ICE Detainees with Criminal Records Could Shape Agency’s Response to COVID-19 Pandemic.”
Emerson, Roger L. Academic Patronage in the Scottish Enlightenment: Glasgow, Edinburgh and St Andrews Universities. Edinburgh Univ. Press, 2008.
“Gender Pay Gap.” Payscale, 2 Nov. 2021.
Heyman, George. Catholic University of America Press, 2007.
History.com Editors. “Japanese Internment Camps.” A&E Television Networks, 29 Oct. 2009.
Kidd, Colin. “The Scottish Enlightenment and the Matter of Ancient Troy.” Brewminate, 19 Jan. 2021.
Lewis, Michael. “Is America the New Rome? – United States vs. the Roman Empire.” Money Crashers.
Livius, Titus, et al. Harvard University Press, 2017.
Middleton, John. Vol. 1-3, Taylor and Francis, 2015.
Miller, Stephen. “Black Workers Still Earn Less than Their White Counterparts.” SHRM, 7 Aug. 2020.
Murphy, Cullen. Houghton Mifflin, 2008.
“Native American Women Lose Nearly $1 Million to the Pay Gap over Their Careers – and Covid-19 Could Make the Disparity Worse.” CNBC, 8 Sept. 2021.
Onuf, Peter S. University of Virginia Press, 2013.
Richard, Carl J. Harvard University Press, 2009.
Ricks, Thomas E. Thorndike Press Large Print, 2021.
Sadler, John, and Rosie Serdiville. Casemate Publishers and Book Distributors, 2019.
Smith, David Michael. 1492–Present.
Sowerby, E. Millicent. Vol. 1, United States Government Printing Office, 1952.
Tanaka, Hideo. “The Scottish Enlightenment and Its Influence on the American Enlightenment.” The Kyoto Economic Review, vol. 79, no. 1 (166), Kyoto University, 2010, pp. 16–39.
United States, Congress, House, and Thomas Jefferson. 1776.
Vandenberg, Phyllis. “Francis Hutcheson (1694—1745).”



The Social Consequences of the Portrayal of Older People in Media
Catherine Vu

C

arpe diem, a nearly universal mantra, has a sense of urgency, signaling at the impending termination of life. It instructs individuals to seize the day, implying that the time and opportunities they have left are dwindling. The urgency by which this mantra operates relies on the societally imposed fears of death and aging. Despite the inevitability of death and the constant experience of aging, these fears of death and aging are deeply ingrained in society. Fears of death often stem from the unknown, as what happens after death is largely unknown. The fear of aging is more concrete in nature. This can be traced to fears of psychological concerns, change in physical appearance, loss and the physical struggles that elderly people face. Although these fears are substantial and hold great validity on their own, the media’s depiction of the elderly exacerbates these feelings. At least partially due to the general apprehension surrounding aging, the elderly are primarily excluded from the media and constructed negatively. All In the Family exemplifies this as main characters Archie and Edith Bunker are both elderly. Archie is generally unpleasant, unhappy and unattractive, more often than not leaving Edith to clean up his messes. They are contrasted with their daughter Gloria and her husband “Meathead” who are both characterized as intellectual, pleasant and more reasonable seeming individuals. Elderly representation and these characterizations have improved marginally over time, but the elderly (much like they are in real life) still remain largely hidden. Despite partially breaking from the onedimensional stereotype of the grumpy old man/woman, elderly characters are often limited and still embody a variety of stereotypes. In the twenty-first century, the media is undeniably the most prominent form of popular communication. Due to its ubiquity, its influence over societal perceptions is immense. Media can be analyzed through the Theory of Social Representation, which describes how pedestrian knowledge (common sense) is constructed and transmitted by social and cultural means. Things become common sense through anchoring and objectification. Once concepts are placed in culture (anchoring), the media will often associate or construe them into something physical (objectification), embedding them into the cultural landscape as if they were

natural features. For example, in the case of Archie Bunker, his unpleasant demeanor is anchored by his television presence and then objectified when one passes by an elderly neighbor and assumes he is unpleasant due to his shared likeness, even his identity, with Archie. The portrayal of the elderyly by the media has potentially catastrophic implications. This impact is exacerbated by the fact that people 65 and older spend over ten percent more hours a day consuming media when compared to other age groups. Therefore they are major consumers of material that fails to represent them adequately, much less accurately. Media’s prominence and its establishment of representations of populations that are regarded as common sense, leads people to conform to how they are portrayed in order to act according to expectations. The overwhelmingly negative characterization of the elderly in the media leads older people to feel negatively about themselves, as stereotypes of forgetfulness, unpleasantness and unhappiness run rampant. Today’s portrayal of older individuals in the media is imperfect, but has expanded greatly. Both representation and dimension of character have increased dramatically. Although the stereotypes associated with the elderly referenced above are still associated with older characters, productions often make efforts to highlight other aspects of their character/story. The 2017 film Going in Style tells the story of three old men who attempt to rob a bank. Although the three protagonists are typical older men and are portrayed as such, their humorous personalities and comradery are the focal point of the movie. These more complex portrayals are not exclusive to visual media. In Lana Del Rey’s “Young and Beautiful” the line “Will you still love me when I am no longer young and beautiful” (1:56-2:03) is repeated throughout the song. This line implies that beauty fades with age, feeding into the physical aspect of the fear of aging. Lana Del Rey clings to her youthful beauty as she fears that the deterioration of such that is associated with age will lead to a loss of love (another key aspect associated with the fear of aging). Despite the artist’s relative youth, she is still subject to the fear of aging and fears the undesirability expected with age that has been perpetuated by the media’s portrayal of the elderly. . The fear of loss is exemplified in Adele’s “When We Were Young”. The song repeats longing references




youth and follows these with: Let me photograph you in this light,in case it is the last time that we might be exactly like we were before we realized, we were sad of getting old, it made us restless. (1:23-1:40) This popular song expresses the desire to capture fleeting moments of youth in order to escape the reality of mortality and aging. Adele compares this moment to a movie and a song, acknowledging that her desire to return to her youth or pause the aging process is unrealistic. She clearly fears the loss of the liberated, movie-character-like nature of her youth and is attempting to save remnants of it by taking a photo. Media reinforces societal fears and portrays a distorted image of older people that often lacks dimension. This often results in a misconstrued portrayal of the elderly, negatively impacting the large number of older media consumers. When older people deviate from this construction and possess other/additional characteristics it is often seen as an exception to the norm rather than a failure by the media to accurately represent this population. This perpetuates societal ageism, which is heavily rooted in and attributed to the fears of death and aging. Since aging is generally associated with loss and surrounded by a variety of negative stigma, it is seen as something that should be combated despite its inevitability. Recently, the preservation of youth has risen to new heights with the popularization of cosmetic procedures and makeover productions, once again glorified by the media. To directly address the fear of change in physical appearance, cosmetic procedures such as botox and plastic surgery are marketed towards older people in order for them to fulfill this desire for youth, fed to them by media portrayals of older people lacking the marks of aging This desire nearly transforms into pressure due to the proliferation of cosmetic procedures in the past few years. If the representation of older people in the media is one flooded with cosmetic surgeries,, this expectation will be reflected upon aging populations,, pressing older individuals to seek cosmetic procedures that they may not have considered otherwise. Although the media has immense influence over the public perception of aging, social and cultural factors play a similar role. The perception of the eldery varies greatly across different racial demographics. White people in particular emphasize independence and productivity. These values align with those that are present in America. In a study done by Roberts and other researchers that examined the perceptions of aging between different racial and ethnic groups, Roberts et. al states that “In American society, the belief commonly held that one’s value comes mostly from sources affiliated with economic success and achievement (productivity), measures of efficiency (great output for little input) and effectiveness (a positive monetary outcome) can have a negative effect on older adults’’ (Cole, 2002)


(Roberts et. al 7). Such values imply that older people are less valuable due to their limited physical capabilities and such implications are confirmed by the stereotypical media portrayals of the elderly. The parallels between white and American perceptions of what makes people valuable greatly reflects the social hierarchies present in the United States. For a variety of reasons, white people still possess the vast majority of the power and influence in the US, which results in white people having the most prominent media presence. This leads to white values being projected the most dominantly through popular media. Other demographic groups view aging in a more positive light. For example, Latino participants in Roberts’ study perceived more positive aspects of aging and viewed it as a natural process with normal health consequences, as opposed to assigning sources of individual blame for things associated with it. Additionally, health is seen as a personal matter to white people significantly more than it is to other races (discovered through Roberts’ study) leading white people to believe that people are faulted for their health conditions. This and most of the other perceptions of aging that are associated with white people closely align with “aging in place” which emphasizes letting the elderly live independently in their local homes and communities. The white-promoted near insistence and disproportionately high value of independence that is imposed upon individuals is harmful to the elderly as they often need and would benefit greatly from assistance. Such ideologies leave them reluctant to ask for help which may lead to unnecessary neglect and worsening of issues that could have been resolved by simply asking a relative or healthcare professional for assistance. The media’s influence on the public perception of the elderly and the elderyl’s perception of themselves is problematic, leading to false assumptions, invisibility and invalidation. For decades, the media has severely underrepresented and misrepresented older people’s actual fears of aging and death, occasionally even promoting arguably potentially problematic behaviors (such as the near encouragement of cosmetic procedures through the common youthful depiction of the elderly in media). Media’s omnipresence leads individuals to naturally reflect what they perceive to be expected of them. Due to the disproportionate amount of power white people possess in America, these reflections have mostly been reflections of stereotypical white elderly characters. Furthermore, since social value is associated with independence, and independence with economic activity, one’s value in the US is perceived to be associated with their independent ability to do what is expected of a “functional member of society.” This leads to the devaluation of older individuals both internally and by others, cultivating a fear of aging that nearly necessitates combat. Such combat takes a variety of forms including cosmetic surgery and trying to capture/ engage in things associated with younger generations . Despite this aging should be embraced and viewed as a



natural experience, as it is an inevitable experience all will face. No old person should be made to feel invisible or at fault for conditions associated with aging nor should they feel neglected or diminished due to this misrepresentation in media. Due to its immense influence, the media should be obligated to accurately depict this huge population. Accurate and adequate representation of the elderly in media is vital to mediating the fears of aging, society’s disvalue of older people, and generally improving the perception of and lives of older people. Media misconstructions lead to societal dysfunction as pressing problems are concealed in favor of societal comfort and in the place unnecessary problems are created. The interdependence between media and societal norms should be utilized to benefit and normalize the lives of older people. Fear is defined by Merriam-Webster as “an unpleasant emotion caused by being aware of danger…” (Merriam Webster). The unpleasant nature of the construction of fear encourages the avoidance of it. Societal misinterpretations are often rooted in fear and the distortions of older people are no exception. These distortions likely stem from fears of aging, which closely relates to and is likely partially derived from a fear of death. Thanatophobia is the term used to describe the fear of death. This stems from the Greek personification of death (Thanatos) and was coined by Sigmund Freud in 1915 who rooted it in the subconscious belief in immortality and by association, the fear of the unknown of what occurs after death. Understanding the inevitability of this fear, described by Sinoff as something that is “...omnipresent in our lives…” is essential to the human experience. The fear of death is often separated into two categories, the fear of death and fear of dying. Both are associated with fears of death, what happens to the body after death, fears of lost time, suffering, the unknown, and loneliness. Loneliness and suffering are factors closely associated with the portrayal of aging. In the realm of the fear of death, loneliness and suffering pertain to the dread that surrounds ceasing to be. What happens after death is a question that yields much uncertainty, which can stimulate anxiety and fear. Suffering that is typically associated with death, when coupled with the uncertainty of its aftermath, creates justified fears of loneliness and suffering as people link death to the dying process. The dying process and the fears surrounding it are slightly different in character as these can be directly related to human experience as people live through and witness this process. This fear parallels the fear of aging significantly more as aging becomes perceived as the chronic approach of death. Preparatory grief often characterizes the dying process and consists of five stages, identified by KublerRoss, a Swiss-American psychologist. The denial of the eminence of death is first, which is likely heavily influenced by the nearly universal sense of taboo surrounding death. Death is viewed as something to be avoided out of fear and

society has increasingly used denial as a coping mechanism for such fear. Countless medicines and weapons (ie. atomic bombs) of mass destruction have been created out of the fear of death. Their motivations are drastically different (to mediate/reverse the dying process and to avoid death by threatening the death of others respectively) but can nonetheless be rooted in an aversion towards death. Medicine especially is partially motivated by the fear of the dying process. The second stage is anger and resentment towards the living. Dying has a variety of negative connotations and those suffering oppose themselves with those who are not. Society’s denial of death leads them to think that they should not be in a dying position and such feelings brew anger towards those who are in a position closer to society’s expectations. Bargaining to cope is the third. This is often rooted in religion, but sometimes people simply offer whatever they can to prevent death. Religion has juxtaposing effects on the fear of death. On one hand, the belief that death is followed by interaction with a Supreme Being mediates fears of death as the meeting is nearly depicted as a positive end goal of life, slightly overshadowing the negativity surrounding death. On the other hand, death is associated with the judgment of the individual’s life and the fear of scrutiny can exacerbate pre-existing fears of death. The fourth and fifth stages of grief occur when inevitability is realized and acceptance begins, respectively hinging upon each other. After bargaining fails at the hands of the inevitable nature of death, people often become depressed due to the characterization of death. Behaviors associated with depression are often improperly addressed due to the human inability to tolerate depression over long periods of time due to its complexity and implications. Nonetheless, depression starts reactively and then becomes quiet as patients/those experiencing the dying process truly come to terms with its inevitability leading to acceptance. The acceptance of death feels like the acceptance of loss, a contradiction to the societal denial of death and a failure of the innovation set in place to prevent it. Significantly more patients die in hospitals now than they did decades ago and the image of a lone dying person hooked up to a plethora of machines that have functions largely unknown to the average person is common fear, only perpetuated by media depictions of hospital scenes. The suffering, loss and solitude associated with thanatophobia parallel the fear of aging. Bodner, a gerontology professor at an Israeli university describes four dimensions of age anxiety: the fear of old people, the fear of altered physical appearance, psychological concern and the fear of loss. Old people have been characterized as unpleasant by the media, are heavily associated with dying and bleak hospital scenes and retirement homes. The occupation of either for reasons related to old age nearly forces the process of preparatory grief upon individuals due Fifth World



to the heavy societal associations with death attributed to both institutions. Conventional attractiveness is perceived to have high value in society and since the appearance of old people does not embody such characteristics, the deterioration of physical appearance is associated with aging and viewed negatively. Such deterioration exacerbates this fear and contributes to the elderly’s sense of societal inferiority, which in turn leads aging people to seek to avoid such status. Many neurodegenerative diseases (among even more general diseases) are associated with aging. Disease is taboo and often treated nearly secretly behind closed doors in hospitals. Such secrecy surrounding aging’s direct correlation with a plethora of diseases that cause negative psychological implications only fosters aging anxieties. The fear of loss encapsulates and builds upon the previous fears. Losing status, ability, loved ones and opportunity among other things is a consequence of death and fear of such loss leads to aging anxiety as these losses often occur as people age but concentrate as death approaches. When faced with death, individuals either accept death or allow anxieties exacerbated by societal constructions to dominate, but these cannot save or prolong life. The latter group of individuals die unreconciled to their dying. The acceptance of death can greatly improve the livelihood of a dying or aging individual, but is made difficult by societal and media constructions of death and aging. Fears of loss dominate and lead to fears of death being externally imposed and internally festered. Such imposition leads to discussion and the transfer of fear between individuals, exacerbating the internal process, but the fear and inevitability surrounding death remains unchanged and simply develops in form and essence over time.

T

he depiction of aging and older people in media significantly contributes to the external imposition of the fear of death. As with most populations (excluding the white men, among whom age is often seen as a sign of status, that dominate and have monopolized most of the power in America for centuries) elderly people have long been neglected and misrepresented in the media. This is exemplified in Vickers’ description of the development of the characterization of older people in media when she wrote for California State University. In the 1970s, a mere 3.7% of television characters were elderly and they were portrayed in an extremely negative light, often exhibiting displeasure or unattractiveness (Vickers 2). This allows viewers to not be faced with reminders of the existence of aging and by association, inevitability of death. The media-produced temporary encouragement of ignorant bliss is a commonly utilized strategy to keep individuals content with their current position and often, even in this situation, is notably flawed. A decade later, the elderly characters expanded to highlight more positive aspects.


Cocoon (a film released in 1985) revolves around older people discovering a fountain of youth next to the nursing home they reside in and “although again stereotypical attributes are recognizable in some of the characters, the film also conveys important messages about growing older, such as the importance of friendship, the inevitability of hard decisions, the desire among some to live forever, and the need to say goodbye.” (Vickers 101). This depiction encapsulates a lot of the components associated with the fear of aging including the fear of loss and need to accept it. Since then the characterization of older people in media has increased in depth, but with such developments are accompanied with a variety of expanding narratives, some of them problematic in nature. In the twenty-first century, older people are commonly portrayed as they are in the 2017 film Going in Style, brimming with societally imposed stereotypes of unattractiveness and the loss of a youthful caliber of capability, but to a lesser extent than previous characterizations. Despite this, these traits are not highlighted and the narrative focuses on the protagonists’ adventures with each other and their comradery. Even the relatively older Richard Webber of Grey’s Anatomy (its first episode aired in 2005) despite his general presence as a wise leader, is stereotypically technologically inept, typically stuck in his ways, and sporadically unpleasant. Although the expansion in dimension of the characterization of elderly people in media, the expansion in representation has been limited. Initially, this is puzzling due to the fact that older people are the largest consumers of media. “According to Nielsen, a market-research firm, Americans aged 65 and over spend nearly ten hours a day consuming media on their televisions, computers and smartphones. That is 12% more than Americans aged 35 to 49, and a third more than those aged 18 to 34” (The Economist). Due to a variety of factors, older people are deemed as uninteresting and therefore the media is less inclined to depict them prominently out of a fear of losing the attention of viewers. Fears of death and aging are also captured in music, but rather than crafting negative stereotypes around or neglecting to represent the elderly, songs often implicate the fear of loss. As most popular artists are younger people, themes of desire to hang onto such youth are prevalent and can be traced back to fears of aging and death. “Young and Beautiful” by Lana Del Ray and “When We Were Young” by Adele both exude fears of loss by questioning what would happen when youth and the conventional attractiveness of such is lost and trying to capture and hold on to a moment of youth.. “Tonight let’s get some and live while we’re young” (0:50-0:54) is the principal repeated choral line in One Direction’s “Live While We’re Young,” in which the brevity of a night spent partying is used to characterize youth. Encouragements to “live” (alluding going crazy and falling in love) are riddled throughout as the singers encourage listeners to not overthink and that “...it’s now or never” (1:14-1:15). These develop a sense of urgency that



can be traced back to the fear of losing the opportunity to create memories associated with youth–a memorable youth rather than a forgettable death. Similar to the absence of older people in visual media, songs about aging and being older are far outnumbered by songs about youth. Songs about youth populate the charts and have done so for decades. The Grammy-winning “We Are Young” by fun, one of the most dominant songs of the early 2010s, and “Blinding Lights” by the Weeknd, which has remained a top song throughout the 2020s thus far, both display themes of fleeting youth rooted in the aging anxiety’s fear of loss. In contrast, Alec Benjamin’s “Older” expresses fear and uncertainty directly, repeating throughout the song that he is not ready to get older. Fears surrounding aging are showcased as he wishes time would move more slowly and depicts the events and activities associated with growing older (taking down childhood posters, getting an apartment, etc.) in a solemn, reminiscent light that negatively connotes aging. The Social Learning Theory states that younger people are greatly influenced by what they see and hear, often modeling it. This parallels the Cultivation Theory, which describes the ability of mass media to shape people’s perceptions of the world. These effects are compounded by the media’s omnipresence and by the sheer volume of society’s consumption of it. Additionally, the evolution of communication allows largely negative perceptions of aging to spread, simultaneously compounding and reinforcing deeply rooted societal fears of death and aging. The media reflects the fear of aging, and its reflections in turn influence the masses, producing a self-reinforcing cycle of misinterpretation, fear, avoidance, misconception, and negative connotation. A major component of the fear of death, and by association the fear of the unknown, is uncertainty; by suppressing the experiences of aging in media, that uncertainty never resolves, breeding more fear.

The media has mass societal implications, and the portrayal of older people within it is no exception. This is reflected in the strategy of encouraging “aging in place,” described by Baldassar as a way to mediate the issue of aging populations in developing nations. Aging populations occupy an increasing share of resources, especially in healthcare, leading to resentment among some younger people, especially when they are made to feel that their tax dollars are being used disproportionately to support a large older population. “Aging in place” lets older people live independently in their own communities and have assistance come to them, but it is simply not feasible for many people due to the inequity of resource distribution. Too much encouragement of it could lead to people feeling obligated to live independently, potentially

neglecting their needs, which more often than not leads to the use of more resources to mediate issues that could easily have been prevented by proper care in the first place. White people and African Americans tend to view aging through a deficits model, as a progression towards death, while it is a cultural norm for Latino people and other groups to take care of elderly relatives. Still, in a study by Lisa Roberts of Loma Linda University that surveyed Latinos, African Americans, and white people, “...most non-Latino participants felt that older adults are generally not respected in American society and that any value assigned to them aligns with their capacity for independence and productivity” (Roberts et al. 4). This aligns with the white insistence (and anxiety) that the maintenance of independence is critical to aging. Although independence is valuable, older people often need medical attention or assistance with general tasks due to nothing other than natural aging. This insistence fosters feelings of shame and fear when reaching out for assistance and leads older people to avoid getting medical care or to attempt things that are unsafe for them, leading to falls or much worse. These fears are exacerbated by the perception of older people as a burden to society, which is both projected and held by younger people and internalized by older people in a society that has little use for them. Such ideologies are damaging because they lead to neglect. “In American society, the belief commonly held that one’s value comes mostly from sources affiliated with economic success and achievement (productivity), measures of efficiency (great output for little input) and effectiveness (a positive monetary outcome) can have a negative effect on older adults” (Cole, 2002; Roberts et al. 7). By this logic, as independence decreases due to neurodegenerative disease, arthritis, or other conditions typically associated with older people, their ability to reach perceived levels of value in American society decreases, since their ability to achieve, be efficient, or make money is hindered. Such age-related diseases should be viewed as natural occurrences, as more often than not they are genetic or stem from things the individual had little to no control over; instead, people are often blamed for their suffering, and choices made in an older person’s past are blamed for their present condition. The Theory of Social Representations describes how everyday knowledge is socially constructed and diffused through social and cultural means. This everyday knowledge is perceived as common sense, which turns it into an encapsulation of beliefs and facts that everyone is expected to hold or know. Things become common sense through anchoring (characterizing and placing concepts in culture) and objectification (turning abstract concepts into things that can be perceived materially). For example, due to the prevalence of Alzheimer’s disease and its lack of a cure, a wide array of media pertaining to it has been produced. Alzheimer’s disease is anchored positively



through prevention, scientific discovery, and care, while being negatively anchored through its growing prevalence, according to Anna Šestáková and Jana Plichtová of the Slovak Research and Development Agency. Despite this multidimensional anchoring, its objectification carries a largely negative connotation, centered on the impacts of the degenerating brain. The social and economic impact of Alzheimer’s disease on families and society in general is ignored in favor of this oversimplified objectification. This implies an individual responsibility for dementia care, which makes little sense given the average individual and family’s lack of expertise and, by association, their limited ability to adequately care for someone with the disease. Prevention, one of the few perspectives associated with Alzheimer’s disease, focuses on individual lifestyle issues and often roots this and other neurodegenerative diseases (and most diseases related to old age in general) in flaws in the way an individual lived rather than in the more likely causes of genetics or old age. This combines with the perception of old people as a societal burden to blame them for their status. It places older people in a poor social condition, which feeds the fear of loss, as young people do not want to lose their favorable social positions to age, and the fear of old people, as people do not want to be around those perceived as the source of their suffering for fear of suffering themselves. Both contribute heavily to the solitude and negative feelings associated with older people, and these feelings translate into further fear and avoidance, worsening the effects of these issues. The fear of aging, the poor social position produced by media and societal perceptions, and America’s unrealistic insistence on the independence of old people and the overvaluing of their productivity lead to a harsh juxtaposition. This is exemplified through their relationship with technology. Despite being generally eager to learn about technology and how to use new devices, the elderly are depicted as technologically clumsy and inept. Because of their media portrayal as generally unpleasant, it is often assumed that they are unwilling to learn, and people often neglect or do not desire to help older people learn about technology. Technology has become vital to society, industry, and productivity, and due to older people’s perceived lack of technological capability, they seem unproductive. This is then blamed on their unpleasant nature, and a lack of education or resources is rarely considered. Media rhetoric surrounding older people and technology is a barrier to their learning, but so much focus is placed upon the burden this potentially imposes upon society that this rhetoric is seldom examined. In a study of elderly people’s relationship with technology, Kathleen Schreurs found that elderly people felt the costs of learning outweighed the benefits of staying close to their family. This is extremely saddening and can be attributed to a variety of factors, including the reluctance of people to teach

them due to a perceived lack of desire to learn, and older people feeling guilty for burdening others by asking for assistance, among other things. This has led many people to halt their engagement with technology after they retire, or at least to significantly reduce it, leading to isolation and detachment. Coupled with the independence imposed by “aging in place,” this can be detrimental to an older person’s life, as they may feel socially pressured into living alone with limited means of contacting others. Anyone in this position, much less someone dealing with the diseases, physical deterioration, and decreasing capabilities associated with aging, would become depressed and unpleasant. Yet, instead of first thinking to assist these people, the dominant rhetoric blames them for their suffering and lacks understanding of these issues. This leads to lacking support for resources like medical care, exacerbating existing problems.

In order to mediate the intricately flawed construction of older people in media, which has led to an extremely distorted societal perception of them, the root of these issues, the fears of aging and death, must be addressed. As with most issues, these can be separated into smaller components and are best approached through isolation and reconstruction. The fear of old people should be mediated through accurate and adequate representation of them in the media. This should be done by increasing the number of older characters in prominent media, increasing the volume and variation of the narratives surrounding them, and increasing their depth as characters. The grumpy old man who universally populated the media of the 1970s is a harmful and inaccurate stereotype, and although people are mostly conscious that stereotypes exist, stereotypes still subconsciously influence how society views demographics. Accurate and adequate representations of older people will eliminate much of the uncertainty and misinformation surrounding them. Most fears stem from uncertainty, and the fear of old people is a prime example. I would approach psychological fears and the fear of changing physical appearance similarly. Although these fears come from differing sources and are constructed slightly differently, both can be mediated through the normalization and embrace of things associated with aging. Conventional attractiveness is structured to value features typically associated with youth; therefore, as people grow older, they become less conventionally attractive, and since attractiveness is highly valued in society, they become perceived as less valuable by association. Neurodegenerative diseases, including Parkinson’s disease and Alzheimer’s disease, are strongly associated with aging. These diseases make it difficult for older people to live independently, and because independence is so highly valued in the United States, older people are perceived as less valuable because of this. On top of this devaluation, a prominent health narrative is not



that these diseases are typically associated with aging or related to genetics, but instead that they are somehow the fault of the sufferers. Although scientific journals do not convey this message, the framing of messages in most other media paints diseases related to older people as problems that should be solved by individuals rather than as matters larger than the individual, which would encourage individuals to reach out and seek help while also encouraging others to help those affected by these diseases. In both the case of psychological fear linked to neurodegenerative disease and physical fear linked to attractiveness, the common fear is the fear of losing perceived value in society. In the case of diseases related to aging, more accurate scientific constructions of these diseases need to be publicized in order to correct the victim-blaming narrative, and in both cases societal values should be reconstructed to embrace all characteristics of all people, especially, in this case, older people. Someone’s conventional attractiveness has no correlation with their value as a person, yet those who are conventionally attractive are greatly favored by society. The fear of loss ties the previous fears together. The fear of old people largely stems from the fear of losing favorable social status, while the fears of physical appearance and psychological concerns stem from the fear of losing perceived societal value. Both of these losses correlate with isolation and suffering, two experiences met with great animosity and heavily associated with aging and death. Fears of loss would diminish significantly if society viewed aging more realistically, but accurate media constructions and representation are a near-necessary precedent of this view. The media should be utilized to improve and spread awareness surrounding the lives of older people. Instead it commonly blames them for their suffering, leaving them feeling neglected, isolated, and largely hidden from society. Mediating these feelings through proper portrayal of older people in the media is necessary for the quality of older people’s lives and for society’s management of an aging population.

The distortion and disproportionately minimal representation of older people in the media has a variety of personal and societal consequences. This misconstruction is rooted in the fears of aging and death. These fears are interdependent: the loneliness and suffering associated with death are associated with aging, while the fears of loss, psychological fears, fears of old people, and fears of altered physical appearance are associated with death. Fear often comes from uncertainty, and the neglect to represent older people in media brews uncertainty, leaving the perception of elderly people to be dictated by largely one-dimensional media constructions and the stereotypical unpleasantness attached to them. Media is the most prominent form of communication, and its impact on society is immense. The Social Learning Theory exacerbates this, as it shows that young people model the behavior that is expected of them; when younger people consume media that portrays old people in a negative light and see older people perceived negatively in the media, they model such behavior. Disease naturally associated with aging is also severely misconstructed, and its sufferers are unjustly blamed. This leads to a conflict between the need for older people to receive medical assistance and the desire for them to be independent. The unnecessary emphasis that white Americans place on independence is harmful to older people, as things that naturally occur with aging often make living independently unfeasible and unsafe without assistance. This leads to neglect and to events that could easily have been avoided with adequate attention and medical care. The problematic framing of older people in the media needs to be addressed. The fears of death and aging, and general ignorance and negligence, should not be used as a crutch to dictate society’s perception of older people. Narratives surrounding older people in the media should expand with careful accuracy in order to avoid stereotypes, paint them as multidimensional people, and eliminate the victim-blaming and largely scientifically inaccurate perception of disease related to old age. This would mediate society’s inaccurate perception of older people and attribute greater value to them. In turn, many of the negative connotations associated with aging would lessen, leading to a more positive and accurate societal perception of older people. In such an environment, older people would be more likely to reach out for help when they need it and to feel valued, decreasing depression and other associated health consequences. Overall, mediating distorted negative societal perceptions of older people through the reconfiguration of their characterization in media is vital to the sustenance of an aging population and to a better life for all, but especially for future generations of old people.



“Aging in Place in a Mobile World: New Media and Older People’s Support Networks.” Taylor & Francis.
“America’s Elderly Seem More Screen-Obsessed than the Young.” The Economist, The Economist Newspaper.
Bodner, Ehud, et al. “The Interaction between Aging and Death Anxieties Predicts Ageism.” Personality and Individual Differences, Pergamon, 1 June 2015.
Kübler-Ross, Elisabeth, and Ira Byock. On Death & Dying: What the Dying Have to Teach Doctors, Nurses, Clergy & Their Own Families. Scribner, 2019.
North, Michael S., and Susan T. Fiske. “A Prescriptive Intergenerational-Tension Ageism Scale: Succession, Identity, and Consumption (SIC).” Psychological Assessment, U.S. National Library of Medicine, Sept. 2013.
Roberts, Lisa R., Holly Schuh, Dean Sherzai, Juan Carlos Belliard, and Susanne B. Montgomery. “Exploring Experiences and Perceptions of Aging and Cognitive Decline across Diverse Racial and Ethnic Groups.” SAGE Journals, 2015.
Schreurs, Kathleen, et al. Problematizing the Digital Literacy Paradox in the Context of Older Adults’ ICT Use: Aging, Media Discourse, and Self-Determination.
Šestáková, Anna, and Jana Plichtová. “More than a Medical Condition: Qualitative Analysis of Media Representations of Dementia and Alzheimer’s Disease.” De Gruyter, 1 July 2020.
Sinoff, Gary. “Thanatophobia (Death Anxiety) in the Elderly: The Problem of the Child’s Inability to Assess Their Own Parent’s Death Anxiety State.” Frontiers in Medicine, Frontiers Media S.A., 27 Feb. 2017.
Tilvawala, Khusbu, et al. Design of Organisational Ubiquitous Information Systems: Digital Native and Digital Immigrant Perspectives.
Vickers, Kim. “Aging and the Media.” Californian Journal of Health Promotion.




In Fear of Paganism: Exploring Evangelical Appropriation of Yoga and Meditation Sophia Lavigne

Albert Mohler, President of the Southern Baptist Theological Seminary, declared in a 2010 blog post that, “Yoga begins and ends with an understanding of the body that is, to say the very least, at odds with the Christian understanding. Christians are not called to empty the mind or to see the human body as a means of connecting to and coming to know the divine”; therefore, he concluded, yoga and meditation are incompatible with Christian life. For Mohler, Christianity requires a belief in the principle that man (the body) is separate from God (higher awareness). Throughout the article, he portrays the emerging popularity of these practices in America as an evil, corrupting force, linking them with sexual depravity and heathenism because of their origination within Hinduism and Buddhism. The distrust of foreign practices voiced by Mohler is not uncommon. Yet a new trend has arisen in American evangelical Christianity: appropriation of practices introduced to the public consciousness by Hindu and Buddhist tradition. This has manifested through intentionally Christianized yoga and meditation, creating new versions of ancient practices - a possible by-product of the distrust epitomized by Mohler’s blog post. Pew Research Center’s Global Religious Futures Project reports that as of 2020, 26.0% of the 4 billion individuals living in the Asian/Pacific Ocean region identify as Hindu and 11.3% identify as Buddhist. In contrast, only around 2% of the United States population identifies with either of these categories. This results in a vastly different perception and understanding of Hinduism and Buddhism in the United States than in their countries of origin. Furthermore, many people in the United States remain largely ignorant of Hinduism and Buddhism, lacking, perhaps, a longer history of cultural and religious exchange. As society is heavily shaped by the religious majority, the values found in religious practices become synonymous with their secular (non-religious) counterparts. Through this lens, Hindu practices are fundamentally Indian and vice versa. The ritual health-conscious spirituality of yoga and the tenets of mindfulness that originated in Hindu and Buddhist tradition have become, for many Americans, culturally synonymous rather than nuanced and distinct historical practices. The religious practices of Hinduism and Buddhism, most notably yoga and meditation, have been transformed

from their original form as they entered into American consciousness. Yoga as we know it is a physical exercise, but the term encompasses much more in Hinduism. In the Bhagavad Gita, a Hindu text dated to around the 3rd century BCE, four types of yoga are described as ways to establish moksha (unity with the divine) - bhakti (devotion), jnana (knowledge), karma (action), and dhyana (concentration), with dhyana being the closest to modern notions of yoga (Hindu American Foundation). Comparably, the practice of meditation is a key component of Buddhism. Its founder, Siddhartha Gautama, also known as the Buddha, reached enlightenment in the 6th century BCE by rejecting worldly pleasures and meditating, discovering why suffering exists and how to depart from it (Vail). In this way, the significance of iterations of these practices cannot be overlooked, despite their secularization, because of their origins. In a similar manner to the cultural-religious exchange between Asia and its dominant religions, Buddhism and Hinduism, the ideals of the American public are linked to Christianity due to its historical prevalence. The legacy of the strict religious society of the European colonists is ingrained in the development of America. An estimated 70.6% of the population is currently identified as Christian (Pew Research Center). Furthermore, the Religious Landscape Study calculates that 25.4% of Americans are evangelical Protestants, descendants of the Christianity which emerged from the revivals of the eighteenth and nineteenth centuries. American evangelicalism is characterized by conservative congregationalists, a high emphasis on evangelization, which is to spread the Gospel at all costs, and direct Biblical interpretation. Spreading the Gospel is deeply important because a core belief is that Jesus Christ is the only way to salvation - forgiveness of sins and unity with God in heaven - and so non-Christian populations must be converted. However, an overzealous approach to this can lead to miscommunication. A misunderstanding and dismissal of other belief systems can occur when the goal for making connections outside of an insular church community is immediate conversion. This approach limits the amount of genuine relationships fostered with diverse people groups. A patronizing dynamic is created; disregard for other beliefs is fostered, with the stripping of individual labels from other religions and their




practices, assigning them the overarching label of “pagan.” This term originated within the English Christian community to describe the Anglo-Saxon population that still followed the traditional religion, and it has since evolved to carry a heterodox or idolatrous connotation. It is often used in this context to describe every faith apart from Christianity (and sometimes Judaism). Recently, the term has been reclaimed by practitioners of a variety of folk and regional religious practices commonly characterized by polytheism, animism, or pantheism, such as Wicca, indigenous tribal religions, and Greco-Roman traditions. In the context of appropriation and the interactions of religious groups, the label of “pagan” and Paganism thus have two different meanings, one imposed and the other self-proclaimed. The characterization of every other religion as “pagan” creates a dichotomy of culturally inferior versus superior, signified by appropriation. Religious appropriation is a misinterpretation of a practice that stems from incomplete understanding. It disregards the specific religious and cultural significance of practices, alleviating any concerns by forming new associations with familiar beliefs. Secular and evangelical appropriation of religious practices lend themselves to each other. Secular American culture seeks experiences that are outside of typical availability and “new,” things that quickly gain popularity through commodification. Foreign practices are anglicized or their origins ignored, which allows them to be marketed as a new product. Appropriation in the context of evangelical Christianity is driven by a desire to participate in components of popular culture that evangelicals otherwise denounce. Evangelical appropriation of Hindu and Buddhist practices, such as yoga and meditation, is harmful to Asian communities because of its exploitative nature, which remains ignorant of the damage it causes. This results in the formation of a culture within the evangelical community that tends towards an imposed racial and ethnic hierarchy, financial exploitation of other religions, and a departure from the true, intended nature of Christianity.

The understanding of Hinduism and Buddhism in the United States has always been incomplete because of the consequences of imperialism and resistance to cultural pluralism (multiculturalism). The United States has long been politically allied with nations that have histories of imperialism. France and England, with whom the country is closely aligned, were active instigators in the exploitation of China, India, Vietnam, Hong Kong, and many other states. The language that we use to talk about these places has been shaped by their encounters with Western nations. We do not have access to much of the language used by the colonized because it has been diminished and disfigured in favor of Western viewpoints, including Western markets. This has permeated the way cultural fixtures of Asia, such as religion, are discussed. In her work, “Hinduism in America,” Amanda Lucia

identifies the initial introduction of Hinduism to America as when Christian missionaries returned from India and reported their findings to their communities. This provided a framework within which the relationship between American Protestantism and Hinduism developed. There is a dynamic within this use of “missionary” that asserts an authority over those it encounters. The term “Hinduism” itself only originated in the discussion of South Asian religious practices by British missionaries, and these local religious variances were not previously grouped together under a specific label (Lucia). Our Western understanding of the diverse practices and experiences of the continent is constructed through the identity of a missionary. This act of forced unification then creates a framework under which to view its people, one that has heavy Western input. Upon discovery of a culture, the work of the missionary is to find ways to merge Christianity with indigenous religions to create a middle ground, shaping old ways so that the leap needed to reach conversion shrinks. This defines the actions of Unitarian missionaries in India as they used neo-Vedantic, or Hindu modernist, monotheism to gradually bring the people to Christian monotheism from Hindu polytheism (Lucia). The other effect of using this technique is that it makes indigenous religion more palatable and noble to those in the West, playing on the trope of the “noble savage” who just doesn’t know better. These first “gurus” who visited America and explained Hinduism to the public represented a neo-Vedantic, monotheistic system as the reality of Indian culture to large crowds of admiring and patronizing listeners, as well as heavily featuring a general spirituality that encompasses yoga and meditation (Lucia). Although the rise of Nativism in the early twentieth century led to an overall negative shift in the perception of foreigners, this teaching had a lasting impact on Hinduism in the public consciousness. The perception of Buddhism in America has likewise been shaped by its introduction. The legacy of Buddhism in America has been molded by its misunderstandings, and this has allowed for a negative perception to be fostered in some evangelical communities. It has additionally been shaped by its association with labor and the economy. Buddhism was first brought to America by the Chinese and Japanese immigrants who moved to the Western US and Hawaii to serve as a migrant labor force in engineering projects, mining, and agriculture (Pluralism Project). They formed community temples and religious groups to continue the traditions and connect them to their home countries. However, discrimination became more prominent with the enactment of policies banning the naturalization of Asian immigration, such as the Chinese Exclusion Act, which politicized their presence. The distrust that these policies expanded, founded on a desire for cultural homogeneity, created an environment with anti-Asian hostilities, visible in rioting and the passing of ordinances that limited religious freedoms, such as restrictions on usage of Buddhist



and Daoist ceremonial gongs and firecrackers (Pluralism Project). The prevalence of discrimination induced a trend of assimilation and rejection of tradition by Asian-Americans. These cultural touchstones were targeted as foreign and unnatural, but gradually were incorporated into American culture during the twentieth century as the East became in vogue. Ideals of Zen Buddhism were popularized in their appropriation by the Beats, a countercultural literary force based in California in the mid-1900s, influencing the trajectory of hippie culture in the 1960’s and 70’s. The overlap between Buddhist principles and the American consciousness is mostly secular, as it is divorced from the original context and significance. This aided in the development of the concept of “New Age Spirituality,” which merges traditional spiritualist practices from across the globe, such as crystal usage, meditation, karma, yoga, and sage cleansing. Moreover, this formation of the New Age movement created an opposition for evangelical America to rally against as it stood for vaguely “pagan” ideas from foreign sources around the world. The accessibility of information in the modern era, as well as increased diversity, provides a chance for misunderstanding to be overcome. However, there remains a barrier to an increased cultural understanding in evangelical communities - the threat of doctrinal confusion. Doctrinal confusion is the idea that exposure to other belief systems can interfere with the comprehension of one’s own religion (Brown). This would mean that, subconsciously, elements of other religions would become incorporated into Biblical doctrine and the person would no longer be able to distinguish between original beliefs and implanted ones. The belief in this possibility is rooted in the evangelical perception of the secular. Concepts from Asian religions are commonly added to popular culture with little hesitancy. Evangelical society perceives this, fearing that careless acceptance of the secular world can open the door to doctrinal confusion. Furthermore, the label of “pagan” drives home the importance of maintaining this separation as the discourse of the “pagan” is very closely aligned with that of Satan and other demonic forces, which convinces some that exposure to any “pagan” (i.e. foreign) beliefs is direct exposure to the Devil. While harboring suspicion of secular practices, evangelicals still desire to have the same experiences that they observe happening around them–to participate in popular or consumer culture. This has been amplified with the changing dynamic of church culture in America as smaller, traditional congregations are replaced by MegaChurches - congregations of at least 2,000 members that typically follow a more high energy model of worship. The traditional evangelical service is losing its following, with younger generations seeking out more emotional and fast-paced experiences to keep up with the intensity of modern life, and this leads to a search elsewhere to

replace or supplement older practices with more compelling experiences. MegaChurches fill this gap with high-intensity services and appealing messaging, but this emotionality is also reached through appropriation of specific practices, such as yoga and meditation, that have high emotional spirituality. This provides an alternative route from secularism, which is frowned upon. In the New Testament, believers are explicitly instructed: “Do not love the world or the things in the world. If anyone loves the world, the love of the Father is not in him. For all that is in the world…is not from the Father” (1 John 2:15-16 ESV). This initially does not seem to coexist with the idea of appropriation, but it is mended through the act of “redeeming” each practice. The “redemption” of a practice allows it to be circulated through the evangelical community without concern. It is a repurposing that serves to clear the cognitive dissonance of condemning a practice while partaking in it. There are many different forms that redemption can take, and there are two main ways that it functions. The first is a previously secular stripping of religious context that allows evangelicals to pick up the practice and make insertions readily. The second is a conscious reconstruction that usually manifests as a substitutive act. Repackaging is a tool in this effort. As it applies to appropriation in Christian communities, especially evangelical varieties, repackaging could involve the addition of prayer, Biblical vocabulary and verses, and the editing of references to the original spiritual context to be more ambiguous (Brown). This is not a phenomenon limited to evangelicalism and is not even Christian in nature; instead, repackaging is a process that occurs within secular popular culture over time. Yoga is a prime candidate for redemption. It has already been secularized in the popular culture of the United States: the poses and Sanskrit words remain, but they become devalued when divorced from their origins. It is an aestheticized version of a culture. Once taken up as a spiritualism of Christianity, further alterations must be made to sever any cultural allusions. An example of this phenomenon is Holy Yoga, a popular “Christian alternative” to mainstream yoga marketed as a way to strengthen relationships with God, while presenting a clean and inviting aesthetic. This aesthetic is key to the manner in which it is promoted. The spiritual benefits of Holy Yoga are emphasized over the physical, assuming that its seekers are familiar with the secularized practice and its innate physical benefits, but want their alternative. Their website insists several times that the key purpose of Holy Yoga is spreading the Gospel, increasing its marketability. Their purpose is stated on their website: “Whether you come to us for your personal practice, instructor training, spiritual formation, or anything in between, you will find that divine transformation happens here” (HolyYoga). Yoga becomes a church in this sense. The practice has been transformed and assigned a higher meaning, framed as a source of deep spiritualism. It is still the same as yoga in appearance, just with a different significance assigned. The



distinction between this form of appropriation and others lies in the aestheticization. A different approach is found in the form of PraiseMoves. PraiseMoves is marketed as a form of “Jesus-friendly exercise” rather than a spiritual experience. It is openly pragmatic, adding in Biblical references in order to justify its use. In fact, it attempts to divorce itself from the spirituality that Holy Yoga aestheticizes. It does not want to capitalize on or identify with the intrigue of foreign, secular experiences that makes Holy Yoga attractive to its participants. It refuses even to use the word “yoga,” claiming that the terminology is indicative of Pagan practice. The traditional pose names are replaced with shoehorned references to various Bible verses, and one series of poses mimics the shapes of the letters of the Hebrew alphabet. These distinctions reinforce the concept of “pagan practice” and its supposed demonic properties. The founder of PraiseMoves, Laurette Willis, claims that when yoga is practiced in its original form, it invites evil spirits into the body (Willis). Practicing yoga unaltered would be considered idolatry and false worship. Even though belief in Christianity is defined through faith alone, and yoga would not be practiced as Hindu if the participant were not Hindu, it is still a foreign act in the context of Christian tradition. Even when yoga is secularized, its portrayal as a hidden, contagious enemy permeates the evangelical perception. The redemptive process reclaims this aesthetic disagreement, channeling it into a desire to purify and conquer through renaming. Nonetheless, this has the consequence of excusing exploitation. Creating a form of an appropriated practice has a net positive financial effect. Holy Yoga has grown exponentially since its founding in 2006, becoming an empire with 2,200 instructors in 13 countries and a 15% growth rate in the last year alone (Solomon). Brooke Boon, the founder of Holy Yoga, has gained a substantial amount of notoriety. The Christian market is not as aggressive in its consumption as others, but it is reliable. Christian-targeted marketing is lucrative because of the large Christian population that wants to buy and support products related to the faith, and to be provided alternatives to popular secular items that raise objections, such as movies and music. It taps into the insecurities associated with being too much “of the world.” Laurette Willis of PraiseMoves has made a brand out of this, producing material that allows consumers to exercise in a Christian way - although exercise is not inherently spiritual - because of an entrenched fear of the secular. Her website promotes PraiseMoves Gold (a Christian yoga substitute for seniors), PraiseKICKS, PraiseWAVES, PraiseBarre, and, last but not least, Mira! (Willis). These products assert that even innocuous exercises, like water aerobics and kickboxing, must contain a continuous assertion of each participant’s Christianity. It is a signifier of difference, of being set apart from the rest of society. This is the case with the treatment of yoga and meditation in appropriated forms. Redemption is a marketing strategy,

a sign to others that you have conquered paganness instead of giving in. This reveals appropriation as exploitation, especially when considering the extreme detachment of its perpetrators from the exploited culture. Appropriative forces degrade or destroy cultural competency in America, and they are a by-product of a destructive discourse formed through the nation’s history, which separates the beliefs and practices of immigrant peoples from their economic utility. In response to the increased labor-driven immigration of Asians (at the time, most notably Chinese) to the United States in the mid1800’s, the naturalization of Asian immigrants was codified as an illegal act. Immigration was necessitated by the dual industrialization and expansion of America into the frontier and imperialism-caused instability of the Global South. The economy of the United States, in this expansion, demands cheap labor, and the demarcation of immigrants as aliens, unable to claim citizenship and its associated rights, allows for the immigrants to remain pure labor. In this way, the label of immigrant is an excluding force, shaped to signify someone who is unable to access the political sphere, just as “pagan” excludes them from the religious. This served as an objectification and subsequent devaluing of Asian immigrants. In discussing the place of Asian-Americans in American Studies through her work, “The International and the National” Lisa Lowe delineates another effect of the interaction between the political and economic issues, the concern of public health. The restricted naturalization of Asian laborers shaped the function of immigration; it led to a reduction of women in these American settlements as men temporarily immigrated in order to provide for their families, creating a new model for Asian masculinity. As a result, these bachelor-camps, and therefore, their Asian occupants, were discussed as a public health crisis and a threat to American society (Lowe 34). The aforementioned policies against Asian naturalization were not repealed until after World War II, extending the circumstances that shaped discourse of Asian communities as a source of contagion. The society that existed and enacted this collective trauma continues on to the modern day, and this provides a precedent for a societal disrespect towards Asian culture, as well as an association of Asian cultures with disease, which, like the demonic, threatens to possess the body political no less than the body of the believer. Beyond social circumstances, the discourse of Asian belief systems and personal values are framed by language of contagion, and this is a function of the homogenization of the Asian-American identifier. The treatment of all cultural and religious beliefs originating in Asia as homogenous allows for them to be demonized as a unit, ignoring complexities and diversity. The state of discrimination and misunderstanding towards Asian culture in the United States has relegated Asian-American discourse to its own sphere, creating a deficit of representation in popular culture. A connotation



of virulence, racialized fear, and mystery was placed upon the Asian immigrant community, which was then transferred to its cultural and religious practices. Because of this exclusion, the majority of prominent depictions of Asian culture are from an outside perspective, and these are the circumstances in which appropriation has thrived. Portrayals of foreign culture made from an outward lens allow their practices to be reshaped and influenced for the specific needs of the appropriator, and these aestheticized appropriations sell better in the American market due to their cheap production costs. Moreover, these methods of production directly exploit the people that they steal from, subjecting immigrant and overseas laborers to inhumane working conditions. In this way, producers of appropriated goods are benefiting directly from the legacy of discrimination against Asian cultures in America, and this appropriation in the secular sphere gives way to appropriation in religious settings. Appropriation is an action that negatively alters church culture when it occurs, going against the true spirit of Christianity and signifying the construction of a racialized hierarchy. It creates a norm by classifying all things that exist outside of it as abnormal, and then bends these practices to the will of the dominant force. In this mindset, the practices of Asian cultures are inferior to their Christian counterparts because they must be “redeemed” through the process of appropriation to be worthy of cultural canonicity - a place in American homogeneity. This sends the message that foreign religious practices are not worthy of respect in their own right, and this transfers onto their practitioners. A church community that practices appropriation is predisposed to be less inclusive of members of foreign cultures because of the messaging that foreign cultures are demonic through the label of “pagan.” It breeds misconceptions that spread through the dialogue of the church community. A blatant disrespect for other cultures as demonstrated through appropriation pushes away members of the community that otherwise would have made a difference. The Bible teaches that all people are made in God’s image and are equal, and appropriation promotes the opposite sentiments. The Bible calls for treating everyone with respect and love, which is a major emphasis of the ministry of Jesus in the Gospels - “So whatever you wish that others would do to you, do also to them” (Matthew 7:12 ESV). As it applies to the church after its creation, the Pauline epistle written to the church in Ephesus instructs the early church that, “you are no longer strangers and aliens, but you are fellow citizens with the saints and members of the household of God” (Ephesians 2:19 ESV). It is a direct commandment for the church to be inclusive to all groups, along with cohesive unity within the church. John’s vision of the second coming of Jesus at the end of the world in the Book of Revelations deliberately describes the presence of people from all nationalities and cultures in unity (7:9 ESV). Instead, American community churches form along

race lines. Exclusion of people of Asian descent through a culture of suspicion and dehumanization perverts this original intention of radical love and inclusion. Appropriation promotes a disrespect for foreign belief systems, which goes against the greatest commandment of unconditional love and respect. In some instances, appropriation may be framed as a way to evangelize to members of the religion or culture being appropriated from - an act of exploitation. An example of this is presented in an article by Scott Griswold in the Journal of Adventist Mission Studies. Griswold argues that “Biblical meditation,” which still possesses similarities to the meditation described in the Bible but is organized differently to model Buddhist strategies, can be used to evangelize to Buddhists - that introducing Buddhists to “Biblical meditation” can lead them to the Gospel (Griswold 130). A decrease in respect for other cultures as an effect of appropriation is implied in his argument, as he asserts that contemplating the differences between Biblical and Buddhist meditation, meaning the truth of the Biblical versus the falsehood of the Buddhist, is helpful to this ministry and provides increased vigor for the pursuit of evangelism (130). Proselytization is considered a positive act because it introduces someone to the sacrifice of Jesus. However, utilizing appropriation hinders this effort by showing disrespect for the other culture and by disobeying a common commandment in the Bible. The demonstrated disrespect of other cultures goes against one of the greatest commandments of Jesus - to show radical love and acceptance. When asked which commandments are the greatest, Jesus says to love God and to “love your neighbor as yourself” (Matthew 22:39 ESV). Thus, religious appropriation disagrees with the core principles of Christianity. Another driving force behind exploitative appropriation is the ability to incorporate more high-emotion, spiritual experience into services to keep up with modern demands. Yoga and meditation are regarded as highly emotional experiences with spiritual effects. Utilizing emotional experiences as a method of worship has no inherent negatives, but the use of appropriated versions that harm Asian communities is not ethical, based upon its consequences. The MegaChurch, while arguably impersonal due to its large congregation size, has created a trend of more high-emotion worship services by using more upbeat or modern music. Nonetheless, these practices are not essential for having a spiritual experience as a Christian. The desire for highly emotional spiritual experiences is driven by an inclination for high-intensity, fast-paced content in the culture of a digital age. Challenging the common formulas and patterns of worship to find a personal fit can meet a need, but appropriation is incompatible with the goal of worship. To this extent, innovation that is not at the expense of other cultures would be much more productive. Appropriation harms church culture by fundamentally changing worship practices in a commercializing manner in



an attempt to remain relevant. Materialistic appropriation like PraiseMoves or Holy Yoga, or appropriation for the purpose of proselytization, has an additional effect of commercializing worship, a limit encountered in the growth of church communities. It depersonalizes something that is meant to be deeply individual. In the Pauline epistle to the church community in Colossae, it is written on worship: “Let the word of Christ dwell in you richly, teaching and admonishing one another in all wisdom, singing psalms and hymns and spiritual songs, with thankfulness in your hearts to God” (Colossians 3:15-16 ESV). Personal encounters with God through prayer and worship provide a sense of spirituality and intense emotional connection. Religious appropriation, on the other hand, is detrimental to all communities, the Buddhist and Hindu, as well as the Christian. It has altered Christian practices, and the focus of these evangelical communities has turned from the Biblical model to worldly influences of racialized suspicion and selfish exploitation.

Can a practice really be separated from its origins? This question lies behind the motivations of appropriation, and its contributors would say yes. Appropriation attempts to remove a tradition from its context, constructing a new meaning that is beneficial to the purposes of the appropriator. However, this revision is neither feasible nor ethical. The re-signification of an object of interest is a process that happens naturally through shifts in culture. Each group that adopts a symbol adds to the legacy of its meaning, so one cannot abruptly remove an act from its cultural position, because it still exists meaningfully to its participants. The significance of the symbol or practice is a core component of its meaning, which makes the original hard to distinguish from the appropriated version. When a group of people are holding poses on mats, one can assume that it is yoga, not “PraiseMoves” or “Holy Yoga.” The nature of these appropriations is to denounce the originals and seek to replace them by providing an alternative. To the eyes of an outsider, there is no difference, theologically or physically, between a Christian practicing yoga and a Christian practicing a yoga alternative. It is, therefore, a technical loophole around the rejection of non-Christian practices. It is done to prove devoutness to the community, and the creation of such appropriations provides a financial reward as well. Besides the noted difference within the appropriating community, it is seen also by the community it is taken from, signaling an imposed hierarchy. Christianity is a call to be different. Following Jesus was a radical act of self-sacrifice: giving up the ways of the world and selfishness to follow Him. Appropriation allows Christians to participate in things that they desire but acknowledge as “too worldly,” while maintaining a perspective of superiority that allows them to feel as if they are set apart. It is hypocritical to decry yoga and meditation as demonic, but practice it yourself with small adaptations

and promote it as holy. Adding another religious element does not cancel out the significance that is already present. Utilizing appropriation only covers up and seeks to ignore the decline of the ideals of the church community. It seeks to covertly adapt a religion that is decidedly incompatible with alteration. The evangelical population in the United States is serving a culture of forced homogeneity over key Biblical principles in this manner. Especially within the last couple of years, church has been used as a political rallying point, and this has damaged the ability to foster Biblical community. It is more a place for politically like-minded individuals over spiritually like-minded individuals as politics and polarization permeate the pulpit. Within the current state of the American church community, change needs to happen. Different cultures can be appreciated without exploitation or creating contradictions in faith. The act of claiming a practice of another culture or religion for oneself and altering it for personal gain is appropriation, making participation exploitative. Within the context of American evangelical church culture, the bigger issue is the movement towards secularism while simultaneously denouncing it, isolating the community in a bubble of its own media and routines, some of them made to be copies of the ones in the outside world. Realizing this and stepping outside of the bubble would create a more inviting atmosphere for outsiders. Christians exercising with flowing stretching movements or spending time in contemplation, if not contradicting any personal convictions or contributing to the erasure of other cultures via appropriation, is not harmful. These motifs present themselves globally, and can be practiced without having to borrow an iteration that has religious significance. Choosing to alter and exploit a practice that may have no significance to you, but is heavily meaningful to another culture is immoral. Building relationships with others from different backgrounds, learning about their lifestyles directly from them, is a more compassionate way to breach cultural divides as opposed to attempting to isolate their practices and interact with them outside of their context. Encouraging mutual understanding through cultural awareness in the church setting, along with utilizing worship practices that do not inhibit its purpose, could fix the divide that has been created and provide a more peaceful future. Although the world is very different now than it was 2000 years ago, continuously following the original words and intentions of Jesus and the early church would improve the community vastly. Rejecting appropriation and the racialized concept of American homogeneity would be a vital step in this endeavor. An important step is to widen the lens of church life beyond how it exists within each community’s geographical location. The pure intentions of love and respect, as well as a lens of self-reflection - the radical actions that Jesus calls for - can transform the church community for the better, allowing for reconciliation with the groups harmed by appropriation.



Brown, Candy G. “Christian Yoga: Something New Under the Sun/Son?” Church History, vol. 87, no. 3, 2018, pp. 659-683. ProQuest. Accessed 9 Apr. 2021.
The Holy Bible, English Standard Version. Crossway, 2001. Accessed 18 May 2021.
Griswold, Scott. “Comparison of Biblical and Buddhist Meditation with Reflections on Mission.” Journal of Adventist Mission Studies, vol. 10, 2014, pp. 120-134. Accessed 11 Apr. 2021.
Hindu American Foundation. “The Hindu Roots of Yoga.” Accessed 1 Dec. 2021.
Holy Yoga LLC, holyyoga.net. Accessed 11 Nov. 2021.
Lowe, Lisa. “The International and the National: American Studies and Asian America Critique.” University of Minnesota Press, no. 40, 1998, pp. 29-47.
Lucia, Amanda. “Hinduism in America.” Edited by John Corrigan, Oxford University Press, 2017, pp. 105-125. Accessed 12 Apr. 2021.
Mohler, Albert J. “The Subtle Body - Should Christians Practice Yoga?” The Southern Baptist Theological Seminary. Accessed 28 Nov. 2021.
“Pew-Templeton Global Religious Futures Project - Asia-Pacific.” Pew Research Center. Accessed 18 May 2021.
Pluralism Project. “Buddhism.” Harvard University. Accessed 15 Apr. 2021.
“Religious Landscape Study.” Pew Research Center. Accessed 2 Nov. 2021.
Solomon, Serena. “Inside the Growing World of Christian Yoga.” 5 Sept. 2017. Accessed 11 Nov. 2021.
Turek, Lauren F. “Ambassadors for the Kingdom of God or for America? Christian Nationalism, the Christian Right, and the Contra War.” Religions, vol. 7, no. 12, 2016. Accessed 18 May 2021.
Vail, Lise F. “The Origins of Buddhism.” Asia Society Center for Global Education. Accessed 1 Dec. 2021.
Welman, James, et al. “Megachurch: The Drug That Works.” High on God: How Megachurches Won the…, e-book, 2019. Accessed 9 Apr. 2021.
Willis, Laurette. “Why a Christian ALTERNATIVE to Yoga?” PraiseMoves, 2007. Accessed 14 Apr. 2021.


Lifespan: A Scientific Analysis and Global Application
Isabella Larson

Lifespan is something humans have puzzled over for as long as they have had coherent thoughts. In fact, it has been not only an object of question but an object of outright obsession. Some 2,200 years ago, the Chinese emperor Qin Shi Huang issued an executive order to find an elixir of life and went so far as to take mercury pills; in the 18th century B.C., the Sumerian king Gilgamesh went on an epic quest for eternal life after the death of his friend. Ponce de Leon famously searched for the Fountain of Youth, and Victorian women applied toxins such as lead to their faces in an attempt to reverse their aging. These are only a few notable figures who attempted to achieve eternal life, but the desire is not limited to the affluent. The general population wishes to lengthen their lives as well, which can be seen today in the products, diets, and routines advertised as life-lengthening.

Humans have endlessly chased immortality. Although science has made it widely known that immortality is impossible to obtain, this did not kill the craze that the pursuit of eternal life created. Instead of immortality, people started hunting for a way to lead a longer life and appear younger. Today, countless articles have been published on "tips and tricks" that will help you live longer. Tabloid covers advertise features that will reveal why a celebrity is so "youthful." There have been excessive claims to a "secret" to living longer — one thing that would supposedly guarantee your longevity. Makeup brands sell products that are "anti-aging" or reverse wrinkles. The presentation of these things in the media is nothing if not reminiscent of ancient searches for an elixir of life: people are still searching for a way to live longer.

The media has advertised innumerable things that claim to magically lengthen lifespan, but these claims are all false. Although people have speculated that life depends on some mystical factor we have yet to discover, the truth is far more straightforward than magic. How long someone lives can be explained by the science behind life expectancy. Because lifespan rarely depends entirely on one factor (though there are exceptions), a person's life expectancy cannot simply be changed by a single intervention such as an "elixir of life." Scientists have come to identify the factors that affect life expectancy, and it is quite an extensive catalog.


To make life expectancy easier to comprehend, these factors can be organized into three broad groups, each of which can then be broken down into numerous smaller factors. The three broad categories that define life expectancy are individual factors, societal factors, and genetic factors. Individual factors are aspects of life which you control as an individual, including lifestyle choices such as smoking, eating habits, exercise, and (to some extent) mental health. Societal factors are out of your control as an individual and are determined by society (the government and/or the collective population); they include things like access to healthcare, wealth inequality, and access to education. Genetic factors are determined by your family history. This category can have a large impact on your life expectancy, since it includes inherited chronic diseases, cancers, and other genetic conditions that can significantly shorten one's lifespan. Because these categories are so broad, the three of them cover the majority of lifespan determinants.

Knowing the three broad categories that affect lifespan, and what they generally encompass, is incredibly helpful in understanding the science behind life expectancy and how it varies. One way this knowledge is applied is in the analysis of a country's average lifespan. If we look at the average individual, societal, and genetic factors of a country's citizens, we can understand that country's average lifespan. For example, Japan has one of the longest life expectancies in the world. Studies have shown that Japan's citizens lead a generally healthy lifestyle, with enough exercise, a fish-based diet, and relatively low drug abuse rates. These are a few individual factors that define Japan's population. In terms of societal factors, Japan's government highly encourages education and provides healthy meals to children at school. The country also boasts lower levels of inequality than many other developed countries. Furthermore, Japanese citizens have a much lower risk of cancer compared to most other countries, which would be considered a genetic factor. All of these factors are beneficial and help explain why Japan's life expectancy is one of the highest. A similar process can be applied to other countries, which can help us gain a better global understanding of life expectancies.

To understand life expectancy as a whole, we have to have a detailed understanding of the factors which define it. Individual factors include things which are important to do and things which are important to avoid. This leads to the discussion of factors that are important to avoid in order to lengthen lifespan.

The most crucial habit to avoid, referenced by many clinics, is smoking (4 Top Ways). This has been emphasized in school and in television campaigns since it was discovered just how harmful cigarettes are, but smoking habits still persist worldwide for many reasons, including cultural influence, peer pressure, addiction, and lack of education. Smoking is incredibly harmful; it quickens aging and raises the risk of "cancer, heart disease, stroke, lung diseases, diabetes, and chronic obstructive pulmonary disease (COPD), which includes emphysema and chronic bronchitis" ("Health Effects" 2020). It also raises your risk of contracting infectious diseases and weakens your immune system. It is also important to note that vaping devices such as e-cigarettes do not eliminate the risks of smoking. Studies show that many smokers have moved towards using vaping devices instead of cigarettes (Villarroel et al. 2020). While these devices may seem like a safer alternative to smoking cigarettes, they can still cause addiction to nicotine, and metals such as nickel, tin, and aluminum have been found in vaping devices; these metals have been shown to cause lung cancer (Study: Lead 2018). All of this makes smoking one of the most dangerous habits to take up and a major determining individual factor of lifespan.

Drug abuse of any kind is also crucial to avoid. Smoking is harmful, but so are many other drugs, and some can be even more dangerous. These include opioids, fentanyl, cocaine, oxycodone, narcotics, caffeine, and alcohol. Some of these substances, such as alcohol and caffeine, are not widely recognized as harmful drugs and are commonly used across the United States. While they may not be as harmful as opioids in small amounts, they can still be abused. For example, long-term, consistent caffeine use can lead to chronic insomnia, hippocampal learning deficits, and even anxiety (Han 976-980). Alcohol abuse often leads to liver disease, pancreatitis, and heart disease (Mayo 8893). As can be seen, drug abuse leads to all sorts of health problems that can significantly reduce lifespan. Aside from physical health issues, it can also lead to mental health problems. Because drug use often produces chemicals which your brain would not usually produce, it can cause both short-term and long-term changes to your brain. This can be incredibly detrimental to mental health — one study showed that "people addicted to drugs are roughly twice as likely to suffer from mood and anxiety disorders" (Comorbidity 2008). Poor mental health can have devastating effects on life expectancy if proper care is not given or accessible. These two factors are very closely linked, as poor mental health can often lead to drug abuse, just as drug abuse can lead to poor mental health.

Mental health is often connected to things other than drugs. It is extremely important to take care of your mental health in order to stay healthy, which in turn will improve your life expectancy. If you struggle with a mental health disorder, it is important to be properly treated and provided with resources in order to prioritize wellbeing. Things that cause downward trends in mental health are just as important to avoid as things that cause negative physical health effects, such as smoking. A few major science-based habits that have been shown to improve mental health include avoiding negative behaviors such as comparing yourself to other people, limiting cell phone usage to avoid cell phone addiction, being physically active, going outside, and coping with stress effectively (McLeod et al. 482-497). Of course, there are many more things that can help improve and maintain mental health, but these are some of the most important.

An additional thing to avoid is excessive exposure to common carcinogens. We can tolerate small amounts of many common carcinogens, but too much exposure can be extremely dangerous. There are a large number of these common carcinogens, including asbestos, acrylamide, formaldehyde, UV rays, processed meat, exhaust, and air pollution. The best thing you can do to avoid carcinogens is to be aware of what you are being exposed to. A few good habits are to keep an eye on the Air Quality Index in your area, routinely check your home for carcinogens such as asbestos (in roofs or other parts of houses), wear sunscreen, put a blue light filter on your phone, and maintain a healthy diet (Perez).

This leads us to the things you can do to improve your lifespan. The first is to lead a healthy lifestyle, which consists of exercise as well as habits such as regularly going outside, socializing, and maintaining a good diet. Leading a healthy lifestyle and making healthy decisions have been shown time and time again to be the best way to lengthen one's lifespan.

Exercise is one of the factors that most heavily influences a healthy lifestyle. Exercise can improve general health and quality of life, which contribute to life expectancy. Some of the physical health benefits that exercise provides are healthy bones and joints, reduced risk of bone disease, reduced risk of colon cancer, lower blood pressure, lower chances of contracting heart disease, and help managing chronic conditions. It can also have mental health benefits, such as reducing symptoms of anxiety and depression and serving as a form of stress management (Deslandes et al. 191-198). These physical and mental health benefits improve one's quality of life and help reduce obesity, which can shorten someone's lifespan.

Being outside on a regular basis has been shown to have a positive effect on both physical and mental health (Jacobs et al. 259-272). One reason is that the sun provides Vitamin D, a vitamin linked to happiness that is difficult to obtain from sources other than the sun. Going outside alleviates the effects of a number of mental health conditions, including depression and anxiety, and can even improve self-esteem (Stehl).

Figure 1. Table derived from Fontaine et al. 2000 depicting the impact of physical activity on numerous mental health disorders.

Being outside also exposes you to natural light, which has been proven to improve concentration, help people heal faster, improve sleep quality, and even increase productivity (Van Den Wymelenberg 2014). Exposure to natural light is crucial because education and work are increasingly conducted through technology, and therefore under artificial light (Twenge 765). Artificial light causes negative health effects after a certain amount of exposure. Looking at a screen for more than two hours a day (outside of work, and for adults) has been shown to have negative effects on individuals' ability to concentrate, along with other negative effects such as an increased risk of depression. Screens can also harm our interpersonal interactions: "Heavy parent digital technology use has been associated with suboptimal parent–child interactions and internalizing/externalizing child behavior" (McDaniel 210-218). With technology becoming more and more present in both our personal and professional lives, it is all the more important to limit screen time and be outside. The health benefits of being outside support a long lifespan and suggest that humans should go outside regularly to improve their life expectancy.

Socializing with other people can also help you lead a longer life. While the science behind the mental health benefits of socializing is less straightforward than that behind habits such as exercising, it is still present. It has to do with neurotransmitters and how they affect your mental health and stress levels. For example, one of the most common neurotransmitters that causes a happy feeling is oxytocin, which is released when humans socialize or make contact, even something as small as "shaking someone's hand" (Van Workum et al. 563-573). This neurotransmitter helps create feelings of happiness and lowers stress levels. Consistently socializing helps you produce it more regularly, which positively affects long-term mental health. This is crucial because mental health has been shown to greatly affect lifespan: according to the World Health Organization, "people with severe mental health disorders have a 10–25-year reduction in life expectancy" (Premature Death). Mental health is a critical part of one's lifespan, which makes caring for it, to the best of one's ability, all the more important.

Diet is a commonly discussed factor, since humans need to eat to sustain life. After much deliberation and research, health officials have found that the best diet to maintain is a balanced diet (Price 30-31). There is limited evidence on whether organic and natural foods are truly better for you, and diets such as the ketogenic diet have been shown to carry risks that may outweigh the benefits (Joshi et al. 1163-1164). Considering these things, the best way to maintain a healthy diet is to eat a balance of foods. This means not over-consuming red meat (which in large amounts can raise cancer risk), eating many different kinds of fruits and vegetables, limiting fat and sugar, and eating a healthy amount of grains.

Since a balanced diet is so important to general wellbeing and lifespan, it is worth reviewing in greater detail. Everyone has a recommended daily calorie intake based on age, gender, and exercise habits. Here is a current chart of recommendations (Do You Know):

Figure 2. Recommended daily calorie intake by age, gender, and physical activity level, from "Do You Know How Many Calories You Need?"

However, regulating the number of calories you consume does not automatically mean you uphold a healthy diet. The most important part of a healthy diet is where those calories come from — the food you consume on a daily basis. A healthy diet consists of proteins, healthy fats, antioxidants, minerals, vitamins, and carbohydrates. The sources of these components are just as important as maintaining a balance of them in your diet.

Proteins can come from many sources, such as legumes, grains, nuts, and meats, though plant-based proteins are the safest choice. Because red meat is considered a carcinogen, and processed meat is detrimental to health in a number of ways, many doctors and researchers recommend getting your protein from plants when possible (Protein 2021). Dieticians say that a safe amount of red meat is about 455 grams per week (How Meat 2019). If you consume more than this, it may be a good idea to consider substituting poultry for red meat. Poultry is much better for your health than red meat because it is low in saturated fat and high in protein and healthy fatty acids, such as omega-3. These health effects of red meat are why health professionals concur that plant-based protein is much healthier than meat-based protein. It is recommended to consume a variety of legumes, grains, and nuts in order to maintain a good amount of protein consumption.

Another crucial part of a balanced diet is healthy fats. Though many people associate fats with being bad, especially because of the widespread "no/low fat" trend in the United States, healthy fats are pivotal to a healthy diet. According to one study, "Healthy fats provide energy, support cell growth, protect organs, and keep your body warm. Essential fatty acids are necessary for the absorption of fat-soluble vitamins A, D, E, and K and help with hormone production" (Swanson et al. 1-7). This means we should aim to maintain an intake of healthy fats. However, the amount of healthy fats we eat should still be limited; an excess of any part of a balanced diet can be too much of a good thing. To keep our intake at a balanced level, dieticians say healthy fats should make up no more than 20-35% of our daily calorie intake (Fat and Calories 2019).

Antioxidants, minerals, and vitamins can be grouped into another important part of a balanced diet, since they come from similar sources. Plant foods — mostly fruits and vegetables — are great sources of these nutrients. It is important to get your antioxidants, minerals, and vitamins from plant sources rather than over-the-counter supplements, because studies have shown that the nutrients provided by supplements are not as effective as those which come directly from a plant-based source. Antioxidants are important because they neutralize free radicals in the body which, if left, can cause a variety of diseases (Antioxidants 2020). Vitamins and minerals work in harmony to perform countless roles in the body: "They help shore up bones, heal wounds, and bolster your immune system. They also convert food into energy, and repair cellular damage" (helpguidewp 2021). This is why eating many fruits and vegetables is considered such an important part of one's diet — consuming them provides us with the basic nutrients to be healthy in many different ways.

Another part of maintaining a healthy diet is eating healthy carbohydrates. Sugars, fibers, and starches are the most common examples of carbohydrates. Carbohydrates are important because they provide glucose, which is used by the body to produce energy. However, not all carbohydrates are healthy. In fact, many are highly processed and are more harmful than helpful to the body. Generally, healthy carbohydrates are those which are minimally processed: a carbohydrate like quinoa is much healthier than a processed carbohydrate like french fries. Processed carbohydrates are crucial to avoid because they "...may contribute to weight gain, interfere with weight loss, and promote diabetes and heart disease" (Carbohydrates 2019).

The last part of maintaining a healthy diet is consuming plenty of water. Although drinking enough water may seem less important than eating healthy foods, it is just as important as any other part of your diet. Drinking enough water helps protect sensitive tissue, lubricates joints, and reduces the risk of dehydration. According to the CDC, dehydration can "cause unclear thinking, result in mood change, cause your body to overheat, and lead to constipation and kidney stones" (Water and Healthier 2021). Considering all these factors, it is clear that maintaining a healthy diet has a direct link to your lifespan. It is generally agreed by scientists that diet is one of the most important determinants of life expectancy.

This leads us to ask why certain countries have longer lifespans than others, if so many factors are largely left up to individuals. A country's average lifespan does rely on individual factors, but it is also greatly affected by societal factors, including access to food and healthcare. Oftentimes these are the reasons that lifespan varies so greatly around the world and from country to country. To understand them, we have to examine the societal factors that affect lifespan more closely, and how they differ across countries.

Previously, we discussed how important diet can be in determining someone's lifespan. An individual's diet can depend on a country's resources and often on citizens' societal status.1 For example, it is impossible for a person to maintain a healthy and balanced diet if they are living in a food desert which they cannot escape because a poverty cycle traps them in economic ruin. Another societal factor which affects diet is geography. Geography often affects what kind of diet the citizens of a country will consume, because it affects what foods are widely accessible. For example, coastal countries with a large seafood industry will have more access to seafood than landlocked countries, making a seafood-based diet more likely in a coastal country. Diet also depends on how the government provides for its citizens. One example of the effect governmental decisions have on citizens is school lunch programs.

1 Because societal factors are closely linked to individual factors in the sense that they have a great impact on an individual's choices, it can be hard to quantify the difference between the two. However, for the sake of my argument, the two must be separated.

Because child and teenage brains are highly impressionable, habits created in these years are the basis for lifelong habits (Choudhury and McKinney 192-215). Since the government is in charge of the public school system in most countries, it also controls what food is provided to students in schools. Since students attend school for a majority of the week, being consistently provided with healthy or unhealthy food at school shapes the eating habits and general health of many children, which will come to affect life expectancy. Thus, the government, by providing children with a certain type of food, paves the way for lifelong eating habits for many of its citizens and helps determine average life expectancy.

One of the most important societal factors is poverty. The poverty cycle traps people in poverty by denying them access to things such as nutritious food and education; it keeps people in a situation where they do not have the resources to lift themselves out of that situation. For example, in the United States, school funding for a specific area comes from taxes collected in that area. Should the area happen to be low income, less tax money will be collected. This means that schools in low-income areas frequently have less funding for adequate staffing and classroom supplies, leading to a lower quality of education. A lower quality of education often leads to jobs that do not require as much education and are frequently low paying, leading people from low-income areas to stay in low-income areas. Though this is just one example of how the poverty cycle works, it keeps people in poverty in other, similar ways, as will be demonstrated in the following paragraphs.

One specific way the poverty cycle affects lifespan is by restricting access to healthcare for low-income people. To lengthen your lifespan, it is important to lead a healthy lifestyle, and in places where healthcare is inaccessible, it becomes more difficult to stay healthy, especially when people fall ill. This is seen particularly in countries where minority populations have historically been denied equal access to healthcare. For example, Indigenous Australians face a number of barriers when attempting to receive healthcare, including distance to the nearest healthcare center, racism in healthcare centers, and cost (Davy et al. 2016). According to a study in Australia, the gap in life expectancy for Indigenous Australians was very large: between 2005 and 2007, their life expectancy was 9.7-11.5 years shorter than that of the average Australian citizen (Tier 1-Life 2012). This shows just how large an impact poverty and systemic oppression have on a population's average lifespan. In countries where a group or groups of people have been denied access to healthcare, life expectancy will be shorter than if everyone were provided with adequate, equal access to healthcare.

Another factor controlled by the poverty cycle that greatly affects life expectancy is access to education. It has been shown that life expectancy is lengthened for each step of education taken — as much as 1.37 years added at each step (Bulled and Sosis 269-289). This is a result of the fact that better education can lead to better life choices and better careers, which correlate with a longer life (Borman and Henderson 71-76). Average lifespan also relies on the quality of education. For example, a student who goes to a better-funded school may learn more about nutrition and exercise than a student at a less-funded school, leading to overall healthier choices and therefore a longer and healthier life. Furthermore, the better educated a student is, the more likely they are to go on to higher education, which leads to jobs that are less likely to involve manual labor and that pay more. This means the more highly educated student would have the resources to buy healthier food and pay for better healthcare. These things all contribute greatly to lifespan, and they show yet another way the poverty cycle can lead to a lower life expectancy among minorities who have historically been forced into low-funded schools and poverty, which in turn affects a country's overall lifespan.

The poverty cycle also restricts impoverished people's access to nutritious food. According to one study, "among adults and the elderly, both food insecurity and poverty are predictive," meaning that if someone is impoverished, they are more likely to experience food insecurity (Bhattacharya et al. 839-862). The poverty cycle perpetuates food insecurity because of food deserts and the higher price of nutritious food. Typically, food deserts occur in areas that are highly populated with low-income families. Food deserts are areas where food is hard to come by, meaning low-income families have to travel further to get to a grocery store, so the transportation costs and time consumed by grocery shopping are higher for families in impoverished areas. Furthermore, in many countries, nutritious food is much more expensive than highly processed food. Foods like potato chips are mass produced and made with cheap, unhealthy ingredients, meaning low-income families often choose the less nutritious option simply because it is cheaper and more accessible. This directly impacts low-income families' diets, a crucial individual factor of lifespan. The poverty cycle is an especially important societal factor: its impact on individual factors is greatly negative, and in countries where poverty rates are high, average life expectancy suffers.

The final category of factors determining life expectancy is genetic, controlled not by an individual or the society they live in but by a person's genes. These factors can have a great and unpredictable impact on a person's lifespan. Breast cancer is one example: if breast cancer runs in a person's family, they can take steps like exercising regularly to stay healthy, but they may not be able to prevent themselves from being afflicted by it. Chronic illnesses, genetic mutations, and hereditary diseases are genetic factors and can be unpredictable. The one exception to this unpredictability is cancer risk, which can be higher or lower based on how developed a country is; more developed countries tend to have higher cancer rates.



A study from the European Society for Medical Oncology shows that citizens of more developed countries are almost 2.5 times more likely to be afflicted with cancer than citizens of less developed countries (European Society 15-23). The study attributes this to "a transition to some of the lifestyle, diet, and environmental risks previously associated with highly developed societies": risks specific to developed countries, such as processed diets, a sedentary lifestyle, and increased exposure to carcinogens. Therefore, we can conclude that while individual genetic factors are usually independent of other factors, a country's cancer rates are not.

To demonstrate the practical application of this research, the average lifespan of a country can be analyzed. For this example, Japan will be used, because it has one of the highest life expectancies in the world: 84.36 years as of 2019 (World Development 2019). It is important to note that while Japan specifically will undergo analysis, this process can be applied to any country's average lifespan and can therefore be used to compare global average lifespans and their discrepancies. To analyze a country's average life expectancy, an examination of the country's average individual habits, societal factors, and genetic factors is required.

The first individual factor we will look at in Japan is exercise. Japanese citizens maintain moderate exercise habits: with the average Japanese adult walking around 7,168 steps per day, and around 45% of citizens regularly exercising, Japan boasts a fairly active population. Because its population is moderately active, Japan has one of the lowest obesity rates in the world, at 3.6% (Senauer and Gemma 265-268). This is attributed to higher walking rates and healthier eating habits. Why are Japanese citizens' eating habits so much better than the global average? Japan is an island, which means that there is a large seafood economy and seafood is widely consumed by Japanese citizens. (It is easy to see how the same reasoning would apply to other geographic regions, such as places with different climates.) Seafood-heavy diets have been proven to be good for you (Troell et al. 2019). By consuming seafood rather than red meat, you receive the protein you would get from red meat, but with less saturated fat and more omega-3 fatty acids. This reduces the cancer risk associated with red meat consumption and provides many long- and short-term health benefits that contribute to a long lifespan.

Japan also has some of the lowest drug use rates in the world. As mentioned in the section discussing individual factors, avoiding drug abuse is one of the most important things you can do to improve your life expectancy. Japan has comparatively harsh drug laws, leading to exceptionally low rates of drug usage. This means Japanese citizens are much less likely to be subject to the overwhelmingly negative health effects of drug abuse such as addiction, cancer, and chronic disease.

Furthermore, Japan manages societal factors such as food accessibility exceedingly well. In school, healthy lunches are provided to every student, and meals are heavily subsidized for those who cannot afford them. Education is highly prioritized by many families and by the government in Japan, which helps ensure every child is given a healthy lunch each day. We can compare this to the United States, where school lunches often consist of highly processed, fat- and sugar-heavy foods such as pizza and ice cream; in 2009, a study found that American school lunches would not meet the standards of many fast food restaurants (Fast-Food 2009). In Japan, by contrast, there are no cafeterias, and therefore no underpaid, overworked cafeteria workers serving mass-made, frozen meals. Most meals are handmade on site and healthy, consisting of meat (a protein), rice (both a grain and a healthy carbohydrate), and vegetables (a source of vitamins, minerals, and antioxidants) (Harlan 2013). This creates the basis for lifelong healthy eating habits and contributes to Japan's low childhood obesity rate.

Another crucial societal factor which Japan handles well is accessibility to healthcare. Healthcare in Japan is provided to all people through a universal healthcare system: employed, unemployed, and self-employed adults, as well as students, can all register for healthcare, and the cost is determined by income, making Japanese healthcare widely affordable (Healthcare in Japan). Many Japanese healthcare employees have also been trained to speak both English and Japanese, making it easier for everyone to receive adequate care through the national healthcare system. As discussed in the section on societal factors, accessible healthcare is arguably one of the most important determinants of average lifespan and helps explain why Japan's life expectancy is so high.

Finally, Japan's individual genetic factors cannot be accurately assessed, but its low cancer rates can be. The low cancer rates in Japan are frequently attributed to its low obesity rates, which, as previously discussed, can be associated with healthy school lunches, a moderately active average lifestyle, and a diet heavy in seafood. The fish-heavy, plant-heavy diet most Japanese citizens eat also leads to low cancer rates, because omega-3 fatty acids and soybeans reduce cancer risk (Hardman 3508S-3512S). As a result, Japan's overall cancer rates sit at the low end of global cancer rates.

To summarize, Japan's very high average lifespan can be attributed to its average individual, societal, and genetic factors. The average Japanese citizen consumes a healthy, seafood- and plant-based diet, exercises a moderate amount, and avoids drug use. The Japanese government promotes healthy eating in schools and provides accessible healthcare to its citizens. This demonstrates how the knowledge provided in this paper can be applied to any individual country, making analysis of global lifespans plausible.


Life expectancy around the world differs from country to country. Though the world population's average lifespan has increased over time, there are still disparities between countries' life expectancies. Why some countries have lower life expectancies than others depends on a number of factors, but these factors can be identified and explained. What used to be a mystery to humankind is now explained by science: we know what affects life expectancy and how this knowledge can be applied to understand a country's average lifespan.

To understand the average lifespan of entire countries, we must first understand what dictates an individual's life expectancy. The first set of factors discussed are those within an individual's control. These are defined as individual factors and are mostly lifestyle decisions that an individual makes. The next factors affecting a person's life expectancy are those which are largely decided by society as a whole. To be more precise, societal factors are things such as wealth inequality and education access, which most individuals cannot control. The final factors which affect life expectancy are also out of an individual's control: genetic factors are hereditary and determine certain risks. These are the three general categories which define a person's lifespan and are pivotal in understanding life expectancy on a broader scale.

If you understand the factors that affect life expectancy, you can then understand what the average life expectancy of a country means. To understand a country's life expectancy, you have to take all factors into account. In terms of individual factors, you need to know the habits of the country's general population: what diet, work, and exercise habits are typical. If the population follows multiple lifestyle habits within one category, such as two common kinds of diets, then you also need to consider how many people practice each. You also need to know what a country's society prioritizes: what kind of healthcare is offered to citizens, how much education costs, whether there are prevalent systemic inequalities, whether certain habits are shunned or considered "taboo," et cetera. Finally, you need to know about the country's general genetic makeup. Some countries' populations are at higher or lower risk of inheriting certain illnesses, which is important when determining what affects a country's average lifespan. This information on different determining factors is useful because it allows you to explain why life expectancy differs from country to country: by identifying the differences between countries in the factors that affect lifespan, you can explain the difference in their average lifespans.

This research led me to discover how we can apply scientific information regarding lifespan to different countries in order to expand our understanding of global life expectancies. However, this research can be applied even further. We can use this information to pinpoint what factors are lowering a country's lifespan. The first step in changing an issue is to identify its cause, so if we know what factors are negatively affecting a country's average lifespan, we can address those issues directly. If this research is applied to address a country's issues and changes are made by those in power, both a longer life and a better quality of life are bound to follow. Knowing how we can apply this information to improve global lifespan and quality of life makes this research infinitely more valuable.



Bhattacharya, Jayanta, Janet Currie, and Steven Haider. "Poverty, Food Insecurity, and Nutritional Outcomes in Children and Adults." Journal of Health Economics 23.4 (2004): 839-862.
Troell, Max, Malin Jonell, and Beatrice Crona. "The Role of Seafood in Sustainable and Healthy Diets." The EAT-Lancet Commission Report Through a Blue Lens. Stockholm: The Beijer Institute (2019).
Borman, Christopher A., and Patricia G. Henderson. "The Career/Longevity Connection." Adultspan Journal 3.2 (2001): 71-76.
Twenge, Jean M., Gabrielle N. Martin, and W. Keith Campbell. "Decreases in Psychological Well-Being Among American Adolescents After 2012 and Links to Screen Time During the Rise of Smartphone Technology." Emotion 18.6 (2018): 765.
Bulled, Nicola L., and Richard Sosis. "Examining the Relationship Between Life Expectancy, Reproduction, and Educational Attainment." Human Nature 21.3 (2010): 269-289.
Choudhury, Suparna, and Kelly A. McKinney. "Digital Media, the Developing Brain and the Interpretive Plasticity of Neuroplasticity." Transcultural Psychiatry 50.2 (2013): 192-215.
Davis, Carla. "Shining Light on What Natural Light Does for Your Body." Sustainability, NC State University, 24 Mar. 2014.
Van Den Wymelenberg, Kevin. "The Benefits of Natural Light." Architectural Lighting 19 (2014).
Van Workum, Nicole, et al. "Selection, Deselection, and Socialization Processes of Happiness in Adolescent Friendship Networks." Journal of Research on Adolescence 23.3 (2013): 563-573.
"Premature Death Among People with Severe Mental Disorders." World Health Organization.
Villarroel, Maria A., et al. "Electronic Cigarette Use Among U.S. Adults, 2018." Centers for Disease Control and Prevention, Apr. 2020.
Davy, C., S. Harfield, A. McArthur, et al. "Access to Primary Health Care Services for Indigenous Peoples: A Framework Synthesis." International Journal for Equity in Health 15, 163 (2016).
"4 Top Ways to Live Longer." Johns Hopkins Medicine.
Deslandes, Andréa, et al. "Exercise and Mental Health: Many Reasons to Move." Neuropsychobiology 59.4 (2009): 191-198.
"Antioxidants - Better Health Channel." Better Health Channel, Victoria State Government Department of Health, 6 June 2020.
dren." Journal of Paediatrics and Child Health 53.4 (2017): 333-338.
European Society for Medical Oncology, and P. Kanavos. Annals of Oncology. 17th ed., New York: Springer Publishing, 2013: 15-23.
Han, Myoung-Eun, et al. Biochemical and Biophysical Research Communications. 4th ed., vol. 356, ScienceDirect, 2007.
Hardman, W. Elaine. "Omega-3 Fatty Acids to Augment Cancer Therapy." The Journal of Nutrition 132.11 (2002): 3508S-3512S.
Harlan, Chico. "On Japan's School Lunch Menu: A Healthy Meal, Made from Scratch." Washington Post [Washington, D.C.], 27 Jan. 2013.
helpguidewp. "Vitamins and Minerals." HelpGuide.org, Harvard Help Guide International, 15 July 2021.
Jacobs, Jeremy M., et al. "Going Outdoors Daily Predicts Long-Term Functional and Health Benefits Among Ambulatory Older People." Journal of Aging and Health 20.3 (2008): 259-272.
Joshi, Shivam, Robert J. Ostfeld, and Michelle McMacken. "The Ketogenic Diet for Obesity and Diabetes—Enthusiasm Outpaces Evidence." JAMA Internal Medicine 179.9 (2019): 1163-1164.
Mayo, Charles. Postgraduate Medicine. 6th ed., vol. 64, Interstate Postgraduate Medical Association, 1978.
McDaniel, B. T., and J. S. Radesky. "Technoference: Longitudinal Associations Between Parent Technology Use, Parenting Stress, and Child Behavior Problems." Pediatric Research 84 (2018): 210-218.
McLeod, Jane D., Ryotaro Uemura, and Shawna Rohrman. "Adolescent Mental Health, Behavior Problems, and Academic Achievement." Journal of Health and Social Behavior 53.4 (2012): 482-497.
Perez, P. R. "Carcinogens in Your Home."
"Carbohydrates." The Nutrition Source, Harvard T.H. Chan School of Public Health, 22 May 2019.
"Comorbidity." National Institute on Drug Abuse, U.S. Department of Health and Human Services, Dec. 2008.
"Do You Know How Many Calories You Need?" Food and Drug Administration.
"Fast-Food Standards for Meat Top Those for School Lunches - USATODAY.com." USA Today [Tysons, Virginia], 8 Dec. 2009.
"Fat and Calories: The Difference & Recommended Intake." Cleveland Clinic, 25 Apr. 2019.
"Health Effects of Smoking and Tobacco Use." Centers for Disease Control and Prevention, 9 Feb. 2017.
"Healthcare in Japan." International Student Insurance.
"How Meat and Poultry Fit in Your Healthy Diet." Mayo Clinic, Mayo Foundation for Medical Education and Research, 19 Nov. 2019.
"Protein." The Nutrition Source, Harvard T.H. Chan School of Public Health, 12 Nov. 2021.
"Study: Lead and Other Toxic Metals Found in E-Cigarette 'Vapors.'" Johns Hopkins Bloomberg School of Public Health, 21 Feb. 2018.
"Tier 1-Life Expectancy and Wellbeing-1.19 Life Expectancy at Birth." Australian Government Department of Health.
"Water and Healthier Drinks." Centers for Disease Control and Prevention, U.S. Department of Health & Human Services, 12 Jan. 2021.
"World Development Indicators (WDI) - Home." The World Bank, The World Bank Group.
Price, Susan. "Understanding the Importance to Health of a Balanced Diet." Nursing Times 101.1 (2005): 30-31.
Ravallion, Martin. "Inequality Is Bad for the Poor." World Bank Policy Research Working Paper 3677 (2005).
Sato, Kyoko Kogawa, et al. "Walking to Work Is an Independent Predictor of Incidence of Type 2 Diabetes in Japanese Men: The Kansai Healthcare Study." Diabetes Care 30.9 (2007): 2296-2298.
Senauer, Benjamin, and Masahiko Gemma. "Reducing Obesity: What Americans Can Learn from the Japanese." Choices 21.4 (2006): 265-268.
Stehl, Alexandra. "The Health and Social Benefits of Recreation." State of California Resources Agency.
Swanson, Danielle, Robert Block, and Shaker A. Mousa. "Omega-3 Fatty Acids EPA and DHA: Health Benefits Throughout Life." Advances in Nutrition 3.1 (2012): 1-7.


Prominence and Popularity of Chinese American Cuisine
Alicia Bao

Considered one of the most complex and diverse cuisines in the world, Chinese cuisine1 has only recently gained prominence in the Western hemisphere. Chinese American cuisine, however, has been an integral part of the formation of America. At the crux of Americans' perception of the Chinese diaspora, Chinese American food has helped the Chinese in America to survive economically and culturally. The resistance and conformity to hostile pressures that enveloped these Chinese communities were reflected in the ever-shifting Chinese restaurant business: in the foods served, the locations, and the main customer base. As a result, the history of Chinese food in America can be separated into three distinct periods.

The first period saw Chinese restaurants catering to Chinese immigrants during the Gold Rush of the mid-1800s. These restaurants were located within ethnic enclaves and presented relatively authentic2 and extensive menus that offered anything from affordable dishes like stir-fry to expensive delicacies like bird's nest soup. In addition to the Chinese dishes, a few American entrees would be tacked on, such as fried chicken or steak, for the few non-Chinese customers that wandered in. The non-Chinese customers were usually impressed by the food and service of these sit-down restaurants, as the restaurant industry in America had not yet developed.

The second phase consisted of the rise of chop suey houses around the late 1800s, a notable shift towards appealing to the Western palate. Instead of extensive menus with a large price range, the menus of chop suey houses simplified to dishes like egg foo young,3 chow mein,4 and the namesake chop suey.5 The popularity of these dishes was attributed to their affordability, which did not compromise taste, and they fed the emerging American ideal of a leisurely lifestyle. That popularity allowed Chinese restaurants to spread outside of Chinatown as well, where more Americans could access them. Though Chinese restaurants outside of Chinatown had higher prices, they were still more affordable than typical restaurants, making them a popular choice amongst the working class. Office jobs, created by the rapid urbanization at the turn of the 20th century, were occupied by young people with limited budgets, and the hot meals provided by chop suey houses were a favorite amongst them (Liu 60).

The third phase of Chinese restaurants was marked by the movement back towards authentic Chinese food. This movement is often credited to Cecilia Chiang, who in 1961 opened The Mandarin, a high-end restaurant that served authentic Northern Chinese cuisine from an extensive menu of over 300 items. While many people expected The Mandarin to fail because it deviated from the Western palate, it attracted mainstream attention after a prominent columnist for the San Francisco Chronicle raved about his experience. Chiang's success with The Mandarin opened the idea of trying new, authentic Chinese foods in America, paving the way for the success of restaurants like Din Tai Fung or Xi'an Famous Foods ("Chiang" [Interview]).

Every city in America is speckled with Chinese restaurants, from the run-down, hole-in-the-wall spot that has thrived for decades despite being rated 3/5 stars on Google reviews, to an upscale restaurant that seats over 20 people around each lazy susan for large gatherings of friends and family. Even with limited background knowledge about Chinese immigration, it is easy to see that the Chinese restaurant business has been a crucial economic lifeline for the Chinese community.

In the 1840s, news that gold had been found in California quickly spread, and people rushed to immigrate and become gold miners. At the peak of Gold Rush immigration, 20,000 Chinese immigrants moved to California, approximately 30% of all immigrants at the time. The Chinese community was subsequently met with waves of xenophobia and racism, as the fear of perpetual foreigners stealing American jobs spread.

1 There is no one spelling of Chinese that accurately represents all the dialects, tones, and nuances of the language. The spellings reflect the pinyin of each character, other than a few popularized words, such as "Cantonese" or "Canton."
2 The authenticity of these restaurants was limited by the availability of ingredients abroad.
3 An omelet dish that consists of egg mixed with vegetables and protein (usually shrimp or pork), cooked like a pancake and smothered with a stir-fry sauce.
4 A stir-fry noodle dish that consists of noodles, protein, and an assortment of vegetables coated in a thin soy sauce-based sauce.
5 A stir-fry dish that usually consists of protein and vegetables, but is distinct from chow mein in its thicker, gravy-like sauce.



These sentiments were echoed by the California legislature when it adopted a Foreign Miners License Law, charging 20 dollars per month to all non-U.S. citizens engaged in mining. Those 20 dollars were equivalent to over 700 dollars today, which many of the miners could not afford, as they had left everything behind to find a new life in America. Though the act was repealed after a year (then replaced with a similar law), many Chinese miners had already left the mining industry to form San Francisco's Chinatown ("Gold Rush to Golden State").

Chinatowns were formed to protect Chinese immigrants, as they faced discrimination and violence everywhere else they went. Smaller communities of Chinese immigrants were stamped out as the Yellow Peril sentiment remained omnipresent. The bias against Chinese immigrants meant that mines were even more dangerous for Chinese miners. As a result, Chinese workers moved to other sectors of work, such as agriculture, but anti-Chinese sentiment followed. The hostility forced many workers into self-run businesses, often located within their community to serve their community; these areas formed Chinatowns. Almost three decades after the Foreign Miners License Law, the Chinese Exclusion Act of 1882 was passed. Because merchants were exempt from the act, many Chinese immigrants gathered small funds to open businesses in the U.S. The two main industries that drew Chinese workers were the laundering and restaurant businesses. The U.S., while allowing Chinese merchants through, was exceedingly hostile towards them and made operating their businesses as difficult as possible. The legislative harassment that hounded the Chinese community in the late 1800s included the Sidewalk Ordinance of 1870, which prohibited merchants from using poles to carry their merchandise; the Laundry Ordinances of 1873 and 1876, which mandated high licensing fees for people who carried laundry without horse-drawn wagons; and the Cubic Air Ordinance of 1871, which jailed hundreds of Chinese residents because Chinatown was overcrowded (Miller, 2017).

Chinese restaurants grew to be extremely popular, despite the stigma against the culinary habits of Chinese people. Their appeal to non-Chinese communities caused Chinatowns to become important economic centers of the cities they were located in. While these ethnic enclaves initially served as a form of protection and a means of preserving Chinese culture, they were transformed into tourist attractions. The newfound attention was mostly directed towards the chop suey restaurants; their unprecedented success allowed Chinese food to become the first ethnic cuisine to be commercialized. The cheap, filling meals found at chop suey houses fueled more than just the hungry masses in cities: they fueled the development of the American restaurant industry. In the early 19th century, restaurants mostly existed in the form of lodging houses that provided meals at a common table on a first-come, first-served basis, and most patrons were better off eating a home-cooked meal. Thus, the standards Chinese restaurants had to beat were set low, and immigrants from a country with a culinary scene that had developed over thousands of years easily exceeded them. Chinese restaurants, with their cheap, tasty food and good service, impressed American patrons. William Ryan, a British immigrant to San Francisco, wrote after eating at the Canton Restaurant:

I once went into an eating house kept by one of these people, and was astonished at the neat arrangement and cleanliness of the place, the excellence of the table, and moderate charges. The Chinese venture was styled the "Canton Restaurant," and so thoroughly Chinese was it in its appointments and in the manner of service, … Every item that was sold, even the most trifling kind, was set down, in Chinese characters, as it was disposed of; it being the duty of one of the waiters to attend to this . . . [which] he did very cleverly and quickly.

Ryan would soon write about other Chinese restaurants in San Francisco, praising specific menu items (Liu, 2015). This new group of dishes is definitively, authentically American, and Chinese restaurants fit right into the new American ideal of a leisurely lifestyle. Eating at restaurants was the American thing to do, so groups striving to climb the social ladder followed suit.

The affordability of Chinese restaurants made them accessible to many ethnic minorities. Though it would seem like an unlikely pairing, the Jewish community and Chinese restaurants have had a close relationship since the early 1900s. The rapid increase in Chinese restaurants coincided with a large number of Jewish immigrants moving to New York City in the early 1900s. Many of those Jewish immigrants faced poverty, unemployment, and hunger in their home countries, and anti-Semitism in the U.S. The shared experiences fostered a sense of camaraderie between the two immigrant groups, where both understood and sympathized with the other, mostly unaffected by the xenophobic preconceptions of mainstream society. The lack of anti-Semitism from Chinese restaurateurs created a comfortable environment for their Jewish patrons. While this relationship might have been formed out of necessity and the two groups' proximity, it is one that celebrates immigration and community. Other marginalized groups, such as white workers, Bohemians, and African Americans, were, like the Jewish community, unperturbed by the white middle and upper classes' stigma against the Chinese community. As a result, they formed the first reliable customer base for Chinese restaurants outside of the Chinese community. Their close relationship with Chinese restaurants and chop suey could be seen in the media: chop suey made its way into the jazz genre, with "Cornet Chop Suey" one piece among many that referenced the dish in the title or lyrics.


Figure 1. Edward Hopper, Chop Suey, 1929. Oil on canvas, 32 x 28 in.

Then, chop suey’s presence expanded into the fine arts, when Edward Hopper painted Chop Suey in 1929 to reflect social changes in the workplace. In the painting, two working class women are found sitting in a chic version of a chop suey restaurant, where working class women were now welcome to eat (“Chop Suey Hopper”). Although the classic Chinese restaurant was transformed into a space with contemporary decor, far from what would be found in Chinese restaurants at the time, the painting speaks to the cultural prominence of Chinese restaurants in the early 1900s. While nonChinese validation of Chinese restaurants allowed them to rapidly expand, it also raised new questions about the shifting landscape of Chinese American foods. The restaurants located outside of Chinatown were markedly different from the ones established inside of Chinatown. Ones located outside of Chinatown were slightly more expensive and had a comparatively simpler, more Americanized menu. In contrast, the restaurants inside Chinatown depended on predominantly Chinese clientele, offering a wider repertoire of dishes as a result. This disparity raises questions of Chinese American authenticity. Are Chinese restaurants located outside of Chinese neighborhoods authentic? The answer lies somewhere in between yes and no. To provide a simple “yes” would be to ignore the diverse and sophisticated regional cuisines of Chinese and claim that Chinese American restaurants represent all of them. Neither the signature mala (numbing spicy) flavors of Sichuan nor the sweeter, delicate qingdan (light) flavors of Guangdong quite aligns with chop suey. It would also neglect the challenges restaurateurs experienced as they navigated reconstructing Chinese dishes in America– a place with different ingredients, palates, and customers. Then, a simple “no”, would deny the irrefutably Chinese roots of chop suey, chow mein, and other Chinese American dishes. “Chop suey”, or za sui, is a category of dishes from Guangdong consisting of odd ends of leftovers, stir-fried


together with sauce ("Rise of Chinese Food," ch. 3). The current usage and idea of authenticity render it an ineffective term to describe the state of a restaurant in relation to its culture. Authenticity is often tied to a person's (or restaurant's) valid relation to the identity they choose, which is why Chinese American restaurants are often dismissed as inferior. Someone who is "too American" is usually deemed invalid by their community for deviating too far from their culture, while outsiders expect foreigners to look and act a certain way, conforming to their own "valid" fantasies of ethnic identity. In this sense, authenticity is an unproductive term for criticizing the relationship between a person and their culture; criticizing inauthenticity without considering the intense pressure placed on individuals to assimilate gets us nowhere. Because questions of authenticity arise from an emphasis on cultural differences, often understood in essential terms, the concept and questioning of cultural authenticity primarily occurs in the West. For example, Chinese American restaurants located in China are well received, without questions of authenticity, because the food is simply American. The question of whether or not orange chicken is similar enough to any regional cuisine of China is not raised because there is no pressure to erase dominant Chinese cultures. The lack of erasure means there is no individual need or pressure to retain mainstream Chinese culture, allowing for creative development of cuisine. To question whether something is authentically Chinese or not, there needs to be a force claiming authenticity; if the restaurant is advertised as American, there is no issue. In the U.S., however, advertising orange chicken as American food might not reach the same success, as the idea of an American tends to be a Eurocentric one, even though "American" food is not European. To reconstruct the idea of authenticity to include Chinese American culture, and therefore cuisine, would be another step towards understanding the identity of Chinese Americans as a distinct experience. While Chinese restaurants are not representative of the diverse economic statuses and intersectional identities of the Chinese diaspora, they are often the economic lifeline for working-class Chinese families. Their cultural prominence and popularity have contributed to the survival of Chinese Americans. Though more Chinese restaurants in America are returning to traditional Chinese cuisines, Chinese American fare has a sort of permanence that can only be cultivated over centuries. This ongoing cultivation of a diverse food culture in the United States represents a site of Chinese American cultural creativity and a permanent contribution to the possibility of an authentic American cuisine.



A New Community. Library of Congress.
Chen, Yong. Chop Suey, USA: The Story of Chinese Food in America. Columbia University Press, 2014. EBSCOhost.
Chen, Yong. "The Rise of Chinese Food in the United States." Oxford Research Encyclopedia of American History, Oxford University Press. Accessed 14 Dec. 2021.
Chiang, Cecilia. Interview by Momo Chang. "Q&A with Cecilia Chiang of The Mandarin Restaurant," n.d.
"Chop Suey (1929): The Most Iconic Edward Hopper Painting Left in Private Hands." Christie's, 12 Dec. 2018.
Coe, Andrew. Chop Suey: A Cultural History of Chinese Food in the United States. Oxford University Press, 2009. EBSCOhost.
From Gold Rush to Golden State. Library of Congress.
Hopper, Edward. Chop Suey. 1929.
Immigration and Relocation in U.S. History. Library of Congress.
Lee, C. W., and W. Hoy. "Seven Steps to Fame." Chinese Digest, 6 Dec. 1935.
Liu, Haiming. From Canton Restaurant to Panda Express: A History of Chinese Food in the United States. Rutgers University Press, 2015. EBSCOhost.
Miller, G. "1885 Map Reveals Vice in San Francisco's Chinatown and Racism at City Hall." Wired, 3 June 2017.
Photograph of Manhattan's Chinatown. New York Post, 26 Feb. 2021.


The Nature of Museum Neutrality
Adriana Jimenez-Willis

Up until about six months ago, I considered museums to be a snapshot of history: unbiased, factual, and passively representative of the past. Museums promise to deliver only the solid facts of history, the objective presentation of what really happened, without opinions. When I would walk into a museum and see the exhibits, I saw objects and writing: the objects, which I considered to be facts of the past, and the writing, simply a factual description of the object. However, I never stopped to consider where the objects came from or who did the writing. I never considered who and what is represented, and who chooses to do the representing. Since I was introduced to the idea that museums are not, in fact, totally true, I have begun to think about what impacts could come from such a misunderstanding. Everything that I have seen in a museum (minus the one I have visited between six months ago and now: The Country Music Hall of Fame and Museum in Nashville, TN, which is really quite worth a visit) I have accepted unquestioningly as truth, and I have constructed my idea of the past using the museum as a primary source. How, then, have my ideas of history been influenced? How have others' been influenced? And how does the recognition of "truth" change depending on the observer's position within the institutional structures of the museum? I have chosen, then, to research the construction and perception of museums, as I want to learn how different socialized factors can affect both the establishment and curation of a museum, as well as how the ways these factors are displayed affect the experience of those consuming information from museums. In this work, I am striving to understand how interactions with a museum's content change based on an individual's race, gender, sexuality, or nationality, and what the impacts of these variances are on both the individual and society. In thinking about the creation of a museum, we must consider who oversees it: who makes the decisions, chooses the objects, donates them, writes about them, and organizes them into their places. In the case of museum curation, and in society typically, those who possess such power are privileged, white, rich, male, cisgender, and straight. Museums are deceptive in that they present themselves as objective, but they are quite subjective for multiple reasons. On the formative end, museums are cultivated by people with biases and a biased array of objects. On the receptive end, the viewer may experience a museum differently based on their individual background. Many museum critics such


as Adam Hochschild believe that “A good museum should make you start looking at the world beyond its walls with new eyes” (Hochschild 2019). Often, the biases that are prevalent in museums call to mind a question of exactly how many museums actually execute such a task in a manner that provides enriching, diverse information from multiple perspectives and encourages visitors to think critically about their exhibits and life beyond them. In the careful consideration of museums, it is important to keep in mind that every step in a museum’s construction is carried out by individuals. Museums are created by people with biases within a biased society which is reflected in the exhibits on display. Those in power tend to be the privileged, who tend to select exhibitions reflecting their privileged lives. According to the Association of Art Museum Directors, only 11% of senior leadership positions in art museums in the United States were occupied by people of color in 2015; and in 2018, the number only increased to 12% (Association of Art Museum Directors). This is much lower than the forty percent of Americans who are People of Color (US Census Bureau). The occupation of such leadership positions by POC within museums is crucial because it allows for the construction of the museum outside of the white point of view, imagined as history itself. Museums are more accessible to, funded, and operated by the privileged, and therefore perpetuate the views imposed (either intentionally or simply as a result of bias) by those contributing to their construction. Additionally, objects from underrepresented groups are uncommon, especially artworks, and if they are exhibited, they are often “explained” or “refined” within a white or colonial context. For example, within the classically French chateau-like appearance of the Royal Museum for Central Africa in Brussels, Belgium reside very few representations from Central Africans themselves. Even an area dedicated to Central Africa feels as though it is not really about Central Africa, or representative of it. Hochschild notes that, “When I first visited the museum, in 1995, the exhibits of Congo flora included a cross section of rubber vine—but not a word about the millions of Congolese who died as a result of the slave-labor system established to harvest that rubber. It was as if a museum of Jewish life in Berlin made no reference to the Holocaust.” Instead of documenting Central African culture, the museum instead housed many relics of colonial soldiers and portrayed ideas about Europeans “bringing



civilization" to Africa. After the museum reopened in 2018, the exhibits and text remained apologetic for colonialism, explaining that it is hard to display African culture from an African perspective because most artifacts come from a white perspective in some way: "Colonialism remains a very controversial period. The collections of the Royal Museum for Central Africa have been composed by Europeans; it remains a challenge, therefore, to tell colonial history from an African perspective," reads a sign in the museum (Hochschild 2019). While it is good to acknowledge such biases and warn viewers of influences, the texts still cater to a white audience. The museum chose to apologize for white mistakes and acknowledge white perspectives instead of adjusting its content to amplify and document African voices. Artifacts and artworks from marginalized groups are much harder to access and less readily available, yet museums should search harder to find and incorporate them. Instead of apologizing for misrepresentation and ignoring marginalized cultures, museums should follow through with action to accurately represent other cultures rather than simply claiming that to do so is too difficult. A racist museum claiming to represent minority groups but instead only providing an explanation for its ignorant racism is no respectable substitute for a museum that accurately documents the history of the groups it promises to represent. Hochschild's ideas can be represented concisely in a simple term: "un musée des Autres" (a museum of Others). This important distinction of the "others" is precisely why museums have failed to accurately represent and include marginalized people in mainstream museum culture. The idea is not simply that there are differences between the "others" and the creators of the museum, but rather that they differ in essence. This cognitive separation between cultures rejects the idea that culture is mixed and constantly evolving, and it presents historical groups as though they were present-day ones. The attempt to speak for and explain "others" in museums, rather than portraying the ways they speak and create for themselves, evolving their own culture in historical encounters with others and among themselves, is a tragic mistake. This separation is the most harmful misrepresentation that a museum can make: it draws lines between cultures, reinforces stereotypes, and continues a cycle of non-understanding. Another kind of false representation of marginalized groups present in many museums is the portrayal of Native Americans, Eskimos, and Africans as part of the "natural" world instead of the "civilized" one (Hochschild). White/European people are never represented in natural science exhibits in museums. This highly problematic phenomenon suggests that POC are primitive or underdeveloped versions of their counterparts of European descent. This portrayal suggests that marginalized peoples are only deserving of being represented in museums as objects of nature, rather

than as subjects of culture or as creators of fine arts. For example, in New York City’s American Museum of Natural History lies a “Hall of African Peoples” but no hall of white or European peoples. A little further south in Washington D.C.’s National Museum of African American History and Culture, a specific display called Movement: Gesture and Social Dance” describes black movement as if the exhibit was describing the behaviors of zoo animals: “Many African Americans stand, walk, dance, and communicate in gestures that set them apart. Some of these movements express the marks of blackness—liberation, creativity, improvisation, and self-determination—from the time of slavery to now. African American gestures can be quiet and illusive, or vibrant and confident.’ The examples include various “gestures”: “gestures of solidarity,” “gestures of respect,” “gestures of play,” and more. A photo of crips throwing up their gang sign sits above a photo and explanation of dapping and another of Michelle and Barack Obama fist-bumping.” (National Museum of African American History and Culture, 2019) Why are African people portrayed and treated like “natural history” but white people are not? Or rather, why are they portrayed this way within museums?1 Although these may be, at their core, “accurate” representations of black peoples, why are white people not described in the same terms? Why are marginalized peoples rarely the ones to speak—and instead, spoken about? Written plaques, too, are the most direct influence of privilege within exhibits, taking on the voice of the mostly white curators and museum leadership. For example, even the intentionedly-progressive Smithsonian Black History Museum in D.C.includes vague and mildly-put desrciptions of the horrors that black slaves were put through, such as the following diluted depicion of master-slave rape: “Intimate relationships in the Chesapeake crossed color lines. Some were consensual, others were not. Enslaved women were subjected to the sexual demands of white slave owners” (Phillips). The museum insinuates that racism is a naturallyoccurring phenomenon that has evolved and manifested in many different ways throughout the course of history. Museums tend to offer explanations, tame accounts of brutality and inhumanity, and apologetic perspectives rather than highlighting the voices of the marginalized and letting those individuals be the subjects of their own accounts. The impact of this is that it fails to fully denounce racism and colonialist narratives, and implies that these things will happen and the best thing to do is simply apologize

1 Interestingly enough, even though Natural History museums may not exhibit any kind of people, many of the collections located within museums were collected by enslaved peoples of color (Davis). It is important that museums recognize the colonialism that made their collections possible, and what kind of profit was gained from the work of enslaved laborers.


for such natural occurrences and attempt not to make the exact same mistake in the future, when, in reality, similar mistakes continue to be repeated. These displays of racism and other forms of discrimination have simply evolved into what is regarded as "acceptable" today, but they have not disappeared. In addition to repeating discrimination in various forms, the privileged individuals with the power to create exhibits also often do not put special effort into amplifying the voices of those they are representing within the museum. The majority of leadership positions within museums are held by white people, and there has been little change of late in the proportions of leadership positions by race: from 2015 to 2018, the share of leadership positions within art museums held by people of color increased by only 1% (Association of Art Museum Directors). These leadership statistics influence the exhibits and their contents, exemplified by the fact that the artists in 18 major US art museums are 85% white and 87% male (Bishara). A significant issue with how many artworks belonging to women or people of color are displayed is that these works are portrayed as a specific type of art categorized by the artist's femininity or nonwhiteness, rather than by an artistic categorization. Of course, much female and POC artwork comes from a retaliation against white patriarchal societal standards, as a response to the oppression suffered from being marginalized.

Non-reactionary artworks from marginalized groups are less common in "classical" periods of artwork dominated by white men because POC and women were, in such periods, not viewed as valid artists and were therefore excluded from access to practice in such spheres. The artwork, then, is still dependent on white male society, and is therefore art in the form of a response to it; but perhaps there is no way to avoid this conflict, given that society is, in fact, white-dominant and patriarchal. There are no "white art" histories, but there are Renaissance, classical, abstract, and realist art histories. Yet there are "black art" histories and "female art" exhibits. The focus of artwork by marginalized groups, then, tends to be on their struggle (crucial, nonetheless) but not also on the artists as subjects of creativity and independence. An interesting concept, perhaps the reverse of people being made the objects of their own experiences (rather than subjects), is the idea that objects can also function as subjects of history, with active perspectives, opinions, and independent influence. The objects and artifacts found within a museum come from pasts of opinion and ideals, and therefore possess said opinions themselves. They can function as proprietors of the ideas of the historical period from which they come; they reflect the beliefs and biases held by their creators and can therefore relay those beliefs onto the viewer examining the object. Objects have the power of remaining unchanged; even though they may age physically, their essence remains the same as time progresses.

This is described by Kaiti Hannah and Elizabeth Scott, who observe: "Advertisements from the 19th century, much like today, tend to represent one specific, idealized body type, rather than the actual diversity of bodies from the time period" (Hannah & Scott 2020). Not only do objects sometimes carry influential pasts with them, but they may also be biased in their selection. For example, in terms of exhibits of women's clothing and what is denoted as "museum-worthy," Hannah and Scott note that "Ideas of what people wore in the past are thus skewed towards more formal items of clothing" (Hannah & Scott 2020). It is natural to want to display the least "average" items. On a more serious scale than "skewed" clothing, skewed objects of non-white and non-male cultures can be unrepresentative of "how it really was" most of the time, imposing further incorrect perceptions of the past and of other cultures onto museum viewers. Museums also group "other" cultures into one; for example, in the McClung Museum of Natural History and Culture (note: natural history), many different Native Alaskan cultures are lumped into "one monolithic group (i.e. "Eskimos")." Beyond encouraging intercultural misunderstandings, misrepresentations of one's own culture within museums can be very uncomfortable and unsettling to people who feel that their depiction is whitewashed or simply inaccurate. Objects in museums are not simply presenters of the past in which they were created; they are excavated, analyzed, restored, collected, researched, and displayed by people, so there are opportunities for bias to enter the process at every step along the way. The biases present within the creation and construction of museums are thus reflected within the exhibits. Such appearances of bias are experienced differently by individuals with different backgrounds, whether the distinctions be of race, gender, sexuality, class, or any other social factor. In a study that I conducted by posting a Google Form on three different social media platforms (Snapchat, Instagram, and Facebook), I asked respondents to anonymously answer questions about their experiences with museums and their social backgrounds. The respondent population was mostly high school students, with some college students. There was a general trend of increased enjoyment, trust, and accurate representation amongst more historically privileged groups of respondents. For example, only 7.69% of black respondents answered "Yes" to the prompt "Do you trust museums as a source of unbiased information?", while 47.37% of Asian respondents and 46.57% of white respondents answered "Yes." There were not sufficient responses from other races to come to a statistically sound conclusion about a general trend within those groups. It is also important to note that the sample population was a convenience sample taken from those who are in contact with me on social media, mostly from my home high school



and NCSSM. Due to these factors, it is possible that forms of self-selection bias are present, as well as the potential that the populations of my past and current high schools are not representative of an American or universal population.2 However, since there is a substantial difference between the proportions of respondents by group, it is fair to assume a similar trend in the general population. Interestingly, 82.19% of white respondents answered that they enjoyed visiting museums, while 78.95% of Asian respondents and 76.92% of black respondents agreed. While my study found rates of enjoyment of museums to be similar across races, it found trust in museums to display accurate information to vary largely across races. This is most likely because privileged (generally white) people within a museum tend to be unaware of the museum's biases, as it fits the way that they have experienced life. Those who have not been privileged throughout their lives are acutely aware of any misrepresentations within a museum, which can be evidenced by the significant drop in black trust in museums compared to Asian and white rates of trust. Another factor contributing to levels of trust and curiosity towards exhibitions within museums is the matter of artifact and object authenticity. Objects are proven to inspire more insight, questions, and interest if they are displayed as "authentic" (Schwan & Dutz). Two researchers, Stephan Schwan and Silke Dutz, "employed a questionnaire study in nine German museums; three museums of history of science and technology, three natural history museums, and three cultural history museums." A survey about the authenticity of objects and how they affect viewers was distributed, and results showed that (on a relevance scale of 1-5, 1 being lowest and 5 being highest) authentic objects most commonly "bring a topic closer to me" (averaging 4.32), "help me comprehend something" (averaging 4.25), "make me curious" (averaging 4.23), and "make me wonder" (averaging 4.04). As mentioned above, many objects within museums are more representative of privileged populations. If the majority of "authentic" objects in a museum are representative of the privileged, then more interest will be taken in those objects, eroding the focus on objects representative of already marginalized populations. Further, 57.1% of respondents in my study noted that they have seen misrepresentations of culture within museums, further evidencing the notion that museums are not doing everything they can to correct wrongs regarding displays of marginalized people and how "other" cultures are displayed.

2 Self-selection bias is defined as "a bias that is introduced into a research project when participants choose whether or not to participate in the project, and the group that chooses to participate is not equivalent (in terms of the research criteria) to the group that opts out" (Statistics How To).

Figure 1. Survey results showing that 42.9% of people surveyed agreed that they had witnessed their culture misrepresented within a museum.

In addition, it is important to think of the museum's culture of colonialism not just in biased exhibitions, artifacts, and displays, but also in unethical collection practices. Museums have a reputation for housing stolen or illegally traded artifacts, perpetuating a cycle of colonialist practice even as they present seemingly progressive exhibits. An operation by Homeland Security called "Hidden Idol" sought to uncover illegally traded artifacts smuggled across four continents by Subhash Kapoor, estimated to have returned a profit of at least $100 million (Tharoor). Many museums defend such practices by claiming the large collection of artifacts to be an enriching universal experience for visitors: "Colonialism is alive and well in the art world," [Tess] Davis [a lawyer with the Antiquities Coalition] said. "So-called leaders in the field still justify retaining plunder in order to fill their 'universal museums' where patrons can view encyclopaedic collections from all over the world. A noble idea, in theory, but in practice, a western luxury. The citizens of New York, London, and Paris may benefit, but those of Phnom Penh? Never." (Tharoor) The justification and continuation of unethical methodologies of artifact collection are precisely the kinds of misconduct that perpetuate discrimination in today's culture. Colonialist museum biases have even affected animal exhibits. Small animals, female animals (only 29% of mammals and 34% of birds are female!), "gross" animals, such as ones that have to be preserved in jars and cannot be taxidermied, and animals that were not part of territorial colonialist collection motives are not displayed in natural history museums nearly as much as their counterparts, according to Smithsonian.com (Ashby). This phenomenon shows that colonialism has eroded the neutrality of museums at their core, and it is not something that can be easily fixed; rather, the system must be reconstructed. Even when specific types of discrimination are not repeated exactly, marginalization evolves with culture if it is not actively fought against, and fighting it takes sacrifice. Museums that give up illicit objects may lose millions of dollars, while museums that refuse to partake in illegal trades have to spend much more money legally sourcing artifacts, or may lose the opportunity to buy



artifacts belonging to other cultures at all. Another problematic habit of museums is to make distinctions between cultures and to draw lines between the histories (and therefore, the present) of segregated groups, instead of focusing on the more realistic mixture of inseparable cultures, constantly changing and influencing each other. Often, the history of such a culture is not reflective of how it has evolved, and does damage to current perceptions of it. Regarding the Museum of Chinese in America, "Many residents believe that to preserve the story of Chinatown, it makes more sense to safeguard the actual neighborhood than a historical record of it — and to not do so may endanger Chinatown's viability" (Freytas-Tamura). The museum has attracted attention away from small businesses in Chinatown and contributed to the closure of several such establishments. The struggle arises over a question: which is more representative of the Chinese in America, the museum of their history or the neighborhood's current vitality? Museums have long been respected for their neutrality, their "basis in facts," and their vast historical collections. Upon careful examination of this neutrality and basis in facts, however, it becomes clear to the observer that the prerequisites for such factors are unclear and, if one attempts to name and define them, fallacious. The whole premise of neutrality falls apart once it is realized that humans cannot be unbiased and neutral, that a museum cannot be constructed without the influence of such bias, and that "fact" does not exist in any pure form: historical facts are impossible to separate from individual, human, biased perspectives and from the living past, present, and future of cultures. Striving for objective truth and neutrality has forced museums to draw lines that cannot be drawn objectively and do not exist. The museum's depiction enforces such lines upon cultures outside the museum because observers view those hastily drawn distinctions as complete truth, which then becomes the "truth" of the individual's understanding. This pattern continues: underprivileged groups have been ignored, discriminated against, and actively pushed out of the focus of social spheres, which is reflected in the fact that they are less represented within museums, due to the lack of minority leaders, the lack of access to artifacts, and culturally inaccurate "representative" exhibits. Additionally, there is a disparity between the way museums are perceived and the way that they spread information. Those in non-marginalized groups, like white people, tend to be accurately and thoroughly represented in museums, and therefore tend to trust any and all information presented in them. Since POC are often underrepresented or misrepresented within museums, they tend not to trust museums as sources of information and feel frustration at the white bias obvious within them. As I first began to pry into initial concerns about the neutrality and biases of museums, more and more questions began to flow out: At what age does one learn that museums are not as neutral as they seem? At what age does one stop


trusting museums so readily to present accurate, unbiased information? How much does the enjoyment of a museum depend on an individual's race, gender, sexuality, or social class? How do the higher levels of "trust" in museums amongst white populations perpetuate the spread of cultural misinformation regarding POC groups? What effects does the misinformation spread by museums have beyond the sphere of museums, on understandings of and existing disparities between cultures? What "real world" (financial, emotional, health-related) impacts does such a phenomenon have? I chose to study museums because I have always viewed them objectively, as nothing more than a sort of container for bits and pieces of history, nature, or art. The idea of viewing museums not just as carriers of presentations but as presenters themselves is interesting to me because of the complexities that reside within an institution presenting itself as a mere observer and reciter of history when it is itself an actor. I hope my exploration of the biases that museums attempt to hide helps me and others to look for similar nuances in other societal institutions and to consider the implications these nuances have on our worldviews. Museums have historically excluded "other" cultures from their exhibitions: most museums created in the 19th and 20th centuries were meant to perpetuate a colonialist narrative and either completely disregard non-British colonial culture as nonexistent, or portray these cultures and their objects as "unmodern relics of a time gone by" (Hannah & Scott). Because of this, many "other" cultures are now characterized and displayed as separate, monolithic exhibitions, and are titled in consideration of their "otherness." This problematic trend continues to enforce the narrative that there are "normal" exhibits of culture and people, and then others. The segregation of marginalized groups by their otherness is precisely what perpetuates misunderstanding and discrimination within present-day culture outside of museums. Museums imitate life, yes, but life also imitates museums. People take what they observe from museums and incorporate it into their opinions. Often, they leave thinking that they have begun to understand something previously unfamiliar to them, trusting museums to have provided a "fair" representation of history, but often, they only understand less. Misunderstanding is worse than unfamiliarity, because it is much easier to paint a picture on a blank slate than to attempt to correct mistakes over an incorrect picture. Museums have the appearance of a trustworthy source, so they must attempt to present representative and (as much as they can be, being curated by people who cannot help but be inherently biased) objective collections. However, they often have not and do not put such effort into creating exhibitions worthy of being trusted. For the museum to deserve the trust of its audience, it must be accurate and anti-discriminatory. Museums must begin to do this



by appointing more leaders from marginalized groups, displaying more collections from individuals representing marginalized groups, and making sure any textual displays allow the silenced to speak instead of speaking for them. Museums have historically been respected as a beacon of truth and neutrality, but to keep such a reputation, the museum must make more of an effort to be as close to the truth as possible. Museums must begin to blur the lines between cultures and present them as evolving and intersecting, because the truth is, cultures were never separate and only continue to intersect with one another. We fail to understand each other not through avoidance of learning or lack of curiosity surrounding our differences, but by not recognizing our similarities.

Ashby, Jack. "The Hidden Biases That Shape Natural History Museums." Smithsonian.com, Smithsonian Institution, 20 Dec. 2017.
Bishara, Hakim. "Artists in 18 Major US Museums Are 85% White and 87% Male, Study Says." Hyperallergic, 3 June 2019.
Davis, Josh. "Are Natural History Museums Inherently Racist?" Natural History Museum, 16 July 2019.
Freytas-Tamura, Kimiko de. "Why Some People in Chinatown Oppose a Museum Dedicated to Their Culture." The New York Times, 19 Aug. 2021.
Hannah, Kaiti, and Elizabeth Scott. "Preservation Bias in Museums: Left-Handers of the Past and Other Collection Conundrums." 17 Aug. 2020.
Hochschild, Adam. "The Fight to Decolonize the Museum." The Atlantic, Atlantic Media Company, 15 Dec. 2019.
Johnson, Christine. "Not Just Objects: Alaska Native Material Culture at the McClung Museum of Natural History and Culture." TRACE, Apr. 2015.
"Latest Art Museum Staff Demographic Survey Shows Number of African American Curators and Women in Leadership Roles Increased." Association of Art Museum Directors.
Phillips, Maya, and Vinson Cunningham. "The Smithsonian's Black-History Museum Will Always Be a Failure and a Success." 24 Oct. 2019.
"Quick Facts." United States Census Bureau, 1 July 2019.
Schwan, Stephan, and Silke Dutz. Apr. 2020.
Tharoor, Kanishk. "Museums and Looted Art: The Ethical Dilemma of Preserving World Cultures." The Guardian, Guardian News and Media, 29 June 2015.
Wood, Catherine M. "Visitor Trust When Museums Are Not Neutral." UW Libraries, 2018.


The Global Implications of Gender-Based Violence on Politics and Society: An Analysis
Sydney Mason

Every woman has walked home alone and heard the echo of footsteps following behind her; every woman knows the feeling of fear as she tries to pretend that she doesn't know someone is behind her. She walks faster, just a little, hoping that whoever is behind her doesn't notice the increase in her speed. But they match it. She chances a look behind and sees a shadowy figure, just far enough behind that she can't make out their features. Their speed increases, and this time she is the one who matches it, praying that she gets home safely. She reaches her door, turns the handle, and darts in, locking it behind her, taking deep breaths as she waits for her follower to leave. Most of the time, this fearful experience comes to nothing. A woman gets to her destination safely or is able to contact someone to help her, but that fear does not exist without reason, as seen in the murder of Sarah Everard while she was walking home at night in London. There are millions of incidents of violence similar to her murder that occurred solely because the victim happened to identify as a woman, and they happen no matter what a woman is wearing or where she is, despite what popular culture may have you believe. These instances are colloquially referred to as gender-based violence. Gender-based violence is a result of social and structural circumstances. This makes it a socio-political issue of utmost importance to resolve for the wellbeing of our society. Gender-based violence is a threat that every woman lives with. Everyone knows someone that has experienced it, although they might not realize it at the time. Gender-based violence is a pervasive disease within the world's society, and to eradicate it we must first ask ourselves what it is and why it exists so persistently throughout the world. Gender-based violence can be defined as violence that occurs solely on the basis of gender and includes acts such as murder, abuse, rape, and other forms of sexual assault. The World Bank estimates that 35% of women worldwide have experienced "physical and/or sexual intimate partner violence or non-partner sexual violence" within their lifetimes. This means that at least one-third of the women in our lives have been subjected to gender-based violence. Mull that number over, and count the number of women in your social circles. Do you select every first woman or every third to be the one who has experienced gender-based

violence? Your father’s mother, your mother’s mother, or your mother herself? When you consider the implications of gender-based violence in your immediate sphere, amongst your relatives and friends, its impact becomes all the more real. Many times, we aren’t aware that people around us have experienced it because gender-based violence is a deeply personal pain. Considering the social stigmas around reporting incidents of gender-based violence, communicating about the experience is taboo, transforming the personal into the unbearably private. However, when a woman ignores the stereotype and does discuss the incident of their assault or injury, they subvert the silence that gender-based violence forces upon its victims. This subversion of silence is why I originally asked the question, why is gender-based violence so prevalent in our society? This question does not come from my own experiences, but from listening to someone else, from hearing her story of her romantic partner forcing her to take a step she wasn’t ready for, from seeing her retreat into her shell in the wake of the assault, although at the time I didn’t know why. The violation she experienced changed her, but because of the shame she felt from the incident, she didn’t alert anyone to the situation at the time, only telling me nearly a year later. Unfortunately, she wasn’t alone in her silence. According to a report published in the American Journal of Epidemiology, the actual number of incidents of sexual violence is approximately 14 times higher than what data suggests. (Palermo 2014) Based on these conclusions, a question must be asked: why is it that individuals, predominantly men, feel secure in their ability to assault women without consequences? There must be no perceived repercussions to account for the high rates of sexual assault. We are told that members of our society are conditioned to operate within the theorized Panopticon, the prison-like social structure in which someone may be watching at all times, and anyone may report you for an infraction. We internalize this sense of surveillance, becoming our own watchers. This conditioning keeps most from committing criminal acts— but gender-based violence continues to occur at astounding rates. Therefore, any study of gender-based violence must include a study of the societal norms that make



it acceptable (exempt, it would seem, from conscience) and, by extension, the legal consequences of gender-based violence. The law has an integral role in determining what is socially acceptable by assigning real consequences to actions, and if gender-based violence is occurring at rates that indicate that it is permissible in society, we must assume either that there are no laws to discourage this behavior and render it socially unacceptable, or that existing laws are not properly enforced. The Panopticon is blind, it seems, to violence against women. There is neither prison nor penitentiary for its offenders. We cannot discuss the law in this regard without examining gender-based violence's influence on politics and politics' impact on gender-based violence. Gender-based violence, in many ways, is a political and social form of violence. It is a tool used by (primarily) male-identifying persons to assert dominance over the victim of the violence, usually a woman. Using gender-based violence as a tool of dominance reinforces social structures that place women below men, a structure reflected and reproduced within our political systems. In my research, I wish to move beyond the prevalence of gender-based violence to the impact that these incidents have on our society, primarily in the social and political spheres. These two spheres interact in a myriad of ways beyond just this one issue of gender-based violence, because social representation and status decide the political influence that a person has within their society. Since women are treated as socially inferior to men, their political issues are deemed insignificant. This accounts for the lack of representation of women within politics as well as the lack of representation of so-called "women's issues" within legislation and political discussions. For example, access to universal education and healthcare, ideas that women are more likely to support (Volden 2010), are viewed as expenses that don't make sense despite abundant evidence to the contrary, and legislation continues to support gender and sex discrimination, such as luxury taxes on menstrual products, assuming that basic human rights for women are unnecessary. This is a reflection of the social structures of inferiority that gender-based violence reinforces and perpetuates. Gender-based violence also accounts for the underrepresentation of women in politics and the underrepresentation of their issues in another way. Gender-based violence, including the threat of it, has continually forced women to leave their political offices, as well as to abandon electoral campaigns (Bardall 2011). As such terrorism discourages women from office, legislation against gender-based violence that would be drafted and supported by women becomes less prevalent. This creates a disturbing cycle of gender-based violence and politics, for the more that gender-based violence is used to limit women's political expression, the more that legislation supporting "women's issues," including consequences for acts of gender-based violence, will be deemed less important.

This dominance tool therefore also has a direct impact on women in politics beyond its role in reinforcing harmful social structures. The impact of and response to gender-based violence differs according to socio-economic status, both of countries and of individuals. Acts of gender-based violence against white women are given greater media coverage than gender-based violence against women of color, despite the difference in prevalence between the two groups: black women experience gender-based violence at a rate 35% higher than their white counterparts, and Native American women experience gender-based violence at double the rate of other racial groups (YWCA Fact Sheet). Because colonial legacies of racism persist in social structures reinforced by legislation, women of color are viewed legally and socially as even more inferior than white women; lacking such restraints, the dominance that white men seek to reinforce is increased. This is not limited to white-majority societies, for in many nations across the globe women who belong to a minority racial or ethnic group are targeted based on their perceived lack of power relative to the perpetrator of the violence. This same logic applies to women of lower economic status, who also are seen to have less power in comparison to persons with more money and, by extension, with greater access to resources. Therefore, in order to analyze gender-based violence, you must look at it through a lens of the racial and economic determinants of social and political power which erase it. When thinking about the causes of gender-based violence, one must first consider the social circumstances that produce the conditions for gender-based violence to occur on a large scale. As a product of power imbalances between women and men, gender-based violence seeks to enforce those power imbalances further by giving the perpetrator a feeling of dominance over his victim. Therefore, if gender-based violence is produced by power imbalances between women and men, there must be social structures that produce those power imbalances and thereby promote gender-based violence. In most cultures, men have assumed the role of financial provider, whereas women have been the caretakers of the children. This is something that is slowly changing, though not quickly enough. In many households, the man often provides most of the income, given the gendered inequality of wages; as such, he exerts financial dominance over his wife or partner. This automatically subordinates a woman to a man economically, and by shifting the frame to women's economic place within their larger society, we observe that women as a whole are situated below men economically as well. Within societies that value personal capital over other factors in determining a person's power and social status, this situation places them below men socially, combining the two into socio-economic inferiority. Therefore, when considering how power imbalances and the social dominance of men over women cause and exacerbate



gender-based violence, we can point to continued economic disparities between women and men as a further cause of gender-based violence: the forced dependence of women upon men is reinforced by violence. It is also important to recognize the different ways that gender-based violence is addressed locally, nationally, and internationally based on economic status, race, ethnicity, and a myriad of other factors. One study discussed how the media attention given to female victims of crimes varied greatly based on their race and ethnicity. Latina and Black women received much less media attention than White women who had experienced similar crimes, and were also far more likely to be described as a "bad person" engaging in "risk-taking behavior" or being in an "unsafe environment." 73.2% of Latina and Black women who had been victims of crimes were described as being in an unsafe environment, in comparison to 33.3% of White women (Slakoff 2010). By using terms such as these to qualify Latina and Black women, the media can successfully spin a story that has its audience believing that if these women did not deserve to be the victims of the crime committed against them, they were responsible for it by putting themselves in a bad situation. This logic, perpetuated by the media, slowly makes crimes against women, specifically crimes against women of color, acceptable and part of the fabric of our everyday life. When crimes against women escalate into violence against women, it is shifted into that same narrative and only given significant media attention if the woman is white and affluent. This was apparent in the case of Sarah Everard, a white woman who, while walking home at night in London, was murdered by a male police officer. The incident garnered international attention, with world leaders and governments decrying the incident and local news outlets wondering how this could have happened, as "we can all acknowledge that the abduction and murder of a woman like this is rare" (Bullimore 2021). At the same time, however, femicide rates in Mexico were spiking and indigenous women in the United States continued to go missing at alarming rates, all indicating that the abduction and murder of a woman is not at all rare. Well, perhaps it was. When you dissect that statement further, "a woman like this," it could also be describing the woman herself: a woman as such, which is to say, a solidly middle-class white woman in an affluent area in a western country. By that logic, the statement could be accurate. However, it is important to acknowledge that western countries and affluent areas are not somehow better at preventing incidents of gender-based violence. Far from it. But those incidents are given much more media attention, so that attacking a white woman in an affluent area has more repercussions: she is more likely to be given media attention, which puts more pressure on law enforcement to find who is responsible for the act of gender-based violence.

To recall the Panopticon as a lens through which to view gender-based violence, there is more scrutiny over violent acts against white women, so potential perpetrators are less likely to commit a violent act against a white woman than against a woman of color, specifically in non-Western countries, because there is not incessant media attention. Focus will always be directed more towards the White woman who was murdered in London than towards the dozens of Latina women murdered in Mexico. Therefore, when considering some of the social causes of gender-based violence, we can point towards media, because its focus or framing implies that gender-based violence is acceptable against women of color; it is acceptable because they are invisible, or visible only as figures of blame. This also means that law enforcement isn't pressured to look into cases involving gender-based violence against women of color. The more women of color who are hurt without media attention, the more that potential perpetrators feel they can get away with acts of violence against women of color, on and on, with no end in sight. Social causes of gender-based violence are structured by the legal and political spheres. In my earlier discussion of the social aspects of gender-based violence, my primary focuses were on the media and the economy as tools for producing gender-based violence, either through producing inequities that provided the opportunity for gender-based violence to fester, or by creating a narrative that allowed incidents of gender-based violence to slip through the cracks unnoticed. Both of these factor into the structural causes of gender-based violence as well, albeit in different ways. The primary way that structural facets of our world cause gender-based violence is that, legally and politically, women are not protected and are in fact discriminated against (Solotaroff 2014). Although efforts have been made over the years to expand protections for women, such as the attempted Equal Rights Amendment in the United States, overall, limited protections are in place for women. Furthermore, even basic legal systems such as the courts fail women regularly over issues of gender-based violence, choosing to believe the perpetrator over the victim in many instances, despite evidence to the contrary. The fact that our laws discriminate in such a manner towards women and that our legal systems impede the reporting of acts of gender-based violence can be attributed to some major factors, primarily representation in politics. According to data compiled by the United Nations, in all but four countries with legislative bodies, women were in the minority of elected members, and in nineteen more countries women's representation in government had reached or surpassed 40%. Therefore, out of 195 countries in the world, only 23 have at least 40% representation by women in legislative bodies, coming out to roughly 12%. With reduced representation in government, women's issues are not as well represented, because women are statistically more likely to support



issues regarding education, healthcare, and other social issues, such as gender-based violence, which, as previously determined, is both a social and structural issue. So when issues like abortion come up, and legislation is drafted to limit women's choice in the matter of their reproductive abilities, including reproduction that can be a result of gender-based violence, there are few women left to stand up for their female counterparts everywhere, because gender-based violence has forced women to step back from their decision-making positions. Therefore, reduced representation of women is at least partially responsible for the lack of laws and legislation that actively protects women and prevents their discrimination. The lack of laws that protect women, external or, as seen earlier through our Panopticon lens, internalized, serves to demonstrate to potential perpetrators that it is acceptable to hurt women because there are no legal consequences. The implications of gender-based violence, both structural and social, are far-reaching. Gender-based violence's effects on the personal ripple out quickly into the communal and national spheres, especially when it occurs as often as it does within our world. When considering the effects of gender-based violence on the social aspects of our world, we can understand it as a cycle. As much as the media causes gender-based violence, and as much as inequalities between men and women create the circumstances that allow gender-based violence to occur, gender-based violence exacerbates these circumstances as well. Gender-based violence first and foremost exacerbates existing inequalities between men and women by the method in which it works. Gender-based violence is used to exert dominance over a woman, traditionally by a man, and when it occurs it has a ripple effect that causes women to live in fear of the men around them, putting themselves down and making themselves smaller because they are fearful. Therefore, the more that women are forcibly kept in a position of inferiority to their male counterparts, the more that they will fear to speak up about it and to strike back against the perpetrators of gender-based violence. This in turn allows gender-based violence to occur more and more as opposition lessens. This is only one of many vicious cycles of gender-based violence. Structurally, the impacts of gender-based violence are less obvious, but in many ways they are far more critical. When discussing the social impacts of gender-based violence, we often refer to the personal and emotional impacts of gender-based violence on a population, specifically a population of women. We also may discuss how gender-based violence can infringe upon the rights of women by creating spaces that become male-only because women fear becoming victims of gender-based violence. Gender-based violence against women causes them to live in fear, but when gender-based violence is targeted towards women in political offices, or women seeking political office, the impact shifts from

being localized to a group of women, or women as a whole, and instead spreads to all the persons within the woman's constituency. When talking, then, about gender-based violence and its impacts on women in politics, it is not a question only of its impact on her, but of its impact on those around her and on her constituency as well. Gender-based violence has a profound effect on politics, and when directed against a woman holding political office it is by definition undemocratic, because the acts of gender-based violence are committed with the intention of disrupting the democratic process. These acts are done in order to create change within political processes, to influence outcomes of votes within legislative branches, or to remove women from the equation entirely. When women are removed from positions of power in which they can influence legislation and decisions that impact their constituency, the issues that they support, such as healthcare, legal protections for women, childcare, and education, are all neglected. These are often colloquially known as "women's issues" because women are far more likely to support them. Therefore, whenever a woman is removed from office or unduly influenced through the effects of gender-based violence, it influences the condition of those issues and the legislation that is passed to either improve or worsen the situation. Women are an integral part of our decision-making process, and removing them only leads to the worsening of our social situation as a whole, because when social programs flourish, we all do better. Social programs such as these are also able to help women in situations where they are experiencing gender-based violence. An educated woman will know how to find support, will be better economically situated to remove herself from a situation in which she is experiencing gender-based violence, and can afford a lawyer to make use of legal protections against domestic violence. A woman who is physically abused can turn to her healthcare system in order to recover, and can rely on childcare while she heals. However, if these institutions are unavailable or in disrepair, these things are not possible. When introducing social resolutions and methods to reduce incidents of gender-based violence, it is important to realize that there are two different levels at which gender-based violence occurs and operates. Part of dismantling the structures that perpetuate the cycle of gender-based violence is providing access to resources that prevent gender-based violence not only in the public sphere but in the private sphere as well. In cases where a woman is raped or murdered in a public space by someone she does not know or with whom she has a limited acquaintance, it can be much easier to prosecute the case and bring the perpetrator to justice. However, when these incidents occur in a private sphere, in the home of a woman or with someone she knows well, such as an intimate partner, it becomes infinitely more complicated. In many cases of sexual assault, one of the automatic defenses of the perpetrator's


183

actions is that there was implied consent and that the act was agreed upon by all participating parties. This defense is aided by the fact that, since the act occurred in a private space, there are no witnesses. Perpetrators of gender-based violence in private spheres have the ability to mask their actions and to create circumstances within that space that prevent their victims from speaking out about their abuse without raising suspicion. To successfully provide assistance to victims within the private sphere as well as the public sphere is a daunting task. The home is understood as a private space free from outside judgment or scrutiny, and as such, providing proof of incidents of gender-based violence is notoriously difficult. However, by providing reporting resources to victims of gender-based violence in their homes, the divide between the private and the public spheres can be bridged and aid can be given to all those experiencing gender-based violence, no matter where the violence occurs. This aid cannot be superficial: it must go beyond simple resources that define what gender-based violence in the home is, and extend to reporting hotlines and law enforcement that chooses to believe the victim over the perpetrator in those cases. After discussing immediate resources to help those impacted by gender-based violence, it is important also to speak of tools that will help reduce gender-based violence as a whole in the long run. Among social tools, the first and foremost is education. The impact of education in the fight against gender-based violence cannot be overstated. Education is one of the greatest tools we possess to decrease the prevalence of gender-based violence (Gadkar-Wilcox 2012); however, it works on two distinct levels. First, we must educate the male-identifying members of our society about gender equality. To educate men about gender equality is to dismantle the harmful social structures which sustain the ideas of dominance that have been at the crux of gender-based violence for centuries. Second, for the education of men to be successful, women must be educated as well, and alongside them. Education must occur together so that women are aware of their own rights and are conscious that men are aware of those rights as well. By educating both men and women about gender equality, especially in rural or lower-income areas where information about gender equality is less available (Gadkar-Wilcox 2021), gender-based violence can be reduced, as the social structures that imply male dominance over female inferiority will be slowly dismantled (Timothy 2022). The conclusion that education can reduce the prevalence of gender-based violence is drawn from other research (Gadkar-Wilcox 2012) which reveals that gender-based violence is more common in some areas because of a lack of easily accessible educational resources. To support the claim that education is effective at reducing incidents of gender-based violence, further direct research needs to be done. Should these findings support that claim, I would propose that education surrounding gender equality be
seriously considered as public policy for nations across the world, and possibly adopted by international organizations such as the United Nations for its member states. Legislation determines what is deemed socially acceptable by assigning real-world consequences to actions deemed illegal. Laws can reinforce harmful social structures by using discriminatory language or by allowing discriminatory behavior to occur. However, they can also be forces of good. By assigning legal consequences for violent actions, they can prevent a great deal of wrongdoing. Therefore, it is necessary that laws be utilized in the fight against gender-based violence. Some countries are already taking the step of creating laws that penalize gender-based violence with jail time and fines; however, their efficacy has been brought into question. A primary example is Bolivia. In 2012, it signed into law "Ley Contra El Acoso y Violencia Política Hacia Las Mujeres," or the "Law Against Harassment and Political Violence Against Women." The law states that political violence and harassment against women are henceforth illegal and that any person found culpable of political gender-based violence or harassment will be sentenced to jail time. Although the law made waves at the time as one of the more progressive laws surrounding political gender-based violence, it has since fallen short of expectations. The law, although bound in Bolivia's legal code, has seen only sporadic enforcement, and violent acts against women in politics remain rampant. Many citizens of Bolivia and of other Latin American countries with similar laws have protested the lack of enforcement, but their protests have fallen on deaf ears. Political violence against women continues to occur despite its illegality, which raises the question of why these laws are ineffective. The primary issue with the efficacy of these laws is their lack of enforcement, which is itself derived from the social mores surrounding gender-based violence. Of these two problems, enforcement is the easier to change. Thus, all laws that penalize gender-based violence must be enforced, not only so that those who disobey the law are brought to justice, but also to show past, present, and future victims that the government takes this seriously. Enforcement also means that victims of gender-based violence can report incidents without fear of reprisal from their perpetrators or, in some cases, from the very people to whom they report the incident. Although this is the easier of the two issues to fix, that does not mean that implementation is easy. In a paper published in the University of Pennsylvania's law journal, one of the major issues identified with the implementation of legislation surrounding domestic violence in India was the laws themselves (Gadkar-Wilcox 2012). They were not phrased in a way that made enforcement easy, and so there were dozens of loopholes that perpetrators of domestic violence could use to avoid prosecution. These loopholes also allowed members of law enforcement to avoid enforcing laws that many of them did not support. In fact, some court decisions stated that "cultural norms could be used as mitigating
factors, rather than penalty-enhancing ones," superseding the idea that legislation could have dominance over social norms and creating precedent for social will to overcome legal decisions. Changing society's ideas about gender-based violence is therefore a much more arduous task. In order to change the social mores that make gender-based violence acceptable, we first need to engage in efforts to change the media and to provide education about gender-based violence, creating the change socially. By understanding that legislation depends not only on enforcement but also on social ideas, we can fully understand the connections between social and structural solutions to issues such as gender-based violence, and use the two not separately but in tandem to solve the great issues of our time. Gender-based violence is a threat that every woman lives with. It lurks in the dark corners of alleys and in the spaces between women and their partners at home. Over the centuries, it has been accepted as a simple fact of life. However, it is not a threat that needs to remain. Based on the conclusions drawn in this paper, work can be done to reduce the risk of gender-based violence for women, and work has already been done using some key tools, primarily education and legislation. All that needs to be done to reduce the risk that gender-based violence presents to women is to apply these tools in nations across the world. Although this sounds simple, it is not. In order to reduce the risk of gender-based violence to women globally, there must be a complete overhaul both in the ways we think about women's status and in the ways women's status is protected legally. Socially, the causes of gender-based violence are many, but some of the primary ones are continued power imbalances between women and men and media representations of gender-based violence that make it acceptable for such violence to occur. As gender-based violence is in essence an act that seeks to exert dominance over a woman, any social structures that seek to keep women subservient to men aid in producing the conditions necessary for gender-based violence. These conditions force women into diminished spheres of life, allowing men greater dominance over social space. By committing acts of gender-based violence, men reinforce gender-based inequalities, making gender-based violence and gender inequality a cycle that perpetuates itself until outside strategies or tools are used to break it. Education is the major tool that can be used to break the cycle of gender-based violence. Education allows women and girls to acquire greater economic and social power, and to raise themselves and their families out of situations that the media would define as "unsafe" and, as such, ignore. It also aids women by allowing them to occupy educational spaces in which gender-based violence has been known to occur (Thelwell 2020), reclaiming within these spaces their power and their rights from men, while also working to
reduce gender inequality. Gender-based violence affects all sectors of our lives and the lives of those around us by impacting the physical and psychological health of its victims and by reinforcing unjust social structures, directly and indirectly. Through legislation that makes gender-based violence unacceptable, enforcement of that legislation, and education of the world's populace about gender equality and rights, we can change the narrative on gender-based violence. One day gender-based violence will be an anomaly, and those who experience it will not fear speaking up.

Bardall, Gabrielle. Breaking the Mold. International Foundation for Electoral Systems, 2011.
Bolivia, La Asamblea Legislativa Plurinacional. Ley Contra El Acoso y Violencia Política Hacia Las Mujeres. Bolivia Infoleyes, 2012.
Bullimore, Hannah. "What Can We Learn from the Murder of Sarah Everard?" High Life North, 24 Mar. 2021.
Gadkar-Wilcox, Sujata. "Intersectionality and the Under-Enforcement of Domestic Violence Laws in India." University of Pennsylvania Journal of Law and Social Change, vol. 15, 2012, p. 455.
Guterres, António. United Nations Security Council, 2021.
Klugman, Jeni. World Bank, 2017.
Palermo, Tia, et al. American Journal of Epidemiology, vol. 179, 2014, pp. 602–612.
Slakoff, Danielle C., and Pauline K. Brennan. "The Differential Representation of Latina and Black Female Victims in Front-Page News Stories: A Qualitative Document Analysis." Feminist Criminology, vol. 14, no. 4, 2017, pp. 488–516.
Solotaroff, Jennifer L., and Rohini Prabha Pande. Violence against Women and Girls: Lessons from South Asia. South Asia Development Forum, World Bank Group, Washington, DC, 2014.
Thelwell, Kim. "Gender-Based Violence in School (SRGBV)." 2020.
Timothy, Alexander Essien. "Gender Responsive Pedagogy: Teachers Knowledge and Practice in Nigeria." 2022.
Volden, Craig, Alan Wiseman, and Dana Wittmer. "The Legislative Effectiveness of Women in Congress." 2010.




Unequal Treatment: Impact of Implicit Racial Bias Ethelyn Ofei

Racism has existed in the United States for many centuries. Racist beliefs have since permeated every American institution, including our country's healthcare system. Over time, explicit racism in medicine has become more subtle. Blatant "Jim Crow racism" has developed into implicit racial bias, which can be more difficult to detect (Williams and Rucker). Implicit bias is defined as bias that results from unconsciously acting upon predisposed prejudices and stereotypes. Though subtle, this bias has the power to negatively impact the medical practices of healthcare professionals. Further, implicit bias puts racial and ethnic minorities in danger of receiving poor medical treatment. Trends of racial bias in U.S. healthcare that began centuries ago are still prevalent in today's society. A few years ago, I read an article about tennis player Serena Williams and the life-threatening complications that she experienced after giving birth to her daughter. She suffered from a pulmonary embolism following her cesarean section (C-section). Because Williams had a medical history of blood clotting, she was quick to notify her nurses of the unusual symptoms she was experiencing. Issues arose when the nurse believed that Williams' pain medication was causing her to become delirious and confused. It took some convincing before the nurse took her complaints seriously. Williams had asked doctors to perform a CT scan and put her on blood thinners to identify and treat the blood clots. They ignored her concerns and instead performed an ultrasound, which did not reveal anything substantial. It was not until later that they performed a CT scan, which revealed the presence of several blood clots in her lungs. She was eventually given blood thinner medication to treat her pulmonary embolism. The incident unfortunately left her bedridden for six weeks (Lockhart). Serena Williams attributed her traumatic experience to the pervasiveness of implicit racial bias in U.S. healthcare. Similar experiences are common among African American women living in the United States. They are actually three to four times more likely to die from pregnancy-related complications than white women (American Medical Association). According to the Centers for Disease Control and Prevention, Black women experience a significantly higher death rate from child delivery than white women. This accounts for one of the largest racial disparities in women's health (ProPublica). The
source of this disparity could be credited to implicit racial bias and discrimination present within U.S. healthcare. As demonstrated by William’s experience, issues of implicit racial bias can transcend socioeconomic status. Learning of the Black maternal mortality crisis led me to wonder what other impacts implicit racial bias has had on marginalized groups. Through my initial research, I found that it is not uncommon for minorities to be ignored or disrespected while seeking medical treatment. Studies have found that physicians tend to view African American patients more negatively than white patients. This has led to lower quality communication between Black patients and their providers. Consequently, Black patients are more likely to be less satisfied with the interactions they have with their healthcare providers. They have reported longer visits and experienced less positive collaboration with their doctors. Low communication between doctors and patients has also contributed to disparities in healthcare. Disparities based upon race and ethnicity continue to persist and impact the quality of care, life expectancy, and mortality rates of minority populations (Hall et al.). The effects of implicit racial bias are quite devastating to marginalized individuals as well as their communities. Black Americans, Hispanic Americans, and Native Americans have an infant mortality rate significantly higher than White Americans (Hall et al.). These communities continue to be the most affected by healthcare disparities. We are now faced with the challenge of how to reduce implicit racial biases and establish health equality for everyone living in the United States. The recent COVID-19 pandemic has exposed implicit racial biases in U.S. healthcare. For several months, many hospitals and clinics across the country lacked the appropriate resources to treat their critically ill patients. Health practitioners were forced to make tough decisions on which patients should receive the little resources and treatment options that were available. Many medical facilities suffered shortages of ventilators during the peak of the pandemic. These machines are designed to help patients breathe by supplying oxygen to the lungs. Without a ventilator, patients suffering from severe cases of COVID-19 have a higher chance of death. There have been some concerns about the ventilator allocation process and how fairly these machines are being rationed. Patients are usually assigned a mortality risk score based on the functionality of their major organs.



If their mortality risk score is too high, they are not considered to be good candidates for ventilator therapy. African Americans and other minorities are less likely to receive ventilators because of their increased likelihood of suffering from chronic conditions, putting them at an obvious disadvantage. Additionally, African American men have a shorter life expectancy than any other social group. As a result, doctors frequently favor providing ventilators to non-Black individuals (Menconi). This unjust treatment perpetuates beliefs that the health of minorities is not valuable and their lives are not important. Black people contribute to 39 percent of COVID-related deaths, yet they make up just 15 percent of the general U.S. population (Grace et al.). Black communities have been impacted significantly by the COVID-19 pandemic, experiencing a mortality rate 2.9 times higher than the rate for Asians and 2.7 times higher than the rate for Whites (AMP Research Lab). Additionally, they account for 25 percent of positive COVID cases. If African Americans are more likely to be diagnosed with and die from COVID, why have they not been a priority during the pandemic? The death of Susan Moore is a tragic example of how implicit racial bias can endanger the lives of minority patients. The Black woman, who was also a licensed physician, was admitted into the hospital after contracting COVID-19. Her case was severe and she struggled to breathe throughout her long battle with the virus. The white doctor that treated her did not believe that Moore had shortness of breath. He also refused to give her medication for the neck pain that she was experiencing. Later, a CT scan was performed and the results validated her reports of pain by showing inflammation in her neck and lungs. After this, doctors finally agreed to prescribe her pain medication. Throughout her ordeal, Moore maintained that she would not have been treated this way if she were white. After just two doses of antiviral medication, her doctors were ready to discharge her from the hospital. Upon returning home, her condition worsened and she was later placed on a ventilator. She died just two days following ventilator therapy. Her story shows how easily the pain of African Americans has been dismissed by healthcare providers. They are routinely undertreated for their pain in comparison to white people suffering from similar conditions (Nirappil). Her story reveals implicit biases present in U.S. healthcare and shows that even the status of being a physician is not enough to overcome these biases. Implicit racial bias is powerful enough to surmount both job status and wealth; it continues to kill minorities at an alarming rate. Establishing health equity requires healthcare providers to first acknowledge that implicit bias exists and recognize that it brings significant harm to marginalized communities.

Implicit racial bias originates from the historical oppression of minority populations and continues to
affect the outcomes of these patients while contributing to health disparities in U.S. healthcare. The concept of racialization has maintained a significant presence in the United States since the country’s development in the late 18th century. European colonization along with the practice of slave trading encouraged beliefs of white supremacy and superiority. Because of the widespread acceptance of these beliefs, racial and ethnic minorities were commonly discriminated against. Native Americans were forcibly removed from their homeland by European colonists; African Americans were the victims of chattel slavery and treated solely as property; other non-European immigrants were discriminated against in the centuries that followed (Byrd and Clayton). The bias against these groups was consequently reflected in the emerging U.S. healthcare system. Inconsistent and virtually unavailable healthcare was, and still is, the norm for minorities living in the United States. Enslaved African Americans only received medical treatment when it was profitable for slave owners. Even then, the treatment they received and the facilities they were admitted into were substandard. There were no established regulations for the treatment of slaves, which contributed to the poor health status of African Americans during this period (Byrd and Clayton). In addition to poor medical care, many doctors were involved in the creation and perpetuation of racial stereotypes and pseudoscientific beliefs in medicine. False theories were used to prove the psychological inferiority and physiological differences of racial minorities, especially African Americans. Dr. Thomas Hamilton was an American doctor who attempted to prove that these physiological differences existed between white people and Black people. He performed several unsuccessful experiments on African American slaves to justify his theories. Despite the failure of his experiments, Hamilton spread misinformation about the physiology of African Americans. He believed that Black people had thicker skin and experienced less pain than white people. These false beliefs were accepted by other healthcare providers and taught to medical students (Tapalaga). Teaching these beliefs to students ensured that racial bias remained present in future generations of medical practitioners. These false beliefs were also utilized to justify the enslavement of Black people: if they could not feel pain, it was acceptable to subject them to unpaid labor and subhuman living conditions. Thus, the assumption that poor health was normal for African Americans prevailed in medicine. The legitimization and institutionalization of harmful misconceptions would later lead to the worsened treatment of African Americans and other minority patients. It was not until the Civil Rights Era that medical conditions began to improve for African Americans. These improvements were partially due to increased healthcare access for much of the Black population. African American people were now being acknowledged as citizens, which contributed to improved medical treatment. However, Fifth World



racial discrimination continued to persist in healthcare. By 1975, African Americans began to suffer from increased morbidity and mortality rates. Morbidity is the rate of disease, while mortality is the rate of death. It was recorded that they had the highest death rates in the majority of the leading causes of death. This fact remains true today, with the Black mortality rate being around 24% higher than the white mortality rate (American Medical Association). Though segregation had been outlawed by this time, African Americans remained socially isolated within the nation’s poorest cities, including Chicago, Detroit, and Pittsburgh (National Archives). Historically, these areas have been medically underserved and fallen victim to substandard healthcare treatment. The poverty rate among African Americans is higher than for any other racial or ethnic group in the United States (Taylor). Racial discrimination has perpetuated socioeconomic disparities, which have led to further inequities in U.S. healthcare for these groups. Poor African Americans and other minorities continue to be ignored and neglected by the unfair public health system. Their communities along with their healthcare needs have been pushed to the margins for centuries, while white people have often been prioritized for medical treatment. The health of racial and ethnic minorities continues to be undervalued by the U.S. healthcare system today. Another way that racial bias has entered U.S. healthcare is through the publication of racist principles in medical textbooks and journals. Publications containing pseudoscientific beliefs and harmful stereotypes date back to the late 19th century. During this period, scientific racism prevailed in healthcare and misconstrued beliefs were widely accepted. Moreover, there were established medical principles written specifically for the evaluation of enslaved African Americans. Certain diseases were referred to as “Negro diseases” and it was understood that African Americans had certain “Negro physiological peculiarities.” Among many of the diseases identified in slaves was dysaesthesia aethiopica. This disease was described by American physician Samuel A. Cartwright as a mental illness causing laziness and “rascality” in slaves. According to Cartwright, they were also prone to drapetomania, which was a disease causing slaves to run away from their owners. Cartwright was a highly respected doctor during this time and published his articulation of these diseases in “Diseases and Peculiarities of the Negro Race.” The diseases that Cartwright described in his publication further distinguished Black people from white people. His work characterized African Americans as troublesome and insane for wanting freedom. Through his publications, he convinced others that the rights and mobility of slaves should be restricted. This proved harmful to the already inadequate healthcare situation that African Americans faced. Their health was not taken seriously which meant that many of the serious medical issues they faced remained untreated. More recently, a nursing textbook published in 2017 by North Carolina School of Science and Mathematics

Pearson Education has contributed to the perpetuation of racist assumptions in modern U.S. healthcare. Before the racist content had been removed from the publication, the textbook contained several stereotypes associated with how different cultural groups respond to pain. Among many of the assumptions were phrases such as “Blacks often report higher pain intensity than other cultures” and “Native Americans may prefer to receive medications that have been blessed by a tribal shaman” (Pearson). These harmful stereotypes were used to generalize the experiences and beliefs of ethnic minorities. Through the publication of such falsities, misconceptions about racial groups continue to be preserved in healthcare. Implicit racial bias is actively being taught to future medical professionals which promotes the unequal treatment of marginalized groups in the U.S. healthcare system as a whole. Racist teachings such as these encourage providers to ignore the specific needs of their Black, Indigenous, and people of color patients and instead regard incorrect generalizations when providing medical treatment. Additionally, implicit racial bias is believed to enter the healthcare system through the underrepresentation of racial and ethnic minorities in medical textbooks, case studies, and clinical training. The unequal representation of race and skin tone results in the marginalization of racial minorities in medical education (Louie and Wilkes). Many of the visuals contained in medical publications depict phenotypic markers of diseases exclusively on lighter skin tones. This causes health professionals to become ignorant of how particular diseases can appear differently in certain racial and ethnic groups. Consequently, this puts underrepresented communities in danger of experiencing incorrect diagnoses and treatment. The lack of representation in medical textbooks is a prime example of how bias can be unintentional and result from healthcare practitioners being uninformed. It is important to include a diverse representation of races and skin tones in the medical curriculum in order to reflect the real encounters that physicians will face in their careers. Researchers conducting a study on the presence of racial diversity in medical textbooks found that several textbooks used model patients that were white. One textbook featured imagery of skin cancer only on a white patient and failed to provide visuals for how this disease would appear on a patient with dark skin. This could easily lead to physicians missing signs of skin cancer on dark-skinned patients. The absence of skin tone and racial diversity in the medical curriculum perpetuates implicit racial bias in U.S. healthcare and negatively impacts the medical practice of providers. It can mislead them to believe that certain diseases are associated with only certain races and ethnicities. Implicit racial bias is pervasive in all aspects of the U.S. healthcare system, including medical education.



W

hether healthcare professionals realize it or not, their implicit biases are likely to negatively impact the treatment of Black, Indigenous, and people of color patients. Researchers conducting a survey on medical students found that many students exhibited an implicit preference for white patients (Haider). Further, national data reveals that white physicians tend to view Blacks, Hispanics, and Asians more negatively than white patients. In addition, it is not uncommon for healthcare providers to make inaccurate assumptions about minorities. Many of them have agreed with untrue statements such as “Black people are unintelligent” and “Black people are prone to violence” (National Academy of Sciences). These damaging stereotypes are activated when physicians are put in high-pressure situations that require them to make quick judgments. The emotions that physicians may experience, such as anxiety and frustration, can lead to biased decisionmaking. Many physicians believe that African American patients are less likely to adhere to medical advice and that they lack adequate social support. These stereotypes may encourage health professionals to take the health of African American people less seriously. Even physicians that have good intentions may unconsciously allow their implicit bias to affect their medical practice. They may unconsciously make incorrect assumptions about non-white patients when treating their injuries and illnesses. Widespread acceptance of these assumptions and stereotypes have contributed to the overall poor treatment of minority patients in healthcare. Quality of care is typically lower for these individuals because they are constantly subjected to implicit racial biases. More specifically, implicit racial bias has affected the pain evaluation and treatment of African American patients. Black patients are systematically undertreated for their pain, as presented by the previous example of Susan Moore. Moore was a Black patient who was initially denied medication for the pain she was experiencing from COVID-19 symptoms. Research suggests that physicians underestimate and therefore undertreat the pain of Black individuals (Hoffman et al.). Their underestimation of pain could very well be unconscious. The evidence presented in past research shows that medical students and residents continue to believe that African American people are biologically different from white people. Falsities that were disproven long ago such as “Black people have thicker skin than white people” and “Black people are stronger than white people,” are still believed by many of these medical professionals (Hoffman et al.). Their commitment to these beliefs makes them less likely to recommend the appropriate treatment options to Black patients. Additionally, the assumption that African Americans are more likely to abuse drugs has influenced the treatment of these individuals. Beliefs about the high prevalence of drug abuse in the Black community have not been supported by sufficient evidence. Data reveals that African Americans are in fact less likely than white people to abuse heroin, cocaine, stimulants,

or methamphetamine (Moskowitz). Biased perceptions of Black people and their pain tolerance have negatively impacted the quality of their medical treatment. Compared to white patients, Black patients are less likely to be given pain medication. If they do receive medication, they usually receive a lower dosage than white patients (Hoffman et al.). A retrospective study found that Black patients were significantly less likely than white patients to be prescribed painkillers for extremity fractures in the emergency room. This was despite both groups reporting similar levels of pain (Todd et al.). As illuminated by this disparity and many others, the pain of African American individuals continues to be invalidated by healthcare practitioners. Compelling evidence has proven that Black patients are being underprescribed medication. In order to better recognize the pain of Black patients, medical professionals must be willing to relearn their medical perception of Black patients. Implicit racial bias continues to be a barrier in providing African American patients with appropriate pain treatment. It has produced the pervasive mistrust of medical authority in minority communities. This mistrust has negatively impacted the quality of patient-provider relationships. African Americans maintain the highest rates of medical mistrust. This mistrust stems from the medical injustices they have experienced in the past and continue to experience today. Historically, African American people have been used as test subjects for medical experiments, sometimes without their knowledge. This was the case for nearly 600 African American men of Tuskegee, Alabama. The Tuskegee Syphilis Study was conducted by the United States Public Health Service (USPHS) beginning in 1932. The study was designed to record the progression of untreated syphilis in Black males (McVean). Participants were misinformed about their role in the experiment. They were told they were being treated for “bad blood,” a term that encompasses several ailments including syphilis, anemia, and fatigue (CDC). Instead, researchers infected them with syphilis and withheld treatment. When penicillin became the preferred treatment of syphilis in the 1940s, the men were still not treated. It was not until the public found out about the experiment that the Tuskegee study finally ended in 1972. By this time, 128 participants had died of syphilis or its complications. Many women and children in the community had also contracted the illness (McVean). This is just one example of African American people being subjected to unethical human experimentation. It is not surprising that many African Americans still believe that medical experiments can be performed on them without their knowledge (Williamson and Bigman). Racial discrimination and bias in the U.S. healthcare system have deeply damaged the African American perception of U.S. healthcare (Menconi). Henrietta Lacks is another example of how African Americans have been mistreated in the medical field. Henrietta was a Black woman who died from cervical cancer in 1951. Before her death, she attended a cancer




clinic at Johns Hopkins Hospital where a tissue biopsy of her womb was taken without her knowledge or consent. A cancer researcher noticed that the cells were able to divide more efficiently than other cancer cells. The rapidly dividing cells were named HeLa cells, after Henrietta Lacks. They have since become widely used in biological research and vaccine development (British Society for Immunology). The exploitation of Henrietta’s cancer cells outraged many, including her family members who were not informed of the unethical usage. African American patients have frequently been treated with little respect in the U.S. healthcare system. Interestingly, studies have found links between the bias level of physicians and patient confidence in recommended treatments. Physicians who are high in implicit racial bias are more likely to have patients that do not trust them. This highlights the issue of patients being less likely to adhere to treatment recommendations because they recognize bias in their physicians (Bendix). This phenomenon places a noticeable strain on patient-physician relationships. Impaired communication usually means a lower quality of medical treatment. A study found that oncologists who were higher in implicit bias had shorter interactions with patients. These patients rated their interactions with these physicians as “less patient-centered” and “less supportive” than interactions with less-biased physicians (Bendix). Medical mistrust has also been examined in Hispanic, Asian, and Native American populations. Evidence suggests that these groups also have strained relationships with medical authorities. Additionally, researchers suspect that Arab and Muslim immigrants may be skeptical of medical treatment recommendations because of increased Islamaphobia over the years. Stereotype threat also plays a role in the quality of patient treatment (Bendix). This term refers to the stressful psychological state of a person that fears being judged by others according to negative stereotypes (Steele and Aronson). Stereotype threat may further impair patient-provider communication and increase medical mistrust. With the pervasiveness of implicit racial bias in U.S. healthcare, the mistrust of medical providers is understandable and expected. Implicit racial bias negatively influences the decisions of healthcare providers and harms the medical treatment of racial minorities, especially African Americans.

Implicit racial bias not only affects patient treatment, but also patient outcomes and the prevalence of healthcare disparities. Research suggests that implicit bias affects patient outcomes by shaping clinical interactions and patient adherence to treatment recommendations (Blair et al.). People of color continue to experience poorer health outcomes than white people (Maina et al.). They also account for the majority of health-related disparities in the United States (Weinstein et al.). Health disparities are defined as preventable differences in the health statuses of socially disadvantaged groups (World Health Organization).

Bias from individual healthcare providers contributes to institutional racism and inconsistencies in U.S. medical care. Implicit racial bias is a significant factor in the manifestation of U.S. health disparities. Numerous disparities in health conditions exist among people of color including asthma, diabetes, hypertension, HIV/AIDS, obesity, and tuberculosis. Minorities also have a higher incidence of several types of cancer including cervical, kidney, breast, colorectal, lung, and prostate (National Cancer Institute). People of color also face disparities in their morbidity and mortality rates. For instance, Black Americans have the highest rate of premature deaths from cardiovascular disease and stroke (Hall et al.). Researchers have been interested in figuring out why the mortality rate is higher in African American people. They believe a possible explanation for this disproportionately is that physicians have an unconscious racial bias against Black people, which they fail to recognize. The majority of physicians are unlikely to recognize health disparities within their own hospital or clinic (Graham). If they do recognize these disparities, they are more likely to attribute them to deficiencies of the patient or faults present within the larger healthcare system. Unintentional bias from providers can come in the form of insensitivity to the needs and differences of patients from various backgrounds (Graham). Acknowledging the existence of health disparities that stem from implicit bias and discrimination could lead to the development of specialized intervention and prevention programs. These programs will hopefully lessen the differential health outcomes experienced by African Americans and other minorities. Cardiovascular disease is actually the leading cause of death for both men and women in the United States. As previously stated, it continues to kill African American people at an alarming rate – one that is higher than for any other racial or ethnic group. Cardiovascular disease is a term that encompasses several conditions including coronary artery disease (CAD), cardiomyopathy, and atherosclerosis (Donovan). There are several risk factors for these diseases that often go unrecognized in African American patients. Because these risk factors are not often recognized, cardiovascular diseases can remain untreated in Black patients. Even if these risk factors are detected, they may not be treated/diagnosed until later on. Thus, these individuals are more likely to experience adverse outcomes and suffer higher morbidity and mortality rates from cardiovascular disease. A physician’s inability to identify risk factors in African American individuals could be the result of implicit racial bias. Some providers may be unfamiliar with how to recognize these risk factors in certain individuals due to bias in medical education. An example of how this bias affects racial groups is evident through the high rates of cardiovascular disease in Hispanic Americans. Hispanic Americans have higher mortality rates due to cardiovascular disease than other racial minorities. Research has found that Hispanic heart health is insufficiently understood by many



medical providers (Balfour et al.). This underlying issue is directly related to the cardiovascular disease health disparity. Failure of health practitioners to recognize differences in Hispanic/Latino backgrounds could be contributing to this disparity. Evidently, implicit racial bias has allowed adverse patient outcomes and health disparities to persist in our healthcare system.

I

mplicit racial bias is rooted in centuries of oppression. It continues to negatively impact the treatment and health outcomes of minority patients. Implicit bias is more difficult to detect than explicit bias because it is unconscious and unintentional in nature. Throughout American history, there has been a shift from explicit bias to more subtle biases in healthcare. Scientific racism is a pseudoscience that was used to perpetuate beliefs of African American biological inferiority. Pseudoscientific beliefs attempted to justify assumptions and stereotypes about the bodies of African American people. Among these false conjectures were “Black people feel less pain than white people” and “Black people have thicker skin than white people” (Hoffman et al.). It was largely accepted that African Americans had poorer health than white people. Bias in medical textbooks and medical education also did their part to reinforce harmful stereotypes about African Americans and other racial minorities. Medical journals and textbooks that have been published with discriminatory principles have been used to educate medical students and continue centuries of racial bias in the U.S. healthcare system. Unequal representation of race and skin tone in medical textbooks has caused health professionals to become ignorant of how certain diseases appear differently in racial groups, while instilling them with negative perceptions and attitudes towards marginalized patients. Further, racist assumptions and incorrect generalizations about the health of minorities encourage providers to ignore the specific needs of their patients when providing medical treatment and prescribing medication. In our current healthcare system, it is still common for medical practitioners to regard Black, Hispanic/Latino, Indigenous, and Asian Americans more negatively than white patients. Research has revealed that many of these physicians believe untrue stereotypes about the intelligence and pain tolerance of minorities. These stereotypes influence the medical practice of physicians and result in the differential treatment of patients. Once these stereotypes are learned, they are easily activated when physicians are placed in stressful situations that require them to make quick judgment calls. They may automatically make unconscious and incorrect assumptions about non-white patients. The quality of care is typically lower for underrepresented individuals because of their bias. In many cases, implicit racial bias has led to Black people being undertreated for their pain. Assumptions that Black people are more resilient to pain and that they are also more prone to abusing drugs may have contributed to the

production of this health disparity. This bias makes providers less inclined to prescribe appropriate pain medication to African American patients. Their pain continues to be invalidated by an inherently racist healthcare system. Unsurprisingly, there are high rates of medical mistrust in minority communities. African Americans maintain the highest rates of mistrust towards medical practitioners. The Tuskegee Syphilis Study is a disturbing case of human experimentation that furthered African American distrust of the United States healthcare system. Because Black people were believed to be inferior and have less valuable lives than white people, researchers believed that their unethical experimentation was justified. Many African Americans hold damaged perceptions of medical authorities, which has led to poor patientprovider relationships. Reduced communication between the two parties makes it less likely that the patient will adhere to treatment recommendations from the physician. This widens the gap between the healthcare quality for minorities and for white individuals. Negative patient outcomes for minorities can be partially attributed to issues of implicit racial bias. People of color experience worse health outcomes more often than white individuals. They also account for the majority of our country’s health-related disparities. Minorities have a higher incidence of chronic diseases and are more likely to be diagnosed with cancer. For example, African Americans are disproportionately affected by a higher morbidity rate from cardiovascular disease. Researchers believe that physicians contribute to this disparity by failing to recognize that these disparities exist in healthcare. They are more likely to place the blame for these inconsistencies on patients or on the larger healthcare system. Physicians often fail to recognize the risk factors that may be associated with certain diseases that disproportionately affect minorities. Failure to recognize these risk factors could lead to illnesses going untreated. An inability to identify these risk factors could be attributed to a lack of a proper medical education. A sufficient understanding of all cultures and backgrounds is required to provide adequate medical treatment to all racial and ethnic groups. To build upon this research, further questions must be asked. Given the invisible and concealed nature of implicit racial bias, how can we identify this bias and prove its existence in clinical settings? To prove that implicit racial bias operates in physician treatment, further research must be conducted on how implicit racial bias can be measured in physicians. We must identify the extent to which implicit racial bias affects the healthcare of marginalized populations. How can we reduce implicit racial bias in healthcare providers? The research presented in this article articulates the issue of implicit racial bias in U.S. healthcare, but does not provide solutions to reduce this bias. It seems that it would be most beneficial to target individual practitioners in order to reduce implicit racial bias across the entire U.S. Fifth World



healthcare system. The first step in reducing bias among physicians is encouraging them to acknowledge their implicit bias. Making providers self-aware of their unconscious bias will make them less likely to impose harmful stereotypes on their non-white patients. Research on this issue could have many positive implications for the U.S. healthcare system and its employees. It will hopefully allow them to recognize that nobody is immune to implicit bias or its harmful effects.

"Africans in America/Part 4/'Diseases and Peculiarities.'" PBS, Public Broadcasting Service.
"The American Journal of Public Health (AJPH) from the American Public Health Association (APHA) Publications."
Bendix, Jeff. "How Implicit Bias Harms Patient Care." Medical Economics.
Bulatao, Rodolfo A. "Health Care." U.S. National Library of Medicine, 1 Jan. 1970.
Burgess, Diana J., et al. "Why Do Providers Contribute to Disparities and What Can Be Done about It?" Journal of General Internal Medicine, Springer-Verlag.
Byrd, W. Michael, and Lynda A. Clayton. "Race, Medicine, and Health Care in the United States." U.S. National Library of Medicine.
Donovan, Robin. "Heart Disease: Risk Factors, Prevention, and More." Healthline, Healthline Media, 27 Feb. 2020.
"40 Years of Human Experimentation in America: The Tuskegee Study." 30 Dec. 2020.
Grace, De'Zhon. "Racial Inequality and Covid-19." The Greenlining Institute, 8 May 2020.
Graham, Garth. "Disparities in Cardiovascular Disease Risk in the United States." Current Cardiology Reviews, Bentham Science Publishers, 2015.
"The Great Migration (1910-1970)." National Archives and Records Administration.
Hall, William J., et al. "Implicit Racial/Ethnic Bias among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review." American Journal of Public Health, American Public Health Association, Dec. 2015.
"Health Inequities and Their Causes." World Health Organization.
"HeLa Cells (1951)." British Society for Immunology.
Hoffman, Kelly M., et al. "Racial Bias in Pain Assessment and Treatment Recommendations, and False Beliefs about Biological Differences between Blacks and Whites." PNAS, National Academy of Sciences, 19 Apr. 2016.
Lockhart, P.R. Women. 11 Jan. 2018.
Louie, Patricia, and Rima Wilkes. "Representations of Race and Skin Tone in Medical Textbook Imagery." Social Science & Medicine, Pergamon, 23 Feb. 2018.
Menconi, Michael. "Covid-19 Ventilator Allocation Protocols Are Poised to Disadvantage African Americans."
Myers, Dr. Bob. "'Drapetomania': Rebellion, Defiance and Free Black Insanity in the Antebellum United States." 21 Feb. 2015.
Nirappil, Fenit. "A Black Doctor Alleged Racist Treatment before Dying of Covid-19: 'This Is How Black People Get Killed.'" The Washington Post, WP Company, 25 Dec. 2020.
Robeznieks, Andis. "Inequity's Toll for Black Americans: 74,000 More Deaths a Year." American Medical Association, 22 Feb. 2021.
Sciences, National Academies of, et al. "The State of Health Disparities in the United States." U.S. National Library of Medicine, 11 Jan. 2017.
Sini, Rozina. "Publisher Apologises for 'Racist' Text in Medical Book." BBC News, BBC, 20 Oct. 2017.
Smedley, Brian D., and Adrienne Y. Stith. National Academy of Sciences.
Staff, Physician's Briefing. "Medical Students Show Racial, Cultural Patient Preference." Consumer Health News | HealthDay, 22 Jan. 2021.
Tapalaga, Andrei. "The Myth of Black People Not Feeling Pain Is Still Believed to This Day." Medium, History of Yesterday, 27 Nov. 2020.
Taylor, Jamila, et al. "Racism, Inequality, and Health Care for African Americans." The Century Foundation, 1 Nov. 2021.
"Tuskegee Study - Timeline - CDC - NCHHSTP." Centers for Disease Control and Prevention, 22 Apr. 2021.
Williamson, Lillie D., and Cabral A. Bigman. "A Systematic Review of Medical Mistrust Measures." Patient Education and Counseling, Elsevier, 17 May 2018.



Existentialism and its Implications on Society Sanjana Nalla

I remember watching Jackie, a movie about how Jacqueline "Jackie" Kennedy Onassis handles the funeral and aftermath of President John F. Kennedy's assassination, when it first came out in theaters. One moment in the film intrigued me in particular: the scene where Jackie speaks to the priest. In this scene, the priest responds to Jackie's queries in a striking manner. He claims that there comes a moment in which we all realize there is no meaning. When we arrive at this conclusion, we will do one of three things: accept it, kill ourselves, or stop searching. For a long time, this conversation stuck with me, as I too often pondered the meaning of life, of existence, and questioned what purpose I serve, what purpose we serve. As I left the theater that day, I wondered if others had given that scene the same attention I did. Rather than being a biography, the film asks why we should memorialize this man and why we should create a legacy; it asks who this person is and what his story lends to the underlying existential crisis that those who remain behind must endure through much of the story. I revisited this film in a short course on existential philosophy in which we examined this scene further and its relation to the works of philosophers like Albert Camus. As a result, I became more fascinated with the topic of existentialism, a school of thought dedicated to the rejection of the claim that life has an inherent meaning; it is a philosophical movement dedicated to understanding existence as that which humans make in their radical and difficult freedom. Existentialism is more, however, than questioning life and existence; it extends to examining the nature of the human condition. This philosophical movement attempts to account for what makes humans different from other forms of existence, from animals. Existentialism can be dated back to Ancient Greece with philosophers like Socrates. Socrates advocated for the practice of philosophy as the care of the self, and the focus was placed on the "proper way of acting rather than on an abstract set of theoretical truths." During the Hellenistic period, the theories Socrates introduced flourished and philosophers became known as a "kind of doctor of the soul." Furthermore, philosophy in its theoretical approach became widespread as the "pursuit of basic truths about human nature and the universe" (Flynn).

In mid-nineteenth-century Europe, existentialism gained popularity again with the works of Søren Kierkegaard, who is considered a founder of existentialism. Along with Kierkegaard, Friedrich Nietzsche is considered a precursor to the movement. In works like Either/Or and Fear and Trembling, Kierkegaard argued that living an ethical life is better than the aesthetic life while emphasizing the "teleological suspension of the ethical" (Judaken and Bernasconi 214). Much of Kierkegaard's work was influenced by his Christian faith, as he often alluded to biblical stories and made claims that justified the actions of Christians, such as disregarding morals for one's relationship with God. Although Nietzsche is often associated with the Nazis, who appropriated his work to justify their atrocities, much of his writing involves undermining the lies people often recite to themselves in order to keep going; for instance, he argues that "men must accept that they are part of the material world, regardless of what else might exist," insisting upon the materiality of our most abstract ideas. One of his popular claims is that we must all live as if this is all there is, and that to fail to live this way is a failure to realize human potential (Wyatt and Schnelbach). Nietzsche's relation to National Socialism, the political ideology of the Nazi Party, deserves some attention here. Elisabeth Förster-Nietzsche, Nietzsche's sister, took over his estate after he succumbed to his "madness" and "insanity"; with the control she had over his writings, she was able to compile The Will to Power and somehow conjure conversations she had had with her brother to support her own political ideology. As Nietzsche's difficult work was easily misconstrued and misunderstood, Elisabeth was able to make Nazi soldiers look toward her brother's work to justify their actions and their beliefs (Hendricks). Nietzsche's philosophical work was further misinterpreted by Nazi philosophers in order to generate Nazi propaganda. Nazi apologists chose Nietzsche over other German philosophers with reason: they found many things of use in his work, many things that could be emphasized to push forth their agendas, justifying "world domination and racial hegemony" (Yablon 741-742). Nietzschean scholars were able to generate a story in which Nietzsche predicted the Nazi regime and more than sympathized with its ideology, and they even popularized the notion that he should have been alive during the Third Reich (Yablon 743). It is important to consider how such works can be

Fifth World


193

When attempting, for instance, to understand Martin Heidegger’s relationship with Nazi Germany, one must take a different approach from that taken with Nietzsche, one in which you examine how Heidegger’s support of the Nazi regime affected his work. While Heidegger is not discussed at length in this text, it is crucial that the difference is highlighted: there is a difference between having one’s work altered to support such an evil regime and actively supporting that regime oneself. From the nineteenth century into the twentieth, existentialism steadily rose in popularity. The twentieth century brought in a new wave of philosophers, primarily composed of French thinkers such as Albert Camus and Jean-Paul Sartre and German thinkers such as Karl Jaspers and Martin Heidegger. They brought more fame to the movement as Sartre and Camus wrote existentialism-influenced plays and novels, Nausea among them. Over time, as more philosophers with their own ideas came into the limelight, they brought with them critics like Gabriel Marcel, a French Catholic philosopher often positioned as Sartre’s antithesis. Furthermore, the distinction between philosophers became clearer as intellectuals like Marcel, Jaspers, and Kierkegaard became associated with “religious or theistic existentialism,” while Sartre and Camus became associated with their denial of God, or insistence upon a radically secular world. By the 1940s, existentialism reached its peak in post-war Europe as people viewed the world differently due to the events of the war. The invention of new technology for warfare, young men arming killing machines, and leaders citing various causes to champion the war effort all resulted in people asking new questions about the future. Thus, existentialism addressed the moral exhaustion that took over Europe, offering a way to give life meaning again and to find the motivation to bear the burdens of daily life (Lalka). Its leading figures influenced many aspects of life and society: they changed the perspectives of believers of Abrahamic faiths and of non-believers, played a weighted role in political relations and armed conflicts, and altered the discriminatory lenses through which race, gender, and sexuality are viewed. Existentialism traveled from Germany to France to Spain to Italy and eventually to the United States and other countries. As existentialism entered modern society, it was met with immense criticism, as commentators claimed that the movement was “a bohemian fad.” By the mid-1950s, it was thought to be “visible in all areas of human endeavor yet definable in none.” The movement was displaced in the 1970s by French poststructuralism and postmodernism as new philosophers or anti-philosophers rose to fame, many of them at once indebted to and critical of existentialism’s larger claims (Michelman). Popular concepts in existentialism include absurdity, authenticity, and anxiety regarding aspects of life, among others. Absurdity refers to an idea introduced by Sartre regarding the “unfulfillable desire for complete fulfillment.”

As humans, we are capable of asking questions regarding the purpose of our lives, but even when we can answer such questions, we find that complete fulfillment cannot be accomplished. Authenticity can be understood through Heidegger’s and Sartre’s explanations: they claim we need to be honest with ourselves and face our situations rather than resorting to self-deception. Paul Tillich, a German-American theologian, insists anxiety regarding life events is a result of the “threat of non-being.” As humans, we are unnerved by the uncertainty that comes with death and with ourselves; we attempt to hide from it, but eventually we will be unable to do so (Irvine).

Philosophers tend to attempt to understand the meaning of body, existence, and life in terms of faith, sometimes religious, sometimes not. Many existentialists are associated with their philosophical relation to religious beliefs: Søren Kierkegaard was a staunch Christian, Jean-Paul Sartre was an atheist, and so on. The analysis of existence often stems from questions of faith and belief, as religion attempts to justify and give meaning to life. The assignment of duty and worship is meant to sustain people, but what happens when you do not believe? What are non-believers to stake a claim in? Existentialist ideology answers such questions for those who believe and those who do not, and in the process of answering them, one finds that new questions arise. As one critic suggests, existentialism can be understood as more than “being antithetical” to religion, as the movement is often associated more with agnostics and atheists than with theists (Hoffman). With the discussion of religion in any field come questions of morality, ethics, and duty; existentialism is no different. In this section, I examine the religious motives of the various philosophers who write upon existential thought and the implications of their belief systems in their works. Although different theologians introduced different ideas about Christianity, these “anxious angels,” as they were often referred to, derived their ideas from earlier philosophers like Kierkegaard and other key nineteenth-century figures. Kierkegaard is regarded as having contributed the most to this intersection of existentialism and Christianity. By examining Kierkegaard’s Fear and Trembling, we can see how Johannes de Silentio (the pseudonym under which Kierkegaard wrote the book) distinguishes different ways of life: the aesthetic, the ethical, and the religious. The Aesthetic is when one lives life through their own experiences, a life of felt experience. The Ethical is when life transcends not only personal interests but serves the interests of the whole, and it is based on the idea of the “Absolute Mind,” as Hegel calls it. It reminds me of a line from Sartre’s Existentialism is a Humanism, that “everything happens to every man as if the entire human race were staring at him and measuring itself by what he does.” I read this as the imperative to become a role model for society in terms of morals and ethics, similar to the ethical lifestyle Kierkegaard depicts (Sartre 26).

Sartre tells us to hold ourselves responsible for our actions and to keep in mind the ways they influence or affect others. It is an underappreciated concept, as it could make more people inclined to stay on the side of the so-called morally good; it is the idea that the actions of one person can enforce a standard for all to meet. Finally, the Religious way of life works on the individual level, as it concerns the relationship between God and the individual, a personal matter. Johannes (Kierkegaard) introduces religion to the equation in his discussion of the universal, a discourse that revolves heavily around ethics. According to critics, by factoring in religion and faith, Kierkegaard was able to examine the “wider implications for the whole relationship between faith and philosophy” (Judaken and Bernasconi 213). As Kierkegaard toggles between faith and ethics, he asserts an exception to the morals and ethics that rule those of the Christian faith. His assertion involves a maneuver known as the “teleological suspension of the ethical,” which requires the individual to place himself above the universal obligations that bind those who follow ethical behavior (Judaken and Bernasconi 214). In Fear and Trembling, Kierkegaard cites the story of Abraham and Isaac to provide an example of how Abraham places himself above his ethical and moral obligations as he chooses to sacrifice his son for his faith, going against one of the most basic moral concepts, doing no harm to others. In re-narrating such a story, Kierkegaard was able to assert that “choosing oneself,” choosing one’s faith, is a difficult and urgent matter of authenticity (Khawaja 17). Authenticity, as introduced by other existentialists, relates to being your true self, and Kierkegaard asserts that the best way to do this is to choose yourself, by choosing the faith that creates the self in a radical act. Kierkegaard’s motive for his writing is to persuade his peers that Christianity as a practice resists “social conformism,” the inauthenticity that comes with doing as your peers do (Guignon). Additionally, Kierkegaard writes about human passion and religion’s ability to ignite such passion; he asserts that to become a “self” you must live with “infinite passion” (Guignon). Authenticity revolves around acknowledging your existence in all its seriousness and then pursuing avenues that allow you to make something of that life, your existence. Kierkegaard furthers his argument for religion and passion as he claims that this something you make out of your life must be so consuming and defining that it gives your life “ultimate content and meaning” (Guignon). To justify Abraham’s suspension of the ethical, Kierkegaard brings the audience’s attention to Abraham’s anxiety and irrationality. He describes Abraham’s acts against his son as lacking human reason or justification, claiming that Abraham’s relation to the absolute, that is, his absolute and private relationship to God, stands above the universal and the ethical (Judaken and Bernasconi 215). There is no religious law that governs Abraham throughout his actions in the story, only Abraham’s and Kierkegaard’s claim to some unknown communication and trust in a voice that only the father can hear (Judaken and Bernasconi 214).

Now, one may ask how the story of Abraham choosing to sacrifice his son, Isaac, relates to existentialism. In Fear and Trembling, Abraham becomes the “hero” as his actions redeem “humanity from what is otherwise a meaningless cycle of birth and death” (Judaken and Bernasconi 217). Through the use of this story, Fear and Trembling addresses the questions: what is the meaning of life, of existence? Does it require one to be heroic, to ignore morals and ethics? However, Kierkegaard does more than create such obvious and discrete relations between the story of Abraham and existentialism. In all of his work, Kierkegaard radically seeks to rejuvenate belief in his fellow Danish Christians. His works, though written through many voices and under many names, were meant to bring back “passionate commitment” instead of passive acceptance of unexamined doctrines (Khawaja 18). The use of Abraham’s story forces readers to examine what Kierkegaard is attempting to communicate to them: not that Abraham is some godly figure, nor a story of Abraham’s love, but a thorough analysis of faith. It is also important to acknowledge that, as the father of existentialism, Kierkegaard carried his belief in Christianity into the works of many who follow him, as he intertwines Christianity and existentialist thought. I find it interesting how the language of faith, specifically Christianity, in Kierkegaard’s works revolves around eternal feelings of happiness or the lack thereof. Furthermore, Kierkegaard’s works inspire philosophers like Karl Barth and Paul Tillich, as they too use religion to further their arguments and to analyze Kierkegaard’s.

Some of the most popular works from the school of thought hail from non-believers: atheists, agnostics, and others somewhere on the spectrum. This causes one to wonder what notions influenced their work instead and what moral and ethical considerations are involved in their works. Dubbed “one of the twentieth century’s great unbelievers,” Albert Camus was famously known for the absence of God in his work. There are arguments both for Camus’s religious beliefs and for the lack thereof, but he himself claims in one of his notebooks that “I do not believe in God and I am not an atheist” (Judaken and Bernasconi 257). He situated stories and tales like the myth of Sisyphus within notions of our own existences, ones we know to be real, and ones in which all we can do is live. One critic describes this concept beautifully: “we must live without appeal … life is without consolation” (Judaken and Bernasconi 256). Camus begins The Myth of Sisyphus with an exploration of absurdity and suicide. He starts the text with a powerful first sentence: “There is but one truly serious philosophical problem, and that is suicide” (Camus 4). This sentence sets up the section titled “Absurdity and Suicide,” in which Camus deliberates on how futile it is to ask whether life is worth living.

He claims that suicide, however, does not need to be the answer. In this section, he cites the account of an experience that, he was told, undermined a man who committed suicide, and claims that “to be undermined” is where it all begins (Camus 5). It is interesting to see how, through this text, Camus argues against suicide, insisting that to commit suicide is to confess that life was too much for you or that you could “not understand it” (Camus 5). He continues to confront potentially suicidal readers as he compares and contrasts the relationship between the absurd and suicide. According to Merriam-Webster, the word “absurd” means “ridiculously unreasonable, unsound, or incongruous” or “having no rational or orderly relationship to human life” (“Absurd.”). As Camus elaborates on the term, it is easy to see how fitting the word “absurd” is for what he describes in terms of existential thought. Feelings of absurdity stem from the realization that the lives we live believing we serve a purpose are just a series of habits, leaving us feeling purposeless and pointless. Absurd is what Camus labels seeking the meaning of life when there is none; it is absurd to attempt to understand the world. In “An Absurd Reasoning,” Camus states that, although life is meaningless, one must nevertheless question whether life should still be lived, whether it truly is something not worth living. Camus creates a striking image through Sisyphus, a man condemned to a life of rolling a boulder up a hill just for it to roll back down, then to repeat the process. Through this imagery, Camus brings to his readers the concept of absurdity and notions of “futility and hopeless labor,” while emphasizing the desolate nature of Sisyphus’s life. Camus also highlights a triumph, “his intense consciousness,” as Sisyphus and the rest of us “remain fully conscious that we are condemned to die” (Judaken and Bernasconi 256). Camus frames Sisyphus as a hero, as Sisyphus makes no attempt to escape his fate but continues to roll the rock back up the hill. Thus, he describes Sisyphus in heroic terminology, claiming that Sisyphus “is superior to his fate. He is stronger than his rock” (Camus 76). Sisyphus’s story is a great example of absurdity: he repeats his actions, a force of habit, knowing that it is meaningless, and still he does it, still existing and living. Sisyphus is solidified as a hero by Camus because he gains something with his knowledge of the truth of a lackluster reality. Through The Myth of Sisyphus, the audience is asked to understand notions of the absurd and absurdity, but this begs the question of why. Why cause your readers the misery and suffering of pondering these questions themselves? As Camus claims in the text, you can find meaning in the meaningless existences that we all live by taking solace in the fact that there is no meaning at all. Jean-Paul Sartre’s Existentialism is a Humanism links the school of thought and atheism together in a very convincing argument: “Existentialism is nothing else but an attempt to draw the full conclusions from a consistently atheistic position” (Sartre 53).

This quotation exemplifies how Sartre claims the two connect. He also claims that humans are not constructed for some sort of divine intervention or calling from above, but rather to live their lives as they please. When attempting to understand this text, one needs to understand what the word “humanism” itself means. In this text, Sartre argues that humanism revolves around the concept that “man is always outside of himself” and around reminding “man that there is no legislator other than himself and that he must, in his abandoned state, make his own choices” (Sartre 52-53). When comparing Kierkegaard’s Fear and Trembling to Sartre’s Existentialism is a Humanism, the most important differentiation to make is their respective assertions about maintaining certain standards of morals and ethics. Kierkegaard allows the individual to disregard moral and ethical behavior when it concerns the thing one is most passionate about, whereas Sartre asks that the individual remember that others can witness one’s actions and conduct oneself as one would want others to conduct themselves. Another thing to note about this text is that it is one of Sartre’s most criticized and most-read works. Sartre’s denial of God is strongly reflected in his works, and he repeatedly asserts that “[man] is responsible for everything he does” (Sartre 29). He was even dubbed by many “the most famous atheist of the twentieth century” (Judaken and Bernasconi 261). In Being and Nothingness, Sartre examines the meaning of God in life, under the assumption that God does not exist. For Sartre, “the best way to conceive of the fundamental project of human reality is to say that man is the being whose project is to be God” (Judaken and Bernasconi 261). I think this is a really interesting take on humans and their behavior, as it insinuates that humans created this concept of God only to play God themselves, thus making God at once a goal and a source of frustration. On a side note, it is interesting to consider how critics believe that more of Camus’s work is about the absence of God than Sartre’s, yet Sartre is best known for his atheist ideology. When speaking about Jean-Paul Sartre’s relation to other existentialists, it is important to mention Gabriel Marcel. Marcel was best known for his criticism of Sartre’s work, ranging from notions of the self to the death of God to the vision of an afterlife with no exit. Marcel generated his own literature by analyzing and responding to the works of other existentialists, specifically existentialists who were atheists. Much of his work retaliates against the absence of God in the works of others as he offers alternatives that include God’s presence. In my attempts to find reliable statistics about the popularity of various religions during the 1940s, I found an article named “Why 1940s America wasn’t as religious as you think — the rise and fall of American religion.” The article affirms the notion that the number of believers was decreasing in the 1940s, specifically post-World War II. According to a study of the religious experiences of the American military during World War II by Nicholas Pellegrino, atheists were found to be sparse among military men.



As the less reliable article cited above suggests, after the war the number of military men who did not believe in a religious denomination significantly increased. The study cites the horrors of the battlefield and the traumatic experiences that come with fighting on it as a primary reason for this change (Pellegrino 11). While these changes cannot specifically or directly be attributed to the increased awareness of atheism and agnosticism arising from existentialist thought, it is interesting to consider how the lack of faith was becoming popular around the same time that existentialism was reaching its height. It is also important to note that later in the same study, the author quotes Henry Giles, who confessed in his war journals that he stopped praying during the war as his prayers were never answered. Another soldier is cited recounting instances where God and faith were met with skepticism in the military, as peers asked whether there was any use in praying, questioning whether prayer could truly stop another attack (Pellegrino 167-168). Secularization, perhaps, is an outcome of war, though it too bears questions of meaning into the aftermath.

Jewish philosophers are thought to have joined the movement due to a different set of motivations than many other existentialists, and its effects on their work may not be as obvious as one might think. It is interesting to consider how Jewish existentialism came into existence when the philosophical movement gained popularity through the works of many Christians and non-believers. In Jewish texts like the Books of Ecclesiastes and Job, one can see existentialist themes present as the books tackle questions of meaning and suffering (Gold). The Book of Ecclesiastes is about a man in search of the meaning of life. We are asked to understand his motives as he tries various methods to distance himself from the truths later discussed in Camus’s works: that life is meaningless and every life ends in death. The narrative changes as the reader progresses through the story: Ecclesiastes turns to God to find meaning in life and advises the audience to do so too. He continues to ask each individual to live life, believe in a higher purpose, and “to remain humble,” as we are all encouraged to live life in the moment (Gold). The Book of Job is, however, a text that takes on slightly darker themes, as the story of Job is one in which God and Satan create hardships for Job in order to test his faith. The story asks the reader to consider what righteousness provides; if you learn anything from this story, it is that righteousness does not protect you from suffering. Due to a bet between God and Satan, Job is forced to suffer a series of misfortunes and then physical pain in the form of boils. When Job insists that he has not sinned but his peers tell him that he must have, he “berates God” (Gold). God then responds by reminding Job of who Job is and who God is, and then by returning to Job all of the wealth taken from him, and more. This text is interesting in how God allows misfortune to come to an undeserving man and then shames the man for complaining about it.

Returning to the text through the lens of existentialism, it is one of great suffering, one that questions whether living is worth the pain and suffering Job experiences. Martin Buber and his philosophy of dialogue are largely associated with Jewish existentialism. He rejected the label of existentialism and understood the function of philosophy as deixis, a pointing toward that which escapes proper rational acknowledgment. His book I and Thou examines two modes of interacting with the world. The first is I-It, in which we engage the world as an observer rather than a participant. The second mode is I-Thou, which relates to encounter: participating in a relationship with the being one encounters. Buber’s main claim was that the mode of experience is crucial to surviving, and his purpose was to help others recognize the modes in which they do so. He claims we need to trust in science but at the same time insists that science is not enough for humans (Zank and Braiterman). Franz Rosenzweig, another German-Jewish theologian, called for a “New Thinking” out of dissatisfaction with the limits of philosophical rationalism. “New Thinking” was a push for more existential principles; the movement hoped to address the existence of the thinker and the individual. Buber and Rosenzweig were able to use existentialism to revalorize and explore Judaism at a time when Judaism was collapsing in the wake of liberalism and the Enlightenment. The Jewish answer to the question of existentialism, the problem of human existence, was that “without holiness we sink into absurdity. God is the meaning beyond absurdity” (Judaken and Bernasconi 248). Jewish people sink into a dilemma between a life committed to the sanctity of life and the unending question of its meaning. Many of these questions enter a stage of urgency for Jewish people during and after the Holocaust, as questions of human existence under such horror become unbearably difficult.

For much of this essay, I have discussed themes centered around morality and ethics, as these are crucial sets of beliefs for a society to prosper and cultivate “good.” For a movement so focused on existence and its meaning, it is fascinating to examine how the existence of people who are often considered “other” is understood. Feminism is a social movement that has been on the rise for centuries, and Simone de Beauvoir handles the topic in ways many would not think of. De Beauvoir is best known for her work The Second Sex, in which she examines sexism and feminism among other topics (Judaken and Bernasconi 360-361). Although she did not consider herself a philosopher, as she claims in her autobiography, her ideas contributed greatly to the concept of feminist existentialism and to feminist theory in general (Judaken and Bernasconi 362). She calls for the abolishment of the “eternal feminine” and argues for sexual equality, an idea that was not a topic of phenomenological discussion before her.

She employed two main arguments to convey the idea: first, that masculine ideology exploits the differences between men and women to create a system of inequality; and second, that arguments for sexual equality erase the differences between men and women in order to establish the masculine subject as the human example. She asserts this by challenging Plato’s argument that sex is an accidental quality, making men and women equal. Plato also believes that for women to be treated the same as men, they must train, live, and continue their lives as men do; Beauvoir asserts that Plato fails to acknowledge that this idea does not solve the discrimination toward women that remains in play. Beauvoir’s argument for gender equality is that men and women must treat each other as equals, and in order to do so they must validate their gender differences; she highlights how equality does not mean “sameness” (Bergoffen and Burke). The Second Sex can be understood as existentialist in that each individual is capable of, and should have the freedom of, defining who they are themselves and claiming the responsibility to live up to those standards and more. Upon further examination, I hope to be able to explore the responses and changes in perspective among members of society. Did people of different socio-economic statuses react differently? Were some groups of people more willing to entertain the notions introduced? Were women able to gain more freedom due to the introduction of ideas by thinkers like de Beauvoir? For further research into the intersection of discrimination and existentialism, I hope to examine how discrimination based on sexual orientation was tackled by existentialists. When many believe that there is no meaning to existence, how do existentialists provide justification? Do they provide any at all? Did the stigma around those who did not identify with heteronormative sexual identities change? Were members of this group able to be more expressive and open about their sexualities? Did more people voice support? Were there any implications for how religious groups and organizations viewed people who were part of the LGBTQ+ community?

As a school of thought addressing the meaning of life and existence and examining human nature, existentialism brought many changes to post-war Europe. This research aimed to identify how the newly introduced ideas affected society and various aspects of life, including religion, political and armed conflict, and discrimination. This philosophical movement changed the perspectives of people of different faiths about their respective belief systems and about religion in general. Existential questions and religion go hand in hand, as both help you understand how to live life. From Kierkegaard to Marcel, from Nietzsche to Camus, from Buber to Tillich, various ideas have been introduced throughout the duration of the movement, and they have influenced the very aspects of society by which these writers were themselves already shaped. For example, as a Christian, Kierkegaard integrated many elements from religion into his work; the best instance would be his use of the story of Abraham and Isaac to convey his idea of a teleological suspension of the ethical.

In Fear and Trembling, he uses this story to examine how religion and faith interact with philosophy and to justify an exception to the morals and ethics of society, as he allows for an individual to place their faith over their moral obligations. He cites Abraham’s choice to sacrifice his son, Isaac, for his faith and relationship with God to show how such a choice is a necessary feature of the Abrahamic faith. Not only does Kierkegaard attempt to justify these actions, but he also turns Abraham into a “hero” who redeems humanity from a meaningless cycle of birth and death. When spread to other Christians, this concept allowed them to wonder at Abraham’s actions and at what their own responses would be were they ever placed in a situation that forced them to sacrifice their own ethical understandings. Kierkegaard’s Christian faith shaped his writing in relation to Christianity, which helped other Christians “better” understand their religion as a faith radically ungrounded in doctrine. Similarly, other existential philosophers used the concepts and aspects of society with which they were familiar to challenge that familiarity. Existence precedes essence: we make ourselves, in fear and trembling, as humans. While we were able to understand some aspects of the changes brought about by existentialism, there is still much to understand and uncover. We do not understand how existentialism influenced society and common beliefs prior to World War II. One aspect of life this research fails to consider is culture and the expectations members of society hold for one another: would knowing that many are struggling to bear the burden of living create more leniency toward one’s peers at this time? How did people understand their peers after understanding existentialism? Did it change their opinions on material wealth and happiness? This research brings new questions: if it were not for the popularity of some philosophers like Jean-Paul Sartre prior to their existential work, would existentialism have gained the same level of popularity? In the same line of research, another question arises: would existentialism have risen to the same level of popularity if it were not for World War II, which left so many to question their lives in its wake?



“Absurd.” Merriam-Webster.com Dictionary, Merriam-Webster.
Bergoffen, Debra, and Megan Burke. “Simone de Beauvoir.” The Stanford Encyclopedia of Philosophy, Stanford University, 27 Mar. 2020.
Camus, Albert. The Myth of Sisyphus: And Other Essays. Vintage Books, 1991.
Flynn, Thomas. Existentialism: A Very Short Introduction. OUP Oxford, 2006.
Flynn, Thomas. “Jean-Paul Sartre.” The Stanford Encyclopedia of Philosophy, Stanford University, 5 Dec. 2011.
Gold, Michael A. “Existentialism in the Bible: Taking a Quick Look at Job and Ecclesiastes.” Medium, 7 June 2021.
Guignon, Charles B. “Existentialism.” Routledge Encyclopedia of Philosophy, Taylor and Francis, 1998.
Hendricks, Scotty. “How the Nazis Hijacked Nietzsche, and How It Can Happen to Anybody.” Big Think, 30 Sept. 2021.
Hoffman, Louis. “Existential Psychology, Religion, and Spirituality: Method, Praxis, and Experience.” San Diego, CA, 1 Aug. 2010.
“Humanism.” Merriam-Webster.com Dictionary, Merriam-Webster.
Irvine, Andrew. Existentialism. Boston University, 1998.
Judaken, Jonathan, and Robert Bernasconi. Situating Existentialism: Key Texts in Context. Columbia University Press, 2012.
Khawaja, Noreen. The Religion of Existence: Asceticism in Philosophy from Kierkegaard to Sartre. University of Chicago Press, 2016.
Lalka, Robert Tice. “Surviving the Death of God: Existentialism, God, and Man at Post-WWII Yale.” Yale University, 2005.
Michelman, Stephen. The A to Z of Existentialism. Scarecrow Press, 2010.
Pellegrino, Nicholas. “Embattled Belief: The Religious Experiences of American ...” Encompass, Eastern Kentucky University, 2013.
Sartre, Jean-Paul. Existentialism Is a Humanism. Yale University Press, 2007.
“Why 1940s America Wasn’t as Religious as You Think -- the Rise and Fall of American Religion.” Religion News Service, 11 Dec. 2014.
Wyatt, C.S., and Susan D. Schnelbach. The Existential Primer. 21 May 2020.
Yablon, Charles M. “Nietzsche and the Nazis: The Impact of National Socialism on the Philosophy of Nietzsche.” LARC @ Cardozo School of Law, Yeshiva University, 2003.
Zank, Michael, and Zachary Braiterman. “Martin Buber.” The Stanford Encyclopedia of Philosophy, Stanford University, 28 July 2020.


Investigation of the Influences of Christianity on Economic, Medical, and Social Reformation in China
Gracie Lin

As measured in 2021, there are over a thousand Chinese Christian churches in America, collectively representing a wide variety of denominations (Yang). My childhood was centered around my local Chinese Christian community; having grown up in the church in a family heavily involved in its practices, Christianity became a major aspect of my life. As I walked through the steps of my faith, expanding my reading on famous missionaries and evangelists, skepticism tugged at the back of my mind. The books I had been reading presented Christianity’s influence as wholly beneficial, but the history I had been learning through my studies of Christian influences seemed to speak otherwise. Intrigued by the true influences of Christianity on other countries, I decided to conduct comparative research on how Christianity permeates many aspects of a country’s life; however, such a topic proved so vast and widely varied that I decided instead to observe these influences in China specifically. Through my research on Christianity’s effect on China, I discovered that it had a multitude of effects. In particular, Christianity impacted China’s long-term economic development and the foundation of modern medicine, and it increased advocacy work surrounding the elevation of Chinese women. On the other hand, Christianity played a major role in inciting the Taiping and Boxer Rebellions, which caused millions of Chinese deaths. To begin understanding the influence of Christianity in China, we need to contextualize the introduction of, development of, and response to Christianity in China. Much of the information gathered about the introduction of Christianity in China is referenced from a stela composed by the Christian monk Jingjing in A.D. 781 (“A Brief History of Christianity in China”). Discovered in the early 1600s, the stela dates the introduction of Christianity in China to A.D. 635, when a Nestorian monk by the name of Aluoben traveled to Chang’an, the ancient capital of China (“A Brief History of Christianity in China”). Aluoben, otherwise known as Alopen, was a Persian bishop and the first to introduce Christianity to China (“Nestorian Christianity in the Tang Dynasty”). Prior to his arrival, there was likely a small group of Nestorians existing in China amongst Persian merchants, but Aluoben marked the historic beginning and first recorded account of Christianity in China (“Nestorian Christianity in the Tang Dynasty”).

The Tang Dynasty was marked by a period of interest and curiosity in foreign religions, and these values were exemplified by the Emperor, who allowed Aluoben to settle and establish a monastery (“A Brief History of Christianity in China”). In 638, Aluoben, alongside Chinese associates, completed “The Sutra of Jesus the Messiah,” the first Christian book in Chinese. This book explained in depth the relationships between Christian values and ancient Chinese traditions, pointing out that, for example, loyalty to the state and filial piety aligned with Christian teachings (“Nestorian Christianity in the Tang Dynasty”). The foundation Aluoben laid for Christianity grew exponentially under the intellectual toleration and protection that marked the Tang Dynasty. For the next 200 years, Christianity flourished, with versions of the Old and New Testaments translated into Chinese. This “golden age” came to an abrupt stop with the decline of the Tang Dynasty. With the ascension of Wu Tsung to the throne, Taoists took control of the Court. A combination of economic and political matters pushed Wu Tsung to persecute Christians in 845, ordering some 260,000 monks and nuns to return to secular lives (“Nestorian Christianity in the Tang Dynasty”). The persecution of monasteries and Nestorian Christians contributed to the near-disappearance of Christianity in China. In 982, a monk from Najran sent by the Nestorian Patriarch reported that “Christianity is extinct in China; the native Christians have perished in one way or another; the church which they had has been destroyed and there is only one Christian left in the land” (“Nestorian Christianity in the Tang Dynasty”). Christianity did not see a significant reappearance until the Mongols established the Yuan Dynasty in the 13th century. Similar to the policies of the Tang Dynasty, the Mongol court welcomed Christianity and allowed for its free practice. Christian missionaries were allowed to practice openly, and parts of northern China were governed by Christian tribesmen. Furthermore, the Mongol Empire and the Pope mutually sought to form a political alliance with one another (“A Brief History of Christianity in China”).



Franciscan missionaries, who were welcomed with open arms, were sent to China (“Christianity in China”). Giovanni da Montecorvino was a prominent missionary who founded the earliest Roman Catholic missions to China and eventually became the archbishop of Peking and the patriarch of the Orient (“Giovanni da Montecorvino”). Italian merchants also contributed to the spread of Christianity by founding Catholic communities in major trading centers (“A Brief History of Christianity in China”). However, this period of growth came, yet again, to another stop when the Mongols were expelled and the Ming Dynasty was established. The Ming Dynasty persecuted both Roman Catholic and Nestorian Christians, effectively halting all development of Christianity in China (“A Brief History of Christianity in China”). Christianity did not reappear until 1588, with the arrival of Matteo Ricci during the waning of the Ming Dynasty in the 16th century. Matteo Ricci was one of the prominent leaders of a new wave of Italian Jesuit missionaries set on evangelizing China (“Christianity in China”). The Ming Dynasty had closed off China to foreigners, so Ricci adapted his plan, immersing himself in Chinese culture and learning its traditions, customs, and language. Ricci himself described his plan as follows: So as not to occasion any suspicion about their work, the fathers [i.e., the Jesuits] initially did not attempt to speak very clearly about our holy law. In the time that remained to them after visits, they rather tried to learn the language, literature, and etiquette of the Chinese, and to win their hearts and, by the example of their good lives, to move them in a way that they could not otherwise do because of insufficiency in speech and for lack of time. (“Matteo Ricci”) His adoption of Chinese language and culture proved to be effective, as he entered China and became the first Westerner invited into the Forbidden City (“A Brief History of Christianity in China”). Unlike other missionaries, Ricci not only brought Christian teachings to China but also introduced Western concepts of mathematics, astronomy, and geography, creating a remarkable map of the world titled “Great Map of Ten Thousand Countries,” the first to show the Chinese intelligentsia the geographical relation of China to the rest of the world. Through the works of Matteo Ricci and other Jesuit missionaries, Christianity gained a solid foundation throughout China (“Matteo Ricci”). Christianity’s influence received another major spike in the 19th century following China’s defeat in the Opium War. As a consequence of the loss, the greatly unequal Treaty of Nanjing was set in place, leading to a massive influx of European merchants, soldiers, and, most notably, missionaries (“19th Century: European Encroachment & the Assault on Traditional Chinese Thought”). These European missionaries came mainly from the Protestant faith, and they came with a mix of results. Some missionaries dedicated themselves to evangelizing, while others abandoned their initial goals to pursue business or diplomacy work (“Hudson Taylor”).

However, the overwhelming introduction of Christianity served as a detriment to Chinese culture, as the newer missionaries carried a different methodology than the Nestorian and Jesuit missionaries. Rather than bridging Christian teachings to Chinese traditions and immersing themselves in Chinese culture, Protestant missionaries forced Western beliefs and customs on Chinese people. Nonetheless, European missionaries gained strong influence from the coastlines into the interior regions of China, where prominent missionaries like Hudson Taylor founded Christian organizations (“19th Century: European Encroachment & the Assault on Traditional Chinese Thought”). While Christianity had a solid foundation in China, this came to a violent end post-World War II as the Chinese Communist Party took hold of China (Poceski). The Communist Party aggressively sought to obliterate all religious beliefs and practices, destroying temples and churches and persecuting not only Christians but also Uyghur Muslims, Tibetan Buddhists, and more (“The Communist Party’s Crackdown on Religion in China”). The polarizing shift from decades of religious freedom and acceptance to widespread and open persecution of Christians and places of worship has had a devastating effect on China. The Chinese government holds firm control over any form of religious expression and organization by whatever means it sees appropriate. Some controls are relatively subtle, like the removal of 1,200 crosses from churches across Zhejiang province in 2015, while others are bold, such as the sentencing of a Protestant pastor in the same province to 14 years in prison for refusing to take down his church’s cross (Poceski). Yet despite the repressive religious policies, Christianity quietly flourishes in China through underground churches, online Christian communities, and vigilant self-censorship among openly practicing groups (Fulton).

Having laid out a comprehensive overview of Christianity throughout China’s history, from its beginnings during the Tang Dynasty to its current state, we may now turn to understanding the different ways in which Christianity has permeated many aspects of China’s development. First, let us examine the influence of Christianity on China’s economic development. A 2012 analysis conducted by Yuyu Chen, Hui Wang and Se Yuan investigated the effects of historical Christian activities on promoting local economic development. China is currently the world’s second-largest recipient of Foreign Direct Investment (FDI), and the largest among developing countries. The authors hypothesized that long-term interaction with Christianity closed the cultural gap between local Chinese residents and the outside world. Through various charity works, such as natural disaster relief, local Chinese communities became well acquainted with foreign missionaries and became receptive to Western values.



The mutual growth in the relationship between different cultures fostered an environment where improvements in cultural proximity could plausibly have generated significant impacts on the economy. This hypothesis was tested by analyzing the coefficients of two-stage least squares (2SLS) regressions whose dependent variables were the asset shares, revenue shares, and employee shares of foreign firms in manufacturing industries. The 2SLS coefficient was found to be significantly positive, indicating a strong likelihood of positive effects of Christian activities on China’s FDI and, in general, on long-run regional economic prosperity (Chen et al.). In another study, headed by Qunyong Wang and Xinyu Lin, boosts in China’s economic growth were found to be most positively and strongly correlated with Christianity, as opposed to the other major religions in China. As Figure 1 demonstrates, the coefficient was statistically higher for Christianity than for any other major religion in China. Furthermore, concentrations of robust growth were located in areas of China where Christian institutions and congregations were prevalent (Grim).
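For readers unfamiliar with the method, a generic two-stage least squares setup can be sketched as follows; the notation is illustrative only and is not drawn from Chen et al.’s paper. An instrument Z_i (for example, some historical measure plausibly related to missionary presence) is first used to predict the explanatory variable X_i (Christian activity), and the prediction is then used to explain the outcome Y_i (such as a region’s foreign-firm asset, revenue, or employee share):

\[ \text{first stage: } X_i = \pi_0 + \pi_1 Z_i + v_i, \qquad \text{second stage: } Y_i = \beta_0 + \beta_1 \hat{X}_i + \varepsilon_i . \]

A significantly positive estimate of \(\beta_1\) is the kind of result the study reports as evidence that historical Christian activity is associated with stronger local FDI and long-run growth.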

Figure 1. Coefficient of the relationship between different major religions in China and economic growth in China (Wang and Lin).

Wang and Lin explained this positive influence through two main arguments. First, government-recognized Christian institutions make up 16.75% of all religious institutions, and a large percentage of these congregations are found in China. These institutions tend to become a powerhouse for economic growth, stimulating development by spending directly on goods, services, salaries, and safety nets for individuals and communities. Furthermore, tens of millions of Chinese Christians are affiliated with unregistered Protestant and Catholic churches, which also covertly contribute to regional economic growth. Second, the ethics taught in Christianity, and in Chinese Christianity specifically, might have an influence on human development, encouraging and mobilizing the workforce to power the economy. For example, the Christian value of holding oneself accountable to God and to other believers leads to legal and more rational investment behavior, as opposed to illicit activities (Grim).


Aside from the institutions which push China’s development, the massive growth of Chinese Christians has led to revolutions for religious freedom, and the freedom they are slowly obtaining leads to economic benefits. It is predicted that by 2030, China will have over 224 million Protestants, more than all the Christians in the United States combined. However, as the Christian population grows, so does the range of social and academic backgrounds among converts. Christianity was mainly prevalent in rural areas from the 1950s to the 1980s, but the transition to a market economy and global integration in 1990 saw a parallel emergence of Christian businesspeople and Christian intellectuals. Later, populations of Christian lawyers, artists, and professionals of various fields began to grow in number. The rise of Christians in China has led to contributions toward expanding freedoms in Chinese society. Chinese Christian lawyers have fought against unjust civil and human rights policies targeting Christian and non-Christian citizens alike, while the greater fight for change is being covertly carried out by Christian businesspeople, intellectuals, and professionals working within the current social and political system. The progress Christians have made and continue to campaign for has opened up the freedom of religion for everyone in China (Yang). This can be correlated to economic influences because freedom of religion has been associated with strong economic growth. In fact, the world’s 12 most religiously diverse countries each outpaced the world’s economic growth between 2008 and 2012, and active participation of religious individuals in society has been shown to boost economic innovation (Grim). Thus, through empirical and statistical analysis, the influence of Christianity on China’s economic prosperity has been demonstrated through two main avenues: influences on the community and on the individual. A noted effect of Christianity on China is its influence on medical practices. The introduction of Western medical practices came mainly through Christian medical missions; however, influences date back to the 15th and 16th centuries, when Jesuit missionaries were prevalent in China. The Jesuit missionaries introduced early Western medical practices in China and were highly regarded by the emperors for their medical expertise (Fu). Protestant medical missionaries began entering China at the start of the 19th century to spread Christianity and propagate Western medicine. During this time, traditional Chinese medicine was the most widespread medical practice and had deep roots in Chinese culture, traditions, and ideologies. The introduction of Western medicine challenged traditional Chinese medical practices, which were deemed heretical and non-scientific by Western medical practitioners (Jo). However, an exchange of practices occurred as Western medical practitioners stayed longer in China, admitting to the benefits of acupuncture and traditional Chinese drugs. Medical missions to China did far more than introduce Western medicine: a major influence of these missions was the founding of the first modern clinics, hospitals, and medical schools, where official training for nurses was established (Choa).



For instance, the Yunnan Province along the southwestern border of China experienced significant effects on the modernization of its medical practices. In the late 19th century, Western medicine was incorporated into local Yunnan medicine and spread into inland China. Then, at the beginning of the 20th century, Catholic missionaries established church hospitals in Yunnan, such as the Dafashi Hospital built in 1901 and the Fudian Hospital in 1902. Furthermore, the church hospitals went on to create medical and nursing schools throughout Yunnan, promoting medical education and profoundly influencing the modern education of Western medicine in that province (Wang, Fu). The establishment of medical schools in China also specifically targeted Chinese women, who were unequal under Chinese law and unwilling to be treated by male Western doctors, as social custom prevented it. These factors resulted in an overwhelming need for female Western doctors, and Dr. Lucinda L. Combs became the first female medical missionary to be sent to Peking (Beijing), China, in 1873 (Fulton). During her time in China, Dr. Combs founded the first women’s and children’s hospital in 1875. Many female medical missionaries followed her lead, establishing more women’s hospitals; the first medical college for women was created by Dr. Mary H. Fulton. The aim of the college was to spread Christianity and modern medicine while elevating the social status of Chinese women (Pang). The empirical record surrounding the influence of Christianity on medical practices speaks for itself; however, statistical data detailing the specific influences of religion on health also provide interesting considerations. A study conducted by Zhang et al. sought to examine the relationship between religion and good mental and physical health in China (Zhang et al.). Utilizing self-reported measures of health and happiness, the study drew a positive correlation between religion and both better health and greater happiness. Specifically, the positive effects on health and happiness were found to be especially strong among Protestant Christians (Zhang et al.). In addition, missionaries provided medical services to marginalized groups which were traditionally shunned from receiving treatment. Rural populations, opium addicts, lepers, the blind, and those with mental illnesses found themselves able to receive treatment at the hands of missionaries. The missionaries also worked to elevate public health by encouraging personal hygiene and vaccination and by providing antiseptics and anesthetics (Spees). Thus, Christianity had a profound impact on the modernization of Chinese medicine and the establishment of medical schools, hospitals, and clinics, in addition to improving the happiness and health of those practicing. Yet another influence of Christianity on China was the impact of female missionaries, dubbed “Bible Women,” on social reformations. Female missionaries were incredibly vital to missionary work in China due to the extreme isolation Chinese women faced.

Confucian teachings governed Chinese women’s lives, dictating three main devotions: devotion of herself to her father before marriage, devotion to her husband in marriage, then devotion to her son in widowhood. Furthermore, as previously mentioned, women were allowed only limited contact with men outside their family, leading Chinese women to become isolated during most of their lives, especially in upper-class society (Spees). Women’s ability to work or gain independence was further inhibited by the tradition of footbinding. Footbinding was the process of breaking and binding one’s feet from a young age, typically beginning at around four to six years of age. The goal of footbinding was to achieve the ideal “golden lotus” foot, which meant the foot would be bent to around three inches long and the toes tucked underneath towards the heel, as demonstrated in Figure 2. Having the “golden lotus” foot was a sign of beauty, but it permanently impaired Chinese women’s ability to work (“Footbinding”).

Figure 2. A foot subjected to foot-binding processes compared to a normal foot (Pinterest).

Female missionaries felt troubled by the gender inequity and painful customs Chinese women had to endure and sought to reform them. Mari Gratia Luking, a female missionary, laments the oppression of Chinese women, saying, “We asked what had become of the girls. The sisters said the women and girls are among the greatest sufferers… but I am too full of the suffering of our people to write more” (Spees). Thus, female missionaries worked to abolish child marriage, footbinding, and infanticide. Additionally, these missionaries tackled the issue of child trafficking, in which poor parents who were unable to raise their children sold their girls to rich families in order to survive. The girls who were sold off were usually abused and badly treated. Western missionary women began to take in impoverished and deserted children, but a ban from the missionary board prevented them from continuing to do so (Spees). Instead, the children were placed with Chinese Christian families, ensuring the children’s safety. One such story of an orphan taken in by missionaries is that of a seven-year-old Chinese girl who was found dying on a mission headed by Luella Tappan.


The girl had been sold for 60 dollars by her desperate and starving family, and after some negotiation she was taken from her owner and handed to missionaries. She was raised under the hand of missionaries and went on to pursue a full education and become one of her school’s top graduates (Spees). Missionary women also sought to elevate Chinese women’s position in society by educating them. Prior to missionary influence, Chinese women had little to no education, so an important priority for the missions was to educate Chinese women. This was a major goal for several reasons, the first being the elevation of their social status and standard of living. Reverend Pang Ken Phin spoke up against the lack of education among Chinese women during a 1988 conference on “Mission History from the Women’s Point of View,” stating, “Education is one of the strongest tools to raise up the status of women and improve women’s lifestyle. An old Chinese saying states, ‘It is a virtue for a woman to remain ignorant.’ This traditional teaching hindered the education of our Chinese women. It caused a great loss to the family, to church and to society” (Spees). Thus, the education of Chinese women, the cornerstone and foundation of the traditional family and domestic life, would elevate the status of women and increase the possibility of jobs for them. The Jiangyin mission dedicated itself to educating women: female missionaries started by teaching in their homes, and the work grew into official boarding schools where four-year programs were offered in theology. However, the courses went beyond theology and taught mathematics, geography, and writing. Children were offered the same education and courses, and learned industrial and household skills in addition. The Jiangyin mission also set up a women’s ward, where a school was opened to train women to become nurses. As the program grew, the hospital trained over 231 Chinese nurses certified by the Nurses Association of China (Spees). However, many of the missionary women seeking to reform China carried an arrogant attitude. The elitist mindset of missionary women turned many Chinese people away from Christianity and fostered a sense of dislike and distrust of foreigners. Anna Pruitt, a missionary who worked in China, exemplified the arrogance of missionary women. While conducting domestic work with Chinese women, she attempted to “transform” them, assuming that traditional Chinese customs and norms were outdated and worth discarding. Instead, she pushed for “Western scientific education, the English language, and Christian ideals” to completely redefine Chinese society into a “Western-oriented” nation (Spees). Her naivete stemmed from the ignorant mindset that China was an uncivilized, heathen nation that needed saving. This belief is similar to the Western hesitation towards traditional Chinese medicine, which Westerners saw as heretical despite its proven benefits. The moral superiority and technological advancement Westerners believed they had over Chinese people led them to attempt to forcibly Westernize Chinese women, believing that conversion to Christianity and Westernization would free them from their oppression.

The moral superiority and technological advancement Westerners believed they held over Chinese people led them to attempt to forcibly Westernize Chinese women, in the belief that conversion to Christianity and Westernization would free them from their oppression. This savior complex prevented Western missionaries from influencing Chinese people in a fully positive way, and the boundaries dividing Western and Chinese people only grew stronger with the deliberate Western rejection of Chinese culture. Unlike missionaries such as Matteo Ricci, who immersed himself in the Chinese language so he could assimilate into Chinese culture, some Western missionaries outright rejected Chinese culture and maintained a Western lifestyle in China. In every aspect of daily life, from food to clothing to architecture, these missionaries made a point of refusing to adopt Chinese customs, and even believed that eating Western food would be one of the best defenses against "the threats of an alien environment" (Spees). Female Western missionaries did much to improve the social status of Chinese women by educating them, in addition to saving many children from being sold into abusive households or from starvation. The white savior mentality many missionaries carried, however, hindered much of their work. Finally, among the most severe, brutal, and deadly outcomes of Christianity's introduction in China were the Taiping Rebellion and the Boxer Rebellion. The Taiping Rebellion was a massive religious and political upheaval during the Qing Dynasty. Rarely referenced in Western literature, the Taiping Rebellion ravaged 17 provinces and took approximately 30 million lives, making it the bloodiest civil war in human history. The war lasted from 1850 to 1864, devastating the Qing Dynasty and setting it on the path to collapse (Newman). Prior to the Taiping Rebellion, the Qing Dynasty was experiencing a period of extreme economic success and growth. A positive trade balance was struck with the West, exchanging tea, silk, and porcelain for silver. The population was also booming, growing from 178 million in 1749 to 432 million in 1851, and alongside this growth China's cities expanded with the introduction of New World crops such as potatoes, corn, and peanuts. These successes only masked underlying instability that would quickly come to light and work to derail the dynasty (Newman). The rapid population growth became a burden, and the New World crops could no longer support it; the cultivation and irrigation necessary to sustain those crops eroded and degraded the arable land. Soon enough, large segments of the population faced starvation, and China began to experience a labor surplus. Alongside this labor surplus came high levels of unemployment, made worse by the dynasty's high state taxes. To top it all off, opium addiction was on the rise, and widespread usage of the highly destructive drug became endemic. Life for the average citizen worsened as time went on, while Qing bureaucrats and imperial court members



became increasingly corrupt, hoarding tax revenue and public funds. The First Opium War also left China highly subject to the British, after a decisive loss forced the signing of the Treaty of Nanjing (Newman). In combination, the growing corruption and economic and social difficulties contributed to the public’s resentment of the Qing Dynasty. Among these individuals rose Hong Xiuquan, pictured in Figure 3, the leader of the Taiping Rebellion.

Hong Xiuquan was a young man in 1837 when he failed his third attempt at the imperial civil service examinations. These exams were notoriously difficult and widely revered due to the prestige of a civil service career. Hong's third failure sent him into a nervous breakdown, during which he experienced hallucinations in which a heavenly father figure approached him. His visions went unexplained until he read a pamphlet from a Christian missionary in 1843. Hong came to the conclusion that he had witnessed God, and as he dug deeper into interpretations of his visions, he eventually concluded that he was the son of God and the brother of Jesus ("Hong Xiuquan"). Alongside his friend Feng Yunshan, Hong created his own religious group, the God Worshipping Society, and began preaching his own gospel. The society proved popular among the peasants and laborers of Guangxi province and the Hakka people, a sub-ethnicity of the Han, and Hong's followers amassed into a movement of 20,000 to 30,000 people by 1850. Driven by this conviction, the Taiping followers of the God Worshipping Society began the rebellion in January of 1851 in a series of small clashes with Qing forces.

Figure 3. Contemporary Drawing of Hong Xiuquan in 1860 by an unknown artist ("Hong Xiuquan").

Hong later declared the city of Jintian in Guangxi the seat of his new dynasty, the Taiping Tianguo, or Heavenly Kingdom of Great Peace; under this theocratic monarchy, Hong would be the Heavenly King (Newman). The Kingdom gathered a formidable army of roughly a million soldiers and began marching north, recruiting followers until they reached Nanjing. The violence that followed in the Taiping forces' wake was gruesome, as Figure 4 depicts. Nanjing, one of China's grandest and wealthiest cities, fell to the Taiping in March 1853 and became the capital of the Heavenly Kingdom. After claiming Nanjing, Taiping forces sought to cleanse it of Manchu "demons," ruthlessly executing, burning, and expelling Manchu men and women (Newman).

Figure 4. Taiping Rebellion No.2 by Song Zhengyin (Newman).

After the fall of Nanjing, the Heavenly Kingdom set out on the Northern Expedition in May 1853 to capture Beijing, the capital of the Qing Dynasty. The expedition fell short due to poor planning for the cold and harsh winters of northern China, and the Taiping were met by a revitalized Qing resistance. The sieges laid by the Taiping forces on cities between Nanjing and Beijing weakened their armies, and they were forced to retreat to Nanjing after a successful Qing counterattack (Newman). Despite the setback, the Heavenly Kingdom continued its conquest of China, and a series of victories against the Qing imperial troops allowed for the Taiping conquest of the Jiangsu and Zhejiang provinces. However, the advance on Shanghai marked the end of the Heavenly Kingdom. Assisted by Western forces, the Qing imperial army managed to push back the Taiping army and severely damage its forces. Riding these successes, the Qing army continued its reconquest of areas occupied by the Taiping and eventually reclaimed all lost land. In 1864, Hong commanded all his followers to eat wild weeds and grass, believing they were manna provided by God. Living by his own command, Hong too ate the wild weeds and grass, fell ill, and died in June of 1864 (Newman).


The Boxer Rebellion was another major event in Chinese history that was largely influenced by Christianity. This rebellion was a peasant uprising that occurred in 1900 and attempted to drive all foreigners out of China ("Boxer Rebellion"). The term "Boxer" was a name given to a secretive Chinese society by foreigners. The Yihequan ("Righteous and Harmonious Fists") was a group whose aim was to destroy the dynasty along with the Westerners who held privileged positions in China. The group's original intention, however, was not to rebel against foreigners; rather, it started as a society for spiritual forms of martial arts and later turned into a political organization in the 1890s (McGuffin). Towards the end of the 19th century, increasing economic impoverishment, foreign aggression, and a terrible drought and flood that devastated northern China planted the first seeds of the Boxer Rebellion ("Boxer Rebellion"). The Boxers began accumulating followers in the Shandong and Zhili provinces, where the drought and flood had done the most damage (McGuffin). The Boxers gained governmental support in 1898, when anti-foreign forces gained control of the Chinese government and persuaded the Boxers to unite with the Qing Dynasty to destroy the foreigners. The strong resentment the Boxers carried towards foreigners was largely rooted in the clash between Christianity and Confucian tradition. Confucianism is integral to Chinese culture and is based on several key relationships, such as those of family and dynasty. In communities practicing ancestral worship, the Boxers believed that churches, railroads, telegraph wires, and virtually any Western-introduced structure poisoned the country and the land that tied people to their ancestors. It was believed that these structures invited angry spirits and misfortune to China, and so they all had to be destroyed (McGuffin). Furthermore, Western Christian missionaries openly disregarded traditional Chinese ceremonies and family relationships and abused their privileges in China. For example, missionaries often pressured local officials to side with Chinese Christians in local lawsuits and property disputes ("Boxer Rebellion"). It was thus most often the case that Western missionaries would protect only Chinese Christian converts. If a family dispute required a legal decision, Western missionaries had the ability to bypass local authorities because they were exempt from various laws (McGuffin). This only deepened the animosity the Boxers held against Western and Chinese Christians. By late 1899, the Boxers were openly attacking Western missionaries and Chinese Christians, and this violence only increased in Beijing, where Boxers burned churches and killed suspected Chinese Christians on sight. The end of the Boxer Rebellion was brought about by an international force of some 19,000 troops hailing from Japan, Russia, France, Italy, the United States, Britain, and Austria-Hungary. This combined effort captured Beijing, freeing the foreigners and Christians who had been besieged there since June 20th. Negotiations were conducted, and after extensive discussions a protocol was signed in September 1901, finally bringing an end to hostilities ("Boxer Rebellion").

The majority of the casualties during the Boxer Rebellion were civilians, with thousands of Chinese Christian deaths and approximately 200 to 250 deaths of foreign nationals, most of them Christian missionaries ("Boxer Rebellion"). The Taiping Rebellion and the Boxer Rebellion thus have strong ties to the introduction and influence of Christianity in China. The Taiping Rebellion might largely have been avoided if not for Hong Xiuquan's introduction to Christianity. In the case of the Boxer Rebellion, the Westernization that Christian missionaries established in China inadvertently angered the Boxers of northern China, leading to the massacre of Chinese Christians and Western missionaries.

In conclusion, the introduction and spread of Christianity had long-lasting and deeply rooted influences on China's economy and the development of modern medicine, in addition to encouraging reforms for women's rights. However, Christianity was also an underlying cause of the Taiping and Boxer Rebellions, which led to millions of deaths in China. In addition to statistical evidence, much empirical data points towards the benefits Christianity has had for China's economy. Because the population of Chinese Christians is so large, their backgrounds are also widely varied. The growth of Christianity has led to increased advocacy for religious freedom in China, which may in turn carry economic influence, as freedom of religion has been associated with strong economic growth. Secondly, the medical missions carried out by Western missionaries had an incredibly significant influence on the foundation of modern medicine and medical schools in China. The first medical schools, clinics, and hospitals were established by Western missionaries, through which they introduced and taught Western medicine to the Chinese community. The medical care provided to Chinese communities also targeted marginalized groups, like opium addicts and the poor, in addition to treating women, who often hesitated to seek treatment from male doctors. The strong foundation laid by Western missionaries paved the way for China's eventual medical infrastructure; moreover, Christianity has also been statistically shown to affect the mental and physical health of Chinese citizens. Zhang et al. demonstrated a positive correlation between religion and both better health and greater happiness, with the effect strongest among Protestant Christians. Third, Western female missionaries had a major impact in working to elevate the status of women. Women in China were subject to strict social norms: not only were they highly discouraged from interacting with men outside their family, but many underwent foot binding, which severely restricted their physical movement. Female missionaries sought to tackle these unjust customs by advocating against foot binding and a myriad of other issues, like child marriage and starvation.



Before the missionary board's ban on taking in children, missionary women would take in and raise children who had been sold to rich families so that their own families could survive. To elevate Chinese women's position in society more broadly, female missionaries concentrated their efforts on education, teaching Chinese women in their homes and establishing schools especially for women. Lastly, the Taiping Rebellion and the Boxer Rebellion both have ties to Christianity, with motivations for rebellion stemming either from a misconstrued form of Christianity or from hatred of missionaries and Christians. Because Confucianism and ancestral worship are ancient Chinese traditions that hold the earth sacred, Western structures like churches and railroads were seen as desecrations inviting angry spirits to create chaos. Western missionaries in China also abused their privileges as foreigners, pressuring local officials to side with Chinese Christians and bypassing authorities because they were exempt from various laws. This dishonesty and preferential treatment of Chinese Christian converts contributed to the hatred the Boxers had for Christians. Furthermore, many Western missionaries sought to completely wipe away "uncivilized" Chinese culture in favor of a supposedly superior Western one. This ignorant attitude created both division and distrust between Western missionaries and Chinese communities. Through this compilation of empirical and statistical evidence, it is apparent that Christianity has had a significant influence on China's development in various aspects, namely its economy, medicine, and social reforms. The evidence presented in this paper points toward a net positive effect, as Christianity has been shown to correlate with China's massive economic boom and with better health among Chinese citizens, among other positive influences. However, this must be carefully weighed against the deaths brought by the Taiping Rebellion and the Boxer Rebellion, which stemmed, in some respects, from Christianity. These considerations raise questions of ethics, and deeper investigation of the long-term impacts of the Taiping and Boxer Rebellions would be worthwhile to pursue.

Asia for Educators, Columbia University. "19th Century: European Encroachment & the Assault on Traditional Chinese Thought." Living in the Chinese Cosmos.
Berkley Center for Religion, Peace and World Affairs. "Christianity's Growth in China and Its Contributions to Freedoms."
"Boxer Rebellion." Encyclopædia Britannica, Encyclopædia Britannica, Inc.
Chen, Yuyu, and Hui Wang. "The Long-Term Effects of Christian Activities in China," n.d., 27.
"Chinese Faith Communities Contribute Significantly to Local and National Prosperity."
Choa, G. H. Chinese University Press, 1990.
"Christianity in China." New World Encyclopedia.
"Does Religious Beliefs Affect Economic Growth? Evidence from Provincial-Level Panel Data in China." ScienceDirect. Accessed December 13, 2021.
"Footbinding." Encyclopædia Britannica, Encyclopædia Britannica, Inc.
"Frontline/World: Jesus in China, History." PBS, Public Broadcasting Service.
Fu, Louis. "Medical Missionaries to China: The Jesuits." 19, no. 2 (May 2011): 73–79.
Fulton, Brent. "Chinese Christians Deserve a Better Label than 'Persecuted'." Christianity Today, 9 Oct. 2020.
Fulton, Mary H., and the United Study of Foreign Missions. Inasmuch. BiblioBazaar, 2010.
"Giovanni Da Montecorvino." Encyclopædia Britannica, Encyclopædia Britannica, Inc.
Grim, Brian J. "What Christianity Contributes to China's Economic Rise."
"Hong Xiuquan." Encyclopædia Britannica, Encyclopædia Britannica, Inc.
"Hudson Taylor." Christian History, 8 Aug. 2008.
Jo, Jeongeun. "[A Study on the Awareness of Chinese Medicine by Medical Missionaries: Focused on the China Medical Missionary Journal (1887–1932)]." Ui Sahak 24, no. 1 (April 2015): 163–94.
"Matteo Ricci." Encyclopædia Britannica, Encyclopædia Britannica, Inc.
Nestorian Christianity in the .
Pang, Suk Man. (1899–1936). 1998. Hong Kong Baptist University, Master of Philosophy. Wayback.
Poceski, Mario. "There's a Religious Revival Going on in China -- under the Constant Watch of the Communist Party." The Conversation, 6 Aug. 2021.
"Rebels: The Boxer Rebellion."
Spees, L. P. "Missionary Women in China: Changing China, Changing Themselves," n.d., 13.
"The Taiping Rebellion: The Bloodiest Civil War You've Never Heard Of." TheCollector, 4 July 2021.
Unknown Artist. "Christians in China Being Tortured and Murdered during the Boxer Rebellion (1900)." Britannica.
Unknown Artist. "Contemporary Drawing of Hong Xiuquan in 1860." Britannica.
Wang, Jian, and Liling Fu. "[An Introduction to the Transmission of Modern Western Medicine in Southwestern Borderland]." 45, no. 2 (March 2015): 87–90.
Yang, Fenggang. Chinese Christians in America: Conversion, Assimilation, and Adhesive Identities. The Pennsylvania State University Press.
Zhang, Jing Hua, Haomin Zhang, Chengkun Liu, Xiaoyang Jiang, Hongmin Zhang, and Ojom Iawloye. "Association between Religion and Health in China: Using Propensity Score Matching Method," n.d., 27.


Social Media in Modern Christianity Evelyn Ong

I attended a Southern Baptist Church retreat, Caswell, in North Carolina a few times between 2017 and 2018. It was unlike anything I had ever experienced before. When I had attended church in my hometown in North Carolina, the demographic was quite different; most of the congregation were older adults, with a few young families and children. These services were very much the typical Protestant service: silence your cellphone, open your hymnal, listen intently to the pastor for an hour. Caswell was the complete opposite: it was composed entirely of high school students using their phones to film different parts of the service, which included engaging skits, live music and dancing, and a sermon. Increased access to and usage of media have pushed the Christian Church toward modern adaptations of its traditional forms of education, outreach, and infrastructure. The two types of church services I attended during this time showed both an incredible contrast between the traditional form of evangelism and the younger generation, and the complex efforts of contemporary religious institutions to attract young people. The services I attended at the Caswell retreat were multi-media productions that included a live band playing contemporary Christian music, several actors who would perform skits, spoken word pieces and other theatrical performances, and short film clips used throughout the sermons. Together, these pieces formed an intricate service designed to capture the attention of young people and excite them about whatever the focus of the service was. In 2018, I spent several months attending a newly founded non-denominational local church. Its demographic was also a great contrast to the Southern Baptist Church I had been attending, with young families and teenagers filling most of the audience. This church, similar to the Caswell retreat, incorporated music, film, and social media into each of its services. Further, and similar to the megachurches that will be discussed in section I of this paper, congregants were encouraged to use their smartphones to share a specific thought and hashtag related to the church. Each month, the church would have a catchy, popular-culture-related theme to help attract new members. The parallels between these practices all circle back to Christianity's adaptation to a modern world that relies on technology and works through social media. In this paper, I will be discussing the unique and interesting outcomes


that have come from Christianity, a religion solely based on a text, acclimating to the current age where nothing survives if it isn’t digitized. This paper will discuss two megachurches and their use of technology in their services as well as how social media can alter the integrity of a church. It will then analyze how social media and technology have catalyzed the deinstitutionalization of Christianity and encouraged individualism, pluralism, and syncretism within evangelicalism. The final section will address the mishaps of megachurches in the social climate today and how social media factors into the reaction to controversial events.

Evangelicalism has traditionally been fueled by intimacy, emotion, and exclusivity. Since its emergence in the first century, Christianity has been a religion built upon the intimate, direct relationship between a believer and their God. For hundreds of years, the Bible was held exclusively in the hands of elite clergymen. This created a dynamic that forced church members to actively engage with the pastor and his sermon, allowing for a very narrow interpretation of the Bible. Those who practiced Christianity had very little variation in their beliefs because they believed only what was told to them. The exclusivity of this practice formed very intimate relationships and a unique dedication to the church. The Church flourished through this practice of withholding the Bible from common people by requiring members to become educated only through its instruction, mediated by its authority alone. In this section, I will be discussing the reasons behind the widespread integration of media and popular culture into the traditional practice of Christianity. The printing press initiated the deinstitutionalization of Christianity, and social media has only perpetuated it. When the printing press was invented in 1436, it immediately changed the world beyond just Christianity. The first Bible was printed in 1455, and this altered the foundation upon which Christianity was built. The Bible slowly became more accessible to people of the Western world. Social media and the internet are the modern version of this revolutionary invention. Both phenomena resulted in a great increase in communication and individualistic thought, prompting the deinstitutionalization of Christianity. Social media is used today to publicize personal thoughts and experiences, introducing peers to different ideologies, something that is historically frowned upon by the church. This exchange of beliefs has created a new generation of people less likely to engage in religion and spiritualism (McClure).



Due to social media's prevalence today, it has become nearly impossible for Christian churches to capture audiences without the use of popular culture. Having grown up with the internet and smartphones, the newest generation, nicknamed "iGen," cannot be detached from the addictive entertainment of social media. In a study done by Reviews.org, 48 percent of Americans admit to being addicted to their phones (Wheelwright, 2021). The constant reminder, "Please silence your cell phones," has become an ineffective norm across the globe. When attending a church service, throughout the pews you will see teenagers checking social media, adults reading emails, and the occasional toddler playing video games on their parent's smartphone to pass the time. It is almost impossible to draw the undivided attention of multiple generations, even for just an hour. Churches have begun to make productive use of this addiction to social media by incorporating popular culture into their sermons and encouraging congregants to publicly share their thoughts about the service on social media. Megachurches superficially use social media and the internet to cater to younger people and the current generation's addiction to screens. This is done through flashy propaganda and by encouraging congregation members to share their thoughts on personal social media; however, it can be seen as a method of monetary gain for the church rather than a genuine sharing of faith. Deborah Justice discusses the use of popular culture in two well-known megachurches in two vastly different areas: CityChurch in Würzburg, Germany, and Lives Changed By Christ (LCBC) in Pennsylvania, US. Justice found that both churches use different forms of media to engage their congregations without taking away their access to their phones. Both churches make use of popular film clips and music during their services in an attempt to make certain topics of the Bible familiar to younger generations. In Justice's report on an LCBC service, she mentions the pastor's use of film clips from a recently released movie, done perhaps to break up the density of a sermon or to keep listeners involved. Justice also discusses a relevant paper written by Lynn Schofield Clark that questions the morality of churches using popular culture in their sermons. Clark states that churches will often take film and video clips out of their context and morph them into messages relevant to their lesson. Building on this point, one can question whether churches should use clips that may not represent what they actually believe. When churches make use of such material in their sermons, it can come across as hypocritical. Justice observes an interesting interaction in which CityChurch used a clip, considered inappropriate by the Christian community, to exemplify staying true to Christ even amid worldly distractions. When the pastor was questioned about his choice of that particular clip, he responded by flipping the blame entirely onto the congregation.

The pastor's response was along the lines of 'you already view things considered scandalous' (116). However, does this dismiss the inappropriateness? Does it make such material acceptable to use for a multi-generational crowd? If a pastor uses these clips in an educational format, does it take away from the sinful nature of the film? These questions probe the reasoning behind the reliance on popular culture and whether it exists to draw attention or to educate. The mediatization of Christianity has further blurred the line between superficiality and genuineness, forcing religious institutions to compete with more than just local contenders. Megachurches and other non-denominational institutions now use social media as their primary form of propaganda. I have had an experience similar to Justice's point about LCBC and other megachurches' use of catchy titles to advertise their educational themes. LifeSprings Church is located in Sanford, North Carolina, but has two other campuses around the state where services are streamed. I attended this church for about a year, and it shared many of the same practices that Justice reports on in Pennsylvania and Germany. Each month, there were themes with memorable titles and flashy brochures to give out to friends and family in hopes of drawing more members. This church specifically used social media to capture its audience, mostly young adults and families. Before each service, there would be a relatively biblical saying and an encouragement to post the statement on social media with a hashtag related to the church. The church parallels many of the techniques previously mentioned, and the mediatization of Christianity is extremely prevalent throughout LifeSprings Church's practices. Mediatization exists in many different forms in today's world of Christianity. Whether you are examining a megachurch with thousands of members or a smaller local church, it is evident that Christianity has become heavily dependent on social media. Media is used to increase church attendance, improve members' attention, and draw new constituents. Many aspects of LCBC, CityChurch, and LifeSprings Church have been mediatized through the incorporation of popular culture, streaming, and propaganda spread through social media. One third of church attendees have reported that their church live streams its services (Krings, 2021). The same source reported that Life.Church, with 39 locations in 12 different states, streamed to 4.7 million people in just one week. This presents a unique phenomenon: an entire religion existing and interacting on the internet, even though a hands-on relationship with a physical text is imperative to its fundamental principles. In an interview in an article written by Chris Stokel-Walker for the BBC, Reverend Pete Phillips discusses the use of smartphones in Christianity in general. He also discusses how the Bible does not translate well into digital form: "But you know that Revelations is the last book and Genesis is the first and Psalms is in between. With a digital version you don't get any of that, you don't get the boundaries… you've no sense of what came before or after" (2017).



This point is extremely important, for it recognizes the significance of the physical Bible in Christianity and how the digital universe can negatively affect the practice of the religion. The mediatization of Christianity has been forced by the public's attachment to smartphones. Many megachurches around the world have turned to popular culture and social media to gain interaction with and membership in their services. For example, CityChurch in Würzburg, Germany, encourages congregation members to text their questions to the pastor's assistant during the service (Justice, 2014). In Stokel-Walker's article, Reverend Pete Phillips also mentioned that "technology has shaped religious people themselves and changed their behaviour," and that "The attitude has changed because to restrict people from mobile phone use now is to ask them to cut their arm off" (2017). These statements speak to the necessity for Christianity to adapt to the developing world of technology in order to survive. Churches have recently been capitalizing on this modern addiction to social media by dressing it up as a way to further engage with and support church services. To return to my earlier point, long before the invention of smartphones, modern communication techniques, and the internet, Christianity as an institution had an element of intimacy and exclusivity that attracted and held its members. Because the Bible was not accessible to common people, the Church grew in an interesting way, as members grasped onto its every word. From the printing press, to industrialization, to the internet, science and technology have brought 'distractions' to the world of religion. In the same article by Stokel-Walker, Reverend Liam Beadle says, "With the advent of social media, I think we are being reactive, we're jumping on the bandwagon" (2017). Both of these points consider how Christianity has had to adapt to the world of the internet rather than shaping its own outcome. The complexity of modern life has forced Christianity to become creative and delve into the world of social media. This means using propaganda that will attract younger generations through social media, using popular culture in services to uphold that attraction, and making Christianity accessible to all, perhaps the greatest contrast from its origin.

The internet and social media have broadened access to the Bible, leading to the development of many unique practices of Christianity. Much like the invention of the printing press, technology has broadened access to the Bible, and at an even greater rate today. The ability to read the Bible at almost any moment, from an app on a smartphone or computer, changes the foundation of Christianity. What was once built upon interpretation handed down through the hierarchical structure of the Roman Catholic Church is now a religion divided into unique opinions among the congregants of any given church.

This access to the Bible allows for personal interpretation and acceptance of the written scripture, creating an extremely wide variety of what people deem Christian. In the following section, there will be an in-depth discussion of the effects the digital translation of the Bible has had on the traditional form of Christianity, and how this factors into increased practices of pluralism, syncretism, and individual spiritualism within the branch. The practice of Christianity was extremely intimate and traditional until the internet introduced worldwide communication among individuals. The entire premise of social media and networking sites is to share parts of your personal life online with friends, family, and sometimes even strangers. This inherently erodes the intimacy and the importance of privacy in the Christian religion. Many social networking sites have created the possibility of sharing the intimate details of one's beliefs with people around the world. Even further, media has allowed streamed services to reach millions of Christians in their homes each week. Life.Church streamed to over 4.7 million members in the first week of the COVID-19 pandemic of 2020 alone (Dacast, 2021). This, however, contrasts with my point. In the same study by Dacast, they found that almost half of Christian worshipers appreciate the ability to practice from the comfort and privacy of their own home (Dacast, 2021). I find this interesting because, although this is one benefit of live streaming church services, it becomes less personal to broadcast what could be an intimate, meaningful conversation between congregation members and pastors to millions of people nationwide. Access to live streamed and pre-recorded services allows for further deinstitutionalization and makes Christianity as a whole less personal. In an article written for the New York Times, Hansen discusses the idea that larger churches, with greater resources and the capability to produce high-quality streams, overpower traditional local churches with smaller attendance (NYT, 2021). Attending a church service via live stream adds an element of convenience that may encourage Christians to "try out" many different churches, whether in their area or thousands of miles away (NYT, 2021). This leads larger churches, with greater draws such as live music, social media engagement, and catchy propaganda as mentioned in section I, to pull attention away from the individual strengths of smaller local churches. The digitalization of the Bible has led to more individualism and unique beliefs among the Christian community. As Reverend Phillips addressed in the BBC article, when the Bible was made universally available, it changed the face of Christianity. Not only did it affect the way the Bible is read and understood chronologically, but it also altered how the Bible is felt and interpreted when it is not being fed directly through a higher source, like a Church. Technology and the internet have made communication available to individuals across the globe, increasing diversity within the Christian community.



Global access to the Bible has also opened it up to personal interpretation. Without the overarching hierarchy of the traditional Christian institution, congregations have the ability to read and understand the Bible directly, in whatever terms they deem appropriate, outside of the pressure of the church. Mobile phones have diminished the sacredness of the practice of Christianity by providing distractions and drawing attention away from its foundational principles. Not only have mobile phones altered the way the Bible is used and interpreted, but they also create an entire field of distractions readily available in the back pockets of over 85 percent of Americans today (Pew Research, 2021). The Christian religion considers many worldly diversions sinful in the eyes of its God. In a paper written by Dale Sims, he states that 70 percent of all men, including Christian men, visit pornography sites at least once a month (Sims, 2015). The act itself is less important to this point than the overall idea that distractions and temptations are practically served directly to individuals, thanks to the modern age of technology. With increased communication and networking, the exchange of practices and beliefs has caused a growth in spiritualism. To further develop the idea of contrasting perspectives made accessible by the internet, in discussing the "Facebook Effect," Paul McClure of Baylor University said, "Religion, as a result, does not consist of timeless truths . . . Instead, the Facebook effect is that all spiritual options become commodities and resources that individuals can tailor to meet their needs" (2016). This represents the vast change in the role of religion in society from its beginnings to the twenty-first century. Religion originally existed as a multi-dimensional explanation for many events of the world, mediated through the hierarchy of popes, bishops, and everything in between. Today, however, religion is introduced as something not imperative to existence but rather something to engage in for personal gain. The Facebook effect that McClure addresses is an interesting phenomenon relative to the spread of information and how it can affect an individual's interpretation of religious information. The use of social media increases the odds of practicing pluralism and syncretism. In McClure's study on social networking use, he found that social media users are far more likely to accept religious beliefs different from their own. To further that point, his study found that individuals who use social networking sites are overall more likely to accept the practices of syncretism and pluralism (McClure, 2016). Syncretism is defined as the amalgamation of different religions, and pluralism as a theory or system that recognizes more than one ultimate principle (Oxford Languages). This indicates that users of social media are more receptive to other beliefs and practices, a great contrast to the monopoly Christianity has historically held on the realm of religion.

The introduction of social media has fostered communication and interaction between very different cultures and sectors of society, creating a new and rich culture unique to the institution of Christianity. Individualism within the church has flourished with the integration of social media into modern Christianity. In a study by Rob Nyland at Brigham Young University, no significant relationship was found between an individual's religiousness and their use of social media (2007). With that being said, it is important to discuss the possible uses of social media for those who are religious, regardless of whether there is a pattern. Social media makes possible interaction between people with similar beliefs as well as between people with opposing beliefs. This spectrum of religiosity can create a range of beliefs and practices that a user of social media might accept, fostering a unique form of individualism within the Christian community. The accessibility of varying principles and interpretations can almost certainly create dissonance within a practitioner and formulate a distinct set of beliefs, ultimately establishing a diverse population of Christian practices. In the article "Three Forms of Mediatized Religion," Stig Hjarvard discusses the presence of media in religion and how it serves as a deliverer of information in different formats. This is interesting because the author also addresses how the media's main purpose is to gain attention; the Church's use of media in modern practices thus equates to a desire for attention from the current generations, returning to the question of the modern church's integrity when using social media for propaganda (Hjarvard). The use of social media may also make a user more receptive to their own desires and beliefs. The public expression of individualistic beliefs through social media is a new phenomenon relative to the history of Christianity, which has long been a rigidly structured practice of religion.

Megachurches have often been found to engage in acts contrary to many Christian practices, and this superficiality has become more prevalent as the social climate of the developed world changes quickly. A megachurch is defined as a church with over 2,000 members in its congregation and is typically part of the evangelical tradition (Wikipedia, 2021). Megachurches are known for their high number of members across in-person services as well as online streaming. Churches this large are built upon the current capabilities of modern technology, in association with many different forms of media used to advertise and draw attention to the church. In this section, I will discuss the problematic patterns that have been found among some of the most popular megachurches today and how these are based upon the integration of media into Christianity. Larger megachurches are more susceptible to public scrutiny at the hands of social media and network broadcasting.

Fifth World



Hillsong Church was founded in Sydney, Australia, in 1983 and has only grown in membership since then. It currently has over 150,000 weekly members worldwide, with churches located in 30 countries. In November of 2020, Carl Lentz, a head figure at four Hillsong churches in the northeastern US, was accused of sexual abuse in his leadership position at Hillsong Church. In an elaborate article in Vanity Fair, Lentz's background and rise within Hillsong are explained: he became a relatively prominent celebrity, baptizing Justin Bieber and associating with several other well-known artists and actors. To return to Hjarvard's observation that media is used to gain attention, Hillsong Church is a perfect example of how this attention quickly became a liability. Lentz's popularity among celebrities grew as he engaged on social media with the Biebers, Kevin Durant, and other prominent figures, posting at weddings and on vacations (French, 2021). In Susan Raine's article on grooming and child abuse in the Catholic Church, she discusses the widely shared belief that the Christian church has been notorious for sexual abuse of minors for years (Raine, 2019). Although this was a pattern within the church before technology and social media, the fallout from cases like Lentz's has become increasingly severe. It is difficult to argue against the idea that churches use social media to gain attention in an effort to grow exponentially, as Hillsong did, and that this attention ultimately harmed the church's reputation following Lentz's incident. Social media fostered Lentz's celebrity existence and then furthered his downfall through Hillsong as an accused perpetrator of sexual abuse; it also supported the global broadcasting of the accusations against him. Discrimination in larger church denominations is likewise aggravated and protested with the support of technology and social media. In 2019, the United Methodist Church announced that it would be strengthening its ban on LGBTQ+ inclusion in the Church (NBC, 2021). The Methodist Church has approximately 12.6 million members worldwide, with the majority in the United States (NBC, 2021). The decision not to recognize same-sex marriage came with plenty of backlash from members of the organization through social media. This has led to an anticipated split of the United Methodist Church, with a breakaway Global Methodist Church that does not plan to support or recognize the LGBTQ+ community within its branch. A group of students in Omaha, Nebraska made the decision not to continue their confirmation process after hearing of the UMC's decision to exclude LGBTQ+ members (CNN, 2019). The church and the student group posted on social media confirming their protest of the decision and their continued support of their LGBTQ+ members. This use of social media created a positive environment, encouraging other churches and individuals

to protest the discriminatory practices of the United Methodist Church in the US.

This paper ventured through many different aspects of the religious sector of our society today, beginning with how technology is in some ways similar to the printing press. Just as the printing press gave individuals universal access to information and provided Christianity with an increase in believers, we can expect social media to do the same. There are benefits and downsides to incorporating social media into church services, but overall churches have seen an increase in attendance with the assistance of social media and the technology that allows live-streamed services. The use of popular culture in church services has also been a method of keeping younger generations engaged and attending Christian services. With broad access to the internet and knowledge, it is quite easy for young people to make their own decisions in their religious and spiritual journeys, rather than relying entirely on their family's practices to determine their religious path. Using popular culture and social media to propagate its message is just one of evangelicalism's newly adopted techniques to continue attracting the newest generation of Christians. Media today serves many different purposes in the world of Christianity: educational assistance, propaganda, engaging with new members, and keeping up with the changing world so that churches can maintain their relevance. With widespread access to the Bible, recorded church services, and other resources made available to almost everyone through technology, Christianity has been able to spread globally, but this access has also brought about the deinstitutionalization of the religious entity. The ability to read and interpret the Bible and then share those interpretations with the world is a direct factor in individualism within Christianity. As opinions form without the pressure of clergymen, it has also been found that those who engage with social media or the internet are more likely to accept others with different beliefs and practices (McClure, 2016). Pluralism and syncretism are two practices that are rising in participation as individuals begin to truly explore themselves and their interests through the education that technology provides us. Social media has become a platform not only to learn about people whose interests and beliefs differ from one's own but also to share personal experiences and find others who may align well. This is vastly different from traditional Christianity, where there seemed to be one path for all members and not much variation, and where any variation that did exist went unaddressed. One of my remaining inquiries after exploring social media and technology in religion would be the difference in engagement that a church may find in someone who practices pluralism or syncretism versus a traditional evangelical. It would be worth looking further into which practices are exercised the most by those who do practice pluralism or syncretism, and which principles are being left behind.



Would there also be a correlation between socioeconomic status, environmental factors, and geographical factors and the practices that are being used and those that are not?

Clark, L. S. (2014, April). Religion, Twice Removed: Exploring the Role of Media in Religious Understandings among Secular Young People. Oxford Scholarship Online.
Faith and 'The Facebook Effect': Young Social Media Regulars Less Committed to One Religion, Baylor University Study Finds. (2016, May 16). Media and Public Relations, Baylor University.
French, A., & Adler, D. (2021, February 11). Carl Lentz and the Trouble at Hillsong. Vanity Fair.
Hansen, C. (2021, August 8). Opinion | What We Lose When We Livestream Church. The New York Times.
Hepp, A., & Krönert, V. (2008, January). Media Cultures and Religious Change: Mediatization as Branding Religion. In Conference "Religion, Media Process and the Transformation of the Public Sphere: A Day Symposium" at CRESC Centre for Research on Socio-Cultural Change, 9th of January (pp. 1–11).
Hillsong Church. (n.d.). About Hillsong Church.
Krings, E. (2021, October 27). Church Live Streaming Statistics | Streaming Virtual Service Trends for 2021. Dacast.
Life.Church Locations. (n.d.). Life.Church.
LSC App Home Page. (n.d.). LifeSprings.
Maxouris, C. (2019, May 20). Omaha teens are rejecting confirmation in protest of church's anti-LGBT stance. CNN.
Mobile Fact Sheet. (2021, April 7). Pew Research Center: Internet, Science & Tech.
Nyland, R., & Near, C. (2007, February). Jesus is my friend: Religiosity as a mediating factor in Internet social networking use. In AEJMC Midwinter Conference, Reno, NV.
Raine, S., & Kent, S. A. (2019). The grooming of children for sexual abuse in religious settings: Unique characteristics and select case studies. Aggression and Violent Behavior, 48, 180–189.
Roos, D. (2021, November 9). 7 Ways the Printing Press Changed the World. HISTORY.
Stokel-Walker, C. (2017, February). How smartphones and social media are changing Christianity. BBC Future.
The Associated Press. (2021, March 2). United Methodist conservatives detail breakaway plans over gay inclusion. NBC News.
Wheelwright, T., Buchi, C., & McNally, C. (2021, September 6). Cell Phone Behavior in 2021: How Obsessed Are We? Reviews.org.
Wikipedia contributors. (2021, November 27). Megachurch. Wikipedia.


The Evolution of Queer Representation in Media in the United States Pearl Maguma

How has queerness changed over the hundreds of years of the United States' history? Over the many years of American history, people's ideas, morals, and relationships have changed and evolved to fit societal norms: it's human nature. These ideals are reflected in home life, in schooling, and especially in the media, where most cultures are portrayed. The consumption of media is a part of our daily lives and, whether we realize it or not, can affect how we perceive the world, other people, and ourselves. LGBTQ struggles, stories, and relationships are often left out of this media representation, and when they are represented, the stories are completely misconstrued, often degrading or demonizing that identity. In the United States, this erasure or misrepresentation has its roots in colonization. The struggle of LGBTQ people for progress, including self-representation in media, has a long history. It is a constant struggle, and if it has recently begun to get better, then this is because LGBTQ people have refused to give up. The idea of gay relationships and gender queerness has always been part of the history of America, dating all the way back to pre-colonization. Many of the diverse peoples of Native America were open to what we would today call transgender identities. Some tribes were also open to relationships between transgender people and people who lived as the gender they were assigned at birth. But a drastic change came with the arrival of Europeans, who proceeded to colonize the continent and sought to wipe out most of the Native peoples and their traditions. The cultural void that was left was filled with traditional European values centered around Christianity, meaning the condemnation of homosexual or gender-queer identities. With the imposition and growth of Christian beliefs in the colonies, then the states, conformity was key to fitting into society, a theme that carried through most of the 18th to 20th centuries. These standards are also reflected in many novels of the period, reinforcing the set expectations of the time. But with the rise of the industrial revolution in America and westward expansion at its peak, forms of unruly sexual behavior, ranging from prostitution to homosexuality, became all the more common in developing cities. This newfound sexual freedom continued throughout the 19th and 20th centuries, and soon communities that went against the norms of society were born. Places like Greenwich Village in New York were becoming more and more popular with the queer generation, and "safe spaces" were being formed.

These "sinful-folk" were beginning to make their presence known through crossdressing, which had long been known on the stage but which now transformed city streets into sites of gender performance. Though not taken seriously, the queer community was starting to enter the spotlight, slowly but surely. Then, in the late '60s, the LGBTQ+ community had one of its biggest breaks into mainstream media with the Stonewall riots. Led by two trans women of color, the riots were a response to discrimination against queer people and raids on their spaces, and their cries for liberation solidified their place as a main component of the civil rights era of the '60s. In the '80s, even as more people were fighting for gay liberation, the AIDS epidemic began. Because of how it is transmitted, AIDS (which is caused by HIV) spread rapidly through the gay community, and with the lack of government assistance, many died from it. Conversion camps were also a threat to the community, for these would put young queer folk through rigorous religious training and torturous psychological experiences. The struggle seemed to be a constant for the community, with hardly any positives to make up for it. Now, in the 21st century, the perception of the LGBTQ community has changed drastically, with more people accepting queer people and greater media representation of queer lives. Though progress is showing, there are still questions the US and its citizens should ask themselves: how far have we as a society grown since colonization and bigotry against queer people? How are we still being held back from the path of true acceptance? How do we teach future generations to accept the LGBTQ community so as not to take any steps backwards? All of these are essential questions to answer in order to improve today's representation of queer people and make sure it stays consistent in the future. With the entrance of the new age of media in the 1910s, film began to greatly increase the representation of sexuality. In addition to the rapid growth of this newborn medium, there was a lack of restriction, which gave filmmakers more creative freedom to bring new ideas to the big screen. One of their creative decisions was to add queer people, but not in the way one might hope. In the silent film era, most communication of the characters' feelings was done through movement and body language, which meant pushing character types to the extreme; in the context of gays or lesbians onscreen, it meant using stereotypes.



Thus, one of the most prevalent stereotypes for gay men during the early 1900s was the "sissy," a hyper-feminized man not only in looks but in personality. This theatrical caricature would often be portrayed working in jobs normally assigned to women, displaying emotions deemed "too much" for an "average" man (especially emotion toward other men), and disinterested in being a "provider" or otherwise engaging in the strenuous life (Somerton, 2020). Because this version of a gay man was so wildly different from the "traditional" man (the object of much concern at that time, as Teddy Roosevelt's speeches reveal), he was seen as comic relief, never to be taken seriously. The case for lesbians wasn't much better because of how they were masculinized. The early lesbian in film would often be sporting male clothing like tuxedos and displaying near-predatory or flirtatious behavior toward the women she interacted with, while still getting attention from men despite her prominently presented sexuality. The treatment of transgender women was likewise reduced to the joke of a man pretending to be a woman and receiving violence because of that fact. These dehumanizing tropes were used until the 1930s, when a new, restrictive set of rules for American filmmaking, the Hays Code, began to be enforced. Due to the intense popularity of film, some powerful people believed that there should be regulations so as to "not corrupt the minds of viewers" and keep the theater "moral." Hollywood came up with a list of 36 prohibited or tightly policed themes that could not be shown or had to be handled carefully. The major listed 'Don'ts and Be Carefuls' of Hollywood included the following: crimes against the law (which could not be shown in a way that inspires imitation), brutality, sex, white slavery, vulgarity, profanity, obscenity, disrespect to titles, flags, races, and nationalities, and more. The Hays Code created a stark change in how movies were produced and approved by imposing censorship that would last until the mid-1960s. Of special interest to the censors was sex, with many subcategories including public displays of affection, sexual violence and rape, and perceived sexual deviancies like adultery and homosexuality. This censorship was a massive obstacle to the creative freedom of queer filmmakers in Hollywood, as some regulations, queerness in particular, were more heavily enforced than others. To get around this, directors who wanted a semblance of representation in their films had to be subtle with the queerness; thus, they constructed codes of sexual expression, including signs and innuendos that a gay viewer could pick up on but that would fly over a straight viewer's head. Executing this required careful planning, but most of the time a tragedy would nevertheless have to befall the queer-coded character. This "punishment" was a subliminal way to tell viewers that being queer and having 'perverse thoughts' or 'impure actions' was reason enough to be killed, ostracized, or driven to suicide, and that there were no other narratives available for the lives of queer people.

or ‘impure actions’ was reason enough for them to be killed, ostracized, or driven to suicide–and that there were no other narratives available for the lives of queer people. These acts of homophobia and internalized homophobia in film’s characters kept up until 1967, near the peak of a civil rights movement in which transgender women of color were leading the fight for LGBTQ+ equality. Following suit, there was newfound freedom in how movies were produced. Without the restrictions of the Hays Code, movies such as The Boys in the Band were able to take off as the first films of a renaissance of queer cinema, portraying the experiences and relationships of queer people without death or punishment for living their lives. The stereotypes set in the dark age of film were also being broken. One such example is But I’m a Cheerleader, a 1999 movie that centers on a high school cheerleader with a seemingly perfect and straight life who is sent to a gay conversion camp, only to discover her true sexuality. The movie’s choice of a highly feminine woman who comes out as a lesbian is a stark contrast to the traditional butch appearance through which Hollywood typically imagines queer women; moreover, the movie also contains a more masculine woman at the conversion camp who happens to be straight. The story’s ending was also a sweet sentiment of how queer people can find happiness in spite of being placed in a potentially traumatizing situation (Babbit, 1999).

As the acceptance and popularity of movies and shows with queer characters increased over the years, media producers began to see a trend: whenever queer characters were in a movie, even if they were only hinted at slightly, an entire audience of gay, lesbian, bisexual, and transgender people would tune in, but some straight or homophobic viewers would drop it. To get around this problem, producers found a way to lead the queer audience on, letting them think a character is queer while never actually confirming the character’s queerness and keeping it vague. Thus was born the practice of queerbaiting. The definition of queerbaiting has many components. First, as the name suggests, it requires a bait: something to pull viewers in. In the case of the popular show Sherlock, the series immediately drops obvious queer innuendos when John Watson asks about Sherlock’s current love life and Sherlock responds by stating he’s not interested in pursuing a relationship with him, which implies that Watson is gay or bi (Sherlock episode 1, 2010). This is not the only instance: over the course of the series, there are plenty of characters who assume Sherlock and John have a romantic relationship because of their closeness and devotion to each other, an assumption viewers are encouraged to share (Sheehan, 2015). Their relationship gets to the point where John and Sherlock grow jealous when one spends too long around another person, but the creators refuse to allow



them to become a couple, preferring instead to maintain the straight audience’s delusion that the duo are merely “close friends.” This denial of what would be great representation for the gay community is an exploitative practice that values profit over the interests of viewers. It is also reminiscent of the Hays Code era of cinema. The way in which potentially queer characters are hinted at but denied, either through death or the lack of confirmation, feels like a repeat of a bygone era of censorship and repression. The structural similarities between queerbaiting shows and Hays Code era movies with hinted gay characters reveal a very similar type of homophobia: both construct closets in which to hide the open secrets of homoerotic relations. This branch of homophobia stems from the root belief that queer relationships “shouldn’t exist” or, unlike other romances, are inessential to the story. To bring up Sherlock again, during one arc of the show John Watson marries a woman, but their relationship is incredibly short-lived. In short, she was used as a plot point to develop a relationship that was never brought to fruition: Mary had to die to keep John’s arc moving, because “Sherlock Holmes is about Sherlock and Dr. Watson and it’s always going to come back to that” (Tyler, 2020). Thus, a “straight” woman is killed to keep a queer-coded character straight. Not only is queerbaiting a problem in trying to lure LGBTQ+ members into consuming media, but a newer practice, dubbed queercatching, has arisen in recent years. Queercatching is a term coined by Rowan Ellis. It refers to advertising a character as queer before a work’s release, or confirming a character’s queerness only after the fact, without meaningful representation appearing in the media itself (Ellis, 2019). Both of these tactics actively try to “catch” queer people: they make them interested in a series or movie before, or well after, the media is produced, only for viewers to find that whatever queercoding the advertised character was supposed to have was extremely minimal or even non-existent. The way the character is shown off is also strange; the info gets around fast in queer spaces, but it’s rarely on the frontlines of the news (Ellis, 2019). With all these methods of exploiting the queer audience, there would understandably be some distrust between big-name content creators and general consumers. Thus, fans would eventually take matters into their own hands, often by taking existing works of fiction and writing about them in their own unique ways, catering to their dreams of the scenes they wanted but couldn’t have. This new and wildly popular form of media would take the world by storm with the rise of the internet and become a norm within fandom culture. Fanfiction, as it’s called, was a gamechanger for representation.

In the early 2000s, just as the internet was becoming accessible to more people, online fan spaces started popping up. Groups of people who had similar interests and ideas would converse, share, and learn more about a particular piece of entertainment. These groups would grow to the point where the members would span different websites and create smaller, more niche divisions. This type of group, or “fandom,” is the heart that pumps the blood of most, if not all, popular media in the world. In these fandoms, people would create their own ideas to add to the existing source material, called a “headcanon”; if that headcanon is widely agreed upon in the fandom, it becomes “fanon” (Rpedia, 2015). So, if two characters’ relationship in a series was never made canon, people in the fandom would create “ships” (short for relationships) for the series, imagining the story as if the two characters were allowed to be together, with unlimited creativity to craft any scenario they want. When people got a hold of this, fanfiction became a driving force for greater representation in the online world. From the 2000s onwards, websites like Tumblr, Fanfiction.net, Wattpad, Archive of Our Own (AO3), and many others catered to the general public as well as queer people, and shipping culture grew wildly on those sites. Gay ships (commonly referred to as “slash”) became extremely popular and topped the charts in most popular fandoms. To show how massive the support for gay ships is in these communities: in the top AO3 fanfics for 2021, sixty-nine out of one hundred pairings were gay (male x male) relationships, seventeen were straight (female x male), four were lesbian (female x female), and the remaining ten were other types of relationships (friendships or self-inserts). Compared to what is shown in canon mainstream media, it’s no surprise that queer people tend to gravitate towards these sites in search of representation. The evolution of queer media has gone through a multitude of phases, from the popularization of visual media in the 1910s all the way to the diversity of fan-created fiction that encapsulates almost every form of media under the sun. The circumstances of production, societal rules, and audience feedback were all essential components of how queer media was represented in the last century, for better and for worse. What the history of queer people in visual media teaches is that, in spite of European gender norms imposed upon the peoples of America and of systemic, institutional hostility and violence in media as in law, the multicultures of LGBTQ people were still able to take root and flourish. Because of their fight, public opinion of the LGBTQ community has started to become more positive and laws have been passed prohibiting discrimination, allowing queer culture to be more public. From the suppression of indigenous cultures to the present moment, the years of oppression faced by queer people have yielded to a time of greater hope. Of course, America isn’t the only place that has queer



individuals with similar struggles. Many other countries around the world have their own queer cultures, which have also been through similar silencing and near erasure due to changes in what society deems fit. Since this is the case, people not just in the United States but worldwide can learn more about hidden queer cultures and expressions through film and other forms of media. We’ve certainly grown a lot since the time when identifying as LGBTQ+ could get you killed, because now members of the community have the opportunity to see themselves on the big screen in a way that truly and authentically shows who they are, without shame.

Bronski, Michael. A Queer History of the United States. Beacon Press, 2011.
Whitehead, Harriet. Sexual Meanings: The Cultural Construction of Gender and Sexuality. Edited by Harriet Whitehead and Sherry B. Ortner, Cambridge University Press, 1981.
But I’m a Cheerleader. Directed by Jamie Babbit, Ignite Entertainment; The Kushner-Locke Company, 1999. YouTube.
centreoftheselights. “AO3 Ship Stats 2021.” Archive of Our Own, 31 July 2021.
Collier, Cassandra M. “The Love That Refuses to Speak Its Name: Examining Queerbaiting and Fan-Producer Interactions in Fan Cultures.” 2015.
Duggan, Jennifer. “Fanfiction: Remixing Race, Sexuality and Gender.” Making Culture: Children’s and Young People’s Leisure Cultures, Kulturanalys Norden, Göteborg, 2019, pp. 47–49.
Ellis, Rowan. “The Evolution of Queerbaiting: From Queercoding to Queercatching.” YouTube, 30 Jan. 2019.
Hayes, David P. “The Production Code of the Motion Picture Industry (1930-1967).” The Production Code of the Motion Picture Industry (1930-1967).
Mondello, Bob. “Remembering Hollywood’s Hays Code, 40 Years On.” NPR, 8 Aug. 2008.
Rpedia. “Types of ‘Non (Canon, Fanon, Headcanon, AU).” RPedia, Tumblr.com, 21 May 2015.
Sheehan, Cassidy. “Queer-baiting on the BBC’s Sherlock: Addressing the Invalidation of Queer Experience through Online Fan Fiction Communities.” 2015.
Somerton, James. “The History of Queer Baiting (Part 1... The First 100 Years).” YouTube, 12 Aug. 2020.
Somerton, James. “The History of Queer Baiting (Part 2... Stage and Small Screen).” YouTube, 12 Aug. 2020.
Tyler, Adrienne. “Sherlock: Why Mary Watson Was Killed Off in Season 4.” ScreenRant, 27 July 2020.




Misogynoir in Film and Television: Where Does this Leave Black Women Today? Della Crawford

How does one define the Black woman? Is she defined by her race, her sex, her disposition? Do these definitions acknowledge her as fully human, fully beautiful, fully vulnerable, and fully “realized…” (Asmus et al.)? In answering these questions, we must examine the ways in which Black women have been constructed and presented through the lens of the American entertainment industry and from a historical perspective. From the beginnings of slavery, whites sought to control not only the body of the Black woman, but her mentality, her spirituality, and her social life. This required controlling her narratives, her experiences, and the world’s perception of her. Although African-Americans were dominated by slave owners, justification for this bondage required a narrative that portrayed those held captive, at worst, as barbaric and over-sexualized. At best, the captives were helpless and in need of the bonds of slavery to save them from worldly harm. The enslaver is savior, “protecting” Africans and African-Americans from their culture, their identity, their non-Christian faith, themselves, forcing them to assimilate into the roles white society created for them. Thus the slaveowner and white society developed an account of Black people, specifically Black women, that constructed them as either a threat to the stability of polite society or in need of a savior. This not only served to soften the blow of the atrocity of treating another human as an animal, but it also kept Black people in a societal cage. Slave owners justified their cruelty by crafting palatable, even charitable stories about the people they enslaved. This is an example of the concept of cognitive dissonance, which is “... the tension experienced by an individual when their thought system, feelings or behavior are conflicting… slavery placed our societies in a state of cognitive dissonance: the proclamation of human rights coexisted with the reduction of people to the status of objects deprived of rights” (Desbordes). The same stories told to defend slavery continued to be perpetuated through the depictions of Black women in entertainment. Caricatures were created to normalize and further perpetuate these false narratives. The Mammy and the Jezebel were prominent characters in American entertainment during the 19th and 20th centuries. These “identities,” imposed upon Black women by non-Black women in writing and the dramatic arts,


continued throughout time, introducing in recent years new caricatures such as the Sapphire (the angry Black woman) and the Bougie Black Woman (the Black woman who has escaped the stereotypes of other Black women and fits into the stereotypes of upper-class white women). The purpose of this writing is to explore how images of Black women in the media have sought to define Black women’s lives. It also seeks to explore how these portrayals diminish the humanity and individuality of the Black woman. How Black women are defined in television and film has affected the lives of Black women by obstructing their individuality; it has also impeded their ability to escape the captivity of identities forced upon them by white society. Ultimately, how the Black woman is defined is substantially contingent upon how society wants her to be defined instead of who she truly is. By understanding these caricatures, their evolution, and how they affect Black women, we can better address the problems that Black women face as a result of these portrayals.

The Mammy & Jezebel

For centuries in the United States, white society has assigned identities to Black women to maintain the stance that the dehumanization of Black people is both acceptable and necessary to maintain a civilized society. The Mammy and the Jezebel are two major examples of personas grafted upon Black women by whites. The Mammy is an asexual, nurturing character who has no goals, desires, or self-awareness. She serves her white family, not her own and not herself. Her complete purpose in life is to enhance the life of her master. She is, in essence, “self-less and them-more.” The Jezebel is a hyper-sexualized characterization of Black women. She uses her sexual acuity to influence others and manipulate outcomes. The Mammy determines the success of her existence based on the comfort and pleasure of her master. The Jezebel determines the success of her existence based on her physical and sexual prowess. Although the Mammy is often portrayed as physically unattractive, homely, humble, and integral, while the Jezebel is portrayed as physically attractive, charismatic, proud, and cunning, they are both devoid of personal awareness and internal measures of success and esteem. Both the Mammy and the Jezebel serve to justify oppression




by limiting the nuances which exist in human beings. Humans possess multifaceted and multi-dimensional traits. These women, however, are characterized as flat, predictable subjects, not as humans in their complexity of interests and desires. Their existence is subject to white society’s limits on who they can be. They are stereotypes that are ingrained in American culture and still impact the way Black women are perceived. The Mammy and the Jezebel are the offspring of the concepts of slavery, which insisted upon captivity, limitation, and powerlessness. The Mammy, usually a maid, housekeeper, and babysitter, was “developed in the South by the privileged class during slavery, in which the physical and emotional makeup of enslaved women was used to justify the institution of slavery” (Jewell 170). White supremacists constructed the media image of the Mammy to present the notion that Black women were content and happy with their roles of servitude and were a “safe space” for the white family. She was darker-skinned, older, and overweight. She was not appealing or beautiful by white society’s standards. This lack of appeal and attractiveness kept her from being a threat to the white wife, the white family, and ultimately white society. In reality, Black domestic workers looked completely different from this portrayal. According to David Pilgrim from Ferris State University, “house servants were usually mixed-race, skinny (blacks were not given much food), and young (fewer than 10 percent of black women lived beyond fifty years).” By depicting these servants as undesirable, society could mistakenly believe that white men were not attracted to Black women. This was utilized to advance the notion that Black women were not sexually exploited by white men (Pilgrim 2000). How could you sexually exploit someone as unattractive as a Black woman? This caricature defended the oppressor while simultaneously dehumanizing the oppressed. The image of the Mammy is present throughout American entertainment. It was common in minstrel shows of the 19th century and is even apparent in modern-day films. One of the first depictions of the Mammy in a minstrel song appears in “Mammy Jinny’s Hall of Fame.” The lyrics of the song declare that “Mammy’s teaching them to tat and knit… Mammy’s little lammies” (“Mammy Jinny’s Hall of Fame / Historic American Sheet Music / Duke Digital Repository”). This song clearly suggests that one of the primary roles of the Mammy was to care for white children. She cared for white children so much that these children often viewed her as a “second mother” (“Portrait of Mauma Mollie.”). In D. W. Griffith’s The Birth of a Nation (1915) and in Gone With the Wind (Selznick & Fleming, 1939), the Mammy is shown defending her white master’s home against the “barbaric” soldiers who appear to be a threat to the white family that she serves. This presents the idea to viewers that Black people, particularly Black women, are docile, easily manipulated, and exist with the sole purpose of serving and defending white people.

The Mammy defends her oppressor by fighting those who fight to free her from oppression. In essence, she sides with her oppressor to perpetuate her own continued oppression. We still see this characterization in contemporary films such as Tate Taylor’s The Help (2011). This film focuses on an aspiring journalist writing a piece on Black maids. The maids in the film take care of white families, neglecting their own needs and those of their own families. Additionally, in true Mammy fashion, they even befriend and assist the white journalist, neglecting again their hopes and dreams to further the interests of others. The Jezebel caricature presents Black women in a way that seems to conflict directly with the Mammy. The term Jezebel originates from the Bible. “Jezebel was a Phoenician princess who married King Ahab of northern Israel. Jezebel refused to worship the Hebrew God Yahweh, flouting traditional gender roles of the time. [She] becomes a very… powerful queen” (Clayson and Raphelson). The Jezebel is a hyper-sexualized caricature of Black women that can be traced back to the Transatlantic slave trade. “Unaccustomed to… tropical climate, Europeans mistook semi-nudity for lewdness… [the cultural tradition of] polygamy was attributed to… uncontrolled lust, tribal dances… level of [the] orgy” (Healey and O’Brien). Europeans and eventually white slave owners used this misconstrued hypersexuality to justify slavery and sexual relations between white men and Black women. Black women had to comply with this idea created by white men for their own safety and protection. “A slave who refused the sexual advances of her slaver risked being sold, beaten, raped, and having her ‘husband’ or children sold. Many slave women conceded to sexual relations with whites, thereby reinforcing the belief that Black women were lustful and available” (Pilgrim). This “lustfulness” gave white society even more incentive to villainize Black women through media. The Jezebel villainization of Black women has been present in American entertainment for centuries. In The Birth of a Nation (Griffith, 1915), the character of Lydia Brown, a Black woman, is depicted as a Jezebel. She is the manipulative, sexually deviant mistress of Austin Stoneman, a white abolitionist politician from the Union. The use of her image as a manipulative, hyper-sexual woman presents the idea that Black women leverage their sexuality to manipulate others and to gain power. Stoneman’s actions made it clear that he was not attempting to liberate a human being, but instead a sexual object that he could use at his disposal. In this framing, she could not legitimately deserve anyone fighting for her basic human right to be free. The Jezebel became an exceedingly prevalent character in film during the 1970s blaxploitation genre. In Coffy (Papazian & Hill, 1973) and Foxy Brown (Feitshans & Hill, 1974) (Pilgrim), Black women use overt sexual advances and sexual acumen to manipulate and seek revenge on others. Mammy and Jezebel representations in entertainment have repercussions for Black women in real life. The



Mammy caricature can be linked to body image issues, and the Jezebel caricature can be linked to Black women feeling insecure about their sexuality and sexual expression. The Mammy stereotype has the potential to make Black women feel insecure about their body image. The Mammy sharply contrasts with the stereotypical beauty standards of being thin and having light skin that are imposed on all women. “For Black women, issues around weight and body image may be intensified because this image of thinness has historically been based on a white, [middle-class] standard of beauty. Thus, implications of the fat, stereotypical Mammy image must be taken into account when discussing eating disorders in this population” (Root, 1990; Thompson, 1994). The Mammy image continues to represent the economic and working conditions of many poor Black women. Thompson (1992) suggests that binge eating for these women may become a solace and a socially appropriate way of dealing with the stress associated with poverty as well as the sense of emotional deprivation felt in other areas of their lives. Economically privileged Black women are also at risk for developing eating disorders. Among surveyed middle-class Black women, 60% reported feeling too fat and unhappy with their thighs, hips, and stomach (Thomas & James, 1988). Carolyn West observes that “Eating disorders may develop among upwardly-mobile Black women in an attempt to assimilate into mainstream culture by distancing themselves from the obese Mammy image” (West 460). “Among Black women with African features,” West also argues, “physical characteristics, such as dark skin and kinky hair, which are typically associated with the Mammy image, may perpetuate shame and feelings of unattractiveness” (460). To this, Wyatt adds, “When sexual stereotypes associated with the Jezebel image (e.g., Black women are promiscuous, engage in early sexual activity, become sexually aroused with little foreplay) are internalized, performance anxiety, feelings of inadequacy, and sexual dysfunction can result” (Wyatt, 1982). She continues, “If a Black woman perceives her sexuality as one of her few valuable assets, it may become a source of esteem or a negotiating tool to manipulate men rather than an expression of pleasure and care. At the other end of the sexual continuum, shame and repression of sexual feelings can be the outcome of attempts to distance from the Jezebel image” (462). These archetypes led to new archetypes of Black women in film and television, meant to contain and to control the continuing resistance of Black women to economic exploitation, social domination, and political injustice. They include, but are not limited to, the Sapphire, also known as the Angry Black Woman, and the Bougie Black Woman.

The Sapphire

Why is she so angry? Why does she look so aggressive? Why is she so masculine? Does being a Black woman



mean you must be passive and subservient? What happens to Black women who are neither passive nor subservient? The Sapphire, also known as the Angry Black Woman, is an archetype that portrays Black women as mean, aggressive, sassy, and ill-natured. Although the traits associated with the Sapphire have been attributed to Black women for centuries, the first appearance of the Sapphire can be dated back to the Amos ‘n’ Andy radio show (Thompson). In this radio show, Sapphire was the name of the wife of Kingfish. She was known for being “domineering, aggressive, and emasculating” and for “berating… her husband” (West). This portrayal of Black women has survived and thrived as a common thread in contemporary entertainment as well. Modern-day Sapphires pervade media and content. Madea from the Tyler Perry films, Rochelle from the television series Everybody Hates Chris, and Aunt Esther from Sanford and Son all embrace, if not sanctify, the Sapphire. Madea is loud, brash, and extremely masculine; in fact, her character is played by a cisgender man. Similarly, Rochelle is loud, stern, and strict, emasculates her husband, and threatens to harm her children if they are disobedient. Aunt Esther is known to be brash and frequently insults her brother-in-law. Although these traits are carefully hidden in a comedic tone, they have a damaging impact on the way that Black women are viewed by others and by themselves. In addition, viewers very rarely see character development that would suggest these characters have evolved beyond the stereotype they represent. “The Sapphire image serves several purposes,” according to Pilgrim (2015). “It is a social control mechanism that is employed to punish black women who violate the societal norms that encourage them to be passive, servile, [nonthreatening], and unseen” (West 149). Black women who choose to speak their minds, stand up for what they believe in, and/or hold positions of power are discredited as “angry Black women.” The figure of the Sapphire serves to discredit, minimize, and further marginalize the Black woman. Her voice is not only silenced; she is, in essence, gagged. The silencing of her voice is justified as a measure against one who is prone to histrionics and cannot be taken seriously. Her voice is silenced by herself in order to avoid being the distasteful and unpleasant Sapphire. Even more concerning is the notion that sometimes the failure of a Black woman to remain silent and “in line” can lead to her demise, based on false perceptions that she is violent. “[I]n Prairie View, Texas, a police officer threw Sandra Bland to the ground and threatened to ‘light her up’ with a taser. What was Sandra’s crime? She was pulled over and arrested for failure to signal a lane change. Sandra later hanged herself in a jail cell” (149). Black women cannot even freely express themselves out of fear of further perpetuating stereotypes. “It is frustrating for African American women to realize that ‘you cannot behave your way out of racial terror’” (Love, 2016). The Sapphire persona reinforces the constant theme seen



through all of these characters: it perpetuates the slave mentality of having no say in your own life or in society.

The Bougie Black Woman

We’ve all come across her at some point in our lives. Whether you were kickin’ it in Bel-Air with Hilary Banks from The Fresh Prince of Bel-Air or you were watching Toni Childs break hearts and take names in Girlfriends, you were watching bougie Black women rule the screen. The bougie Black woman is characterized by her wealth, her class, her self-confidence, and her beauty. The archetype of the bougie Black woman is a relatively new one, gaining popularity in the 1980s, and it appears she, too, is here to stay. Despite being a more contemporary and, some might argue, more empowering portrayal of Black women, the bougie Black woman comes with a set of more complex and concerning problems for Black women. One of the first appearances of the bougie Black woman came in the form of Dominique Deveraux from the 1981 series Dynasty. In a 1984 interview, Diahann Carroll, who played the role of Dominique, famously said, “I wanted to be the first Black bitch on television,” and she did just that with her breakthrough role. At first glance, she may appear to be a Sapphire. After all, she was coarse, brash, and domineering throughout the series. But underneath it all, she was rich, high class, and an entrepreneur. The catch is that she was bi-racial, having a Black mother and a white father. At first glance, we may see her as a beautiful portrayal of a Black woman with class and beauty. But with a deeper examination, it is clear that, were it not for her “white” genes, her glamour, wealth, and success would not be possible. When the humanity of the Black woman was finally acknowledged, at least in terms of conventional economic success, it served once again to prop up notions of white benevolence. Historically, white fathers have not been kind to their dark children, and have been worse to the women who gave birth to them. As some solace for Black women, the bougie Black woman did make miniature strides toward the gradual bending of other stereotypical characters. It also primed the pump for the acceptance of more fully human Black women in film and media. But it did not do so, at least in Deveraux’s character, without shackling the Black woman’s growth and esteem to a woman who was half white. There is immense irony in Carroll’s feelings about the role and white society’s perception of the Black woman. It was the unlikability and the humanness that attracted Carroll to the role. “I’ve never played a role quite this unlikeable. And I like that. I like that very much because I think very often, particularly minorities, it’s almost required of them that they are nice people, and I don’t want to play a nice person” (“Diahann Carroll Interview [1984] First Day on Dynasty Set” 0:00–2:07). Here we see the juxtaposition of the Mammy’s being freed against the wall of the Sapphire’s


being acceptable. Two things that would not be possible in a fully Black Deveraux. One must question, on a deeper level of examination, whether it was, in fact, the white part of Deveraux that was accepted, even with her brash nature and elite status. There have been many more bougie Black women to grace the screen since Dominique Deveraux. Hilary Banks from The Fresh Prince of Bel-Air, Whitley Gilbert from A Different World, and Toni Childs from Girlfriends are just a few examples. The bougie Black women in these television series all have common defining traits. They are all well within white society’s standards as they pertain to being educated, ambitious, confident, beautiful, and wealthy. Another, almost obvious, trait of these characters is that they are all fair-skinned Black women. They are not necessarily bi-racial, as Deveraux was, but they clearly bend toward being considered “white-like.” For many Black women, these characters were inspirational and presented a bold exclamation of successful Black womanhood. And to many whites, they were not “normal” or “average” Black women. They didn’t “seem” Black or “act” Black. As Black society was transitioning into more freedom, wealth, and status, film and media appeared to follow. “[T]he African-American middle class emerged in the early twentieth century and increased dramatically after the Civil Rights Act of 1964. This growth continued into the 1970s, 1980s, and 1990s, in part because of [the] enforcement of anti-discrimination laws in employment” (The United States Commission on Civil Rights 54). The emergence of the bougie Black woman directly corresponds with a period of significant economic growth for the Black community and thus gives a more honest perception of the diverse socioeconomic standings of the Black community, negating, to some extent, the overrepresentation of impoverished Black people in entertainment. Despite being wealthy, high-class, and aspirational, the bougie Black woman is flawed and imperfect, often impeded by her wealth, her desires, and her high expectations. In an episode of The Fresh Prince of Bel-Air, Hilary Banks expects her boyfriend to propose to her in an extravagant way. In order to meet her expectations, he proposes to her while bungee jumping off of a mountain (“Where There’s a Will, There’s a Way (Part 2)”). As a result, he falls to his untimely death. Her desire for extravagance and luxury led to the death of the love of her life. In Girlfriends, Toni Childs has an affair with the affluent Dr. Clay Spencer. When her current boyfriend, a struggling artist named Greg Sparks, learns of this affair, he breaks up with her and she loses the “love of her life.” While these portrayals present the bougie Black woman as self-centered and materialistic, they show some of the struggles experienced by Black women who seek to live a more “gentrified” life. Stereotypical media characters are entertaining, but they are detrimental to society in general. This is magnified for Black women who have, for most of their lives, been defined by others in



real life. While the bougie Black woman may be a step in the right direction for portrayals of Black women in media, it is still a step ridden with shackles. While no reasonable Black woman ever desired to be a Mammy, a Sapphire, or a Jezebel, many do casually see the bougie Black woman as an ideal. The risk is that the Black woman may be trapped in the illusion that she must always be a “type” and, therefore, a “stereotype.” The complication of the bougie Black woman is immense. She is never white enough for white society and never Black enough for Black society. The bougie Black woman is trapped in a purgatory of identity. In searching for the identity or definition of the Black woman, the bougie Black woman simply feels more palatable. Black women can admire her and white society can tolerate her. But she is still a fictional character–a persona created in an oppressive culture to define and limit the true, honest, individual Black woman. Although she is perhaps created by well-meaning creative minds, the ties between her existence and a history of oppression cannot be ignored. She serves to pacify the Black woman’s need to pursue her own truth and humanness. She serves as a pat on the head from the slavemaster and white society. She is a crumb from the table. She is a messenger who says, “Look, I gave you voting rights.” She represents a token of freedom and growth, but she is really only a means to appease the Black woman’s human instinct and core need to exist as a free, individual, uninhibited woman. We cannot argue against the fact that the portrayal of bougie Black women in film and television is revolutionary for the Black female community. Black women had rarely before been portrayed as beautiful, ambitious, and powerful, and this archetype certainly more accurately describes the experience of the modern Black woman. Yes, seeing Black women portrayed in this manner can be refreshing and empowering. But like all of these artificial, “other”-imposed identities, it limits the fierce, vast, and wide human that is the Black woman. The Black woman remains in her box, her cage. She is told she is free because she has a life that more closely resembles that of her white counterpart. And unlike the other characters, the bougie Black woman has attributes that are highly lauded in our culture. As with any artificial being, a wide hole in the soul still remains. But this hole is filled with unreasonable expectations of wealth, fame, status, and perfection.

Seeing yourself represented in film and television has the potential to make a positive impact, especially if you have faced copious amounts of oppression. But what happens when you only see yourself as tropes created by others to further oppress you in film and television, and ultimately in life? This is often the experience of Black women when they see themselves represented in media. Black women have complex, real-life reactions to the way they are portrayed in film and television. These archetypes have contributed to violence against Black women, loss of self-confidence among Black women, body-image issues, the masculinization of Black

women, and role constrictions for Black women in the workplace and in film and television. But what may be more concerning is how these artificial personas place Black women in boxes themselves. How Black women are defined in television and film has affected the lives of Black women by obstructing their individuality, and it has also impeded their ability to escape the captivity of white society’s projected and intended identity, which has been branded upon them. Ultimately, how the Black woman is defined is substantially contingent upon how society wants her to be defined instead of who she truly is. By understanding these caricatures, their evolution, and how they affect Black women, we can better address the problems that Black women face as a result of these portrayals. For every Black woman suffering through the identity crisis of who she really is, there is a Black community that looks to her for its identity. These barricades to being human keep the human being that is the Black woman bound in a world of invisibility. In order to fully understand how the Black community is affected as a whole, the archetypes typically attributed to Black men, the impact these portrayals have on the Black LGBTQ+ community, and ways to present Black women in a humanistic manner in media all must be explored further.



Al Jazeera. “Mammy, Jezebel and Sapphire: Stereotyping Black Women in Media.” Television | Al Jazeera, 26 July 2020.
“Amos and Andy.” Amos and Andy. Accessed 14 Dec. 2021.
Ashley, Wendy. “The Angry Black Woman: The Impact of Pejorative Stereotypes on Psychotherapy with Black Women.” Social Work in Public Health, vol. 29, no. 1, 2013, pp. 27–34. Crossref.
Asmus, Sigrid, et al. Beyond Mammy, Jezebel & Sapphire: Reclaiming Images of Black Women. Jordan Schnitzer Family Foundation, 2018.
“The Birth of a Nation 1915 1080p.” YouTube, uploaded by James Buck, 4 Dec. 2018.
Clayson, Jane, and Samantha Raphelson. “Unpacking What It Means To Call Kamala Harris A ‘Jezebel.’” Here & Now, 23 Feb. 2021.
Desbordes, Rodolphe. “Slavery and Cognitive Dissonance.” SKEMA ThinkForward, 2 Nov. 2020.
Eck, Christine. “Three Books, Three Stereotypes: Mothers and the Ghosts of Mammy, Jezebel, and Sapphire in Contemporary African American Literature.” Criterion: A Journal of Literary Criticism, vol. 11, no. 1, 2018.
Jewell, K. Sue, editor. The New Encyclopedia of Southern Culture. University of North Carolina Press, 2009.
“The Jezebel Stereotype.” Repozitorij Filozofskog Fakulteta Rijeka, 2018, pp. 1–47.
Kelley, Blair. “Here’s Some History Behind That ‘Angry Black Woman’ Riff the NY Times Tossed Around.” The Root, 25 Sept. 2014.
“Mammy Jinny’s Hall of Fame.” Historic American Sheet Music, Duke Digital Repository.
“The Mammy Caricature - Anti-Black Imagery - Jim Crow Museum - Ferris State University.” Ferris State University, 2002.
O’Brien, Eileen. Race, Ethnicity, and Gender (Reader): Selected Readings. Sage Publications, Inc., 2020.
Pilgrim, David. “The Jezebel Stereotype - Anti-Black Imagery - Jim Crow Museum - Ferris State University.” Ferris State University, 2002.
“Portrait of Mauma Mollie.” The Library of Congress. Accessed 14 Dec. 2021.
“The Sapphire Caricature - Anti-Black Imagery - Jim Crow Museum - Ferris State University.” Ferris State University, 2008.
Sterrett, Kira. “Asexuality and Hypersexuality in Black American Cinema: The Manifestation of the ‘Mammy’ and ‘Jezebel’ Archetypes in Modern Films and Their Origin.” Film Daze, 24 Feb. 2021.
Thompson, Cheryl. “Black Women’s Portrayals on Reality Television: The New Sapphire.” Journal of Communication, vol. 66, no. 6, 2016, pp. E5–7. Crossref.
Townsend, Tiffany G., et al. “I’m No Jezebel; I Am Young, Gifted, and Black: Identity, Sexuality, and Black Girls.” Psychology of Women Quarterly, vol. 34, no. 3, 2010, pp. 273–85. Crossref.
West, Carolyn M. “Mammy, Sapphire, and Jezebel: Historical Images of Black Women and Their Implications for Psychotherapy.” Psychotherapy: Theory, Research, Practice, Training, vol. 32, no. 3, 1995, pp. 458–66. Crossref.




Examining the Relationship and Consequences of Hypersexuality in Sexual Assault Victims Kendall Esque

For the 230,000 people who are reported to have been sexually assaulted each year, there is a before, and there is an after. We tend to concentrate on preserving the before by promoting prevention strategies and encouraging justice for the perpetrators, but attending to and accepting the trauma of victims is essential. This work is divided into four sections which constitute a general thesis: hypersexuality is a trauma response that can be a positive mechanism for recovery but can also lead to negative situations. Each section explores multiple examples and relevant data emphasized during the research period, seeking to arrive at actionable conclusions in the final section. After grappling with the multiple truths of hypersexuality, as an avenue both toward exploring a healthy sex life again and toward further sexual abuse, this work insists that one of the most important ways to help survivors is to get the condition correctly defined and diagnosable in the DSM-5. This is, of course, in addition to new and established therapies for survivors and broader efforts to prevent and eradicate assault. At the heart of this paper is the hope that a deeper exploration of what it means to be a victim will alleviate the difficulties that come along with the confusion of sexual assault trauma.

To start: what happens when someone is sexually assaulted? Most pertinently, what is happening in their brain? In one study of adolescent women, after a certain period, 91% of participants were at risk for Post-Traumatic Stress Disorder and its symptoms; additionally, 88% and 71% were at risk for developing depressive and anxiety disorders in the same period, respectively (4). PTSD takes the spotlight when talking about dealing with traumatic events. We like to think of jumpy, war-hardened senior citizens, but the reality is that it is much more common and can be caused by any number of difficult life events. With so many people who have experienced sexual assault, it can reasonably be assumed that PTSD is an important phenomenon to examine when considering recovery. Van der Kolk and McFarlane state that trauma is primarily rooted in what is and isn’t reality for a patient. These fixations express themselves as information processing difficulties which


generally connect back to specific symptoms of a traumatic experience. Information processing is a psychological term that relates thought processes to those of a computer operating system (Information). Difficulties occur when someone is unable to use the information they have gathered. Van der Kolk and McFarlane separate these difficulties into six distinct categories for patients with PTSD. One of these is compulsive re-exposure to the trauma, in which a patient may seek out situations that remind them of their trauma. This is generally considered to be harmful, as it often places the patient in either the victim or the perpetrator role when re-experiencing what brought on their PTSD in the first place. This plays a strong role in the recovery of victims of sexual assault, as many will continue to put themselves in situations similar to those in which their initial assault occurred, increasing the likelihood of it happening again (LA). Maughan attributes this effect to victims trying to get a grasp on the reality of their trauma (1). While the neural pathways around trauma are altered (LA), victims tend to replay and dissect memories and experiences compulsively, like you might pick at a cold sore, and put themselves in a pattern of replaying scenarios over and over to examine what went wrong (Negoski in Maughan, 1). Hypersexuality comes into play when victims of assault begin to process the trauma by replaying it. Characterized by oversexualizing oneself through seeking out sexual situations, talking about sex, or any other behavior that centers sex in one’s life (5), hypersexuality is a unique form of a common coping strategy. The most common misconception about people with hypersexuality is that they can only be struggling if they are having massive amounts of sex, akin to a sex addiction. While that is one indicator, someone can still be dealing with hypersexuality by using talking, joking, and telling stories about sex, pornography, or masturbation as an outlet for their feelings. This seeming contradiction, responding to the trauma of sexual assault by becoming hypersexual, is unsurprisingly a difficult experience. Societal perceptions color all of our behaviors, but paint the strongest shade onto topics like sex. Our patriarchal society enforces gender roles and sexual expectations and treats people like objects, and it offers no easy fix. This is exacerbated by other factors, including race. For



example, many attribute the hypersexual behavior of Black women in adulthood to the fetishization and adultification of Black girls at a young age. Societal pressures exist to encourage and discourage sexual behavior when doing so benefits the system. Any of our notions and opinions on hypersexuality and sexual assault must be taken with the knowledge that we are probably receiving them from someone else, and whose benefit they serve must be examined. Most important of all is to understand how these pressures affect the behavior and feelings of victims in recovery.

Hypersexuality on its own is not an inherently negative phenomenon. Whether they are glorified or shunned for increased participation in anything culturally related to sex, people are subject to a unique and original experience. Positive feedback can come both from those looking in and from the patient themselves. Positive reinforcement from those around victims is heavily influenced by societal standards. Especially for women, the “hoe phase” can be a source of encouragement from acquaintances, for whom taking liberty in one’s sexual experiences reads as empowering. This praise, like much of the feedback on hypersexuality, is rooted in the patriarchy. Sexual behavior is considered expected of women for their male partners, and their label as sexual objects influences the image we get of promiscuous women. Additionally, praise from female counterparts for “breaking from the male gaze,” or being sexual for one’s own pleasure in a way that doesn’t seem to serve the system, places worth on women for liberation from the patriarchy, a framing which relies on the patriarchy itself. This liberation should be afforded nuanced and empathetic feedback from peers, but, as humans are wont to do, it is rarely given this care. Outside of the viewpoint of others, many people use a phase of hypersexuality positively for their own means. Victims of sexual assault have been known to rationalize their experience as an “unintentional coping mechanism” (2). Most patients have a few common thoughts surrounding why they are doing what they are doing. Taking back control/autonomy: someone gets positive experiences from taking back control of the situations in which they are sexual. This includes positive feelings from having autonomy over their own body with partners who are cooperative, as well as putting themselves in new situations to improve the chances they won’t be caught off guard in future scenarios. This thought goes along with some of the typical symptoms of PTSD. To rationalize a traumatic experience, victims spend lots of mental energy considering their situation. This can manifest most pertinently in a fixation on replaying their experience. When they are allowed to re-enact the scene with themselves in control, it can form new neural pathways associated with the experience. These positive pathways allow victims

to reconstruct their identity despite their trauma, instead of because of it. Without the chance to redefine their traumatic experience, most victims further wear down a mental rut of wallowing in their intrusions (LA, 491). Boosting self-esteem: in addition to any positive societal feedback they are given for their promiscuity, they feed an internal bucket with positive thoughts about themselves. Positive sexual experiences on their own are a breeding ground (no pun intended) for feeding self-esteem, even after traumatic experiences. Unintentionally mimicking therapy strategies (2): many therapies for recovering from assault have to concentrate on approaching new sexual experiences. When pursuing sexual partners or conversations, patients may accidentally stumble on something helpful that a therapist could have suggested they try. One unique option some sex therapists encourage is BDSM and roleplay; specifically, the emphasis on “continuing enthusiastic consent and communication” needed to participate in these kinds of acts allows survivors to feel secure in participating in sexual acts again (7). Another important aspect that Lewis mentions is that some survivors seek out communities of people with similar experiences. Whether it is just to talk or to engage sexually with people whose internal struggles are similar to theirs, these communities are incredibly beneficial to survivors. Therapy is not a requirement to scour the internet or seek out others with similar experiences (intentionally or by coincidence), so many survivors begin to support one another. Above all, the idea that “survivors can, and do, recover on their own” brings home the importance of hypersexuality as a valid and sometimes necessary part of the recovery process (7).

As may have become obvious, however, in addition to having positive or neutral effects, hypersexuality can put victims in harmful situations or a negative headspace. Risk is inherent in any part of life, but hypersexuality can influence victims to enter situations where they are likely to be harmed or even to harm others. For simplicity, I have organized these effects into four categories: internal negative impacts, outside judgment, revictimization, and corruption. Internal negative impacts refer to thoughts, feelings, and actions that come from the victim’s own mind. Outside judgment is anything that makes the victim feel bad about themselves for how they are feeling or acting, and relies on others making observations. Revictimization, similar to outside judgment, is about the actions of someone else on the victim; specifically, it refers to a victim being harmed in a situation they were led to by hypersexual feelings. Lastly, corruption is when a victim becomes a perpetrator, especially of the specific kind of trauma that was inflicted on them. Internal negative impacts: one of the most common




negative effects of hypersexuality is less enjoyment during sexual scenarios. Many patients report constant dissociation during sexual encounters, which often comes inherently with hypersexuality. In conjunction with this, hypersexual behavior is associated with less enjoyment during sex because struggling to stay in the moment undermines arousal. This “lie back and think of England” mindset may make sex start to feel like a task. In many cases, because patients enter these scenarios to cope and experiment with their complicated relationship with sex, it is reduced to something they feel like they have to do. As desensitization increases, pleasure is reduced and an unhealthy relationship with sex can form. A good description of what this looks like comes from the thought process of one college student writing about her experience: “‘Well clearly even if I don’t want to have sex it’s going to happen anyway, so I might as well just sleep with whoever and do it whenever I want’” (2). Separate from dissociation, sexual assault survivors may experience intrusive thoughts or flashbacks to their assault during new sexual encounters. Re-exposure to a similar type of activity effectively acts as a trigger, reactivating neural pathways and producing an unpleasant experience. Patients are also subject to self-esteem issues. Although these are often a co-symptom of coping with sexual assault, societal bias against promiscuity can affect how a person sees themselves during and after intercourse. Negative thoughts about themselves or their body are common in conjunction with constant thoughts and experiences related to sex. Outside judgment: negative experiences for people following a hypersexual recovery path are not isolated to themselves. For many people, the difficulty arises when others are involved. A prominent example of this is the societal bias around not only sexual behavior but also survivors of sexual assault. It can be a classic one-two punch for people dealing with one in conjunction with the other. If the “#MeToo” movement has proved anything, it’s that not only are survivors more powerful in numbers, but that there will always be doubters, trolls, and contrarians to make victims feel small. In addition to the stigma they face as survivors, those who develop hypersexual symptoms as a result are exposed to the taboo around their behavior. Even without the struggles posed by the combination of being a survivor and being sexually active, the feelings that prevent us from being honest about sex make the lives of sexually active folks a lot more precarious. The mental toll of these stigmas can prevent people from being themselves and contributes to self-esteem issues and mental illness. Separate from the mental toll, people who are more outwardly sexual are more likely to be faced with physical abuse from opposers. This dangerous combination, as detailed above, offers


a uniquely difficult place of criticism for patients as well. From an observer’s perspective, people are quick to make assumptions about the situation of the person in question. Responses can range from, “They must be okay about their traumatic sexual experiences if they are spending so much time hooking up/watching porn/talking about it,” to “Their abuse is on them if they are constantly putting themselves in sexual situations.” As you may have noticed, most of these negative effects are a result of the collective, societal view of groups of people. This doesn’t demonstrate a failure on the part of hypersexuality as a coping mechanism, but a failure of people who condemn the actions of those whose situation they do not share. It instead demonstrates the risk that people experiencing hypersexual feelings are confronted with as they try to move toward recovery. Revictimization: separate from strictly mental setbacks to recovery, hypersexuality can expose victims to opportunities for physical and sexual abuse. As alluded to above, people who are verbally discontent with the behavior and experiences of others are, in some cases, likely to be physically abusive as well. Specifically concerning the revictimization of sexual assault, hypersexual behaviors increase the chances of a survivor being in a situation that lends itself to additional assault. This may include seeking out toxic partners, going to great lengths to be in a sexual encounter, or abusing substances. This happens through two avenues: hypersexuality itself, and PTSD. The actual mechanism is that victims are more likely to put themselves in risky situations for sexual encounters, and as a result are met with attackers and rapists; but multiple factors make this possible. At the top of this list, with PTSD, is “compulsive reexposure to trauma” (LTA, 493). As explained in part one, victims are spurred by intrusive thoughts to attempt to recreate the scenario in which their assault occurred. This is an attempt to see what could happen differently and to redesign the neural pathways associated with their assault. Often this can end badly, with the victim being assaulted another time: if you were in a risky situation when someone violated you the first time, recreating it puts you back at that same risk. Part of the complication of hypersexuality is that sex isn’t a perfectly available “commodity.” Patients are forced to participate in hookup culture, turn to prostitution, or use other risky methods of indulging. The more often they are in these unstable situations, the more likely they are to experience revictimization. In any case, they are again the victim no matter how they came to be assaulted. But from a safety and recovery standpoint, preventing them from ever having to be in that situation again is a good way to approach helping victims. Risky situations also don’t always mean the threat of revictimization; patients also face the risk of STDs, contraction of COVID-19, exposure to addictive substances, and harmful drugs, among other things. One survivor describes


226

the time her hypersexuality hit a peak that now makes her feel terrible for acting on urges she wasn't responsible for: "Not the smartest idea amidst a global pandemic of course, and I now realize that my actions during that time were indeed selfish and unsafe" (2). To support this, a study was done on the risk assessment abilities of college students. Inevitably no two experiences are the same, but Neilson attempts to quantify the difference between the experiences of women who have experienced assault and those of their peers who haven't. Without a doubt, victims have more difficulty than non-victims recognizing how risky a situation is and the cues that indicate one (7). Corruption: In addition to revictimization, hypersexuality can put those in recovery at risk of becoming a threat to others. Again, among trauma victims compulsive re-exposure to trauma is common. Under this umbrella, there is yet another category: harm to others as an outlet to replay trauma is often found in victims. Unfortunately, this can manifest as people without outlets experiencing predatory thoughts and impulses, as well as acting upon them. With these thoughts in mind, those also experiencing hypersexuality are doubly pressured. Inappropriate and intrusive sexual thoughts, and the desires that accompany them, have the potential to turn into actions. A study on the cognitive component of rape conducted by Kathryn Ryan considers hypersexuality while also emphasizing rape-supportive beliefs in rapists. She argues that rapists are influenced by the information they consume concerning sex, and especially concerning the roles of women. This influence is a good indicator of how a person may act on their desires, because while many rapists are hypersexual and experience frustration with the lack of sex they are having, many of their premeditated thoughts about assaulting someone come from a place of deeply ingrained beliefs about their "right" to another person (11). This means that in addition to having to seek out situations to replay trauma, victims who were taught problematic expectations surrounding sex and gender roles from a young age are wired to act on harmful impulses and desires. For someone experiencing symptoms of hypersexuality, it is scary to hear that their trauma puts them in a place where they are more likely to have a hard time mentally, more likely to be reexposed to their trauma, and may even be more likely to end up on the other side of their experience. Luckily, as discussed above, there are strategies for healing healthily as a survivor with hypersexuality. Along with that, we must improve the likelihood of people getting help.

DSM-5 (identification as a key part of recovery): The DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, is a guide for classification and diagnosis for use by US medical professionals (DSM-5). It was updated as recently as October 1st, 2021. On the topics of hypersexuality, treating sexual trauma, and PTSD, there are a couple of issues. The main one is where hypersexuality sits in the manual, and whether it should be elaborated on and presented as a potential symptom of PTSD and trauma recovery. Currently, hypersexuality is referenced only as a symptom under deviant sexual disorders such as Voyeuristic Disorder, Exhibitionistic Disorder, Frotteuristic Disorder, Masochism, Sexual Sadism, and Fetishistic Disorder, even though it is clearly defined as only "a stronger than usual urge to have sexual activity" (DSM-5). As discussed above, this reinforces the idea, and the stigma, that hypersexuality is inherently harmful and something that ultimately afflicts those likely to be perpetrators. Adding hypersexuality as a potential symptom of trauma would allow for legitimate diagnoses to be made. Diagnosis and recognition of a traumatic experience of sexual assault can be especially affirming for victims, and as we know, a diagnosis of any sort is a gateway to effective treatment. Post-assault, diagnosis of PTSD and other trauma-related disorders opens similar pathways and leads to benefits for patients. In a study conducted at the University of Toronto, Kilimnik, Trapnell, and Humphries looked into differences in the enjoyment of sexual scenarios among female college students who had never been assaulted, who had been in a Non-consensual Sexual Encounter (NSE) and recognized it as assault, and who had been in an NSE and didn't identify it as assault. They found that sexual dissatisfaction occurred at higher rates in women who didn't identify their NSE, even suggesting that "identifiers may more resemble women with no NSE history than they resemble their non-identifying counterparts" when it comes to sexual satisfaction. When it comes to PTSD, Van der Kolk and McFarlane state, "Having a recognizable psychiatric disorder can help people make sense of what they are going through, instead of feeling 'crazy' and forsaken." As both of these examples illustrate, having an accurate diagnosis is overwhelmingly positive, both psychologically and when it comes to a treatment plan. To take this a step further, we can even argue that if we were to continue to treat hypersexuality as the manual places it, and as society places people in recovery, it would still be good to diagnose it. Rehabilitation is always going to be much scarier in sexual assault and offender cases than treatment before an incident. If we want to protect others, and victims especially, we should be giving answers. Placing hypersexuality only as a symptom under deviant sexual disorders in the DSM-5 offers a one-dimensional view of the affliction and enforces the idea that hypersexual individuals are predestined to be bad people. To help break the cycle, offer a more comprehensive explanation of these behaviors, and offer the right care to people suffering, it should be added to the DSM-5.



Current Therapies with Related Benefits: When it comes to the treatments that accompany a diagnosis, many are already in practice or in development. There are multiple avenues for treating both hypersexuality and the PTSD that engages with it. In addition, there are strategies that target both the trauma and the subsequent hypersexuality that comes with being assaulted. Treating Hypersexuality: When it comes to dealing with hypersexuality, many sex therapists work to address the issue. The goal is to allow patients to cope and explore their sexual feelings without being at risk. In Toronto, Robyn Red encourages this tentative exploration using many strategies, and finds that for many of her patients, BDSM ("bondage and discipline/dominance and submission") play is productive. In these encounters, constant communication and enthusiastic consent are required for good sexual experiences, so they can allow patients to learn to explore sex without fear of violation (7). On the opposite side of this spectrum of treatments is celibacy. For many patients, taking a sexual leave gives time for exploring other therapies and healing without the impending fear of sexual encounters. For others, this part of the process is the beginning; Red says she prefers to meet her clients after it, when they have a "calcified" sense of the trauma (7). Treating PTSD: Many of the strategies used specifically for sexual assault victims are also helpful for any person experiencing PTSD. One of the most pertinent strategies for recovery is EMDR therapy. EMDR stands for Eye Movement Desensitization and Reprocessing and "is a psychotherapy technique that repeatedly activates opposite sides of the brain to release emotional experiences trapped within the nervous system" (7). Many trauma survivors find this helpful in reducing triggers and mental anguish associated with their experiences. In the same lane, talk therapy and medication can be essential to recovery. Even without experiencing significant trauma, most people have enough to unpack with a professional to make a visit worth it. For sexual assault survivors, this can mean learning new coping strategies and how to deal with current ones, including protecting yourself from their negative impacts. Medication helps many people with intense emotions related to flashbacks and intrusive thoughts. New Technology in Therapy: Part of harm reduction is allowing for the behavior that puts the patient at risk while making sure it is as safe as possible. The most common example of harm reduction is how safe injection sites reduce overdoses in patients with opioid addictions, because patients are given resources to administer with less risk and an open space to start recovery. To do this with sexual assault survivors using hypersexuality, manipulating environments like hookup hubs is important. Many new solutions rely on technology. Using
anti-abuse screening on online messaging platforms and dating/hookup apps can warn users about potential abusers and also prevent predators from using these apps. Using artificial intelligence and machine learning strategies, developers have been attempting to train apps to identify abuse and risky situations in order to warn potential victims and provide them with resources. Another example would be monitoring pornography and other material on the web: screening out harmful sexual demonstrations that might influence people and stopping content containing minors from being available. This begins to reconstruct the notions we have as a society surrounding sex into more positive messages, steering future perpetrators away from harmful thinking and protecting potential victims from damaged self-esteem and other harm. Final Thoughts: Not all of this subject has been given the attention it deserves. Many of the demographics we can present are just that: demographics. Mechanisms still need to be accurately identified to create prevention strategies and to reexamine our current systems. In all cases, hypersexuality is just the reaction of a person to their surroundings and/or traumatic sexual experiences. It holds no benefit or cost on its own, but it can lead to harmful situations. Creating an environment where we can acknowledge these drawbacks while celebrating victims for making it through what they have endured and supporting them in getting back to who they are takes everything stated above.



Chang, Edward C., et al. "Sexual Assault History and Self-Destructive Behaviors in Women College Students: Testing the Perniciousness of Perfectionism in Predicting Non-Suicidal Self-Injury and Suicidal Behaviors." Personality and Individual Differences, Pergamon, 7 June 2019.
Clark, Harriet. "Hypersexuality as a Valid Trauma Response." Empoword Journalism, 24 Aug. 2021.
Halpern, Abraham L. The Proposed Diagnosis of Hypersexual Disorder for Inclusion in DSM-5: Unnecessary and Harmful, June 2011.
"Hypersexualization of Self after Sexual Abuse." YouTube.
Khadr, Sophie, et al. "Mental and Sexual Health Outcomes Following Sexual Assault in Adolescents: A Prospective Cohort Study." The Lancet Child & Adolescent Health, Elsevier, 19 July 2018.
Lewis, Carly. Finding Pleasure in Sex, Again; In an Age of Predators and Sexual Despair, Assault Survivors Talk about How They Found Intimacy and Fun Again.
Neilson, Elizabeth C., et al. "Understanding Sexual Assault Risk Perception in College: Associations among Sexual Assault History, Drinking to Cope, and Alcohol Use." Addictive Behaviors, Pergamon, 14 Nov. 2017.
Ryan, Kathryn M. "Further Evidence for a Cognitive Component of Rape." Aggression and Violent Behavior, Pergamon, 21 Nov. 2003.
Slavin, Melissa N., et al. "Gender-Related Differences in Associations between Sexual Abuse and Hypersexuality." The Journal of Sexual Medicine, Elsevier, 10 Aug. 2020.
The Speak Up Space. "Let's Talk about Hypersexuality after Assault; a Thread!" Twitter, 7 Oct. 2020.
DSM-5. https://www.psychiatry.org/psychiatrists/practice/dsm. Accessed 6 Apr. 2022.
"Information Processing Model Definition." Psychology Glossary. Accessed 6 Apr. 2022.
Van der Kolk, Bessel A., et al., editors. Traumatic Stress: The Effects of Overwhelming Experience on Mind, Body, and Society. Guilford Press, 1996.




Feminism and Science: A Historical and Contemporary Analysis of Seemingly Paradoxical Institutions Meghana Chamarty

From a very young age, I loved playing around with things and doing the types of experiments that a young child with access to kitchen chemicals could execute. The concepts of gender and feminism were not of much concern to me yet, but every weekend I found myself at some version of a "Women-In-STEM" event. Whether it was an empowerment workshop for young girls who showed interest in science, or a robot exhibition for a robotics team looking to increase diversity, I unknowingly found myself in the crossfire between longstanding, conflicting institutions: science and feminism. At these events, science was curated and tailored to be presented in a simplified and feminine manner, as though science itself were too hard and unappealing for girls. At these events one did not learn of scientific endeavors; rather, we listened to the struggles of existing in a male-dominated field. In hindsight, I understand the intent in holding such events: to promote STEM careers to girls and educate them about the potential hurdles that they may face. But as a young child, I questioned my interests. Why had I not yet felt this alienation that these women said I should feel? Was I not supposed to like science because I was a girl? Was I supposed to be less capable than my male counterparts? Before I had a chance to experience science for what it truly is, the societal constructs of gender were imposed on my passions. As I grew as a scientist and a woman, the conflict between femininity and science tainted my experiences. I constantly asked myself a question of much deliberation among many women in the scientific community: can science and femininity exist concurrently? Turning to female scientist success stories, I found them all biased with feminist undertones. Conflicting with my personal experiences, their stories were not as simple as the narratives being communicated. Finding a lack of truth in contemporary media, I turned to the past. Though I mostly found similar trends, there was the exception of one story: Rosalind Franklin, the female scientist whose scientific contributions served as the basis of the revolutionary Double Helix Theory. Taking a closer look at feminist and scientific discourse, this research not only
analyzes the development of the feminist-scientist narrative but also breaks down the paradoxical case study of Rosalind Franklin. Uncovering the truth in her story and utilizing that truth provides an opportunity to present potential methods to integrate science and feminist institutions to bring about simultaneous advancements for both in modern society.

Figure 1. A 1950s housewife, courtesy of Women Living Well.

In the early 20th century, most women were confined to the domestic sphere doing "things which weren't too demanding like childcare, scrubbing the floor, washing the sheets and curtains, sewing on buttons, and coalmining" (Fleming). The idea that women were unfit for mental or physical work outside of the household persisted, so they were kept illiterate and weak. With World War II, the abilities of women could no longer be ignored or denied.
They were enabled to contribute outside the domestic sphere "in the factory or in uniform." Their contributions were "a sine qua non" of the war effort (Ambrose, 489). The argument that had long confined women to the household lost its validity after they were enlisted to help during World War II. Yet as women began entering the academic sphere, there was a lack of acceptance. Their contributions during the war were "overshadowed when the male war heroes came home" (Parker, 9). The presence of femininity resulted "in the loss of authority," and so those who wanted to remain in the academic sphere and in positions of power needed to rid themselves of their feminine traits and ideologies.

Figure 2. Frances Perkins, the first female Secretary of Labor.

A prominent example of a successful woman who had to rid herself of her feminine tendencies is Frances Perkins, the first female Secretary of Labor. Working with Franklin D. Roosevelt during the Great Depression, she was integral to the creation of the New Deal. Perkins "tried to have as much of a mask as possible" in her professional life. She knew that "a lady interposing an idea into men's conversation is very unwelcome," so she consciously rid herself of common feminine traits. She did not wear makeup and dressed in a "sedate fashion," instead giving the impression of being a "quiet, orderly woman who didn't buzz-buzz (gossip) all the time" (Perkins). Perkins is just one example of this inverse relationship between success in the academic sphere and the outward display of femininity, which has persisted historically
and presently. As more and more women have entered the scientific and academic sphere, the idea that one who is feminine cannot be competent in the workforce has persisted. Currently, although women are being encouraged to join STEM occupations, they are placed in positions of lower power and pay. Professions of "a high degree of honor and status in the United States society" such as "law, medicine, architecture, ministry, dentistry, judicial positions, science, and university teaching" have consistently been dominated by white males (Sokoloff, 1992). These occupations award higher pay and power, and require greater educational qualifications. Conversely, the most popular professions for women are teaching and nursing - occupations that do not stray far from the traditional duties of the household (The American Association of University Women). These occupations have become what I consider the new domestic sphere. Despite partaking in the academic sphere, societal pressures and personal preferences still weigh on women who do not tend to their domestic duties. Although 57.4% of all women are presently involved in the labor force (as compared to 33.9% in the 1950s), they are placed, or choose to be placed, in roles associated with femininity - roles that hold less power and have lower pay (U.S. Bureau of Labor Statistics). When the labor force still remains so segregated, it makes sense that this inverse relationship between scientific success and femininity exists: the face of success in the academic sphere has consistently been that of white males. Therefore, for one to be taken seriously or achieve similar success, this standard must be met. Deviance from such standards - attributes of femininity or non-whiteness - degrades an individual's academic credibility and negatively affects their ability to achieve success in the profession. Advancements in feminism and science remain mutually exclusive, although one distinct example - the narrative of Rosalind Franklin - has proven that a positive correlation between the two is not impossible, shedding light on potential solutions to this seemingly perpetual problem. Rosalind Franklin is a 20th-century scientist whose miscommunicated narrative brought importance and recognition to her scientific achievements while also catalyzing the advancement of the British Women's Liberation Movement.




Figure 4. Crystallographic photo of Sodium Thymonucleate, Type B ("Photo 51"), May 1952.

Figure 3. Rosalind Franklin with microscope. National Institute of Health.

Rosalind Franklin was born in London in 1920. Franklin's passion for science deviated from the gender norms of the time. Consequently, she struggled "to be accepted by a scientific world which often regards women as mere diversions from serious thinking" (Watson, The Double Helix, 133). Despite the gender harassment (as this phenomenon is now termed in the scientific community), Franklin continued to dress in a feminine manner, neither masking her femininity nor flaunting it. Her male co-worker, James Watson, condescendingly expressed that "her dresses showed all the imagination of English blue-stocking adolescents" (Watson, The Double Helix). Although Franklin encountered gender-based prejudice at work, "the fact is Rosalind was never an active feminist, but simply evoked or created respect in her own right as a person" (Klug). Franklin began applying her pioneering x-ray techniques to DNA at King's College. Working with Maurice Wilkins, Franklin encountered many disagreements stemming from their differences, mainly their seemingly incompatible personalities: Franklin preferred working in isolation while Wilkins was rather collaborative, and Franklin was outspoken and stubborn (a rare quality for women at the time) while Wilkins was quieter. The conflict within the lab space created a very divided lab. Pushed away by Franklin, whom he felt should be working under him rather than giving him orders, Wilkins became increasingly collaborative with the scientists at Cambridge: James Watson and Francis Crick.


Despite the disagreements, Franklin remained focused on her work. Her photographs of DNA were "among the most beautiful X-ray photographs ever taken," and "these pictures were vivid, No. 51 especially so. The overall pattern [of photograph 51] was a huge blurry diamond...The pattern shouted helix" (Judson). This photograph not only blatantly hinted at the double helix structure of DNA, but Franklin's techniques also allowed her to "sort out two different forms of DNA" (Aaron Klug, The Nobel Prize) and produce photograph 51. Unknown to Franklin, her crystallography work served as the basis of the work of Watson and Crick (two scientists also researching DNA), who failed to properly credit her. Watson was "shown Rosalind Franklin's x-ray photograph" by Wilkins and immediately knew "that was a helix." From her Photograph 51, which took years of technique to curate and months to develop, Crick, Watson, and Wilkins were able to conclusively determine the structure of DNA within a month. Watson later reflected, "I was told the dimensions...so, you know, I knew roughly what [the photograph] meant...the Franklin photograph was the key event" (Watson, Center for Genomic Research Inauguration). Despite the importance of Franklin's contributions to their discovery, she was credited with a mere footnote. This occurrence - undermining women's work or crediting it to men - has now been coined the Matilda Effect. Franklin is one of many "women scientists who have been ignored, denied credit or otherwise dropped from sight" (Rossiter).




Figure 5. James Watson and Francis Crick with their model of DNA.

At the time this footnote was of no importance, since no one other than Crick, Watson, and Wilkins knew what had truly happened. These three went on to win the Nobel Prize in Physiology or Medicine for their work on DNA. Franklin's scientific contributions to the DNA problem went unnoticed, and she resumed her scientific endeavors in other concentrations. Although she was a great scientist, she was not given the deserved recognition for her groundbreaking work on DNA. She was simply another scientist, one with great accomplishment but in no sense extraordinary. Alas, years of working with x-rays and radiation had their taxing effects on Franklin. Safety protocols in new fields were still being developed, so many brave, pioneering scientists - Marie Curie, Harry Daghlian Jr., and Rosalind Franklin, to name a few - succumbed to the perils of their own work. At the mere age of 35, Franklin developed ovarian cancer. Continuing to conduct research, she fought hard but passed away 18 months later, in April 1958, at the age of 37. Ironically, it was mutations of the DNA in her own cells that led to her demise. Because she passed away so young, her scientific contributions were not recognized while she was alive. Additionally, she was not considered extraordinary enough to have shared her personal narrative in any public manner (autobiography, memoir, interviews). Due to Franklin's focused fixation on her research, she never communicated her views beyond science. Since she became incredibly important only after her death, other individuals have inadvertently miscommunicated her story to benefit the narrative they wanted to communicate. Due to her premature death in 1958, Rosalind Franklin left no narration of her life, and her pioneering contributions to the Double Helix Theory never received the recognition they deserved. Franklin's lack of narrative became a tool exploited by others to share their version of her story. Initiating a series of exaggerated miscommunications, two books - The Double Helix by James Watson and Rosalind Franklin and DNA by Anne Sayre - created a narrative about Franklin.

Figure 6. Watson, James. The Double Helix. Simon & Schuster.

Published in 1968 by James Watson, The Double Helix "relates [Watson's] version of how the structure of DNA was discovered" (The Double Helix, 3). Watson's portrayal of Franklin was demeaning. Referring to her as "Rosy," his dismissal of Franklin's contributions to the Double Helix Theory exposed his appalling sexist views. Rather than accepting her intellectual merit, Watson repeatedly disrespected her scientific caliber. Watson undermined her intellectual capabilities and education, stating, "she had not had the advantage of a rigid Cambridge education only to be so foolish as to misuse it" (The Double Helix, 45). This memoir portrayed Franklin as the feminine stereotype and reinforced her false feminist traits, characterizing her in a very misogynistic manner. Watson unintentionally utilized the language of misogyny. Misogynistic language is "an unjustified sexual bias" that causes harm both internally and to external factors. Essentially mocking her, this language and narrative discredited her scientific caliber and characterized women as "mere distractions in the workplace" (The Double Helix). He portrayed her in a lowly manner, sharing that "clearly Rosy had to go or be put in her place" (The Double Helix, 14), as though he believed her opinion was of no real use. Furthermore, he distracted from the caliber of her work, stating, "rather than build helical models at Maurice's command, she might twist the copper-wire models about his neck" (The Double Helix, 74). By exaggerating her feminine qualities and defining her as a 'woman' before a 'scientist,' Watson's inherent misogyny reduced Franklin from an individual into a simple side character. However, within the greater context, this was a societal trend. Ultimately, Watson portrayed Franklin as someone she was not, and because she was not alive to give consent or share her
own story, this became her first told narrative. Watson's communication of the misogynist treatment inflicted upon Franklin exposed the injustices that she and other 20th-century women in science faced. This, combined with her premature death, led to outrage from British feminist authors.

Figure 7. Sayre, Anne. Rosalind Franklin and DNA. WW Norton & Company.

One such feminist author was Anne Sayre. The wife of a crystallographer, Sayre was a friend of Franklin. Sayre's biography, Rosalind Franklin and DNA, first incorrectly portrayed Franklin as an uncredited feminist. Sayre believed that The Double Helix was written "to menace bright and intellectually ambitious girls" (Rosalind Franklin and DNA, 196), and wrote Franklin's biography in response, claiming "Rosalind has been used [by Watson] to provide reasons why men who work with intelligent women should resent them" (Rosalind Franklin and DNA, 197). In an attempt to correct Franklin's scientific reputation and bring awareness to her contributions to DNA research, Sayre unintentionally twisted facts. Although the Nobel Prize couldn't be awarded posthumously, Sayre argued, "Rosalind has been robbed. Little by little; it is a robbery against which I protest" (Rosalind Franklin and DNA, 190). Tainted by bias, Sayre miscommunicated the nature of the sexism experienced by Franklin. Using charged, feminist language, Sayre characterized Franklin as a martyr for feminists: someone who was fighting for the feminist cause in her scientific endeavors and was blatantly mistreated by male co-workers who stole her work out of disregard for her worth as a scientist. While some of this holds true, the majority of Sayre's language was exaggerated and fabricated. Victimizing Franklin, Sayre communicates that Franklin "was over and over again a victim of the sort of thinking that not only prefers women to confine themselves to kitchen and nursery and possibly
church, but is outraged by their presence anywhere else at all" (Rosalind Franklin and DNA, 197). Sayre's feminist bias inherently portrayed Franklin as a feminist. Outraged at an injustice committed against a so-called fellow feminist, the feminist movement rallied together to create change for women in the workplace. While this change has brought about longstanding advances in domestic and vocational women's rights, Franklin was not actually a feminist. Although the widely known narrative of Franklin portrays her as a feminist figure, she was not, in fact, the feminist icon that most know her to be. This narrative stemmed, of course, from Sayre's biography. In miscommunicating Franklin's life, Sayre "overshadowed [Franklin's] intellectual strength and independence both as a scientist and as an individual" (Maddox). Sayre's communication would have "embarrassed [Franklin] almost as much as Watson's account would have upset her" (Glynn). The combination of Sayre's biography and Watson's memoir portrayed Franklin as the wronged feminist, bringing attention to the contributions of her work and catalyzing the British Women's Liberation Movement: an advancement for both the science and feminist institutions. Franklin's falsely communicated narrative thus accelerated the movement, inspiring millions of women internationally and creating lasting legislative change. In April 1971, following Sayre's biography, the British Women's Liberation Movement (BWLM) fought "for the right for women to train for and gain entry into all occupations, highlighted women's unpaid work in the home, and demanded equal pay for equal work outside the home" (British Library). Within a year, Parliament passed the Sex Discrimination Act, which outlawed "sexual discrimination in the workplace." These acts are still in effect today and were major advancements for gender equality in the workplace. Following Great Britain's example, the United States soon developed similar initiatives. Today, there is greater gender equity and a larger representation of women in science. "Scientists in biology are now women, basically. It's more than fifty percent" (Brenner). Although we are currently still a long way from truly achieving equality within the world of work, the advancements brought about during this time have been crucial to the establishment of women outside the domestic sphere. Franklin's (now recognized) scientific contributions have served as the basis for many scientific breakthroughs. Knowing the structure of DNA was the key to figuring out how DNA is replicated and how genes are coded. With this knowledge, the Human Genome Project - an initiative to sequence the human genome - was started and led by, ironically, James Watson. The Human Genome Project is
"one of the most ambitious scientific undertakings of all time" (Human Genome Project Results). This project serves as the basis of knowledge for the genetic technology upon which our contemporary understanding of genetics and medicine is built.

From Franklin's case study, methods to integrate science and feminism and to bring about advancements for both can be created and implemented in modern society. By acknowledging the experiences of women in science (or just women in general) and integrating them into both feminist ideology and our overall scientific understanding, the logics of science can be expanded beyond their current role and utilized to advocate for the feminist cause. To start, our understanding of women's bodies and psychology must be expanded into the realms of medicine and science. Presently and historically, there has been a lack of not only female doctors and researchers but also female test subjects. Therefore, research and our understandings have been based on those of men. Female bodies and psychology have come to be viewed as something different from those of men - imperfect, some would call them. The "existing paradigms systematically ignore or erase the significance of women's experiences and the organization of gender" (Stacey and Thorne), so the results of research have been applied to women, but their effectiveness is flawed. This 'copy-paste' approach characterizes women's bodies and psychology as flawed, when they are simply different from those of men. To correct this, more diverse sample sets are needed in research. Women's bodies cannot be considered just like men's, but they are not to be viewed as inferior either. This distinct standard must, therefore, be integrated into the education of doctors and researchers. For women to be interested and accepted in science, science must be interested in and accepting of women. A greater respect for and understanding of female bodies and psyches will not only bring about better treatment of women in the workforce, but will also allow women to feel more compelled to join such fields. Once our understanding of women's bodies and psychology is expanded, political, cultural, and moral bias must be removed from science. Inherent gender biases characterize solely-female anatomical attributes as passive and weak, and solely-male anatomical attributes as active and strong, creating scientific flaws and setting back feminist understandings. Science, being team-based and almost generational in a sense, passes down information from one set of scientists to the next. With that, inherent biases, practices, and disciplines are passed down as well. Research is evaluated within the social context of the research group and the scientific community as a whole; therefore, it is these subsets that need to be altered to create inclusive, bias-free research. To correct this, the language of scientific research and
publications should be carefully critiqued. Language is the source from which characterizations are formed; and, although unintentional, repeated correlations of language use can alter the essence of a scientific term and cause it to be misunderstood in a way that likely conforms to societal biases. Language has the power to "shape the thinking and acting of working scientists." When the manner in which a scientist thinks is altered, the integrity of the science being conducted becomes compromised. A scientist's attention and perception of "the fields in which they can envision experiments that might be useful to undertake" change, and this perpetuates the problem (Keller 2001: 106). When editing a body of research, its underlying portrayal should be carefully evaluated for such bias. Only then can scientific research become detached from societal contexts and become a source for progressive advancements. To further bridge the gap between feminism and science, research should be conducted in areas of science that are of particular interest to feminists (those that are generally not seen as important areas of research by the scientific community because of their lack of obvious applications). These areas are important to feminists because they can be used to prove equality and bring advancements in gender equality. Much religious oppression within cultures originates from a lack of concrete information and from misinformation; therefore, credible scientific research conducted in such areas becomes very relevant for feminist activists. Western feminist philosophers of science have "so often targeted the value-free ideal associated with positivism," utilizing a rationalized body of philosophy to progress their ideologies and bring about advancements in the feminist movement. However, these western feminists often "fail to see the usefulness of that ideal for nonwestern feminists" (Narayan, 216-217), who can use feminist-oriented scientific research as a tool to challenge oppression. Upon thinking of solutions to mitigate the divide between feminist and scientific ideologies, it becomes evident that the problem lies not within individuals themselves, but rather that it is largely a systemic issue: one that has been embedded throughout history and perpetuated by our experiences. Our current solutions blame women themselves for choosing fields other than science, or for choosing to partake in occupations within the new domestic sphere. In actuality, the problem is rooted in something larger than the individual decisions of women. Pointing the finger at women is counterproductive and further exacerbates the issue. Targeting the system and addressing the issue at its cause requires more intensive and conscious effort by both the science and feminist institutions, but it will allow advancements in both to be brought about. This work begins by evaluating the origin of the introduction of women into the workforce during World War II, following its lack of progression after the return of men from war, and outlining the societal understanding of the physical and
mental capabilities of women over this period of time. From this evaluation, an inverse relationship between the science and feminist institutions was determined. The onset of a new class of occupations - the new domestic sphere - is then analyzed and placed into historical and modern context. Then the false and true narratives of Rosalind Franklin were analyzed and used as a case study to propose contemporary solutions to this systemic flaw. Although remembered as a feminist, Franklin was discovered not, in fact, to have been one; she was only portrayed as one after her death. Her scientific successes were popularized by the feminist movement and concurrently sparked the British Women's Liberation Movement. This movement resulted in the passing of the Sex Discrimination Act, which was a major step for gender equality in the workforce. Franklin's case study evidently showed that the lack of women in intensive scientific fields results not from a lack of interest in science, nor solely from the mistreatment of women in the workforce, but rather from inherent prejudices embedded within the core of our cultures and language. Proposed solutions include expanding our understanding of women's bodies and psychology into the realms of medicine and science, involving more diverse sample sets in research, educating on distinct-but-equal standards of differentiation, removing political, cultural, and moral bias from scientific culture, critiquing the language of scientific research and publications, and expanding research into areas of science that are of particular interest to feminists. Through this process of research, I was truly able to contextualize my experiences and understand my presence as a woman in science. I now understand that being feminine and being a scientist are not mutually exclusive; rather, they are beautifully entwined. The best of both institutions can only be unveiled when the institutions converge.

Ambrose, Stephen E. Scribner's and Sons, 1995.
Brenner, Sydney. Admitting Women to Cambridge University. British Library. Interview.
Fleming, Jacky. The Trouble with Women. Kansas City: Andrews McMeel Publishing, 2016.
Glynn, Jenifer. "Remembering My Sister Rosalind Franklin." The Lancet, 2012.
"Human Genome Project Results." Genome.gov.
Judson, Horace Freeland. Expanded ed. Plainview, N.Y.: CSHL Press, 1996. Print.
Keller, Evelyn Fox. New Haven: Yale University Press, 1985.
Keller, Evelyn Fox. "Making a Difference: Feminist Movement and Feminist Critique of Science." In Angela N. H. Creager, Elizabeth Lunbeck, and Londa Schiebinger (eds.), Chicago: University of Chicago Press, 2001, pp. 98-109.
Klug, Aaron. Letter. 14 Apr. 1976.
Klug, Aaron. Interview. Conducted by Joanna Rose. The Nobel Prize.
Maddox, Brenda. "The Double Helix and the 'Wronged Heroine.'" Nature News, Nature Publishing Group.
Narayan, Uma. "The Project of Feminist Epistemology: Perspectives from a Nonwestern Feminist." In Harding (ed.) 2004c: 213-224.
Parker, P. (2015). Administrative Issues Journal, Vol. 5, No. 1: 3-14.
Perkins, Frances. Notes on the Male Mind.
Rossiter, Margaret W. "The Matthew Matilda Effect in Science." Social Studies of Science, vol. 23, no. 2, Sage Publications, Ltd., 1993, pp. 325-41.
Sayre, Anne. Rosalind Franklin and DNA. WW Norton & Company.
Sokoloff. (1992).
Stacey, Judith, and Barrie Thorne. "The Missing Feminist Revolution in Sociology." Social Problems, 1985.
The American Association of University Women. (2003).
Watson, James. Center for Genomic Research Inauguration, Harvard. September 30, 1999.
Watson, James. The Double Helix. Simon & Schuster.
"Women in the Labor Force: A Databook: BLS Reports." U.S. Bureau of Labor Statistics, 1 Apr. 2021.



Tribal Influence and How the Culture of a People Can Change Over Time: A Case Study Sewoe Mortoo

The first time I can remember being in Ghana was one of the first times I truly felt at home. That's not to say I'd never felt at home before, but it was a different feeling, a feeling of acceptance and belonging to a history that was my own. Seeing my extended family after almost an entire decade and visiting my ancestral village made the experience so much sweeter than just a regular family reunion. We traveled across Ghana together, visiting family members and seeing important sites. However, it was also a hard experience. Visiting Elmina Castle, one of many slave forts on the "Gold Coast," was one of the most gut-wrenching experiences of my life. Standing in between those walls that had seen so much pain and suffering, hearing the events that had happened to so many men, women, and children, history felt real: it was not just something you hear in the classroom and then forget, but something that truly has an effect on your life, every day. The choices of the enslavers to own people in such a way have affected the history of millions of people around the world. With the way the enslaved literally carried the economies of western Europe and the Americas on their backs, the world most likely wouldn't be what it is today without them. And yet, their descendants and families have received nothing except lives defined by the violent choices of people whose lives depended upon their toil. When I visited Ghana for a second time, I became more cognizant of this truth. Everything our ancestors have done, everything that they have experienced, has completely shaped the world that we live in today. They not only suffered, but resisted enslavement; and their choices have created what we can today recognize as their culture. Culture isn't something that just appears out of thin air – it's something that has been cultivated through time, beliefs, and experiences. Describing the culture of a people can be a complicated feat because culture is complex. Yes, it is the food, the music, and the stories. But it's also the way they set up their communities, their chosen governments and practices, and the amount of influence their religion had on their community. It's the way they traded among themselves and with outsiders. The food they grew, the herbs and spices they used, the clothes they wore and the cloth used to make them. All these little things together create the culture of a people. And all these little things have been defined by the environment in which they settled and the communities
created within that environment–by the social geography of slavery. The physical environment has a peculiar impact on everyone. Today, our lives are molded to the seasons and their weather patterns. We associate certain aspects of weather with school or with work, habitually referring to breaks as summer vacations or snow days. We all know that school starts between late summer and the beginning of fall, and that spring break and winter break happen really close to the Christian holidays of Easter and Christmas. However, these arbitrary relationships between our work lives and the seasons aren’t the same for everyone in the world. The reason we associate the end of the school year with summer is because of the hemisphere that we live in. If we lived in Australia, and followed America’s school year, we’d associate the end of the school year with winter vacation, and the holiday season with summer break. The environment has a different effect on everyone simply based on your location in the world. Also, agriculture, one of the most fundamental aspects of our society, is entirely based on the seasons, and has been for thousands of years. Crops depend on certain weather conditions to flourish. The environment also has an impact on our social relationships. Despite globalization, the United States’ relationship with Canada is still different than the United States’ relationship with France. Even though the United States has alliances with both of them, the different proximity to one another has a subsequent impact on their economic and political relations. On a smaller scale, the communities you are a part of locally have an impact on the issues that most concern you. While you may have serious consideration for events that are happening to other people, the events that directly affect you define the person you are and the choices you make. The environment, both physical and social, defines the life that you have the ability to live. Through this paper, I will establish the ways that select groups and tribes in Ghana have influenced each other’s culture throughout history, and how the physical environment can alter a culture and its people that it belongs to. One thing I’d like to express is the fact that the history of the chosen tribe and of Ghana in general is immense, complicated, and unable to be boiled down to a research paper. The complex relationships between each tribe–their conflicts, alliances, trade routes, and so much

more–will not be completely expressed in this paper, because I am expressing how the history of the people has been defined by the physical and social environment, not simply what the history of the people is. I will be focusing on the geographical determination of significant historical events and what those events meant for the culture of the people who lived there.

Before colonization, indeed, before the creation of western nation-states such as the United Kingdom, France, Germany, Spain, and Portugal, there were flourishing civilizations throughout Africa, defined by language, culture, and their environment. One region that hosted many powerful civilizations was West Africa. West Africa is made up of many ethno-linguistic groups, such as the Yoruba, Gbe, and Akan, who produced many powerful societies, all with their own distinctive language dialects, cultures, and religious practices. Gbe speakers are spread throughout Ghana, Togo, Benin, and Nigeria, stretching from Ghana's Volta Lake in the west to Nigeria's Weme River in the east, and from the eighth parallel of latitude in the north to the shore of the Atlantic Ocean in the south. The Gbe-speaking region is bordered by other ethno-linguistic groups such as the Ga-Adangme, Akan, and Guan speakers to the west, Yoruba speakers to the east, and the Akpafu, Adele, Lolobi, and Aguna speakers to the north. In fact, the close proximity of some Gbe-speaking tribes to Akan-speaking tribes has led to them being considered ethnically Akan. The Gbe ethno-linguistic group has six main branches: the Ewe, the Fon, the Aja, the Gen, the Mina, and the Phla-Phera; however, there are fifty-one distinctive Gbe-speaking communities in the region. The Ewe inhabit the most western part of the Gbe-speaking region, with the Fon inhabiting the most eastern part, the Aja in the central region, and the Gen, Mina, and Phla-Phera interspersed throughout the center (Figure 1).

Figure 1. Map showing the locations of major Gbe-speaking tribes across modern West Africa (Venkatachalam).

The Aja were the only organized Gbe-speaking state by the turn of the fifteenth century, while the Ewe have never, even up to today, truly formed into one defined society. Unlike
the Ewes, by the seventeenth century, the Fon were able to form the Dahomey kingdom, one of the most notable African kingdoms in history. The Dahomey possessed one of the only forms of absolute monarchy in Africa in history, which combined with a well-functioning centralized bureaucracy, added to the success of the people (Britannica). The Dahomey female warriors, also known as Amazons, have served as the inspiration for many features of pop culture in today’s world, as well. The Gen and the Mina are the branches of Gbe speakers that aren’t considered ethnically Gbe, despite having adopted many of the traditions and practices that define the Gbe (Venkatachalam). While Gbe speakers do share a language family, they also share a set of beliefs and traditional practices. For example, most Gbe speakers believe that they settled in their current location after migrating from Yorubaland between the eleventh and fifteenth centuries. And, these individual groups all have similar religious practices, based in the worship of deities. Finally, the Phla-Phera people were settled in the area before the Gbe speakers arrived, and some considered the distinction of language between Ewe and Yoruba to have stemmed from them (Venkatachalam).

The tribe of import is the aforementioned Ewe tribe – almost half of all Gbe speakers throughout Ghana and Togo are Ewe. The geographical location of the Ewe – bodies of water on three sides and a mountainous region on the fourth – adds even more complexity to the history of the Ewe. While the Ewe are considered to be one ethnicity by the government of Ghana, the vast differences between the physical, social, and political environments of each Ewe community contradict the idea, held by many, of the homogeneity of the tribe. Though the people in Eweland all speak the "same" language, there are a variety of dialects and distinctive differences that make up the Ewe tribe. For one, the stark contrast between the mountainous, coastal, and freshwater regions and their ecological features means the defining physical environment and medicinal practices are dissimilar for many Ewe. The relationship that each Ewe has to the environment varies from place to place. Some owe their lives to their environment, spending their days on the water fishing or cultivating significant produce, while others simply see the environment as the place they call home. Furthermore, the Ewe's close proximity to the Akan allowed for many unregulated societal and political influences on critical aspects of Ewe culture, even if not all Ewe were impacted equally (Venkatachalam). The Akan are considered to be the most similar tribe by many Ewe, but not all, because while the Akan did annex much of Eweland during the nineteenth century, and subsequently imposed their system of government (chieftaincy) onto many Ewe communities, those who withstood Akan control did not absorb as many features of Akan culture as others did (Bob-Milliar).



Figure 2. Map showing prominent Anlo cities across present-day Ghana, Togo, and Benin (Venkatachalam).

Nevertheless, to divide the Ewe into these geographical regions is to generalize and to abstract their differences, which are in practice less distinct, for the people within these regions interact with one another. The northern Ewe (Gbi Ewe) are located at Lake Volta, with settlements (dukowo) such as Peki, Kpando, Kpalime, Hohoe, and Alavanyo. Before the rise in influence of the Akan on many Ewe, the governments of dukowo throughout Eweland were based on patrilineages (Bob-Milliar). However, after the Akan annexed much of Eweland during the eighteenth century, much of Gbi Ewe adopted chieftaincy as their primary form of government (Bob-Milliar). While this area is considered part of the Gbe ethno-linguistic group, there is also a large Twi-speaking community present. Gbi Ewe hosted many trade routes that passed in and out of Eweland, because many routes that flowed from the savanna at Lake Volta to the Atlantic coast passed through Gbi dukowo. These trade routes brought in many influences from outside of Eweland, such as religion – Islam became a well-practiced religion in this area – and after colonization, Christianity's influence increased greatly (Greene). The center of Eweland – Ewedome – is considered to begin where the plains are located (below the northern region), and consists of notable dukowo such as Ho, Kpetoe, Tove, Keve, Abutia, and Adaklu. Unlike the Gbi Ewe, who were known for trading, the Ewedome were known for their agricultural lifestyle, with maize, corn, and yams grown in the long stretch of plains. The Akan's influence was felt severely here as well, with the political ideology of chieftaincy overcoming Ewedome's strict patrilineal governments (Venkatachalam).

Because the southeast of Eweland spans both Ghana and Togo – with notable settlements including Lome (the present-day capital of Togo), Anlo, and Fenyi – many people from south-eastern Ghana have a different opinion on how to distinguish between the Ewe people, splitting the Ewe into two groups instead of three: the coastal Ewe and the inland Ewe (Eweme). Unlike the Eweme, the coastal Ewe have much more distant ties to the Akan and their cultural practices, favoring their neighbors to the east: the Aja, Fon, Mina, and Gen. The Yoruba are to the far east of Eweland; however, some communities within the two tribes share many similarities, including their religious practices and other cultural traits (Venkatachalam). That doesn't mean that the Akan didn't have an influence on the coastal Ewe, with some Akan settlements annexing and controlling prominent southeastern Ewe dukowo throughout history. The political practice of chieftaincy is still prevalent there; however, the coastal Ewe's historical practice of dual descent and their dedication to their religious beliefs have led many to offer less respect and authority to their chiefs (Bob-Milliar). And, while no duko in Eweme ever rose to prominence over the others in their region, the settlement of Anlo came to have great power (Venkatachalam).

T

he Anlo believe two heroes, Sri and Wenya, to be the founders of the majority of Anlo settlements, after leading the people out of Notsie (present-day Togo) (Venkatachalam). In fact, many settlements are named after the first Anlo’s search for a new home, such as Keta, meaning “I have seen the Head of the Sand,” and Kedzi, meaning “I have at last arrived on the sand.” The history of the Anlo and their neighbors is fraught with violence and economic success. Because many of their direct neighbors to the north were small city-states, it was easy for the Anlo to invade them, causing much of the time between the seventeenth and nineteenth centuries to be full of conflicts between Eweland settlements and even non-Ewe settlements. And Anlo’s position between the sea and the lagoon meant that it was prone to conflict with people who wanted control of the seas and fishery trade for themselves. Multiple Akan chiefdoms annexed much of Anlo during the eighteenth century, and though most of Anlo were able to attain independence by the mid-eighteenth century, Keta (the commercial hub of the Anlo) was only able to obtain independence in the late eighteenth century. There were also many conflicts between the Anlo and the Ada, Akyem, Akuapem, and Anexo over fishing rights and access to resources, until a permanent political merger came to fruition. The Asante’s (prominent Akan tribe) influence on much of Eweme simply increased as time passed, and though the Anlo stayed separate from the Asante, the Asante had an immense impact on the defining politics of Eweland. The first Europeans to maintain power in Anlo were the Danish, who built Fort Prinzenstein in Keta in 1784. At this point, throughout Eweland, Europeans were coming Fifth World



Before colonialism, the Anlo practiced a form of bondage known as pawnship, a system of codependency that “guaranteed credit and mobilized labor” through temporary servitude. The two main forms of this bondage were the kluviwo and the awobawo. The kluviwo – servant-children – were children given as security while a parent paid off a debt. They enjoyed some freedoms, but were still contracted to work for a creditor. The awobawo were slightly different in that the creditor owned only the contract and could ‘own’ the services the individual provided, rather than the individual themselves. The main difference between these forms of bondage and others, such as panyarring (capturing hostages to collect a debt) and chattel slavery, is that the kluvi and awoba were usually members of the community; even when they had to enter these contracts, they retained much of their status within it. When an individual was panyarred, a person from the debtor’s family was kidnapped and the entire family was held responsible for paying off the debt. There was also no contract, so some individuals were sold into slavery after extended periods without payment. Chattel slavery was also practiced by the Anlo during the eighteenth and nineteenth centuries, coinciding with the rise of the Atlantic slave trade. There were two main types of slaves: the amefeflewo and the ametisavawo. The amefeflewo (bought persons) were people from outside Anlo, usually brought in along the trade routes that ran through Eweland, while the ametisavawo were people captured in wars with other settlements and chiefdoms (Venkatachalam). Though Denmark was the first European power to abolish the slave trade, and the Anlo were greatly influenced by the Danes, the economic success that the slave trade brought – both in West Africa and across the Atlantic – drove the Anlo to continue the overseas trade even after it had been officially abolished. It also did not help that the Danish sold their forts to the British, in order to offset economic losses, just as the British were looking to increase their political influence in the area. Although the British abolished slavery some time after the Danish, remnants of its ideology and practices remained in the area and beyond. Following the Asante-Anlo alliance and the subsequent Asante Wars between them and the Eweme, the British defeated the Anlo and brought most, if not all, coastal dukowo into the Gold Coast colony (Venkatachalam). Europeans did not influence the Anlo through military and political power alone; they also exerted influence through religious conversion. The first missionaries to meet the Anlo were the Germans of the Bremen Mission (Greene). They were greeted with reproach in Anloga and so chose Keta as the site of their first mission station within the Anlo state. This quickly began to destabilize the religious systems in Anlo, because people now had the freedom to practice religions that required no prior familial integration.

The Bremen missionaries built the Anlo Christian community by recruiting children who had been sold into slavery (Greene); by baptizing them, they created the first generation of Anlo Christians. Today there are a great many Anlo Christians, both in Ghana and throughout the world. The colonization of Eweland by European powers brought even more discord into the relationships between the dukowo. Most of the central Ewe were brought into the German protectorate of Togoland, which was split into British and French mandated territories after World War I. The end of the slave trade also signified the end of much of Anlo’s economic success. After being absorbed into the Gold Coast colony, Anlo’s economy shifted to include shallot and liquor production alongside the existing fishing industry. However, environmental problems began to arise in some Anlo towns: Keta, the Anlo’s primary commercial town, had been experiencing coastal erosion since the early 1900s (Venkatachalam). The construction of the Akosombo Dam, in combination with the perpetual removal of sand, has increased rates of erosion throughout coastal Ghana (Boateng, Naadi). In fact, many coastal countries in West Africa are experiencing increasing rates of erosion because of the displacement of sand following the construction of dams (Boateng).

Figure 3. Images showing the impacts of erosion in Keta, Ghana, in 2003 (Boateng).



Figure 4. “I Want to Go to Keta” by Kofi Acquah, concerning the erosion of Keta.

Present-day Ghana has also experienced high rates of emigration, which means that many of the people who defined this area, or would have continued to, have chosen to leave. Because the people and the environment are so interconnected, it makes sense that the deterioration of the physical environment is linked to economic decline and to the decline of the Anlo population.

The culture recognized today as Anlo is recognized in that way only because of the complex history of the Anlo. The Anlo belong to a series of groupings of increasing size, each with its own distinctions. First, they are part of the coastal Ewe, among whom they rose to great prominence. Through trade with the Eweme and with other tribes such as the Akan, and through the considerable commercial success of Keta, the Anlo were able to maintain this position for the majority of their history.

Next, they are part of Eweland, the region in which all Ewe-speaking tribes reside. Eweland is bounded by three bodies of water – the Atlantic Ocean, Lake Volta, and the Weme River – and by a mountain range, and the tribes within it live vastly different lives. While the Anlo spend much of their time fishing at sea, other Ewe spend much of their time in the plains between the lake and the sea, growing crops such as maize. These regions have also developed different relationships with the Abrahamic religions: in northern Eweland, Islam developed a strong foothold among the people (Christianity followed later) and meshed with their existing religious beliefs, while in the southeast Islam did not take hold. Coastal, central, and northern Ewe each have their own lifestyles and practices shaped by their physical environment, which makes them distinguishable by region; however, their shared language (though with different dialects) and broadly common culture mean that they are all Ewe. Finally, the Ewe belong to the Gbe ethno-linguistic group, along with the Fon, the Aja, the Gen, the Mina, and the Phla-Phera. Each of these groups has its own belief systems and practices, but the one thing that ties them together is the language. Tribes and peoples outside of these clusters have had an impact on the Ewe as well, including Akan tribes and European powers. The Ewe’s history with the Akan has been tumultuous, with wars, alliances, and everything in between; yet whether positive or negative, these interactions have caused permanent change to the culture of the people. The majority of Anlo communities practiced patrilineal government until the influence of the Akan, through trade and annexation, converted them to chieftaincy. Relationships between Akan and Ewe tribes were so varied that while some Ewe were under Akan control, others were allied with them. And when Europeans came, they used religion and military force to impose their power on the people, changing the culture once again as religion and government shifted. As more people converted to Christianity, respect for earlier religious figures and beliefs began to wane. Unlike before, when religious membership was closely tied to one’s family, Christianity allowed anyone to convert at any time. This caused strain within the Anlo community and beyond as the religion spread. So many aspects of Anlo culture have been defined by the social environment in which the Anlo found themselves – everyone they came into contact with changed something about the people. It may have been their government or the industries by which they made a living; it may have been the religion they practiced or the way they made their clothing. Whatever it was, Anlo culture has been changed throughout its history. And that may not be a bad thing. Some of the biggest signifiers of Anlo identity – their clothing – were brought by the Akan to Eweland.



While this was only one location and one people under review, in how many more places in the world might we find similar trends? How have peoples across the entire world been defined by one another, by actions taken hundreds of years ago?

Boateng, Isaac. “An Application of GIS and Coastal Geomorphology for Large Scale Assessment of Coastal Erosion and Management: A Case Study of Ghana.” Journal of Coastal Conservation, 2012.

Bob-Milliar, George M. “Chieftaincy, Diaspora, and Development: The Institution of Nkɔsuohene in Ghana.” African Affairs, vol. 108, no. 433, October 2009, pp. 541–558.

Britannica, The Editors of Encyclopaedia. “Dahomey.” Encyclopedia Britannica, 30 May 2019. Accessed 12 December 2021.

Greene, Sandra E. Sacred Sites and the Colonial Encounter: A History of Meaning and Memory in Ghana. Indiana University Press, 2002. ProQuest Ebook Central.

Naadi, Thomas. “Ghana’s Coastal Erosion: The Village Buried in Sand.” BBC News, BBC, 11 May 2016.

Venkatachalam, Meera. Slavery, Memory, and Religion in Southeastern Ghana, c. 1850–Present. Cambridge University Press, 2015.



An Afterword to Volume Seven

There is a passage in Walden’s wonderful chapter on “Spring” in which Thoreau observes the thawing of ice down an embankment made by the railroad which ran through the woods by the pond. This “deep cut” in the earth reminds us that Walden Pond was not a pristine environment; indeed, the real interest in Thoreau’s description of the sun’s work on the “exposed banks” of the railway is precisely the care with which he closely attends to a damaged landscape. The writing evokes the poetry of nature as a “hybrid product” recreating itself after great suffering. As, for instance, the “innumerable streams” of melting ice “overlap and interlace with one another,” so their flowing water “takes the forms of sappy leaves or vines, making heaps of pulpy sprays a foot or more in depth, and resembling, as you look down on them, the laciniated lobed and imbricated thalluses of some lichens; or you are reminded of coral, of leopard’s paws or birds’ feet, of brains or lungs or bowels, and excrements of all kinds. It is a truly grotesque vegetation. . .” The poetry of the fluent natural world so vividly evoked by Thoreau’s prose is inseparable from its relation to the human body. Imagining himself present “in the Laboratory of the Artist who made the world and me. . . sporting on this bank, and with excess of energy strewing his designs about,” Thoreau exclaims in wonder, “What is man but a mass of thawing clay? The ball of the human finger is but a drop congealed. The fingers and toes flow to their extent from the thawing mass of the body. Who knows what the human body would expand and flow out to under a more genial heaven?” What makes this moment of revelation so remarkable is its creative response to an earlier crisis. Readers familiar with Walden know well the opening of “Higher Laws,” when Thoreau, glimpsing a woodchuck, confesses that he “felt a strange thrill of savage delight, and was strongly tempted to seize him and devour him raw,” not from hunger, but from a desire for “the wildness that he represented.” It becomes clear that such “wildness” is no less part of human nature than the higher laws with which this chapter is concerned. “I found in myself, and still find, an instinct towards a higher, or, as it is named, spiritual life, as do most men, and another towards a primitive rank and savage one, and I reverence them both. I love the wild not less than the good.” Such love is, however, difficult to sustain, even in the solitude of the woods; and we soon find, to our dismay, that Thoreau turns in revulsion from this disturbing “generative energy,” opposing its “sensuality” and “uncleanness” to “purity.” I leave the rest of this chapter to readers; here, I wish to note that, in the thawing warmth of spring, Thoreau discovers in the renewal of the damaged world the words necessary for a more expansive and fluent life.

Our embodiment, or, as Judith Butler more aptly describes it, our constant acts of “embodying,” entangled and encumbered as they are with social meanings, should be understood and affirmed as “hybrid” or even as “grotesque” insofar as they attest to an “excess of energy” and to a life exceeding constraint, a life that escapes. I am reminded of this as I reflect upon the brilliant students whose work appears in these pages, and indeed upon those who, no less brilliant, were unable to publish in the difficult circumstances of another pandemic year. Our work together was begun in the winter of 2020-21, when the students in the Research Experience in Humanities—the prerequisite for Research in the Humanities—and I would gather in a basement classroom to discuss the changing nature of human “nature” as imagined in science and literature. In one gothic novel after another, challenges to received notions of the “human” are met with imaginative and actual violence: the “wild” is opposed to the “spiritual,” or, more exactly, to the spiritualization of conceptions of the human that can no longer be sustained in the face of difference. But difference can be effaced: the confrontation with that which exceeds “us” can be rewritten as an encounter with a monster whose “face” fills us only with horror—or so the gothic novel would seem to suggest. Yet the monstrous is irreducible to the monster who escapes us; and the horror opens onto a wild delight in the play of the world, “sporting on the bank” of the devastated earth. So, too, does research in the humanities emerge from scientific discovery, as the critical and creative response to questions which science raises but cannot answer on its own terms. It emerges from students hungry for life after two years of isolation, students whose rigorous commitment to scientific inquiry leads to social and political questions of a life that might be lived “under a more genial sun.” These are writers upon whom nothing is lost. There is much to be gained, then, from their emergence as writers, scientists, and artists. To those readers who are uncomfortable with the subjects of some of these essays, I say this: Find comfort in the courage of these students, for they refuse to know nothing about what matters most to them. Their writings cannot but encourage and cheer the spirit even in times when optimism is difficult to summon. Like Thoreau in his beautiful ode to the morning, they remind us that “we must learn to reawaken and keep ourselves awake, not by mechanical aids, but by an infinite expectation of the dawn, which does not forsake us in our soundest sleep.” Woke, wild, and good, these works keep faith with the dawning of another world.

David Cantrell
Durham, NC



