Vol. 4: Fall 2014 (Essays selected from courses taught in Spring 2014)
Table of Contents

EDITORS' WELCOME

JUSTICE FOR ALL:
Bullying: Who's Really to Blame? (By Amanda Fish)
Redeeming the Legacy: Joe Paterno's Statue (By Eimi Smith)
Killer Psychopaths: Why? (By Kelly Wapenski)
Abolishing Capital Punishment (By Raj Kolloru)
The U.S. Criminal Justice System: A Complex Structure (By Wesley Chai)
The Rise of the Extreme Right in Contemporary Europe (By Lallo Darbo)

ETHICAL CONSUMERISM:
Unethical Conduct (By Dana Chapman)
I Was Imagined by a Copywriter (By Romain Caïetti)

CONVINCING THE CROWD:
Video Games: Art or Entertainment? (By Silje Solland)
Dog Aggression: Nature versus Nurture (By Kylee Chun)
Eyes of Realism: Personal Perspective (By Samantha Patanapaiboon)

CULTURES AND SUBCULTURES:
The Divergence and Denigration of Chinese Language (By Selah Chung)
The Philippines: Questioning the Country's Independence (By Janine Mariano)
American Dream: Americans vs. Immigrants and Global Citizenship (By Pancy Lwin)
The Smell of Gunpowder (By Ray Abarintos)
World of Cosplay: Is it Weird? (By Candace Ferris)
In Pursuit of Hearing: Looking into Cochlear Implants (By Jessica Bie)

LOOKING AT LITERATURE:
Fitzgerald and Steinbeck: Is the American Dream Really Dead? (By Elizabeth Dash)
Feminists Never Rest: A Study of "The Yellow Wallpaper" (By Rosario Kuhl)
The Great Gatsby: The Shallow Reality of Daisy Buchanan (By Sophia Suarez)
Shadow of the Imagination: The Dark Side of Peter Pan (By Everett J.Y. Fujii)

MEET THE WRITERS!
MAHALO!
EDITORS' WELCOME

Hello! My name is Savannah Halbrook, and I transferred to Hawai'i Pacific University in Fall 2013 as a junior majoring in English. I am now a senior and I live in Waianae, Hawai'i, though I was born and raised in Texas. Before serving as an editorial intern, I was the Editor-in-Chief for The Brand (the student newspaper at my prior college, HSU) and a Hawai'i Pacific Review staff reader and managing editor. Working with Fresh Perspectives has been an amazing, influential experience. I have thoroughly enjoyed the opportunity to read these outstanding first-year essays while getting to know the writers behind them. I am excited for their publication and I hope that everyone enjoys this issue!

Hi everyone! My name is Brittany McGarry and I am a senior, majoring in English with a minor in Writing. I've lived in Ewa Beach, Hawai'i, for over five years. In addition to editing, I work as a writing mentor at CAS and organize events for our English honor society, Sigma Tau Delta. It has been a privilege to be an editorial intern for this publication and to be able to interact with students and their essays. I hope you all enjoy reading these inspiring pieces as much as I have. Whether you consider yourself "good" at writing or not, we all have something important to say that deserves to be heard.

I'm Kathleen Cassity, Associate Professor and Chair of HPU's English Department. This is our fourth issue of Fresh Perspectives, an anthology designed to broaden our students' audiences and showcase their writing. Since this issue derives from the Spring semester, when Writing 1200 is our predominant offering, it features long research papers written for that course. This makes for a lengthy volume, which should be encouraging to anyone concerned that technology might be impeding students' ability to produce extended texts. While all the pieces were subjected to a professional editing process, the words and ideas are each writer's own. Many of the pieces here express potentially controversial viewpoints; please remember that all opinions reflect the views of the writers, not HPU as an institution, our editors, or me. Finally, these pieces represent a range of disciplines and discourse communities; thus, citation styles and other discursive conventions will vary from piece to piece.

Once again, Savannah and Brittany have done an outstanding job of grouping these essays. Essays in "Justice for All" address ethical questions on topics from bullying to serial killing to inequities in our justice system. The "Ethical Consumerism" pieces question our contemporary approaches to food production and advertising, while selections in "Convincing the Crowd" take bold positions on a range of issues, from video games to canine aggression and more. "Cultures and Subcultures" explores cultures as wide-ranging as China and the Philippines, the military, Deaf culture, and cosplay, while essays in "Looking at Literature" reexamine such well-known texts as The Great Gatsby and Peter Pan. This makes for an eclectic and enjoyable mix of substantive essays. We hope you will enjoy reading this issue, and we thank you for your support of HPU's first-year writing students.
JUSTICE FOR ALL
Bullying: Who's Really to Blame? (By Amanda Fish)

Is the sky red or blue? No, I'm serious. Is the sky red or blue? Blue? Is that your answer? Congratulations! You, everyone before you, and everyone after you, were all right. That's because everyone knows that, generally speaking, the sky is blue. The problem is that this paper must, in effect, convince you that the sky is actually red. Sound impossible? I would like to temporarily shift your focus to a parallel dichotomy: bullying. In a bullying situation, whom should you defend: the victim or the bully? The predominant answer may be the victim, but remember that we are trying to assert the claim that the less-obvious answer is true. The obvious choices are that the sky is blue, and that we should side with the victim. But, reader, I am trying to tell you that sometimes the sky is red; sometimes, we should consider the perspective of the bully.

By now you are probably wondering why you should side with the bully, and why you should believe that the sky is red. To answer the latter portion of your inquiry, I present you with a modified version of my original question: Is the sky red or blue . . . at sunset? If I can make you realize that sometimes the sky is more red than blue, then maybe I can use this paper to prove that sometimes, we should try to understand the bully.

I believe it can be universally agreed upon that we should try to stop bullying in our society's schools. The problem is that we need to figure out the best way to accomplish this. All of our current methods seem to be ineffective at solving the problem at its heart, and are instead aimed at treating only the symptoms. Today's methods are geared more toward punishing bullies for mistreating their victims than toward solving the actual issue to prevent future incidents. Maybe, just maybe, our focus is in the wrong place. What would happen if we switched our emphasis from the victim to the bully, despite the fact that this idea goes against every notion about bullying society has taught us? Society tends to villainize bullies rather than understand their perspective—a perspective which may be the key to solving bullying in its entirety.

Before we get into the nitty-gritty details, allow me to clarify a few things for future reference. Although I will be exclusively referring to bullying in high school, I recognize that bullying also occurs at the middle school and even elementary school levels. I am simply making everything succinct and consistent by only referring to bullying in high school, as opposed to listing out elementary school, middle school, and high school each time I make a point. Also, this paper will address physical and emotional bullying—the latter of which, according to "Bullying Statistics," is more prevalent in schools today than physical bullying—but will not include cyber-bullying ("Bullying Statistics"). My rationale for this choice stems from the fact that cyber-bullying is relatively new and difficult to address, and that schools are only just recently developing stances and policies on it. Despite the fact that 42% of teenagers have been bullied online ("Bullying Statistics"), it should be noted that cyber-bullying is located in a realm outside of the school and, quite frankly, is so complex that it could be an entire research paper by itself.
The last point I wish to clarify is my claim that even if the victim’s point of view is half of the story, more focus should be placed on the bully’s perspective. Oftentimes people mistake this sentiment as me saying that the victim’s point of view should simply be discarded. I am not saying that seeing things from the victim’s perspective is wrong. On the contrary, it is crucial to understanding the situation. However, looking at bullying only through the victim’s eyes limits our evaluation, and thus our knowledge, on the issue of bullying and how to best stop it. More distinguishing is done later on in this paper in regards to the bully and victim’s perspectives, so please withhold any blatant rejection of my claim on the basis of the victim’s importance. With all of those clarifications out of the way, we are left with only one question: What is the definition of bullying, and how does one distinguish between bullying and just a classmate being mean? I am afraid this question is more complicated; in fact, it may be impossible to fully answer because it delves into the stereotyping of bullies and bullying behavior. What comes to your mind when I say the word ‘bully’? Besides the physical stereotypes—big, strong, intimidating, and so forth—there are certain personality-characteristics we tend to associate with bullies, maybe without even realizing that we are doing it. Bullies are characterized as being narcissistic and self-preoccupied, which leads them to attempt to assert their dominance and power over their victims because they are oversensitive to any forms of perceived criticism (Lines). Bullying as an issue revolves around this imbalance of power and the bully’s repeated attempts to cause harm. Bullies are able to do so continually because they “tend to exhibit a lack of empathy for the suffering of their victims,” which is indicative of mental health problems including psychosomatic symptoms (Cowie). Looking at all of this makes a bully sound like some heartless, selfish, aggressive entity—like something not even human. Bullies are presented as “vicious creatures…as sub-human, dangerous, marauding menace[s]” (Rigby). This is what I call the “villainization” of bullies. You may notice that there is a recurring pattern in life that the victim garners sympathy, while the antagonist is hated. Our natural inclination is to immediately side with the apparent victim; this ties into humans’ empathy and compassion for each other. However, empathizing solely with the victim makes it easier to villainize the bully. Even the great author Mark Twain recognized this, noting, “But who prays for Satan? Who, in eighteen centuries, has had the common humanity to pray for the one sinner that needed it most?” Oftentimes those who are villainized by society are the ones who received the least kindness to begin with. Because bullies are dehumanized by society, they lack empathy for their victims as a subconscious means of justification (Cowie). Bullying behavior is often seen to be an impulsive, dysfunctional way of communicating. Despite this, we label individuals as bullies, which correlates to the fact that “the practice of labeling dominant characteristics [as bullying] merely create[s] the very responses that we anticipate but want to avoid” (Lines). To bullies, bullying is simply a way to navigate socially and establish themselves in school. Schools are ideal settings for bullying because “they are hierarchal institutions . . . and there are power dynamics operating” (Twemlow). 
Power games of domination and manipulation are played by bullies as a way to see who backs down first; it is a twisted form of social dynamics.
One example of this "game" is the gray area of "name-calling" (Lines). Sometimes, name-calling is seen as an acceptable form of social communication and banter. Other times, though, name-calling is viewed as a "precursor to violent and aggressive behavior" (Lines). The distinguishing factor is often just the general feeling people get during the conversation. Friends insulting each other will give off a different vibe than bullies asserting their dominance over the social hierarchy. Dominating others gives them control, which, in the bullies' eyes, translates to social safety in school. When their control is challenged, they strike back. While this rationale does not excuse what bullies do to their victims, it reminds us that bullies are people too. They are simply humans reacting to a situation the best they know how. Unfortunately, often none of this is taken into account by teachers during their assessments of bullying situations. Preference is shown to the victim, or whoever is weaker in the power hierarchy.

So what are we to do, then? Should we just ignore the victims? As I clarified earlier, it is impossible to fully prioritize the bully's perspective over the victim's. Could you, reader, look a crying, emotionally distraught young teenager in the eyes and tell him that he should try to see things through the eyes of his tormentor? Could you try to tell that victim that society has simply villainized their bully, and that their bully's viewpoint is more crucial than their own? I find it hard to believe that any sympathetic person could do this in good conscience. If they did, victims would feel unheard and distrustful, and fewer would come forward to teachers at all. As it is, 58% of children never tell an adult when they have been the victims of a bullying attack ("Bullying Statistics"). You can see how shifting more focus to the bully would undervalue the victim's perspective. Nonetheless, this does not diminish the importance of considering the bully's perspective. It is imperative that victims of bullying know that their story is being taken seriously, and that their perspective is believed; otherwise, bullying as an epidemic may never end. A balance must be achieved between the victims' knowledge that their stories are taken seriously and the bullies' mindset that they are being attacked and villainized.

One of the most serious implications of villainizing bullies is how it affects the attempted solutions. Bullying has been so thoroughly established as an issue that the villainized bully's perspective on the supposed solution is often discredited and ignored. In general, people have a sense of when they are being ignored, or when their opinions are being deemed unimportant. We have all experienced it: when we are ignoring our parents, when a friend is not paying attention to us, when a student is tuning out a professor. When a bullying situation occurs, the bully experiences this disregard, and yet they are expected to just go along with the "solution" anyway. I say "solution" hesitantly because 77% of students have experienced some form of bullying, so I wonder if the issue is truly being solved at all ("Bullying Statistics"). The various methods used to solve bullying today encompass "systematic approaches to countering bullying and identifying 'problem people' and changing them" (Rigby). Would you listen to somebody who tells you that you are a problem and need to change? I doubt it would make you feel very good about yourself.
These methods include "collaborative conflict resolution, peer counseling, assertiveness training" and "supportive interventions," which are geared toward encouraging the bullies to change their dominating ways to positive relations with
their classmates (Smith). Some people are more assertive and strong-willed—dominating, in effect—than others. Must these people constantly censor themselves in order to have "positive relations"? Another strategy is the "promoting issues in common method," which attempts to connect the bully to the victim through their shared anxieties and fears. According to New Perspectives on Bullying: A Guide for Practitioners, promoting issues in common "changes the bully-victim relationship by enabling bullies to understand the fears and anxieties they have in common with those they bully" (Cowie). Though well-intentioned, this strategy assumes that not only will the bully and victim have the same insecurities, but that they will trust each other enough to talk about it.

Several schools have "zero tolerance policies," which entail suspensions for the bully – and, if severe enough, expulsion. From the bullies' perspective, they are being punished while the victim—whom in some cases the bully considers equally culpable—gets off scot-free. Bullies who experience this may think, "That's not fair," and this only serves to build up their animosity. Their respect for school authority is also lessened. Luckily, many schools have just recently realized that "zero tolerance policies [are] ineffective in changing the behavior of bullies" because studies show that there is a correlation between the use of suspension and lower academic performance (Cowie). Instead, many schools are focusing on "collaborative conflict resolutions" by holding seminars, workshops, and assemblies on bullying, which cover what students should do if they witness it, why it is not okay, and so forth (Cowie). They aim to discuss the basic principles of "communication, negotiation, mediation, arbitration, litigation, and legislation" with regard to school-wide policy on bullying (Smith). The problem is that each teacher approaches bullying situations differently. This is why some classrooms have "Safe Zone" and "No Bullying" signs posted on the walls and others do not. Education officials have been trying for years to end bullying in schools through a variety of tactics, strategies, and policies. Unfortunately, they have had little success so far. Perhaps a change in strategies would better solve bullying issues in schools today.

Everyone has heard the stories of bullying; some have even experienced it firsthand. It is time for our society to stop superficially listening to these stories. It is time to confront the storytellers. It is difficult to discuss bullying because everyone has had different amounts of exposure to it and it is a sensitive topic. But these varying experiences evoke questions concerning the practices and impacts of bullying, and only through careful examination of these experiences can the solutions be discovered.

Terri Kanaele was emotionally bullied by two classmates for just over two months in high school. Eventually, she could not handle the stress anymore and she told her principal. Kanaele's bullies were expelled. No warning, no attempts to rectify the situation, nothing. When I asked Kanaele why she thought her classmates bullied her, she hesitated. She had never thought of it before. Eventually, Kanaele said, "I think they wanted me to be scared of them." Kanaele stated she knew her bullies regretted their actions, and that she wished they had never bullied anyone because now that they are adults who know better, they are suffering the consequences. Despite
this, when I asked her what she would do or say if she saw one of her former bullies today, Kanaele said, “I’d smile and laugh in his face.” Kanaele’s story, her school’s response, and the impacts on the bullies’ futures are typical of bullying stories today. When I confronted the storyteller, when I asked her if she had ever considered her bullies’ perspectives, she had not. She knew they regretted their actions—what reasonable adult would not regret their decisions when they were young and stupid?—yet she felt no pity for them. In her mind, they were still bullies, not people. This is villainization in the real world: two human beings with hopes, dreams, and futures were simplified and stereotyped into bullies. No consideration was given to them, only to protecting Kanaele from experiencing future bullying. While this is not necessarily a bad thing—as I have said, protecting the victims is important—it was done at the complete expense of the bullies. Their futures were sacrificed for Kanaele’s. When I was attempting to find people to interview, it was nearly impossible to find somebody who answered “yes” to the question, “Were you a bully in school?” At first I thought that this was because they were ashamed to admit it, but then I realized that this thought process relates to the same stereotyping and villainization that I have been talking about. The simple fact of the matter is that bullies do not see themselves as bullies. As previously stated, to bullies, their actions are justified. Their bullying stems from their anger and desire for revenge for some perceived past injustice. But this mentality is the manifestation of the saying “an eye for an eye,” which “leaves the world blind” (Smith). A bully perceives something as a threat to their own self-appraisal, and so they respond through manipulation and aggression; this is called “defensive egotism” (Cowie). In New Perspectives on Bullying, Ken Rigby explains that defensive egotism relates to the idea that “bullying, manipulation, and aggression stem from perceived threats to their self-appraisal.” Because bullies see things in this manner, with their own actions being warranted, any punishment that is a result of the anti-bullying techniques used today is seen as an unfair attack against them. They are the ones being told they are wrong, they are the ones who need to change their behavior, and they are the ones being villainized. Yet despite all of this, they must “suck it up” and go along with the “solution” to the problem. How can we expect our numerous antibullying tactics to have any kind of success with this flawed logic? The current solutions are rendered completely ineffective, which is why considering the bully’s point of view is vital. The key to helping end bullying is not attempting to understand, categorize, and then stereotype the psyche and motivations of a bully, but instead to realize that they are people who have been painted into a corner. From this corner, they perceive no way to get out except through more lashing out and manipulative behavior. What is the square root of 25? I know, I know, I should not be asking a math question in the middle of an English paper. Just humor me: square root of 25? If your answer is 5, congratulations! You have wasted ten pages’ worth of your own time. Granted, you are indeed partially correct—but you are not thinking outside of the box. You are not deviating from the expected path. If, however, your answer was 5 or -5, you understand what I have been trying to teach you all along. 
You are looking at a problem from an unexpected perspective. Some problems need fresh, unusual perspectives in order to fully be solved. Maybe bullying is one of these problems.
Ultimately, if we want to stop bullying in our schools, we must recognize that sometimes bullies are being villainized, and we need to try and see things from their points of view. Yes, the victim's perspective is still important; after all, 5 is also the square root of 25. However, we have tried to solve bullying from solely the victim's perspective, to seemingly no avail. Now it is time to try and deal with bullying in our society with the bully also in mind. If we stopped villainizing bullies long enough to understand them, maybe we would be able to reduce the rampant bullying that occurs in high schools today.

WORKS CITED

"Bullying Statistics." Bullying Statistics, 2013. www.bullyingstatistics.org.
Castro, Allison. Personal interview. 8 April 2014.
Cowie, Helen, and Jennifer Dawn. New Perspectives on Bullying: A Guide for Practitioners. Berkshire: Open University Press, 2008. 54-60.
Fekkes, M., and F. Pijpers. "Bullying: Who does what, when, and where? Involvement of children, teachers, and parents in bullying behavior." Health, Education, and Research 20.1 (2004): 81-91. Online.
Juvonen, Jaana, and Sandra Graham. "Bullying in Schools: The Power of Bullies and the Plight of Victims." Annual Review of Psychology 65 (2014): 154-185. Print.
Kanaele, Terri. Personal interview. 25 April 2014.
Landwehr, Jon. Personal interview. 8 April 2014.
Lines, Dennis. Bullies: Understanding Bullies and Bullying. London: Jessica Kingsley Publishers, 2007. 61-100.
Olweus, Dan. "A Profile of Bullying at School." Educational Leadership (2003): 48-54. Online.
Rigby, Ken. New Perspectives on Bullying. Philadelphia: Jessica Kingsley Publishers, 2002. 127-134, 234-236.
Smith, Peter K., and Sonia Sharp. School Bullying: Insights and Perspectives. London: Routledge, 1994. 108-113.
Twemlow, Stuart W., and Frank C. Sacco. Preventing Bullying and School Violence. Arlington: American Psychiatric Publishing, 2012.
Redeeming the Legacy: Joe Paterno's Statue (By Eimi Smith)

A typically peaceful Pennsylvania morning was shaken on this particular day in 2012. People were gathering, some in despair, some in relief, but all in shock. The ruckus of the demolition machines and shouts of the construction workers caught the attention of students, faculty, fans, and the community surrounding Pennsylvania State University; most of them were chaotically struggling to snap one last photograph or even catch one last glimpse of the monumental statue being uprooted outside of Beaver Stadium.

Should Joe Paterno's statue have been taken down? On one hand, Paterno was aware of the child abuse being committed by Jerry Sandusky, and he did the bare minimum of only telling the Pennsylvania State University athletic director rather than alerting the authorities. As a result, Sandusky continued to abuse young males on Penn State property for nearly 13 years after Paterno first became aware of the abuse. However, Paterno was also a legendary coach, philanthropist, educator, and leader to all his athletes, students, and fans. Joe Paterno is Pennsylvania State University in the sense that he was the face of the university, the man who made the university go from good to great, the man who pushed athletes to do more than just play sports, and the man who contributed millions toward bettering the university's educational system. Is the one major mistake that Paterno made, a mistake that exposes a momentary character flaw, a valid enough reason to alter his reputation, regardless of all the good that was done in the past? By looking at Joe Paterno and examining the other statues glorified in America, I will argue that Joe Paterno's statue should not have been removed, due to his commitment to bettering Pennsylvania State's athletic and educational goals, his fulfillment of obligation to the university, and the fact that he could not be directly condemned for any criminal activity.

On November 2, 2001, the statue of professor and head football coach Joe Paterno was unveiled near Beaver Stadium on the Pennsylvania State University's campus. The bronze, 900-pound, 7-foot-tall statue was awarded to Paterno for both his academic contributions to the university and world-renowned coaching ability (Mink, 2012, p. 1). Alongside the magnificent statue stood a stone wall with three distinct sections. The first read, "Joseph Vincent Paterno: Educator, Coach, Humanitarian" (Preston, 2012, p. 1). The second included an engraved sculpture of Paterno as well as his players following him. The third, however, perhaps reveals the most about Joe Paterno's true character and ambitions. It quoted Paterno on how he hoped to leave his legacy: "They ask me what I'd like written about me when I'm gone. I hope they write I made Penn State a better place, not just that I was a good football coach" (Preston, 2012, p. 1).

Following the 2011 Jerry Sandusky child sex-abuse scandal and exactly six months after Paterno's death from lung cancer, Penn State President Rodney Erickson announced that the Paterno statue was to be removed. Erickson provided the reasoning behind his actions: the statue was "a source of division and an obstacle to healing. For that reason, I have decided that it is in the best interest of our university and public safety to remove the statue and store it in a secure location.
I believe that, were it to remain, the statue will be a recurring wound to the multitude of individuals across the nation and beyond who have been the victims of child abuse" (Mink, 2012, p. 1).
Paterno's family also released a statement after notice of the statue removal: "Tearing down the statue of Joe Paterno does not serve the victims of Jerry Sandusky's horrible crimes or help heal the Penn State Community. We believe the only way to help the victims is to uncover the full truth" (Mink, 2012, p. 1).

The removal was quite an emotional event on campus: "Some ran here, the sweat of an early morning run soaking their clothes and dripping off their faces. Others arrived on bike. They brought their dogs, children and smartphones, capturing photos and video of the removal" (Mink, 2012, p. 1). Corrie Weimer, a sophomore communications science disorder major at the university, heard about the removal of the statue and immediately made the 40-mile road trip to get a photograph with the statue of a man she admired and looked up to. When she arrived too late, she "was a mess with moist eyes and a quivering lip. When I was pulling up I couldn't believe they would do it. I was probably cursing out people I shouldn't be cursing out. I was just thinking why would they do something so awful" (Mink, 2012, p. 1). Unlike most college football coaches, Joe Paterno was more than just a coach to his athletes; he was a role model to students and fans as well.

Although Joe Paterno was primarily acknowledged for his excellence and victories as a football coach, he was also an English professor at the university and supported academics perhaps just as much as athletics. He championed the "Grand Experiment," which was based on the idea of athletes always putting their education before sports and which raised, beyond the national requirement, the grade-point average athletes needed in order to be cleared to play in games and practice (Berube, 2012, p. 381). As a result, Penn State players consistently had higher GPAs than other Division 1 football athletes and graduated at a considerably higher rate, with an 80% graduation rate as opposed to the national average of 67% calculated by the National Collegiate Athletic Association (NCAA).

Paterno and his wife, Sue, were also renowned for their generous contribution of over $4 million to improve various departments of Pennsylvania State University, including the Penn State All-Sports Museum ("Joe Paterno," 2012). After gaining more status following the first national championship victory, Paterno and his wife led a capital campaign that ultimately raised over $13.5 million in order to expand the university's library because, according to Paterno, "You can't have a great university without a great library" (Berube, 2012, p. 381). The new wing of the library bears the Paterno name to show honor and gratitude. According to Erickson, "The school's library, which also bears Paterno's name, will not be altered in any way because it represents the academic mission of the university Paterno helped foster" (Mink, 2012, p. 2). Joe Paterno was passionate about encouraging both his athletes and the whole student body of Penn State to pursue an enriching education and was adamant about going above and beyond to make that possible.

It is no secret that Joe Paterno was also a huge success in his role as a motivational football coach. He dedicated 52 years of his life to coaching a total of 582 games. Paterno is the only college football coach to have won all four traditional New Year's Day bowl games: the Rose, Sugar, Cotton, and Orange Bowls. Paterno was named Sports Illustrated's 1986 "Sportsman of the Year" ("Joe Paterno," 2012).
The National Football Foundation and College Football Hall of Fame selected him as the first active coach to receive the Distinguished American Award. In 1998, he received the Eddie Robinson "Coach-of-the-Year" Award, which is presented to a college coach who serves as a role model to students and players, an active
member of the community and an accomplished coach ("Joe Paterno," 2012). Paterno's 409 career wins granted him the title of the "winningest coach in college football history."

Paterno's success on the field was far from limited to just athletic victories; he was an effective leader to his athletes, who looked up to him as well. Former quarterback Todd Blackledge stated the following about his former coach:

...I can tell you that virtually all of the players he's touched in fifty years as an assistant and head coach have been enriched by the experience…I consider myself, and I know my teammates and Penn State players past and present feel likewise, a better person for having played for Joe Paterno. ("Joe Paterno," 2012)

Another former player for Paterno, LaVar Arrington—a top candidate for the NFL first-round draft choices, a two-time All-America winner, a winner of both the 1999 Butkus Award as the nation's top linebacker and the Maxwell Club's Chuck Bednarik Award presented to the top collegiate defensive player—speaks about his training under Paterno:

If you're not a man when you get there, you'll be a man before you leave . . . Joe has his system so that you're prepared for life. Joe trains you more mentally than physically so that nothing will rattle you. He often has said he measures team success not by athletic prowess but by the number of productive citizens who make a contribution to society. ("Joe Paterno," 2012)

There is no question that Paterno had a long-lasting impact on his players, which motivated them to do more than just be great at playing football. He sincerely strove to have a positive impact on all of those with whom he was in contact. Perhaps this is the secret to becoming a great leader and successful coach. As for Paterno, his method of enthusiasm and dynamic coaching allowed him to become the most successful college football coach of all time.

Regardless of how legendary Joe Paterno came to be, there is no debate that the public view of his character was severely tarnished when the Jerry Sandusky child sex-abuse scandal was exposed. Because there were several side stories and conflicting results from evidence, Penn State hired the law firm Freeh Sporkin & Sullivan, led by former FBI Director Louis Freeh, to perform an independent private investigation. In short, Jerry Sandusky, a former Penn State assistant football coach and professor, was charged with and convicted on 45 counts of sexually abusing young boys on Penn State property. Meanwhile, the investigation concluded that four of Penn State's most powerful leaders—Athletic Director Tim Curley, Vice President Gary Schultz, President Graham Spanier, and Head Football Coach Joseph Paterno—had been made aware of the horrendous situation but failed to alert the authorities. "Our most saddening and sobering finding is the total disregard for the safety and welfare of Sandusky's child victims by the most senior leaders at Penn State," Freeh said during a press conference shortly after his report was released (2012).

Perhaps most damaging to the legacy of Joe Paterno is the fact that the report concluded that graduate student Mike McQueary informed Paterno that he had caught Sandusky showering with a boy in the locker room back in 1998, yet Sandusky was never reported to the authorities until 2011 by a janitor who saw him sexually abusing a young male in the Penn State football locker room (Freeh, 2012, p. 182). From the public perspective, Paterno was deemed responsible for not protecting those children who were harmed for 13 years while he had knowledge or
suspicion of the criminal and abominable behavior by Sandusky. On May 13, 1998, Curley sent Schultz an e-mail with the subject line "Jerry" and the message, "Anything new in this department? Coach is anxious to know where it stands" (Freeh, 2012, p. 97). The Freeh report assumes that "Coach" refers to Paterno. This evidence suggests that Paterno was more aware of the situation than he led the public and authorities to believe.

Although the media and the public have accepted the Freeh report as the definitive conclusion of the Sandusky scandal, the report also has its flaws. Michael Chertoff, a former secretary of Homeland Security for George W. Bush, stated that the report is "an incomplete legal analysis; a failure to collect evidence concerning most of the allegations; a disregard of evidence tending to undermine Freeh's assumptions; and a failure to investigate plausible alternate explanations" (Thompson, 2013). Paterno's family also spoke about the released report:

The Freeh report… is the equivalent of an indictment—a charging document written by a prosecutor—and an incomplete and unofficial one at that. To those who truly want to know the truth about Sandusky, it should matter that Joe Paterno has never had a hearing; that his legal counsel has never been able to interview key witnesses . . . that selective evidence and the opinion of Mr. Freeh is treated as the equivalent of a fair trial. Despite this obviously flawed and one-sided presentation, the University believes it must acquiesce and accept that Joe Paterno has been given a fair and complete hearing. We think the better course would have been for the University to take a strong stand in support of due process so that the complete truth can be uncovered. It is not the University's responsibility to defend or protect Joe Paterno. But they at least should have acknowledged that important legal cases are still pending & that the record on Joe Paterno, the Board and other key players is far from complete. (Mink, 2012, p. 2)

Surely, Freeh had the intention of finding out the truth in order to establish justice for those who had fallen victim to Jerry Sandusky. His investigation did indeed find facts and evidence that incriminated Paterno, Spanier, Curley, and Schultz more than they led people to believe. However, as stated by Chertoff and the Paterno family, the report is incomplete and not completely revealing. Therefore, placing blame and responsibility on those men is not a fair assessment. It is especially one-sided because Joe Paterno died before he was given the chance to defend himself or give the facts of the story. His family speaks again on his character, which was clearly supported by his past endeavors:

Joe Paterno wasn't perfect. He made mistakes and he regretted them. He is still the only leader to step forward and say that with the benefit of hindsight he wished he had done more. To think, however, that he would have protected Jerry Sandusky to avoid bad publicity is simply not realistic. If Joe Paterno had understood what Sandusky was, a fear of bad publicity would not have factored into his actions. (ESPN, 2012)

Again, because Paterno is not alive to tell his side of the story now, neither his family's statements nor the Freeh report can be deemed the truth behind his involvement in covering up the scandal. Also, Paterno most likely would not have been convicted of any criminal activity if he were still alive.
Schultz, Curley, and Spanier face criminal charges of perjury and failing to report child abuse. Yet there is much dispute in the case, which is why it is still not resolved (Preston, 2012,
p. 2). These disputes over whether or not these men should be put on trial arise from attorneys obtaining information illegally and therefore not being able to use certain evidence; the accuracy of the Freeh report; a lack of sufficient testimonies and witnesses; and many more legal complications. Due to the conflicting allegations and the fact that Paterno is no longer alive, it cannot be definitively concluded that he was truly guilty or innocent of covering for Sandusky. Paterno was never convicted; his legacy was tarnished and his statue taken down because of public opinion that he had abdicated his moral obligations and that the man he claimed to be throughout his career, a man of honor whose morals exceeded athletics, no longer appeared genuine. Of course those who fell victim to Sandusky and their families advocate blaming Paterno, because he potentially could have saved over 40 children from being sexually abused, but that is not necessarily a fact either. The fact is that the truth of the events probably will never be determined; therefore, it is unjust to place blame on anyone at this point in time.

According to Scott Allison's "The Sense-Making of Joe Paterno's Legacy": "As a general rule, people hold heroes to a higher moral standard and harbor an almost perverse schadenfreude delight in watching heroes crash and burn" (p. 1). When first notified of the abuse, Paterno did proceed to tell a higher-up at the university what he had learned. He also never physically saw the abuse occur; he was only told by a graduate student and passed the message along. For these reasons, it is understandable that Paterno was fired from the university. Yet the punishments for his actions came while Paterno was still alive, and he bore the consequences. While he was rapidly dying, he watched his football team put to shame, his athletes stripped of their scholarships and dropping out of school, his "winningest coach in college football history" title stripped from him, and his cherished university acquire a bad reputation as a place where child abuse occurs; perhaps most hurtful of all, he watched as many of his fans, friends, family, co-workers, and athletes lost respect for him. For not going above and beyond to protect innocent children, maybe he deserves all of these things, but the fact that his statue was taken down after his death is certainly disrespectful and unnecessary.

All across the world stand a plethora of statues and monuments that glorify people who have committed indecencies and brutalities. The people of Kosovo awarded Bill Clinton a statue in his honor for his role in the 1999 NATO intervention that protected their country (Friedel & Sidel, 2006). This statue was awarded after the Monica Lewinsky scandal of the late 1990s, when Clinton, at that time President of the United States, was caught having sexual relations with another woman, Monica Lewinsky, while married to Hillary Clinton. General Robert E. Lee also has a statue in his honor in Richmond, Virginia. Lee was one of the most strategic and successful commanders of the Confederate Army in the Civil War. Armies under his command inflicted over 133,722 casualties on Union soldiers (Beavins, 2013). Although Lee fought for the side which society today views as wrong for perpetuating slavery, the statue of Lee still stands, to honor his loyalty to his home state of Virginia and his excellent leadership skills when it came to training his army.
There were no objections or protests calling for his statue to be removed in order to be respectful to the families and the victims of his army. This is because the statue does not serve to exonerate Lee of the casualties for which he was responsible; instead, it is simply a symbolic gesture to esteem him for the legacy he left behind.
In the same sense, Joe Paterno's statue was dedicated to him back in 2001 to praise him for his dedication to Penn State, his philanthropy, and his successful leadership as head coach of the football team. The statue was irrelevant to the Sandusky scandal because it obviously did not serve the purpose of praising Paterno for potentially hiding child abuse. From the events, it seems as if Erickson removed the statue as a means to protect Penn State's reputation rather than because he felt it was a "recurring wound" to the victims and their families (Mink, 2012, p. 1). It is certainly a shame that, after nearly half a century of Paterno's life was dedicated to bettering the university, the university failed to stand behind him and protect his legacy even when the events of the situation had not yet been fully exposed.

From speculation, it appears as if Joe Paterno had either turned a blind eye to the child abuse taking place on campus, acted as an aid to help Sandusky cover up the child abuse, or quite simply passed the information along to a higher-up in administration rather than alerting the legal authorities. Quite frankly, the world may never know the complete truth of how Paterno responded to the Sandusky scandal. Still, his statue should not have been taken down. Paterno dedicated over half of his life to ensuring that both athletes and non-athletes at Pennsylvania State University were able to access all of the resources necessary to achieve excellence. His coaching method is inarguably one of the most effective, as proven by his continuous winning record. His determination to influence his players on a deeper level than just physical activity created "men" instead of "boys." The lack of evidence and unclear accusations made in the Freeh report call into question his direct accountability for the mistakes of Jerry Sandusky. Last but not least, the fact that statues of other leaders who have committed crimes such as adultery and murder still stand to honor some other aspect of their lives, regardless of the harm that was done to society, emphasizes why Joe Paterno's statue should still be standing today.

REFERENCES

Allison, S. (2012, January 24). The sense-making of Joe Paterno's legacy. Society of Personality and Social Psychology. Retrieved from https://spsptalks.wordpress.com/2012/01/24/the-sense-making-of-joe-paternoslegacy/
Beavins, S. (2013). Civil War trust. Saving America's Civil War Battlefields. Retrieved from http://www.civilwar.org/education/history/biographies/robert-e-lee.html
Berube, M. (2012). At Penn State, a bitter reckoning. Sage, 12(4), 381-382. Retrieved from http://csc.sagepub.com/content/12/4/381
ESPN. (2012, July 12). Paterno family issues statement. ESPN.com. Retrieved from http://espn.go.com/college-football/story/_/id/8159863/the-family-joe-paterno-issuesstatement-response-freeh-report
Friedel, F., & Sidel, H. (2006). William J. Clinton. The Presidents of the United States of America. Retrieved from http://www.whitehouse.gov/about/president
Joe Paterno. (2002, March 14). Rock Ethics Institute. Retrieved from http://rockethics.psu.edu/resources/aboutus/bio/joe-paterno
Mink, N. (2012, July 22). Penn State president orders Paterno statue removal. USAToday.com. Retrieved from http://usatoday30.usatoday.com/sports/college/football/bigten/story/2012-0722/penn-state-paterno-statue/56410366/1
Preston, J. (2012). Penn State removes Paterno statue. The Lede, The New York Times. Retrieved from http://thelede.blogs.nytimes.com/2012/07/22/pennstate-will-remove-paterno-statue/?_php=true&_type=blogs&_r=0
Thompson, C. (2013, April 22). Louis Freeh has heard this before: Ex-government official decries 2012 Freeh report as flawed. PennLive.com. Retrieved from http://www.pennlive.com/midstate/index.ssf/2013/04/did_somebody_steal_paterno_fam.html
Wiley, S., & Dahling, J. (2013). Being Penn State: The role of Joe Paterno's prototypicality in the Sandusky sex-abuse scandal. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 152-155.
Killer Psychopaths: Why? (By Kelly Wapenski)

Killers are fascinating, and many people wonder what makes them tick. In fact, there are probably many reasons; psychopathy is suspected to have many different origins involving biology, childhood trauma, and psychological deficiencies. Psychopaths are almost like magicians with their manipulative abilities. They fool everyone around them into believing they are a part of society. Not knowing how psychopaths work may make us more vulnerable to them. By comparing the first identified psychopaths to psychopaths today, a more precise picture arises of what we can expect from a psychopath. My focus here is on psychopaths who develop the urge to kill and why this urge is created. Once that urge is acted upon, it becomes much easier for them to repeat the act over and over again, which makes them serial killers. This all builds to the question I am asking here: Can we determine how a psychopath is created, and is it realistic to think we can help or "cure" them?

The term "psychopath" has been in use since the mid-1800s. According to Merriam-Webster's Online Dictionary, a psychopath is "a person who is mentally ill, who does not care about other people, and who is usually dangerous or violent." The term was first used officially in court to describe a woman who had murdered a girl. According to Henk van Setten, a Dutch social historian, it is a mistaken assumption that the earliest published use of the term "psychopath" appeared in a January 21, 1885 London newspaper article (para. 1). In van Setten's blog entry, "1885: The Semenova Case," he explains how the term was used by other psychiatrists in the 1840s, but these psychiatrists had a very general understanding of the term (para. 1). Van Setten claims that "psychopath" was first used in the trial of alleged psychopath Ms. Semenova, who killed a girl and pleaded insanity with the help of a Russian psychiatrist by the name of Ivan Balinsky (van Setten, para. 3). Balinsky persuaded the jury that Semenova was suffering from "psychopathy" and was therefore morally irresponsible (van Setten, para. 5). This is the first time on record that someone on trial for murder was acquitted of the charge because of psychopathy. An excerpt from an article written at the time of the trial defined a psychopath as "an individual whose every moral faculty appears to be of the normal equilibrium. He thinks logically, he distinguishes good and evil, and he acts according to reason. But of all moral notions he is entirely devoid . . . Besides his own person and his own interests, nothing is sacred to a psychopath" (van Setten, para. 5). Compare that definition to the one van Setten found in the 1941 case study book The Mask of Sanity: "someone with a specific kind of personality disorder… a mental illness where amoral, extreme behavior is hidden under an outward semblance of normality" (qtd. in van Setten, para. 8). I believe that both definitions are accurate, though the journalist's definition is much cruder. Psychopathy is a mental illness in which a person is devoid of truly feeling moral notions and empathy toward others.

Now picture a psychopath. How does this person look? Someone may picture a person who sticks out from society because of a deformity, or someone who is simply odd and grungy looking. This stereotypical person is also probably a male and is most likely thirty-five to fifty years old. Yet people do not often think about how psychopaths look when they start out: first as
babies and then as teenagers. No one can simply point to a person and determine if they are a psychopath based on looks. This previously described imaginary psychopath is probably ostracized from society and does not fit in, but while this may be true for some psychopaths, others appear integrated into society and seem to blend in. These psychopaths are cunning and manipulative in ways that help them appear normal. This is why we have a difficult time determining who is a psychopath; they can develop a personality disguise. However, psychopaths do display certain characteristics, and when many of these are found in one person, it is likely they are a psychopath.

Michael Torrice of the California Institute of Technology focuses on certain psychopathic behaviors in "Psychopaths Keep Their Eyes on the Prize." These behaviors are a result of psychopaths' "inability to empathize with others' emotions, such as the fear in a person's face, and impulsive, anti-social behavior, such as reckless risk taking or excessive aggression" (Torrice, para. 2). Torrice also explains the research done by neuroscientist Joshua Buckholtz of Vanderbilt University, which focused on observing the mesolimbic system, the part of the brain that motivates us to search for rewards (Torrice, para. 3). Researchers studied volunteers with normal personalities, gave them a small dose of amphetamine, and measured how much dopamine their nucleus accumbens released by watching it in a PET scanner (Torrice, para. 4). They found that the more dopamine released in anticipation of a reward, the more impulsivity increases (Torrice, para. 5). This may explain why psychopaths are impulsive, but we still need a more accurate way to identify the signs.

Common behaviors found in psychopaths due to impulsivity include substance abuse, gambling, and overall risk-taking in everyday life, but these are not easy to recognize because most people are highly motivated and many people have an addiction to drugs or gambling. What has been found is that while psychopaths have similar tendencies to our own, they have a deeper need to release their urges. Most of us learn how to control our urges because we think of other people, but psychopaths are not able to do that. There is no formula for measuring a person's behaviors to determine whether they are a psychopath, and that is the most difficult problem with identifying psychopaths. There is no "psychopath face" or features that are visible: it is all in the brain and how they think. The media creates a certain image of a psychopath or killer in our heads, but they can be anyone. The goal of the media is to put a face to a fear so there is something to rally against. If the media depicts psychopaths as gruesome and misshapen, then people will fear those whom they perceive as gruesome and misshapen. Yet realistically, we have no face to put to the term "psychopath."

The first widely known psychopath is Jack the Ripper, one of the most famous serial killers. He is how we picture a psychopath today: someone who approaches in the night, unseen, who gruesomely kills women. Psychopaths are methodical and calculating. Jack the Ripper killed prostitutes and developed his method by becoming even more gruesome with each murder. What is even more shocking is that no one knows his identity because he was never caught, as told by Biography.com in "Jack the Ripper" (Jack, para. 1). The only thing that people could figure out was that this killer understood human anatomy because of the unusual way he killed.
Another detail is that he sent letters to the police about his murders and told of murders that he would do in the future (Jack, para. 5). Jack the Ripper “mutilated and humiliated women, and his crimes seemed to portray an abhorrence for the entire female gender,” an emotion depicted in many gruesome, serial killer cases (Jack, para. 8). The only thing people knew about psychopaths back then was that they killed many innocent people. Psychopaths were seen as “monsters of the night” and that perception is still held today, though we are now realizing they also come out in the day. These killers had no faces, no profile that could distinguish them from society. Fast forward to the present and we see psychopaths such as Ted Bundy. Bundy had a system for how he treated his victims. He targeted women with a certain look and then proceeded to defile and kill them in similar, ritualized ways. This “look” developed because of a past love of his: Stephanie Brooks. Around the time she broke up with him, Bundy also found out that his supposed older sister was actually his mom, according to journalist Robin Brain (para. 2). However, he never took blame; he ended up blaming his victims for his actions after he had already blamed pornography and a hard childhood (Brain, para. 3). Bundy was deeply into violent porn and this formed his system for how he killed women. He trapped women by pretending to have an injury and then he attacked his victims and strangled them. According to Bundy, he was “known to travel with the dead bodies and even [keep] heads of his victims” (Brain, para. 5). Also, there could have been even more victims, but some got away and some victims were never found. Therefore we can never know how many people were actually killed by Bundy. Ted Bundy became famous through the media and is a prime example of a psychopath and what psychopaths are capable of. He also reveals that no one can create a profile of a psychopath. Psychopathic killers can be handsome, ugly, short, tall, male, female, old, or young. They may appear as normal, everyday people and assimilate into society by mimicking others. Their behavior evolves to resemble an everyday person, working to achieve goals. Regular people differ from psychopaths in what their goals are and how they achieve them. Today, we are no closer to determining profiles of psychopaths than when we were faced with Jack the Ripper. However, some patterns may enable us to identify psychopaths more accurately. Psychopaths are unable to connect with the world; some cannot hide the fact that they do not fit in and therefore they are ostracized from society. While those are the more obvious ones, some psychopaths blend in better. This is an antisocial disorder and therefore these people will have a difficult time when they are around others unless they develop the ability to pretend to be normal. Such people lead parasitic lives, lack the ability to feel guilt, lack empathy, are prone to boredom, commit diverse criminal behaviors, cannot accept responsibility for their actions, and so forth, according to the Robert Hare Psychopathy checklist by Dr. Dean Haycock. One cannot simply look at another person and determine if they are a psychopath. Even for experts, many psychopaths have mastered the arts of manipulation and mimicking others, making it especially difficult for outsiders to spot them. One theory as to why psychopaths exist is genetics. 
Journalist Philip Hunter wrote in "The Psycho Gene" that people prone to violence and mood swings may have an MAOA deficiency;
moreover, there is a version called MAOA-L that is specifically linked with aggressive behavior, which about 40% of the population carries (Hunter, para. 5). People with MAOA-L have smaller limbic systems, the structures associated with emotion, behavior, and long-term memory. These people also demonstrated a "hyper responsiveness of the amygdala during tasks such as copying facial expressions" (Hunter, para. 6). This is consistent with the idea that psychopaths mimic others in order to fit in, and apparently they can become highly adept at it. There is something different in psychopaths' brains, though not something specific only to psychopaths. This is just one factor that contributes to how psychopaths are created.
Psychopathy is defined as a mental illness, and that means there is also a psychological basis. Since psychopathy varies, it seems to be a combination of multiple personality or mental disorders. Not all psychopaths are violent, and "many psychopathic individuals often have no history of violent behavior or criminal convictions," as stated by the University of California, Irvine's Jennifer Skeem in "Psychopathy: A Misunderstood Personality Disorder" (Skeem, para. 5). However, this does not mean psychopaths should be ignored, because some do resort to violence. Whether they will do so is based on yet another compilation of factors. Some may have aggressive personality disorders or other conditions that make them prone to violent behavior. Freud suggested that psychopaths are simply releasing their inner sexual urges because of improper childhood development, and many psychopaths are indeed known for being strongly sexual as well as obsessed with their childhood. They may demonstrate this by how they choose their victims, who may represent a parent, a past significant other, or even themselves as little children.
A commonly found factor among psychopaths is a traumatic or unusual life event that leads to an obsession reflected in how they choose their victims. Male psychopaths who choose female victims often come from a broken home in which the mother has somehow not met the needs of the child. Often, an abusive life event causes them to act out. In "Early Traumatization and Psychopathy in Female and Male Juvenile Offenders," Maya Krischer and Kathrin Sevecke of the University of Cologne compared juvenile delinquents to students with "early emotional, physical, or sexual trauma and neglect" (para. 1). Their hypothesis of a relationship between trauma and psychopathy was confirmed for delinquent boys; for delinquent girls, however, other variables, such as problems within the family, affected their scores. Psychopathy appears to have its roots in childhood, with issues that fester until the person acts out, often in adulthood.
How would it be possible to simply choose one theory? Psychopaths do not exist because of a single factor; the origins of psychopathy are complex. In addition, every individual psychopath is different and has had unique experiences. Evidence suggests that while psychopaths may be born more prone to violence, this does not necessarily mean they are born psychopaths. Something must trigger their aggressiveness and cause them to retreat from the world. As shown by Krischer and Sevecke, childhood trauma and neglectful families are often the roots of psychopathy. Just as people born more prone to depression or aggression may never develop those conditions, those predisposed to psychopathy may not act out unless something causes them to snap.
Life often creates hardships and not everyone reacts in the same way; psychopaths are often those who have a more difficult time adjusting. Perhaps with the right (or wrong) combination of factors, psychopathy could be created in any person.
An interesting controversy is whether or not psychopaths actually feel emotion; perhaps they simply feel different emotions than other people. Psychopaths appear unable to comprehend deep emotions such as love. They lack psychological self-awareness and the ability to gain deeper insight, causing them to feel a deep loneliness and sense of abandonment. This leads a psychopath to seek pleasure through connecting with and feeling superior to his victim, as explained by Claudia Moscovici, founder of Psychopathyawareness.com (para. 2, 4). Psychopaths often keep mementos of their victims; this is the only way they can feel some connection with anyone. These mementos become their trophies and most important possessions. They kill more to gain more mementos, in order to feel more connected. The irony is that the more they kill, the more disconnected from society they become.
One of the greatest literary attempts to analyze psychopaths and killers is Fyodor Dostoevsky's Crime and Punishment. Dostoevsky focuses on the psychology behind crime and analyzes it through his main character, Rodion Raskolnikov. In his town, Raskolnikov stands apart from society, believing himself to be superior to everyone else and believing that others are simply tools for him to use (SparkNotes). However, some people try to help him, especially Sonya. These people try to help him see what he is doing, but he pushes them all away because he believes he does not need anyone else. Psychopaths are often caught up in their pride and believe that they need only themselves. When they do realize they need other people, they seek that connection through killing. Yet Raskolnikov's crime torments him and drives him mad. He even starts to feel guilt and realizes he can feel love for Sonya. This suggests that by killing, psychopaths torment what little good they may possess and make their situations worse. It is possible that psychopaths do have some conscience, but other things override it. Taking lives causes that conscience to slowly ebb away until there is nothing left except the psychopath within. People start out life whole and good, but when certain events occur, their souls may become twisted and they lose sight of the good.
The major question that remains is whether or not there is a cure or treatment. There is no magical pill that cures psychopathy, though there have been cases and studies in which researchers have seen improvement. Hans van Vinkeveen, in "Some Psychopaths Can Be Treated," describes the research of forensic psychology professor Dr. David Bernstein, including Bernstein's preliminary results from a study involving 100 different patients who were involuntarily committed for psychiatric treatment, specifically for schema therapy (Vinkeveen, para. 1). Schema therapy focuses on the patient's emotional state instead of the unchangeable psychopathic personality traits, and the therapist is perceived as a "parent" to the patient (Vinkeveen, para. 4): "The goal is to break through this emotional detachment and draw patients into a more vulnerable position … The next step is to teach patients how to discuss their emotions," setting this therapy apart from other treatments (Vinkeveen, para. 4, 5). This causes the patient to self-reflect and may help to heal childhood wounds. As Bernstein states, his results are preliminary, but they open up new options for researchers. We can define who psychopaths are, but we are still at a loss to understand why they exist.
Psychopaths may be created through both experiences and biology, but they are not destined to be serial killers from the moment they are born. However, no one is any closer to determining
who a psychopath is simply by focusing on a single cause. Individuals are much too complex. We cannot pinpoint an exact answer for why psychopaths exist, but it is likely a combination of many factors that triggers a break from the world. There is no easy way to identify who is a psychopath, since we have learned that we cannot determine this from outside appearances. Realistically, it is impossible right now to examine someone's brain and know everything about them. However, I believe that if we keep doing studies and observations, more patterns will emerge, and someday we may even learn how to properly identify and treat psychopaths.
WORKS CITED
Brain, Robin. "Life and Times of Ted Bundy, the Sadistic Psychopath." SelfGrowth.com. Web. 1 April 2014.
Haycock, Dean. "Hare Psychopathy Checklist." Encyclopedia.com (2003). Web. 1 April 2014.
Hunter, Philip. "The Psycho Gene." EMBO Reports. 2010. Web. 1 April 2014.
"Jack the Ripper." Biography.com. 2014. Web. 1 April 2014.
Krischer, Maya K., and Kathrin Sevecke. "Early Traumatization and Psychopathy in Female and Male Juvenile Offenders." International Journal of Law and Psychiatry 31.3 (2008). Web. 2 April 2014.
Moscovici, Claudia. "The Psychopath's Emotion: What Does He Feel?" Psychopathy Awareness. Web. 1 April 2014.
Skeem, Jennifer L., and Divya Menon. "Psychopathy: A Misunderstood Personality Disorder." Association for Psychological Science. 7 December 2011. Web. 1 April 2014.
SparkNotes Editors. "SparkNote on Crime and Punishment." SparkNotes.com. SparkNotes LLC. 2002. Web. 9 Apr. 2014.
Tankersley, Dharol. "Psychopathology, Neuroscience, and Moral Theory." Philosophy, Psychiatry, & Psychology 18.4 (2011): 394-357. Academic Search Premier. Web. 3 March 2014.
Torrice, Michael. "Psychopaths Keep Their Eyes on the Prize." Science Now (2010): 1. Academic Search Premier. Web. 30 Apr. 2014.
VanSetten, Henk. "1885: The Semenova Case." The History of Mental Health. Web. 1 April 2014.
Vinkeveen, Hans van. "Some Psychopaths Can Be Treated." Webmagazine. 23 October 2012. Web. 9 April 2014.
Abolishing Capital Punishment (By Raj Kolloru)
The death penalty, formally known as capital punishment, is a widely debated topic in the United States. It is defined as the act of executing a criminal by a government for especially egregious crimes, such as first-degree murder. Capital punishment has been utilized by many state governments for years, despite varying public opinion regarding its use. Many other countries either do not use capital punishment or have completely abolished it, including all countries in the European Union (Paredes 16). The majority of countries that still allow and actively use capital punishment to punish those who have committed certain crimes tend to be located in developing and less advanced areas of the world, such as India and Botswana. While arguments are made both for and against capital punishment, the social, moral, and legal implications of capital punishment absolutely warrant the abolition of this outdated and arguably barbaric act in the United States.
As a country, the United States prides itself on freedom and liberty. However, according to one statistic, "94 percent of all executions took place in China, Iran, Saudi Arabia and the US" ("White Paper" 86). While this statistic might not directly reflect on the United States' legal process, the comparison to authoritarian nation-states does not help the reputation of the United States as a free country. Many human rights organizations support the abolition of capital punishment on the grounds that it is an infringement on global human rights. Another issue blending into both the legality and morality of capital punishment is the fact that a government cannot pay reparations to an inmate who is executed but later exonerated due to insufficient evidence. If an inmate serving a life sentence with or without parole is later found to be innocent, at least the possibility of attempting to reverse the mistake through apologies and reparations still exists. The permanence of capital punishment is not to be understated.
The legal processes and implications of capital punishment alone raise the question of whether or not it is viable when alternative punishments exist. One major issue with the legal process is the appointment of public defenders to those who are on death row. Most defendants on death row were raised in lower socioeconomic circumstances, which not only makes them more susceptible to crime but also determines the type of legal defense they can obtain. Proper, well-trained capital defense lawyers tend to be more expensive than free public defenders. Public defenders are horribly overworked, juggling numerous cases at once. They also tend to receive hourly pay, which is usually much lower than the compensation of their salaried, private-firm counterparts. Usually in capital cases, the defendant is battling the massive legal power that the respective state employs, depending on where the crime was committed. The inexperienced, overworked public defender has very little chance of besting a governmental/state legal army. The notion that a human being's life can be decided based on the quality of his or her representation is absurd; wealthy defendants can afford lawyers who understand the complexity of capital cases, as opposed to less financially fortunate defendants who cannot (Paredes).
In addition, the location of the crime can also determine whether or not an inmate may receive capital punishment, as not every state in the United States has outlawed it. For example, if the exact same murder is committed in Hawaii and in Texas, this could result in different applications of justice and punishment, given that Hawaii does not have capital
punishment whereas Texas does. Another factor that can determine whether or not an inmate will receive capital punishment is the race of the victim. Schweizer points out that inmates who have committed crimes against white victims are more likely to receive the death penalty, especially if those inmates belong to a minority group (Schweizer 93).
Another significant problem in the process of capital punishment is the legal cost associated with capital cases. There is an extensive appeals process in which defendants can appeal the court's decision to place them on death row. However, this appeals process is extremely lengthy and can stretch over many years; meanwhile the defendant is in a state of not knowing whether or not his life will be taken by the government, and if so, when. The psychological stress caused by that lack of knowledge leads to arguments regarding the morality of capital punishment. The bombardment of appeals, reviews, post-trial hearings, expert testimony, and witnesses only adds to the point that capital punishment is economically unviable. While the popular notion is that capital punishment is cheaper than the alternative of life imprisonment without parole, the reality is that these factors make capital punishment significantly more expensive than alternative modes of punishment (Hunter). It is also troubling to grant juries, who are just as prone to bias and illogical thinking as any other human entity, one of the greatest powers a human can be given: the power to help determine whether or not another human being will live or perish. Legal scholar W.J. Roberts argues, "Individuals of unsympathetic or angry temperament, whether inhabitants, for example, of a district in which a shocking act of homicide has been committed, or those who are made acquainted with gruesome details through the newspapers or through the proceedings in a court of justice, are disposed to carry their resentment at the crime into their judgment" (Roberts 270).
While the morality arguments are often ambiguous or vague, the morality of capital punishment is worthy of discussion, as many researchers argue both perspectives. The question of life or death is one that many advocates and abolitionists attempt to answer through philosophy and religion. However, the issue of morality is much more tangible and relevant to ordinary citizens than abstract discussions. Some argue that capital punishment has become more humane, and therefore more moral, throughout the years due to the use of lethal injection. However, asking a physician to willfully break professional ethical codes in order to administer and facilitate the death of another human being can only undermine the credibility of that profession. In addition, how can one truly know whether lethal injection is less painful or more humane than other methods of execution without directly experiencing a lethal injection? There are two major issues with the morality of capital punishment, the first being the significantly higher number of African Americans on death row, which indicates a heavy racial bias in the system. One social work professor from Norfolk State University states, "Capital punishment places an unequal burden on African American families. While African Americans constitute only 13.6% of the U.S. population [Rastogi, Johnson, Hoeffel, & Drewery, 2011], they make up 42% of the total death row population [Snell, 2011]" (Schweizer 91).
This obvious racial bias is more than enough reason to abolish the unfair and prejudiced act of capital punishment. Not only is the government deciding whether or not certain inmates should live or die, but the artificial hierarchy that is created suggests that race determines the value of a human life (Schweizer 92-93). In addition to the effects of racial bias on those who are actually on
death row, the bias affects the families of those inmates as well. These effects will be discussed in the societal implications segment of this essay. The other issue with the morality of capital punishment is the execution of intellectually challenged inmates, due to the lack of a standardized definition of mental disability, coupled with a lack of consistency regarding the application of capital punishment to those who can be considered intellectually challenged. Allison Freedman gives an example of how capital punishment can be an unfair sentence for someone with a mental disability:
Marvin Wilson's story exemplifies the inconsistency in the application of the death penalty with regard to individuals with mental retardation. Marvin Wilson, a 54-year-old American male convicted of shooting and killing another man, had an IQ of 61 and even lower functioning levels. He struggled in school, dropped out after 10th grade, and had trouble performing the simplest tasks without assistance. A board-certified expert concluded that Marvin was mentally retarded, yet he was put to death in Texas on August 7, 2012. (2)
Supporting capital punishment is inherently difficult when there is even the possibility that an intellectually challenged individual could be executed by his or her own government. Yet that seemingly unrealistic possibility became a reality for Marvin Wilson.
The social implications of capital punishment are numerous. As Cyndy Hughes states, the legalized brutalization and publicizing of government-sanctioned, organized killing is bound to have certain societal effects upon both the citizens and those who are incarcerated (Hughes 155). The major goal of any criminal justice institution, such as a prison, would ideally be to facilitate re-education and rehabilitation in order to reintegrate those who have committed crimes back into society as productive, functioning members. However, when capital punishment is introduced to that delicate ideal, it is tainted by the threat of death and fear, which stagnates any societal progress that the inmate might otherwise have made. An often overlooked and understated societal consequence of capital punishment is the effect on the families, friends, and others who are associated with those who are on death row or who are executed. The mere placement of inmates on death row is enough to incite social problems, let alone their actual execution. Families are ostracized in their communities because of the increased scrutiny and media attention that accompany high-profile capital cases. More often than not, the parents are blamed for the fact that their child committed a crime deemed worthy of capital punishment by the State. The families also have to cope with the loss of a loved one and are forever shamed. Dealing with the fact that one's own government is seeking to execute a loved one is stressful enough. However, not only do friends and family suffer long after the execution, but those who are required by the State to assist in administering an execution, such as the physician, the prison guards, and the witnesses, most likely suffer some sort of psychological effects afterward, which can only add to the possible brutalization of capital punishment that was mentioned earlier. Sanctioning and possibly even glorifying killing by a state institution might entice those who are exposed to the brutality (either in person or through television) toward violence themselves.
The argument regarding the possible deterrent effect of capital punishment, and the accompanying research, is not very strong. The evidence regarding whether or not capital punishment deters violent crime is inconclusive at best, with some articles arguing that violent crime is deterred and others arguing that violent crime is unaffected (Goldberg 74). One researcher concluded that from 1976 to 1987, there was no evidence of either a deterrent or a brutalization effect (Bailey 631). Some research from the 1950s to the 1970s even argues that more executions result in more murders. It is very difficult to measure the deterrent effect of capital punishment, since very few executions are actually carried out in comparison to the number of people on death row (Shepherd 206). There are those who argue that the act of execution can deter future criminals from committing crimes if they understand that their life can be destroyed as a consequence. However, many crimes are committed in the "heat of passion," which means that they are not premeditated and are committed without much conscious thought. If anything, more thought goes into avoiding arrest and conviction in general, not simply avoiding the threat of capital punishment. Capital punishment cannot be an effective deterrent if it is not administered as equally and as fairly as it should be. There is even a possibility that the racial and socioeconomic bias in administering capital punishment may embolden those who are not of color and those who are wealthier to commit crimes if they are aware that they are unlikely to be placed on death row (although there is no scientific research to support or refute that notion). The number of outside factors affecting crime rates is staggering; it is nearly impossible to isolate capital punishment deterrence as a factor. While more contemporary research should be done in order to discover current trends in capital punishment deterrence, the deterrent effect of capital punishment in contemporary times is, overall, inconclusive.
Capital punishment in the United States has serious implications that are overall detrimental in several ways, namely in its social, legal, and moral aspects. The legal process is difficult to trust and rely upon. Its morality is questionable, and it is arguably one of the few aspects of capital punishment that many people either believe in or do not believe in; people rarely change that belief. Its social aspects are disheartening, to say the least. The lack of education and awareness of issues surrounding capital punishment in the United States is also a problem. As changes to the system cannot begin at the highest levels of government, they must begin with the individual and in our communities. While the issue of capital punishment has been around in the United States for as long as the country has been independent, change is still possible. For the foreseeable future, capital punishment is an issue that will be widely debated by opponents and advocates alike.
WORKS CITED
Bailey, William C. "Murder, Capital Punishment, and Television: Execution Publicity and Homicide Rates." American Sociological Review 55.5 (1990): 628-33. JSTOR. Web. 22 Mar. 2014.
Freedman, Allison. "Mental Retardation and the Death Penalty: The Need for an International Standard Defining Mental Retardation." Journal of International Human Rights 12.1 (2014): 1-21. Academic Search Premier. Web. 27 Mar. 2014.
Goldberg, Steven. "On Capital Punishment." Ethics 85.1 (1974): 67-74. JSTOR. The University of Chicago Press. Web. 20 Mar. 2014.
Hughes, Cyndy C., and Matthew Robinson. "Perceptions of Law Enforcement Officers on Capital Punishment in the United States." International Journal of Criminal Justice Sciences 8.2 (2013): 153-65. Academic Search Premier. Web. 27 Mar. 2014.
Hunter, Edward. "Experts Agree: Death Penalty Not a Deterrent to Violent Crime." University of Florida News. University of Florida News, 15 Jan. 1997. Web. 1 Mar. 2014.
Paredes, Anthony J. "Capital Punishment in the USA." Anthropology Today 9.1 (1993): 16. JSTOR. Web. 22 Mar. 2014.
Roberts, W. J. "The Abolition of Capital Punishment." International Journal of Ethics 15.3 (1905): 263-86. JSTOR. Web. 2 Mar. 2014.
Schweizer, Jennifer. "Racial Disparity in Capital Punishment and Its Impact on Family Members of Capital Defendants." Journal of Evidence-Based Social Work 10.2 (2013): 91-99. Academic Search Premier. Web. 27 Mar. 2014.
Shepherd, Joanna M. "Deterrence Versus Brutalization: Capital Punishment's Differing Impacts among States." Michigan Law Review 104.2 (2005): 203-56. Academic Search Premier. Web. 2 Mar. 2014.
"White Paper on Ethical Issues Concerning Capital Punishment." World Medical Journal 58.3 (2012): 82-87. Academic Search Premier. Web. 2 Mar. 2014.
The U.S. Criminal Justice System: A Complex Structure (By Wesley Chai)
Introduction
Eerie music plays… "Look for severe childhood disturbances associated with violence. Our Billy wasn't born a criminal, Clarice—he was made one through years of systematic abuse. Our Billy hates his own identity, you see. He always has, and he thinks that makes him a transsexual. But his pathology is a thousand times more savage and more terrifying."
It was yet another "Night of Horror Movies" on an ordinary Saturday night for my family. We were watching The Silence of the Lambs at home. In the movie, Clarice Starling was pulled from her training at the FBI Academy at Quantico, Virginia, by Jack Crawford of the Bureau's Behavioral Science Unit. Jack tasked Clarice with interviewing Hannibal Lecter, a former psychiatrist and incarcerated cannibalistic serial killer, believing Lecter's insight might be useful in the pursuit of a serial killer nicknamed "Buffalo Bill," who skinned his female victims' corpses. Despite some of the creepy scenes in the movie, I was fascinated by and curious about the way the FBI trainee followed clues leading to the arrest of the serial killer in the end. This movie triggered my interest in everything related to combating heinous crimes, and joining U.S. law enforcement has since become my dream. Growing up watching a series of other movies, such as The Fugitive and US Marshals, I held the perception that our U.S. criminal justice system always carries out investigations and incarcerates the right suspects while defending innocent people who did not commit the crimes. I grew up assuming that the U.S. criminal justice system is the best in the world, and I believe many people support this view. However, this impression changed after my research for this paper. I found out that discrimination against racial minorities, as well as the power of higher social classes, makes our supposedly just system unjust. In this paper, I would like to demonstrate the way in which my understanding of and position on the U.S. criminal justice system has evolved. While I am now more critical of it because I have a better understanding of its complexity, I nonetheless still believe in its potential to be a very effective system. We shall first take a look at some background information.
U.S. Criminal Justice System
The U.S. criminal justice system is led by the U.S. Department of Justice, currently headed by Attorney General Eric Holder (US Department of Justice, 2014). Under the department there are numerous federal agencies, including the Federal Bureau of Investigation (FBI), Drug Enforcement Administration (DEA), Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), U.S. Marshals Service, and many more. Below the federal agencies in the hierarchy are various police departments in cities across all 50 states. Kenneth J. Peak, former Chairman of the Police Section of the Academy of Criminal Justice Sciences, stated in his textbook Justice Administration that the chain of command goes down from the Chief to Captain, Lieutenant, Sergeant, and the lowest rankings (Peak, 2010, p. 29). This implies that racial discrimination in the
U.S. criminal justice system can potentially occur at any of these hierarchy levels, broadening the whole issue. In addition, local police departments make up more than two-thirds of the 18,000 state and local law enforcement agencies (Bureau of Justice, 2014). A local police department is a general purpose law enforcement agency, other than a sheriff's office, that is operated by a unit of local government such as a town, city, township, or county. In general, as of 2011, agencies employing full-time law enforcement personnel served a U.S. population of 293,058,940 (FBI, 2011). This extensive coverage provides much security for the public.
The basic route through the U.S. criminal justice system begins with the police. First, a crime is committed and reported to the police. The criminal is then identified and arrested after investigation. Next, after charges are filed, the case moves on to the courts, where prosecution and pretrial services (classification into felonies or misdemeanors through a preliminary hearing, and appearance before the district attorney and grand jury), adjudication (arraignment after gathering information), and sentencing and sanctions (trial and guilty plea) are involved. Lastly, leaving the system involves probation, prison, and parole. I find this part of the system particularly important and at the same time ironic: courts are where criminals are tried, but they are also where white-collar criminals can potentially escape from their crimes.
Philip L. Reichel, Professor of Criminal Justice at the University of Northern Colorado, mentioned in Comparative Criminal Justice Systems that American criminal law is generally classified into substantive and procedural law. The four general characteristics of criminal law are as follows: politicality, which means only violations of rules made by government authority can be crimes; specificity, which means people know in advance what particular behavior they must perform or refrain from; uniformity, or the same criminal liability for all regardless of social background; and penal sanction, or punishment administered by government (Reichel, 2013, p. 59). There are also seven major principles of criminal law: mens rea, actus reus, concurrence, harm, causation, punishment, and legality (Reichel, 2013, p. 60). To be considered a crime, an action must be legally proscribed (legality), consist of human conduct (actus reus), be causative (causation) of a given harm (harm), which coincides (concurrence) with a blameworthy frame of mind (mens rea), and be subject to punishment (punishment) (Reichel, 2013, p. 60).
Lately, debates have arisen regarding whether the U.S. criminal justice system follows the consensus or the conflict model of law. This is discussed by Frank E. Hagan, Director of the James V. Kinnane Graduate Program in Administration of Justice, in Crime Types and Criminals. The consensus model envisions criminal law as arising from agreement among the members of a society as to what constitutes wrongdoing (Hagan, 2010, p. 11). On the other hand, the conflict model sees criminal law as originating in conflicts among the interests of different groups (Hagan, 2010, p. 11). Whether or not the U.S. criminal justice system falls under either model of law will be explored throughout the rest of this paper. To uncover inequality, I will first introduce what our government has implemented to supposedly ensure equality.
You will notice later in this paper that despite the implementation of laws intended to ensure equality, it is doubtful that there is true equality in our criminal justice system.
Equality in the Law: The Bill of Rights
It is important for the laws of justice to be impartial and equally applied to everyone: those who find themselves the victims of a crime and are seeking justice, and those who are falsely accused of a crime they did not commit. I have come to assume that equality in law for everyone, of any color and social class, is important. This is why I researched the privileges, or rights, put forth by our system for all U.S. citizens, regardless of race and social class. Of the many rights specified in the U.S. Constitution, the rights stemming from five amendments are of special importance in criminal procedure. Four of these—the Fourth, Fifth, Sixth, and Eighth Amendments—can be found in the Bill of Rights, while the last one will be discussed in the next section, "Racial Equality in the Law: The Fourteenth Amendment."
The Bill of Rights is the collective name for the first ten amendments to the U.S. Constitution. It was proposed on September 25, 1789 and ratified on December 15, 1791. Its purpose is to set limits on government actions in regard to personal liberties. Proposed to assuage the fears of Anti-Federalists who had opposed Constitutional ratification, these amendments guarantee a number of personal freedoms, limit the government's power in judicial and other proceedings, and reserve some powers to the states and the public. The Fourth Amendment protects citizens from unreasonable searches and seizures. The Fifth Amendment provides protection from double jeopardy and self-incrimination, and for grand jury indictment in serious crimes. The Sixth Amendment provides for speedy and public trials, an impartial jury, confrontation, compulsory process, and assistance of counsel. The Eighth Amendment protects citizens from cruel and unusual punishment. These four Amendments imply the same treatment and rights for all in our criminal justice system.
The Bill of Rights is an extremely important document, as it is the earliest to set forth constitutional rights and freedoms equally for all U.S. citizens (originally white Americans only). The promise of civil rights is the promise of inclusion, yet the vast disparity in incarceration rates between blacks, Latinos, and whites stands as an ugly reminder of the nation's long history of race-based exclusionary practices. (Relevant statistics will be shown in the section "Criminal Trends on Racial Ethnicity: Minorities versus Whites.") Does the Bill of Rights really ensure equal rights among all? This made me question the actual existence of equality and justice in our system, and led me to the information presented in the next section.
Racial Equality in the Law: The Fourteenth Amendment
So, how does our government enhance equality for all? Beyond the Bill of Rights, the Fourteenth Amendment is of special relevance in criminal procedure. While originally the amendments in the Bill of Rights applied only to the federal government, most of their provisions have since been extended to the states through the due process clause of the Fourteenth Amendment, a process known as incorporation. Without this amendment, the Bill of Rights might only have applied to white Americans and the federal government. Adopted on July 9, 1868, the Fourteenth Amendment was one of the Reconstruction Amendments. It addresses citizenship rights and equal protection under the laws, and was proposed in response to issues related to former slaves following the American Civil War.
It was bitterly contested, particularly by Southern states, which were forced to ratify it in order to regain representation in Congress. This Amendment is particularly important because it gives
all U.S. citizens, now including blacks, equal rights. Therefore, the Bill of Rights has become applicable to people of all races and ethnicities. This supposedly enables our criminal justice system to be just by ensuring equality among races. However, I found out that these rights can be used against victims or in favor of rich criminals in courts. (This will be discussed in the sections "Case Examples: The Corporation" and "Case Example: Ethan Couch.")
Using the above background knowledge as a foundation, I researched U.S. criminal statistics related to racial ethnicities, focusing especially on incarceration rates. These statistics shocked me, as the numbers imply a strong possibility of discrimination being present in our criminal justice system.
Criminal Trends on Racial Ethnicity: Minorities versus Whites
Currently, more than 2.3 million people in America are in jail or prison, 60 percent of whom are African American and Latino (Lawrence, 2011, p. v). Of all of the statistics portraying racial inequity in our country, the over-representation of these races among the incarcerated, in proportion to their representation in the actual population of the U.S., is the most alarming to me. This indicates the failure of so many of our society's institutions, predicts dire consequences for millions of children and families of color who are already at a socioeconomic disadvantage, and challenges the very definition of our democracy.
Much literature has presented a wide range of data in the field of criminal trends on racial ethnicity. Although these statistics and data may not completely coincide with each other in terms of specific numbers, they all imply the same idea: racial minorities have higher crime rates, or higher chances of being considered criminals, in comparison to whites. According to Hagan's Crime Types and Criminals:
At the turn of the 21st century, roughly 27 percent of those arrested in the U.S. were black, while blacks made up only about 12 percent of the population. They represent over one half of the nation's prison population. 1 out of every 3 black men in their 20s is either in prison, in jail, on probation, or on parole. While 23 percent of black men in their 20s were under supervision, only 10 percent of Latinos and about 6 percent of whites were being similarly sanctioned. In Washington, D.C., estimates have been made that 70 percent of all black men have been arrested and served time in jail before the age of 35. (Hagan, 2010, p. 37-38)
This simply shows that although blacks make up a small proportion of our population, their incarceration rate is high compared to that of white Americans. It also shows that the Latino incarceration rate is higher than that of whites. This seems both unusual and suspicious. Keith O. Lawrence, who holds a Ph.D. in International Politics from the City University of New York Graduate Center, stated in his online e-book Race, Crime, and Punishment:
Black-white differences in incarceration rates are most dramatic: an estimated 4,777 black males were locked up for every 100,000 black males in the free population, compared to about 727 per 100,000 white males. A stunning 11.7 percent of black men
in their late 20s were incarcerated. Black men of all ages are 5 to 7 times more likely to be incarcerated than white males of the same age. (Lawrence, 2011, p. 4)
Here, another source supports the statistics provided in the previous source, Crime Types and Criminals, in that the incarceration rates of blacks and Latinos are still reported as higher than those of whites. According to Essentials of Sociology, written by University of Nebraska Professors David B. Brinkerhoff and Lynn K. White, University of New Mexico Professor Suzanne T. Ortega, and Arizona State University Professor Rose Weitz:
African Americans make up 34 percent of those arrested for rape, 34 percent of those arrested for assault, and 50 percent of those arrested for murder. Hispanics represent about 28 percent of those imprisoned for violent crimes. (Brinkerhoff, White, Ortega, & Weitz, 2011, p. 143)
This is yet another source supporting the point that the incarceration rates of African and Hispanic Americans are disproportionately high. Upon reading these statistics, I began to suspect the presence of racial inequality in our criminal justice system. Overall, the above statistics from the three different sources have shown huge numbers of criminals and arrests among African and Hispanic Americans. Noting that these two racial minorities have such small populations compared to white Americans, I cannot help suspecting that this reveals a problem in our criminal justice system.
Furthermore, Arab Americans are not an exception. The stereotypes of Muslims and Arab Americans as "terrorists" affect the U.S. enforcement of law today. This can be seen from The Treatment of Arab Americans Today website:
In Detroit, attorney David Steingold represented an Arab American client accused of organizing a credit card fraud ring. It was the stated opinion of the FBI that every single Arab in Dearborn is either a member of Hezbollah or a sympathizer. More significant than the prejudices of individual law enforcement officers, however, are the systematic plans being considered to give governmental agencies even more sweeping powers in the "war on terrorism." (The Treatment of Arab Americans Today, 2011)
All the above have shown that racial minorities have high arrest and crime rates. Despite the confinement of black and brown crime largely to those very communities, the darker-skinned collectively have been stigmatized as dangers to society, while white male criminality remains individualized. What are the reasons behind this differentiation?
Explanation of Criminal Trends on Racial Ethnicity
A biological theory of criminology does exist in the U.S. criminal justice system. Formulated by the Italian criminologist Cesare Lombroso, this idea states that criminals can be identified by certain physical stigma and outward appearances that distinguish them from non-criminals (Hagan, 2010, p. 73). In this case, the physical stigma is race: being a person of color, of a race other than white.
Race is a powerful and revealing lens through which to reconsider the relationship between mass incarceration and American democracy. Overwhelming racial disproportionalities exist in every facet of the criminal justice system. Americans of color, particularly poor blacks and Latinos, are disproportionately entangled, monitored, and confined by the system, while whites disproportionately administer its enforcement and punishment machinery. California State University Associate Professor John A. Berteaux mentioned in his article "What Are the Limits of Liberal Democratic Ideals in Relation to Overcoming Global Inequality and Injustice?", published in Human Rights Review, that race has often compelled America to confront inconsistencies between its liberal democratic ideals and the patterns of social outcomes that actually occur (Berteaux, 2005). Furthermore, Eurocentric bias may sometimes persist in criminology, partly due to the lack of criminologists of races other than white. The field of criminology is dominated by views reflecting those of European (white) descent, and such a bias may tend not to fully appreciate the interactions among racism, inequality, and the experiences of African Americans and other minorities in the criminal justice system (Hagan, 2010, p. 37). Undoubtedly, our system is trying to employ more racial minorities, but this does not change the fact that a higher percentage of white Americans are currently employed in the system.
Supposedly, the Fourteenth Amendment, discussed in the previous section, should have produced racial equality among all U.S. citizens. However, racial stratification continues because the racism fueling it has been a perpetual building block of America's social, economic, and political architecture, rather than a temporary individual trait. This dates back to the use of black slaves before the Civil War. American society's institutions, values, and social arrangements have been forged in a crucible of racial hierarchy, with structural racism, in which public policies, institutional practices, cultural representations, and other norms work in mutually reinforcing ways to perpetuate racial group inequity. Structural racism here refers to the dimensions of U.S. history and culture that have allowed privileges associated with "whiteness" and disadvantages associated with "color" to endure and adapt over time. Additionally, the conceptions of serious crime and fitting punishment tend to reflect, in substantial measure, negative stereotypes linked to color. The strong political consensus against parole and for mandatory sentencing, capital punishment, prison construction, and other tough-on-crime measures is powerfully assisted by Americans' fear of victimization by nonwhite "superpredators" (Lawrence, 2011, p. 6). Sometimes, people of racial minorities are also denied by the authorities the right to bail, to have a lawyer in court, or to remain silent. In such cases, the Bill of Rights is violated. The authorities hence actually break the law in the criminal procedures.
Apart from the points above, black and brown males are disproportionately exposed and confined to criminogenic environments (Lawrence, 2011, p. 7). Criminogenic refers to causing or being likely to cause criminal behavior. Structural racism sorts whites and nonwhites along every important societal dimension, not least of which are class and space (Lawrence, 2011, p. 7).
No other groups have been as systematically denied educational and wealth-building opportunities while being told that these are the keys to upward mobility and social recognition.
Why are there more criminal arrests or higher crime rates among racial minorities? Differential association theory of criminology, formulated by Edwin Sutherland, states that deviance is learned like other social behaviors (Hagan, 2010, p. 89-90). This means that one becomes predisposed to criminality due to the criminogenic environment around him or her. Strain theory of criminology, originally formulated by Robert Merton and later revised by Robert Agnew, explains that the dislocation between the goals of society and the means to achieve them causes deviance (Hagan, 2010, p. 86). This means that criminals believe they have no choice but to use illegitimate methods as the means of achieving legitimate goals.
Regarding Arab Americans, the terrorist attacks of September 11, 2001 instilled in U.S. government authorities, which are mostly made up of whites, the impression that any Arab could be a terrorist. The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act) was passed by Congress and signed into law by President George W. Bush in the wake of the terrorist attacks of September 11, 2001 by Osama Bin Laden. In this act, three particular sections are used heavily against "potential terrorists," in this case Arabs. Section 206 allows "roving" wiretaps for spy and antiterrorism investigations under the Foreign Intelligence Surveillance Act (FISA). Section 215 authorizes federal officials engaged in foreign intelligence and international terrorism investigations to obtain business records. Section 213 expands the government's ability to search private property without notice to the owner. These three sections were used more specifically against (but not limited to) Arab Americans, as federal agencies deem them potential terrorists or criminals. Indeed, the USA PATRIOT Act can be considered a cautionary movement against our enemies. An example of a similar issue is the authorization of internment under Executive Order 9066 by President Franklin D. Roosevelt, placing people of Japanese heritage in War Relocation Camps during World War II. However, I think that this precedent does not change the fact that the USA PATRIOT Act violates our constitutional rights, particularly the Fourth Amendment, as well as our human rights and our civil rights. This is the injustice still existing in our criminal justice system.
All of these factors have changed my original perspective of a perfect U.S. criminal justice system. Now I know that there is certain discrimination towards racial minorities in the system. Through my research, I have discovered that part of the reason why there are not more policies to enforce and ensure equality among all races is our federal system of government.
Limits of U.S. Federalism on Equality
Lisa L. Miller, Associate Professor of Political Science at Rutgers University, discusses U.S. federalism in her article "The Invisible Black Victim," from Law & Society Review. Several features of the federal structure put into place at the Constitutional Convention have had enduring effects on the political and legal struggle for racial equality. The division of power between the states and the new national government left virtually intact all of the states' traditional police powers used to address a wide range of citizen concerns, including the health, safety, and morals of the states' citizens. The strength of state governments under the U.S.
Constitution provided pro-slavery advocates with powerful legal and political claims to maintain their "peculiar institution" (Miller, 2010). Surprisingly, Congressional power has a relatively anemic nature. Limited by design, Congress has a narrower jurisdictional breadth than state governments with respect to addressing major social policy issues (Miller, 2010). Absent a clear, decisive national consensus, congressional authority is further inhibited by the fact that it lacks a constitutional mandate to legislate on broad social welfare issues regarding inequalities. While the federal courts have given Congress a wide berth in its exercise of the Commerce Clause powers since the New Deal, the Supreme Court does occasionally limit the scope of Congress's power based on its reading of the Commerce Clause in Article I, Section 8, and the Tenth Amendment, and continues to hear cases challenging congressional authority to address major social policy domains. The Commerce Clause is the provision of the Constitution that gives Congress exclusive power over trade activities among the states, and with foreign countries and Indian tribes. The New Deal refers to a set of federal programs introduced by President Franklin D. Roosevelt to transform America's economy, which had been shattered by the Wall Street Crash of 1929. Furthermore, there are multiple legal and legislative venues for participation (Miller, 2010). This porousness does provide citizens with multiple locations for participation, but multiple centers of power also make it difficult for the poor and those with few resources to sustain pressure across a political landscape that is navigable largely through sustained human, social, and fiscal capital. Multiple venues can reinforce and exacerbate classic collective action problems, which disproportionately disadvantage the poor and racial minorities (Miller, 2010).
I strongly believe that our criminal justice system is built on the basis of justice and equality for all, as mentioned in the previous sections "Equality in the Law: The Bill of Rights" and "Racial Equality in the Law: The Fourteenth Amendment." However, I have learned otherwise. The following sections, "Case Example: Glenn Ford" and "Case Example: Darryl Hunt," consider two cases I came across during my research for this paper. They shall serve as examples of the unfair treatment in our criminal justice system, before I move on to the topic of inequalities in social classes.
Case Example: Glenn Ford
According to ABC News, Glenn Ford, 64, had been on death row since August 1988 in connection with the death of 56-year-old Isadore Rozeman, a Shreveport jeweler and watchmaker for whom he had done occasional yard work. He had always denied killing Rozeman. On March 10, 2014, State District Judge Ramona Emanuel took the step of voiding Ford's conviction and sentence based on new information that corroborated his claim that he was not present or involved in Rozeman's death. Ford had been tried and convicted of first-degree murder in 1984 and sentenced to death in Louisiana. On March 11, 2014, he walked out of the maximum security prison at Angola and told the broadcast outlet that he does harbor some resentment for being wrongly jailed.
Ford's trial had been profoundly compromised by inexperienced counsel and by the unconstitutional suppression of evidence, including information from an informant. A suppressed police report related to the time of the crime and evidence involving the murder weapon were also cited. Since Ford is an African American, there might well have been some sort of racial discrimination against him during the procedures of arrest and prosecution. I first look to the state, which is Louisiana. Checking Louisiana's state history, we know that it was part of the Confederate States of America dating back to the Civil War. Since this southern state once stood against racial equality, I would infer that it is possible that racist views are still being passed down from generation to generation. If so, those racist views might just be the reason behind Ford's wrongful conviction.
Case Example: Darryl Hunt
In the early morning hours of August 10, 1984, Deborah Sykes, a 25-year-old copy editor at a local newspaper, was raped and murdered on the outskirts of Winston-Salem, North Carolina. Sykes had been on her way to work; she was stabbed 16 times. She was found naked from the waist down, and tests revealed that there was semen on her body, indicating that she had been raped. A local man came forward and told police he had seen Sykes with an African-American man on the morning of the crime. When that man described a person who matched Darryl Hunt's description, police arranged a photo lineup. The witness tentatively identified Hunt as the man he had seen with Sykes.
Hunt was tried for first-degree murder in the Sykes case. Eyewitnesses brought forth by the prosecution testified that they had seen Hunt with the victim before the crime or that they had seen Hunt enter a local hotel and leave bloody towels behind in the restroom. Hunt testified on his own behalf that he did not know the victim and had nothing to do with the crime. The jury deliberated for three days. They convicted Hunt and he was sentenced to life in prison. On appeal, the North Carolina Supreme Court overturned the conviction because prosecutors had introduced Hunt's girlfriend's statements after she had recanted them. Hunt was released on bond in 1989. With the trial pending, prosecutors offered Hunt a plea bargain: he could be freed and sentenced to time already served (5 years) in exchange for a guilty plea. Hunt rejected the offer and faced a second trial. Hunt was retried in rural Catawba County before an all-white jury. The main eyewitnesses from the first trial testified again, and two jailhouse snitches testified that Hunt had admitted guilt to them while in prison. The jury deliberated for less than two hours and convicted Hunt of first-degree murder. Again, he was sentenced to life in prison. He had been free for 11 months.
Hunt's original trial attorney, Mark Rabil, worked on the case for nearly 20 years. After the second conviction, in which Rabil was part of a larger defense team, Rabil and another attorney, Ben Dowling-Sendor, filed for DNA testing in the case. In October 1994, the DNA results came back. Hunt's DNA did not match the semen found on the victim's body at the crime scene.
Despite the results, however, Hunt's appeals were rejected. Judges found that the new evidence did not prove innocence. Repeated appeals met the same fate. Finally, in 2004, 19 years after Hunt was convicted and 10 years after he was first excluded by DNA, the DNA profile from the crime scene was run through the state database at the request of Hunt's attorneys. The results matched a man incarcerated for another murder. Hunt was exonerated and freed in 2005. Willard E. Brown, the man whose DNA matched the profile at the crime scene, has since pleaded guilty to the murder of Deborah Sykes.
This is yet another example of how racism persists in the U.S. criminal justice system. Although there was no physical evidence linking Hunt to the rape for which he was tried, the all-white jury went ahead and convicted him anyway. Even when the DNA results showed that Hunt was not involved, he still could not escape the wrongful conviction. Looking at this case, I really doubt the promised equality stated in our laws.
Races versus Social Classes
Previously I stated that blacks are segregated into criminogenic environments. In addition to race, this is also partly due to differences in social classes. In this section, I am demonstrating how race and social class are closely related. Comparing African Americans and whites, the median incomes of the two races are $34,192 and $53,256, respectively (Brinkerhoff, White, Ortega, & Weitz, 2011). The median wealth for the two races is $6,166 and $67,000, respectively (Brinkerhoff, White, Ortega, & Weitz, 2011). These statistics put more African Americans, on average, into lower socioeconomic classes, while more whites are, on average, in the higher classes. With the existence of white privilege—benefits and opportunities whites receive simply because they are white—in some places, racial minorities, and especially African Americans, are disadvantaged in comparison with white Americans. University of the South Emeritus Professor of Political Economy Ansel M. Sharp, Florida Atlantic University Professor Charles A. Register, and Mississippi State University Professor of Economics Paul E. Grimes mentioned in their textbook Economics of Social Issues that not only are more African Americans in lower social classes than white Americans on average, but African and Hispanic Americans are also more likely to live in poverty than white Americans. Across all family types in 2009, the percentage of white families in poverty was 9.3 percent, while that of black and Hispanic families was 22.7 percent and 22.3 percent, respectively (Sharp, Register, & Grimes, 2013, p. 181). This shows that the poverty rates of black and Hispanic families are each approximately 2.5 times the rate for white families; Asian Americans seem to fall in the middle, with a poverty rate of 12.1 percent in 2010 (National Poverty Center, 2011).
Psychological theory of criminology, formulated by Sigmund Freud, states that crime is due to the human personality, made up of the id (a set of uncoordinated instinctual trends), the ego (the organized and realistic part mediating between the desires of the id and the super-ego), and the super-ego (the reflection of the internalization of cultural rules, mainly taught by parents applying their guidance and influence) (Hagan, 2010, p. 79-80). This theory upholds the idea that childhood experiences are crucial and can affect children in the future. For instance, children from lower social classes without proper socialization and education may be more prone to delinquency and criminality.
Again, the biological theory of criminology mentioned previously comes into play. With the impression that these racial minorities have higher chances of criminality due to insufficient education and improper living environments, law enforcement may look down on them and judge them as criminals even before any crime is committed. In this part of my research, I found that race and social class are highly related to each other. At this point, I strongly feel that the higher social status generally possessed by white Americans may be a reason behind their lower incarceration rates. This is why I started studying U.S. criminal statistics related to social classes.

Criminal Trends and Social Classes: Crime in the Streets vs. Crime in the Suites

Although trends of social classes are not included in the Uniform Crime Report, a vast majority of those arrested or labeled as criminals are from lower social classes. According to traditional explanations, the volume of crime commission decreases as social class becomes higher. However, official statistics definitely undercount typical crimes of upper socioeconomic groups (Hagan, 2010, p. 36). If embezzlement, price fixing, and stock manipulations were included, we would see a very different social-class distribution of criminals (Hagan, 2010, p. 36). According to the bimodal theory, when calculated and recorded correctly, the criminality graph should be a curve with two modes, one each at the lowest and highest social classes (Hagan, 2010, p. 36). Overall, the effect of social classes on crime rates is complex. Braithwaite's (1985) review of more than 100 studies leads to the conclusion that lower-class people commit more direct, interpersonal types of crimes normally handled by the police than do people from the middle class (Brinkerhoff, White, Ortega, & Weitz, 2011, p. 143). This partially explains the higher rate of arrests of African and Hispanic Americans as compared to white Americans. Middle-class people, on the other hand, commit more crimes involving the use of power, particularly in the context of their occupational roles: fraud, embezzlement, price fixing, and other white collar crimes (Brinkerhoff, White, Ortega, & Weitz, 2011, p. 143). The implication that people of lower social classes have higher crime rates coincides with my research on the crime rates of black and Hispanic Americans.

Explanation of Criminal Trends and Social Classes

A possible explanation of why crime rates of lower social classes are higher is deterrence theory, which states that criminality is a choice based on cost and benefit assessments (Brinkerhoff, White, Ortega, & Weitz, 2011, p. 134). Deterrence theorists often argue that lower classes commit more crimes because they receive fewer rewards from conventional institutions such as school and the labor market. Economic theory and conflict theory of criminology, both influenced by Karl Marx, are also possible explanations. Economic theory states that poverty is the root of social problems and capitalism creates economic inequality (Hagan, 2010, p. 71). This theory upholds the view that criminal law is protecting the interests of the powerful and the wealthy. Conflict theory here simply refers to the conflict model of law mentioned in the section "U.S. Criminal Justice System." It upholds the ideas that the law favors the interests of certain groups of people and that white collar crimes are not treated in the same fashion as street crimes. America, which adopts
democracy and capitalism, is definitely in conflict with Karl Marx's ideology supporting communism. However, I still find his theories applicable, as I have discovered through my research that our laws do favor white Americans more. In addition, strain theory and differential association theory of criminology, mentioned in the section "Explanation of Criminal Trends on Racial Ethnicity," are applicable here, too. Lower socioeconomic classes tend to have higher chances of exposure to criminality, giving them higher chances of being influenced to commit crime in the future. Those of lower status can also be driven to criminality to meet their basic needs because that is the only perceived way to achieve the desired goals. Studying growing class inequality has made me aware of the way class inequality and injustice are rendered invisible. The U.S. has the widest gap between the rich and the poor of any industrialized nation. The hourly compensation in the U.S. for blue and white collar workers is not keeping pace with their wages in comparable industrialized nations around the world (Berteaux, 2005). While the incomes of the middle class and poor are falling, the incomes of the richest one percent in America have risen at three times the inflation rate (Berteaux, 2005). However, the general public, including me originally, still believes, despite being aware of these disparities, that it lives in a society in which class distinctions are virtually nonexistent. Apparently, the media is also controlled by the moneyed class, whose members have an interest in maintaining a certain image of themselves (Berteaux, 2005). This can be used to disguise inequality and injustice through reporting that the U.S. is made up of mostly middle-class individuals with the same ideals, desires, and interests. As European ethnic groups moved to the suburbs and increasing numbers of racial minorities, who are of lower social class on average, moved into the cities, differences among European ethnics became less of a dividing line. The federal policy that made it possible for many European Americans, who are of higher social classes on average, to move into the suburbs created a homogeneous "white" identity (Berteaux, 2005). The New Deal politics of the 1930s created a white citizen subject with a "possessive investment in whiteness" (Berteaux, 2005). These policies and practices gave whites a long-term economic investment in "whiteness," encouraging whites to expend time and energy on the maintenance of "whiteness." Finally, another interesting factor with regard to higher crime and arrest rates among lower social classes relates to prisons. This was mentioned by Theodore Hamm, Director of the Journalism and New Media Studies program at St. Joseph's College, in his article "Breaking Prison" from Nation. It appears that building more prisons has also become a way to remove people from the streets in response to a growing population, rather than a measure aimed specifically at criminals or violators of the law (Hamm, 1999). Studying the U.S. population, I found out that it increased dramatically, from 231,106,727 people in the 1980s to 318,379,703 in 2010 (InfoPlease, 2011). Although it may seem as though prisons are absorbing people off the street, potentially innocent people and especially those of lower social classes or in poverty, in order to blunt the effect of the population boom, I believe that this is certainly not true. Think of it in this way: even if the government really needed to decrease the population, why target only those of lower social classes? Does this not
already suggest stereotypes against lower social classes? I believe that if an excessively great number of people of lower social classes or in poverty are sentenced to prison, it must be due either to discrimination or to a sudden increase in crime rates among those people. In this case, there is a high possibility of it being discrimination. There has been much discussion among different sources surrounding this topic. At this point in my research, I believe that race and social class are closely related in the field of crime. Discrimination towards lower social classes is sometimes also due to discrimination against racial minorities, who make up a higher percentage of the population in lower social classes. However, I also noticed one dilemma which requires further discussion: Do those in higher social classes, which are mostly made up of whites, not commit any crime?

U.S. Government Combats White Collar Criminals

Not mentioned in the previous sections is how our government has been unable to stop and convict white collar criminals. White collar crime refers to crimes committed by persons of respectability or high social status. In response to these crimes, our federal government put the Sherman Antitrust Act into action. Introduced in 1890, this is the first of many regulatory laws passed to control corporate behavior, and it forbids restraints of trade and the formation of monopolies. Policing of corporate violations is conducted by over 50 federal regulatory agencies. Examples include the Environmental Protection Agency and the Food and Drug Administration. These regulatory agencies issue warnings and penalties to corporations for violating the law. However, there are many criticisms of these federal regulatory agencies and their efforts against corporate crime. According to Crime Types and Criminals: Lacking sufficient investigative staff, these agencies often rely on records of the very corporations they are regulating to reveal wrongdoing. Criminal fines authorized by law are insignificant compared with economic costs of corporate crime and become minor nuisance but not strong deterrent. Other criminal penalties like imprisonment are rarely used and, when they are, they tend to reflect dual system of justice where these offenders are treated better than traditional offenders. The enforcement divisions of many regulatory agencies have been critically understaffed and cut back to inoperable levels. The top echelons of agency commissions are often filled with leaders from the very corporations or industries to be regulated, creating potential conflicts of interest. The relationship between regulators and the regulated are often too compatible, with some regulator more interested in representing the interests of the corporations they are supposed to be regulating than in guaranteeing public well-being. (Hagan, 2010, p. 211) Despite being created to regulate and prevent corporate crimes, the federal agencies are ineffective in doing so. I have come across examples of corruption in federal agencies that reflect the criticisms listed above. Examples can be seen in documentaries like Vanishing of the Bees and Gasland. I found another interesting source supporting the idea of corruption here. Bertram Gross (1980), an American social scientist, federal bureaucrat, and Professor of Political Science at Hunter College, talks in his book Friendly Fascism about what he calls "dirty secrets." The little secret is that those who commit the crimes worrying citizens the most—violent street crimes—are, for the
most part, products of poverty, unemployment, broken homes, rotten education, drug addiction, alcoholism, and other social and economic ills which the police can do little if anything about. The big secret is that law enforcement officials, judges, prosecutors, and investigators are often soft on corporate crime. Although the U.S. criminal justice system has laws targeting white collar criminals, the above has shown that these laws and regulations are not working. Instead, laws are being used to white-collar criminals' advantage. As mentioned in the section "Races versus Social Classes," higher social classes are made up of more white Americans than racial minorities. Therefore, I conclude that white Americans generally have a higher chance of escaping justice. Two cases that I encountered during my research follow below. They serve as examples supporting the idea that it is far easier to escape justice if one is white and rich.

Case Example: The Corporation

The Corporation is a 2003 Canadian documentary film written by University of British Columbia law professor Joel Bakan and directed by Mark Achbar and Jennifer Abbott. The documentary shows the development of the contemporary business corporation, from a legal entity that originated as a government-chartered institution meant to effect specific public functions, to the rise of the modern commercial institution entitled to most of the legal rights of a person. It concentrates mostly on corporations of the U.S. One theme is the documentary's assessment of the corporation's "personality," which exists as a result of an 1886 case in the U.S. Supreme Court in which a statement by Chief Justice Morrison R. Waite led to corporations being viewed as "persons" and having the same rights as human beings based on the 14th Amendment to the United States Constitution. The attorneys of the corporations were "smart" in terms of using the 14th Amendment to their business advantage. Corporations are "goal-driven." Their goal is to use the classical management approach to manage their organizations and gain as much profit as possible. The classical management approach treats all members or workers of the corporation as cogs in a machine. This focuses on maximizing efficiency, even at the cost of safe and fair working conditions and ethics. For example, child laborers and workers in slave-like conditions are employed for extremely low wages. With these corporations "desperate" for profits, they even turn to enemies of the country for business. "Everything is just business," they say. This perfectly illustrates a downfall of capitalism. From this documentary, one cannot help but believe that the U.S. criminal justice system adopts the conflict theory of law. Laws are indeed sometimes used to protect the interests of the powerful. With U.S. democracy promoting capitalism, money and power are often closely interrelated. The more money you have, the more power you gain. The more power you gain, the higher your chances of winning a lawsuit. In addition, once acquitted in such cases, corporations cannot be charged with the same offense again, even if better evidence is later found, because the Fifth Amendment forbids double jeopardy; this gives them a chance to escape conviction forever. The Fifth Amendment also grants the right to remain silent, allowing corporations not to answer questions disadvantageous to them.
The fact that the top executives of these corporations are often white Americans, and that whites represent them, supports my idea that being white and rich makes it easier to escape punishment.
Case Example: Ethan Couch

Wealth has never had a stigma in the affluent suburbs of Fort Worth, where the town of Westlake landed on Forbes' list of America's most affluent neighborhoods last year, with a median income of $250,000. But in recent days, the implications of being rich have set off an emotional, angry debate that has stretched far beyond the North Texas suburbs, after a juvenile court judge sentenced a 16-year-old from a well-off family to 10 years' probation for killing four people in a drunken-driving car crash. The judge, Jean Boyd, declined to give the teenager, Ethan Couch, the punishment sought by Tarrant County prosecutors—20 years in prison—and instead ordered him to be placed in a long-term treatment facility while on probation. Judge Boyd did not discuss her reasoning for her order, but it came after a psychologist called by the defense argued that Mr. Couch should not be sent to prison because he suffered from "affluenza"—a term that dates at least to the 1980s to describe the psychological problems that can afflict children of privilege. Prosecutors said they had never heard of a case where the defense tried to blame a young man's conduct on the parents' wealth. The use of the term and the judge's sentence have outraged the families of those Mr. Couch killed and injured, as well as victim rights advocates who questioned whether a teenager from a low-income family would have received as lenient a penalty. This is rather an odd case, as a "mental illness" regarding minors of privilege was brought into court. Minors of privilege here refers to children from affluent families and higher social classes. The successful use of this "illness" to escape the penalty that would otherwise have been imposed indicates that any rich family can use it in their defense from now on. If this is applicable, should not the theories of criminality, mentioned in the previous sections "Explanation of Criminal Trends on Racial Ethnicity" and "Explanation of Criminal Trends on Social Classes," also be applicable in courts? But they are not. This represents a difference in treatment.

My Change of Perspective

Before I conclude this paper, I would like to voice how my opinion on justice and fairness changed throughout the research process. To me, justice is a complex concept that pervades social thought to an unrivaled extent. It is basic to law, ethics, and politics alike. It can be conceived as a norm, a value, a virtue, a standard of evaluation of almost any aspect of life and coexistence, and as a human motive affecting thoughts, emotions, and actions. Justice cannot be defined in any simple formula, but its meaning is well captured in a more familiar term, "fairness," which is not at all obscure and is readily grasped even by young children in today's society. Fairness refers to the state, condition, or quality of being fair, or free from bias or injustice, and is also referred to as "evenhandedness." People value fair conditions and show positive reactions towards fairness, whereas they oppose unfair conditions. This can be observed in countless situations. Catholic University of Eichstätt-Ingolstadt Professor Elisabeth Kals and Bundeswehr University Munich Professor Jurgen Maes mentioned in their online e-book Justice and Conflicts
that fairness is especially important to people in situations without immediate control (Kals & Maes, 2012, p. 185). Amazingly, there is hardly any research on how people search for information in order to judge the fairness of an authority (Kals & Maes, 2012, p. 185). In the field of criminal justice, I believe that fairness refers to equal rewards or punishments for all. Now, I have come to learn that the idea or philosophy of justice and fairness is much more complex. Oftentimes, there are hidden sides of our criminal justice system unknown to all, including me. For instance, discrimination against racial minorities and the power of higher social classes are just two of the many unknown aspects. This led me to ask myself: "What other hidden aspects are there? Is our criminal justice system purely unjust? Is it really the best?"

Conclusion

Ben Whishaw claims that "the criminal justice system, like any system designed by human beings, clearly has its flaws." Indeed, I have found that we have equality only on paper, while inequality remains the practice. I started this research paper from a narrower point of view, focusing mostly on discrimination against racial minorities and the power of higher social classes. However, my research paper has led me to a wider view of our complex criminal justice system.

REFERENCES

The Treatment of Arab Americans Today. (2011). Retrieved March 14, 2014, from War on Terrorism and Racism: http://academic.udayton.edu/race/06hrights/waronterrorism/Arabs01.htm

ABC News. (2014, March 12). Man Who Spent Decades on La. Death Row Is Freed. Retrieved March 14, 2014, from ABC News: http://abcnews.go.com/US/wireStory/man-spentdecades-la-death-row-freed-22869716

Achbar, M., Simpson, B. (Producers), Bakan, J., Crooks, H., Achbar, M. (Writers), Achbar, M., & Abbott, J. (Directors). (2003). The Corporation [Motion Picture]. Canada: Zeitgeist Films.

Berteaux, J. A. (2005, July-September). What Are the Limits of Liberal Democratic Ideals in Relation to Overcoming Global Inequality and Injustice? Human Rights Review, 6(4), pp. 84-95.

Brinkerhoff, D. B., White, L. K., Ortega, S. T., & Weitz, R. (2011). Essentials of Sociology (8th ed.). Belmont, California, USA: Wadsworth.

Bureau of Justice. (2014, March 13). Local Police. Retrieved March 13, 2014, from Bureau of Justice: http://www.bjs.gov/index.cfm?ty=tp&tid=71

Cornell University Law School. (2014). 14th Amendment. Retrieved March 14, 2014, from Cornell University Law School: http://www.law.cornell.edu/constitution/amendmentxiv
Cornell University Law School. (2014). Bill of Rights. Retrieved March 13, 2014, from Cornell University Law School: http://www.law.cornell.edu/constitution/billofrights

Cornell University Law School. (2014). Commerce Clause. Retrieved March 16, 2014, from Cornell University Law School: http://www.law.cornell.edu/wex/commerce_clause

FBI. (2011). Table 74: Full-time Law Enforcement Employees. Retrieved March 13, 2014, from FBI: http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.2011/tables/table_74_fulltime_law_enforcement_employees_by_population_group_percent_male_and_female_2011.xls

Kopelson, A., Kopelson, A. (Producers), & Davis, A. (Director). (1993). The Fugitive [Motion Picture]. USA: Warner Bros.

Utt, K., Saxon, E., Bozman, R. (Producers), & Demme, J. (Director). (1991). The Silence of the Lambs [Motion Picture]. USA: Orion Pictures.

Adlesic, T., Gandour, M., Fox, J., Roma, D. (Producers), Fox, J. (Writer), & Fox, J. (Director). (2010). Gasland [Motion Picture]. USA: New Video Group.

Gonchar, M. (2013, December 18). Do Rich People Get Off Easier When They Break the Law? Retrieved April 6, 2014, from The New York Times: http://learning.blogs.nytimes.com/2013/12/18/do-rich-people-get-off-easier-when-theybreak-the-law/?_php=true&_type=blogs&_r=0

Gross, B. (1980). Friendly Fascism. New York, USA: M. Evans and Company, Inc.

Hagan, F. E. (2010). Crime Types and Criminals. Thousand Oaks, California, USA: SAGE Publications, Inc.

Hamm, T. (1999, October 11). Breaking Prison. Nation, 269(11), pp. 23-26.

History Channel. (2014). Confederate States of America. Retrieved March 16, 2014, from History: http://www.history.com/topics/american-civil-war/confederate-states-of-america

History Learning Site. (2013). The New Deal. Retrieved March 14, 2014, from History Learning Site: http://www.historylearningsite.co.uk/new_deal.htm

InfoPlease. (2011). Total U.S. Population. Retrieved March 14, 2014, from InfoPlease: http://www.infoplease.com/ipa/A0004997.html

Kopelson, A., Kopelson, A. (Producers), & Baird, S. (Director). (1998). U.S. Marshals [Motion Picture]. USA: Warner Bros.

Kals, E., & Maes, J. (2012). Justice and Conflicts (eBook). New York: Springer-Verlag Berlin Heidelberg.
Langworthy, G., Henein, M., Erskine, J. (Writers), Langworthy, G., & Henein, M. (Directors). (2009). Vanishing of the Bees [Motion Picture]. UK: Dogwoof Pictures.

Lawrence, K. O. (2011). Race, Crime, and Punishment (eBook). Washington, D.C., USA: The Aspen Institute.

Miller, L. L. (2010, September-December). The Invisible Black Victim: How American Federalism Perpetuates Racial Inequality in Criminal Justice. Law & Society Review, 44(3/4), pp. 805-842.

National Poverty Center. (2011). Poverty in the United States. Retrieved March 14, 2014, from National Poverty Center: http://www.npc.umich.edu/poverty/

Peak, K. J. (2010). Justice Administration: Police, Courts, and Corrections Management (6th ed.). New Jersey, USA: Pearson Education, Inc.

Reichel, P. L. (2013). Comparative Criminal Justice Systems: A Topical Approach (6th ed.). New Jersey, USA: Pearson Education, Inc.

Sharp, A. M., Register, C. A., & Grimes, P. W. (2013). Economics of Social Issues (20th ed.). New York, USA: McGraw-Hill.

The Innocence Project. (2004). Darryl Hunt. Retrieved March 16, 2014, from The Innocence Project: http://www.innocenceproject.org/Content/Darryl_Hunt.php

US Department of Justice. (2014). About DOJ. Retrieved March 13, 2014, from US Department of Justice: http://www.justice.gov/about/about.html

Worrall, J. L. (2010). Criminal Procedure: From First Contact to Appeal (3rd ed.). New Jersey, USA: Pearson.
The Rise of the Extreme Right in Contemporary Europe (by Lallo Darbo)

After the 2010 parliamentary election, Sweden, generally perceived as multi-cultural and tolerant, saw its first far-right party voted into Parliament. Similar gains by far-right parties have been evident all over Europe during the last two decades (Betz 663). Usually these political parties are concerned with questions such as immigration, integration, and the cultural preservation of the country in consideration. These parties range from anti-Semitic parties like Jobbik in Hungary to the more Islamophobic Dansk Folkeparti in Denmark. Many questions have arisen from this sudden increase in support for far-right parties. When did opinions about immigration become so relevant to one's party affiliation? What caused people to abandon their traditional parties? And perhaps the most interesting question we should ask ourselves is whether these far-right parties are here to stay. According to Hans-Georg Betz, a Professor in European Studies at the Paul H. Nitze School of Advanced International Studies, far-right parties are usually associated with a populist agenda (Betz 664). In most cases, the groups they target and consider the "problem" in society today are the ones opposing a more ethnically homogeneous, culturally conservative, and nationalistic approach towards contemporary societal issues. Economic issues are usually explained as outcomes of a failed policy initiated by the more conventional parties, like the social democratic and liberal parties. According to most far-right parties, other parties have created a failed multicultural society where people who do not belong to the ethnic majority are consuming more of the state's resources, e.g., in the form of social welfare, than they are contributing, e.g., in taxes. This is the main characteristic of far-right parties in Europe, as they all demonstrate "opposition to the social integration of marginalized groups and the extension of democratic rights to them" (Betz 664). According to most far-right parties, both contemporary societal issues and economic issues can be traced back to a single phenomenon, a cause that has put society in the failed state that it is now in. This single problem is, according to a vast majority of far-right parties, immigrants. Jens Rydgren, a Professor in Sociology at Stockholm University, states the following: Anti-immigration issues are the core message of the new radical right. These parties have used four arguments to frame immigrants as national/cultural threats: First, as implied above, for the radical right, immigrants are a threat to ethno-national identity; second, they are a major cause of criminality and other kinds of social insecurity; third, they are a cause of unemployment; and fourth, they are abusers of the generosity of the welfare states of Western democracies, which results in fewer state subsidies, etc., for natives. (244) Simplistic explanations of complex issues such as unemployment, criminality, and ethno-national identity have evidently become very appealing to a significant portion of European voters. Gaining popular support by rallying the majority against a minority is called populism, and this is the rhetoric that most far-right parties use. Populism requires someone to blame, and in most cases this target of resentment becomes immigrants. When it seems as if one agent of an
entire system is the cause of all these problems, the urge to use the political leverage you are entitled to, your vote, to do something about it becomes strong. Whatever one's political views, it is evident that the world has become more and more globalized. Issues concerning a country's national identity arise from a more integrated world, and the far-right movement can be seen as a reactionary political response to this. According to Betz and Johnson, the far-right movement is "a response to the erosion of the system of 'ethno-national dominance', which characterized much of the history of modern nation states" (323). As many countries struggle to find their place in a new political landscape, many voters therefore suggest a return to the old ways, when things seemed more straightforward. According to Betz, "the disintegration of traditional subcultures has contributed to a progressive dissolution of traditional party loyalties" (663). This erosion of traditional party loyalty helps explain why people give their votes to far-right parties and choose to become affiliated with a relatively new political movement. As the demand for a wider range of political options grows stronger, so does the supply of political parties. When events like the 2008 economic meltdown occur, they leave a lot of people unemployed, usually the low or middle class "average Joes." When the traditional parties' ideologies seem to fail to explain and prevent these events from happening, alternative approaches to these problems, such as the simplistic view of many far-right parties, can become appealing to those who feel abandoned by the party with which they had long been affiliated (Betz). Thus a lot of people choose to abandon their traditional party loyalty, as Betz explains: "This ... stems from the established parties' inability to respond to the consequences of the profound socio-economic and socio-cultural transformation investing Western Europe" (664-665). When many countries in Europe make the shift from being social democratic, with an emphasis on collective social welfare, to becoming more liberal, with an emphasis on individual independence from governmental influence, the result in many cases is disappointment with the new leadership. As Betz explains, "The contemporary political space of advanced Western democracies is structured by a shift from modern industrial welfare capitalism to postindustrial individualized capitalism" (672). This corresponds to Europe's right-led coalition governments, which usually, upon assuming office, embark on a campaign of lowering taxes. Since right-led governments usually pursue an agenda of liberal reforms, influenced by their political ideology, this usually translates into decreased state influence and cuts in subsidized commodities and services. Some archetypes of these liberal governments were Sarkozy's presidency in France, Berlusconi's government in Italy, and the current right-coalition government under Reinfeldt in Sweden. These are all examples of countries that have previously emphasized the importance of the state in safeguarding the social security net. Yet as governments change from left-led to right-led, they usually pursue an agenda of lowering taxes, often with promises of more money in the pocket of the individual, since it is widely believed that people save money when not having to pay a lot of taxes.
The people who do not benefit from these tax cuts are the lower and middle income people who compose a majority in most Western European countries, since cutting taxes means that formerly state-funded services and commodities become more expensive. Those who are well off are usually not as uncomfortable with privatization as the less well off, since it is usually to the disadvantage of those who have fewer means at their disposal. The dismantling of the European social welfare system becomes inevitable as governments seek new ways of lowering taxes. For example, in Sweden, the deregulation of the state monopoly on pharmacies was thought to lead to tougher competition, which in turn was thought to lower prices in favor of the consumer. Privatization was also believed to bring revenue to the state, since the state would sell pharmacies to the highest bidder. Yet the Swedish deregulation of pharmacies led to no or very little change in the prices of pharmaceutical products, and the revenues gained by the state were so embarrassing that they were classified (Rawet). Another deregulation policy enacted in Sweden during the late 1990s was the deregulation of the Swedish school system, which meant that commercial corporations and companies were able to invest in and establish so-called free schools, also known as private schools, at any educational level. "Free" in this context does not mean that they are no-cost schools; almost all schools in Sweden are tuition-free. Rather, it means that these schools are independent, in contrast to state-owned public schools. According to Mikael Hjerm, a professor at Umeå University who specializes in xenophobia and nationalism, these schools are usually established in well-off neighborhoods in big cities (301). These areas are usually not where significant proportions of immigrants live. As with the deregulation of pharmacies, the deregulation of the Swedish school system disadvantages the less well-off, since they are further distanced from their privileged contemporaries. Yet it also has segregative implications, since immigrants are usually part of the lower classes. This means that the schools become segregated between ethnic Swedes and Swedes of other ethnicities. Hjerm states: The fact that adolescents from immigrant dense schools are less inclined to display xenophobia than are pupils from more ethnically homogeneous schools may initially be interpreted in terms of segregation along ethnic lines. Not only is this problematic in relation to concepts of underclass and material inequality, it may also strengthen xenophobic values that in the long run will further increase segregation. (302) School is a crucial part of a person's life; it is an environment that often changes one's view of the world and is essential in developing one's social capabilities. If this development is not allowed to take place in a heterogeneous environment, adolescents are more likely to develop views about peoples of other ethnicities based on the mentality of their own group rather than views based on experience. The adolescents who experience this segregation are the people who will decide the future of nations. If they are not exposed to environments that in some way challenge their perception of the world, they will remain ignorant of many things, and where there is ignorance there is often prejudice. Thus the implications of the privatization of the Swedish school system stretch far beyond the corridors of the schools.
While the result of privatizing schools in Sweden directly corresponds to the development of xenophobic views, it also results in public disappointment, just as the pharmaceutical deregulation did. The deregulation of previously state-run entities usually ends differently than promised. Even if this is not meant to happen, it is viewed by the public as an unfulfilled promise of wealth and prosperity. A return to older principles seems necessary to reestablish an ideal that is thought to once have ruled the nation. The far-right party in Denmark, the Danish People's Party, explains in its party program, "Our Danish cultural heritage and responsibility urge us to act" ("Principprogram"). The Swedish Democrats offer a similar explanation: "The Party believes in a strong welfare society while at the same time we have been inspired by social conservative ideas" ("Vår Politik"). While resisting the idea of a pluralistic multicultural society is a characteristic of most far-right parties, resentment towards supranational organizations is another. Immigration is a consequence of a more integrated world, a byproduct of globalization. Globalization is seen by most far-right parties as something bad, something that dilutes and ruins the cultural heritage of nations. While immigration without doubt is viewed by far-right parties as one of these forces, the European Union is seen as another threat to domestic stability. Just as far-right parties use populist rhetoric to turn resentment towards the ruling parties into support for themselves, they use the failure of the European Union as another example of the ruling governments' failed policies. As explained earlier, far-right parties facilitate resentment, and by using scapegoats they are able to gain popular support for their ideas. According to them, the 2008 financial crisis was not only a failure of the ruling governments of Europe; it was also a failure of the European Union. The resentment became most apparent in Greece, the country hardest hit by the economic crisis, as a radical-right party entered Greece's Parliament for the first time in 2012. The Greek radical-right party Golden Dawn is known to be very hostile towards immigrants and became infamous all over Europe after a party official slapped a female left-party official during a televised political debate ("Golden Dawn"). Despite the scandal, they managed to gain several seats in the Greek parliament. This year, parties from all over Europe will compete for seats in the Parliament of the European Union in Brussels. Preliminary polls suggest that far-right, anti-EU parties have gained massive support in France, Britain, the Netherlands, and Italy (Baker). It is widely believed that parties opposing the European Union will make up the most Euro-skeptic parliament in forty years. Far-right parties see the European Union as depriving the state of its legitimate sovereignty, as laws legislated by the EU take precedence over domestic laws. Traditional parties in Europe usually view a more integrated EU as the only way for Europe to compete with and protect itself from foreign markets, an inevitable outcome of a more globalized world. Compromising the sovereignty of the state is seen as a necessary measure that is compensated for by the privileges a member of the EU can enjoy. Yet for far-right parties, the significance of the state is deeply rooted in their ideology, similar to fascism.
The far-right parties see the EU not only as a failure but, most importantly, as something that tries to deprive the state of its legitimacy. This becomes yet another disappointment which is then turned into resentment towards the ruling elite, resulting in a large portion of Europe's population seeking support from parts of the political spectrum they had never turned to before.
This is what fuels many far-right parties: resentment towards the ruling elite and opposition towards the "dominant cultural and political consensus" (Betz 664). The dominant cultural and political consensus is defined, in many European countries, as the traditional liberal approach towards immigration and the vision of the multicultural society that such policies reflect. Since the most recent waves of immigrants into Europe are of African and Middle Eastern descent, the resentment among the far-right parties against this is usually manifested in Afrophobic and Islamophobic opinions. This becomes apparent in the political program of the Swedish Democrats when they say, "We wish to outlaw imported meat that has been produced through unnecessary suffering means; we also want to outlaw all kinds of ritual slaughter in Sweden." The Swedish Democrats have their roots in Swedish Neo-Nazism. Their party slogan used to be "Sweden for the Swede." Now they have lowered their aggressive tone and adjusted to the standards of professional Swedish politics; yet an Islamophobic and anti-Semitic view can be inferred from their political program. The Swedish Democrats do not have a single point about animal rights on their political agenda except when it comes to causing pain through ritual slaughter, a practice associated with other cultures. Seeing foreign cultures as a threat to one's own culture is not only ethnocentric, resting on the belief that there are inferior cultures that should not be interacting with your own; it is also a way to deny the inevitable. The far-right movement can be seen as the last desperate gasp of social conservatism in Europe, since individual countries are battling against a phenomenon that is happening all over the world. Globalization is impossible to resist; every place is so interconnected with some other place that it has become impossible for countries to cut the links they have to other countries. Multiculturalism is not a path that governments originally intended, but it is an inevitable outcome of globalization. Of course mistakes will be committed; no country has a perfect system of immigration and integration. Neither is the European Union a perfect organization, and there will always be people who are discontented with being a part of it. The new far-right wave can be reduced to merely a political movement that gives voice to that discontent with the conventional leadership of European countries. Yet even if one chooses to take this simplistic approach in explaining this phenomenon, the fate of Europe's future political landscape lies in the traditional parties' ability to retain popular support. A return to an emphasis on collective welfare is crucial in regaining the confidence of Europe's low and middle classes, which in most cases are the target groups of far-right parties (Betz 663). Privatization leads to a widening of gaps, widening gaps lead to segregation, and segregation lessens the chances for social interaction between people. Since many of Europe's immigrants are a part of the lower classes, this means less social interaction between different ethnic groups. Social interaction is the only way to rid a country of the prejudice and preconceived ideas that are the core of the far-right movement.
While one might argue that xenophobia and prejudice are inherited properties of many societies and cultures and impossible to completely get rid of, more social interaction would result in more successful integration and at least less xenophobia in the pluralistic societies that define
contemporary Europe. This is crucial in disproving populist hypotheses and would lead to less xenophobic and racist views. If the conventional parties of Europe prove that they are capable of solving these issues that fuel far-right parties, populist agendas will have no or very little relevance, and far-right parties might disappear as quickly as they appeared.

WORKS CITED

Baker, Luke. "As European elections approach, will the anti-EU surge?" uk.reuters.com. Brussels, 13 February 2014. Web. 09 April 2014.

Betz, Hans-Georg. "The Two Faces of Radical Right-Wing Populism in Western Europe." The Review of Politics 55.4 (1993): 663-685. Academic Search Premier. Web. 09 April 2014.

Betz, Hans-Georg, and Carol Johnson. "Against the current--stemming the tide: the nostalgic ideology of the contemporary radical populist right." Journal of Political Ideologies 9.3 (2004): 311-327. Web. 29 April 2014.

"Golden Dawn." huffingtonpost.com. Associated Press, 17 June 2012. Web. 09 April 2014.

Hjerm, Mikael. "What the Future May Bring: Xenophobia among Swedish Adolescents." Acta Sociologica 48.4 (2005): 292-307. Academic Search Premier. Web. 09 Apr. 2014.

"Principprogram." danskfolkeparti.dk. Dansk Folkeparti. October 2002. Web. 09 April 2014.

Rawet, Peter. "Pharmaceutical Investor doubled their input in 3 years." Svt.se. Svt. 5 May 2013. Web. 09 April 2014.

Rydgren, Jens. "The Sociology of the Radical Right." Annual Review of Sociology 33.1 (2007): 241-262. Academic Search Premier. Web. 09 Apr. 2014.

"Vår politik." svergiedemokraterna.se. Sverigedemokraterna, 2011. Web. 09 April 2014.
ETHICS AND CONSUMERISM
Unethical Conduct (By Dana Chapman)

When you go to a fast food restaurant and purchase a meal, do you ever stop to think about what went on behind the scenes to make your meal possible? If you do not, then you are probably not aware that the meat processing industry and fast food corporations are consistently engaging in unethical behavior. They do not value or show concern for the animals that they slaughter, the consumers who purchase their goods, or the individuals whom they employ. They are literally killing their employees and their consumers through their choices of operation. Their decisions affect not only fast food, but the food that is mass-produced in grocery stores, as well. They are operating in a way that puts value on one thing: the dollar. This industry, like any other, is maintaining a consistent supply for what its consumers demand. We, the consumers, have the power to effect a change for the better in this industry, and we need to use it. At the inception of the fast food industry, the consumer had all of the power. A fast food restaurant's success or demise was determined by the consumer. Eric Schlosser states that between the 1950s and 1970s, the leading fast food chains spread nationwide (24). During that same time there were "countless others" that thrived briefly or never had a chance to succeed (Schlosser 23). For example, Schlosser goes on to point out that during the fast food wars in southern California, the place where Jack in the Box, McDonald's, Taco Bell, and Carl's Jr. began, many of the old drive-ins closed, unable to compete against the less expensive, self-service burger joints (24). Also, if a fast food restaurant created a new menu item, the consumers told it whether to keep or drop the item by how much of that particular item they purchased over a certain period of time. The fast food industry has always catered to the demands of the consumers by eagerly pursuing us through children, the media, schools, and entertainment. Schlosser adds that the industry realized long ago that if it could get a child to see or believe in a company or service just as the child sees and believes in his or her closest relatives, then it could affect that child's behavior (44). The fast food corporations understood that this technique would "increase not just current, but also future, consumption" (Schlosser 43). They recognized that if children desired what they produced at a very young age, the children would persuade their family members to be a part of their consumption, and that the company would gain additional consumers, as well as a child who would continue to be a consumer throughout his or her lifetime. As a result, corporations studied children. They studied their habits, desires, imaginations, and dreams. These studies were a means to target ads and collect demographic information on children. Schlosser notes that in the 1980s and 1990s, corporations were gathering information on children to grow their businesses. Kids' clubs became an extremely successful way to figure out what children wanted. For example, he notes that in 1991, the Burger King Kids Club increased the sales of children's meals by about 300% (45). The Internet was also a valuable tool in gathering information on the likes and dislikes of children, but the primary medium for children's advertising was, and still continues to be, television.
I understand that the fast food industry is trying to run a business and gain profits, but corrupting children before they can make informed decisions is sinister. On the surface, it can be easily argued that parents need to protect their children by not allowing them access
to things that can harm them. But if parents do not know that the threat exists, then how can they effectively protect their children? My husband and I, for a while, were parents who did not know that children's meals had the potential to harm our son. We were busy, so at least three times per week our son would eat a meal from a fast food restaurant. While my husband was at work, I came across some startling information about how fast food does not decay like normal food should and how fast food negatively affects the body. I then conducted some research, which led me to question the foods that are being mass-produced for grocery stores. I was exposed to individuals' stories of how fast food was shutting down their kidneys and damaging their livers. I also watched tragic stories of how parents had lost their young children due to ground meat contaminated with food-borne pathogens. My son was four years old at the time. I did not want my family to become victims of self-imposed illnesses. After sharing the information with my husband, we discontinued our patronage of all fast food establishments, and we now eat locally grown, certified organic foods and free-range meats. We choose to eat free-range meats because an enormous problem in the meat processing industry is animal cruelty. The Humane Society maintains that animal cruelty is running rampant, with sick and injured cows being kicked, rammed with the blades of forklifts, jabbed in the eyes, poked with objects, electrically shocked, and tortured with water hoses to get them to walk to slaughter ("Rampant"). Actual footage of slaughterhouse workers reveals the workers displaying utter "disregard for the pain and misery" that they repeatedly "inflicted" on their sick and injured cattle ("Rampant"). After viewing the footage, I could not believe what I had just witnessed. It was inhumane handling, gross mistreatment, and a violation of the law. These animals were designated not only for the fast food industry, but for "federal and national programs," as well ("Rampant"). The Humane Society of the United States conducted an investigation of the Hallmark Meat Company of California. This company's federally inspected facility is one of the largest suppliers of beef, distributing it to families in need, the elderly, and schools that participate in the National School Lunch Program, which accounts for over 100,000 schools and childcare facilities in 36 states ("Rampant"). However, their inhumane handling methods may have endangered the health of the most vulnerable people in America ("Rampant"). The investigation showed "downed" cows, meaning cows too sick or injured to walk, being forced to walk to slaughter. Downed cows have been firmly linked to bovine spongiform encephalopathy (BSE), also called mad cow disease; at least twelve of the fifteen BSE-infected animals discovered in North America were downers ("Rampant"). If the USDA insists that sick and injured cows increase human exposure to pathogens and are unfit to eat, then why would this be happening? The answer is greed. The industry wants to squeeze a profit out of every animal that it can. Food-borne pathogens stemming in part from animal mistreatment are also a major problem in the meat processing industry.
Chain reactions due to the feed given to the animals in overcrowded feedlots, poor sanitation at the meat processing plants, excessive line speeds, and poorly trained workers are causing the spread of Escherichia coli (E. coli) O157:H7, Listeria, Campylobacter, and salmonella, which have been causing major concerns over the past two
decades (Katel 1039). An instance of this is a salmonella outbreak that occurred in 2010. In his article "Would new legislation make the food supply safer?", Peter Katel reports on an egg-borne salmonella outbreak. Supposedly due to "the feed given to the animals," sixteen hundred people across the nation fought for their lives (Katel 1039). Katel notes that government inspectors found manure pits below the egg-laying operations piled four to eight feet high (1039). This was not the only food-safety hazard. For example, Katel explains that observers reported enormous numbers of living and dead flies, maggots, and rodent burrows in the facilities (1039). Contaminated meat is also infecting the workers in the slaughterhouses and processing plants. The Centers for Disease Control (CDC) estimates that 3,000 people die a year, 48 million are sickened, and 128,000 are hospitalized, all due to contaminated food or drink (Katel 1039). The CDC also estimates that the bulk of the food-poisoning cases involve pathogens that have not been recognized yet (Schlosser 196). This virtual epidemic is a sign of foul play. The corporations are aware of these issues and have made minimal effort to make the simple changes necessary to prevent pathogens from occurring so often. Schlosser notes that hundreds of thousands of pounds of meat have been contaminated, and corporations have been known to disregard contamination and continue with the processing and distribution of said meat (211). Katel adds that according to current law, the US Department of Agriculture (USDA) does not have the power to recall contaminated meat (1040). Schlosser continues that the agency can only consult with a company that has shipped contaminated meat and suggest that it withdraw the meat from interstate commerce (211). Negotiations can take days, weeks, or even months for the USDA and the facility to agree on a specific amount of bad meat to recall. At the same time, the meat continues to be processed and consumed. Schlosser points out that in extreme cases, if a decision is not made, the USDA can opt to remove its inspectors from the facility, pending closure (211). If the public has not been made aware of meat contamination and the meat is untraceable, processing plant and slaughterhouse closures are highly unlikely, because companies have a very strong economic incentive to withdraw as little meat as possible. If the public has been alerted by the media, then more drastic measures are usually taken. If necessary, the company could be taken to federal court, where it could fight a potential shutdown, though the case will more than likely not get that far. There is one case in particular that illustrates this process. Schlosser explains that in 1999, Supreme Beef Processors' Dallas, Texas plant failed multiple tests for salmonella (219). The tests showed that about 50% of the beef contained salmonella. In spite of the test results, the USDA continued to purchase thousands of tons of meat from Supreme Beef to be distributed in schools. Supreme Beef was another of the nation's largest meat suppliers to the school meals program, annually providing around 45% of its ground beef (Schlosser 219). A couple of months later, the USDA stopped purchasing and opted to remove the inspectors from the company's plant in an effort to shut down the plant.
One day later, Supreme Beef sued the USDA in federal court, claiming that salmonella is a natural organism, not an adulterant (Schlosser 220). A federal judge in Texas, A. Joe Fish, heard both arguments and
ordered the USDA inspectors back into the plant pending the final decision of the lawsuit. Weeks later, the USDA detected E. coli O157:H7 in meat from Supreme Beef. This time the company voluntarily recalled 180,000 pounds of ground beef that had previously been shipped to eight states. Only six weeks after the recall, the USDA restarted its purchases from Supreme Beef to provide ground beef to the nation's schools. The next year, about six months later, the judge ruled in favor of Supreme Beef, stating that high levels of salmonella in the plant's ground beef were not proof that the conditions there were unsanitary (Schlosser 220). Shortly after the ruling, Supreme Beef failed yet another test for salmonella. The USDA began the process of terminating its contracts with the company and announced new rules for processors hoping to supply meat to the school lunch program. The rules were set with the same type of food safety requirements that are demanded of suppliers of fast food chains: "Ground beef intended for distribution to schools would be tested for pathogens; meat that failed the tests would be rejected; and downers…could no longer be processed into the ground beef that the USDA buys for children. The meat packing industry immediately opposed the new rules" (Schlosser 221). Thus, the industry continues to process and sell downed cattle meat to schools and all other markets. In the past, the companies were punished with fines. For example, Katel states that habitual environmental law violators, like Peter DeCoster's Wright County Egg operation, the company responsible for the salmonella egg outbreak, continue to be punished with fines, which have proved to be ineffective (1042). The fines did not produce change in food safety; hence the salmonella outbreak, among many others. Nor did they change the environment of the animals or the workers employed by the industry. The meat processing industry today is a brutal place to work. Meat processing is known to be one of the most dangerous jobs in the world. It was not always this way. For much of the 1900s, meat packing was a desirable profession. Schlosser notes that there were unions to protect the employees and that wages exceeded the national average for workers in manufacturing (153). For example, he notes that in the 1960s Smith & Company was a privately owned meat packing firm whose employees were skilled and among the highest paid in the industry; they had guaranteed long-term job security, could work with union officials to address worker grievances, and were provided with bonuses, pensions, and other benefits (Schlosser 153). Schlosser also states that the fast food industry's expansion from the 1950s to the 1970s, and its demand for a more uniform product, drove changes in how animals were raised, slaughtered, and processed into consumable goods (154). This set a new system into motion, one in which skilled workers were no longer necessary and access to unions, benefits, pensions, and basic respect for humanity was eliminated. This is the system as it exists today. Today the industry believes that its employees are expendable, which is another ethical issue. Even though a fast food meal is very affordable and does not cost an arm or a leg, someone may have lost one in the process. Amputations, lacerations, and back and shoulder injuries are the most common injuries in the meat processing industry (Schlosser 185).
Johnson adds that illnesses such as lung and colon cancer, senile and pre-senile psychotic conditions, subarachnoid hemorrhage, and pneumonia are also common among workers (872). The new era
of meat processing, the “IBP revolution,” is directly responsible for many of the hazards that meat processing workers now face (Schlosser 154). These injuries are mainly due to the extremely fast pace of the disassembly lines; the faster the pace, the greater the possibility for an accident to occur. Olsson states that the giant competitors—ConAgra, IBP, Excel (owned by Cargill), and Farmland National Beef—dominate the beef industry, control 85% of the US market, and increase their earnings by maximizing production volume at each plant (13). This is done by hiring cheap labor, usually non-English-speaking immigrants, discouraging access to unions, and maintaining insufferably high disassembly line speeds (Olsson 13). Cheap labor is one reason why certain brands of meat are cheaper than others. Inexperienced workers make it easy for the industry’s dominant corporations to keep costs low, which in many cases results in a high volume of low-quality meat. The pressure of having to perform consistently at a high rate of speed has even led workers to illegal drug use; supervisors have been known to supply workers with methamphetamines in exchange for money, time, or favors (Schlosser 174). Workers have been under the illusion that illegal drugs will increase their energy levels and make them invulnerable to potential accidents. The reality is the exact opposite: Drugs impair their thought processes and their ability to think critically, which puts them at a higher risk of having an accident. When accidents occurred in the past, labor unions gave workers opportunities to complain about line speeds or injury rates without fear of losing their jobs, but this is no longer the case. A very small percentage of workers today belong to a union. Olsson concludes that the nonunion workers are recent immigrants, including illegal immigrants, who can be terminated from employment at any time for any reason (13). These workers are in no position to speak up and are unlikely to complain about the opportunity that has been afforded to them. Many have families to support in other countries, have traveled great distances, and can be paid more in an hour here than they would earn in a day back home. These are the people who will suffer the abuse of the industry in silence in order to keep the privilege that is their job. The meat packing industry sees injured workers as a hindrance to profits. Consequently, the more quickly the injured are replaced, the better for the production of the particular slaughterhouse. Slowing down production to accommodate injured workers can be a major competitive disadvantage in the meat packing industry. As a result, injured workers are given pay cuts and unpleasant jobs to give them an incentive to quit (Schlosser 188). Working conditions and food safety standards in the industry need to improve to reduce the number of injuries that occur in slaughterhouses. I wholeheartedly agree with Schlosser when he concludes: Almost any workplace injury, viewed in isolation, can be described as an “accident.” Workers are routinely made to feel responsible for their own injuries, and many do indeed make mistakes. But when at least one-third of meatpacking workers are injured each year, when the causes of those injuries are well known, when the means to prevent those injuries are readily available and yet not applied, there is nothing incidental about
lacerations, amputations, cumulative traumas, and death in the meatpacking industry. These injuries do not stem from individual mistakes. They are systematic, and they are caused by greed. (Schlosser 265) The fines on the meat processing companies have not succeeded in promoting the necessary changes in the safety practices of the industry. New penalties should be enacted so that companies are forced to see the error of the ways in which they have chosen to run such a corrupt industry. The fines should dramatically increase, and there should be non-negotiable plant closures, criminal charges for negligence, and prosecutions of all meat processing executives connected to the deaths and injuries of their workers, to show the industry that no one is exempt from the consequences of criminal activity. Ultimately, when public safety is compromised, the decision is made by the industry and not by the food safety experts. Enough consumers must be educated on these issues to effect a drastic change in how this industry operates. It is unclear how consumers can be educated when the relevant information has been deliberately hidden from them for so many years. However, it is clear that when people are educated about what they eat, where it comes from, and what it is doing to their bodies, things will change. People will choose to buy their food outside of the corrupt industry, and the industry will be forced to police itself. For example, had I not taken the time to educate myself on these issues, I would still be an uneducated consumer purchasing fast food or processed meals more than five times per week, putting myself and my family at risk with every bite. As a result of my education on these issues, I am firmly resolved to no longer eat processed or fast food, or to buy it for my family. If we, the consumers, want to see a change for the better in this industry, we must vote at the cash register. If we want the industry to produce food that is not genetically altered in any way, we have to stop purchasing the genetically altered foods that the industry produces. If we want the fast food restaurants to provide foods that increase our life expectancy rather than break down our vital organs, we cannot continue to support the establishments that provide the very things that need to change. Whatever we spend our money on is what the industry will produce; we show the industry what we want by purchasing it. If we purchase our food from farmers’ markets and stores that sell whole foods, and make sure that the items we buy from grocery stores are certified organic and locally grown, this will decrease the demand for the goods and services that this industry currently offers. If enough of us join together and make these changes, the industry will be forced to meet the new demands of our consumption or suffer the financial consequences. We cannot allow these corporations the freedom to continue to take innocent lives. Their unscrupulous behavior cannot continue to go unchallenged and unchecked. There must be checks and balances so that this kind of behavior is brought to a complete halt.
WORKS CITED
Johnson, Eric S., et al. "Mortality in Workers Employed in Pig Abattoirs and Processing Plants." Environmental Research 111.6 (2011): 871-876. Academic Search Premier. Web. 19 Feb. 2014.
Katel, Peter. "Food Safety." CQ Researcher 17 Dec. 2010: 1037-60. Web. 19 Feb. 2014.
Olsson, Karen. "The Shame of Meatpacking." Nation 275.8 (2002): 11-16. Academic Search Premier. Web. 19 Feb. 2014.
"Rampant Animal Cruelty at California Slaughter Plant." Humane Society 30 Oct. 2008. Web. 9 Apr. 2014.
Schlosser, Eric. Fast Food Nation: The Dark Side of the All-American Meal. 2012 ed. New York, NY: First Mariner Books, 2012. Print.
I Was Imagined by a Copywriter (By Romain Caïetti) Seeing the world as a kid is fascinating. Still naive, we experience everyday life as an adventure into a world we do not yet know. But we learn, and we learn fast. Like blank sheets with plenty of space for writing, our brains start out empty and then gather all the information they can absorb. Remarkably, the first information we gather becomes our background, the foundation for our future life, values, and beliefs. But take the time to look around you objectively. What do you see? A world of people, cars, animals, advertising, media, movies, stories, food, politics, costumes, fashion, accessories, and product placement; and we buy, consume, and buy again. Anything you set your eyes on has something to say. But the real question is, who says it? And what does it say? Marketing is now, more than ever, part of our daily lives. New generations grow up with it, developing a sense of natural interactivity and information anxiety, while previous generations learn and adapt to it. Because it is so persuasive, its potential is extremely wide, from doing good to promoting harm; it all depends on how it is used. In this paper we will see that although people think they choose who they are, they have only conformed to a category created by advertisers. I will demonstrate through theory, studies, facts, statistics, and reasoning that marketers shape the society we live in, controlling people's tastes, realities, and lifestyles, in the process engendering some serious social and psychological problems. The purpose of this paper is ultimately to create awareness that we should all be careful regarding messages transmitted through media, because they often promote a new reality, which only becomes so once people conform to it. When college students are asked, “Why did you choose communication as a major?” most of them unfortunately answer, “Because it is easy.” These students do not know they have chosen this major for the wrong reasons; they do not understand that they are going to learn to use the most powerful weapon humans can ever own. I have personally always wanted to be a journalist. The fact that my father, my grandfather, and even my uncle were all journalists probably influenced me a little bit. But more than this, I have always truly believed that communication can save the world. One day, while I was playing a board game with two friends, we disagreed about the legitimacy of one player's move. After a few minutes of argument, we just looked at the rules, which clarified the whole situation and restored peace between all of us. So, what happened exactly? We were informed of the truth. When I say “truth” here, I mean that we all accepted that the official rules of the game—those set by its inventor—decided our reality. That is the day I first understood the potential of communication. Despite the huge disappointment created when some of my fellow classmates admitted the lazy nature of their motivation for studying communication, other important organizations and people understand, believe in, and spread its true potential. For example, in 2005 the United Nations Information and Communication Technologies (ICT) Task Force released a book called Information and Communication Technology for Peace: The Role of ICT in Preventing, Responding to and Recovering from Conflict. In this book Kofi A. Annan, who was Secretary-General of the United Nations from 1997 until 2006, said:
We are all becoming more familiar with the extraordinary power of information and communication technologies. From trade to telemedicine, from education to environmental protection, ICTs give us potential to improve standards of living throughout the world. (Annan, Preface) What the Secretary-General means here is that ICTs are tools for promoting peace in the world, as well as for improving lives. All this comes from using ICTs to alert communities to upcoming natural disasters so that they have time to prepare, to communicate strategies during conflicts, to inform people about cultural differences that could be misunderstood, to promote understanding of where a violent conflict stems from, and even simply to promote human rights such as freedom of expression. All of these possibilities of communication can make the world better. Take, for example, the radio appeal of General Charles de Gaulle on June 18, 1940, so well detailed by the Charles de Gaulle Foundation (2013). To provide a little background: on June 17, 1940, General Petain, who had just been put in charge of the government when the French realized the Germans would likely win the war, decided to give up and explained on the radio that France accepted its defeat and would sign the armistice with Germany, which would from then on dominate France. As Daniel Cordier, who was a French teenager at the time, recalls of the moment he first heard General Petain's speech: “It was unbearable - it made me cry. I went to my room to cry for rather a long time. I told myself this is not possible. Not possible” (BBC). The French population completely lost hope and was about to give up as well; after all, the government itself had lost hope. The next day, General de Gaulle broadcast a wonderful message from the BBC in London, full of hope, telling the French: “France may have lost a battle, but France has not lost the war.” After his message, many French men and women secretly joined the resistance, of which de Gaulle became the symbol, and participated in freeing France with the help of the USA and the UK. This is an example of how powerful communication can be (“Le 18 Juin”). Other good examples include awareness campaigns such as the recent “I wish I had breast cancer,” or the new “Save the Whales” campaign against obesity. These are examples of communication through advertising used for a good purpose, i.e., to make people aware of dangers or health concerns, and to encourage them to eat less fat, stop smoking, and so forth. But, wait! Eat less fat? Quit smoking? Were these two habits not introduced to the population by profit-driven advertising years earlier? Now we enter the dark side of communication. Advertising can be very dangerous, especially when it features a harmful product such as tobacco, or encourages wrong beliefs. According to “Ethics in Advertising: Ideological Correlates of Consumer Perceptions,” written by University of Florida advertising professors Debbie Treise, Michael Weigold, Jenneane Connac and Heather Garrison, advertising has been criticized for its obvious “negligence of societal responsibility” (59). They cite Richard Pollay, Ph.D. and Professor of Advertising, saying he “suggests that advertising has profound consequences due to its pervasiveness, stereotypical portrayals, manipulative and persuasive nature, preoccupation with materialism and consumption, frequent use of sex appeals, and lack of information” (Treise et al. 59).
These are just a few of the problems.
In addition to these, I would also add the unethical targeting practices of advertising: ads that focus on minorities and/or people with low incomes and encourage them to purchase useless products; ads that target children, who are less able to recognize what is true or false; appeals to sex and fear; and even the presentation of a possibly harmful product as something good. Now, here comes the real interest of this paper. One report reveals a study of consumers' views regarding the ethical aspects of advertising. What is surprising is that consumers, contrary to my original expectation, rated some aspects of advertising, namely advertisements' targeting of children, as not especially unethical. In fact, in this study many parents saw absolutely no harm in exposing their kids to advertisements. Another good example would be all the ads for sodas. While it is a known fact nowadays that sodas are terrible for people's health, the majority of people do not see the ethical issue that arises when ads push this product on consumers. However, Carrigan and Attalla from the University of Michigan cite Philip Kotler, founder of the Societal Marketing Movement, who said that what makes consumers happy is not always what is good for them (Carrigan & Attalla par. 5). Marketers' job is to create a “need” in the consumer for a given product. Once purchased, this product may bring happiness in the short term, but it is also possible that in the long run it will have horrific effects on the consumer's health or on society, as happened with tobacco. This is when advertising is unethical and becomes injurious. Contrary to what most people think, it is very simple to transform a “need” into a specific “want.” Here is what I learned in my consumer behavior class, written in black and white by Dr. Michael R. Solomon, Professor of Marketing and Director of the Center for Consumer Research at Saint Joseph's University in Philadelphia. In his book, Consumer Behavior: Buying, Having, and Being, we learn that human beings are very simple: People do what they do according to their motivations. Motivation is when we feel a need that we wish to fulfill; it becomes our goal. Also, there are two kinds of needs: the utilitarian need, where people need to accomplish something beneficial; and the hedonic need, which is a fantasy “need,” like craving a Twix because it looks delicious. These needs vary in magnitude, and this magnitude is called a drive. Eventually, this need will be transformed into a want, which is a specific way of fulfilling the need, one that accords with the cultural background and the personality of the person (Solomon 118). For example, I am hungry and I see a cheeseburger on a billboard which looks very good. Suddenly, I want to eat a cheeseburger to reduce my need. This example is very simplistic, but basically what happened here is that the advertisement I was exposed to essentially decided for me what I want to eat. This works with everything: People consume what they want to be. I want people to think I am a wealthy person, so I am going to buy a Rolex instead of a Swatch. I want people to think I am sporty, so I will buy Nike shoes instead of Converse. But the real question is this: Who decided that these objects make such statements? Who associates high or low status with each object? The answer is, advertisers and marketers. Both work hand in hand, setting prices and designing campaigns and products' looks, designing the world; they shape reality.
Using a mix of structuralism, semiotics, and Neo-Marxist analysis, the French post-modern philosopher Jean Baudrillard came up with a concept that he called “hyperreality” in his book,
Simulacra and Simulation. But first things first: let's dissect this word and Baudrillard's theory step by step. According to Jim Powell, an English novelist who graduated from Trinity Hall in Cambridge and was chosen as one of the 12 Best Novelists by the BBC, in his book Postmodernism for Beginners, Baudrillard pictured our era, our world, us, as being literally materialized: “Just as a young boy who grows with wolves become[s] wolf-like, people in postmodern society, growing up in a world of objects—become more object-like” (Powell 45). In fact, we associate with each object a certain meaning; each one represents a standard of living, a form of social recognition. Therefore, according to Powell, Baudrillard believed that purchasing is simply a means to be socially recognized (41-42). In other words, consumption is no longer a matter of reaching happiness by fulfilling a want; instead it is an automatic cultural process that allows us to exist, to be differentiated in our society. Only a few years later, Baudrillard came up with the ideas of simulacra, simulation, and hyperreality. What are simulacra? Simulacra, the plural of simulacrum, are copies of something (Powell 56). Take Coca-Cola and Pepsi, for example. In Postmodernism for Beginners, Powell explains that both products are so similar that you do not know which one is a copy of the other: they are simulacra. The French post-modernist views our society as completely lost and flooded by simulacra, which prevent us from knowing what is real and what is a copy of reality. Too many objects are too similar to each other, and we no longer know where one begins and another ends (Powell 59). Look at bananas nowadays as another example. This fruit has been modified so many times that scientists no longer remember what the first real banana gene actually was. So, if the real banana gene cannot be found anymore, it means that the real banana does not exist anymore. Through the same reasoning, Baudrillard concluded that if we are not able to know what is real and what is not, reality does not exist anymore (Powell 59). This is how advertisers and marketers create their own realities, by suppressing previous ones. This whole concept might be slightly confusing for now, but as you go through this paper, you will understand it better. As I stated before, each time marketers advertise a product, they create a certain reality. They give it a name. The word “car” and the object itself have no natural correlation, for example, but someone gave the object this name, which then became a reality. Taking this one step further, the term “pre-owned” is simply a classier label than “used car”; basically, “pre-owned” cars are simulacra of “used cars.” And if you think about it, you will realize that everything around us, every object, represents a certain idea. Macintosh used to be associated with anti-conformity. Red Bull is the energy drink of extreme sports fanatics. German cars never die. Advertisers create realities like this every day, changing or influencing actual reality. This means that they actually control people's lives thanks to the persuasive nature of advertising. That is why Baudrillard considered our society to be a hyperreality. How do we verify such a statement? You just need to look at sales, because if ads trigger sales, it means that the reality created by marketers is being spread and accepted. According to the advertising expert John Philip Jones, advertising does trigger sales (3).
Jones cites in his book When Ads Work a study based on 28 brands' short-term advertising, which showed that on average ads increase sales by 136% (Jones 19-22). In other words, if I have a company that sells forty oranges per day, after advertising my sales would on average grow to roughly 94 oranges per day.
If we summarize all of this, we basically see that marketers create a new reality through simulacra, they spread it through advertising, and people conform to this reality by buying these simulacra in order to give themselves an image, or to be socially differentiated. In spite of all of this, if you ask people whether they have free will, whether they feel like they decide who they are, they will say, “Yes! Of course! I decided to be a wealthy man driving a Mercedes rather than a middle-class man with a modest income driving a Toyota.” And people do have free will, to a certain extent. Here is the definition of free will according to social psychologist Roy F. Baumeister and his co-authors in “Free Will in Consumer Behavior: Self-control, Ego Depletion, and Choice”: Free will, when defined strictly by random action, is difficult to explain from an evolutionary perspective because there is no apparent reason that humans would have evolved a capacity for acting randomly. (…) Our proposed conceptualization of free will as consisting of self-control, following rules, and making enlightened decisions is more plausible (as compared to free will as random behavior) from an evolutionary perspective insofar as these qualities would have clearly benefited early humans in dealing with such problems. (Baumeister et al. par. 19) In other words, the realistic free will people have is simply a matter of “self-regulation and rational choices” within the present rules and social reality. This is when Baudrillard's hyperreality becomes relevant. As I have pointed out before, if the social reality is created by advertisers, the choices people have left are only simulacra, or hyperrealities, introduced to them through ads, media, and an already unrealistic society, which is very far from the popular idea of free will that most people share. Knowing all we have covered so far, it is now interesting to look at what kind of realities advertisers created in the past versus the ones they create today. Cultural studies scholar Juliann Sivulka has dedicated her book, Soap, Sex, and Cigarettes: A Cultural History of American Advertising, to tracing the entire history of advertising in America, starting with the promotion of America, the “New World,” at the beginning of the 17th century (Sivulka 6). This was advertised in England in order to encourage countryside men and women to leave for America. “Free land” was the reality created by advertisers, which has been a “reality” since then and a major value of the country, giving birth to “The American Dream.” Then there was the creation of breakfast as we picture it nowadays. In fact, the idea of orange juice or cereal in the morning was brought to us by marketers, giving a new purpose or use to these two products (Sivulka 87). Yet the best example of advertising persuasiveness is probably the tobacco industry. Until World War I, people associated cigarettes with “criminals, neurotics, or possibly drug addicts” (Sivulka 149). Then came George Washington Hill's American Tobacco Company and the big three brands, Camel, Lucky Strike, and Chesterfield, along with the multi-million dollar campaigns associated with them, radically changing public opinion on cigarettes and making tobacco one of the most consumed products in Western society (Sivulka 147-149). These three are just examples of the thousands of realities advertising has created.
But creating such concepts has not always been harmless; on the contrary, they have often done more harm than good. In the following part of this paper I will cover three main subjects and analyze the problems related to them: gender roles and stereotypes in children, the ideal female body standard, and one aspect of the dark side of consumer behavior, compulsive buying.
Child-targeted advertising is something that completely fascinates me—not because of the creativity used in it, but because, according to Beverly A. Browne at the University of Boston, children develop their concept of gender roles by the age of seven (Browne 84). This means that the basic ideas they have about what it is to be a man and what it is to be a woman come directly from stereotypes displayed in ads. For example, advertising for children often shows men as being more “constructive, powerful, autonomous, and achieving,” while women are represented as emphasizing “friendship, passivity, deference, lack of intelligence and credibility” (Browne 84). The problem with stereotypes is that they are social constructs, not truths; there is no reason a woman could not be as credible as a man, yet ads still shape people's reality because people are exposed to these images at an early age. Browne says that research has shown that women exposed to make-up ads as children are more concerned about being pretty and wearing make-up as grown-ups (Browne 85). Similarly—and alarmingly—Alexandra C. Hess showed in her thesis, Application of Stereotypes in Marketing: Gender Cues and Brand Perception, that women who have been exposed to a stereotype threat, such as the claim that women are bad at math, are more likely to avoid the threatened domain, bringing about their failure in that field (Hess 61). The correlation might be too easily made, but this could explain many girls' tendency to avoid scientific activities. Another part of Hess's work that I find particularly interesting is her explanation of the process by which we take stereotypes for granted. In fact, according to Hess, this is a simple process that humans perform all the time, called categorization (Hess 18). This is the process of assimilating people, objects, ideas, and so forth which share common characteristics into a category. Moreover, it is such an automatic process in the human brain that we are not even conscious of it (Hess 123), which can explain why people think advertising does not influence them. Stereotyping, or categorization, is also the way advertisers have created their most influential reality: the one concerning women, beauty, and body standards. In order to trigger sales, advertisers use motivational appeals. These can be guilt, humor, or even logic, but by far the most exploited one is sex appeal. And while we now know for a fact that this is a major source of stress and low self-esteem for women, it is still used. Why? Sex sells. In “Consumer Responses to Sex Appeal Advertising: A Cross-Cultural Study,” three professors of advertising and marketing, Fang Liu, Hong Cheng and Jianyao Li, describe sex appeals as “based around the appearance of nudity and the use of sexual attractiveness or suggestiveness” (par. 4). They point out four reasons why sex triggers sales. First, it catches people's attention better, and for a longer period of time; for example, someone who looks through a magazine will spend more time looking at ads using sex appeal than at ads not using it. Second, memory of the product lasts longer with sex appeal, as it is easy to recall or to associate with something familiar in our brain. Third, it creates strong emotional responses, such as “arousal, excitement or even lust” (Liu et al. par. 7), which brings us to the fourth point: Sex appeal allows the association of a good feeling with a product.
This means, for example, that if a product is advertised with a confident, sexy woman as an endorser, people will associate these characteristics with the product, and by purchasing it they try to attain the same sexy and confident traits as the woman shown in the ad (Liu et al. par. 8). It is thanks to all of these benefits that a lot of products nowadays feature sex appeal in their ads.
However, this has created a major problem in our society, as it lowers people's self-esteem by creating complexes about their bodies, especially for women. In fact, psychologist Kasey Serdar points out in “Female Body Image and the Mass Media: Perspectives on How Women Internalize the Ideal Beauty Standard” that throughout the years, mass media have spread a feminine “ideal beauty” standard that is unrealistic and unhealthy. Most of the women displayed in ads are considered to be below “healthy body weight” (Serdar par. 1). By normalizing such unreal body standards, advertisers once again create their own reality, in which it becomes true that “in order for a woman to be considered beautiful, she must be unhealthy” (Serdar par. 1). This has terrifying consequences. In 1999, a study showed that 40% of teenage girls were concerned about losing weight, while most of them fell within “the normal weight range of their age” (Serdar par. 3). Many studies following this one reached the same results, showing the harmful effects advertising can have on women. So, merely by being exposed every day to this unreal “ideal” body, women are given no other choice but to compare themselves to it, resulting in “increased levels of depression, stress, guilt, shame, and insecurity” (Serdar par. 7). However, it is important to highlight that advertising stresses people even when it promotes standards people can reach. For some people, such pressures can lead to a very dark activity: compulsive buying. According to “Compulsive Buying: A Phenomenological Exploration,” written by Thomas C. O'Guinn and Ronald J. Faber, compulsive buying can be defined as “chronic, repetitive purchasing that becomes a primary response to negative events or feelings” (155), to which they add that “the activity, while perhaps providing short-term positive rewards, becomes very difficult to stop and ultimately results in harmful consequences” (O'Guinn & Faber 155). The two authors arrived at this definition because, throughout their study, they discovered that compulsive buying reflected low self-esteem, a compulsive trait, and a predisposition to fantasy, and that it ended in terrible situations such as debt, divorce, and the psychological states associated with such stresses, like depression (O'Guinn & Faber 155). O'Guinn and Faber's most relevant point is their finding about the motivation that pushes compulsive buyers to purchase. Interestingly, they found that compulsive buying was not so much about owning the product as about triggering a positive interpersonal interaction and enhancing self-perception (156). Indeed, a majority of the people they interviewed admitted buying because they felt low self-esteem and the interaction with the salesperson made them feel better, as they were complimented while trying new products (154). Moreover, even though the feeling of owning the product is secondary, the product bought is generally specific. For example, women are more motivated to buy products that will increase the positivity of an interpersonal interaction, such as make-up, dresses, or other products that could make them feel beautiful, while men go for products that make them feel more powerful (O'Guinn & Faber 147-156). Does this sound familiar? Remember the section about children's advertising.
The products purchased by compulsive buyers, particularly in the example I just gave, are exact representations of what advertisers want children (and the general population) to believe about what makes a boy and what makes a girl: boys should be strong and powerful, while girls should be very feminine, fashion- and beauty-oriented (Browne 84).
This is striking evidence supporting Baudrillard's theory. We learned earlier that advertising is the association, made by an advertiser, of a value, a meaning, or an image with a product. Here, compulsive buyers who experience low self-esteem buy products that are supposed to make them feel better, products that are symbols of a certain social identity and of traits such as being powerful or beautiful. Marketers make us dependent on what they have created. As Baudrillard said, today's world is made of hyperrealities, or simulacra, to which people decide to conform. Once they pick one, they will do anything to represent it, such as buying all of the products communicating this image (Powell 46-47). While compulsive buyers are extreme examples of such a theory, O'Guinn and Faber connect it to a broader target, the general population exposed to advertising: “Our understanding of more normal consumer behavior will be enriched by our understanding of its extreme forms” (147). In fact, compulsive buying is, once again, an extreme exaggeration of “normal” buying behavior. This means that even if a “normal” consumer's buying has not reached a harmful level, it is still stressful to fit into society by adopting a simulacrum developed by advertisers. You now might better understand why you are who you are, or, on the opposite end of the spectrum, you may feel lost and aim to know who you honestly are. Jean Baudrillard's theory of hyperreality is, to me, mind-blowing. Maybe it is true! Maybe all of us really have no free will, and our actions are manners learned through the media. On the other hand, some could argue that spontaneous human actions come not from hyperrealities but from natural good will, or even bad intentions. All the questions arising right now are just evidence that there is a lot more to learn and study about the ideas of hyperreality and simulacra. However, one truth I have been able to establish here is that advertising, and media in general, certainly do influence the way we perceive the world and what people or objects are. So now I am going to ask you the same question I asked at the beginning of this paper. Take the time to look around you again: What do you see now?
WORKS CITED
Annan, Kofi A. Preface. “Information and Communication Technology for Peace.” The United Nations Information and Communication Technologies Task Force, 2005. Web. 19 Apr. 2014.
Baumeister, Roy F., Erin A. Sparks, Tyler F. Stillman, and Kathleen D. Vohs. "Free Will in Consumer Behavior: Self-control, Ego Depletion, and Choice." Journal of Consumer Psychology 18.1 (1 Jan. 2008): 4-13. Web. 19 Apr. 2014.
BBC News. “How De Gaulle Speech Changed Fate of France.” BBC, 18 June 2010. Web. 19 Apr. 2014.
Browne, Beverly A. “Gender Stereotypes in Advertising on Children's Television in the 1990s: A Cross-National Analysis.” Journal of Advertising 27.1 (Spring 1998): 83-96. Web. 2 Apr. 2014.
Carrigan, Marylyn, and Ahmad Attalla. “The Myth of the Ethical Consumer – Do Ethics Matter in Purchase Behaviour?” Journal of Consumer Marketing 18.7 (2001): 560-578. Web. 28 Mar. 2014.
Hess, Alexandra C. “Application of Stereotypes in Marketing: Gender Cues and Brand Perception.” Thesis. University of Waikato, 2013. Web. 1 Apr. 2014.
Jones, John Philip. When Ads Work: New Proof That Advertising Triggers Sales. New York: Lexington, 1995. Print.
“Le 18 juin heure par heure.” La Fondation Charles de Gaulle, 2013. Web. 3 Apr. 2014.
Liu, Fang, Hong Cheng, and Jianyao Li. “Consumer Responses to Sex Appeal Advertising: A Cross-Cultural Study.” International Marketing Review 26.4/5 (2009): 501-520. Web. 2 Apr. 2014.
O'Guinn, Thomas C., and Ronald J. Faber. "Compulsive Buying: A Phenomenological Exploration." Journal of Consumer Research 16 (1989): 147-157. Web. 26 Mar. 2014.
Ryan, Michael. Cultural Studies: An Anthology. Oxford: Blackwell, 2008. Print.
Serdar, Kasey L. “Female Body Image and the Mass Media: Perspectives on How Women Internalize the Ideal Beauty Standard.” Westminster College, n.d. Web. 19 Apr. 2014.
Sivulka, Juliann. Soap, Sex, and Cigarettes: A Cultural History of American Advertising. Belmont, CA: Wadsworth Pub., 1998. Print.
Solomon, Michael R. Consumer Behavior: Buying, Having, and Being. New Jersey: Pearson Education, 2013. Print.
Treise, Debbie, et al. "Ethics in Advertising: Ideological Correlates of Consumer Perceptions." Journal of Advertising 23.3 (1994): 59-69. Web. 29 Apr. 2014.
CONVINCING THE CROWD
Video Games: Art or Entertainment? (By Silje Solland) Today, millions of people all over the world play video games. Gaming has now been around for a while, and it is not for everyone. Many people like to think that video games are just a new medium that can only be seen as pure entertainment, and that games have nothing to do with art. On the other hand, nobody wants to argue against cinema being an art form. With this in mind, I believe it is possible to prove that video games can also be seen as art and not just an activity to do for fun. To understand the view that video games are not an art form, a few major points must be considered. The most controversial of them all would be that since video games are at a relatively early stage, there has been less time to study the medium. Roger Ebert made a statement supporting this view, saying, “Let me just say that no video gamer now living will survive long enough to experience the medium as an art form” (1). To illustrate what Ebert means, Larissa Hjort, in the book Games and Gaming, says that games emerged and transformed from the 1970s onward (19). This basically means that video games have been around for roughly 44 years, which is not a long time. Further on, Hjort also points out that games are “shaped by techno-cultures” (20). Hjort is pointing out the major role of technology in games. Hjort does an outstanding job of explaining the history of games, saying that games ultimately began with people trying to push the possibilities of technology, and that it all started in MIT's Computer Science department, partially funded by the US military (29). In other words, the background for the birth of games has been established very clearly. Technology is the defining feature of video games, making them a unique platform in existence today. In contrast to the old traditional arts like painting and film, games also have an interactive element. Going back to Roger Ebert, one of the most well-known movie critics in Hollywood, Ebert claimed that the main difference between art and games is that a game can be won; something you cannot win, he argues, is not a game but an experience (3). Now, what I find unsettling about his statement is his view of an “experience.” In my eyes, an experience is something you in fact experience. There are no clear reasons why the joy of having a challenge presented to you in a game should be seen strictly as entertainment value. Another way to view this could be to reverse the roles. What if the experience you have while viewing a painting in a museum counted only toward your entertainment value, instead of being the aesthetic experience that Roger Ebert thinks is missing from video games? To further his claim, Ebert responded to a TED talk in which a video game was being critically acclaimed for its creative and innovative gameplay. The lecturer, Kellee Santiago, explained that the game is focused on the beauty of nature and urban life; however, Ebert, without even playing the game himself, claimed this sounded like nothing more than a decorative game with no purpose (5). At this point, Ebert is contradicting himself. He, as an authority on art, should know that a lot of paintings and other traditional arts also often mirror the same values that Kellee Santiago claimed her game was focused on. Going to a museum, you are almost certain to find a painting of nature and beauty that will also serve well for a decorative purpose.
Another perspective opposing games as art is an interesting claim from Leil Lebovitz, a writer for the news magazine New Republic. Lebovitz claims that the Museum of Modern Art, which hosts game exhibitions, is mistaking video games for art because games are just code (1). To support this claim, Lebovitz explains that code consists of tasks a computer performs to give a particular function to a design, but art knows no such limitation and is solely the result of a free mind (2). To a certain degree, Lebovitz is right that games are made of code. What is wrong with the claim is that the code a game consists of is merely a tool for the game's creators. To put this in perspective, Lebovitz forgets that for a painting to be a painting, the painter needs a canvas, a paintbrush, and paint. In other words, code can also be seen as the canvas and the tools of the creator/artist. Setting aside the interactive element, games are basically a mix of various existing mediums. Together with elements from traditional art forms, games also offer the aesthetics that traditional art can offer. A perfect example is Hideo Kojima, who has been in the gaming industry for around 26 years. He admits that to create a game, he drew inspiration from all the traditional arts, such as music and film, and turned them into games (3). Kojima said that “games are a collaborative art, or a synthesis of various things—technology, story and art” (2). To further explain his vision of games, Kojima points out that there is a significant difference between games and traditional art. To put his view in perspective, he compares video games to Disneyland. Disneyland is an environment set up for people to play and have fun, but when you break down the park into individual elements, they are all built from “artistic elements” (2). Kojima makes an interesting comparison that shows the main difference between traditional arts and games. However, the most interesting aspect to consider is this: Should this difference determine that games cannot be art? Or should the artistic elements included in a game integrate it with the ancient traditional arts? An important element of traditional arts is the aspect of emotion. This element can also be found in video games, as Kojima explains very well. One of the things Kojima believes is most important for a good game is the game's ability to play on emotions. Kojima explains that since video games have the interactive element, they can be used in ways that “strengthen the emotional experience” (4). This aspect of emotions has actually been researched and serves as valid backup for Kojima's point. In 2004, the President of XEODesign, Nicole Lazzaro, wanted to know why people played video games. Her company conducted research, spanning over 12 years, studying the emotions players experienced through gameplay, not storyline (1). The fact that the research was done without taking the storyline aspect of a game into consideration is very interesting. One observation Lazzaro mentions is the psychological and physical evidence of emotions that occurred in players' brains and bodies during a game. Lazzaro then says that people play to experience the various sensations the body goes through emotionally (7). Lazzaro points out an interesting aspect of playing games that could point to the aesthetic value games may have, in addition to just being fun to play.
Lastly, Lazzaro revealed that through the study they found that people do not necessarily play a game just for the game itself, but rather for the experience the game creates (1). These are remarkable results. Lazzaro's
study validates the idea that games exceed mere entertainment value and shows where the interest of most gamers actually lies: in the experience that games create. Traditional arts have actually been trying to include video games in recent years. Michael Gallagher, CEO of the Entertainment Software Association, wrote an article in the Huffington Post discussing the Smithsonian American Art Museum's exhibit showing the history of video games over the last 40 years, including the games and the technology (1). The fact that a highly respected museum decided to include games could start to open people's eyes to the medium. Further, Gallagher points out that people who play games can already see the artistic value in them, but now that the museum has chosen to have a gaming exhibit, non-players can experience what gamers already know (1). Gallagher then claims that the exhibit will change the gaming medium from just being entertainment to being considered an art form. To support his claims, he refers to games' ability to tell stories in a new way, unlike movies and paintings (1). Lastly, Gallagher emphasizes the main difference between games and other traditional mediums: that games are made together with their audiences (1). By mentioning the positive outcomes of the museum's initiative to include games, Gallagher shows that the time for games to be considered a form of art is here, and he opens up an interesting debate about the outcomes. Another claim that clearly backs up Gallagher can be found in the collection Videogames and Art, edited by Andy Clarke and Grethe Mitchell. Writer Brett Martin says that a creator of video games goes through many of the same processes that traditional artists go through, and thereby claims that a video game incorporates all the traditional mediums that exist today (5). One traditional art form that is very similar to games is cinema. When the mediums are compared, many similarities appear between games and cinema/photography, both technical and aesthetic. Brett Martin claims that video games are bound to go through a tough process before being accepted as a new artistic medium. Martin says that to better understand the evolution games are going through, we need to compare them to older traditional mediums like photography and cinema (1). Martin mentions parallels between the mediums in that they both rely on technology, together with public relations, to attract an audience (4). To further explain his comparison, Martin argues that most people see film as art; however, not all movies are art. As an example of an artistic film, he brings up The Wizard of Oz. The movie itself was a tremendous success, not just because “of the color, but in the way color was used.” Using color as a storyteller, as when Dorothy leaves Kansas and goes to Oz in the film, was a huge technological achievement at that time and can now be viewed as art (4). So far, Martin has pulled together the essential bridge between art and technology by emphasizing how they work together. It is interesting to think how differently The Wizard of Oz would have turned out if it were not for technological advancement; the movie might be regarded as something completely different today if the color element had been left out. It might not even have been a huge success. To explain the background of cinema, Martin notes that for cinema to grow it had to rely on past mediums. Movies were made out of theatrical plays and books. Later, he explains that
cinema had to grow apart from plays and books in order to develop into its own medium. With time, technology managed to create visually stunning scenes that the theatre could not achieve (4). Now, this is something that has been said about video games as well: that they rely on older mediums in order to evolve. So far, the rise of movies closely parallels the creation of video games, as Hideo Kojima pointed out earlier. Martin makes an interesting observation, saying that “movies today have more to offer aesthetically than games, and until games establish themselves apart from movies they will not be considered art” (4). Now, most people think of movies today as an independent medium. The break that Martin mentions implies that video games have not gone that far yet, which is believable since movies have been around for well over a century. Lastly, Martin points out that the only significant difference between cinema and video games is the interactive element that gives players control, which then leads to the experience of playing a video game (5). Beyond these similarities in how the mediums developed, there are other, more specific similarities. Aaron Smuts wrote a research paper in which he at one point drew an interesting similarity between game developers and movie directors. Smuts points out that, just like movie production, game design is a big collaborative project. There is not just one person working on a full movie production, but a whole team of talented people who “pursue aesthetic goals,” and the same applies to games (7). Smuts makes an interesting point about how similar the two are. To back up Smuts' claim, author Daniel Irish wrote a book about producing video games. He says that across all mediums, a producer is the one who is in control of a whole production. This also applies to video games; the person responsible for making a game is called a “video game producer.” A video game producer is responsible for many tasks, from hiring a crew to overseeing the artistic aspects of a game and practicing time management (2). Both Smuts and Irish show that games and movies go through the same process at the development stage. Both have people with the same kinds of jobs and responsibilities. Not only this, but Irish explains that there really are no other differences between a movie producer and a game producer; the only main difference is the product they are working on. In a book on broadcasting, Jeremy Tunstall says that a television producer is the one responsible for time management and the overall production of a TV show (1). By stating this, Tunstall confirms what Smuts and Irish are claiming. However, Tunstall also adds that the job of a television producer can be connected to that of film makers and producers of documentaries (1). Not only can similarities be found within team structures, but the workers themselves have some similarities worth mentioning. Brenda Brathwaite, a game designer, wrote a book explaining that a game designer goes through “a process of creating the content and rules of a game” (1). Similarly, Julian Newby says that a camera operator is the one in charge of the visual content, moving the camera and cranes as needed to get the right shots (2). To sum it up, game designers might as well have the same job as camera operators; Brathwaite and Newby describe the jobs as very similar to each other. Photography went through a similarly hard time being accepted as art.
It can be claimed that video games are going through the same struggle today, caused by the public's
cultural framing and a frightened media. For instance, Martin explains that when the camera was first invented in the mid-19th century, a lot of artists got scared. Painters used photos to recreate paintings instead of actually going outside. According to Martin, artists claimed that it took no real talent to take a photo; therefore, it could not be art (1). It should be easy for people living today to understand the fear that artists must have felt at that time. Something new and unexplored emerged from nowhere, threatening their art, creativity, and life's work. Further, Martin refers to an artist who had the courage to speak up for the art of photography—Oscar Rejlander. Rejlander took a photo which took a tremendous amount of time to process, via technology that was new and groundbreaking at that time (2). Rejlander is a perfect example of how curiosity can inspire one to evolve and explore. Going back to cultural framing, video games have had a rough time in the short period they have been around. Games have frequently been blamed for violence and strange behavior. Kurt Squire, who holds a PhD in Instructional Systems Technology from Indiana University, mentions that video games have not only been blamed for increased violence; they have also been blamed by people who lack knowledge about games and base their assumptions solely on the overall increase of violence in America (2). To counter this claim, Squire refers to research done by E. Mitchell in 1985 at the game company Atari, where the company gave out free consoles to 20 families. The purpose of the study was to figure out how games impact families. The researchers expected that the kids would get bad grades in school, family violence would increase, and family interactions would worsen. Instead, the research showed the exact opposite (2). Since 1985, a lot of things have changed in the video game industry. You find far more violent games on the market today than ever before; however, that is not to say Mitchell's study is now wrong. Perhaps games are partly to blame for the increase of violence in America, but it should not be forgotten that movies today are also filled with violence. It is almost a challenge to find a movie without violent behavior. Some children may have seen too many of those movies; however, it cannot be forgotten that there are other factors in why people become violent. Upbringing, influence, and mental health should be considered just as much as video games. The cause might be a combination of all of these. Lastly, Squire mentions how playing video games affected him as a child. He stated that even though he spent hours playing war simulation games, he would never own a gun himself today (2). I think what Squire really tries to say is that a video game can affect people differently, but the blame should not fall on the medium without solid evidence. In the end, is it the industry's fault or the user's fault how these products affect other people? Cultural framing of mediums is nothing new. Even traditional arts like music have been blamed for why people act the way they do. In 2012, Mike Hohnen of MusicFeeds reported that the German heavy metal band Rammstein was blamed for a school shooting in the US that year, allegedly because the shooter liked their music (1). Marilyn Manson, a heavy metal artist who was similarly blamed by the media for the 1999 Columbine shooting, wrote an article that year regarding the incident.
Manson remembers hearing that the shooters supposedly worshipped him because they wore black clothes, which led to him being blamed for the terrible shooting. Later it was shown that the kids responsible never even wore black, but since the media wanted to blame someone, they picked Marilyn Manson (1). Being accused of a cruel, violent act without proper research supports the idea that people fear the unknown.
Significant efforts have been made by the gaming industry as well as by museums to try to change people's perception of games as frightening and to help people see the aesthetics behind them. A fine example is the effort by MoMA, or the Museum of Modern Art. According to the museum, its focus is on “the past and the present, the established and the experimental” (1). In 2012, Kojima was approached by a Smithsonian museum that wanted to feature his games in an exhibition. Michael Gallagher claims that the exhibit also shows how the technology industry is pushing forward with new possibilities in graphics and design, as well as inspiring young people to get an education in technology (2). Lastly, Gallagher believes that video games are already in our culture, and he is counting on inspiring people instead of having them fear the unknown potential of future technology. There is no doubt that photography and cinema are major industries in today's society. For instance, in education, children already use movies, photography, and even paintings as part of the learning experience. Games should also be a part of learning, because they can provide everything in one medium. Squire raises important questions, such as what children can learn from playing games that visit ancient history, or from playing simulation games such as “SimCity,” from an educational perspective. In order to obtain answers to these questions, Squire encourages more studies to be conducted in these areas (3). I remember that when I was younger, the school I attended used to have mathematics games that we all could try out. For me, the games made math more fun to do, and I felt that I learned something instead of reading the same old books. Maxis, a company known for simulation games like “The Sims,” decided to take their game “SimCity” to a new level. Together with a company called GlassLab, they created a new version of the game, but turned it into a game focused on performing challenging tasks. According to GlassLab, they collaborate with “experts in learning and assessment to leverage digital games as powerful, data-rich learning and formative assessment environments” (1). When a huge corporation like Maxis decides to make games for educational purposes, it has tremendous power to change schools all over the world. It is remarkable that something that was just entertainment can also serve a useful purpose. Not only can games be used within education, but games present other possibilities, such as contributing to the economy and providing training in various fields. In 2006, The New York Times published an article about the culture minister of France, Renaud Donnedieu de Vabres, who wanted to boost video games developed in France by giving the companies a 20% tax break like the one given to the movie industry. The main reason for this, de Vabres claims, is that video games have for too long been neglected despite their “creativity and cultural value” (1). First of all, I had never before heard of a government wanting to ensure that the gaming industry is protected. Yves Guillemot, chief executive of Ubisoft, a major game company, responded to the culture minister of France by saying that the tax break is necessary to “keep France salaries competitive.” To wrap up his statement, Guillemot says that video games should be supported by the state, not only to keep outsourcing from happening, but also because movies and music are already being supported by the government (2). As this shows, games also create jobs and decent salaries in France.
Is this not something we want for everyone in the world?
76
Not only can the economy be improved by an increased focus on the video game industry; other important areas can benefit as well. Ann O'Connor wrote a paper about the military using video games to recruit soldiers into the U.S. Army (3). This represents an innovative way of getting recruits. Many young people today like video games, and in this way the military can reach out to them by appealing to their interests. The Aircraft Owners and Pilots Association claims that simulators are an excellent tool for learning about airplanes and how to act in various scenarios while following the right procedures (1). It is comforting to know that pilots get extensive training in simulators on what to do in different situations. Of course a simulator cannot teach you exactly how you may react in real-life disasters like a crash; however, I think a lot of knowledge can be taught using simulation games. In my old job in retail, we used an interactive online game to learn various working methods and procedures. Most of my colleagues and I agreed that it was a very effective way to learn. I also felt it boosted my confidence in my work.

In all, many factors come into play when considering video games as art. However, the most interesting observation I have made concerns interactivity and technology; these are the only two elements that distinguish games from the traditional arts. If people could accept games on the same level as music and movies, then games would without a doubt come to be considered an art form. We all live together in a world that is constantly moving forward, and people should stop being afraid of new inventions like games and embrace them. Books, theatre, and paintings all led to movies, so who knows what video games can lead to in the future?

WORKS CITED
"Aircraft Owners and Pilots Association – Mission Statement." AOPA Online. Web. 13 Apr. 2014.
Brathwaite, Brenda. Challenges for Game Designers. Boston: Cengage Learning, 2008. Print.
Crampton, Thomas. "For France, Video Games Are as Artful as Cinema." The New York Times. 6 Nov. 2006. Web. 20 Feb. 2014.
Ebert, Roger. "Video Games Can Never Be Art." Rogerebert.com. Ebert Digital, 16 Apr. 2010. Web. 19 Feb. 2014.
Gallagher, Michael. "More than the Art of Video Games." The Huffington Post. 15 Mar. 2012. Web. 9 Feb. 2014.
"GlassLab – About Us Statement." GlassLab Online. Web. 13 Apr. 2014.
Hjorth, Larissa. Games and Gaming: An Introduction to New Media. Oxford: Berg, 2011. Web. 13 Apr. 2014.
Hohnen, Mike. "Rammstein Blamed for Latest US School Shooting." Music Feeds. 30 Aug. 2012. Web.
Irish, Daniel. The Game Producer's Handbook. Boston: Cengage Learning, 2005. Print.
77
Kojima, Hideo. "Metal Gear Creator on the Art of Video Games." Interview by Reese Higgins. Washington City Paper. 19 Mar. 2012. Web.
Lazzaro, Nicole. "Why We Play Games: Four Keys to More Emotion Without Story." Player Experience Research and Design for Mass Market Interactive Entertainment. Oakland: XEODesign Inc., 8 Mar. 2004. Web.
Leibovitz, Liel. "MoMA Has Mistaken Video Games for Art." New Republic. 13 Mar. 2013. Web. 29 Mar. 2014.
Manson, Marilyn. "Columbine: Whose Fault Is It?" Rolling Stone. 24 June 1999. Web. 13 Apr. 2014.
Martin, Brett. "Videogames and Art." Eds. Grethe Mitchell and Andy Clarke. Bristol: Intellect Ltd., 2007. Print.
"Museum of Modern Art – Mission Statement." MoMA Website. Web. 13 Apr. 2014.
Newby, Julian. Inside Broadcasting. Florence: Routledge, 1997. Print.
O'Connor, Ann. "The Recruiting Fallacies and Fiction: Symbolic Behavior in U.S. Army Recruitment Video Games and the Clash with Actual Soldier Experiences." The International Journal of Organizational Diversity. 2012. Web. 13 Apr. 2014.
Smuts, Aaron. "Are Video Games Art?" Contemporary Aesthetics 3. 2 Nov. 2005. Web.
78
Dog Aggression: Nature versus Nurture (By Kylee Chun)

One winter night in San Antonio, Texas, 75-year-old widow Betty Clark was on her way to give her Canyon Lake neighbors some Christmas gifts. What she did not know was that her neighbor's two dogs would attack her. After being found, Clark remained unconscious at the University Hospital until she died about two weeks later from her injuries. Doctors reported that another victim of a fatal dog attack had died that weekend as well (Joseph). These fatal injuries could have been prevented.

Dog attacks are increasingly becoming a public health concern. In an attempt to control the number of dog attacks resulting in human fatalities, legislators have turned to breed-specific laws, most of which target the American pit bull. The laws were made to protect citizens from potentially dangerous breeds. But how do we know which breeds are dangerous? Based on the number of dog attacks per breed, many view the pit bull as more of a threat than other breeds. Although a large number of the fatalities in the United States have been caused by the notorious pit bull, it is important to figure out whether the breed itself carries this aggression by nature, or whether the dog's aggression comes from the nurturing of its owner. The laws banning specific breeds were put into action on the assumption that the cause of aggression is the nature of the dog. Researchers have studied aggression in different breeds in search of the truth. After analyzing various academic articles, magazines, and animal associations' websites, I discovered that although aggression is to a certain extent caused by nature, aggressive behavior in dogs is also caused by the nurturing of their owners.

Katherine Houpt, from the American College of Veterinary Behaviorists, reviews the aggressive nature of dogs in her article, "Genetics of Canine Behavior." After analyzing multiple studies, Houpt believes aggression is based on genetics. She argues, drawing on the history of dogs and evidence from several case studies, that some breeds show more aggression than others, such as the Rottweiler compared to the Greyhound. She finds that genetic defects in the brain can also cause a dog to become more aggressive. Ultimately, Houpt believes that the aggressive gene in dogs will be discovered and that dogs with the genetic code for aggression should not be allowed to reproduce. I, however, disagree with Houpt. Although I believe aggressive tendencies in dogs are genetically passed on, genetics do not determine the degree of aggression and are not the sole determiners of aggressive behavior. Not allowing a dog to reproduce because of its genetics would be like not allowing certain humans to have children because of an unwanted trait. In the long run, the breed with that trait would go extinct, and that is simply cruel.

In order to prove that aggression is affected by both nature and nurture, we must start by considering why aggression even exists. Aggression is a complex term to define. There are many reasons why any dog might be aggressive. When analyzing a dog's aggression, an individual can look for signs as to why the dog is behaving that way. There are typically three reasons for such behavior: dogs will show aggression when protecting themselves, protecting their young, or guarding their territory ("Aggression"). When protecting itself, a dog may become aggressive if it feels harmed or threatened. It will also protect itself in social situations to keep its status when faced with a
79
group of dogs. A dog will also guard its own pups, mate, or owner. In other cases a dog will protect its own territory and the items it cherishes. Just like humans, dogs may "lash out" aggressively as a result of their frustration. In the worst-case scenario, a dog can also redirect aggression when someone interferes with something the dog is already being aggressive towards, such as when a person attempts to break up a dog fight. It is natural for a dog to become aggressive in all of these cases. Dogs cannot help but follow their animal instincts when they are trying to survive. Temperament, social attraction, following, restraint, being touched, and being picked up are other factors that affect a dog's aggression level ("Aggression"). If you analyze humans in the same way, you can see that we, too, become aggressive in these cases; in fact, any species will. From here, one can conclude that not only do specific dogs have the natural instinct to be aggressive, but so do all species and all breeds.

Next we must examine the different breeds. A dog can be described as one of the following types: working, hunting, herding, hound and terrier, or toy. Each classification has certain features that place a breed in its group. Working, hunting, and hound breeds have a larger muscle mass, longer legs, longer snouts, and bigger teeth than herding or toy breeds. For example, a husky can be described as large, with a fit body and a long nose, compared to a fluffy Shih Tzu, which is short, miniature, and flat-faced. Dogs genetically pass on these traits, each suited to specific tasks. The Shih Tzu, with small teeth, short legs, and a flat face, would have a difficult time chasing after prey and locking its jaws on it, yet the husky would be able to do this with ease. Genetics pass on these features within each group, making hunting dogs potentially more aggressive than the other groups. However, this does not necessarily mean that dogs with these features will be aggressive or that dogs without these features cannot be aggressive. Indeed, a Chihuahua can be just as aggressive as a Rottweiler is capable of being.

Now that we have a better understanding of aggression in dogs, we may analyze the information Houpt presents. In one study, she found that when different breeds were handled identically, the results varied. This backed up the idea that the breed of a dog has an effect on the way it behaves. Other studies showed that gender plays a role as well; female dogs were easier to train than the male dogs, who showed more aggression. Surprisingly, the dogs that responded with aggressive behavior in some cases were not aggressive all the time. Aggression towards children was not linked with aggression in general. At the end of the experiment, a dog's aggression was measured on a scale instead of the dog being bluntly classified as "aggressive" or "not aggressive." Although dogs that showed aggression had traits that made them potentially more aggressive, this did not mean that all dogs with a specific trait would be aggressive. For example, the German Shepherd and the Retriever are similar in that both have large bodies, long snouts, and sharp teeth, yet only the German Shepherd is on the list of breeds that have killed people. Also, different breeds showed aggression for different reasons; the Cocker Spaniel showed more possessive aggression, while the Lhasa Apso showed more dominance aggression.
80
Genetic defects that affect neurotransmitters in the brain can also cause aggressive behavior. Drugs that block dopamine, like acepromazine, which is used to sedate dogs, can cause a mutation in an enzyme that leads to aggression. Serotonin and norepinephrine levels have also been shown to affect a dog's aggression. Overall, both the genetics of a dog and its breed can affect whether or not a dog has the potential to be or become aggressive (Houpt).

As stated before, politicians have banned specific breeds in hopes of protecting people from potentially dangerous breeds. They assume that nature is to blame for aggressive behavior in individual dogs. So we begin to wonder: Are these laws even justified? The article "Breed-specific legislation and the pit bull terrier: Are the laws justified?" by Stephen Collier describes the bans in place in Australia, where approximately 40% of households own dogs (Collier). The bans on specific breeds were enacted for two reasons: (1) that the breed shows high levels of aggression based on its bite frequency, and (2) that its physical characteristics and historical functions make the breed potentially dangerous. Many of these breed-specific laws require the dog to be neutered in hopes of eventually "eliminat[ing]" the breed, which is unfair. Bans on the American pit bull were made to protect people from the "dangerous" breed, yet a study of the 547 dog attacks reported in New South Wales from 2001 through 2003 revealed that only 4% of the attacks were by an actual American pit bull terrier (Collier). Looking at the number of dog attacks by specific breeds, the pit bull falls well below the other breeds. It is true that any dog is capable of being dangerous: "The [American pit bull terrier] has the potential to be dangerous, but there is no specific research to demonstrate that breeds with a fighting past are more aggressive toward people than other dogs" (French). All dogs can be made aggressive by human agency (Collier). Many have failed to notice that the owner's actions affect the dog's personality.

Places beyond Australia are trying to fix the "problem" by continuing to ban more breeds. In Kansas, the American pit bull terrier was banned; the law states that if the dog is found in one's ownership it will be confiscated, like a cell phone from a middle school student (Campbell). The problem with these laws is that those in charge of putting the regulations into place are not even educated or qualified to be doing so. Dogs of many other breeds have been mistaken for American pit bull terriers and taken away from their families, only to be returned months later when the genetic make-up of the dog shows that the individual is of another breed (Campbell). Therefore, how do we know if the number of fatalities attributed to the American pit bull is even accurate? If the law enforcement officials in charge of taking reports are not qualified to know what breed they are looking at, who is to say that any breed is aggressive when the dog itself cannot be identified? Nowadays, most dogs are not purebreds but mixed breeds. So in dog attacks reported today, what breed is to blame? The breed responsible for the majority of the dog's genetics, or the breed the dog's characteristics most resemble? In fact, studies have shown that breed-specific laws do not decrease the number of dog attacks.
In Great Britain and Spain, studies show that their “dangerous animal acts” did not affect the number of dog attacks at all (French). Notably, the breeds responsible for most of the fatal attacks were not covered by the laws.
81
Many dog owners argue that the laws are not justified. It is not right to ban a breed that has been abused and trained to be aggressive when the animal itself is a loving, loyal breed. Andrea Rothwell, a Veterinary Services supervisor, claimed, "They're wonderful, they can be unbelievably affectionate and they're intelligent. The sad part is they're also a lot of the time associated with criminals, people try to use them as protection and chain them out" (French). While legislators are focusing on the breed of the dogs that are harming people, maybe they should look at other factors in the situation. These laws were created in order to protect people from dogs that are dangerous, yet over time, the view of which breeds are considered dangerous has continually changed. One decade it was German Shepherds who were blacklisted; the next it was Doberman Pinschers; and currently it is American pit bulls (Campbell). If nature were to blame, the breeds considered to be aggressive would not change over the years.

Instead of focusing only on the breed, we can look at other factors that can affect a dog's aggression, such as its environment. Looking at the dogs involved in fatal attacks in 2006, 97% were not neutered or spayed; 78% had been kept as guard dogs, breeding dogs, or yard dogs instead of as pets; and 84% had been abused, abandoned, neglected, or chained by their owners (Campbell). People assume that nature is the cause of aggression in dogs, and yes, there is a correlation, but we cannot infer that it is causal without reviewing these important factors. Given this information, it is clear that aggression is linked with the nurturing of the dogs, too. Keep in mind that not being neutered or spayed is a natural cause of aggression in any species. Naturally, a dog has the potential to be aggressive, but it is nurture by the owner that defines the fine line between a dog that is actually classified as aggressive and a dog that is not. Rothwell laments, "It's unfortunate. It's really not the breed itself; it's the owner" (French). Everything that the owner does in raising the dog affects how the dog will behave when it is older, whether during training or not. For instance, if your puppy is doing something naughty and you spank it, it may feel threatened and may begin to show some signs of aggression, such as growling or even biting. If the owner backs down once he or she is bitten, the owner is unwittingly teaching the dog that aggression is a way to get what it wants. When a dog is beaten as punishment, this also tells the dog to ready itself for combat (McGreevy & Calnon).

For years, the dog fighting world has acknowledged that genetics play a role in making potentially aggressive dogs. Dogs that make great attackers are the ideal dog fighting breeds. Owners target breeds like the American pit bull terrier, a dog with a large, toned, muscular body, to become fighters. Dog trainers put their pit bulls through harmful beatings, which make the dogs progressively more aggressive. They start a dog's first fights when it is as young as 15 months (Silverman). From the start, they are already trying to see which pups show more aggression than others. The fighters would rather choose the pup that is already picking fights than the shy one hiding in the corner. By chaining the dog up and attaching weights to the chain, trainers can build the dog's upper body strength.
Some even go to the extent of using steroids and making the dog run on a treadmill. To increase the jaw strength, fighters often make their dogs bite on objects like hanging tires. Training often involves using a small animal as bait and allowing the dog to kill
82
them when the training is over. Most dogs are kept in tight cages facing other dogs to make the animals more irritable and angry towards others. Some trainers will do the opposite and keep the dog away from both dogs and people so that it will show aggression towards strangers. Then, as its training progresses, the pit bull becomes an aggressive fighting machine. Pit bulls that are trained to be aggressive are more likely to be the culprits when it comes to fatal attacks on humans by that breed. These dogs have been trained to be aggressive by their owners, and this clearly shows how the owner can impact the life of the individual dog. In contrast to many fighting dogs, many pit bulls are loving, loyal, gentle animals. In fact, some dog owners abuse the breed's loyal nature in order to make it aggressive towards strangers. Of course animal instinct will make a dog aggressive in a particular situation, but it is how the dog is trained to handle the situation that helps us determine the dog's aggression.

Although dogs have been forced into the aggressive sport of dog fighting, not all hope is lost, because these individuals can still be saved. In 2007 Michael Vick, former quarterback of the Atlanta Falcons, was exposed for organizing a criminal dog fighting operation that had run for six years. About 50 dogs were confiscated alive from Vick's "Bad Newz Kennels" (Boone). These dogs, which had been abused, neglected, underfed, and forcibly bred, were among the lucky ones saved before being executed by electrocution, drowning, or other means. The Bark Post explains that "people said even though the situation was horrible, these dogs should be put down, because of their obvious violent natures and killer instincts" (Boone). However, groups like the Best Friends Animal Sanctuary adopted these traumatized dogs, giving them a second chance at a real home by rehabilitating them. Dogs like Squeaker, Oscar, Handsome Dan, and Cherry Garcia have all been through the nightmares of Vick's dog fighting operation, but they have each grown to become loving members of their new families. Cherry's owner shares, "People can learn about abused animals and that fighting dogs are not inherently evil dogs. Cherry was bred and trained to fight, and yet that is the last thing he wants to do" (Boone). Surprisingly, almost all of these adopted dogs now live alongside other dogs, and even small infants (Boone). It is touching to see the difference a dog can make when people give it the chance to be saved.

Humans are so worried about dogs attacking us that we sometimes forget that the dogs themselves can be victims, too. As we learned before, abuse is a method that dog fighters use to make their dogs more aggressive. Many dog trainers also spank their dogs or use some form of punishment when the dogs disobey. It seems that human aggression towards dogs is more common than the other way around. Moreover, human aggression itself can cause a dog to be aggressive in response, creating a cycle of aggression that began with the owner. An interesting study showed that aggressive dogs tend to be owned by "first-time" dog owners and less aggressive dogs by older people (McGreevy & Calnon). One can see that the owner's decisions will be reflected in the dog's personality. That is why one pit bull can be a loving pet, and another a dangerous threat.
Just as dog fighters try to breed genetically more aggressive dogs, many are also trying to breed dogs with a good temperament.
83
Dogs, just like human beings, can be naturally aggressive, but it is how the dog is nurtured that defines how aggressive it will be. Scientists are working to discover the exact genetic code of aggression in hopes of extinguishing it, but is genetic aggression really the problem? It is important to understand the history of dogs and their nature when picking a suitable pet. Everything you do in raising a pet affects its personality. There is only so much you can do to select the genetically ideal dog when picking a pet, but you cannot blame a dog for its actions if you have raised it to be that way.

WORKS CITED
"Aggression in Dogs." ASPCA. N.p., n.d. Web. 20 Feb. 2014.
Boone, John. "These Dogs, Who Were Rescued From Michael Vick's Fighting Ring, Will Make You Believe in Happy Endings." E! Online. N.p., 16 Apr. 2014. Web. 17 Apr. 2014.
Campbell, Dana M. "Pit Bull Bans: The State of Breed-Specific Legislation." GPSolo (2009). Google Scholar. Web. 19 Feb. 2014.
Collier, Stephen. "Breed-specific Legislation and the Pit Bull Terrier: Are the Laws Justified?" Journal of Veterinary Behavior: Clinical Applications and Research 1.1 (2006): 17-22. Google Scholar. Web. 19 Feb. 2014.
French, Clifton. "Aggressive Pit Bulls: Nature or Nurture?" NBC-2.com. N.p., 1 May 2013. Web. 20 Feb. 2014.
Houpt, Katherine. "Genetics of Canine Behavior." Acta Vet 76 (2007): 431-44. Google Scholar. Web. 19 Feb. 2014.
Joseph, Drew. "Doctors Warn of Public Health Problem of Dog Attacks." San Antonio Express-News. N.p., 13 Apr. 2014. Web. 28 Apr. 2014.
McGreevy, P. D., and D. Calnon. "Getting Canine Aggression in Perspective." The Veterinary Journal Oct. 2012: 1+. Academic Search Premier. Web. 18 Feb. 2014.
84
Eyes of Realism: Personal Perspective (By Samantha Patanapaiboon)

When photography comes to mind, most people think of art and of images of anything and everything; photography is therefore highly associated with the idea of creativity. However, naïve ideas of realism have considered photography to lack artistic skill because it is a simple mechanical reproduction of an object. When Kodak first came out, their sales pitch was, "You press the button, we do the rest" (Shusterman 76). Around the same time period, the Yashica Electro 35 GT came out, and its ad read: "The space age camera your family will love. Take beautiful pictures day or night. Automatically, without any nonsense. Just aim, focus, and shoot. The GT's computer brain and electronic shutter will do the rest" (Sontag 14). This early era of photography gave people the impression that cameras would take the most flawless, honest-to-nature photos ever, and that this would take no more effort than looking into a viewfinder and pushing down a button. This oversimplification implies that images are formed automatically, when in reality that is not entirely true.

Many individuals think that photography is an act without the "creative intervention of man" (Benovsky 379). Similarly, Walter Benjamin once said that the artistic function of photography comes down to "only upon the eye looking into a lens" (Shusterman 67). When manufacturers sold cameras, they assured their customers that taking pictures took no skill or expertise at all, saying that it was as easy as turning the key in the ignition of a car or pulling the trigger of a gun (Sontag 14). In earlier eras, cameras were only able to take photos in their most natural state because they did not have the features our cameras today have. Consequently, as time went on and technology advanced, the process of taking a photo advanced also. Present-day cameras include many adjustable features that require more human ability than pushing down a thumb onto a button.

Cameras today have adjustable features like shutter speed, aperture, and exposure. These features encourage human creativity by presenting the photographer with a range of different workable settings and setting combinations. The reality of nature is still present in an image, regardless of the features used, because the ability to interpret a subject through varying photographic styles extends the definition of photo realism. In photography, total objective realism is not desired or possible. Instead, photographers are "supposed to do more than just see the world as it is, including its already acclaimed marvel; they are to create interest, by visual decisions" (Sontag 89). Photographers also seek to capture the subjective relationship that is experienced with the object at the time the photograph is taken. Realism, a term that occasionally pertains to the actuality and factual nature of a subject, is a way to reveal the truth about its focus. However, when realism is defined in terms of photography, the idea of human creativity comes into play because a photo may not look identical to the actual object when it is physically viewed. Thus, the ability to alter the settings of camera features causes controversy over what is considered to be a "real" photo. Also, many arguments arise pertaining to the mechanics, intentions, and interpretation of photography.
85
One feature that adds to human creativity is shutter speed, a feature that controls the exposure time of a camera, making moving objects look incredibly still or blurry and in motion. Shutter speed is measured in fractions of a second, such as 1/8, 1/60, 1/250, and so forth. As the denominator of the fraction grows, the shutter speed becomes faster and the exposure time decreases. A slow shutter speed like 1/8 blurs movement, while a faster shutter speed such as 1/250 freezes it (Langford & Andrews 29). Faster shutter speeds are usually used in brighter settings because they allow smaller amounts of light to enter the lens, whereas a slower shutter speed lets in a larger amount of light (Worldbook 412). This feature therefore emphasizes the movement made by the subject and "controls the time the image is allowed to act on the film" (Langford & Andrews 31).

The aperture setting allows an image to vary its depth of field, enabling parts of an image to turn out blurred while the subject is in focus. Aperture is a feature focused on zone sharpness and is measured in "f-stops." The smaller the f-stop number, the larger the aperture, and vice versa. For example, an aperture setting of f/2.0 presents a shallower zone of sharpness that focuses mainly on the subject, whereas an aperture of f/16 presents a greater sharpness zone that increases the depth of field, or foreground-to-background sharpness (Langford & Andrews 31). Aperture settings are usually adjusted according to the available light, because a larger aperture is used in dim-light scenarios while small apertures are used in bright settings (Langford & Andrews 29). These two features work hand in hand in a system called reciprocity. Reciprocity is when the aperture size is decreased and the shutter speed is adjusted to a slower speed, maintaining the correct amount of exposure (Wright 35). The intention of these features is to "ensure that your audience is concentrating on the part of the picture that you, as the photographer, deem as important" (Langford & Andrews 66).

Aside from shutter speed and aperture, there are other steps that can be taken, such as adjusting the exposure based upon the atmosphere, deciding the frame and orientation of the photo, and controlling the focal length of the lens to adjust the zoom. According to Benovsky, the necessary decisions include adjusting aperture, shutter speed, focal length, ISO sensitivity, exposure, white balance, brightness, saturation, and contrast (382). All this makes the photographer the actual center of the work, because it is the photographer who makes these necessary decisions. Therefore, aside from pressing down a button, taking a picture now requires more bodily effort for different reasons (Shusterman 69). Effort is needed when taking photos from different angles and approaches. For example, when a photographer decides not to use a tripod to mount the camera, he or she must decide whether to stand or squat incredibly still, or to find a surface to hold the camera. As Jerry Thompson states in Truth and Photography when he speaks of photographer Walker Evans, "he projects himself by seeing and seizing a view. He seizes by deliberate camera placement and by delicate arrangement accomplished, not by dragging the thing around but by placing himself" (36). Here is an image by Bruce Dale of a plane on the runway:
86
Source: “How Photography Connects to Us”
Bruce Dale carefully placed his camera on the tail of the plane to capture this image, showing his viewer what taking off looks like from a vantage point different from the familiar perspective of someone sitting inside the plane. His placement becomes evidence that camera placement is an important factor in photography, especially when another viewpoint is to be explored and shared.

On the other hand, Jiri Benovsky states in "Three Kinds of Realism about Photography" that "photographs are often blurred, while the world is not" (76). Thus he is saying that the image does not reveal the actuality of the subject in its natural form. On the same note, Peter Henry Emerson, a British photographer, felt that the standard of the art should be to capture the object in its natural or purest state, because the portrayal of an object in its purest state is as close to nature as it is going to get. Emerson further suggested that when art is "true to nature, art has been good" and that when "the artist has neglected nature and follows his imagination, there has resulted bad art" (Sandler 17). As a result, Sandler and Benovsky both strongly associate realism with the idea of relating images to the natural world. In turn, this leads to an assumption that images denying the pure form of a subject are an invitation to fantasy and speculation for viewers (Sontag 23). In my interpretation, Susan Sontag is stating that taking photos with adjusted features and edits creates an "imaginary or false image," because Sontag believes that if an image does not explain any part of reality, then the naturalist form of objective realism is not present (23).

However, when it comes to the realism of an image, much about the image can never completely be labeled as "real," because determining what is considered "real" is based upon individual perception. Jiri Benovsky mentions that photographs tell stories, and what the photographer wants to convey will determine where the focus of the subject will be centered. This supports the argument that realism does not always portray the subject as is, but becomes a matter of what the photographer wants to convey through the photograph. Also, by capturing a subject the way that a photographer does, the image turns into a reflection of how the photographer responded to and experienced the subject (Thompson 22). Thus, through the eyes of the beholder, different connections can be made with the same photograph, a goal that reveals what is considered to be the realism of a photo. Images then turn into mementos that not only remind the photographer of a past experience, but also encourage the audience to do the same. Therefore, an opinion that solely recommends photographing objects in their "pure" state disposes of the emotions and personal expression that are shared between a photographer and his or her subject: "Presumably a sympathetic viewer, looking at the
87
print, will 'feel' something like the photographer 'felt' when looking at the scene" (Thompson 23). In his TED Talk "How Photography Connects to Us," David Griffin mentions an example involving an image, taken by James Nachtwey, of a soldier who lost a limb and is undergoing rehabilitation:
The prime focus is the soldier. When you see this image, it turns into a visual narrative and enables you to connect with it, whether through the thought of a past experience, a reminder of a soldier in your life, or a simple sense of gratitude for all who serve in the military. Camera features allow a photographer to create an image that not only draws the audience to a focal point but also lets the photographer be creative in a way that conveys emotion and a clear idea of which part of the scene should be the most visible and sharpest in the entire image. By having a focal point and blurring out the rest of the foreground of an image, photographers can create a connection with their viewers. The idea that photos create a "false" image should never be accepted, because the point of photos is to make a connection with the audience. Critiquing and labeling images is a personal response to the work, but versatility in image production enhances the emotion felt while viewing the photograph. With that said, photos are still memories of an event one wishes to remember and share; thus, if photos lack creativity and cannot go beyond the "natural state," connection with an audience can never go beyond casual emotions.

On social networks like Instagram, a photo and video sharing site, it is evident that there are many technicalities of a camera that go beyond pressing the shutter button. While scrolling through the site, I see a lot of landscape and ocean photography, but within the images I notice blurred backgrounds, lighting stronger on certain parts of the subject, and varying coloration of
88
the environment. These characteristics of the image reflect the decisions that the photographer made before capturing the actual photo. Taking photos is a great way to learn because it allows you to undergo trial and error while holding an idea of what you want your resulting photo to look like. Although mechanics does play a part in this process, the angles and depth put into a photo change the focus, an action similar to what we experience in life. Therefore, my experience with taking a simple picture of a wave becomes more than a mechanical reproduction of an object in nature; it turns into an example of effort that demonstrates different approaches and camera settings. There are more elements involved than just the mechanical aspects. In turn, through examples of modern photography and present-day cameras, the act of taking a photo is far more complicated than it is often thought to be.

With so much more to it than button pushing, setting angles, and finding something to photograph, the resulting image reflects the stylistic choices of the photographer. Hence, the realism of a photo is measured by the interpretation of the artist and the appreciation that the photographer has for the subject. What is "real" is represented through the artistic and personal approaches made by photographers to showcase how they perceive the subject. It "means something like correspondence: a statement or expression contained in a picture corresponds to the artist's experience of an object in the world" (Thompson 23). This instinct for how to capture the subject is what reflects the art of the image, rather than the actual object that is being photographed (Shusterman 75). How one thinks about and responds to an experience while taking a photo is shown through the image, and that ability to express emotions is what photographers consider to be real (Thompson 50). This is because the way they react and the emotion they feel while taking the photo is real, and in order to express this feeling, they take a photo with a sense of self-representation. Still, with many combinations of photo-taking methods, the inner objective of a photographer is to use different camera features to take an image that tells the story the photographer is aiming for (Benovsky 386).

As an amateur photographer, I like to take photos of subjects I find interesting and reflective. When I go out on adventures, I bring my camera with me and I am able to capture moments, turning them into still memories. An example from my personal experience is when a group of friends and I went on a beach adventure. I was taking photos of the ocean, surfers, the landscapes surrounding us, and candid and posed portraits of friends. While doing this, I adjusted different settings on the camera because they allowed me to capture a subject differently from how it appears in nature. Specifically, when I was taking shots of the waves, I adjusted the shutter speed so that I could freeze the waves as they crashed onto the rocks:
89
There is a saying that "beauty is in the eye of the beholder," and for me, as an amateur recreational photographer, taking a photo is more than just a mechanical reproduction of nature's object; it is a way of catching what I think are the wonders of life. Photography is a way to experiment with life, and depending on how I want a photo to come out, it can be used to express how I feel. For example, when I took a picture of a wave crashing onto a rock, at the time it expressed my feelings of struggle and hardship; the simple nature image of the ocean crashing onto a rock became an indirect personal interpretation of my thoughts of life crashing down on me. In consequence, the picture turned into a representational image; I am the rock and the wave is life collapsing onto me.

All in all, camera features do not eliminate the total realism of an object but contribute to the meaning of the object by including a sense of individualism and creativity. Photo realism is all about taking an object provided by nature and capturing that object in the form that represents how the photographer interprets it through his or her individual photographic style. Extending the styles that can be used to take a photo does not restrict realism to images taken with no feature adjustments. Instead, it expands the term far beyond personal interpretation because, again, total objective realism is not desired or possible in photography. Photographers are artists, and artists are built to create new images that are representational or abstract and that reflect what they consider to be real. To live in a world that lacks creativity and opinions on photo realism is like living in a world without color, because everything will be viewed as either black or white. There will be no dimensions to life, and the many aspects of life will be reduced to a single perspective that forces all to value what is suggested to be right. If that occurs, interpretation and personal understanding will be diminished.

In the end, photography should be credited more for its works and abilities. There are strategies, techniques, and human attributes and decisions that go far beyond the idea of naïve realism. It should be established that there is no set range of photo realism, because the ability to capture an image reflecting the photographer's inner self while telling a moving story open for interpretation will vary among individuals. A fitting quote is what John Aäsp wrote about photography: "I am a camera, with its shutter open—and it is left to you to determine how what you capture will become developed, carefully printed, and fixed [and in this case, interpreted]" (20).
90
WORKS CITED
Aäsp, John. "Shutters Open: The Gesture of Photography." Afterimage 41.2 (2013): 17-20. Academic Search Premier. Web. 13 Mar. 2014.
Benovsky, Jiri. "Three Kinds of Realism about Photography." The Journal of Speculative Philosophy 25.4 (2011): 375-395. Academic Search Premier. Web. 6 Feb. 2014.
Breitbach, Julia. "The Photo-As-Thing." European Journal of English Studies 15.1 (2011): 31-43. Academic Search Premier. Web. 13 Mar. 2014.
Griffin, David. "How Photography Connects to Us." TED2008. TED. Monterey, CA. February 2008. Lecture.
Langford, Michael John, and Philip Andrews. "Camera, Sensors, and Film." Langford's Starting Photography: The Guide to Creating Great Images. 6th ed. Oxford: Focal, 2008. 29-31. Print.
"Photography." Worldbook. 2012 ed. Print.
Sandler, Martin W. "Photography as an Art." Photography: An Illustrated History. Oxford: Oxford University Press, 2002. Print.
Shusterman, Richard. "Photography as Performative Process." Journal of Aesthetics & Art Criticism 70.1 (2012): 67-78. Academic Search Premier. Web. 12 Mar. 2014.
Sontag, Susan. On Photography. New York: Farrar, Straus and Giroux, 1977. Print.
Stroebel, Leslie D., and Richard D. Zakia. The Focal Encyclopedia of Photography. Boston: Focal, 1993. Print.
Thompson, Jerry L. Truth and Photography: Notes on Looking and Photographing. Chicago: Ivan R. Dee, 2003. Print.
Wright, Michael. "Composition." Digital Photography: A Complete Visual Guide. Irvington: Hylas, 2006. Print.
91
CULTURES AND SUBCULTURES
92
The Divergence and Denigration of Chinese Language (By Selah Chung)

China is one of the largest economic world powers in existence, with a population of over 1.35 billion people. Mandarin, the country's official language, is quickly becoming one of the highest-demand languages for foreign affairs. However, what many know as modern "Chinese" is only the tip of the iceberg. Over the past century, the Chinese language has been incessantly questioned, challenged, and reformed. This has resulted in the loss of classical Chinese culture and the denigration of the Chinese language.

Before analyzing the evolution of the Chinese language, one must first understand the relationship between language and those who speak it. Language is a tool; it reflects the culture that it is a part of. Sheng Ding and Robert A. Saunders, professors of Political Science at Bloomsburg University, define culture as "a set of distinctive spiritual, material, intellectual and emotional features of society or a social group and that it encompasses, in addition to art and literature, lifestyles, ways of living together, value systems traditions and beliefs" (5). This said, culture and language are inherently connected. Language preserves culture, reflects society, and provides an identity to those who use it. There is a constant push and pull between the formation and usage of language and the people who speak it; Ding and Saunders identify the "high" culture of the elites, such as intellectuals and political figures, and the "popular," or pop, culture of the masses. When it comes to shaping culture, language is one of the most effective and influential facets available (5). In addition, according to language specialist Dieter Buttjes, "language acquisition does not follow a universal sequence but differs across cultures; the process of becoming a competent member of society is realized through exchanges of language in particular social situations . . . the native learners, in addition to language, acquires also the paralinguistic patterns and the kinesics of his or her culture" (Ding & Saunders 6).

The Chinese language, like any other, has followed the course of its people's history. However, rather than being celebrated as a symbol of national pride, the Chinese language was once scrutinized and belittled by its own people. Traditional Chinese values were first publicly challenged during the era of Mao and the Cultural Revolution, when a nationwide sociocultural attack occurred against the "Four Olds": old ideas, habits, customs, and culture (Ding & Saunders 14). This fundamental doubt would later lead to the destruction of the very core of China's culture.

At one point in time, the Chinese language was not only tolerated, it was highly esteemed. Ted Rule of Quadrant magazine states that, preceding the twentieth century, China was one of the most powerful and conservative nations in existence. Up until this time, the Chinese people highly respected their classical Confucian values (89). In classical Confucian China, literacy was considered a sacred art, available only to those of status. Edward McDonald, Professor at the National University of Singapore, notes that literature and linguistics in traditional China were known as xiaoxue, meaning "small learning." This was preparation for the "big learning," which was the study of China's history and philosophy—their culture and way of life (55).
In turn, maintaining historical knowledge of reading and writing Chinese formed a significant part of Chinese philosophy. In particular, there was a very distinct difference between the spoken
93
and written languages. Much like in English, the grammar and vocabulary that the Chinese use to speak to one another are far different from the way one would write to communicate. According to Rule, throughout history, official Chinese prose was never written in vernacular style. Only "undignified" pieces of literature, such as pornography, would be composed in the non-standard style of prose (89). In classical Confucian China, literacy was treated as a high-class skill that could only be taught to those of status. However, with the infiltration of European missionaries, every man was assumed to be equal under the eye of God, and therefore had the right to learn to read and write (Rule 90). Despite this, the language of China was considered a mark of national pride and dignity. Some may even say that it signified a sort of ethnic superiority, a common national belief among certain countries.

However, as the political and societal conditions of China took a turn for the worse, so did the nation's pride. Sang Bing of Sun Yat-sen University in China explains that, around the turn of the twentieth century, China's economic and political status began to suffer. In turn, intellectuals began to blame the Chinese language for the nation's fall (73). They believed that if the Chinese spoken and written language could be unified, China would regain its former power. This notion caused a domino effect, which resulted in many attempts to alter the Chinese language. Bing explains that intellectuals who opposed the classical Chinese language varied in ideals but were all united by the same notion: that the economic and worldly fall of China was due to the lack of education given to the masses, which was in turn caused by the difficulty of writing classical Chinese characters (78). At the time, Western civilizations were perceived as economically flourishing, and therefore many Chinese parties proposed to reformulate the Chinese language to become more Westernized. Bing notes that the ultimate goal of reforming the Chinese language, specifically switching from Chinese characters to pinyin or Romanization, was to bring about universal education and to once again make China a world power (78).

In his research, Bing discusses Qian Xuantong (1887-1939), one of the more radical leaders in the Chinese character abolition movement. In 1918, Qian wrote: "[The Chinese language] is the most backward of pictographs—difficult to recognize, and inconvenient to write." Qian criticized the grammar and Chinese sayings, deeming them vague and impractical. Moreover, he proposed the abolition of the Chinese language entirely (Bing 73-74). Qian was just one of the many parties who blamed the Chinese language for the failure of their country.

As a result of all of this protest, there have been many attempts to reform the Chinese language. These include translating Chinese characters into pinyin, a phonetic system for romanizing them, and getting rid of the classical written style altogether, forcing communication to rely on the notion of "write how you speak." Furthermore, the traditional Chinese characters were simplified to make literacy more attainable for the uneducated masses. Rule documents the first simplification scheme being released to the public in 1950, and the second in 1955. The simplification of the Chinese characters took a long break during the Cultural Revolution, and a third list was printed in the late 1970s but was later withdrawn and rejected by government officials (Rule 91).
Despite all of this, Chinese language reforms have been unsuccessful in unifying the written and spoken languages, as the hundreds of different Chinese dialects make this impossible. Furthermore, it is now more difficult to communicate with speakers of other dialects, as a standard
94
writing system no longer exists. Bing affirms that because there are so many different dialects within China, pinyin based on the main dialects can only be used in certain regions of China. In turn, Qing officials promoted Mandarin as the national language, using propaganda such as "literature for the national language, a national language for literature" (81). However, a recent announcement made by state media revealed that over 400 million Chinese people, approximately 30% of the population, cannot speak Mandarin, and much of the rest of the population speaks it very poorly (Davidson para. 1). In addition, studies have shown that literacy is directly linked to educational systems and has no correlation with the specific language (Bing 90). Rather, it is more likely that the lack of literacy is due to the sheer size of the nation and the lack of educational programs in the country's many rural areas.

As a result of the language reform, China's government began to implement new linguistic propaganda and slogans in hopes of doing away with China's classical traditions. In turn, China's societal values began to shift. For over 2,000 years China has been influenced by the doctrines of Confucianism, Taoism, and Buddhism. According to Xing Lu and Cheng Guo-Ming of DePaul University and the University of Rhode Island, among these doctrines exist the principles of "Ethical Idealism," "Hierarchy in Family Relationships," and "The Kinship System" (57). With the reform of the Chinese language, the traditional values have transformed in turn. These shifts included moves from ethical idealism to materialism, from hierarchy to equality in family relationships, and from kinship to guanxi (Lu & Guo-Ming 59).

Confucian belief defines the "Five Principles" of ethical idealism as benevolence, righteousness, propriety, wisdom, and trustworthiness. It is believed that wealth will corrupt one's heart, and so man should instead live a humble life filled with spirituality and self-contentment (Lu & Guo-Ming 58). With the economic reform of the 1980s, the values of ethical idealism were challenged for the first time. Government-issued propaganda encouraged the public to embrace the values of wealth and success. Lu and Guo-Ming describe this new set of values as being driven by slogans such as "make money and become rich," "reform and open the door to the West," "material civilization," and "profit and efficiency." New metaphors and words such as da kuan (big money carrier) were created; additionally, greetings such as "wishing you rich and wealth" or "wishing your life filled with money" are now commonly used among the Chinese people (59). These new words and phrases were born out of the desire to influence the mindset of the masses. While these values are the polar opposite of traditional ones, the government seized the opportunity presented by China's growing success to rebuild the national pride of its people.

Along with this, the traditional severity of family order has been altered as well. Lu and Guo-Ming explain that hierarchy in family relationships exists to create a sense of order and harmony, so that the family bond will be strong and not fall apart. Within the relationship of a husband and wife, the wife should be obedient and submit to her husband (58). She is a subordinate and is treated as a lesser being on the hierarchical scale. Furthermore, insulting linguistic expressions such as "wicked," "slave," "ignorant," or "prostitute" exist to portray Chinese wives undesirably (Lu & Guo-Ming 58).
95
The devastating treatment of women in traditional Chinese culture put all leadership of family affairs in the hands of the husband. Ironically, modern-day Chinese wives often hold equal if not greater power in the family than their spouses. Lu and Guo-Ming confirm that women are now expected to be treated as equals, and government-issued propaganda supports women's equality in an attempt to do away with the traditional treatment of women. In fact, women now own approximately 35% of private enterprises in China. Moreover, modern Chinese husbands submit to their wives at home, giving full rein over finances to the woman of the house (Lu & Guo-Ming 60). In addition, divorce is no longer perceived as a shameful act for a Chinese woman. In fact, women are the "front men" of the family, and many men are considered "model husbands" (Lu & Guo-Ming 60). While government-issued propaganda has diminished some of the more sacred aspects of customary Chinese culture, it has also positively affected the dynamic of everyday familial interactions. Furthermore, this more modern notion of equality has allowed new female leaders to rise up and contribute to the nation's success.

Lastly, the sociocultural concept of kinship, which describes the dignified bond and loyalty one has for another, has been replaced by the more commercialized idea of guanxi, similar to what Americans know as social networking. Lu and Guo-Ming describe how the kinship system was used to structure communication among immediate family members and those who share the family surname. It assigned a set role to each person, and each person behaved accordingly. While this may seem like a formality, the kinship system actually allows family members to build strong bonds of trust with one another. Similarly, the idea that "blood is thicker than water" is very relevant to the purpose of the kinship system (59). While the kinship system is implemented to structure how each family member should address the others in order to maintain harmony, guanxi is used as a means to obtain something, or signifies certain obligations between people. With the shift of values and language, the importance of family ties has weakened, and social networking has gained prominent value in modern Chinese culture (Lu & Guo-Ming 61). Again, while this modernization of Chinese values is expected from one of the world's strongest economic powers, there is also a certain tragedy to the loss of the hallowed notion of kinship.

As China takes on more Westernized sociocultural values, the cultural identity of China is beginning to be washed away. In addition to Western interactions, English as a second language is now being implemented in Chinese school systems as a mandatory skill. Tao Rui and Phyllis Ghim-Lian Chew of the Singapore National Institute of Education state that English was first introduced in Chinese school systems in 2001 and became a mandatory subject in 2004 (317). Since then, English has become an absolutely necessary skill for students to master in order to gain acceptance to or graduate from high school and college (Rui & Chew 320). Chinese children are taught from a very young age that to be successful, it is an absolute necessity to master English. Those who are not able to do so are at an educational disadvantage similar to that of Americans who are not able to graduate from high school or college. Finding a high income or working for larger, established companies becomes a nearly impossible feat.
Yan Guo, a doctor of language and literacy education, and Gulbahar Beckett, a professor of sociolinguistics and applied linguistics at the University of Cincinnati, argue that despite the benefits of learning English as a second language, implementing English in Chinese classrooms puts Chinese youth in danger of forgetting their heritage, culture, and very identities. This in turn only promotes the degradation of traditional teachings and cultural awareness (117). Chinese children raised in such English-intensive environments may come to believe that English is a language superior to their own, which shows how the language reform ideals of the 1900s still play a role in the beliefs of modern Chinese society.

While the Chinese language was not severely distorted from its original form, the changing ideals behind the language reform have created a ripple effect. These changes have not only caused China to lose its classical Confucian morals but have also degraded the status of the Chinese language itself in the eyes of its people. Robert L. Moore, psychoanalyst at the Chicago Theological Seminary, notes that children born to the Cultural Revolution generation, also known as Millennials, have embraced the ideal of individualism. Strongly influenced by Western pop culture, these young people display a desire to be unique that is severely different from the collectivist values implemented in the Mao era (357). Much of this cultural shift relates to the effects of the linguistic propaganda implemented during the era of language reform. The lack of traditional Chinese values in modern-day upbringing leaves China's youth to latch onto the values of other cultures and explore various cultural identities. Moore explains, "The acceptance of new values by young people in the face of resistance by their elders is a pattern commonly found in modern societies where popular culture flourishes via mass media. It is also common for the younger generation to emphasize its association with their new values via a pervasively used slang term" (358). For example, Moore discusses the relatively new notion of ku (357). Based on the Western term "cool," the Millennial generation derived this slang term, which is now heavily used by youth culture and is closely related to the individualistic and self-indulgent acts that China's adolescents now embrace. Some of these include dyed hair and an interest in Western fashion, a lack of diligence in academics, and the pursuit of male/female relationships (Moore 364). This lack of discipline is far different from the strict behavior expected of youth in more traditional Asian cultures and strongly resonates with behaviors observed in American teenagers. Moore highlights that in recent interviews with young adults in Beijing, he found that "The ku person is someone we might have called qiguai (merely strange) ten years ago, [when] China was not so open, so it would not accept someone who is so special" (365). While these young adults describe their behavior as "special," they are simply imitating the norms of more Westernized cultures that are not their own. Furthermore, as social behavior moves farther away from the original values of Chinese culture, the next generations will experience less and less of the true culture of their country.

In the last hundred years, the very core of the Chinese language has been critiqued and defamed by intellectuals and government officials alike. This in turn has created the cultural belief that the Chinese language, and furthermore the traditional way of life, is inadequate and therefore disposable. Language is history. It holds the stories, triumphs, failures, and dreams of those who came before us. Language is a mirror to the culture of those who speak it. So let me ask: When you look in that mirror, what will you see?
WORKS CITED

Bing, Sang. "The Divergence and Convergence of China's Written and Spoken Languages: Reassessing the Vernacular Language during the May Fourth Period." Twentieth-Century China 38.1 (2013): 71-93. Academic Search Premier. Web. 5 Feb. 2014.

Davidson, Kavitha A. "China Says 400 Million Can't Speak National Language as Government Promotes Mandarin Education." The Huffington Post. TheHuffingtonPost.com, 5 Sept. 2013. Web. 7 Feb. 2014. http://www.huffingtonpost.com/2013/09/05/china-nationallanguage_n_3872004.html.

Ding, Sheng, and Robert A. Saunders. "Talking Up China: An Analysis of China's Rising Cultural Power and Global Promotion of the Chinese Language." East Asia: An International Quarterly 23.2 (2006): 3-33. Academic Search Premier. Web. 12 Feb. 2014.

Lu, Xing, and Guo-Ming Chen. "Language Change and Value Orientations in Chinese Culture." China Media Research 7.3 (2011): 56-63. Communication & Mass Media Complete. Web. 12 Feb. 2014.

McDonald, Edward. "Humanistic Spirit or Scientism? Conflicting Ideologies in Modern Chinese Language Reform." Histoire Épistémologie Langage 24.2 (2002): 51-74. ProQuest. Web. 5 Feb. 2014.

Moore, Robert L. "Generation Ku: Individualism and China's Millennial Youth." Ethnology 44.4 (2005): 357-376. Academic Search Premier. Web. 12 Feb. 2014.

Premaratne, Dilhara D. "Reforming Chinese Characters in the PRC and Japan: New Directions in the Twenty-First Century." Current Issues in Language Planning 13.4 (2012): 305-319. Education Research Complete. Web. 3 Feb. 2014.

Rui, Tao, and Phyllis Ghim-Lian Chew. "Pedagogical Use of Two Languages in a Chinese Elementary School." Language, Culture & Curriculum 26.3 (2013): 317-331. Academic Search Premier. Web. 5 Feb. 2014.

Rule, Ted. "What is the Chinese Language?" Quadrant Magazine 53.6 (2009): 89-95. Academic Search Premier. Web. 5 Feb. 2014.

Yan, Guo, and Gulbahar H. Beckett. "The Hegemony of English as a Global Language: Reclaiming Local Knowledge and Culture in China." Convergence 40.1/2 (2007): 117-131. Academic Search Premier. Web. 12 Feb. 2014.
The Philippines: Questioning the Country's Independence (By Janine Mariano)

The Philippines is a country that has been conquered by several global powers. For 333 years, it was a colony of Spain. After the Spanish-American War, America held ownership of the Philippines for nearly fifty years. After gaining independence from the Americans, the Philippines was under the dictatorship of Ferdinand Marcos. The glorious country once known as the "Pearl of the Orient" is now a third-world country that struggles to keep its independence. While researching the constant conquest and oppression of the Philippines by foreign powers, I have come to the conclusion that the Philippines has experienced little to no form of independence, ever. This can be seen throughout history, starting with Spanish colonization.

The Philippine Islands were found by Portuguese explorer Ferdinand Magellan and were claimed for Spain in 1521. In the years that followed, the indigenous people of the Philippines, whom the Spanish called indios, were constantly seen as the lowest class of people. The indios were not called "Filipinos" either. The term "Filipino" was used exclusively to refer to Spaniards who resided in the Philippines, not to the indigenous people (Ileto 122). In their attempt to become like the "Filipinos," and in an attempt to gain salvation according to the Catholic Church, the indigenous people were losing their culture. Many sermons were in either Latin or Spanish. This prompted the language of the indigenous Filipinos to merge with Spanish so that they could understand. In essence, the indigenous people of the Philippines were attempting to become like the Spanish. This went on for hundreds of years until the cultures of Spain and the Philippines merged into one.

The evolved behavior of the indigenous Filipino caught the attention of Filipino scholar José Rizal. Ileto writes that José Rizal, while annotating a seventeenth-century text on the history of the Philippines, discovered that under Spanish influence, the indigenous Filipinos "forgot their native alphabet, their songs, their poetry, their laws, in order to parrot other doctrines that they did not understand" (30-31). Not only that, but the native Filipinos were still seen as lower-class citizens of the Philippines three hundred years after being colonized. James Gregor found that a plantation system, introduced to the Philippines by the Spanish, caused a new class, referred to as the gentlemen farmers, to have command over major economic resources in the countryside. This left a major class of farmers who farmed with the sole purpose of surviving and "eked out a livelihood on small tracts [of land]" (195-196). Many of the systems put into place by the Spanish kept the native Filipinos poor or barely middle class.

Reynaldo Ileto notes that up until 1872, natives of the Philippines accepted Spanish rule without question. In 1872, however, the natives came to realize that the Spanish had much to do with the torment they had been undergoing (30). Upon realizing the oppression they endured under the Spanish, the native Filipinos decided to take action in what is called the Philippine Revolution. The Philippine Revolution began in 1896 under the leadership of Andrés Bonifacio. The main goal of the revolution was to gain independence from Spain through armed revolt. Francia states that during the early stages of the Philippine Revolution against the Spanish, Rizal was asked by
Dr. Pio Valenzuela to support the uprising but, to his surprise, Rizal disliked violence; he believed it "cause[s] the deaths of thousands of innocent people" (53). Rizal believed that freedom, or at least greater representation in the Spanish court, could be gained peacefully. To get away from the uprising, Rizal offered his services as a doctor to those who had contracted yellow fever in Cuba. En route to Cuba, however, he was seized by the Spanish and was sent to stand trial for involvement in the revolution. The friars in the Philippines saw him as a threat because of his novels, which portrayed the friars in a negative light. He was convicted of rebellion, sedition, and conspiracy, and was sentenced to death by firing squad. This event motivated many Filipinos to continue fighting for their freedom.

The native Filipinos declared independence in 1898, the same year that the Spanish entered the Spanish-American War. Francia summarizes José Rizal's essay "Filipinas dentro de cien años" ("The Philippines a Hundred Years from Now") by explaining how Rizal predicted that the Philippines would eventually gain independence from the mother country, Spain, and that other foreign powers would want to conquer it, but that the United States would be the Philippines' "direct threat" (50). When the Spanish lost the Spanish-American War, they were forced to forfeit many of the lands they controlled, including the Philippines, despite the country having declared its independence. Although the Filipinos had declared their independence after centuries of Spanish rule, the United States did not recognize it. According to Kessler's research, America did not recognize the Philippines as an independent nation, even after President Emilio Aguinaldo declared independence on June 12, 1898, and tried to liberate the country for the second time (10). The country came into US possession after the Philippines lost the Philippine-American War in 1902.

After seizing the Philippines, the American government received a backlash of criticism. Few supported the move to keep all of the Philippines; some suggested keeping only a few parts of the archipelago and giving the parts that were not being used by the US back to Spain (Miller 14). The situation was similar to when the Americans acquired the Hawaiian Islands, in that many were concerned about people of color getting citizenship. Miller wrote that in the late nineteenth and early twentieth centuries, when America first acquired the Philippines, there were many racist remarks among anti-imperialists. Many of them were concerned about the mixing of races being let into the country, and many also saw the indigenous people of the Philippines as inferior (125). Even after much criticism, President McKinley chose to keep the Philippines after the Spanish-American War, even while quickly restoring Cuban sovereignty, because he wanted to "educate the Filipinos, and uplift and civilize and Christianize them . . ." (Wertheim 500). Under U.S. control, the Philippines was again morphed into something different from what it originally was, in order to model its new oppressors. While the islands were in U.S. possession, the American government worked with the Philippines to create a new government modeled after that of the United States.
President Roosevelt made three observations on how the US was going to handle the Philippines: the US would control the islands with little interference; the Philippines was like a child still in a learning stage; and until the Filipinos gained the knowledge they needed to govern themselves, imperial rule would continue (Wertheim 501). The Filipinos were
still being looked down on by foreign powers, but the Filipinos were willing learners. The more pervasive the US influence, the more cooperative the Filipinos became. During this time, American military bases were sprouting in several areas of the Philippines, and the U.S. military budget continued to increase. The New York newspaper The World criticized the $400,000,000 military budget and demanded answers as to why more troops were being sent into the Philippines if the Filipinos were becoming more "pacified" (Miller 158). In addition to the increased military presence, there were advancements in several fields. A countrywide education reform started, and land reform was implemented (Goh 264). These reforms are responsible for the building of cities and hundreds of schools across the country. Hogan argues that the Americans brought many advancements to the Philippine islands, such as education and some economic growth. With these advancements, the Filipinos were quick to see the Americans as "liberators" rather than oppressors (126). Still, America has made it almost impossible for the Philippines to function without it. Because the Philippines was not able to grow on its own, it relied on its "liberator" to be constantly by its side.

The Philippines eventually gained independence from the United States on July 4, 1946, a date that mirrors America's own Independence Day, but this "independence" was short-lived. Within twenty years of gaining independence, the Filipinos were subjected to the dictatorship of Ferdinand Marcos. Marcos came to power in 1965, nineteen years after the Philippines gained independence from the United States. By the late 1960s there was turmoil in major parts of the Philippines, including northern Luzon, Manila, and Mindanao; migrants from the north were abundant, which caused a shortage of jobs in urban industry in central Luzon along with poverty; the political system placed restrictions on adjustments, and most politicians did not share the interests of the majority. There was no possible way for the Philippines to advance with all these problems at hand (Gregor 196).

Marcos knew that he had a lot of work to do. In his State of the Nation address, he revealed that he wanted to implement government and economic reform as well as infrastructure reform through the building of new roads, bridges, and public works. Ironically, he also declared that he wanted to reduce corruption within the government. Many things came to fruition during Marcos' presidency. The Philippines was at its best during the beginning of his rule: the economy improved and thousands of kilometers of roads were built. But this bliss was short-lived, as the corruption in Marcos' regime increased as the years went by. The economy eventually started to fail due to expensive campaigns and unwise spending and borrowing. The national debt continued to rise due to the failure of Marcos' land reform, which caused local sales to drop. This in turn had disastrous effects on the Philippine economy during the world recession of the 1980s (Gregor 198). Marcos was losing favor with the Filipino people, and he was also losing his grip on the country. In his bid for reelection, Marcos managed to spend around $50 million, which caused the Philippine currency, the peso, to drop to nearly half its value (Burton 75). With his power over the country slipping, Marcos declared martial law in 1972 to extend his rule beyond its two-term limit.
In his defense, Marcos stated that this was done because of the threat of Communist and Muslim insurgencies, one of the biggest problems the Philippines faced at the time. The declaration of martial law was heavily criticized by the Filipino people. Martial law limited the press's
freedom and the people's civil liberties. It also shut down Congress and multiple media establishments and silenced the political opponents who opposed him. Within twenty years of being granted freedom, the Philippines was again stripped of its independence, this time by one of its own.

Martial law was used by Marcos to start a new society. He disliked the style of government the Americans had put into place. Burton notes that Marcos found the Philippines' American-style democratic system unproductive in solving the country's problems, and he sought to create a new type of government. In its place, Marcos managed to create a constitutional authoritarian form of government, which he thought would be able to increase economic growth (59). During the nine years of martial law, there was constant disapproval of his declaration. The Marcos regime is considered similar to the Spanish rule over the Philippines in that it made empty promises, particularly about a newly transformed society (Ileto 172). Although the Filipinos' rights were being abused and taken away, foreign powers like America turned a blind eye to the injustices. The U.S. openly supported Marcos, with Vice President George Bush publicly applauding Marcos' "adherence to democratic principles and to the democratic process" in 1981, towards the end of martial law (Burton 111). In an attempt to gain favor with the newly elected Reagan administration in America, among other foreign leaders around the world, President Marcos publicly declared in January 1981 that he would be ending martial law (Burton 110). This shows that even after rejecting American-style government within his country, Marcos still relied on foreign aid, most importantly America's aid, and approval. Even though Marcos had the freedom to do as he wished with the Philippines, he still needed America on his side.

In the years that followed, Marcos' power over the Philippines decreased drastically. In 1983, Marcos' most notable opponent, Ninoy Aquino, was assassinated. This sparked a series of events that resulted in his regime being overthrown. After the assassination of Aquino, the people revolted in what is known today as the People Power Revolution. The people of the Philippines rallied under Aquino's widow, Corazon "Cory" Aquino, who made a bid for the presidency without a platform, political party, or any clear alliance (Burton 317). To prove his reign legitimate, Marcos called for the Snap Elections of 1986. In her bid for election, Corazon "Cory" Aquino sought justice for both her husband's assassination and the suffering of the Filipino people, all the while promising a new Philippine society (Kessler 27). The elections were tarnished by electoral fraud and violence. Although Marcos seemingly won the elections by three million more votes than Cory Aquino, the people, along with Aquino and former senator Salvador Laurel, did not accept the results and overthrew the Marcos regime anyway.

After years of oppression under the Marcos regime, the newly installed president, Cory Aquino, was a refreshing change. Aquino was relatable: she was seen as a common housewife. This common housewife, though, inherited the multitude of problems that Marcos had caused or could not resolve during his presidency. Burton observed that during the final years of Marcos' hold on the Philippines and the first few months of the newly instated Aquino administration, the state of the Philippine economy was very poor.
In the transition of power, investments into the country ceased and almost 40 percent of the country's income had gone to the billion-dollar deficit.
There was also a sharp decrease in jobs within urban industry, causing many workers to go back to their families, who were already suffering from decreasing prices for the few commodities the Philippines was able to produce (533). On top of a failing economy, Aquino also had to deal with the rise of Communism in the southern islands of the Philippines. In a newspaper called Ang Bayan, the Communist Party of the Philippines (CPP) reported in a December 1986 issue that there were more than thirty thousand Filipino members of the CPP (Kessler 28). The Aquino administration was not prepared for the amount of work to be done. With little experience in the world of politics, Aquino entered office with no platform but, rather, the belief that democracy and freedom could triumph over Communism (Burton 528). With the poor state of the Philippines' economy and the numerous problems inherited from the Marcos regime, the Aquino administration looked to foreign powers for help.

With their new president, the Filipinos made an attempt to rebuild what had been destroyed and corrupted by the Marcos regime. The beginning of the Aquino administration was met with opposition from Marcos' supporters. During the first two years of her presidency, there were several attempted military coups against the Aquino administration, prompted by negative reactions to the reconciliation policy (Kessler 4). As her presidency progressed, the attempted coups subsided and Aquino, along with her administration, was able to focus on the problems that plagued the Philippines: the economy and the threat of Communism. These problems alone were more than a handful for the Aquino administration, and it was unclear how the Philippine government was going to solve them. In one of its lowest phases as an independent nation, the Philippines looked towards America and other foreign powers for aid.

Although the U.S. had backed the Marcos regime, it was quick to aid the Aquino administration economically and militarily. However, Aquino was reluctant to accept help because she wanted to lower the country's dependency on foreign aid. While attempting to decrease the foreign presence in the Philippines, President Aquino noted that the U.S. military base was one of the country's largest employers and stated that it was a necessary presence on the basis of regional security (Burton 536). The U.S. military presence not only protected the Philippines in its vulnerable state, but it also provided some economic stability. During Aquino's presidency, though, the U.S. finally closed its military bases and ended its constant presence in the Philippines. The United States was also a big trade partner with the Philippines, and a large contributor to the Philippine economy. While President Aquino wanted to decrease the foreign presence in the Philippines, she also saw it as necessary, showing that the Philippines was still significantly dependent on America.

Months after being sworn into office, Aquino was already in peace talks regarding the Communist insurgency and Islamist secession movements in the South. One of her main policies upon coming into office was to do away with Communism in the Philippines. With the rise in Communism during the new administration, America, along with Japan, offered to provide aid to the country (Gregor 203). With the help of foreign powers, the Philippines was able to keep Communism from conquering many parts of the country. Although the
Communists of the South were subdued, they continued to oppose the democratic republic in the north.

Since their discovery in 1521, the Philippines have been conquered and oppressed many times over, first by the Spanish, then by the Americans, and finally by one of their own. Today, they are a sovereign island country that is independent from the rule of any foreign power. During the Spanish colonization of the Philippines, the culture of the indigenous people was fused with European ideals. This is the result of a world power conquering a developing country and molding it into an ideal state in which the conqueror could operate. When America came into possession of the Philippines, the Filipino people, along with almost every aspect of their lives, were remodeled to mirror the American way of doing things. Because the Philippines had little room to grow independently as a nation for the first few hundred years of its existence, the country was easily subjected to dictatorship at the hand of a Filipino, which led to dependency on America. With this knowledge, I have come to the conclusion that the Philippines has experienced little to no form of independence since its discovery.

WORKS CITED

Burton, Sandra. "Aquino's Philippines: The Center Holds." Time Magazine (1987): 524-537. Academic Search Premier. Web. 15 Mar. 2014.

Burton, Sandra. Impossible Dream: The Marcoses, the Aquinos, and the Unfinished Revolution. New York: Warner Books, 1989. Print.

Francia, Luis H. "Jose Rizal: A Man For All Generations." Antioch Review 72.1 (2014): 44-60. Academic Search Premier. Web. 20 Feb. 2014.

Goh, Daniel P. S. "Postcolonial Disorientations: Colonial Ethnography and the Vectors of the Philippine Nation in the Imperial Frontier." Postcolonial Studies 11.3 (2008): 259-276.

Gregor, A. J. "After the Fall: The Prospects for Democracy after Marcos." World Affairs 149.4 (1987): 195-208. Web. 8 Mar. 2014.

Ileto, Reynaldo C. Filipinos and their Revolution: Event, Discourse, and Historiography. Quezon City: Ateneo de Manila University Press, 1998. Print.

Kessler, Richard J. Rebellion and Repression in the Philippines. New Haven: Yale University Press, 1989. Print.

Miller, Stuart Creighton. "Benevolent Assimilation": The American Conquest of the Philippines, 1899-1903. New Haven: Yale University Press, 1982. Print.

Wertheim, Stephen. "Reluctant Liberator: Theodore Roosevelt's Philosophy of Self-Government and Preparation for Philippine Independence." Presidential Studies Quarterly 39.3 (2009): 494-518. Academic Search Premier. Web.
American Dream: Americans vs. Immigrants and Global Citizenship (By Pancy Lwin)

"Why do you want me to study in America?" I asked my parents. My mother said, "You will have a comfortable life without enduring the harsh life of lacking electricity and clean water." My father said, "You will be wiser and more educated, pursuing the American education and the American Dream." According to my parents, the "American Dream" means I will be successful when I am in America. I understand that my parents' image of the American Dream stems from the success stories of Myanmar immigrants they know in their social circle. The image of the American Dream held by people from developing countries is to cut ties with their pasts and start new lives in the United States. This made me think that there are different American Dreams for Americans and immigrants. However, the American Dream, or the driving power to be successful, is the same for all people who try to pursue their dreams in America, even though the expectations and the definitions of success change over time for everyone. Examining the difference between immigrants and Americans suggested a way to renew the American Dream: global citizenship.

My first assumption about the American Dream was that it had a concrete definition people had agreed on. Therefore, I tried to find a specific definition of the American Dream. I browsed books about the American Dream in the hope of finding one agreed-upon definition of the concept. First I found some definitions of the American Dream in an anthology, The American Dream in the 21st Century, introduced by sociology and politics professors Sandra L. Hanson and John Kenneth White. In the introduction, they referenced a collection of surveys to support their argument that "the American Dream is not a static concept" (Hanson 9). The surveys showed that the American Dream means different things to different people: "being able to get a high school education, owning a home, being able to send one's children to college, being optimistic about the future, being able to get a college education, being financially secure enough to have ample time for leisure pursuits, doing better than your parents did, being able to start a business on one's own, being able to rise from clerk or worker to president of a company" all constitute the "American Dream" (Hanson 10). Therefore, even in a recent collection of surveys taken within a single year, the American Dream for Americans differs according to personal and individualistic definitions of success.

The fact that its meaning varies does not make the American Dream impossible to achieve. Looking back at history, the American Dream has not been the same in different historical periods. Jim Cullen, a noted expert on American history, believed in his book American Dream: A Short History of an Idea That Shaped a Nation that "the Dream in the abstract can be summarized as a belief that anything is possible in some form if one wants it badly enough" (Cullen 19). Moreover, in his article "Twilight's Gleaming: The American Dream and the Ends of Republics," he explained that "the historical reality is one
of a series of discrete, and sometimes competing Dreams: the Dream of upward mobility, Dream of homeownership, the Dream of racial justice and so on" (Cullen 19). Essentially, the American Dream means that everyone, of any social and economic standing and from any place, will gain whatever they strive for, whether it is freedom, equality, or wealth. Moreover, Cullen wrote that the Pilgrims and Puritans sought religious tolerance, the founding fathers fought for independence, and African Americans and feminists fought for equality. The search for the American Dream in history shows that different generations of Americans had to create different opportunities for themselves.

The most significant aspect of the American Dream is that its core meaning is the same, although today's American Dream includes more materialistic values than the past American Dreams of equality and freedom. James Truslow Adams, the historian credited with popularizing the term in his book The Epic of America, said:

It is not a dream of motor cars and high wages merely, but a dream of social order in which each man and each woman shall be able to attain to the fullest stature of which they are innately capable, and be recognized by others for what they are, regardless of the fortuitous circumstances of their birth . . . It has been a dream of being able to grow to fullest development as man and woman unhampered by the barriers which had slowly been erected in older civilizations, unrepressed by social orders which had developed for the benefit of classes rather than for the simple human being of any and every class. And that dream has been realized more fully in actual life here than anywhere else, though very imperfectly even among ourselves [qtd. in Hanson 3].

According to Adams, the American Dream has a core value of upward mobility, or the opportunity Americans have to grow and pursue whatever they want without being held back by an unequal social order. Moreover, Hanson and White made an argument that stems from Adams' definition of the American Dream. They argued that the American Dream is "an enduring optimism given to people who might be tempted to succumb to the travails of adversity, but who instead, repeatedly rise from the ashes to continue to build a great nation" (Hanson 3). Indeed, Adams' definition means more than overcoming hardship. It describes someone who never ceases to dream whenever he falls.

So what is the American Dream for immigrants? Will they dream to have plenty of leisure time? Will they dream to own a home? Will they dream to be the president of a company? Immigrants' perceptions of the American Dream are reflected in the struggle of Irish immigrants in The Immigrant Experience in America, written by New York lawyer Frank J. Coppa and New York politician Thomas J. Curran, in which they note that Irish people were forbidden to own or rent land under the colonialists. The Irish famine of 1845 and hatred for the colonists forced them to leave their homeland. Even getting to the United States and escaping from the country was difficult for them. They needed to buy tickets on the ships delivering cotton wool to America. For the most impoverished Irish, who could not afford the tickets, the American Dream was out of reach. Moreover, the chance of dying on the ships to America was the same as on the ships bringing slaves from Africa.
However, some Irish people had the wish to create a new life and hoped that they would be welcomed. That was their American Dream. They did not care what manual labor they had to do. In reality, they were not welcomed by the Protestant community in America or by Americans who lost their jobs to immigrants. The Irish immigrants' American Dream started as a nightmare. Nevertheless, because of their spirit of striving and their pursuit of the American Dream through education, the Irish-American community would go on to provide three candidates for the American presidency: Al Smith, Charles O'Connor, and the first successful Irish-American president, John F. Kennedy, were descendants of the Irish immigrants who achieved the American Dream after many nightmares and much hardship.

Throughout history, immigrants have come to America with their own images of the American Dream in their minds. For them, the American Dream means, first, escaping from all the difficulties they face in their own countries, leaving their pasts behind and starting new lives with new dreams of upward mobility. Famine, civil war, and genocide were the motivating factors. They risked whatever they had and cut their family ties simply to become Americans. Because of their survival and risk-taking spirits, their descendants could even participate in the political process. How does President Kennedy's American Dream differ from that of his Irish ancestors who immigrated to America?

Immigrants' success stories became more pervasive after the 2008 economic crisis in America. Experts and scholars started to point to immigrants' entrepreneurship as their way of pursuing the American Dream, and many books were published explaining immigrant entrepreneurship as a way to survive the downturn of the American Dream. Immigrant, Inc., a book by Richard T. Herman and Robert L. Smith, defines the immigrant as a "man or woman who came to the country with nothing but a dream" (Herman 17). The authors gained their experience with immigrants when they launched a law firm, Herman & Associates. According to them, an immigrant would usually show up at their office stammering English uncomfortably; however, after a year, they would see him again about getting a business license. Through their research, Herman and Smith found that "Immigrants are almost twice as likely as native-born Americans to start a business, and immigrants are more likely to earn an advanced degree, invent something, and be awarded a U.S. patent" (Herman 18). How can immigrants become entrepreneurs? Entrepreneurship means daring to start a business even when the chances of profit might be low. They dare to take bold steps, in the same way their ancestors took to the ships and came to America with nothing but a dream. The spirit of entrepreneurship is in the blood of immigrants, with their upbringing hinging on perseverance and the opportunities their ancestors fought for. From the founder of Google to the CEO of Intel, entrepreneurs from immigrant families gained a U.S. education; combined with their families' spirit of struggle, this allowed them to start companies and become billionaires. According to Herman and Smith, a keen sense of adventure, a reverence for education, love and respect for family, an eagerness to collaborate, a tolerance for risk and failure, passion born out of desperation, and a tendency to dream shaped immigrants into entrepreneurs (Herman 200). In other words, they tended to do anything to establish roots and become citizens.
That drive to create opportunities for themselves to get out of poverty is called the "Immigrant Entrepreneurial Spirit." Many people, including me, have wondered whether immigrant entrepreneurship is a distinct immigrant identity rather than a part of their American Dream. Although the bravery to keep dreaming despite hardship is the same for both Americans and immigrants, the risk-taking aspect of the American Dream becomes most visible in immigrants' entrepreneurship, which people wrongly suppose is different from Americans' American Dream. In fact, immigrant entrepreneurship, the so-called immigrants' American Dream, is the same American Dream; it was a driving force of Americans in the past. The adventure, collaboration, and tolerance for risk and failure—all of these so-called aspects of immigrant entrepreneurship are similar to what was once the American Dream of Americans, as explained by Hanson and White, and were such even when Adams popularized the term "American Dream."

As I have already explained, the American Dream, the driving force to struggle and take risks, is the same for both Americans and immigrants. However, for Americans, the sense of struggle to gain the American Dream has changed over time. Now, the American Dream for Americans tends to mean the improvement of what they have already gained in America: liberty, equality, education, and a certain living standard—having a job, a car, a house, paying the bills, and providing a quality standard of living for the next generation. However, Americans may possess less of a willingness to make the kinds of sacrifices and take the risks that an immigrant might. Therefore, the entrepreneurial spirit seems to belong primarily to immigrants; their American Dream is their own, although they still share the sense of struggle after gaining equality and freedom. This is what the American Dream really means for anyone in America, immigrant and American alike.

However, because of the economic downturn in 2008, many Americans feel less confident in the American Dream. The collections of polls taken at different times in Hanson and White's book show that Americans doubt the invincibility of the American Dream: 75 percent claim the American Dream is not as attainable today as it was when George W. Bush was elected president in 2000 (Zogby International 2008); 59 percent believe the American Dream will be harder for today's children under the age of eighteen to achieve (Greenberg Quinlan Rosner Research and Public Opinion Strategies 2009); 57 percent say the American Dream will be harder for them to achieve in the next decade (Time/Abt SRBI 2009); and 54 percent believe the American Dream has become "impossible" for most people to achieve (Opinion Research Corporation 2006) [qtd. in Hanson 11]. These statistics show that the American Dream has become less believable for everyone in America. Moreover, the American Dream has become more difficult to pursue not only for poor people but now also for the middle class.

The American Dream has another unique aspect: immigrants speak of the American Dream rather than the "European Dream," or any other country's "dream," for that matter. Why is America considered the land of opportunity for immigrants? In the documentary The American Dream of the Chinese, one of the stories is about a poor Chinese lady who was trying to pursue the American Dream. Ann, an undocumented immigrant who owned two furniture stores back in
China, had a hard time working as a manicurist, her first job in this dream land. She said, "In my country, I can find money above my shoulder, while in America, I find money below my shoulder" (The American Dream of the Chinese). In her first job in America, the "American Dream" was not as sweet as she had imagined and had seen in the movies. Washing eight or ten pairs of feet each day was very difficult for her, especially since she had once been a boss in China. Moreover, she said, "Some customers are really picky. Although my English is little, I can see and feel how they treated me. It really hurt my pride" (The American Dream of the Chinese). Still, she declared that America would give her what she wants if she works hard enough. It is a flicker of hope in the despair of an immigrant striving through her manual labor. The American Dream is difficult for her, and no famine or communism was forcing her to leave China. So why did she leave her homeland and decide to pursue the American Dream?

In that documentary, Ann's confidence in America and willingness to dream came from America's acceptance of any nationality and any ethnicity from any corner of the world. America is not only the land of opportunities but also the land of the cultural hot pot. Anyone of any identity can fit in America, pursue the American Dream, and give the next generation a chance to pursue it as well. Therefore, one condition that makes the American Dream unique is that there is nowhere else in the world where immigrants so easily adapt to and fit into the culture. Both the spirit of risk-taking and the acceptance of all identities make the American Dream unique and shared by both Americans and immigrants.

One story of an Asian American from an immigrant family gives a sense of how Americans should renew their American Dream. An interview with this entrepreneur reflects the thoughts of an Asian American who is going back to the country his immigrant parents will never look back on. Chris Tran was born after his parents sailed from a fallen Saigon to America in 1975. He never thought that his trip to find his American Dream would take him to Vietnam, where the living standard is much lower than in America. His mother, who lives in Westminster, was very surprised and worried by her son's choice. She explained, "A lot of immigrants do not imagine even to have a vacation back in Vietnam but my son decided to work there" (Tran). His mother is unwilling to go back to the country where she and her generation lived during the time of the Vietnam War. Once immigrants become American citizens, their expectations often change. Because Tran's mother has sacrificed a lot, she does not want to go back and leave behind the high living standard she has gained in America.

However, Tran's thinking was different from his mother's. Tran says that a large part of his success in Vietnam comes from the fact that he stands out as an American (Tran). His experience shows that he could pursue the American Dream because of the education he got in America. America became a place to learn, but not the place to apply the knowledge he learned. Another Asian American is David Lee, who went back to Asia to start his film production company and left his family in Los Angeles. Although the commute between China and the United States is not easy, it does not bother him because he believes, "Opportunit[ies] in China were like a siren's call" (Tran). This is the pursuit of the American Dream by Asian Americans who have gone back to Asia.
What is making Asian Americans move back to Asia? What makes some Asian Americans confident that their American Dreams will be possible in any place in the world? First, an undeniable fact is that Asian Americans, the later generations of immigrants, see themselves as Americans after pursuing the American living standard and education. Second, being bilingual and/or bi-racial makes it easier for them to adapt to any place in the world. Their American Dreams in America make them confident as Americans, but their willingness to take risks in places other than America and their ability to adapt to other cultures helped them survive the economic recession of 2008 as immigrant entrepreneurs in other parts of the world. Therefore, being bilingual and adapting to world cultures as global citizens is a quality that American entrepreneurs need in order to pursue their American Dreams in any corner of the world.

Indeed, it is time for everyone in America to renew their dreams if they think the American Dream has become difficult to pursue in America. The American Dream for this new generation of immigrants includes learning to become a global citizen and being willing to take risks and walk outside of the comfortable dream land, America. As America is a place where it is relatively easy for immigrants to assimilate, it is time for Americans to become bilingual and become global citizens, adapting to different cultures around the world. During my research on the different American Dreams of immigrants and Americans, I found that immigrant entrepreneurship, or the immigrant's American Dream, was reflected in Americans' American Dream during the times and struggles of the Pilgrims, the Puritans, the Founding Fathers, African Americans, and the women's rights protesters. The rising standards and expectations of Americans in modern times sometimes make them forget that the American Dreams they still have today—their standards of living, life, liberty, and the pursuit of happiness—were fought for or sought out by their American ancestors. As Asian Americans are pursuing their American Dreams in Asia, Americans who have already had the open-mindedness to accept any ethnicity should now relive their American Dreams in lands they have never been to or amid cultures they have never been exposed to.

WORKS CITED

Coppa, Frank J., and Thomas J. Curran. The Immigrant Experience in America. Boston, MA: Twayne Publishers, 1973. Print.

Cullen, Jim. American Dream: A Short History of an Idea That Shaped a Nation. New York: Oxford University Press, 2003. Print.

Hanson, Sandra, and John Kenneth White. The American Dream in the 21st Century. Philadelphia: Temple University Press, 2011. Print.

Herman, Richard T., and Robert L. Smith. Immigrant, Inc.: Why Immigrant Entrepreneurs Are Driving the New Economy (and How They Will Save the American Worker). New Jersey: John Wiley and Sons, Inc., 2010. Print.
Tran, Chris, and David Lee. Interviewed by Michael Bloecher and Jack Moody. "With Reverse Migration, Children of Immigrants Chase 'American Dream' Abroad." KCET Network. 16 Jan. 2013. Web.

The American Dream of the Chinese. Dir. Xin Feng. Films on Demand. Films Media Group, 2010. Web.
The Smell of Gunpowder (By Ray Abarintos)

It's been six months since I left the Navy as a Corpsman. It's the best job in the Navy other than the SEALs. I was very adamant about leaving the service and didn't want to re-enlist. I served my time in the military. With so much stress, it felt like my aging process accelerated faster than a methamphetamine junkie's features. Now that I am discharged from Uncle Sam's leash, I can pretty much tell you the war story that I will be able to tell my grandchildren in the future. It's been a long road for me. As a veteran, I look back and realize military life was very stressful, especially when you've experienced combat. It wasn't for the weak. It wasn't like the video games. War is not like what you see on television. Lives are on the line and you'd do anything to survive.

I never thought to myself that I would be patriotic. Most of my classmates went to college and got their fancy degrees while I served. When I enlisted, I was never scared knowing someday my boots would be on the ground. The media portrayed soldiers kicking down doors and taking on dangerous missions, which seemed like an awesome experience to me. I was hoping someday I would be doing the same things they were doing. From the day I set foot on Afghan soil, I realized war wasn't a walk in the park, and the stress of combat kicked in really fast.

I joined the Navy on August 16, 2007, but didn't deploy until November 15, 2010. I was stationed in Jacksonville, North Carolina. All those years prior were spent on training. The training was mostly combat medicine, emergency medicine, trauma management, and surgery. Some days were spent on the rifle range wasting taxpayers' money. I was a Navy Corpsman. From a civilian's viewpoint, we would be considered "Combat Medics." Since the U.S. Marines were under the Department of the Navy and didn't have a medical department of their own, the Navy sent me to 2nd Battalion, 3rd Marines, stationed in Kaneohe Bay, HI. This is a "grunt" unit, the nucleus of dirty, hard work. They are the ones who are sent to the front lines of combat. They are irresistibly attracted to war and violence like moths to a flame. I was assigned to a grunt platoon and was the sole medical caregiver and provider for my platoon of Marines. They called me "Doc" since I was their only medical specialist. They respected me and cared for me, knowing I was a special asset on the front lines. They knew I would take care of them if they got shot or injured in combat. I'm the one who would bring them back to their sobbing parents and spouses.

We trained until we received orders to deploy to Afghanistan on November 15, 2010. We stayed at a remote location in Marjah that was heavily populated with militia and Taliban. It was an abysmal valley nicknamed "Carnage Alley" due to the multiple roadside bombings and explosions that occurred every time a vehicle passed through. The place was so hot, even the "redheads" in my platoon would think twice about staying out in the sun, even with sunscreen. They described the heat as "like being showered with flakes of fire that's raining down against their naked bodies." We had to sleep in our underwear, and we would wake up in a puddle of sweat. No one drowns in their own sweat, but I thought I had every time I woke up. When we arrived there was no clean water for three months, so we didn't shower for 90
consecutive days. Our only water source was the bottled water the government sent us, and we had to conserve it. We smelled so foul, even the flies dared not get close to us. "Baby wipe" showers were the norm. There was no time to worry about personal hygiene. We were in a war zone, and the only things we had to worry about were minimizing casualties and staying alive.

It was a scary period for me. I had to bury my emotions deeply, especially the fear of death and killing. My heart raced every time we stepped outside the wire. I thought my heart would come out of my chest every time we patrolled. We never knew who we were going to encounter. The days were monotonous and gloomy. We were withdrawn from the world like a psychopath in solitary confinement. The skies were always luminous, but the blazing heat destroyed our morale. The infernal winds brought unquenchable thirst. We drank a gallon of water every day and still ended up dehydrated no matter how much we consumed. Being away for so long and patrolling in Afghanistan made us very miserable. We lost a ton of weight. We became stressed and depressed as the days flew by. In order to kill the boredom, we would spend our days talking about home, girlfriends, and the first food we would eat as soon as we got back to the United States.

The same routine continued until we had our first firefight and combat action on January 30, 2011. A group of local Afghan mercenaries started shooting at us while we were patrolling. I was glad no one got hit. We retained our composure and didn't panic. We were ready to eradicate the enemy without hesitation. That's what we signed up for. They sprinted towards the marsh canals while we stayed in a torn-up mud house adjacent to their position. We exchanged fire back and forth. Bullets were flying everywhere, and I had to remain very low or else my head would get shot. There was so much noise that I thought my eardrums would explode. I was next to a Machine Gunner, and my ears started ringing from the noise his machine gun made. For minutes I couldn't hear anything, as if I were underwater. Within moments I heard my squad leader yelling at me and telling me to start shooting.

So there I was, locked and loaded. I was ready to send bullets down range. It was the first time I shot my rifle toward a human being. They were not regular human beings. They wanted to take our lives away, so we had to act fast and take their lives before they took ours. Adrenaline started pumping, and without hesitation or mercy, I began shooting towards the enemy. There was only one priority on my mind while I was shooting at them: bullets hitting their skulls and sending them into the abyss. I continued shooting back and forth and exchanged fire until I almost ran out of ammunition. The smell of gunpowder coming from the rifle added fuel to my rage. Shell casings from the expelled rounds were flying around and started piling up. I took a breather as the action took a toll on my physical conditioning. The battle went on for what seemed like an eternity until they stopped shooting at us. We knew we had annihilated them. The Fire Team leaders threw shock grenades. The Machine Gunners expelled thousands of their armor-piercing rounds, and the rest of the squad used up half of our ammunition. There was no movement in their position. We slowly closed in on them in an attack column and made sure we finished what we started. We did what we had to do. It's what we were trained for.
We had to follow our "Warrior Ethos" that had been imprinted into our minds since boot camp. Luckily, no one got injured, and I didn't have to patch anyone up for anything other than
minor cuts and bruises. We were tired and shell-shocked, but it was a victorious day. We called in reinforcements to take over and returned to base. After that day, we continued our patrols. We experienced a few more firefights, but none as chaotic as our first one. Months flew by, and with an optimistic view of life we finished our deployment on June 13, 2011, and flew back to the United States. There were no casualties. We felt as if the angels above in the heavens had guided our paths. We were looking forward to seeing our families and friends. One of the best moments in life is when you step off the airplane and see your loved ones cry with joy. It was a very emotional day for everyone.

Months went by, and we had to transition back to the garrison lifestyle. We were not at war anymore. It wasn't easy for me. The flashbacks consumed my mind like vultures on fresh carrion. The experiences I had made me value my life more. They gave me a positive outlook on life instead of a negative one. We never knew if we were going to make it. Only a few will understand what it's like to be 23 and write your own will. No one should go to war. It's a life-altering event. No one goes to war and comes back the same. Some soldiers suffered psychological problems. While I made it back in better psychological shape than some, it did take a while for me to return to my usual personality and mindset. Faith and my family got me through the hardships.
World of Cosplay: Is it Weird? (By Candace Ferris)

Introduction

Nowadays it is common to go to an anime or comic book convention and see people dressed up as characters. Cosplay is the practice of dressing up as a character from a movie, book, or video game, especially one from a manga or anime. The word can refer to the practice itself or to someone's costume (e.g., "Oh, look at that person's beautiful cosplay!"). Adorned in their handmade or store-bought cosplays, cosplayers are fictional characters brought to life. Fans often spend lots of time and money creating their cosplays; some even go the extra mile and try to act like their character as much as possible. Some people unfamiliar with conventions and cosplay are put off by this, considering cosplayers to be obnoxious and loud. But does this apply to the entire community? Or is it only a select few who place everyone else in a bad light? As someone with many friends who cosplay, and as someone interested in trying it myself, I am curious about why people would think that. I had little prior knowledge about cosplayers aside from what I had learned in passing, and I wanted to find out more about them. In order to do this, I interviewed a close friend and frequent cosplayer to see what the community is really like.

Cosplay vs. Costuming

Cosplaying and costuming are almost the same thing—they both involve wearing costumes based on novels, TV shows, or other forms of media. However, cosplay is a newer term, whereas costuming has been in common use for much longer. Costuming was most common at science fiction and fantasy conventions, where fans would create and wear costumes based on their favorite characters. There were also stage masquerades, where everyone from novices to masters could display their costumes. Nov Takahashi, a Japanese reporter, witnessed this at a WorldCon held in Los Angeles in 1984. To best describe what he saw, he combined the two words "costume" and "play," and thus the term "cosplay" was born (Feldmann). Japanese readers were inspired by his article and started making their own costumes, basing them on their anime and manga. In Japan, cosplay means not only looking like the character but acting like them as well. When Americans got into anime and manga in the 90s, cosplay was also introduced and quickly became popular (The History of Costuming, 2005). Now there is even a TV series on the Syfy channel called Heroes of Cosplay, where a group of cosplayers compete to create the best cosplays for a chance to win a cash prize. While "costuming" is still a proper term to use, cosplay is the more popular term used by fans.

The Cosplay Community

Today, cosplay is a very common sight at anime conventions, especially in Japan. But cosplayers are not restricted to conventions. In Hawaii, conventions sometimes hold their own event days at certain locations, where people come and partake in activities. One event I remember was Anime Beach Day: people would cosplay as characters in beachwear and play games like suikawari, a game similar to a piñata, but with a watermelon. During my interview with my friend Nakamura, she talked about finding other times to cosplay when there is not an upcoming event:
I cosplay a few times a month or so. It doesn't mean that I go out to a convention every month, but I do find other ways to enjoy cosplay even when it's not for an official event. Like contacting my friends and just make a random excuse to go out and dress up. Or dress up inside my own house for enjoyment. (Nakamura, 2013) So even if there is no event to dress up for, cosplayers will do it anyway because it is fun, especially with friends. One aspect of cosplaying is that it has the ability to bring people together. It allows those with similar interests—whether a show or book or game— to meet in conventions and do what they love. While I am not very familiar with the local cosplay community, I am more familiar with the community online. The internet is a great way for people to show off their cosplays and discover other amazing cosplayers as well. There are many cosplay forums, or communities where people can ask for advice on crafting their work or keeping tabs on fellow cosplayers. Many offer helpful tips on crafting for cosplayers with low budgets. There are even sites that allow you to buy custom-made cosplays and wigs for those who cannot or do not want to make one themselves. Becoming a Cosplayer Lots of factors can inspire fans to try cosplaying, either for personal reasons or just for fun. For seasoned cosplayer Yaya Han, it is a creative outlet. Before she started cosplaying, she was a fan artist who sold her work at auctions. “But once I discovered cosplay,” she says, “it was like, ‘I don't have to draw my favorite characters, I can become my favorite characters’” (Hoevel, 2011). In Nakamura’s case, she explains that she wanted to cosplay because it was fun: A lot of aspects inspired me to start cosplaying. Once was when I started watching cosplayer videos on YouTube. It seemed fun, since everyone in the video was laughing and enjoying themselves. So I wanted to see if I could have fun while doing it too. After making my first cosplay, I searched through the internet and saw beautiful photos of cosplayers from all around the world. They had astounding makeup and big detailed costumes and props. It made me want to pursue a life of a cosplayer even more, since I wanted to see if I could do big costumes and props myself. (Nakamura, 2013) While each person might have a different reason for cosplaying, what seems to be true for most is that cosplaying is fun. Even if the building process is long and challenging, it is worth it once you put your costume on and get to interact with others at an event. Creating Cosplays Creating a cosplay is no easy task, to say the least. While sewing is the major skill needed to make a good cosplay, many other skills can be used. Depending on the character’s outfit, one may need painting, sculpting, and jewelry-making skills, not to mention wig-styling skills or other crafting skills for accessories to complete the cosplay. The better the materials, the more authentic the cosplay looks. However, higher-quality materials come at a price; some cosplayers spend over a thousand dollars creating a grand authentic cosplay for conventions. “The least
116
favorite thing about cosplaying is the factor of money," Nakamura explains. "Cosplay is a very expensive hobby, especially if someone does it more than once a year." According to Nakamura, the time it takes to plan and make a cosplay varies for each person. She chooses which character to cosplay months in advance so that she has time to get started on the outfit. She prefers to choose a favorite character from an anime or manga, one she will not forget or "get over." Once she has chosen, building the cosplay depends on the character design and her skills. She asks herself things such as, "How many props or components are on the character's attire? What is necessary on the outfit and what isn't needed? Do I know how to make most of the components that are on the character?" Depending on how complicated the cosplay is, Nakamura estimates that it takes between one and seven days to make. One can only imagine the amount of time it takes for professional cosplayers to finish an extravagant cosplay.
Convention Time
Conventions are what many cosplayers work towards. For days or weeks leading up to the convention day, cosplayers are working at their sewing machines or crafting the final accessory for their outfits. When the day comes, they don their cosplays and become walking art. When I asked Nakamura about the benefits of cosplay and about when she first started, she had a lot to say about the experience:
Cosplaying for the first time was a really frightening experience for me, since I didn't know anyone around the convention center at the time, and on the second day of the convention I was alone with no friends. I just walked aimlessly around the area... but when people started coming up to me asking for pictures, I started to feel better. A day in cosplay is a day of fun, where I can let loose and do whatever I feel like without being pressured by other things. It's like going out with friends, but I'm in a not so normal outfit. The only difference is that I feel more comfortable in cosplay than when I'm dressed normally. After starting to cosplay, I became more open and sociable. I tend to speak my mind more, since cosplaying overall has given me confidence to be who I am and to be proud of it. My most favorite thing would be moments where I'm cosplaying and a child comes up to me thinking I'm the actual character. (Nakamura, 2013)
When I attended my first convention at Kawaii Kon 2013, I was amazed by the number of cosplayers and their beautiful costumes. Almost everyone gets something out of cosplay. It is fun to take pictures as cosplayers interact with others or strike a dynamic pose. It is also a joy to see or become your favorite characters, and you get to interact with people who share your interests. While I was only able to attend the convention on the last day, the experience I had makes me want to attend again next year, hopefully for all three days.
A Case of Strangeness
Through my personal experience, I have seen that some people consider cosplaying to be weird. I remember the stares people would give my friends as they walked around in cosplay. If you type the words "cosplayers are" into a Google search, it will automatically fill in popular phrases such as "cosplayers are weird" or "cosplayers are losers." This confuses me at times. When football fans dress in jerseys or go shirtless and paint themselves all sorts of colors, it is considered all right, so why is it that when cosplayers dress up, it is considered to be weird? 117
The media especially loves to make fun of cosplayers who are chubby or large, and the media are often the ones spreading the misconception that cosplayers are "weird" and losers. When I asked Nakamura about the judgment of cosplayers, she had this to say:
I'm usually not looked down upon or criticized by people or friends. When it does happen, I brush it off or think of trying to improve on my cosplay... but usually it's myself that does the criticizing. When I see pictures of myself in the outfit or feel I sewed something wrong, I end up getting depressed for a while. But ultimately I love what I do, so I won't quit even if someone tells me to. I feel that people who call cosplayers "weird" or "losers" just want to make themselves feel better by trying to beat down another person's confidence. Since most people nowadays don't say those things out loud ... and the cosplayers who are being called names shouldn't think anything of it, since the types of people who call them "weird" or "losers" don't know anything about the cosplay community. (Nakamura, 2013)
One blog goes into depth about how cosplayers are loud and obnoxious, but the writer mainly talks about teenage cosplayers (Elle, 2012). Cosplayers come from all races and age groups, but teenagers can be a bit overzealous at times (this is from personal experience). Do not dislike an entire community for something only a small percentage of the group does. There are plenty of mature and well-behaved cosplayers for every annoying one. For example, as a fan of Homestuck, a popular (but strange) webcomic, I am aware that there are fans who cosplay, and some of them tend to be over-excited and loud, which annoys other convention-goers. Despite that, I know other fans who are better behaved; it is just that the louder of the two groups is the one people think of whenever Homestuck fans are mentioned. While it might be odd to see someone dressed as a character, especially for those not used to cosplay, it does not mean that the cosplayer is odd in real life. Cosplayers are ordinary people; they have lives and jobs and families just like anyone else. When I asked Nakamura what she did for a living, she replied that she was currently looking for jobs and internships, but was interested in turning cosplaying into a business of her own. In a video uploaded on Business Insider, reporters asked several cosplayers what they do in their normal lives. A man dressed as Kakashi from Naruto works as a security dispatcher for a school district. Another, dressed as Walter White from Breaking Bad, is an artist and designer of pajamas and sleepwear. Dressed as the Joker in a nurse outfit, a man joked, "What are you talking about? This is who I am every day." Afterward, he said he was attending college and training to become a cop (Kakoyiannis & Angelova, 2013).
A Case of Perfection
Strangely, there is a stereotype that both cosplayers and non-cosplayers believe in, and it is the one that irks me the most: you have to match the character's design, or else the cosplay is not "right." What I mean is, if your body type or skin color does not match the character you cosplay as, then it is considered a "fail," regardless of how well the cosplay is made. For many people, accuracy is more important than the fun aspect of cosplaying, and those who do not meet the criteria will be judged. Nakamura also discussed her experiences involving this stereotype:
118
I have many good and bad thoughts on the cosplaying community... but to put it simply, I think the cosplaying community is "two faced," since everyone has their own idea on what cosplay should be, and it usually leads to some sort of drama or annoyance for some people. I've seen people and cosplayers comment on each other's outfits for not being accurate enough, or not having the right body type. And that same person can go to a friend cosplaying, and tell them "You look beautiful!" This impacted me to the point where I'll re-evaluate how I should act around certain people and question what kind of friends I'm making. (Nakamura, 2013)
When it comes to cosplaying myself, I am guilty of thinking this way, at least toward myself. I hesitate to try it because of my dark skin, and when it comes to cosplay ideas, I find myself sticking only to characters with skin colors similar to mine. And while I do want to cosplay people of color, I also want to try cosplaying as other characters, despite the fact that they have lighter skin than me. Chelsea Medua, a black cosplayer, sums up pretty well the reason cosplaying does not (or should not) involve your skin or body shape:
These people think you need to look like a character right down to their weight, height, and skin color. The basic definition of cosplay is to just dress as a fictional or real character. Last time I checked, your skin, weight, and height weren't fabric, so they have nothing to do with your costume. (Orsini, 2012)
Others in the cosplay community encourage and support cosplaying regardless of ethnicity, disability, or body shape. Earlier this year, Hot Topic, a popular store chain aimed at teenagers, released a shirt with the word "cosplay" and a definition underneath that said, "Do it right or not at all." The shirt received a lot of negative comments from those in the community and was soon pulled from shelves (Granshaw, 2013). While many inside and outside the community still believe in matching the characters' looks perfectly, there are those fighting for everyone's right to dress as whoever they want.
Conclusion
Over the course of my research, I have found that there is no real reason to consider cosplayers odd. While there may be the occasional "weird" or obnoxious character, most cosplayers are as normal as everyone else. Granted, what I have observed includes only a few perspectives of the cosplaying world, one of which was my own. Everyone has their own opinion, and I did not fully explore every perspective out there. Still, during the course of this essay I have learned much more about the cosplay community, and I hope to delve deeper into it in the future. I will leave readers with this observation: most cosplayers are not weird, and cosplaying is not a bad thing for fans or newcomers to try. In fact, it can be quite enjoyable.
REFERENCES
Elle. (2012, May 12). Why Cosplayers Are Weird (And Give Normal Fans A Bad Name). Retrieved October 27, 2013, from One Girl And The World: http://xonegirlandtheworldx.blogspot.com/2012/05/why-cosplayers-are-weird-andgive.html
Feldmann, S. (n.d.). The History of Cosplay. Retrieved October 27, 2013, from Strange Land Costuming: http://www.strangelandcostumes.com/history.html
119
Granshaw, L. (2013, November 16). Hot Topic yanks shirt that infuriated the cosplaying community. Retrieved November 30, 2013, from The Daily Dot: http://www.dailydot.com/fandom/hot-topic-cosplay-definition-community-twitterbacklash/
Hoevel, A. (2011, November 3). The art of cosplay is not the art of getting hit on. Retrieved October 27, 2013, from CNN: http://geekout.blogs.cnn.com/2011/11/03/the-art-ofcosplay-is-not-the-art-of-getting-hit-on/
Kakoyiannis, A., & Angelova, K. (2013, October 15). We Asked Cosplayers At Comic Con What They Do In Real Life And Their Answers Were Awesome. Retrieved October 27, 2013, from Business Insider: http://www.businessinsider.com/real-jobs-of-comic-concosplayers-2013-10
Nakamura. (2013, October 27). Cosplaying Interview. (C. Ferris, Interviewer)
Orsini, L. (2012, June 27). Cosplaying While Black: New Tumblr challenges stereotypes with photo project. Retrieved November 30, 2013, from The Daily Dot: http://www.dailydot.com/society/cosplaying-while-black-chelsea-medua-interview/
The History of Costuming. (2005). Retrieved October 27, 2013, from Costuming.org: http://www.costuming.org/history.html
120
In Pursuit of Hearing: Looking into Cochlear Implants (By Jessica Bie)
The screen opens up on YouTube, and in view is a beautiful eight-month-old baby boy sucking on a pacifier in his mother's arms. The child looks like any other baby, but we soon find out that he was born deaf and has never heard any sounds in his young life. This video was taken right after the child had cochlear implant surgery, and the doctor begins to activate the implant for the first time. Once on, the cochlear implant makes it possible for this child to hear his mother for the first time ever. His bright little smile lights up the room as his pacifier drops out of his mouth at the simple words of "Hi, Johnathan," from his mother. Johnathan's adorable smile, in awe at hearing his mommy's voice, is beautiful enough to make me teary-eyed. How is this miracle possible? Previously, I thought that once someone lost their hearing, it was a detrimental, irreversible handicap. I then thought that, with cochlear implants, Deaf people would not have to suffer from deafness. After watching this video, it appeared that the world could undergo a new change, and Deaf people would not have to endure the challenges of trying to fit into the hearing world because they could readily be a part of it. I questioned: if this incredible miracle of technology is available, why do all Deaf people not have it? However, after looking at the research I conducted and the implications of cochlear implants for the Deaf community, my view of this "miracle technology that can solve deafness" began to change; I came to see the implants as more of a curse than a blessing for the Deaf world. With the loss of Deaf culture, ASL, and community, and the implications for those who get the surgery, it can be said that the new cochlear implant technology is taking something away from the Deaf community. First, the cochlear implant has been seen in action as a way to aid hearing for the Deaf, but how does this "miracle" really work, and where did it come from? According to Stuart S. Blume's book The Artificial Ear: Cochlear Implants and the Culture of Deafness, published by Rutgers University Press, the cochlear implant was invented in 1961 by William F. House, who earned a medical degree in oral surgery from the University of Southern California after first earning a doctorate in dentistry from UC Berkeley (Blume 45). However, this first model was rejected by the patient's body, meaning that the body had a negative immune response to the foreign object implanted into it, much as a body can reject an organ transplant. House further developed the technology through many more tests and models until he finally created the House/3M Cochlear Implant. Twenty-three years after that first implant, in November 1984, the House/3M was approved by the Food and Drug Administration (FDA) for use in deaf adults. The FDA stated that this first approved cochlear implant "allows many cochlear implant patients to detect environmental sounds and conversational speech at comfortable listening levels" (Blume 46). Since then, cochlear implant technology has improved even more, providing hearing for any child or adult who is profoundly deaf or who has significant hearing loss. Today, cochlear implant surgery is approved for anyone over the age of 12 months with significant hearing loss. With such power, one would think cochlear implants would include a series of many complex parts with names only otologists could pronounce.
Even though the process of making a Deaf person hear seems extremely complicated, the parts involved are not so complex or foreign. 121
The cochlear implant is made up of four main parts: the microphone, speech processor, transmitter and receiver/stimulator, and electrodes. According to the American Speech-Language-Hearing Association (ASHA), a cochlear implant is simply "a small electronic device consisting of surgically implanted internal components with an externally worn speech processor" (ASHA 1). The Public Broadcasting Service's (PBS) website explains in detail the step-by-step process of cochlear implants and their parts. This small electronic device works as a system, with sound passing through each of the four parts in turn. The first part is a microphone inside a headset worn outside the patient's ear. The microphone picks up sound from the headpiece, and a cord carries that outside sound into the speech processor, which is also worn externally. The speech processor digitizes the sound using an extremely powerful tiny computer that transforms the sound into coded signals. Each speech processor is tailored to the cochlear implant patient's needs, with options ranging from programmable volume controls to different sizes and batteries. The next parts in the process are the transmitter and receiver. The transmitter also sits behind the ear next to the microphone, held in place by small magnets in the receiver under the skin. The receiver is like a little radio surgically placed under the skin behind the ear. The transmitter sends the coded signals, as electrical impulses, to a coil that passes them on to the receiver. Now inside the ear, the receiver's wire leads these impulses to the electrodes in the fluid of the cochlea. This transfer then stimulates the auditory nerves in the inner ear. The final step involves the electrodes and the one part that occurs naturally in the human body, the auditory nerve. The cochlear implant delivers the appropriate electrical energy to the pool of electrodes, which causes the auditory nerve to transmit the electrical signal to the brain, where it is interpreted as a sound (PBS 1-48). The way cochlear implants operate is a pretty remarkable process indeed, but there is also something else that uses a similar process to create sound: the human ear. The cochlear implant replicates how a hearing person hears, adding the components that make it possible for a Deaf person to hear. Even though the implant approximates the processes of normal hearing, it most certainly is not able to accurately replicate all of them. Currently, a cochlear implant cannot provide normal hearing, but if successful, it can provide auditory information that improves a Deaf child's or adult's awareness of sound and of their surroundings. As for the procedure and implantation, the surgery lasts between three and five hours, not including the therapy that goes along with it. According to the graphic description from the ASHA, after the patient is sedated, the surgeon implants the device through a "surgical cut and drilling of the skull just behind the ear into the mastoid bone to access the inner ear (the cochlea) insertion of the electrodes into the cochlea, and placement of the receiver in the temporal bone, above the mastoid bone" (ASHA 8). Two to four weeks after the long surgery, the patient returns to the clinic. There the implant is mapped and programmed until it is ready to be turned on. Once this happens, the patient returns multiple times for tests and therapy to acclimate them to hearing with their new cochlear implant.
The way the cochlear implant has been described so far is fairly simple and concrete, but in actuality, its success rate does not follow such a neat pattern. 122
Several doctors, writing in the Journal of the American Medical
Association (JAMA) Network, conducted a study on the outcomes of twenty-five profoundly Deaf children who received cochlear implants through the decision of their parents. The study reported that "success," where the outcome equaled or exceeded the prognosis, occurred in 19 cases (76%). The successful group contained some "limited gains" cases where the results were "nonetheless in line with expectations and parental satisfaction" (JAMA 12). This study showed that more than three-quarters of the patients experienced success (success meaning the implant was not rejected by the person's body, not the quality of their hearing after surgery), while the other 24% experienced no benefit from the surgery. However, the interesting part is that within that three-quarters, some of those who had a successful surgery described their gains as "limited," saying that the surgery was not worthwhile and did not meet their expectations. It can be inferred that parents felt these gains were "limited," but those cases are still considered part of the successful group, since the implant was accepted by the body. However, the degree of hearing is still a factor. Furthermore, in the "limited gains" cases, the children's hearing may not have improved very much; therefore the parents felt that the risks and costs of undergoing cochlear implant surgery for their children outweighed the benefits. The same study also discovered that implantation at a younger age proved to be more successful, whereas later implantation had lower success rates. The study supports this by saying, "The detrimental effect of delayed implantation was evident" (JAMA 13). With support from this study, it appears that the "miracle" I was viewing in that first YouTube video of the deaf eight-month-old may not, and often does not, reflect the outcome of most cochlear implant surgeries. Learning about the limited success of cochlear implants spurred me to find out more about the Deaf community. Throughout history, Deaf people have overcome many challenges to create their own rich culture. In the past, Deaf people went through significant oppression, such as the banning of sign language, workplace discrimination, and overall exclusion, because they were seen as "broken" by the hearing world. Damned for Their Difference: The Cultural Construction of Deaf People as Disabled is a groundbreaking sociological history of Deaf culture co-authored by Jan Branson, Director of Australia's National Institute for Deaf Studies and Sign Language Research, and Don Miller, Monash University Professor of Sociology and Anthropology. The book highlights how hearing people believe that the Deaf community should fit into the hearing world and not practice sign language and their Deaf identity, just because they use language differently (Branson 61). Throughout my research, I have used the word "deaf" with both a capital "D" and a lowercase "d." This is not merely a typing error; whether or not "deaf" is capitalized plays a part in Deaf Culture. Janet L. Pray and I. King Jordan, both from the Department of Social Work at Gallaudet University, published an article, indexed on PubMed, titled "The Deaf Community and Culture at a Crossroads: Issues and Challenges," with a note at the end of the article that makes the differentiation clear. They state that the lowercase "d" is used when describing deafness as a physical condition, such as being hard-of-hearing or profoundly deaf, most likely in a medical setting.
However, the capital “D” is used to describe Deaf people who are proud of their Deaf Culture and use of sign language, and who fully embrace and accept their Deaf identity.
123
Thus, a person who considers him or herself "deaf" would be one who rejects the Deaf community and tries to assimilate into the hearing world by using methods such as lip-reading and speech, and by ignoring their Deaf identity (Pray & King 95). A "Deaf" person might look down upon people who see themselves as "deaf," because they are basically rejecting a whole community that they should belong to. It seems as if "deaf" people would be more likely to choose cochlear implant surgery than a "Deaf" person would, and therefore a "deaf" person would follow the hearing-dominant view over the Deaf community, despite being deaf themselves, since they are trying to be "fixed." Despite my own assumptions, the most common practice is to refer to deaf people with the lowercase "d" only when deafness is primarily a medical condition, but this becomes an issue because each individual has their own view of their deafness, leaving the question unsettled (Pray & King 95). Therefore, when referring to a Deaf person whose sense of Deaf identity you do not know, I would think it is respectful to use the capital "D" so as not to risk offending. While researching the oppression Deaf people endured throughout history, I also found a resource for Deaf people called Deaf Websites, which gives basic information and resources to Deaf people about culture, sign language, technology, and, what I looked at most, education. Deaf Websites provides information regarding Deaf education, not only as it is now but also as it was in its oppressive past. Deaf students in schools were not only discouraged from using sign language; they were scolded for it. As an alternative, Deaf Websites highlighted how oralism was promoted, which is defined as teaching children lip-reading to read speech cues, and teaching them to speak while banning signs (Deaf Websites 1). With the oralism method, Deaf children were forced to use speech and lip-reading as their only forms of communication. Iain Hutchison, a graduate of the University of Stirling and now a Research Affiliate in History of Medicine at the University of Glasgow in Scotland, wrote "Oralism: A Sign of the Times?" Hutchison highlights the logical issues presented by the oralism method, stating that oralism does not work for deaf children because an oral-only system is very difficult for Deaf children to learn and 75% of lip-reading is guesswork, setting them up for failure (Hutchison 5). Even if oralism is taught to assimilate them, "progress is slow and the final result is often an adult who can communicate only with other deaf people and with a limited number of hearing people" (Hutchison 4). With the ban on sign language, discrimination in the workplace and schools, and an attitude of complete pity toward Deaf people, the oppressive nature of the hearing world took a heavy toll on the Deaf community. However, the constant oppression did lead to change, with Deaf people fighting back to gain rights as individuals. According to the National Association of the Deaf (NAD) website, the earliest actions of the NAD brought Deaf people together to fight discrimination in schools and workplaces, such as securing the right to federal civil service employment and the protections of the Individuals with Disabilities Education Act (NAD 3-4). This is significant to Deaf History because, in an effort to save American Sign Language, the NAD made a series of films in sign language to spread knowledge of Deaf culture. Overall, the NAD has fought over the years to protect Deaf rights.
In doing so, Deaf people took part in many protests to stand up for their rights as individuals and to protect their Deaf culture. 124
John B. Christiansen and Sharon N. Barnartt, both teachers at the all-Deaf Gallaudet University, highlight the Deaf community's overcoming of adversity in their book, Deaf President Now!: The 1988 Revolution at Gallaudet University, where they discuss the day-by-day protests and the effectiveness of the outcomes. The protest began at Gallaudet University when Deaf students demanded a Deaf person as president, but they were denied by the board, which selected another hearing president. As a result, the Deaf students and faculty began the campaign, and after gaining worldwide recognition, their wishes were met and a Deaf president was selected. According to Christiansen, this campaign is just one of the many instances in which Deaf people showed that they can overcome obstacles despite their disability. By overcoming adversity together, Deaf people have created a mutually supportive community. Gallaudet University Press published A Place of Their Own: Creating the Deaf Community in America by John V. Van Cleve. Van Cleve highlights how truly strong the Deaf community is by portraying them as a large group coming together for a cause they believe in and making a real change despite a disability. Through many examples in history, Van Cleve depicts how the oppression and discrimination undergone by Deaf people certainly did not break them down. If anything, it brought them closer together as one. This aspect is especially apparent when he talks about the creation and recognition of the NAD (National Association of the Deaf) in 1880 as an entity bringing Deaf people together to fight discrimination in schools and workplaces. One of the most important aspects of Deaf culture, in addition to its supportive community, is the way in which Deaf people communicate. American Sign Language is what piqued my interest in the Deaf community, as I chose to take ASL as an elective throughout high school. There are many misconceptions about sign language as an actual language, but the article "Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language" by Fabian C. Benitez-Quiroz provides clarity. Many people do not know that sign language is not universal. Most spoken languages have their own sign language; for example, Spanish Sign Language, French Sign Language, German Sign Language, and so forth. According to the Linguistics Department at the University of California at Santa Cruz, a language can be defined as "a complex of knowledge and abilities enabling speakers of the language to communicate with each other, to express ideas, hypotheses, emotions, desires, and all the other things that need expressing" (UCSC 1). If this is what defines a language, sign language meets each of these qualifications. Sign language just communicates differently, with gestures and facial expressions, through manual and non-manual signs, instead of via the spoken word. Each form of sign language has its own grammar and linguistics, and therefore each is a language. Signing is different from spoken language, not inferior to it. Sign language is the easiest way for Deaf people to communicate with each other. So why should Deaf people conform and try to fit into the hearing world by adopting a new language when they have a perfectly legitimate language to communicate with? After learning about Deaf culture and why it needs to be preserved, I can see that cochlear implants might threaten the Deaf community. One aspect is the loss of Deaf culture.
More and more Deaf people are finding that, with the development of cochlear implants, new generations of Deaf children will not be taught about the oppression of Deaf people and will eventually forget it. 125
Pray and King also address concerns of Deaf people, especially regarding the education of Deaf children with cochlear implants. They talk of "Deaf People Empowered" and its implications (Pray & King 18-26). Highlighted in this concept is the fear that Deaf children will not be taught about their rich culture and ASL. These things are important because they help define the characteristics of Deaf culture and make Deaf people part of a linguistic minority, not people with a disability (Pray & King 22). Once again, in A Place of Their Own, Van Cleve discusses how even in the late 1980s, the all-Deaf school of Gallaudet University "still did not offer a course that would help its students understand their past" (Van Cleve vii). The Deaf community does not want to go back to the lack of education in Deaf history, when the achievements of Deaf people were ignored by the public and not recorded in history because of the preconceived notion that Deaf people could never achieve anything. Furthermore, Deaf people's fear of losing their culture and identity is not irrational. The worst thing to do is to let history repeat itself and go back to the time when Deaf people had very few rights, because they have already fought so hard to gain them. That would mean their hard work did not amount to anything. Another thing the Deaf community is afraid of losing because of cochlear implants is American Sign Language. This is a growing problem because Deaf children born into hearing families will not be taught sign language. A hearing family with a Deaf child will most likely elect to get their child a cochlear implant at a young age for best results, due to the success of young implantation and the fact that hearing families generally do not know about the Deaf community. "W(h)ither the deaf community?", a study conducted by T. Johnston at the University of Newcastle in Sydney, highlights how in Australia (a pattern also consistent with the U.S., Britain, and other developed countries), there are fewer and fewer Deaf people because of advanced technologies (Johnston). Before the development of new technology like cochlear implants, deafness would result in a more culturally Deaf-influenced life than many have today. The study supports this by showing that "declining prevalence and incidence rates have immediate implications for sign-based education, teacher-of-the-deaf training programs, and educational interpreting" (Johnston). As a result of the declining number of people who need sign language services, less and less funding will be provided for the Deaf community. This is catastrophic for Deaf people because even though their strength in numbers is decreasing, there are still Deaf people who need these services, and there will always be people who rely on signing. In the past, with a larger number of Deaf people, services such as sign language education and interpretation became a necessary human right for the Deaf community. However, if the number of people getting cochlear implants increases, the decreasing number of fully Deaf people will cause sign language to become an ignored linguistic aspect of the minority (Johnston). According to Robert Sparrow of Monash University in "Defending Deaf Culture: The Case of Cochlear Implants," the loss of sign language is detrimental because signing offers myriad ways to express oneself and communicate that simply cannot be matched in spoken language (Sparrow 137). 126
Sparrow highlights this when he says, "People who are deaf often have skills and abilities that hearing people lack. First and foremost of these is the ability to communicate in Sign as a natural first language. But deaf persons may also have a superior consciousness of
subtlety of gesture and of the movement of bodies through space than do hearing persons" (138). Deaf people may have superior alternative methods of communicating without sound. Therefore, to what extent should deafness be considered a disability when deafness means not being able to use a particular sense? These losses of identity, culture, and American Sign Language, caused by a rise in cochlear implant surgeries, bring to mind many questions and objections: Why is hearing valued so much more highly among the senses, when there are four other senses that are not only fully functional but heightened by the absence of that one sense? Could this even make Deaf people superior in certain environments? Therefore, does the Deaf community really need to be "fixed" with cochlear implants? The questions raised thus far have originated from my confusion over why cochlear implants were not more widely used as a solution to hearing loss. However, this view was only from a hearing-dominant perspective. This is a turning point in my research process, because I realize that if I were to look at the same question (cochlear implants as a solution) from a Deaf person's perspective, that framing completely misses the questions that are more important to Deaf people: Why is there such an emphasis on trying to "cure" me of my Deafness? Why can't I be allowed to exist using my other senses, to use ASL, and to express my Deaf identity? Losing ASL hits home personally for me because I have spent three years of my life learning American Sign Language. From my experience, it is a beautiful language through which I can express myself in ways I cannot using spoken English. It is difficult to imagine how Deaf people who have used sign language their entire lives must feel, now that they are at risk of losing their language over time. As more of the consequences of the not-so-miracle technology of cochlear implants are discovered, the harsh implications for Deaf people and their culture are revealed, resulting in a paradigm shift for me. At first, upon seeing that sweet, tear-jerking YouTube video of the baby hearing for the first time, I thought of cochlear implant surgery as a blessing for Deaf children and their families everywhere, so that they could fit into the hearing world and assimilate. However, what I am finding is that patients with cochlear implants do not fit into either the Deaf or the hearing world, because of the way hearing and Deaf people see them, and because of what a cochlear implant itself is. A primary goal of a Deaf person who gets the cochlear implant is obviously to hear, but usually so that they can fit into the hearing world for social reasons (such as functioning at a job) or for other personal reasons. However, even though these Deaf people paid costs exceeding $40,000 for their surgery and treatment (according to the ASHA), the hearing world generally does not think of people with cochlear implants as "hearing" because they are not naturally hearing. Chongmin Lee's "Deafness and Cochlear Implants" does a great job of highlighting that, cochlear implant or not, deaf people are still deaf. He says, "Although cochlear implants can improve the hearing of deaf or hard of hearing children, they cannot fix deafness. By extension, use of cochlear implants is not a guarantee of improved language or cognitive development. As a result, implanted children still remain Deaf people who need others to accommodate their needs to access public services" (821). 127
Since hearing with a cochlear implant is not the same as natural hearing, this raises the question: If special accommodations and access to public services and education are still necessary, why should Deaf people get cochlear implants? (Lee 821). This Deaf scholar is able to argue this point in an unbiased way. He is Deaf and feels empowered by it, but he also knows that he has limitations because of his deafness (Lee 823). Another example, from the television show "What Would You Do?" and available on YouTube, portrays how even with disability rights, Deaf people are still being discriminated against, specifically in the workplace. In this episode, all of the main characters are actors, depicting the manager at a local coffee shop refusing the application of two Deaf girls, who are actually deaf in real life. It is likely these girls have received cochlear implant surgery; they have the ability to speak, and appear to wear hearing aids that resemble cochlear implants. Either way, these girls are able to sign and speak but are targeted as deaf for using sign language. The point of the show is to see whether any real customers react to the situation and how they intervene. Although many customers just ignored the situation happening before their eyes, some actually reacted and spoke to the manager, three of whom have jobs in recruiting and human resources. One would think that people in these occupations, who know the legal obligations to accommodate people with disabilities in the workplace, would try to help the Deaf girls in this situation, but they do just the opposite. These people all have a similar response to the manager, telling him that he just has to give the girls applications to fill out and not tell them straight to their faces that he would never hire a Deaf person in his establishment. From this segment, it is apparent that even if Deaf people can communicate with hearing people, read lips, and have cochlear implants, they are still considered deaf and are looked down upon as being disabled and not as equals. Despite all the development, time, and money poured into cochlear implant technology, and even disability acts to promote fairness for Deaf people, employers can easily disregard them behind closed doors. It is nearly impossible for a deaf person to fit in, because the preconceived views that many hearing people hold are very hard to shake, even after they learn about the cochlear implant. The cochlear implant is not a "cure" for deafness whatsoever, not only because it does not provide "normal" hearing, but because it is not a permanent solution. Once the cochlear implant is removed, the person is still physically deaf. The implant only works when all the parts are on and hooked up; the headset, transmitter, and speech processor must all be connected. However, these parts cannot be worn at all times. The parts cannot get wet, so the patient will always be deaf in the shower, the pool, or at the beach. Also, patients do not sleep with the implant on, so they would have to use what Lee describes in his journal article as the Deaf person's alarm clock, a vibrating bed, instead of waking up to a beeping alarm. These times when patients cannot wear the cochlear implant may seem limited, but when there is no form of communication they can use, a deaf person who does not know what it means to be Deaf can feel lost and speechless in a sound-filled world.
However, if the cochlear implantee is still seen as disabled in the eyes of the “superior” hearing world, can they just go back to the Deaf community, since they still are deaf once the implant is taken off? 128
As sad as it is, the Deaf community is not so welcoming of cochlear implantees as equals. This can especially be seen in the 2002 documentary Sound and Fury, about two families with young Deaf children who are conflicted over giving them cochlear implants. The six-year-old girl in the all-Deaf family, who is thinking about getting a cochlear implant, has grown up learning sign language and Deaf culture. Her parents made it her choice whether to have the surgery or not, but the Deaf community had a lot of influence on her as well. The Deaf community brought up that they see cochlear implant patients as ruining their culture and destroying their sign language. Some may argue that the child can get a cochlear implant and still learn sign language, so that she will still fit into both the hearing and Deaf worlds, but when the family visits the cochlear implant school, they see the one-sided, speech-only way Deaf children are taught there, which quickly diminishes their Deaf identity. Children with cochlear implants are taught only speech and are discouraged from using signs, which seems strangely similar to the ban on sign language in schools in 1880, because the educators believe a child will get too confused having to learn to speak and sign at the same time, leading to difficulty in both areas. Either way, once again these children will be left out of both the hearing and Deaf worlds. The other case brought up in the documentary concerns a family with newborn twins, one deaf, with hearing parents. The issue is that cochlear implant surgery is best conducted while the child is an infant, as shown earlier in my paper. The reason surgery works best when the child is an infant is that infants learn spoken language more easily and can get used to hearing the world through cochlear-implanted ears. However, the issue brought up to this family is not only the risk involved; their Deaf community on Long Island argues that they are making an unfair choice for their child by performing surgery on a delicate newborn, based only on what the uninformed parent thinks is best. The Deaf community asks them, "What if, when the child grows up, he wants to choose his Deaf identity, not a hearing one?" Suppose, then, that the child wants his Deaf identity, but he was never taught ASL or Deaf culture. He is stuck until he learns sign language on his own, which becomes more difficult as age increases. And even if he overcomes the challenge of having to learn the Deaf form of communication, the implant receiver is permanently under the skin and cannot easily be removed without additional surgery. Therefore, the Deaf community argues that the child should not be implanted at a young age, and that the family should wait until he is educated and mature enough to make his own decision. Yet while this method may seem tactful, will the success of the surgery decrease dramatically by waiting? As a rebuttal, the Deaf community brings up the question of whether being deaf is really a handicap. Cochlear implants or not, the only thing Deaf people cannot do that hearing people can is hear. To implant or not to implant: that is the question. By conducting this research, my views on cochlear implants have changed from a solution-oriented examination to a better understanding of the complexity of the issues they raise. Through my research and reading, I now see that my previous views and approach to this question were biased because I was only operating from a hearing-dominant perspective.
However, in the process of my research, I began to see the issue from a Deaf person's perspective. 129
This raises the question of whether those who cannot hear are actually disabled, because the label of "disabled" itself is biased from a single-sense dominant perspective. Deaf people use other senses, signing without any auditory information, and are still able to communicate. My understanding of the ramifications of a "fix or cure" approach as a solution is also not as clear-cut as I first thought, because deaf people belong to more than one culture. The surgical procedure could very well end up isolating an individual from both groups. My research also helped me to see that the dividing lines raised by cochlear implants affect families, who must make decisions about when to consider such a surgical procedure despite the implications of the device for sign language, Deaf culture, and the individual. There are many positives and negatives of cochlear implants, and there are many arguments for keeping the Deaf identity alive. Overall, I have learned that it is a much more complex question, one that needs much more consideration of all the other factors before it can, and should, be answered. Furthermore, the issues brought up in the research are larger than just cochlear implants. People need to stop seeing Deaf people as handicapped and unable to live in the way they do. Deaf people should not have to change; society should make a change. The Deaf community should be accepted for who they are and be given equal opportunities. Either way, you do not have to hear to listen.
WORKS CITED
Benitez-Quiroz, C. Fabian, et al. "Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language." PLoS ONE 9.2 (2014): 1-17. Academic Search Premier. Web. 14 Mar. 2014.
Black, Jane, Louise Hickson, and Bruce Black. "Defining and Evaluating Success in Pediatric Cochlear Implantation--An Exploratory Study." International Journal of Pediatric Otorhinolaryngology 76.9 (2012): 1317-1326. MEDLINE. Web. 14 Mar. 2014.
Blume, Stuart S. The Artificial Ear. New Brunswick: Rutgers University Press, 2009. Print.
Branson, Jan, and Don Miller. Damned for Their Difference: The Cultural Construction of Deaf People as Disabled: A Sociological History. Washington: Gallaudet University Press, 2002. Print.
Christiansen, John B., and Sharon N. Barnartt. Deaf President Now!: The 1988 Revolution at Gallaudet University. Washington, D.C.: Gallaudet University Press, 1995. Print.
Crouch, Robert A. "Letting the Deaf Be Deaf." Hastings Center Report 27.4 (1997): 14. Academic Search Premier. Web. 1 Mar. 2014.
Deaf Websites. "Oralism." Oralism. Deaf Websites, 2013. Web. 8 Apr. 2014.
130
Harris, Jeffrey P., John P. Anderson, and Robert Novak. "An Outcomes Study of Cochlear Implants in Deaf Patients." American Medical Association. JAMA Network, Apr. 1995. Web. 1 Mar. 2014.
Hutchison, Iain. "Oralism: A Sign of the Times? The Contest for Deaf Communication in Education Provision in Late Nineteenth-Century Scotland." European Review of History 14.4 (2007): 481-501. Academic Search Premier. Web. 14 Mar. 2014.
Johnston, T. "W(h)ither the Deaf Community?" [Abstract]. National Center for Biotechnology Information. U.S. National Library of Medicine, 2004. Web. 10 Apr. 2014.
Lee, Chongmin. "Deafness and Cochlear Implants: A Deaf Scholar's Perspective." Journal of Child Neurology 27.6 (2012): 821-823. Academic Search Premier. Web. 1 Mar. 2014.
Marschark, Marc. Sign Language Interpreting and Interpreter Education: Directions for Research and Practice. New York: Oxford University Press, 2005. Print.
"National Association of the Deaf." Civil Rights Laws. National Association of the Deaf (NAD), n.d. Web. 9 Apr. 2014.
Public Broadcasting Service (PBS). "How Cochlear Implants Work." PBS. PBS, n.d. Web. 24 Apr. 2014.
PR Newswire. "Actress and Advocate for Deaf Marlee Matlin Visits Bunker Hill Community College." PR Newswire US 28 Feb. 2014: Regional Business News. Web. 14 Mar. 2014.
Pray, Janet L., and I. King Jordan. "The Deaf Community and Culture at a Crossroads: Issues and Challenges." Journal of Social Work in Disability & Rehabilitation 9.2/3 (2010): 168-193. Academic Search Premier. Web. 1 Mar. 2014.
Sound and Fury. Dir. Josh Aronson. Perf. Peter Artinian, Karl Katz, Chris Artinian, Mari Artinian. PBS, 2002. DVD.
Sparrow, Robert. "Defending Deaf Culture: The Case of Cochlear Implants." Journal of Political Philosophy 13.2 (2005): 135-152. Academic Search Premier. Web. 1 Mar. 2014.
Stevethevlogger. "Deaf Discrimination on ABC's 'What Would You Do?'" Online video clip. YouTube. YouTube, 4 Feb. 2011. Web. 8 Apr. 2014.
UCSC. "What Is Linguistics?" UCSC.edu. Regents of the University of California, n.d. Web. 14 Mar. 2014.
Van Cleve, John V. A Place of Their Own: Creating the Deaf Community in America. Washington, D.C.: Gallaudet UP, 1989. books.google.com. Web. 14 Mar. 2014.
131
Zoot Cadillac. "8 Month Old Deaf Baby's Reaction To Cochlear Implant Being Activated." Online video clip. YouTube. YouTube, 5 June 2010. Web. 13 Mar. 2014.
Zwolan, Terry, PhD. "Cochlear Implants." American Speech-Language-Hearing Association (ASHA), 2011. Web. 14 Mar. 2014.
132
LOOKING AT LITERATURE
133
Fitzgerald and Steinbeck: Is the American Dream Really Dead? (By Elizabeth Dash)
American history and the human life stages have a lot more in common than one would think. America went through a rebellious stage when it declared itself free from its mother nation, Great Britain. America had to learn how to "play nice" with others, such as when it went to war, and it went through trials and errors as it established its niche. Now that America has grown from being a young and perceivably naïve nation, it seems to be the most influential country in the world. America has gained a reputation through its history as a place that offers the opportunity to make a better life for oneself. As a result, the American Dream has become its staple. This has earned the nation nicknames such as "The Melting Pot," "The Promised Land," "The Land of the Free," and "The Home of the Brave." As such nicknames suggest, the Dream eventually became more than a reputation; it transformed into an identity. However, in recent years people have begun to question whether or not the nation has lost its status of prosperity and opportunity. Having grown up in the 21st century, I have witnessed phenomena that have both tested the validity of the American Dream and confirmed its existence. For example, I witnessed the tragic World Trade Center attack in 2001 and the resulting fear of American citizens everywhere. Conversely, I also witnessed moments of great triumph, such as when the first African American was elected president. However, in the aftermath of an economic recession, I have been hearing everyone, from my college roommates to my grandmother, debate whether or not the American Dream actually exists. Listening to these debates led me to question how and why citizens have become so pessimistic about what is supposed to be their country's unique mystique. At first, I believed that the Dream meant the opportunity to raise a family comfortably, own a home with a white picket fence, and host family barbeques on the Fourth of July. However, after researching the topic further, I have learned that there is not just one American Dream. The American Dream means different things to different people, depending on characteristics that include age and political stance. Furthermore, I discovered that the definition of the American Dream has changed with time. In order to assess the question of whether or not the American Dream is dead, a question that is being asked so frequently today, one must first examine its history and how its definition has changed throughout the years. Then, one must scrutinize the present economic conditions and the current state of politics. The question, although on the surface it may seem simple, involves examining the pride of a nation and the concerns of its people. Since the concept of the American Dream involves so many considerations, it is up to the individual to determine whether or not the American Dream is dead for them. In order to explain the history of the American Dream, it is important to note when and where the term was first clearly defined. Jim Cullen, author of The American Dream: A Short History of an Idea that Shaped a Nation, is an American historian who explains that the term was defined in 1931 by James Truslow Adams. In his book, The Epic of America, Adams gives his definition of the American Dream as follows: 134
That American dream of a better, richer, and happier life for all our citizens of every rank, which is the greatest contribution we have made to the thought and welfare of the world. That dream or hope has been present from the start. Ever since we became an independent nation, each generation has seen an uprising of ordinary Americans to save that dream from the forces which appeared to be overwhelming it. (Adams, qtd. in Cullen 4)
In his definition, Adams states that the American Dream was present "ever since we became an independent nation," meaning ever since the Declaration of Independence was signed in 1776 (the document signifying America's freedom from Great Britain). The thought that the inexperienced 13 original colonies could join together to win a war against an established country such as Great Britain would eventually evolve into the idea that hard work results in success. This is the more traditional version of the American Dream, which is just one of many that have developed throughout America's history. Adams goes on to explain that different generations have saved the Dream from forces which have threatened its existence. One of these forces was the Great Depression in the early 1930s. Adams' perspective views America in a very nationalistic way, even though he wrote this definition during the worst economic period the nation had yet experienced (Cullen 4). It may seem ironic that Adams could have such a positive outlook on the Dream during such a tragic period of American history; however, the fact that the American Dream still existed to some extent after the Great Depression supports his claim that the American Dream can withstand the test of time and hardships. In order to decide if the American Dream has actually withstood the test of time as Adams suggests, one must analyze how its definition has varied over the years and how people's perceptions of the Dream were influenced. Six years before Adams defined the American Dream in The Epic of America, F. Scott Fitzgerald wrote The Great Gatsby, an iconic book that portrays the nature of America just a few years before the Great Depression. While my class was reading this book, my high school English teacher began a discussion about how Fitzgerald's novel comments on the American Dream. The protagonist, Jay Gatsby, was a wealthy New Yorker who appeared to have it all. He owned a large estate on Long Island and threw lavish parties every weekend. However, there was a dark side to his success. The reader eventually learns that Gatsby earned his money through illicit alcohol trafficking, and that the only reason he threw such extravaganzas was to win the affection of the love of his life. At first, the reader is led to believe that Gatsby is the epitome of the American Dream, since he made a prosperous living for himself despite being born to poor parents. However, the question arises of whether or not this can actually be considered the American Dream if Gatsby earned his wealth illegally. This raises suspicion regarding the moral values of the American Dream. In a class discussion, some held the opinion that it does not matter how he earned the money; what mattered was that he was able to climb the social ladder to the top. Others believed there is no way it could be considered the American Dream if Gatsby broke American laws to achieve success.
Whether or not Gatsby’s lifestyle was a true portrait of the American Dream also depends on the answer to another debatable question: how can one say Gatsby truly achieved the American Dream if he was not happy? Many argued the American Dream is just as much about happiness as it is about
prosperity. Therefore, based on these two questions, the book begins to portray an underlying meaning of the American Dream that adds more dimensions to Adams’ definition: along with the upward mobility that Adams emphasizes, what role does happiness play in someone’s achievement of the American Dream? Thus, it begins to become apparent that the American Dream is much more complex than one would originally anticipate—in fact, more complex than even Adams anticipated, for even though The Great Gatsby was written before The Epic of America, its depictions of that time period complicate the simpler definition Adams provides. Another iconic novel often taught in classrooms was published eight years after Adams wrote his definition but, like The Great Gatsby, also further complicated the definition of the American Dream. The Grapes of Wrath, written by John Steinbeck, was published in 1939 and tells the story of the Joad family, who uprooted their entire lives to look for opportunity in California after the Dust Bowl destroyed their farm. The Joads had to drive a jalopy across several states in the burning sun and they were forced to live in conditions similar to modern-day slums. Despite their best efforts, the family was less than half its original size by the novel’s conclusion, and their goal of reliving their modest lifestyle on a farm was never reached. When I read this story with my English class, students could not believe the ending. It seemed impossible that America, the nation that acts as a role model for developing nations, could ever let hardworking citizens live in conditions so inhumane. This story of an American family failed by the government’s lack of regulation raises another question regarding the American Dream: Can it be considered the American Dream when not all Americans get the chance to achieve it? This is a rather difficult question to answer, but it helps clarify that the American Dream is in fact a highly complex idea that is hard to define. When one first reads Adams’ definition, the American Dream seems to hold nothing but promise. However, concerns such as this one, derived from The Grapes of Wrath, seem to point out a flaw in Adams’ definition, in that the definition is too broad to refer to all of America’s history and peoples. Thus, the definition needs to change. Just as the Constitution has been amended over time, so must the American Dream be modified to reflect changing times. Here it is important to examine the different stages the American Dream has experienced. After explaining Adams’ quote, Cullen goes on to say that six stages of American history directly influenced the definition of the American Dream: the Pilgrims, the Declaration of Independence, upward mobility, equality, home ownership, and personal fulfillment (Cullen 8). While it is apparent that the stages of the Pilgrims and Declaration of Independence have concluded, the other stages are still ongoing to this day. We must further analyze these stages that have not yet been completed in order to expand upon Adams’ definition of the Dream so that it applies to the 21st century. The first ongoing stage Cullen mentions is the opportunity for upward mobility, which includes the promise that as long as one works diligently, he/she is entitled to the American Dream.
Christopher Jencks, a social policy professor at Harvard University, argues that in America, unlike several East Asian and Middle Eastern countries, one can still make a better life for himself/herself even if he/she is born into a low social class (Jencks par. 5). This relates back to Adams’ quote when he conveys that the idea of a “better, richer, and happier life” is
the “greatest contribution we have made to the thought and welfare of the world” (Adams qtd. in Cullen 4). Although Americans have valued this promise their country seemed to offer, they are now beginning to question if this stage of the Dream is also coming to an end. Fareed Zakaria, Time Magazine author and editor, expresses his concern with the upward mobility stage of the American Dream and addresses some of its causes in his article, “Restoring the American Dream.” Zakaria came to America from India for a college education when he was just a teenager. After living in the States for several years, he noticed something peculiar: what he thought was a nation with an abundance of ambitious go-getters had actually become a nation of people who are “glum, dispirited and angry” (Zakaria par. 4). His article revealed a poll showing that 63% of Americans believe their current decent standard of living is only temporary (Zakaria par. 4). Taking this into consideration, it is no wonder Zakaria comments that developing nations like India, which are developing rapidly, hold more promise for realizing the American Dream than the current state of America (par. 4). This is a disheartening sentiment that seems to prove valid, especially when numerous corporations today outsource jobs (such as telemarketing) to citizens of other countries such as India. The insight that Zakaria provides can give one an indication that the upward mobility stage of the Dream is being challenged, casting doubt on Adams’ assurance that upward mobility is what has caused America to become one of the most influential countries in the world. Although America has kept its promise of upward mobility, that promise is being tested as times have changed. Thus, since upward mobility is a stage of the American Dream, the Dream may have to be altered to keep up with the times in order to survive. This leads us into discussing the forces that challenge the Dream, which can only be comprehended after the additional stages of the American Dream are evaluated. In addition to upward mobility, Cullen discusses home ownership as a stage of the American Dream that people still hope to achieve today (8). Although this stage is important for the economy of America, it is also important for Americans in order to gain a sense of nationalism. If citizens call America their home country and own homes in America, it gives them a more substantial sense of pride in their country. Since the home ownership stage of the American Dream positively impacts both one’s nationalism and the economy, this stage should never cease. However, within recent years Americans have become concerned with achieving this stage of the Dream. Modern Americans are used to hearing the phrase, “It’s a buyer’s market right now,” and have come to regard that state of affairs as a societal norm. Nevertheless, a buyer’s market means that homeowners are having a difficult time selling their property and that the actual worth of their homes is declining. Indications such as these are alarming, making Americans question whether they will ever succeed in owning a home. Thus, like upward mobility, the home ownership stage of the American Dream is experiencing a period of weakness. Concerns arise as to whether the government should help ensure that the dream of home ownership can remain achievable, but such a task is not easily carried out. Another stage that has gained a substantial amount of attention in the 21st century is equality (Cullen 8).
Equality means that everyone, regardless of their personal beliefs or background, has the chance to achieve what they believe the American Dream is. Equality has been achieved in the
legal sense of the definition through the 14th Amendment, which requires that all people remain equally free and receive the same rights as any other. What was once a country where slavery was a reality in everyday life eventually became a place where segregation was eliminated altogether. Although this took decades to accomplish, it represents how the American Dream can be reformed to reflect changes of time through amendments. This idea is an important value of the American Dream, but at the same time it is very difficult to determine to what extent absolute equality exists. This definition of freedom, and who exactly is protected under the amendment, is still ambiguous to some extent today. For example, equality has recently taken on a new meaning in the 21st century, with the lesbian and gay rights movement prompting our political system to legalize gay marriage in several states within the past few years. Prior to this movement, gay marriage had been overlooked as an issue of equality, but now many citizens of all sexual orientations are encouraging this new addition to the definition of equality. Just as it was discovered that the upward mobility stage of the American Dream becomes complicated as times change, so does the equality stage of the Dream. This raises questions as to how well the American Dream holds up today. Can Americans believe in an American Dream that does not include total acceptance of everyone? Questions similar to this have been asked and are not easily answered. The personal fulfillment stage appears to overlap with the equality stage of the American Dream because neither involves any reference to materialism or monetary goals. Personal fulfillment involves people feeling proud, knowing that they have succeeded in something that exceeded their own expectations. An information box in a journal article regarding America’s public school system states that this stage of the American Dream is fairly recent and shows that success may not always be measured by wealth or materialism, but by happiness (“Teens” 3). In other words, the American Dream can be measured by how content a person is, rather than just the monetary amount they earn. Therefore, it might be more meaningful for someone to achieve the American Dream through personal fulfillment rather than monetary success. In times of hardship it seems as though this aspect of the American Dream is the one that motivates people when they feel that all the other aspects are unattainable. For me personally, this is the most important stage of the American Dream. This is mainly because I believe in the old saying that “Money can’t buy happiness,” and that bliss can be achieved by means other than money. If one can achieve personal fulfillment to the extent that there seems to be nothing that could take away from their happiness or the value of their life, I think one can say they have achieved the Dream everyone is searching for. These stages further help define and clarify Adams’ initial definition of the Dream. The stages illustrate that the American Dream as an ideal has become complex. However, despite the fact that the American Dream has changed and encompasses so much more than Adams’ definition initially specified, modern Americans are suspicious of the Dream. These suspicions are what have caused them to wonder if the American Dream is dead, and even if it has ever existed to its fullest promise in the first place.
The causes for such concerns are many, and like the definition, they are very intricate. Nonetheless, it is vital to examine the current vulnerable state the American Dream is
experiencing by analyzing different views held by Americans and the state of America today, in order to decide if one personally believes the Dream has ceased to exist. Sandra Hanson and John White, professors of sociology, argue in American Dream in the 21st Century that “America has lost its way as well as its legacy of core values of economic and social justice” (Hanson & White 141). The evidence behind such a powerful statement can be derived from both a declining economy and lack of trust in politics. For example, in 2007, the United States experienced its worst economic downturn since the Great Depression in the 1930s (Hanson & White 77). This calamity triggered a trend in which America has been spiraling downward. Ever since the crisis on Wall Street, Americans have taken out more and more debt in order to recover, while the middle class seems to be vanishing. This disappearing act is apparent when analyzing the skyrocketing unemployment rates of average wage earners. In fact, Robert Borosage and Katerina Heuvel, writing in The Nation, quantify this transition in their article, “The American Dream: Can a Movement Save It?” when they explain that “25 million Americans are in need of full-time work, wages are declining and one in six people lives in poverty, the highest level in 50 years” (11). These statistics are partly attributed to corporations, which have also been affected by the economic crisis. Concerned with increasing revenue, corporations continue to lay workers off, while outsourcing their labor to various countries (Borosage & Heuvel 11). This fairly recent trend in labor cost minimization has given rise to a new aspect of competition. Not only are Americans competing with other American job seekers, but also with foreign job seekers. This has given rise to arguments that it is unethical for American corporations to rely heavily on hiring employees abroad to fulfill their business operation needs. Taking possible jobs away from American citizens is often viewed as a means of slowly stripping citizens of the American Dream. After all, the American Dream is much harder to achieve if one is unemployed. The current trend of unemployment also poses a high threat to both the upward mobility and home ownership stages of the American Dream, and is just one concern people have involving its achievability. However, as stated before, one’s attitude toward the American Dream is one’s personal opinion. In fact, Hanson and White state their opinion that the American Dream does still exist; however, they believe that the Dream is much harder to achieve today for a variety of reasons, chief among them unemployment (Hanson and White 141). Unemployment is a valid concern that needs to be addressed in order to settle the worries of Americans such as Hanson and White, who believe the current state of the economy threatens the American Dream. However, the likelihood that the United States government will agree on how to solve the problem is slim to none. This is mainly due to the fact that American politics are split (Jencks par. 9). The Republicans and Democrats, the two most influential and historic political parties in the United States, cannot agree on how to assist in settling Americans’ concerns. These different methods of dealing with the issue result partly from the parties’ inability to agree on what the definition of the American Dream is.
The Republicans believe that a select number of people who display “individual talent and effort” can start their own successful business entity and become wealthy (Jencks par. 5). Conversely, the Democrats generally believe “everyone
who works hard and behaves responsibly can achieve a decent standard of living” (Jencks par. 5). The Left Wing (Democrats) and Right Wing (Republicans) separation of politics is highlighted by this divide because the Democrats are usually viewed as the more liberal party that believes more government regulation should be implemented, while the Republicans are generally more conservative and believe that government deregulation is best. Since both parties have different perceptions of what the American Dream is, it makes sense that both parties have significantly different methods of reconciling concerns Americans have with the vulnerability of the American Dream. Although it is possible for one to stand between both sides, most politicians hold a firm stance on just one end of the political spectrum. This split in American politics reflects how the American Dream has evolved from Adams’ simplistic definition of opportunity to a much more complicated topic involving discussion and even political debate. For one to determine if the Dream is still alive, one must first be informed of how both political parties view the current state of the Dream and how they plan on resolving Americans’ concerns. Barack Obama, current U.S. president, addresses the Democratic stance on the American Dream in his book, The Audacity of Hope: Thoughts on Reclaiming the American Dream. This book was published in 2006 while Obama was serving as a U.S. Senator from Illinois, before he was elected president. The title of the book is almost startling to the common reader since it suggests that the President does not see the American Dream as merely in a vulnerable state, but as absent altogether. This is seen through his word choice of “reclaiming.” Instead of using words such as “appraising” or “evaluating,” which give hope that the Dream might still exist, Obama chooses a harsh word that implies the Dream is already gone. Obama continues this theme throughout his book after he offers his personal definition of the American Dream. He begins his definition by describing the characteristics he noticed in Americans during his campaign for the Senate, characteristics that he believes foster the American Dream. He states: Not only did my encounters with voters confirm the fundamental decency of the American people, they also reminded me that at the core of the American experience are a set of ideals that continue to stir our collective conscience; a common set of values that bind us together despite our differences; a running thread of hope that makes our improbable experiment in democracy work. These values and ideals find expression not just in the marble slabs of monuments or in the recitation of history books. They remain alive in the hearts and minds of most Americans and can inspire us to pride, duty and sacrifice. (Obama 8) This quote makes it apparent that Obama does have pride and faith in Americans even though he goes on to explain that one of his major concerns involving U.S. politics is the lack of citizens’ trust in the political system. To illustrate this point, Obama describes the Democratic Party as being “smug, detached, and dogmatic at times,” which coincides with his claim that “you don’t need a poll to know that the vast majority of Americans—Republican, Democratic, and Independent—are weary of the dead zone that politics has become” (8).
Obama’s blunt view of his own political party mirrors his bleak view of the American Dream, evident even before the reader opens the book. As he claims that Americans are weary of the “dead zone” of politics, Obama implies that in order to address the issue of the American Dream, trust in politics must first be restored. This idea is also seen in the way Obama explains that the purpose for
writing his book is to describe “how we might begin the process of changing our politics and our civic life” (9). Obama’s desire to focus mainly on reforming politics is one that certainly needs to be addressed, but how has the political system become so distressed? According to Hanson and White, ever since the Watergate scandal, which began with the illegal break-in and wiretapping at Democratic Party headquarters in 1972 and ended with President Nixon’s resignation in 1974, Americans’ faith in politics has been declining further and further (31). Obama believes once the confidence in politics is restored, the pursuit of economic security will ensue (9). However, his methods for achieving his goals are what cause the Republican Party to disagree with his approach to regaining hope in the American Dream. Marco Rubio, a Republican senator from Florida, responded to Obama’s thoughts on the current state of the American Dream when he delivered his speech, “Opportunity Isn’t Bestowed Upon Us By Washington.” According to Rubio, Obama’s tactic would also involve taxing the wealthier people in society in order to create programs such as free healthcare for people of poorer economic status (par. 9). Since citizens tend to dislike it when taxes increase, it is no wonder Obama wants to restore trust in the political system first. Rubio then continues to explain what he believes needs to be done. As opposed to regaining citizens’ trust and then taxing them right away, the Republicans believe the government should focus on reforming its immigration policies instead. With such a substantial number of illegal immigrants in America, Republicans believe legislation should be passed allowing only the brightest immigrants (in terms of education and success) to enter (Rubio par. 34). Republicans trust that this careful increase in legal immigration and decrease in illegal immigration will help the economy more than increasing taxes on the wealthy. Like Obama’s plan to increase taxes, this Republican campaign is highly controversial. As an American myself, I am not sure if restricting immigration is the right decision to help the economy, but I also do not think the upper class will easily accept another tax increase. It cannot be said, however, that one side is right and the other is wrong because it depends on one’s personal opinion. Since the two political parties’ views of saving politics and the economy in order to address the current state of the American Dream conflict so sharply, it is helpful to analyze the opinions of people who are not politicians. Award-winning professors and authors of sociology have different opinions of how to evaluate the current state of the American Dream, but there seems to be a recurring theme throughout each of their arguments. Many believe that the political parties should not intervene at all in trying to rescue certain aspects of the Dream. As Borosage and Heuvel argue, the government should not be responsible for reclaiming the American Dream since it has been inefficient at saving it thus far: “In the face of a failed economy and corrupted politics, the only hope for renewal is that citizens lead and politics follow” (Borosage and Heuvel par. 11). Although this might challenge societal norms that the government is needed to help citizens create a safe and prosperous society, it may be the answer to finally breaking the stalemate of politics between Democrats and Republicans.
For example, a group of yuppies (young urban professionals) has recently banded together to take the matter into their own hands. They initiated a movement named “The American Dream Movement,” in which they take to the streets and protest that the American Dream
is dead and that something must be done to resurrect it. The images of young Americans waving signs in the air on the streets of cities draw a great deal of attention to the issue of how the American Dream can be saved. Although this is a fairly recent movement, the added pressure and attention on the issue support Borosage and Heuvel’s claim that if Americans take action, maybe the government will not be far behind (Borosage & Heuvel par. 23). The “American Dream Movement” has magnified Americans’ belief that the issue is a sound one and that it needs to be dealt with now. It may seem interesting that the Americans leading such a movement are the young Americans who probably earned, or who are working toward earning, a college degree. Why, then, are those who seem to have such a promising future protesting that the American Dream is dead? According to Marco Rubio, their frustration is mainly a side effect of the thousands of dollars in loans many borrowed in order to pay for their education. Rubio explains that he graduated from college with over $100,000 worth of debt and did not finish paying the loans until a few months prior to delivering his speech (par. 44). It is exasperating how much money students need to borrow just to earn an education, but since the job market has become increasingly competitive, this extended education appears to be becoming more a necessity than a choice. In fact, “70% of teens believe post-secondary education is essential to achieving the American Dream” (“Teens” 3). Thus, a cycle emerges in which young Americans take out loans in order to have a successful career, just so they are able to pay off the loans they took out in order to have the career. As a result, this educational paradox causes current college students, recent graduates, and even professionals to become wary of the authenticity of the American Dream’s promise of success. However, one statistic reveals the faith of young Americans in their capabilities to achieve the American Dream, with 71% believing the American Dream can be achieved today (“Teens” 3). What causes teens to appear so optimistic is hard to pinpoint, for it could stem from a multitude of reasons. Some may be more naïve than their parents, who feel they have been failed by the Dream, but others may actually still hold a genuine sense of pride in their nation, perhaps because they have witnessed firsthand evidence that the Dream still exists. Hence, stories of people who have achieved great success act as role models for these young Americans who, unlike their parents, still have at least some confidence in the American Dream. A prime example of a success story is the election of the first black president in American history, which these young Americans experienced firsthand. These young Americans were more likely to appreciate such a monumental event because they were less likely to dislike the President strictly based on his political party association. In fact, many of them innocently saw the election of a black president, rather than the election of a Democratic black president. I was able to observe this distinction during the election and inauguration period, when teenage Americans discussed in the hallways at school how great it was that our nation had elected a man who did not fit the typical profile of an American president. Although some may have disagreed with his political stance, many felt fortunate that they lived in a time period when this was possible.
Along with young Americans’ optimistic responses, many Americans believed that Obama’s inauguration was a sign that racial discrimination had been eliminated permanently, while others were surprised that a female president had not been elected first. Nevertheless, it was an event that resounded across the globe. Obama’s father came to the University of Hawai’i from Kenya, and
during his time abroad, he met the Caucasian woman he would marry (Hanson & White 37). The perception that a man could come from an underdeveloped country such as Kenya and father the future President of the United States gave many young Americans hope that perhaps the American Dream still exists. Young Americans of every ethnicity became hopeful that one day, that might be them being sworn into office. This inspiring concept is what has led several people to begin questioning if the Dream is in fact dead, or if it just needs to be reclaimed or redefined. From the perceptions of political leaders, established authors, and even young Americans, it is evident that the American Dream does not have just one definition. From Adams’ original nationalistic definition, the idea has evolved through time to reflect the changes America has gone through. Just as the politicians have significantly different views of what the Dream is and how to defend it, so do all Americans. Each person’s definition of the Dream will vary depending on his or her experiences, education, and background. The American Dream is a combination of various definitions, and therefore the ways in which to ease concerns that it is dead will vary too. As a young American pursuing a college education in business, I want nothing more than to see our nation prosper and restore the American Dream to its former glory. Although it will take time for all Americans to once again believe the American Dream is achievable, I believe that our country has already survived enough trials and errors that we can once again restore pride in America and its Dream.
WORKS CITED
Borosage, Robert, and Katerina Heuvel. “The American Dream: Can a Movement Save It?” Nation 10 Oct. 2011: 11-15. Print.
Cullen, Jim. The American Dream: A Short History of an Idea that Shaped a Nation. Cary, NC: Oxford University Press, 2003. Print.
Fitzgerald, F. Scott. The Great Gatsby. New York: Scribner, 2004. Print.
Hanson, Sandra, and John White. American Dream in the 21st Century. Philadelphia, PA: Temple University Press, 2011. Print.
Jencks, Christopher. “Reinventing the American Dream.” Chronicle of Higher Education 55.8 (2008): B6-B8. Print.
Obama, Barack. The Audacity of Hope: Thoughts on Reclaiming the American Dream. New York: Three Rivers Press, 2006. Print.
Rubio, Marco. “Opportunity Isn’t Bestowed on Us by Washington.” Vital Speeches of the Day 72.4 (2013): 105-107. Print.
Steinbeck, John. The Grapes of Wrath. New York: Penguin Classics, 1992. Print.
“Teens Say Post-Secondary Education Key to Achieving American Dream.” School Planning and Management 44 (2005): 3. Print.
Zakaria, Fareed. “Restoring the American Dream.” Time 1 Nov. 2010: 30-35. Print.
Feminists Never Rest: A Study of “The Yellow Wallpaper” (By Rosario Kuhl) Imagine you are slowly losing your mind: you experience extreme bouts of depression yet there is no one to talk to. You are to the point of frustration, the point where your body betrays you and you suffer nausea, headaches, pain in your limbs, and even seizures, yet no one can explain why this is happening to your body. Your emotional and physical distress is questioned by doctors who see no cause for it and who dismiss your behavior as “chaotic” and “selfish.” The doctors who do try to help you only worsen your condition, and it will be years before anyone discovers and devises a treatment for your mysterious illness. Sounds like a nightmare, right? Well, for the American women of the late 19th and early 20th centuries suffering from mental illness, this nightmare was a reality. During that time period, doctors did not understand the nature of mental illness, so treatments relied on physical intervention, which did more harm than good. Doctors’ knowledge about why women could not perform their daily tasks was guided by barbaric medical theories and physical exams that exploited the bodies of ill women. From their inaccurate assessments of women, doctors came to the conclusion that women were suffering from a disease called “hysteria” that caused women to act out in a negative manner and had a physical effect on the body as well. The idea of hysteria was unfamiliar to society at the time, which left women at the mercy of male doctors who had the mindset that they knew what was best for the patient, and whose sexist attitudes dismissed any concerns voiced by women. Back then, society’s expectations of women’s roles in relationships had a definite influence on how doctors listened to their female patients and whether or not they believed them. Doctors’ poor expertise in women’s mental illness, combined with their sexist attitudes, caused misdiagnoses, and treatments fell short because doctors used their misguided judgment rather than research to help treat their patients. Some treatments were modified and some were eliminated altogether when one of the most common of them, the rest cure, was brought to attention through literature by a woman who endured the mistreatment of a famous doctor and ended up writing a story about it. After the story was published, skepticism about the effects of the rest cure began to develop, leading to new discoveries in psychology and the development of treatments that actually helped women with mental illness. What started off as a cautionary tale to doctors turned into the development of a cure for oppressed women and became a starting point for feminism. The particular short story I referred to earlier perfectly describes the torture of becoming more ill from a treatment intended to heal, not harm. “The Yellow Wallpaper” is a short story written in 1892 by Charlotte Perkins Gilman, about a young woman slowly losing her mind while in the process of recovering from a mental illness that no one around her took seriously. Isolated in a room covered in hideous yet alluring yellow wallpaper, the young woman is not driven mad by her illness but instead by the rest cure treatment she is receiving, and in the end her condition worsens until she is clinically insane (Gilman “Yellow Wallpaper”).
Gilman wrote this story to share her personal experience with mental illness and the negative effects of the rest cure, along with a warning to the creator of the rest cure, Dr. Silas Weir Mitchell. According to Thomas L. Erskine and Connie L. Richards, authors of the introduction to Charlotte Perkins Gilman, “The Yellow Wallpaper” (a collection of critical essays on the story), Gilman’s spiral toward depression started after the birth of her daughter, Katherine. Gilman suffered from the then-unknown condition of post-partum depression and sought the help of Mitchell. Mitchell prescribed her the rest cure, and Gilman became sicker from a treatment that was supposed to heal her. Gilman decided to quit Mitchell’s treatment and formulated a method that required her to cure herself by stimulating her mind with a mental activity such as writing. After realizing the flaws of the rest cure, Gilman wrote “The Yellow Wallpaper” to warn other physicians of the possible negative outcomes the “rest cure” would have on female patients (Erskine & Richards 6). Since then, the story has been praised and analyzed for deeper comprehension of mental illness and the points Gilman tried to convey. Doctors were puzzled and frustrated during the late 19th century and early 20th century by the latest condition that seemed to appear in many women in America. Hysteria, a term used to describe frequent nervous breakdowns, was a condition that apparently turned the most loving mothers and devoted wives into crazed, intolerable women. Carroll Smith-Rosenberg, a retired professor who taught Women’s Studies, History, and American Culture, and who received a grant from the National Institutes of Health to research role conflicts in 19th-century America, describes the symptoms of hysteria and whom it affected. From her collected evidence, it seems hysteria did not discriminate, though it was found mostly in urban, upper-middle-class women ranging in age from fifteen to forty years old. Hysteria exhibited physical symptoms in some cases that ranged from concerning to life-threatening. Loss of senses, numbness in the skin, stomach aches, headaches, and pain in the limbs were all common symptoms. Rare cases showed women who experienced seizures that were triggered by strong emotion or who became violent and aggressive towards themselves and others (Smith-Rosenberg 83). Though uncertain regarding the specifics of hysteria, many doctors offered possible causes that lacked scientific findings to back up their reasoning. Most explanations prove nothing except the sexist assumptions that doctors tied into their logic. Gilman wrote her own piece about mental illness, “The ‘Nervous Breakdown’ of Women,” which introduces some doctors’ opinions about what causes hysteria. In her piece, Gilman notes that one doctor claimed the breakdown was caused by the conflict of the maternal role clashing with the pressure to live a life without domestic labor (Gilman “‘Nervous Breakdown’” 68). Smith-Rosenberg also mentions that doctors claimed their female patients were sent mixed messages about what men expect of a future bride and mother. Women were told that men wanted women to be strong for household tasks and child care, yet at the end of the day they were expected to be dependent on their husbands and vulnerable. The confusion that resulted from these mixed messages left women helpless and hysterical.
Others blamed women and their “sexual impulses.” According to some doctors, once women lost the ability to control their sexual desires, hysteria was sure to follow. These explanations only point fingers at women, portraying them as weak-minded or as sexual deviants.
Other doctors had different ideas on what was responsible for hysteria among women (Smith-Rosenberg 89). Silas Weir Mitchell, a nineteenth-century physician specializing in neurology who developed the “rest cure” and published his findings about working with female patients, blames the women who sought to challenge gender roles and accuses them of bringing hysteria upon themselves. Mitchell writes in his essay, “Women are physically unfit to take on the duties of a man; how will she eagerly want to take on manly duties if she can’t survive?” (Mitchell 105). Even with all these explanations for hysteria, there was not an obvious solution to cure it, and that fact alone was taking a serious toll on doctors. Some doctors, fed up with their female patients, claimed hysteria was nothing but an escape or an imagined illness. Women who claimed hysteria were said to be faking symptoms to escape their responsibilities at home. These doctors noted that their patients seemed vain and sexually aroused yet cold-natured at the same time, which led doctors to begin to tire of the hysterical mood-swings (Smith-Rosenberg 84). Doctors became so desperate to discover some source or a cure for hysteria that they even resorted to dangerous procedures such as shock treatment, operations, and amputations, yet their attempts at long-term treatment failed. Doctors could not fathom how their female patients could exhibit no physical symptoms yet be so unstable. Women were tormented as doctors repeatedly told them that they were physically fine but could not offer explanations for why these women were expressing emotions in such a dramatic and harmful manner. Conrad Shumaker published his analysis of “The Yellow Wallpaper” in American Literature in December 1985, discussing how Jane harbors her anguish inside, which only worsens her mental illness. Jane feels guilty and embarrassed whenever she voices her simple complaints about her illness to her husband, John, who concludes she is overreacting and overimaginative. John only cares for her in a physical sense and notes her small accomplishments of eating well and sleeping well, but he never once entertains the thought of mental illness. Jane, in return, tries to reason with herself that she should be feeling good since her body is physically fine (Shumaker 129). However, doctors and patients needed more than reassurance, so what better way to treat the body than with the rest cure? Weir Mitchell came to the conclusion that what women suffering from hysteria needed was little mental and physical interaction; thus he invented the rest cure. An article by J. M. S. Pearce published in the Journal of Neurology, Neurosurgery, and Psychiatry in 2004 illustrates both Mitchell’s brilliance in medicine and his arrogance toward women. Pearce agrees that Mitchell was very intelligent and praised for his expertise in neurology, yet his opinion toward women revealed a different, negative side of Mitchell. Mitchell regarded female patients who demanded to know what procedures he was going to perform as “terrible patients” because they “are the ones that ask too many questions” and “believe they are being tricked” because he refused to inform them what medicine he was giving them (Mitchell 110). Pearce brings up a good example of how Mitchell’s methods were also unorthodox; one patient claimed to be too sick to leave her bed, and he promptly lit her bed on fire.
In minutes she fled her bed, and he announced she was cured (Pearce). Yet despite these questionable decisions and
actions, doctors everywhere continued to follow and practice the rest cure. The “rest cure” offers another alternative for long-term treatment: it is basically bed rest for six to eight weeks, depending on how hysterical the woman seems. According to Dr. Mitchell, the patient must move as little as possible in order for the cure to succeed. The woman can only progress to different stages of the cure when she is deemed ready. The first few weeks involve no movement, and a nurse is in charge of catering to the needs of the patient, such as feeding her by spoon and helping her with bowel movements. The patient is to be treated as though physically ill, not mentally. Denise Knight’s 2005 article, “‘All the Facts of the Case’: Gilman’s Lost Letter to Dr. S. Weir Mitchell,” mentions the most damaging part of the treatment: the deprivation of mental stimulation. The article is about the discovery of writings by Gilman, who penned her thoughts about her upcoming rest cure treatment, and a letter she composed to Mitchell. Knight writes about how people can now see Gilman’s perspective on the treatment beforehand and the demands the patient had to obey. For example, no reading, writing, or anything that might excite or put strain on the female patient is allowed. Gilman recalls that during the cure, Mitchell recommended limiting activities that engage the mind to two hours per day (Knight). Mitchell’s logic was that after weeks of resting and not doing anything productive, the moment he permitted the patient to return to her household duties, she would be eager to serve her husband (Mitchell 106). Mitchell’s rest cure was accurately portrayed in “The Yellow Wallpaper,” and many literary critics wrote papers on their insights regarding Jane’s treatment. Dana Seitler, author of an American Quarterly essay about motherhood, feminists, and Gilman’s story, claimed that Jane’s room was animalistic because of the bars on the window and the claw scratches and bite marks; Seitler states that the point of trapping Jane in the room was to dehumanize her. After Jane is dehumanized, John can slowly “evolve” her into a mother again—a chilling comparison to the punishing and training of an animal (Seitler 61). But that is what the rest cure is about: recovering from hysteria or chronic fatigue syndrome to go back to the duties of a woman. In the article “The Writings on the Wall: Symbolic Orders in The Yellow Wallpaper,” Barbara A. Suess analyzes how Jane follows John’s advice out of guilt and shame for not fitting the standards of a housewife. John is the man, so naturally he is in charge, and what he says goes even if it is against better judgment (Suess). Conrad Shumaker sums up in a short and simple way what the rest cure does by using John as an example: “he attempts to ‘cure’ her through purely physical means, only to find he has destroyed her in the process” (Shumaker 132). Jane’s and John’s fictional case resembled the real-life scenarios happening across America, with husbands trying to cure their wives through questionable treatments. The “rest cure” was heavily flawed, and sadly it was put into effect long before experts could prove it worthless. After careful research done by Michael Sharpe and Simon Wessely, whose collected information regarding the “rest cure” was published in BMJ: British Medical Journal (International Edition), the cure once popular among Victorian doctors to treat chronic fatigue syndrome and hysteria was slowly eliminated.
Because reading and exercise seemed toxic to ill women, patients were allowed little to no mental or physical activity. This led to loss of strength, poor posture, poorer sleep, and extended fatigue in some patients. While aggressive exercise is not helpful, what is
recommended (and what Mitchell denied his patients) is simple exercise such as stretching the muscles and limbs. Simple exercise has been shown to heal the mind as well as restore the body, which explains why the rest cure caused so much harm to female patients (Sharpe & Wessely). Yet despite the obvious medical reasons for its failure, the rest cure faded largely because of breakthroughs in understanding the human mind, and these breakthroughs eventually led to recovery. The man who pioneered the way to curing women’s hysteria was Sigmund Freud. Freud was a famous psychologist of the late nineteenth and early twentieth centuries who published over twenty volumes of books about his theories on personality. His concept of personality, found in the textbook Psychology: A Concise Introduction, revolves around the idea of the unconscious mind, which is responsible for “our biological instinctual drives” and “our repressed unacceptable thoughts, memories, and feelings” (Griggs 292). As the name suggests, the unconscious mind is hard to access because humans are not fully aware of these thoughts and therefore cannot control them without professional assistance from a psychologist. Freud played a crucial role in understanding why these hysterical women were behaving in such an irrational way. Freud studied hysteria by observing and interviewing various women in order to understand the so-called disease before formulating a cure, something the doctors before him had not done. The discoveries that Freud came upon during the interviewing process were not only the keys to helping the hysterical women but also to changing the way society viewed gender roles. “Freud as New Woman Writer: Maternal Ambivalence in ‘Studies in Hysteria,’” a 2010 article by Nicole Fluhr, discusses Freud’s findings on mental illness when he worked with two patients in the late 19th century. Freud worked on and published two particular cases in his book, Studies on Hysteria, that led him to a better understanding of hysteria—the cases of Miss Lucy R. and Frau Emmy von N. Etienne Trillat, who studied hysteria, believed psychoanalytic theory derived from hysteria, observing that “All psychoanalytic theory was born from hysteria, but the mother died after the birth” (Fluhr 283). Lucy and Emmy provided Freud the opportunity to apply his psychoanalytic theory to his patients and see how well it held up. Freud recognized that women struggled with explaining their hysteric behavior because their mental illness was unfamiliar to them, so he strategized how to get women to open up to him willingly. The first step was to talk to the women without judgment or criticism so the women would feel secure enough to reveal their emotions and thoughts. When he did this, the women were more honest and explored possibilities of what internal conflicts were causing their hysteria. After they learned to share their experiences, they started to heal themselves in the process of communicating how they felt without backlash (Fluhr 289). Making it acceptable to talk about the women’s struggles gave Freud valuable insight that helped him to learn and theorize about what women struggle with from a mother’s and wife’s perspective. Freud writes about how the woman’s “needs conflict with those she attributes to her children” and how the woman “refuse[s] to subordinate her life to theirs” (Fluhr 291). The most memorable case is that of Frau Emmy von N., a mother of two.
Emmy felt guilty that motherhood did not come naturally to her, and she started to lash out at her children to ease her pain.
Emmy also admits to feeling smothered by their neediness, as found in Nicole Fluhr’s analysis of Freud’s findings: “anxiety for the children’s welfare and the expectation that their needs take priority over her own” (Fluhr 292). To answer the question of whether or not sexual impulses played a part in hysteria, Freud noted that patients like Emmy were reserved about the concept of sexual desire rather than eager, as other doctors of the time suggested. Freud decided that what was best for Emmy was more counseling and even advice to spend some time away from her children. While people might scoff at the idea of a mother spending time away from her child, Emmy’s condition actually improved when she took some time for herself. These cases provided two methods to help stabilize women against the crippling effects of hysteria: counseling (which proved effective), and distancing the mother from her children so she could recover and spend time on her own without stress that would slow her recovery. Freud was not the only person to recommend spending time away from children. In fact, Gilman proposed the idea as well and wrote a short story about a mother becoming better after time away from her child. The story, “Making a Change,” centers on a woman, Julia Gordon, who feels frustrated and guilty about not being able to care for her baby, and the strain of childcare puts extreme stress on her marriage to Frank Gordon. Gilman writes, “if her nerves were weak, her pride was strong. The child was her child, it was her duty to take care of it, and take care of it she would” (Gilman “Making A Change” 57). Back in the late 19th century and early 20th century, a woman’s worth depended on whether she could respond to her “natural instinct” and care for her child; no man wanted to marry a dysfunctional woman. In the story, Frank also puts pressure on Julia because he is baffled and compares helpless Julia to his own devoted mother, which only worsens Julia’s developing mental illness. Julia suffers more problems on top of raising a child because she is conflicted between being a responsible mother and following her passion for teaching people to play the piano. In the end, Julia discovers a way to pursue her dreams and live up to her husband’s expectations by having Frank’s mother babysit while she offers lessons. What Gilman is trying to tell readers here is that life sometimes requires a little compromise—a short time away from a baby for one’s sanity, for example. While that change might have seemed drastic to some at the time, the story ends on a positive note, with the husband grateful that his wife is healthy and functional—an ending meant to convince readers to be open-minded about solutions to mental illness (Gilman “Making A Change”). Gilman used not only distance from her child to heal herself, but also creative outlets as part of the cure for her mental illness. Gilman loved reading and writing, so when Dr. Mitchell prohibited her favorite hobbies, it drove her insane not being able to practice what she loved. She notes, “The creature must be satisfied with itself; it must do what it likes to do, and like to do what it does” (Gilman, “Nervous Breakdown” 68). It is truly ironic that mental stimulation is exactly what cured Gilman. For Gilman, writing was essential because it allowed her to be creative and to understand her struggles. Other authors, like Sandra M.
Gilbert and Susan Gubar, who co-wrote The Madwoman in the Attic: The Woman Writer and the Nineteenth-Century Literary Imagination, agree that writing is
crucial to figuring out who one is and what one’s purpose is by allowing the mind to explore with pen and paper. They explain, “Recording their own distinctively female experience, [women] are secretly working through and within the conventions of literary texts to define their own lives” (Gilbert & Gubar 117). That is what Gilman did, using her writing to fulfill her purpose of reaching out to other women and stopping the mistreatment of women’s mental illness. Gilman’s works have sparked controversy and captured the attention of many on topics like hysteria and cures for distraught women, as well as on her underlying message of female empowerment and women’s rights, encouraging women to take charge of their own minds and bodies. I first read “The Yellow Wallpaper” in high school after it was assigned for homework in my English class. During that time, I was going through depression, and when I confessed to my mom how I felt, I was told it was “all in my head.” After I read the story, I was shocked and disturbed by the strong connection I felt with the protagonist, Jane. To have these negative feelings harbored inside and have no explanation for feeling that way: that was something Jane was going through and something I could relate to. Though I resolved my issues through a therapist who placed no judgments on me and listened to my every word, Jane did not have that option. Perhaps I would not have had the luxury of being cured without physical harm from treatments, or without the risk of becoming insane like Jane, had Gilman never written a story that opened people’s eyes to mental illness. Though written over a century ago, “The Yellow Wallpaper” deserves credit for improving the way women are treated through medicine and through gender roles. The late 19th and early 20th centuries were a time when medicine was still in its primitive stage and doctors had not unlocked all the discoveries of the human mind. With roles and expectations of women set in stone and psychology still in its adolescent stage, women had no access to resources to provide them with a cure for their illnesses. Women suffered so much humiliation and shame from encountering insensitive doctors. Women endured inhumane treatments, some that lasted for months and others that ended in permanent physical damage, to escape the nightmare known as mental illness. Doctors like Mitchell inflicted the “rest cure” and banished all activities that gave women freedom of expression and imagination—things women needed in order to escape the constrictions of being a woman, even if only for a short time. With help from Freud and Gilman, women have prevailed not only in the fight for sanity but also in the fight to reshape the roles of women. There is no more settling for being only a loving mother and loyal wife; instead women are able to be separate people, not the property of a man or child. Women now want to distinguish themselves and claim their own time to be who they are and do what they want. For 2014 that seems like a small accomplishment, but for women of the early 20th century it was more than that. For those women, freedom and expression were not merely rights to have; they were necessities for their minds, for the sake of their families and for themselves. “The Yellow Wallpaper” may be forgotten by some, but it is immortalized through literature and lives on through the changes in humankind thanks to Gilman, who chose to rise from her dark time—because, after all, a feminist never rests.
WORKS CITED
Cayleff, Susan. “‘Prisoners of Their Own Feebleness’: Women, Nerves, and Western Medicine: A Historical Overview.” Social Science & Medicine 26.12 (1988): 1199-1208. Science Direct. Web.
Erskine, Thomas L., and Connie L. Richards, eds. Charlotte Perkins Gilman, “The Yellow Wallpaper.” New Brunswick, NJ: Rutgers University Press, 1993. Print.
Fluhr, Nicole. “Freud as New Woman Writer: Maternal Ambivalence in ‘Studies in Hysteria.’” English Literature in Transition, 1880-1920 53.3 (2010): 283-307. Academic Search Premier. Web.
Gilbert, Sandra, and Susan Gubar. The Madwoman in the Attic: The Woman Writer and the Nineteenth-Century Literary Imagination. New Haven: Yale University Press, 1979. 85-92. Print.
Gilman, Charlotte Perkins. “Making a Change.” Erskine and Richards. 57-66. Print.
Gilman, Charlotte Perkins. “The ‘Nervous Breakdown’ of Women.” Erskine and Richards. 67-76. Print.
Gilman, Charlotte Perkins. “The Yellow Wallpaper.” Erskine and Richards. 29-50. Print.
Griggs, Richard A. “The Psychoanalytic Approach to Personality.” Psychology: A Concise Introduction. Ed. Catherine Woods. New York: Worth Publishers, 2012. 290-316. Print.
Knight, Denise D. “‘All the Facts of the Case’: Gilman’s Lost Letter to Dr. S. Weir Mitchell.” American Literary Realism 37.3 (2005): 259-277. America: History and Life with Full Text. Web.
Pearce, JMS. “Silas Weir Mitchell and the ‘Rest Cure.’” Journal of Neurology, Neurosurgery, and Psychiatry 75.3 (2004): 381. MEDLINE. Web.
Seitler, Dana. “Unnatural Selection: Mothers, Eugenic Feminism, and Charlotte Perkins Gilman’s Regeneration Narrative.” American Quarterly 55.1 (2003): 61. Academic Search Premier. Web.
Sharpe, Michael, and Simon Wessely. “Putting the Rest Cure to Rest—Again.” BMJ: British Medical Journal (International Edition) 14 Mar. 1998: 796. Academic Search Premier. Web.
Shumaker, Conrad. “‘Too Terribly Good to Be Printed’: Charlotte Gilman’s ‘The Yellow Wallpaper.’” Erskine and Richards. 125-137. Print.
Smith-Rosenberg, Carroll. “The Hysterical Woman: Sex Roles and Role Conflict in Nineteenth-Century America.” Erskine and Richards. 77-104. Print.
Suess, Barbara A. “The Writings on the Wall: Symbolic Orders in The Yellow Wallpaper.” Women’s Studies 32.1 (2003): 79-97. Taylor & Francis Online. Web.
Weir Mitchell, S. “Selections From Fat and Blood, Wear and Tear, and Doctor and Patient.” Erskine and Richards. 105-111. Print.
The Great Gatsby: The Shallow Reality of Daisy Buchanan (By Sophia Suarez) Gangsters, crime, and speakeasies are only some aspects of American history that portray the 1920’s as a “roaring” or “golden” decade. The problems of Prohibition consumed so much of American society that changes to culture were almost immediate. Bootlegging became a common career because of the high demand for alcohol. Drinking was a prime activity for both men and women, which ultimately contradicted the common perspective of women; despite the passing of the Nineteenth Amendment, women were not yet recognized as independent, capable of making their own choices, or of choosing to do something with their lives other than being housewives. While America today views women with greater appreciation, the way we look back at the Roaring Twenties is still not in a modern light. According to English professor and well-known Fitzgerald expert Matthew Bruccoli, when critics and readers alike think of the Roaring Twenties, one particular author often comes to mind: F. Scott Fitzgerald, whose “work has become automatically identified with … The Roaring Twenties” (Bruccoli ix). Fitzgerald’s most popular work, The Great Gatsby, often serves as a prime example of social commentary or allegory on America’s high-class society. His novel provides more than just a story for entertainment; it illuminates a clear distinction between differences of class and illustrates attitudes towards deceit within relationships, thereby exposing a contradiction that is evident among characters of different social positions within the novel. The central focus of this essay, therefore, is the double standard created for women of different social classes within the novel, more specifically the female characters Daisy Buchanan and Myrtle Wilson. As both women venture into adultery, Myrtle, curiously, is punished more severely than Daisy, both in the reader’s perspective and in the novel. This double standard derives from the 1920’s societal norms illuminated in The Great Gatsby, such as gender roles and the importance of class and wealth. The Great Gatsby follows protagonist James Gatz, infamously known as Jay Gatsby, and his struggle with his love for narrator Nick Carraway’s cousin Daisy Buchanan. Daisy meets Jay Gatsby at the young age of eighteen. Gatsby, being a few years older, enchants her. Her spontaneous decision to follow Gatsby for that short time disappoints her family. Yet her summer love for Gatsby is eventually forgotten as she quickly moves on to marry Tom Buchanan. After a few moments of doubt, seen in a letter from Gatsby prior to their marriage, Daisy goes on to have a child with Tom, quickly eliminating the possibility of any future with Gatsby. It is years later that Gatsby has built himself up socially to gain a rich, mysterious persona, all in the hopes of gaining Daisy’s attention once again through his new image and elaborate parties. However, it is not until Nick Carraway moves in next door to Gatsby that the former lovebirds are able to rekindle their connection, and even after years of waiting, it is still accompanied by consequences. Throughout the novel the emergence of three major relationships becomes apparent. One is the legal marriage between Tom and Daisy, in which the infidelity may suggest that their marriage is fairly superficial. A second relationship is between Tom and his also-married
mistress, Myrtle. Finally, a third relationship included in The Great Gatsby is that between Gatsby and Daisy. It is in the last two relationships that the double standard between Daisy and Myrtle emerges. Gatsby’s rags-to-riches story and his infinite love for Daisy have enchanted readers in America for decades, so much so that the flaws in Gatsby and his society are much too often overlooked. Readers often focus on his love for her, not the fact that in reality he is chasing a married woman. Another overlooked idea is that Daisy’s encouragement of Gatsby and his love for her is deemed romantic, while Tom’s mistress Myrtle is seen as trashy, unsophisticated, and immoral. Why is it that society today accepts Daisy’s infidelity and not Myrtle’s? Modern readers seem quick to judge Myrtle as a mistress, but the idea of Daisy as a mistress is rarely examined. Partly, this is due to the societal norms of the time and the importance of class and wealth to women’s lives during the Roaring Twenties. For instance, both love affairs share a common factor of wealth and money. The 1920’s illuminated the ideas of old money and new money. While the Buchanans represent old money, as does Nick Carraway, Gatsby symbolizes the self-made man with new money, built on the 1920’s bootlegging industry. But another aspect is the honesty of the love in Daisy’s affair, which is not present in Myrtle’s affair with Tom. Glancing at the time period, women had only recently been given the right to vote, illustrating the start of a change in how America viewed women. However, most women were still housewives; rarely did women go to college or make careers for themselves. The expectation for their lives after they left home was essentially to marry and to bear children. Daisy Buchanan is a prime example of this. Daisy is more than just a female character in The Great Gatsby; she is also a victim who falls short of the ideal woman figure that she is expected to be. From the perspective of Leland Person, Jr., Professor of English at the University of Cincinnati, the actions she commits are not of her conscious choosing but a direct result of the treatment she endures throughout the novel (151). What is more interesting than her character representing societal norms is that her other actions are overlooked. Women during this period of Prohibition and bootlegging engaged in drinking, Daisy included. Drinking alcohol is not seen as a bad thing. It seems remarkable that drinking alcohol is socially accepted or even expected in high-class American society, while the desire to do something more with one’s life than just get married is extremely rare. It is this set-up in Fitzgerald’s imaginary society that illuminates the tremendous hypocrisy of the 1920’s, which readers overlook and accept. It would seem that the idea of Kulturpessimismus is relevant to the way readers view the activities that Daisy and Myrtle engage in. As Johannes Malkmes describes it in his book American Consumer Culture and Its Society: From F. Scott Fitzgerald’s 1920s Modernism to Bret Easton Ellis’ 1980s Blank Fiction, Kulturpessimismus is “the belief that modernity is without ... public morality and that a return to the values of the past is the only possible solution” (42). The framework of The Great Gatsby directly correlates with this idea because Daisy hides her acknowledgement of Tom’s affair with Myrtle, following the prevailing gender role of looking the other way.
She gives in to his role as the provider and says nothing. It is the embodiment of the idea that because Tom takes care of Daisy financially and has given her a proper high-class life, she should be more than satisfied. She is living the ideal lifestyle of riches,
and therefore should fake oblivion to knowing about Tom’s infidelity. This idea that a woman should stay in her place and not acknowledge her husband’s adultery mainly stems from patriarchal ideals. Retreating back to patriarchal ideals is the same as going back to values of the past as described by kulturepessimismus, especially as women at this time were beginning to gain more independence. While it is often said that “The Great Gatsby is the defining novel of the twenties,” it is much more; it is “Fitzgerald’s response to” the twenties (Bruccoli ix). The way Fitzgerald describes Daisy’s knowledge of Myrtle portrays her perspective nonchalantly, again, looking the other way. It would seem that Daisy’s attempts to disregard her husband’s affair are not by choice. According to Marriage, Violence, and the Nation in the American Literary West by William R. Handley, Fitzgerald renders “marriage, and its constellation of jealousies and conquests [as] the plot-shaping structure through which Fitzgerald dissects the ways that ethnic and class divisions perpetuate both the romance and violence of American civilization” (159). This idea is seen most directly in relation to how characters in the novel view one’s social position as a vital part of one’s reputation. Because of how important social class is in 1920’s society, marriage is coextensive with property (Handley 159). The parallels between real estate and relationships result ultimately in Gatsby’s “rich-young-suffering-hero persona” who attempts to win Daisy back, and almost succeeds (Fraser 556). Edwin Fussell, Professor of English at University of California, San Diego, claims that Gatsby’s wealth is questionable, mainly by the leisure class because they lack knowledge of his background (Fussell 295). Yet his riches still grasp Daisy’s attention. But why? Married to Tom Buchanan, Daisy’s social position is at its prime, so why bother giving Gatsby a look in his direction? Daisy’s doubtful actions are what spark the perceptions of her as an “innocent princess and sensual femme fatale,” which does nothing but “enhance [her] enigmatic charm,” says University of Maryland Professor Emerita of Women’s Studies Joan Korenman (Korenman 578). Daisy’s hopes of her daughter being a “beautiful little fool” reflect her actions of looking the other way because in reality, in such a judgmental society, what other option does she have? (Fitzgerald 21). Yet contradicting Daisy’s attitude is how she begins easing herself into infidelity. When narrator Nick Caraway attempts to aid Gatsby in rekindling his lost love, he is essentially aiding his cousin to begin to cheat on her husband. Maybe it is the idea of a long-lasting romance that makes readers want Gatsby and Daisy to end up together, an idea similar to the forbidden romance between Romeo and Juliet. Maybe it is how Gatsby literally reinvented himself just for the chance of a glance from her that makes the audience want Gatsby to get the girl. But the reality is much deeper than that. Although the audience may feel for Daisy, since she is being cheated on by Tom and serves as nothing more than a trophy wife, her actions are just as bad as Myrtle’s. In the end it is Myrtle, who is of lower class than Daisy, who ends up getting the punishment. Handley introduces the importance of class differences to the entire romantic web that is The Great Gatsby. 
The falling action of the novel starts when Daisy kills Myrtle with Gatsby’s car, but again, Daisy escapes free of blame, and Gatsby ultimately takes the fall. The Buchanans,
“who are elitist by racial and class standards, escape blame and harm in the chain of violence they set in motion” (Handley 160). If they were not of high class, their actions would have followed them. This idea of karma for some reason avoiding those of higher class is depicted through Myrtle and her death. Being of a lower class, becoming Tom’s mistress is not exactly out of the ordinary, especially given the time period. Unless one was of Old Money, passed down from generation to generation, or the often self-proclaimed New Money, one’s only chance of moving up in class was to marry in as it “offers the quickest route up” (Handley 161). The predicament there was that most Old Money embraced patriarchal values, the ideas of a picture-perfect family— essentially, the Buchanans. Therefore, in reality Myrtle’s life as a mistress may be seen as her attempt to move up away from her mechanic working-class husband to a life of riches and sophistication. Myrtle’s attitude towards her own marriage, viewing it as her actually marrying down instead of up in terms of her husband George’s class, illustrates that to her, marriage is not about love (Fitzgerald 39). Instead she portrays marriage as something “particularly about the value of one’s social property” (Handley 161). This perspective portrays how Tom truly is more than just an affair to Myrtle; he is her way of moving into a class she thinks she belongs in. She even discusses how she once thought her husband George Wilson was charming but in reality “wasn’t fit to lick [her] shoe,” and she even continues to admit that she “knew right away that [she] had made a mistake” when she married him (Fitzgerald 39). The truth of the matter is, however, that Tom truly has no intention of ever moving his relationship with Myrtle past being a mistress. This can be bluntly seen through his clear attempts to keep both women separate. In a way, Myrtle is his escape from reality, so when Myrtle begins chanting Daisy’s name, Tom “broke her nose with his open hand” (Fitzgerald 41). Her words indicate the beginning of his two lives coming together, his relationship with Myrtle and his marriage with Daisy. As Handley says, this, however, cannot occur: the mixing of the two classes is not only frowned upon but socially unacceptable. All of Fitzgerald’s characters are extremely conscious or aware of class differences and how it is embedded into their 1920’s culture (Handley 161). It is ultimately these class differences that make Daisy’s actions of infidelity and Myrtle’s actions of infidelity vastly different scenarios. Myrtle’s attempt to move up the social ranks of the 1920’s serves as a motive to cheat on her husband George Wilson. While the reader may see this as immoral, selfish, or shallow, what is often overlooked is Daisy and her side of the webbed romances presented in the novel. Daisy should be seen in the same light as Myrtle Wilson. In frank terms, Daisy cheats on her husband, kills her husband’s mistress, lets the man that truly loves her—Gatsby—take the blame, and does not even have the decency or respect to show up to his funeral after George Wilson seeks vengeance for Myrtle and kills Gatsby. Instead, she flees with Tom and her child. In a way, Daisy as an adulterer is worse than Myrtle. She essentially leads Gatsby to believe she truly loves him but does not follow through with her feelings. She is truly one of the most hardhearted characters in the book. 
In contrast, yes, Myrtle does cheat on George and does not truly love him, but she is far more honest about this than Daisy is. After all, she openly discusses it early in the novel, as previously mentioned.
The differences between the characters continue to raise the question of why Daisy’s infidelity is usually more approved of than Myrtle’s. It is a strong misconception that fuels the entire novel, and it is my hope that after reading this paper you will better understand why such a double standard and misconception exist, just as I came to. Many critics, according to Person, view Daisy as “a monster of bitchery,” as she clearly has feelings for Gatsby prior to her marriage to Tom and in other obvious instances throughout the novel; yet she wallows and does not leave Tom (253). It would seem that the most plausible reason readers root for the forbidden love of Daisy and Gatsby is Gatsby himself. His love for her is almost charming because he innocently believes their so-called love stands a chance against Tom’s wrath. Gatsby’s eternal love for Daisy is really the foundation for the double standard that Fitzgerald presents in The Great Gatsby. To the average reader, Gatsby fulfills the American dream as an underdog. He rises from the bottom and becomes a rich man. But his success in gaining a fortune and a reputation for his mysterious persona, Gatsby, blinds him to reality. His position as a new-money, high-class man allows him to mistakenly think he can buy anything, from throwing elaborate parties to gaining some sort of attention from Daisy to actually buying Daisy herself (Bruccoli xi). Gatsby’s main problem is that he confuses the American Dream of financial success with success in getting Daisy back into his life (Bruccoli xi). It is his hopeless love for her and the way he puts her on a certain kind of pedestal that persuade the reader to favor his and Daisy’s affair rather than Tom’s and Myrtle’s. In all honesty, it should not be accepted that one woman gets off easier for her infidelity than the other; it is simply not fair. However, there is a double standard because of the importance of one’s class in the 1920’s, which essentially defines how the rest of society sees you. As previously mentioned, Daisy and Myrtle are at the center of that double standard. The important aspect of this is that, as readers, we often overlook this double standard. The reality that Daisy kills someone and simply ignores her crime is often read as the panic of a young woman scared of her accidental actions, while Myrtle basically had it coming, as she was unfaithful to her loving husband. Perhaps it is more than just Gatsby’s undying love for Daisy that makes readers blame Myrtle for her consequences and root for Daisy’s and Gatsby’s forbidden love. Or perhaps it is the fact that Tom does not truly love Daisy while Gatsby does, and George Wilson truly admires Myrtle. These facts would make Myrtle the heartbreaker and Daisy simply a woman fleeing her superficial marriage to seek the attention of a real potential love. If this is the case for Daisy’s adultery, it is common for a reader to be okay with it, but the harsh truth is that she is still as bad as Myrtle, having committed the same actions. The ending of The Great Gatsby, in which Daisy and Tom flee their home, demonstrates how badly Daisy undermines Tom. It would be far more detrimental to Tom’s reputation as a man to have his high-class peers find out about Daisy’s affair than for his peers to know that he had an affair. This relates back to the idea of gender roles.
While the 1920’s marked the beginning of women gaining more independence, the momentum of this movement was not yet enough to make a difference in women’s roles,
especially in this novel. Their place as housewives is therefore included within the novel and contributes to the ending. Ultimately it is the presentation of Daisy in a false, superficial marriage that casts her as a victim. This victimization fuels readers to side with her and her apparent struggle with her feelings for Gatsby. Moreover, it is also the idea of a struggling victim that wins audiences over. Writing a character as a victim creates a sense of pity in the reader for the character. Additionally, as Westbrook points out, not allowing a character to have things fall perfectly into place, like an ideal lifestyle, makes that character relatable, especially within the “ostensibly shallow world” that is The Great Gatsby (79). This appears to be the main reason that readers regard Daisy’s affair as more moral than Myrtle’s. However, it is my personal opinion that Myrtle is more of a victim than Daisy because of her treatment by Tom and her seemingly unchangeable position in society. The main issue I find with attempting to pity Daisy is that Gatsby loves her wholeheartedly, forgiving her for forgetting about him and marrying Tom. He adores her to the point that he willingly overlooks this because of the pedestal of perfection and beauty he has put her on, and while she may not deserve to be seen in such a pure light, his feelings are honest. Even as he attempts to “fix the past,” which is impossible, his feelings remain genuine (Handley 160). Meanwhile Tom, who reflects his upper-class society, reveals values purely focused on “money and material possessions, not the development of character and taste” (Kerr 420). Tom’s character and feelings are deceptive; he does not love Daisy but sees her as an object or accessory to his already glamorous lifestyle. The issue with victimizing Daisy is that it seems wildly inaccurate and inappropriate. She chooses to stay with him instead of going against a society that she already views in a sarcastic light (Malkmes 42). Ultimately, though, neither Myrtle nor Daisy can be excused for their unfaithful ways. Even Myrtle, prior to her death, has no alternative but to stay with George, as her attempts to move up the social ladder ultimately fail. Daisy is presented with two paths, and she chooses the one that creates the least damage to her reputation. It is in this final decision that the reality of her personality prevails. She is not the pure and golden figure her name symbolizes; she is harsh, selfish, and shallow, a perfect representation of Roaring Twenties society.
WORKS CITED
Bruccoli, Matthew J. Preface. The Great Gatsby. By F. Scott Fitzgerald. First Scribner Trade Paperback ed. New York: Scribner Paperback Fiction, 2003. Print.
Fitzgerald, F. Scott. The Great Gatsby. First Scribner Trade Paperback ed. New York: Scribner Paperback Fiction, 2003. Print.
Fraser, John. “Dust and Dreams and the Great Gatsby.” ELH 32.4 (1965): 554-64. JSTOR. Web. 03 Apr. 2014.
Fussell, Edwin S. “Fitzgerald’s Brave New World.” ELH 19.4 (1952): 291-306. JSTOR. Web. 04 Apr. 2014.
Handley, William R. Marriage, Violence, and the Nation in the American Literary West. Cambridge, UK: Cambridge UP, 2002. Ebrary. Web. 19 Feb. 2014.
Kerr, Frances. “Feeling ‘Half Feminine’: Modernism and the Politics of Emotion in The Great Gatsby.” American Literature 68.2 (1996): 405-31. JSTOR. Web. 31 Mar. 2014.
Korenman, Joan S. “‘Only Her Hairdresser...’: Another Look at Daisy Buchanan.” American Literature 46.4 (1975): 574-78. JSTOR. Web. 02 Apr. 2014.
Malkmes, Johannes. American Consumer Culture and Its Society: From F. Scott Fitzgerald’s 1920s Modernism to Bret Easton Ellis’ 1980s Blank Fiction. Hamburg: Diplomica-Verl., 2011. Ebrary. Web. 19 Feb. 2014.
Person, Leland S., Jr. “‘Herstory’ and Daisy Buchanan.” American Literature 50.2 (1978): 250-57. JSTOR. Web. 13 Mar. 2014.
Westbrook, J. S. “Nature and Optics in The Great Gatsby.” American Literature 32.1 (1960): 78-84. JSTOR. Web. 07 Apr. 2014.
Shadow of the Imagination: The Dark Side of Peter Pan (By Everett J. Y. Fujii) When author and playwright James Matthew Barrie released his character Peter Pan to the world, audiences found themselves enthralled by the boy’s precociousness and incredible adventures. To this day Peter Pan is synonymous with ideas of never-ending youth and a sense of wonder. Peter Pan has gone through numerous incarnations, from stage to cartoon, to appearances in primetime television. Yet there is often a tendency to overlook the original character as written by Barrie. So popular was Barrie’s 1904 stage play Peter Pan or The Boy Who Wouldn’t Grow Up that in 1911 Barrie followed it with a well-received book titled Peter and Wendy. Peter was so beloved by audiences that a book review in The Spectator would say, “We have known and loved him, many of us, across the footlights, but here we have him under our very eyes” (“Peter Pan Again,” 1911, p. 9). The basic elements of Peter’s story remind adults what it was like to be carefree and demonstrate to children the power of their imagination. An essential part of growing up is pretend play, which is common across all cultures and central to cognitive and creative development (Russ, Robins, & Christiano, 1999, p. 129). Peter and Wendy speaks to the reader in a unique and somewhat nostalgic way because pretend play is universal. If we focus solely on the book, however, Peter may not seem as innocent as we believe. There exists the possibility that Peter Pan is a decidedly abnormal young boy who may be behaving in a disturbing way. In Peter and Wendy, Barrie’s creation Peter Pan exists in a magical world, Neverland. Peter is both quick-witted and brave and spends his days in search of constant adventure. He is introduced to the readers after having returned to the London nursery of the Darling children, Wendy, John, and Michael, in search of his lost shadow. Soon Peter has whisked the Darling children away from their home and parents, taking them to Neverland. In Neverland, Wendy assumes the role of mother for all the boys and acts as both mother and wife to Peter. John and Michael become members of Peter’s “lost boys,” a group of young boys that Peter had brought to Neverland to take part in his adventures. While they are in Neverland, the Darling children engage in battles with pirates, play with mermaids, and fight Indians of the “Piccaninny” (an antiquated derogatory term for small black children) tribe. Eventually the Darlings begin to miss home and decide to return, taking with them all the lost boys. In the end, all the children return to the real world, while Peter, who refuses to rejoin society, remains in Neverland, eternally youthful, with only Tinker Bell the fairy for company. It is normal for individuals to exhibit antisocial behavior from time to time; what becomes alarming is when the behavior is seen with greater frequency than is age-appropriate. Peter’s actual age cannot be measured, as he never grows up; his physical age we can only guess at. Barrie’s first description of Peter in his novel gives us a clue: “He was a lovely boy…but the most entrancing thing about him was that he had all his first teeth” (Barrie, 1911, p. 15). According to the Mayo Clinic, children begin to lose their baby teeth at age six (Carr, 2011, para. 1). This would place Peter’s physical age at or before six years old. With an understanding of
Peter’s age, it is possible to compare his behavior with that of other six-year-olds and determine if his actions are age appropriate. There are common traits that can be found in children who exhibit disturbing behavior and an inability to follow social mores. Professor of forensic psychiatry M. Dolan, in her overview on the subject, pointed to three factors that are part of problematic behavior: narcissism, callousness, and impulsivity. Narcissism is defined as “an arrogant, deceitful interpersonal style, involving dishonesty, manipulation, grandiosity and glibness.” Callousness is a “defective emotional experience involving lack of remorse, poor empathy, shallow emotions and a lack of responsibility for one’s own actions.” Impulsivity is “behavioral manifestations of impulsiveness, irresponsibility and sensation-seeking” (2004, p. 466). These three traits— narcissism, callousness, and impulsivity—are all exhibited by Peter Pan, which may call into question our own beliefs about Peter. Peter has an unshakable confidence in himself and often takes credit for the actions of others. Wendy is woken in the Darling nursery by Peter, who had been unable to reattach his lost shadow. When Wendy sews the wayward shadow onto Peter, he does not thank her; instead he attributes the feat to his own cleverness: “He had already forgotten he owed his bliss to Wendy. He thought he had attached the shadow himself. ‘How clever I am!’ he crowed rapturously, ‘oh the cleverness of me!’” (Barrie, 1911, p. 27). Later when Peter has accomplished the release of the Redskin princess Tiger Lily through guile, he ignores Wendy’s attempt to leave: “She would have liked to swim away, but Peter would not budge. He was tingling with life and also top-heavy conceit. ‘Am I not a wonder, oh I am a wonder!’ he whispered to her” (p. 80). Peter’s desire to be held up on a pedestal can be seen in his interaction with the “redskins” after rescuing their princess: “’The great white father,’ he would say to them in a very lordly manner, as they groveled at his feet, ‘is glad to see the Piccaninny warriors protecting his wigwam from pirates’,” and, “Always when he said, ‘Peter Pan has spoken,’ it meant that they must now shut up, and accept it humbly in that spirit…” (p. 91). When given authority over others who feel grateful for his deeds, Peter feels the need to exert such control over them that they must treat him as a king. Peter also shows a complete willingness to use manipulation to get what he wants. When Peter decides he wants to take Wendy back to Neverland with him, he proceeds to pull her towards the open window of the nursery with no thought to her desires. When Wendy begins to resist, Peter turns to manipulation. He promises to teach her to fly: “Wendy, Wendy, when you are sleeping in your silly bed you might be flying about with me saying funny things to the stars” (Barrie, 1911, p. 32). He tempts her with mermaids, and finally with a role Wendy so desperately wants to fill, that of a mother: “He had become frightfully cunning, ‘Wendy… how we shall respect you … You could tuck us in at night … None of us has ever been tucked in at night’” (32). Thus, recognizing Wendy’s adult desire to be a mother and her childlike desire to fly amongst the stars, Peter achieves his goals without a thought to Wendy’s well-being, or how her parents would react when they came home to an empty nursery. 
When the Darling children, with the lost boys in tow, are making their way back home from Neverland, Peter initially flies ahead of them and reaches the nursery first. He
says, “Quick Tink…Close the window and bar it! That’s right…when Wendy comes she will think her mother has barred her out, and she will have to go back with me” (Barrie, 1911, p. 144). Peter to the last is attempting to manipulate Wendy, regardless of how it might make her feel to be rejected by her mother. Peter’s plan to trick Wendy into thinking her mother had rejected her is simply not typical of a young child. The levels of grandiosity and arrogance that Peter exhibits cannot be attributed to age-appropriate behavior. Barrie’s narrator even says, “This conceit of Peter was one of his most fascinating qualities. To put it with brutal frankness, there never was a cockier boy” (1911, p. 27). If Peter is truly the cockiest (a synonym for arrogant) boy ever, his behavior cannot be defined as age-appropriate. Combined with Peter’s arrogance, a lack of empathy gives Peter the ability to kill without remorse. When the Darlings first reach Neverland, Peter asks them, “Would you like an adventure now … or would you like to have your tea first?” and when John inquires as to the nature of the adventure, Peter replies, “There's a pirate asleep in the pampas just beneath us…If you like, we'll go down and kill him” (Barrie, 1911, p. 44). Killing, to Peter, is as natural as having tea. Peter acknowledges no morality beyond his own, and Peter’s sense of morality is defined more by his needs and desires than by any code. He sees nothing wrong with killing and doing what is necessary to get his way because he does not empathize with others. Peter saw nothing wrong in manipulating Wendy, just like he does not see anything wrong in killing a pirate. Peter controls the lost boys to such a degree that the children “never exactly knew whether there would be a real meal or just a make-believe, it all depended upon Peter's whim … Of course it was trying, but you simply had to follow his lead…” (Barrie, 1911, p. 71). Peter did not care if anyone was hungry; they ate when he let them. The lost boys were brought to Neverland by Peter, yet Peter has no remorse fighting them when the mood strikes him: “It was a sanguinary affair, and especially interesting as showing one of Peter's peculiarities, which was that in the middle of a fight he would suddenly change sides” (Barrie, 1911, p. 73). In the middle of a bloody fight with the “redskins,” Peter switches sides on a whim, attacking the lost boys. None of these are the acts of an empathic person, or even a loyal one. Peter does not genuinely care for the lost boys, or even for Wendy, beyond his ability to control them, to get what he wants. When the children are captured by Captain Hook and taken back to his ship for execution, Peter’s primary focus is not rescue; it is on killing Captain Hook: “As he swam he had but one thought: ‘Hook or me this time.’” (Barrie, 1911, p. 127). Indeed, instead of simply rescuing the children and running away, Peter chooses to fight Hook, involving all the lost boys in a deadly battle. While flying across the ocean to Neverland, the Darling children, who have just learned to fly, are falling asleep. Upon falling asleep, the children fall from the air towards the sea below, and “the awful thing was that Peter thought this funny” (39). Peter would save the falling children just before they hit the water, “but he always waited till the last moment, and you felt it was his cleverness that interested him and not the saving of human life” (Barrie, 1911, p. 39). 
Not only does Peter give the impression that he does not care for human life, but his actions in this case are solely motivated by the need to feel clever.
Immediately upon reaching Neverland, Wendy is struck by an arrow, apparently killed. Peter never reflects on his role in her death: had he not brought Wendy to Neverland, she would never have been struck by the arrow. “‘Whose arrow?’ he demanded sternly” (Barrie, 1911, p. 61). With the arrow in Peter’s hand, one of the lost boys admits to the deed: “‘Oh, dastard hand,’ Peter said, and he raised the arrow to use it as a dagger.” Peter fully intends to kill the offender, one of his own lost boys; it is only the grasp of the now-conscious Wendy that physically restrains Peter’s murderous hand (Barrie, 1911, p. 62). Peter is willing to kill his own lost boys as readily as he kills pirates, with no thoughts beyond his own desire. When Peter thought that Wendy was dead, his initial reaction was to kill the offender, not to mourn Wendy’s death. Peter takes no responsibility for the negative consequences of his actions; nor does he exhibit a full range of normal and appropriate emotional responses to situations. Peter Pan certainly does not empathize with anyone. Throughout the book, Peter exhibits a surprising amount of violent behavior and violent ideation. It is not just thrill-seeking that Peter is after; there is a violent element to his thrill-seeking: “He often went out alone, and when he came back you were never absolutely certain whether he had had an adventure or not. He might have forgotten it so completely that he said nothing about it; and then when you went out you found the body” (Barrie, 1911, p. 73). Peter’s adventures typically involve someone or something dying, or Peter trying to kill things. As Peter lives forever, his adventures in Neverland have surely left a mountain of corpses. At no age is casual murder an age-appropriate act. This is more evidence that Peter lacks a sense of genuine empathy. When it comes to impulsivity, Peter has an overabundance of it. Peter rarely thinks anything through, too often throwing himself into situations without a thought as to the consequences. After rescuing the Piccaninny princess, Tiger Lily, from the pirates, Peter Pan throws himself into a fight that could have been avoided, placing Wendy and the lost boys at risk. Peter’s need for stimulation leads to constant adventures: “Adventures, of course … were of daily occurrence” (Barrie, 1911, p. 72). Peter’s constant need for stimulation results in his battles with both the “redskins” and the pirates. When asked if he had killed many pirates, Peter responds, “Tons” (Barrie, 1911, p. 45). While bringing the Darling children to Neverland, Peter is constantly moving from one game to the next. At first he would “pursue birds who had food in their mouths suitable for humans and snatch it from them; then the birds would follow and snatch it back; and they would all go chasing each other gaily for miles” (Barrie, 1911, p. 38). Stealing food from birds was not enough, though: “When playing Follow my Leader, Peter would fly close to the water and touch each shark's tail in passing, just as in the street you may run your finger along an iron railing” (Barrie, 1911, p. 39). From killing pirates and Indians to stealing from birds and taunting sharks, Peter is a constant thrill-seeker. While some of this might be typical of a child Peter’s age, seeking deadly fights with others is not age-appropriate behavior. The argument could be made that Peter Pan’s activities were simply the result of an overactive imagination.
Some readers of Peter and Wendy express the belief that Peter kills pirates and “redskins” in the same manner that children play cops and robbers or other similar games. A 163
study conducted by psychologists Judy Dunne and Claire Hughes looked to find a link between violent pretend play and “hard-to-manage” preschool age children (2001, p. 491). Dunne and Hughes gathered forty participants screened as problematic children and a control group of children matched to the problematic children by socioeconomic status, race, school, age, and even neighborhood. Both groups were observed engaging in play with a friend. There was no significant difference between the groups in the amount of pretend play. The hard-to-manage children, however, scored significantly higher on the amount of violence contained in their pretend play, particularly violence directed towards other people. Additionally, Dunne and Hughes found that “the content of the children’s pretend play was also related to their emotional regulation … children who chose to engage in frequent violent pretend play were more frequently angry and less positive in their responses to friends, and engaged in more frequent antisocial acts—bullying, teasing, violence, and rule-breaking” (2001, p. 502). Nearly all of Peter’s pretend play involves violence or bullying, as seen with the lost boys and the Piccaninny tribe. Even if Peter is just pretending and not literally killing people, his actions are indicative of something deeper than an active imagination. Dunne and Hughes followed up with the participants two years later and found that “the children who engaged in a high proportion of violent pretense as 4-year-olds were less likely as 6-year-olds to give empathic explanations for the feeling of those involved as victims … and more likely to give ‘hedonistic’ or ‘external’ explanations of how the transgressor felt” (2001, p. 503). So given the nature of Peter’s play, he is more likely than other children to exhibit a lack of empathy or callousness. In 1976, psychologists Whitehill, DeMyer-Gapin, and Scott ran an experiment to determine if children with a particularly problematic diagnosis had a greater need for stimulation than most other children. A small group of boys institutionalized in a treatment center were selected to participate. Three groups were created: those with neuroticism, those with antisocial behavior, and a control group made up of non-institutionalized boys. The participants were placed in a dark room and presented images through a carousel slide projector. The slides changed every twenty seconds, but the participants had the option of changing the slides earlier if they so desired by pushing a button. The pictures were of a college campus and contained no stimuli other than buildings. The same slides were shown repeatedly with no variation. The goal of the experiment was to see if any of the three groups would move through the slides more quickly than the others. The experimenters found that the normal group and the antisocial group had no differences on the first block of slides, but the antisocial group moved through each subsequent block of repetitive slide at a significantly faster rate than the other groups. This result “presents evidence of heightened stimulation seeking in preadolescent children” (Whitehill, DeMyer-Gapin, & Scott, 1976, p. 103). While this merely added weight to an already popular idea that antisocial individuals seek stimulation, there has been very little study on the reason why. The experimenters found that “the data suggests rapid habituation rather than lower basal reactivity as the mechanism that underlies stimulation seeking” (p. 103).
The reason these antisocial children moved through the slides more quickly than others was due to simple boredom; they craved new sensations that the repetitive slides could not provide. Peter Pan himself is in need of constant stimulation; whether it is fighting pirates, snatching food from animals, or bossing around the Piccaninny, Peter requires the attention and the action, “but about this time Peter invented, with Wendy's help, a new game that fascinated him enormously, until he suddenly had no more interest in it, which, as you have been told, was what always happened with his games” (Barrie, 1911, p. 72). Peter cannot find a game to consistently keep his attention because he gets bored so quickly. While simple hyperactivity and a short attention span might not be so problematic, Whitehill, DeMyer, & Scott thought otherwise: “Early unmanageable antisocial patterns have a dismal tendency to continue into adulthood” (Whitehill, DeMyer-Gapin, & Scott, 1972, p. 104). Peter’s need for stimulation is unlikely to get better as he gets older; he is not going to just calm down and grow out of it. Unlike a clinician who must use observation and interview techniques to diagnose a patient, Barrie’s novel provides us perfect insight into Peter’s thoughts and actions. Psychologist Lawrence Kohlberg developed a theory of moral development that is widely accepted and used in developmental psychology. Kohlberg’s model has three levels: “preconventional,” “conventional,” and “postconventional.” Broadly speaking, individuals in the preconventional level of moral development “interpret morality based on a calculation of how much better or worse off they would be for acting in a certain way” (Heine, 2012, p. 499). Individuals at the conventional level see actions as morally wrong “if they involve violating any rules or laws that the social order has maintained, regardless of what those rules or laws are about” (Heine, 2012, pp. 499-50). At the postconventional level, “moral reasoning is based on the consideration of abstract ethical principles of what is right and wrong, and moral decisions are reached based on the logical extension of those principles” (Heine, 2012, p. 450). Psychologists Frank Campagna and Susan Harter conducted a study to see if children with a severe form of antisocial behavior shared similarities in moral reasoning to a matched group of more normal children. Using testing devised by Kohlberg to measure the level of moral reasoning an individual had attained, Campagna and Harter tested the two groups of children and found that the normal children had reached a higher level of moral reasoning. The group with the severe form of antisocial behavior “demonstrated solidly preconventional reasoning … the normal group reached a transitional stage … between preconventional and conventional levels of moral development” (1975, p. 203). We know that there are no ethical quandaries in Peter’s actions. Morality is a vague concept to Peter. Peter does not seem to share the same moral viewpoint of the other children. He is willing to attack his own group, forces them to pretend to eat when it suits his whim, and endangers them at will when the mood suits him. Peter’s moral understanding is lower than what would be age-appropriate. Peter conforms more to the antisocial group of Campagna and Harter’s study than to the normal group. The Diagnostic and Statistical Manual of Mental Disorders (DSM) is the most widely used diagnostic manual for mental disorders in the United States. 
It is often referred to as the “bible” of mental health. The fifth edition of the DSM added the same criteria to both antisocial personality disorder (APD), which can only be applied to adults, and conduct disorder (CD). Peter’s actions
are demonstrably consistent with the diagnostic criteria for conduct disorder. A diagnosis for CD includes bullying, using weapons, and cruelty to individuals or animals. Two additional criteria are particularly accurate for Peter: “Often initiates physical fights” and “Has run away from home overnight … or once without returning for a lengthy period” (DSM-5, pp. 221-22). Only a single instance of any of these criteria over a six-month period would be sufficient for a diagnosis of conduct disorder. Peter also shows signs of two out of three specifiers: callousness and lack of remorse, which would be enough to add “with limited prosocial emotions” (p. 223) to his diagnosis. Not every individual who suffers from CD develops into a criminal or could later be diagnosed with APD. A diagnosis of APD, however, requires the presence of CD or behavior consistent with CD before the age of fifteen. With Peter, we have a child who has run away from home, kills with impunity, fights for fun, and has no aversion to recounting his past violence: “I cut off a bit of him … his right hand” (Barrie, 1911, p. 45). No shame, no remorse, no empathy. Peter tells this to the Darling children more to brag than to confess. An argument could be made that Peter is suffering from narcissistic personality disorder (NPD) rather than antisocial personality disorder or conduct disorder. For example, the lost boys “are forbidden by Peter to look in the least like him” (Barrie, 1911, p. 49). So great is his need to be the center of attention that Peter forces the lost boys to “wear the skins of the bears” (p. 49) as a sort of uniform, while he wears whatever he wants. Peter does exhibit many of the symptoms of NPD, as seen by reading this definition: “People with narcissistic personality disorder have an inflated and grandiose sense of themselves and an extreme need for admiration” (Nevid, Rathus, & Greene, 2014, p.452). While there are similarities between some individuals with antisocial personality disorder and those with NPD, a study by psychologists Paulhus and Williams (2002) looked at three personality types referred to as the “dark triad” of personalities. One of the three personalities was an extreme form of antisocial personality disorder (APD). The others were narcissism and Machiavellianism. Machiavellianism is the need to control others through cunning and manipulation. Paulhus and Williams studied a sample of 245 for patterns in personality traits. Their idea was that all three types of antisocial personalities would exhibit similar patterns in personality traits. This turned out not to be the case: “Our data do not support the contention that, when measured in normal populations, these three constructs are equivalent.” Furthermore, while both Machiavellianism and narcissism were found to have high levels of neuroticism, the third extreme form of APD has very low levels. The individuals with extreme APD and those with narcissism were “associated with extraversion and openness” (p. 560). While there are similarities between the dark triad in terms of behaviors, the underlying personalities of the individuals lead to the variance in behavior and violence, and the categorical differences between the three points of the dark triad. Peter’s low levels of neuroticism or anxiety differentiate him from individuals with Machiavellianism and narcissism. While Peter does exhibit the traits of narcissism, he does not suffer from NPD. Psychologist and geneticist S. 
Alexandra Burt looked at the whole of antisocial behavior and aggression in her 2012 review. Burt focused on two components of the antisocial personality:
aggressive/violent behavior and rule breaking. Burt pointed to age differences between both types of behavior. Child onset, or life-course persistent, was correlated to violent aggression and adolescent onset to rule-breaking behavior. While both categories were identified as subtypes of the antisocial personality, the child onset violent behavior was the most stable. Burt states that “those with early onset antisocial behavior are at increased risk for academic delay/dropout, low professional achievement, substance dependence, and incarceration … in adulthood” (Burt, 2012, p. 272). Peter, by virtue of his age, cannot qualify for adolescent onset antisocial behavior. His young age and penchant for violent behavior indicates he falls into the life-course persistent, childhood onset category, yet he still exhibits signs of rule-breaking which is inconsistent with this model. Burt states that combined with negative emotionality, “high neuroticism was particularly characteristic of AGG (aggressive antisocial behavior)” (Tacket, 2010). This statement would seem to refute the Whitehill experiment that compared antisocial preadolescents with neurotic ones and found categorical differences in the need for stimulation between the two groups, as well as the Paulhus and Williams study which found that individuals with APD had low levels of neuroticism as a personality trait. Peter unquestionably desires constant stimulation and is certainly violent, yet he is without fear or anxiety, elements of neuroticism. Another component of Peter’s behavior that appears to be inconsistent with Burt’s model is Peter’s callousness. Burt actually does address this in her review when reconciling her model with the diagnostic and statistical manual, or DSM. Peter does suffer from APD, but from a very rare and well known form of APD – psychopathy. The Paulhus & Williams study (2002) compared psychopathy with Machiavellianism and narcissism. Campagna & Harter (1975) was looking at the moral reasoning of childhood psychopaths. Whithall, Scott, & DeMyer-Gapin (1976) examined the child psychopath’s need for stimulation. The Dolan overview on extreme behavior was an overview on childhood psychopathy. The reason why Peter does not conform to all of Burt’s assertions on APD is that Peter does not just have APD. An individual with aggressive and violent APD is typically highly neurotic, filled with anxiety. Peter is decidedly not anxious. Burt said that she would “expect individuals high on CU (callousness), AGG (Aggressive and violent behavior), and RB (rule breaking) to evidence a rare and particularly pernicious form of antisocial behavior characterized by extreme and violent aggression, wanton impulsivity, and an utter lack of regard for the feelings of others. Such individuals would very likely be considered psychopaths” (2012, p. 274). Peter exhibits a highly callous attitude towards others and behavior both aggressive and violent. “If you like, we'll go down and kill him” (Barrie, 1911, p. 44), is just a single example of Peter’s violence and his complete disregard for life. Not even the lost boys are free of Peter’s arbitrary rules and punishments, as evidenced in this passage: “The boys on the island vary, of course, in numbers … and when they seem to be growing up, which is against the rules, Peter thins them out” (Barrie, 1911, p. 49). Peter does not want grown-up lost boys, so he kills them when they start to show signs of growing up—behavior that is both violent and callous. 
Peter’s impulsivity leads him to risk Wendy’s life just to show Captain Hook that it was he who tricked Hook’s men into letting Tiger Lily loose. Peter’s need to break rules is so great that he has broken the greatest rule of life: we all grow older and we all die.
Understanding the need to diagnose children as early as possible to both understand psychopathy and to treat it, psychiatrist Olivier Colins led a team that attempted to create a new diagnostic tool to identify childhood psychopathy (Colins et al., 2013). Typically, diagnosis of children is done using a tool constructed to assess adults. This leads to some misdiagnosis and confusion (p. 7). Colins’ team would create the Child Problematic Traits Inventory (CPTI). The CPTI would focus on three factors, similar factors as those Professor Dolan identified in her overview of child psychopathy: Grandiose-deceitful, Callous-Unemotional, and Impulsive-Need for Stimulation (p. 8). The CPTI was developed to be rated by teachers. The CPTI consists of 28 items, rated from never to very often, and results were tabulated along each of the three factors. The CPTI was found to be “validated in a large representative sample of children” (p. 17). The CPTI was successful in identifying children with psychopathy despite being administered by a teacher rather than a mental health care professional. Looking at just a few of the CPTI questions and how they apply to Peter gives us an idea of how he might be diagnosed. Under the grandiose, deceitful factor we have “seems to see himself/herself as superior compared to others” and “is often superior and arrogant towards others” (Colins et al., 2013, p. 21). Peter is demonstrably narcissistic and consistently exhibits his feelings of superiority. Under the callous dimension, Peter’s lack of empathy, remorse, and feelings of shame indicate that he would score high in this dimension as well. Peter meets the criteria for high impulsivity and need for stimulation to a tee. Peter often does things without thinking, has a need for change and excitement, seems to get bored quickly, and as we have seen, “seems to do certain things just for the thrill of it” (p. 21), like killing pirates or taunting sharks. Peter’s behavior would likely score him a diagnosis of psychopathy according to the clinical diagnostic tool, the CPTI. Through Dolan (2004) we identified psychopathy as containing three dimensions: grandiosity, callousness, and impulsivity/thrill-seeking. These same dimensions became criteria for the CPTI (Colins et al., 2013). Burt (2012) identified that a child with aggressive rule-breaking behavior should be highly neurotic, and that those who were not neurotic but exhibited callousness were likely psychopaths. Barrie (1911) gave us the insight into Peter and his interactions; all we needed to see was how perfectly he conforms to the highly narcissistic, callous, impulsive, and violent psychopath. The implications of Peter’s psychopathy reverberate beyond simple academic discourse. There is a tendency to view characters from various entertainments in a polar way. The characters are either good or bad, and we attribute more positive characteristics to those we decide are good. We ignore their alarming characteristics in a way that may be unhealthy and is surely doing a disservice to the work itself. When we stop thinking about these characters and questioning our own ideas about them, we reduce the complex messages contained in fiction to basic morality plays. We hammer home messages in digestible, monotonous, non-threatening ways, because it is easier than actually discussing the more complex messages and themes contained within. Not looking at fictional characters lets us avoid looking at real people. 
When we accept that characters we love may have a dark side, we may also have to accept that the real people we love may have a dark side, children included. We can no longer attribute characteristics in polar ways. Parents who dismiss the violent actions of their children as childhood exuberance fail to
properly assess their children’s actions as the antisocial acts they are. Antisocial children may become antisocial adults. Imagine Peter as an adult. He can be charming but needs to be the center of attention. He uses manipulation to get what he wants and uses violence as a tool. Peter kills without conscience and cares nothing for the feelings of others; in fact, he does not even acknowledge that other people have feelings unless prompted. An adult Peter Pan is no longer just a case of “a boy being a boy.” Barrie’s disturbing portrait of Peter has become synonymous with youthful exuberance. Peter has become a symbol of never-ending innocence; indeed, we often attribute innocence to children. Thus, if Peter is an eternal child, a reasonable assertion would be his eternal innocence. Peter, however, is not an eternal symbol of innocence and adventure. In truth, Peter is an eternal symbol of hedonism and psychopathy.
REFERENCES
Barrie, J. M., & Billone, A. (2005). Peter Pan. New York: Barnes and Noble.
Campagna, A. F., & Harter, S. (1975). Moral judgment in sociopathic and normal children. Journal of Personality and Social Psychology, 31(2), 199-205.
Carr, A. (2011, April 16). Children's health. Baby teeth: When do children start losing them? Retrieved from http://www.mayoclinic.org/healthy-living/childrens-health/expertanswers/baby-teeth/faq-20058532
Colins, O., Andershed, H., Frogner, H., Lopez-Romero, L., Veen, V., & Andershed, A.-K. (2014). A new measure to assess psychopathic personality in children: The Child Problematic Traits Inventory. Journal of Psychopathology and Behavioral Assessment, 36, 4-21.
Disruptive, impulse-control, and conduct disorders. (2013). In Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed., pp. 219-226). Washington, D.C.: American Psychiatric Association.
Dolan, M. (2004). Psychopathic personality in young people. Advances in Psychiatric Treatment, 10(6), 466-473.
Dunn, J., & Hughes, C. (2001). “I got some swords and you're dead!”: Violent fantasy, antisocial behavior, friendship, and moral sensibility in young children. Child Development, 72(2), 491-505.
Frick, P. J., O'Brien, B. S., Wootton, J. M., & McBurnett, K. (1994). Psychopathy and conduct problems in children. Journal of Abnormal Psychology, 103(4), 700-707.
Heine, S. J. (2008). Morality, religion, and justice. In Cultural psychology (pp. 449-450). New York: W.W. Norton.
Nevid, J., Rathus, S., & Greene, B. (2014). Personality disorders and impulse-control disorders. In Abnormal psychology (9th ed., p. 452). New Jersey: Pearson.
Paulhus, D., & Williams, K. (2002). The dark triad of personality: Narcissism, Machiavellianism, and psychopathy. Journal of Research in Personality, 36(6), 556-563.
Peter Pan Again. (1911, November 18). The Spectator, 9.
Russ, S. W., Robins, A. L., & Christiano, B. A. (1999). Pretend play: Longitudinal prediction of creativity and affect in fantasy in children. Creativity Research Journal, 12(2), 129-139.
MEET THE WRITERS!
I am from Philadelphia, Pennsylvania, and my major is social work. I plan to get a Master’s degree after graduating from HPU and become a medical social worker. I like living in Hawaii and attending HPU due to the cultural diversity. You can meet many students from different parts of the world. Although the cost of living is high, it is an amazing place to live. I enjoy experiencing different world cultures, exploring unique ecosystems, enjoying ocean activities and the year-round warm weather, and being surrounded by beautiful scenery.
Ray Abarintos
My name is Jessica Bie and I was born and raised in Commack, New York, on Long Island. My major is Multimedia Cinematic Production. I hope to work in some part of the film world, whether it be editing, scriptwriting, or, better yet, directing. No matter what I do, I want to use my creative skills, especially with movies in my future. The best thing about Hawaii for me personally is that I have been an avid swimmer my entire life, and Hawaii has given me the opportunity to explore open ocean swimming year round. HPU has also given me opportunities to grow more as a student by providing some multimedia students with full press passes to the Hawaii International Film Festival, allowing us the opportunity to write reviews and interviews and to apply our learning outside of the classroom.
Jessica Bie
I am originally from Florida, but I grew up mostly in Taiwan (7 years) and Singapore (9 years) before I came to Hawaii to attend HPU. My major is Justice Administration, and I am considering a minor in Writing. Currently, I am aiming to go to law school and work for US law enforcement agencies in the future. I love the multi-cultural aspect of HPU because it gives me numerous opportunities to interact and socialize with people from all over both the US and the world. Wesley Chai
I was born and raised here in Hawaii and attended Pearl City High School. Before starting college I actually wanted to become an engineer, but that slowly changed as I finally decided to go to school for business. Growing up, I have always loved fashion and leading others, so what better way to combine the two than to own my very own business? Fashion has always been a hobby for me, but I'd like to use what I learn in school to gain an advantage for the future. The thing I like best about HPU is the diversity within the student body. You can really get to know people from all over the world, and see how we are so different yet so similar in so many ways. Meeting new people allows me to see things from different points of view, and that is great! With fashion being one of my passions, attending HPU also allows me to get an insight into many cultures’ clothing, style, merchandising, trends and more. With everything I am learning, I look forward to beginning my future. After graduating I hope to move to the mainland, where I can expand my horizons and work with some of the businesses we do not have here, getting some hands-on experience. I can't wait to see what the future has in store for me!
Kylee Chun
I am from Oahu, Hawaii. I'm double majoring in Integrated Multimedia and Public Relations. I'd like to one day own my own art-related business. The best part of HPU is the diversity of the student body.
Selah Chung
My name is Lallo Darbo, and I'm from Stockholm, Sweden. I'm currently on a one-year leave from school, but when I studied at HPU my major was International Studies. I'm not sure exactly what I want to do in the future, but I have a passion for politics, so my dream is to be a diplomat or an MP in the Swedish parliament. What I loved about studying at HPU was the unique composition of students from so many different countries. Never have I been in such a vibrant and unique school environment. Being able to talk to people from around the world while you're studying the world provides a learning experience that is hard to come by.
172
Lallo Darbo
I am from Wantagh, New York, and attended Hawai'i Pacific University for my freshman year of college. I am currently majoring in Accounting with an interest in Business Journalism. I hope that one day I can work in either tax or audit for one of the Big Four accounting firms. I plan to take the CPA exam and get my Master’s degree in a more specific area of accounting. Hopefully, after working for the Big Four, I can one day open my own accounting firm. My favorite thing about living in Hawai'i was the diversity there. Even HPU was so diverse, and you could meet people from all over the world just by sitting in your classroom.
Elizabeth Dash
I’m from Oahu, and my major is currently Computer Science. In the future I would like to be involved in software or video game development. My favorite thing about living on Oahu is the people! They are much nicer here than in other places I’ve been. At HPU, I like the small class sizes. I enjoy close interaction with my classmates and professors; it really helps me absorb the course material better, and I don't feel like just a random person in a crowd!
Candace Farris
Amanda Fish
I am from San Jose, California. My major is International Studies, and my minor is Diplomacy and Military Studies. I don't have any cemented career plans for after I graduate, but I will most likely move to Washington, DC. The thing I like best about attending HPU is how diverse and international the student body is, because of the cultural and intellectual exposure it gives me.
I'm from Gaithersburg, Maryland. My major is International Studies with a focus on International Political Economy & Development. I plan on becoming a Foreign Service Officer for the United States Department of State. The best part about attending HPU is the diversity of the student body. I have connections from all over the world, and I've learned so much about the world just by being in Hawaii.
173
Raj Kolloru
Pancy Thein Lwin
I am pursuing a BA in Political Science and Governmental Studies while minoring in Economics at Hawaii Pacific University (HPU). I am an Honors Scholar. My career goal is to promote Hawaii’s strategic role in the political, social, and economic relations between the United States and Southeast Asia, especially with regard to Myanmar. I enjoyed my time at HPU sharing worldviews on politics and economics with international students from around the globe.
I was born in the Philippines but was raised in Ewa Beach, Hawaii. My declared major is Finance. If I stick with finance, I hope to go into real estate. The thing I like best about living here is the diversity in people, places, and especially food.
Janine Mariano
I am from Ewa Beach, Hawaii, and my major is Biology - Human & Health Science, with a minor in psychology. My future plans consist of attending medical school to become a pediatrician or general practitioner. With the degree, I would like to travel and help the less fortunate who have trouble obtaining medical attention.
Samantha Patanapaiboon
Being able to stay close to family is the best part of attending HPU, because I am able to watch my siblings grow as well as help out my mom. Plus, I don't have to deal with feeling homesick or the crazy mainland winters.
I was born and raised in Haugesund, Norway. I am 21 years old and majoring in Integrated Multimedia, currently in my second year. I have been interested in technology and media ever since I was young. My future career plans are still foggy, but I definitely want to start somewhere in production and see where that choice takes me in life. I enjoy Hawaii's amazing nature, and I like to hike every now and then. On the weekends I usually take a book to the beach with a friend. I also currently work as a photographer/journalist for HPU Kalamalama.
174
Silje Lie Solland
I am from Austin, Texas, and am a Health Science major hoping to become a physician. The weather, nature, and outdoorsy activities year round make me love studying at HPU.
Eimi Smith
Hi, I’m from Bellevue, Washington, just outside Seattle. I’m a business management major seeking an English minor. Hopefully, upon graduation from HPU, I will continue my education in graduate school, preferably at the University of Washington. However, I would also like to take a semester off from school to travel. I enjoy spending time at the beach, as well as reading.
Sophia Suarez
Kelly Wapenski
I am from Morgan Hill, California. I am currently a Psychology major, but I am applying to be a Social Work major next semester, and my goal is to double major in Social Work and Psychology. My future career plan is to be a counselor and help anyone I can. The thing I love best about Hawai'i and HPU is the people I have met here. These people have changed my life, and I cannot imagine not having met them. Everyone is always helpful and understanding.
NOT PICTURED: Romain Caïetti, Dana Chapman, Everett Fujii, and Rosario Kuhl
175
MAHALO!
The editors and contributors would like to acknowledge the contributions and support of the following people:
David Lanoue, Dean, HPU College of Humanities and Social Sciences
William Potter, Associate Dean, HPU College of Humanities and Social Sciences
Shane Teranishi, HPU Web Services
Kathleen Cassity, Chair, HPU English Department
Brittany McGarry, Editorial Intern
Savannah Halbrook, Editorial Intern
Lily Nazareno, Cover Artist
Nominating Instructors: Kathleen Cassity, Sara Davis, David Falgout, Tyler McMahon, Deborah Ross, Micheline Soong, and Patrice Wilson
176