Pinnacle
The Academic Journal of The Derryfield School
Volume 3, Fall 2014

“Leading Lives of Passion and Purpose”



PINNACLE STAFF:

Editor-in-Chief: Francesca “Franky” Barradale
Managing Editor: Samantha “Sam” Carbery
Editors: Anthony Esielionis, Cameron “Cam” Huftalen
Faculty Advisors: Lindley Shutz, Becky Josephson

LETTER FROM THE EDITORS

Welcome to the 2014 Issue of Pinnacle!

As the Derryfield School celebrates its 50th year, we thought it especially appropriate for the community to share a compilation of some of the best academic work from the prior year. Derryfield’s tradition of encouraging students to follow their own path is on display in this issue. In a break from prior issues of Pinnacle, the focus of this publication is student work only in the form of essays or research papers. This narrower focus has allowed the inclusion of a greater variety of works in a wider range of fields. Also, the student voices on these pages represent a broad spectrum of the community: some are varsity athletes, some are artists, some are more mathematically and scientifically inclined, and some are most comfortable on stage. In other words, the work and voices herein represent the breadth of this small, diverse community.

Crafting a publication requires the assistance of many people. We are grateful to all of the teachers who assisted us in obtaining submissions and to all of the students who were willing to share their work. We are particularly thankful to Ms. Shutz and Ms. Josephson for encouraging us to undertake this project and for shepherding our efforts.

Enjoy,
Franky and Sam

Cover Photo: “Leading Lives of Passion and Purpose” by Zane Richer ’15


Table of Contents

A Foot in Each Box: Society’s Definition of Sexuality, by Sam Fogel .......................... Page 5
The Societal Transformation Hypothesis, by Franky Barradale .......................... Page 7
Zen and the Introvert, by Kaitlin Cintorino .......................... Page 11
Stress: A Challenge Convoluted by Misconception, by Natalie Duncan .......................... Page 13
Selling the Excess: How 1920s Advertisers Built a Consumer Culture in America, by Rebecca Teevan .......................... Page 16
The Rise of the American Celebrity: Criminals, Presidents, and All That Jazz, by Sam Carbery .......................... Page 21
America’s 20th Century Interventionist Doctrine: New Opportunities, Drastic Measures, by Mike Roffo .......................... Page 27
La dualidad de ser, by Zach Barragán .......................... Page 34
Poetry Comparison In-Class Essay, by Samuel Lynn .......................... Page 36
Telemachus and Hamlet: The Fear of Action, by Jessey Bryan .......................... Page 38
The Road to Peace and Revenge are Similar Beasts, by Grant Kegel .......................... Page 41
New Ideas on Correlation Between Speciation and Reproductive Barriers, by Anja Stadelmann .......................... Page 43
Revolution in Russia and Egypt: History Repeats, by Paige Voss .......................... Page 44
Feeding a Hungry Mind, by Anja Stadelmann .......................... Page 49


Dedication: To the Founders


A Foot in Each Box: Society’s Definition of Sexuality
Sam Fogel
AP English Language and Composition

Two boxes stand separated, one labeled “gay” and one labeled “straight.” In these boxes lie society’s predetermined characteristics of what is considered acceptable for a stereotypical straight man and what is acceptable for a stereotypical gay man. However, is it possible for a man to stand with one foot in each box? Tobias Fünke of Arrested Development seems to break down the walls of these boxes, forever provoking the audience to question his actions regarding his sexuality. On the one hand, Tobias Fünke perpetuates the role of the “family man”; on the other hand, he also perpetuates the traits typically associated with someone who is flamboyantly gay. Tobias’s purgatory between sexualities exemplifies how, as undefined sexuality surfaces as a topic in modern culture, people incorporate this ambivalence into their lives.

Since the media first promoted the “nuclear family” mentality of the 1950s, the man of the house has been portrayed as a glorified provider, father, and husband trying to please his family. Tobias Fünke embodies these characteristics as a successful doctor, a husband married to a beautiful socialite, and the father of a lovely daughter. The media has long portrayed the father as the “god of the house, the one that you took all your problems to” (“Man of the House”). This emerging stereotype suggested that the man of the house was the figure whom everybody idealized and relied on. The stereotypical family man was expected to shoulder the responsibilities of supporting his family, “like having to wear an ill-fitting coat for one’s entire life” (Theroux 379). Theroux’s description of the coat as ill-fitting suggests that these men succumb to, but may not always want to invest in, the enormous pressure that society has placed upon them. As society assigns this role of responsibility to men, a stereotype begins to accompany it.

This stereotype of the “family man” forces an emphasis on appeasement, the notion that the perfect father must please everyone in his family in a constant effort to keep everybody happy. Because of this profile into which men have been boxed, “It does not surprise [me] that when the President of the United States has his customary weekend off he dresses like a cowboy - it is both a measure of his insecurity and his willingness to please” (Theroux 380). Theroux’s comment exemplifies how the pressure of this stereotype accompanies even the most powerful of men. This appeasement mentality is echoed in Tobias Fünke’s attitude toward his family. Tobias dons Theroux’s metaphorical coat as he constantly strives to serve as a provider and to win the approval of his family; for instance, in Season 2, Episode 13, Tobias abandons his career as a psychiatrist to pursue a life of acting. In an effort to convince his wife Lindsay that this is a sound career choice, he promises her, “If this takes off I might finally be able to buy the happiness you deserve” (2.13). Tobias wants nothing more than to placate his wife, and his persistence in this effort is what makes him such a strong embodiment of the masculine stereotype. He furthers this stereotype in the next episode when, after a dispute about his new career, Lindsay kicks Tobias out of the house. As Tobias fails to meet Lindsay’s predetermined expectations of how the perfect husband should act, Arrested Development exemplifies how difficult it is to live up to the media’s stereotype. Tobias’s ardent desire to regain his family and win their approval leads him to enact the plot of Mrs. Doubtfire: he dresses up like a maid and offers to clean the family household in an attempt to remain close to his wife and daughter (2.14). Despite his somewhat misguided efforts, Tobias has a clear devotion to his family, striving only to do what is best for them.

Tobias’s dedication to his family furthers the media’s portrayal of the stereotypical man of the house; however, the manner in which he goes about it brings his more feminine side to the surface.


In contrast to Tobias’s embodiment of the stereotypical man of the house, his ambiguous understanding of his sexuality places him in another box, forcing the viewer to question his gender identity. Aaron Devor’s essay on the gendered self explains the societal expectations that accompany gender identity: “As we move through our lives, society demands different gender performances from us and rewards, tolerates, or punishes us differently for conformity to, or digression from, social norms” (Devor). For instance, society expects that sexual orientation is directly correlated with gender: a heterosexual male would be married to a heterosexual female. A stereotypical, heterosexual man is expected to dress and talk a certain way, and because of society’s inability to distinguish between gender and sexuality, digression from these social norms is associated with homosexuality. Devor suggests that “body posture, speech patterns, and styles of dress which demonstrate and support the assumption of dominance and authority convey an impression of masculinity” (Devor). Consequently, while Tobias embodies all the ideological attributes of the stereotypical, presumably heterosexual “man of the house,” his mannerisms suggest a more feminine demeanor. Tobias’s clothing and Freudian slips further his sexual ambiguity, challenging our expectations of his behavior given his sexual identity. In Season 2, Episode 17, Tobias reveals his closeted habit of perpetually wearing cutoff jean shorts underneath all his clothes. While on the surface Tobias continues to wear typical masculine attire, he is hiding another side of himself, one which embraces clothing and attributes more commonly associated with femininity. Along with his closeted feminine attire, Tobias frequently slips sexual double entendres into conversation, such as referring to himself as a “blow hard” and titling his gay cult-hit autobiography The Man Inside Me (Arrested Development Wiki).

These vernacular digressions from Devor’s definition of masculine speech patterns demonstrate how Tobias continues to break away from the stereotype. Through these hints at an ambiguous sexual orientation, seen in his feminine attire and double entendres, Tobias breaks out from the conventional image. Although Tobias Fünke plants one foot in the box of the archetypal father figure, he plants his other foot in the less conclusive and structured box of sexual confusion. By accepting neither box to concretely define him, Tobias epitomizes a “breakout character.” Through his character, the media addresses society’s growing concerns with issues such as gay rights and sexual or gender confusion. Tobias’s breakaway from masculine stereotypes honors these concerns and gives them great importance. However, his ability to balance life as a patriarch with his sexual uncertainty raises the next question the media must address: has society made it impossible to embody both life as a family man and life with an undefined sexuality? Those who are unsettled by this question may argue that America isn’t ready for a society willing to depart from the idea of concrete gender identity. Much of Arrested Development’s audience would scoff at or even be repulsed by Tobias’s digression from the stereotypical heterosexual. However, Tobias Fünke provides such intrigue because of his ability to capture the age-old comfort of stereotypes while twisting the entire concept into one that is progressive and controversial. As a hybrid of conformity and ingenuity, Tobias reaches a large audience, repulsing a portion but providing comfort to another. There are those, of course, who feel they relate to him, such as gay fathers or men confused about their sexuality.

The media’s introduction of a character as ambiguous as Tobias seems to be the first step in sensitizing America to an important progression in society, extending the first stages of acceptance to a new demographic that is ambivalent in gender and sexuality. Beyond accepting this ambiguity, the media’s gentle introduction of the topic, while still incorporating classic stereotypes to lessen the impending discomfort, lays a foundation for breaking down an important wall between sexuality and gender identity.

Works Cited:
"Episode 2: Man of the House." PBS. PBS, n.d. Web. 10 Feb. 2014.
Devor, Aaron. Becoming Members of Society: Learning the Social Meanings of Gender. Diss. Print.
"Tobias Fünke." Arrested Development Wiki. N.p., n.d. Web.
"Immaculate Election." Arrested Development. Television. <http://movies.netflix.com/WiMovie/70140358?strkid=721999289_0_0&trkid=222336&movieid=70140358>.
"Motherboy XXX." Arrested Development. Television. <http://movies.netflix.com/WiMovie/70140358?strkid=721999289_0_0&trkid=222336&movieid=70140358>.


The Societal Transformation Hypothesis
Franky Barradale
AP English Language and Composition

Though media outlets such as the Internet are on the rise in today’s society, television is still the most powerful purveyor of public opinion. In fact, the average American watches five hours of television every day (Zalaznick). Television is specifically formatted to attract viewers. For television to be captivating, it must air shows that have characters worth watching: specifically, breakout characters. A breakout character is someone who defies or reshapes a stereotype, consequently providing social commentary about preconceived cultural ideals. The media is “the most prolific progenitor of stereotypes” because it has the capacity to spread these stereotypes “across borders and cultures with terrific ease” (Bitches, Bimbos, and Ballbreakers 8). Breakout characters can capture the attention of a vast audience by challenging ubiquitous stereotypes. One such character from The Big Bang Theory is Amy Farrah Fowler, a burgeoning mix of the stereotypical nerd and the stereotypical female. Though Amy is introduced on the show as socially inept and averse to intimate human relationships, which are stereotypically nerdy qualities, she gradually becomes droll, quick-witted, and devoted to her friends, signs that she also has the capacity to be stereotypically feminine. Through characters such as Amy who challenge societal norms, the media frames public opinion by manipulating stereotypes to propose a new ideal.

The Big Bang Theory is a critical show to analyze because of its popularity. In fact, the show is ranked the number one comedy in the United States in both total viewers (averaging 20.0 million per show) and adult viewers aged 18-49 (averaging 6% of the 126,960,000 adults in that age range who live in a house with a television) (Bibel, “CBS”). On Thursday, February 6, 2014 alone, 17.53 million people tuned in to watch The Big Bang Theory (Bibel, “Thursday Final Ratings”). In comparison, only the Winter Olympics were viewed more, generating an audience of 20.02 million people (Bibel, “Thursday Final Ratings”). The Big Bang Theory entered syndication after Season Four and began to air five nights a week on local television stations across the United States (“The Big Bang Theory Fansite”). The show is now in its seventh season. The show’s popularity and pervasiveness give it a strong influence within society, setting it up to be an outlet for breakout characters.

The contradictory parochial stereotypes of nerds and women serve as the foundation for Amy’s success as a breakout character. In his piece Becoming Members of Society: Learning the Social Meanings of Gender, the sociologist and sexologist Aaron H. Devor notes that female stereotypes embody the belief that the goals “of all biological females…revolve around heterosexuality and maternity.” Femininity in this sense is defined by “warm and continued relationships with men...interest in caring for children, and the capacity to work…in female occupations” (Devor). These vague and Freudian criteria restrict women to domestic lives that contain happiness only through other individuals. This stereotype is diametrically opposed to the popular stereotype of the nerd, which, according to Lori Kendall, a professor of sociology at the University of Illinois at Urbana-Champaign, is a “sartorially challenged, anti-social white male.” Not only is the stereotype of the nerd limited to males, but it suggests that nerds are awkward and lead lives of isolation from others. The ubiquity of and contrast between the stereotypes of nerds and women lay the foundation for a character like Amy Farrah Fowler to defy both preconceived judgments.


During her four seasons on The Big Bang Theory, Amy undergoes a dramatic shift in character that allows her to embody both the female and nerd stereotypes. In Season Four, when she becomes a permanent character on the show, Amy, clad in sweaters, button-down shirts, skirts, knee-length socks, and loafers, complemented by her straight, mousy brown hair and square glasses, is a clear stereotypical nerd. Her speech and social interactions are awkward to the point of being robotic. While her physical appearance does not change over the course of the show, her personality transforms greatly. One of the main areas of her development is her relationship with Sheldon, her “boyfriend.” Initially, Amy feels that romance is “an unnecessary cultural construct that adds no value to human relationships” (“The Desperation Emanation”). Thus her platonic relationship with Sheldon is completely satisfactory to her. However, Amy’s views are tested in the episode “The Alien Parasite Hypothesis,” when she becomes sexually attracted to one of Penny’s ex-boyfriends. Amy desperately turns to her scientific background to explain her arousal logically: “I have a stomach, I get hungry. I have genitals, I have the potential for sexual arousal.” Though she comforts herself with this explanation, the experience is psychologically crippling for Amy, who realizes that she can no longer ignore her heterosexual urges. This test of her character marks the beginning of Amy’s evolution into a logic-driven woman who is conscious of her desires. While becoming more aware of her sexuality, Amy is similarly becoming more social in her interactions with the other main characters. However, she has not had many friends, and she is thus unaware of the standards for acceptable behavior.

In some cases, this limitation causes Amy to overstep boundaries, such as when she confesses her gratitude to Penny for teaching her how to be more sociable: “[B]efore I met you, I was a mousy wallflower. But look at me now. I’m like some kind of downtown hipster party girl. With a posse, a boyfriend, and a new lace bra that hooks in the front, of all things!” (“The Rothman Disintegration”). Though Amy is oblivious to the fact that parts of her comment may be too personal, this encounter reveals that she is evolving into a more uninhibited, social individual, thus becoming more like a stereotypical woman. In other situations, Amy relies too heavily on her vast stores of knowledge for guidance, not realizing that the resulting behavior is not socially appropriate. For example, this social ineptitude causes Penny and Bernadette, her two female friends, to shun her and to shop for bridesmaid dresses without her. Bernadette and Penny explain to their concerned male friends why they chose to shop without Amy:

Bernadette: She keeps on telling us stories about bridesmaid traditions in other cultures, and they’re all about getting naked and washing each other.
Penny: Yeah, and she keeps trying to figure out if our cycles have synced up so we can call ourselves the Three Menstrua-teers. (“The Isolation Permutation”)

Penny and Bernadette represent society’s response to an individual like Amy, who is shunned because she is oblivious to her inappropriate behavior. Though the writers appear to agree with society’s perception that being a nerdy, socially inept female is undesirable, they use Amy’s friends to further a different opinion. Later in the episode, Bernadette, a female scientist who is much more socially adept, offers Amy the position of her maid of honor as an apology (“The Isolation Permutation”). Though she is awkward, Amy earns the sympathy of her friends, showing that people should be tolerant of everyone, including individuals who are socially inept. This dynamic of Amy’s friendships reflects the conflict between having a passion for science, something that is not easy to share, and wanting stereotypical female interactions, which often involve bonding over interests all women share. Though social interactions are an important platform to display Amy’s breakout personality, the show best achieves social commentary by exploiting humor and irony to question society’s perception of female nerds. In one


episode, Sheldon purchases “jewelry” in an attempt to end a fight with Amy. Initially, Amy is furious with Sheldon for being “the most shallow, self-centered person I have ever met,” but her frustration breaks when she sees Sheldon’s gift: “Do you really think another transparently manip...oh! It's a tiara! A tiara! I have a tiara! Put it on me, put it on me, put it on me, put it on me, put it on me, put it on me, put it on me!” (“The Shiny Trinket Maneuver”). Amy’s about-face, coming just after she has denounced Sheldon for his shallowness, reveals through humor that she is a mix of the serious, analytical nerd and the unrepressed little girl who wants to be a princess. The image of Amy as a princess is also used in the episode “The Contractual Obligation Implementation,” in which she, Bernadette, and Penny go to Disneyland and get princess makeovers. While the girls are away, Sheldon, Howard, and Leonard must speak to a group of middle school girls about careers in science, encouraging them to pursue futures in the field. But the guys can do little to interest the girls, so Sheldon calls Amy and Bernadette to have them speak over the phone, since they are actual women in science. While looking at herself in a mirror and applying lipstick, Amy says into the phone, “The world of science needs more women, but from a young age, we girls are encouraged to care more about the way we look than about the power of our minds” (“The Contractual Obligation Implementation”). This comment exposes the direct commentary the show makes about the capacity for girls to be scientists. The comical combination of Amy’s vanity and her intellectualism exposes the fact that stereotypes of females conflict with Amy’s message. The show therefore suggests through this event that society’s perception of the stereotypes must change and that the conflict between the two stereotypes should cease.

Since the media is the purveyor of socially acceptable characteristics, the fact that a character such as Amy has emerged suggests that society is becoming more accepting of female scientists. The evolution of female television characters displays the gradual deviation from the image of the perfect woman and the growth of flawed characters who develop more in response to experiences than to social pressure (“The Independent Woman”). Just as Amy has grown and developed within her appearances on The Big Bang Theory, her emergence as a character is indicative of shifting societal ideals. According to Lauren Zalaznick, a television executive, the materialization of a breakout character is often the result of an actual event or social change that triggers the overthrow of a stereotype. Amy’s creation has most likely resulted from the conflicting ideals of women and scientists that have been in place for decades. As Betty Friedan, a leading figure in the women’s movement in the United States, notes in her book The Feminine Mystique, girls were taught in the mid-twentieth century that “truly feminine women do not want careers, higher education,” or even “political rights” (Friedan 16). Consequently, women were discouraged from seeking intellectually dominated professions, much less careers in the sciences. During the Cold War, scientists realized that “America’s greatest source of unused brainpower was women,” who rejected the sciences because they were “unfeminine” (Friedan 17). Female stereotypes had a pervasive effect on society by dictating the correct way for women to live: in aversion to the sciences. This psychological separation of women and science still exists and deters women from pursuing scientific careers, particularly of the technological variety: “they think that if they go into computer science they’re going to have to be antisocial. That turns people off who don’t see themselves fitting that stereotype” (University of Illinois).

In order to oppose this clash of stereotypes, characters such as Amy have been created to change these cultural misconceptions; by creating Amy, television is spreading the idea that women can also live happily as scientists. The use of television to circulate new concepts is critical because “psychologists are convinced that the projection of stereotypes leads to stereotyped behavior” (Bitches, Bimbos, and Ballbreakers). The rise of Amy as a character will therefore lead to the ultimate transformation of society: the emergence of more women who feel comfortable pursuing a career in the sciences. By proposing a new social ideal through characters such as Amy, the media has the power not only to change public opinion, but also to change culture.


The media uses characters such as Amy to manipulate stereotypes, ultimately transforming public opinion to become open to a new societal ideal. Though some breakout characters only serve to challenge a single stereotype, Amy defies the misconception that the conventional stereotypes of females and nerds are antagonistic. Though she experiences social challenges, Amy shows that intellectualism and femininity can adapt to coexist. The most important indicator of Amy’s role as challenging social norms is the humor employed in the show, which highlights irony and creates undertones of criticism. A crucial factor that determines a breakout character’s success is the genre of the show from which she emerges. Comedy, though seemingly light and harmless, is powerful, utilizing appropriate contexts to soften the inherent social commentary that makes the viewer realize that certain stereotypes are no longer valid. Through the genre of The Big Bang Theory and the breakout character within it, the media attempts to change the way that people typically think about female scientists. With the influence of characters such as Amy, society will ultimately become more accepting of intellectual females, and the combined stereotype will emerge as a prevailing reality. Thus in the end, the media uses breakout characters as a tool to allow for the evolution of society.

Works Cited:
Bibel, Sarah. "Thursday Final Ratings: 'The Big Bang Theory', 'American Idol', 'Two and a Half Men' & the Olympics Adjusted Up; 'The Millers' Adjusted Down." TV by the Numbers. Zap 2 It, 07 Feb 2014. Web. 8 Feb 2014. <http://tvbythenumbers.zap2it.com/2014/02/07/thursday-final-ratings-the-big-bang-theory-american-idol-two-and-a-half-men-the-millers-adjusted-down/235346/>.
Bibel, Sarah. "CBS Has Four of the Top Five Comedies in Viewers and Adults 18-49, Including Number 1, 'The Big Bang Theory'." TV by the Numbers. Zap 2 It, 16 Oct 2013. Web. 8 Feb 2014. <http://tvbythenumbers.zap2it.com/2013/10/16/cbs-has-four-of-the-top-five-comedies-in-viewers-and-adults-18-49-including-number-1-the-big-bang-theory/209546/>.
"The Big Bang Theory Enters Syndication." The Big Bang Theory Fansite. N.p., 19 Sep 2011. Web. 9 Feb 2014. <http://the-big-bang-theory.com/story/1620/The-Big-Bang-Theory-enters-syndication/>.
Bitches, Bimbos, and Ballbreakers: The Guerilla Girls' Illustrated Guide to Female Stereotypes. New York: Penguin Books, 2003. Print.
Devor, Aaron H. "Becoming Members of Society: Learning the Social Meanings of Gender." Gender Blending: Confronting the Limits of Duality. New York: Library of Congress Cataloging-in-Publication Data, 1989. Print.
Friedan, Betty. "The Problem That Has No Name." The Feminine Mystique. New York: Norton, 1963. Print.
"The Independent Woman." PBS: 30 Oct 2011. Television. <http://video.pbs.org/video/2160358437/>.
University of Illinois at Urbana-Champaign. "Geeks May Be Chic, But Negative Nerd Stereotype Still Exists, Professor Says." ScienceDaily. ScienceDaily, 8 March 2009. <www.sciencedaily.com/releases/2009/03/090303123810.htm>.
Zalaznick, Lauren. "The Conscience of Television." TEDTalk. TED. Sep 2011. Lecture.


Zen and the Introvert
Kaitlin Cintorino
AP English Language and Composition

In today’s busy, complex, jumbled-up society, an individual’s character is consistently judged and modified. Our world prefers to group people into categories of introverts and extroverts: those who gain energy from being alone or from being around others, respectively. Introverts are scolded for being antisocial while extroverts are praised for participating; however, judging these two drastically different individuals is wrong, and society needs to realize the flaw in its thinking. We live in a biased world where the extrovert is valued more than the introvert. The outspoken person with wild ideas and no filter to hinder them is thought to be more intelligent, simply because of his excessive contributions. However, those who speak out are not always the ones who should. It is the person silently sitting in the corner of a room, the person sketching abstract lines while listening to a lecture, keeping her mouth shut, speaking only when called upon, yet every time stating the correct answer with absolute certainty, only to quickly retreat back into silent contemplation; it is this person who has the most to say and the most intriguing ideas, if one could only take the time to prompt her to speak. The world is too busy talking to listen. The introvert practices an underrated way of thinking that is necessary for our growth and, subsequently, for how we relate to society.

The biggest fault in our society is the naivety with which people view introverts. Those who spend time alone improving themselves are called “loners” in a derogatory sense of the word. Among the synonyms listed for “introvert” are words such as narcissist and egotist. In reality, however, the word is defined as “one who retreats mentally.” How did such an innocent, well-meant concept become so looked down upon? Is it the concept that is looked down upon, or just the word?

Mental retreats have been highly regarded and practiced in a variety of ways for years. Many cultures participate in practices such as yoga or meditation, both highly personal exercises of focusing within. These cultures have come to terms with the importance of silence and reflection, somehow leaving the Western world behind in the process. Especially prominent in Asia, this meditative culture is neither a new advancement nor a recent discovery; instead, it is a foundation on which these cultures have built society since early times. Buddhism, a religion largely practiced in Asian countries, focuses on the improvement of one’s life through one’s own thoughts and actions. Buddhists practice silent meditation in order to clear their thoughts. In these cultures, introversion is accepted, even encouraged. There are countless benefits that come from meditation and introspection, if the busy world would just stop to consider them.

A wise Zen master once explained to me the importance of clarity of mind. Our minds are like a glass of muddy water, thick and clouded due to the swirling mass of debris that occupies the space. No one can see through a glass such as this. If one settles his mind, his glass of water, for long enough, the mud and confusion will sink, leaving a clear glass of water with the mud on the bottom. Constant interactions, disruptions, and outside ideas act as the stirring rod that mixes the liquid, while practicing meditation or simply spending time alone allows the mud to settle. Until one is able to settle her own mind, no one will be able to see clearly. The way people interact with the world depends first on the way they interact with themselves. Self is the base of society. If unable to understand oneself, how can one hope for anyone else to understand or relate to him? Once someone is able to clear his own mind and let the mud settle on the bottom of the glass, he will be able to share that glass of water with the world.


If human beings spend too much time moving, socializing, and stirring up their muddy water, it will never become clear. It is because of this that time spent alone is integral to one's persona. When making an important decision or attempting to think critically, what might a person do but seclude herself from society and look within herself to find her answer? We ourselves hold all the answers of self, though we may be missing the key to find them. It is when one is alone that one makes the largest discoveries, the most shocking epiphanies. Far too many people spend minimal time - if any at all - discovering themselves; that a person may live her whole life in the presence of others is an abomination of self and self-worth. When one is able to take a step back and clarify her thoughts, she is then able to rejoin society and offer her brand-new discoveries to the world. Yet without taking a step back, meditating, being silent, and thinking for a moment, she may never have come up with these ideas. Our world is built upon the very fact that people think about new things, and it is impossible to have a creative thought process when it is constantly being infiltrated by the ideas of others. Our thoughts are tainted by others, and rarely are we able to evade them. The world is full of rampant thoughts, and silence must be actively sought.

Though peppered with land and the seven billion people thriving on it, the earth's surface is roughly seventy percent water. Water is found in streams and rivers; in the wild, open oceans; in rain pouring down from the sky; in the dew that collects upon early-morning blades of grass; and in the bodies of humans. Even in the driest of deserts, small pockets of water must be stored inside the seemingly invincible cactus in order for it to survive. Water is essential for every living being. Life needs water; water creates life.
The seven billion people of our world are as prominent as the water coating the surface. People are the water of the unnatural world. If people are water, the extroverts and the rest of the public realm are the ocean - prevalent and encompassing most of the earth. When one thinks of society, as one would think of water, one immediately jumps to the idea of salty masses of liquid lapping against the land. The ocean is large and playful, and many necessary beings grow and thrive within it; however, ocean water is undrinkable. Life cannot be sustained by salt water alone, just as society cannot be held up by extroverts alone. It is the rivers, calm lakes, and fresh mountain springs that provide drinking water for the world; these are the equivalent of alone time and of the world's introverts. These silent streams are the essential parts of life, providing the water that quenches the thirst of society. Though they move along without fanfare, they are far more integral than one would imagine. The salt of the ocean dehydrates your skin, only making the intake of water, or the absorption of peace, all the more important. Though the ocean makes up most of the world, the calm freshwater rivers are the part we cannot live without, much as we need introverts in our society and peace in our lives.

Until people are able to allow introversion, a silent stream, into their lives, our world will continue to live with a large gap. This gap comes from the brilliant ideas and ingenious contributions that we miss out on by stifling our own thought processes. One day, the rapidly talking, quick-thinking, preoccupied beings that inhabit this planet will come to terms with the importance of silence. On that day, the introverts will rejoice. Society will flourish. It will be then that the pounding of the ocean of extroverts will reduce to a low hum, and more of our thoughts will flow into quiet streams.
These streams will swell to near flooding, supplying limitless quantities of ideas throughout the world. The world's thirst for knowledge will be quenched with an endless stream of information and creativity like never before. From that day forward, introverts will never have to live as a disadvantaged class again; in return, they will bring order and peace to society.


Stress: A Challenge Convoluted by Misconception
Natalie Duncan
AP English Language and Composition

In any workplace or school, the phrase "I am so stressed" is never far from anyone's ears. Stress seems universal, and universally miserable. However, the feeling we are inclined to label "stress" is misleading: the problem is not the stress itself; the real problem is how we relate to our stress. The difficulties associated with stress in society are no illusion - feelings of stress do yield impairment - but the root of this challenge is not simply an emotion labeled stress; it is our response to our feelings of stress. To confront this challenge, we must focus on how we address stress at its origin. Revamping our insights into and reactions to stressful emotions could allow for a positive relationship between society and stress, and thereby eliminate its problems. Cultivating mindfulness in the presence of stressful feelings presents great potential as the cure to this plight. Stress is the greatest challenge facing society because it consumes people in a cloud of misconception, preventing a positive relationship with it.

Modern society has dismissed stress's initial purpose, but although no longer essential, stress continues to plague us with a multitude of hindrances. Stress's origins are found in the need to survive; however, modern civilization has made this need antiquated, so stress is no longer essential for survival. Our bodies are capable of releasing stress hormones because early humans needed stress as a "fight-or-flight mechanism" which ultimately "led to the animal's escape from possible injury or death," says Andrew Bernstein, author, stress expert, and founder of ActivInsight and the Resilience Academy. However, in today's world, "The stress that most humans experience…offers no survival advantages," and in fact "experiencing stress damages [our] bod[ies] and leads to a long list of diseases" (Bernstein).
Stress is an unappealing feeling that serves little purpose in a world rife with safety precautions and with no real need for a danger alarm. Attempts to counter this frustrating, extraneous feeling usually fail to purge stress from our lives in any lasting manner. This failure derives from "knowing that [we] are stressed and trying to fix it but failing to put [our] finger[s] on the true source" of our stress. We are inclined to "point out conditions, but [conditions] are not stress, because conditions are always there"; stress is instead "our way of relating to conditions." Society has realized that stress is an encumbrance, but when we try to act against this impeding feeling we make little progress. Fletcher diagnoses this failure as an inability to comprehend the true roots of our stress. Although stress's original purpose is obsolete, its presence is preserved. Society needs to find and embrace an effective method for relating to stress, or else its existence as a handicap will continue.

Stress poses a challenge to society because, although it is no longer necessary in our bodies, its existence and accompanying consequences persist. To combat the adverse outcomes associated with stress, awareness of how we relate to stress must be catalyzed. Stress is a word that most often evokes a negative reaction; however, it is often forgotten that stress is inevitable. Stress is felt by 80% of all Americans in the workplace, reports an NIOSH survey for the American Institute of Stress, and the widespread nature of this challenge supports stress's status as an epidemic. In order to control the negative image stress holds in our brains, our relationship to stress must be rewired.
The broad term "stress" can be separated into two contrasting groups: "distress" and "eustress." James Gordon, a Harvard-educated psychiatrist and stress expert, found that these two branches of stress can be "characterized by the same apparent physiological reaction," but distress tends "to lead to physical illness" while eustress results in "a state of well-being and satisfaction." How does one determine whether she is experiencing distress or eustress when she feels strained? The type of stress one feels is largely dependent on who is experiencing it; the problem of stress is not stress itself but one's reaction to stress. Hans Selye, a scientist who specialized in the study of stress, found that the differences between distress and eustress "were in the match between the stressor and the person the stress affected; and, more particularly, in the differing attitudes of people subjected to the same stress," a finding detailed in Gordon's book Stress Management. Selye found that the key to how we relate to stress is based on our outlook on life, and that situational reactions - the "differing attitudes" he describes - are completely individual.

Take, for example, the varied responses to public speaking: it is often recognized as a widespread phobia, yet some people find it energizing and exciting. Andrew Bernstein observed that "Unhappiness arises internally, through the subconscious mind. And it is dissolved through insight." Our brain, which categorizes certain situations as sad or stressful, adds stress when the situation itself does not necessarily yield it. Unhappiness and stress are both shaped by each individual personally; the anxiety of one situation, such as public speaking, can take many different forms depending on individual reactions. We cannot purge our lives of situations that will produce worry or tension. Many people believe that the cure to stress is not a change of mind but a temporary departure from our normal habits. It is easy to believe that "vacations, exercise, yoga, relaxation, and massage, or anti-anxiety medications" can cure stress, but exercising daily cannot prevent a stressful scenario such as moving to a new house (Gordon 16). The two are unrelated to one another, showing that in order to gain control over stress, one's reactions - not one's surroundings - must be managed.
Methods such as those listed earlier are traditionally prescribed to help manage stress but cannot end it. Instead, "The key to eliminating stress, and not just managing or escaping it, is to create a fundamental and lasting shift in the way you think" (Gordon 16). Stress is inescapable, but the way to curb negative reactions to stress is by reworking how your mind processes it. Learning that stress is reaction-based and not purely situational is welcome news because it means that stress can be controlled, but this news generates a pressing question: how can we control our stress?

Gaining awareness of our stress and learning how to control it is central to improving our response to stress, and this awareness and control can be implemented through mindfulness. Margaret Fletcher, a stress and mindfulness expert and the founder of the mindfulness-based organization WellAware, defines mindfulness as a technique based upon "raising awareness about how things actually are." Because effectively solving the challenge of stress relies on awareness of one's surroundings and thought process, mindfulness is an apt solution. Lynn Rossy, a mindfulness expert from the University of Missouri, defines mindfulness as "moment-to-moment awareness without judgment." Translating this consciousness to our reactions to stress is a vital step in the stress-elimination process. When we approach stressful situations, our reactions are unconscious, but unconsciousness provides no escape from negative, stress-evoking responses. Under the counsel of mindful thinking, we "realize that [we] have a choice in regards to distress" (Fletcher). Mental awareness reminds us that the negative reaction stressful scenarios spur is unnecessary. Mindfulness not only works to fight adverse responses to stress, but also serves to remind us of the source of our discomfort.
When we practice mindfulness, we "do not change [our]selves, [we] just become aware of what is occurring around us and see the array of choices that are really available to [us], choices that are normally concealed under the chaos of stress." The sharp awareness that mindfulness cultivates allows us to notice our reactions to our surroundings at their beginning. Mindfulness is "a stealth game, where you simply develop your attention and have your thoughts and then let them go - the thoughts that would be worrisome just don't take hold." Mindfulness involves not avoidance of the stressful but acceptance of it. Stressful situations are inevitable, and this act of acceptance proves more effective in the stress-fighting battle: when we try to ignore our stress or stressful situations, worry can accumulate and ultimately lead to greater stress. Under the influence of mindfulness, "Worries are immediately addressed" and then categorized as trivial and undeserving of further thought. While a brain familiar with stress may seize worries as opportunities to fret, a mindful brain thinks clearly and is "aware of choices and their outcomes." Mindful thinkers know that time is their companion, and they turn time that could be devoted to stress into an opportunity to calmly examine the situation. The effect of stress becomes less devastating as we benefit from mindfulness, because mindfulness allows acceptance of stressful situations and the choices they provide. Mindfulness lets thinkers develop their awareness so that they notice which surroundings cause stress. Being mindful also serves as a reminder to slow down; it lets us thoroughly consider a situation so that we do not exaggerate the worry the situation should generate. Stress may seem an omnipresent and daunting epidemic, but mindfulness proves an effective cure, helping us view our surroundings in a logical and conscious manner.

Our lives are flooded with stress, but this pervasive emotion yields no benefit for contemporary humans, who no longer need the "danger alarm" stress originally provided. Though purposeless, stress demands our deliberation because of its tendency to diminish health and happiness. Stress is a universally unwelcome emotion, but unfortunately ignorance of proper stress management is almost as universal. This lack of knowledge about how to control stress stems from a common misconception about the true problem. While we are inclined to blame our surroundings for feelings of stress, it is actually our mental relationship to our surroundings from which stress arises. In order to advance our relationship with stress, we must rely on our minds. Lifting the immense burden of stress placed upon society is a seemingly impossible task; however, through rationality and awareness, mindfulness can lighten the weight of this burden. Stress is an epidemic-like challenge in both its widespread commonality and its slew of negative effects on health and happiness.
To fight this epidemic, we must depend on our minds and the practice of mindfulness to change our relationship with stress; only once we are mindful in our living will stress's burden be lifted.

Works Cited:
Bernstein, Andrew. The Myth of Stress: Where Stress Really Comes From, and How to Live a Happier and Healthier Life. New York, NY: Free, 2010. Print.
Fletcher, Margaret. "Mindfulness and Stress." Personal interview. 7 Apr. 2014.
Gordon, James S. Stress Management. New York: Chelsea House, 1990. Print.
NIOSH. "Workplace Stress." The American Institute of Stress. N.p., n.d. Web. 10 May 2014.
Rossy, Lynn. "Mindfulness Cuts Stress, Boosts Productivity." T+D 67.8 (2013): 70. MasterFILE Premier. Web. 23 Apr. 2014.


Selling the Excess: How 1920s Advertisers Built a Consumer Culture in America
Rebecca Teevan
AP United States History

"[Advertising] is the most potent influence in adapting and changing the habits and modes of life, affecting what we eat, what we wear, and the work and play of the whole nation" - Calvin Coolidge, 1926 (Fox, 97)

In the Roaring Twenties, a time of abundant production, advertisers ushered in an era of consumerism by convincing the wealthy populace that luxury, instead of necessity, should be the basis for their purchasing decisions. Store shelves started to pop with products in a variety of colors and designs, so buyers began to view items based on aesthetic appeal rather than function. Companies broadcast a luxurious standard of living reliant on household commodities, persuading customers to upgrade their homes through consumption. Playing on women's longing to look and feel attractive, advertisers presented beautifying products as vital accessories to a pleasant appearance. Marketers throughout the country employed the new psychological tactic of putting a name to widespread anxieties and then marketing products to calm them. New marketing methods saturated society with ads to make consumption a constant presence in the American mind. Indeed, ad makers had such great influence during this decade that they were able to reverse the majority opinion on smoking and create a thriving market for cigarettes completely detached from the idea of necessity. By the end of the decade, consumption had become the basis of American identity, as consumers chose products based not on what function they needed them to perform, but on how they themselves hoped to be perceived. Thus 1920s advertisers, responding to booming postwar production and the increasing affluence of the American people, constructed a consumer culture in which purchases were based on want instead of need.
By producing merchandise in a greater variety of colors and designs, advertisers shifted the focus of purchasing decisions from a product's utility to its aesthetic appeal. Throughout the prosperous 1920s, both higher wages and a greater number of women joining the workforce contributed to an increase in family income (Sivulka, 340). By this decade, many Americans could securely afford a means of subsistence as well as a significant measure of added goods and services; it was "a society in which people could buy items not just because of need but for pleasure as well" (Brinkley, 641). And as the country's manufacturing output rose - over 60% by the end of the '20s - distribution and marketing replaced production as the limit on industrial activity, and to advertisers fell the task of luring affluent customers to the abundance of available goods (Brinkley, 634; Fox, 79).

To start, they revisited the old idea of using "fashion" to enhance a product's appeal, thus increasing its value and price, a process that often boiled down to varying the color or design of otherwise identical items (Sivulka, 177). With a seemingly greater array of merchandise now available, consumers' final decisions focused not on what function they needed the product to perform, but on what they wanted the product to look like; a product's appearance thus became as important as, if not more important than, its utilitarian purpose. And the tactic worked: textile company Cannon Mills found that buyers were willing to spend four times the price of a plain towel on one from its new, heavily advertised line of colorful, patterned towels (Sivulka, 127). Customers, no longer focused on an item's utility, were willing to shell out extra cash for the added flair of an embellished towel. The use of aesthetics in advertising took off from there, as companies introduced color schemes and ensembles in every market from cars to clothes to household décor, and "merchandise sales skyrocketed" (Sivulka, 129).
One 1927 Chevrolet ad boasted that the "beautiful upholstery fabrics" of the automobile were "patterned to harmonize with the body colors [and] give to closed car interiors the comfort and charm of a drawing room" (Cross, 50). Consumers could now buy collections of merchandise based on aesthetic preference, and the function of the individual products became a secondary consideration. By introducing varied designs and color schemes, advertisers subordinated the utility of a product to its visual appeal, convincing consumers to make purchases based on the look they wanted rather than the function they needed. Expanding on this approach, marketers of household items added convenience, cleanliness, and luxury as lures for consumers.

To heighten the demand for household luxuries, advertisers promoted a new, higher standard of living that could only be attained through consumption. As a 1928 N.W. Ayer & Son ad stated, women were the "purchasing agents for their families," and as such they were the target of most household advertising (Lears, 188). Many ads, like one in 1924 for the Hoover vacuum, implied that the majority of household chores were arduous trials for housewives, and promised to lighten the burden of the "brave little woman" at home (Goodrum, 178). No matter the product - washing machine, vacuum cleaner, laundry detergent, canned soup, and so on - "ad makers made ease, comfort, and convenience a major selling point" (Sivulka, 129). After emphasizing the grueling nature of housework, advertisers promoted merchandise as an escape from the drudgery, using the lure of convenience to generate a demand for household products among women. It wasn't that the housewives of America needed washing machines - soap and wash buckets had worked well enough for the past few centuries - but rather that they wanted machines to make doing laundry easier. At the same time, advertisers also encouraged cleanliness in the home, as kitchens and bathrooms across the country made the switch from porous materials like wood and cast iron to more sanitary surfaces of tile and linoleum (Fox, 100).
Personal hygiene had been on the rise since the end of WWI, when military service had introduced millions of Americans to the toothbrush - one prewar survey showed that only 26% of Americans took care of their teeth, but by 1926 that proportion had risen to 40% and continued to climb (Fox, 99). Companies like Cannon Mills, which backed the "bath-a-day" habit because it increased the number of towels people would use, promoted the hygiene trend to encourage consumption (Sivulka, 164). Advertisers integrated cleanliness into the image of the modern home and the lifestyle of the modern American, thus creating a market for the products that facilitated this new standard of hygiene. Building upon this theme, a 1925 Crane bathroom equipment ad bragged that "From a mere utility, the modern bathroom has developed into a spacious shrine of cleanliness and health," simultaneously underlining the newfound importance of hygiene and employing the idea of luxury to heighten the products' appeal (Fox, 100). Both Crane and Kohler ran full-page, four-color ads to display the "possibilities of bathroom beauty," adding appearance to function as a selling point for appliances (Sivulka, 163). With these eye-catching displays, ad makers persuaded American consumers that it no longer mattered solely that their household commodities be functional - they also had to be aesthetically appealing. Thus, by creating a luxurious image of the modern home constructed of products that lent it convenience, cleanliness, and style, advertisers were able to reinvigorate the market for established household necessities and create a demand for new innovations. In other markets, however, there was no need to begin with, and advertisers had to rely entirely on want to generate demand.

Advertisers of the '20s convinced women to buy beauty products, entirely non-essential items, by exploiting their aspirations for attractiveness.
During this decade, cosmetics became less associated with "painted women" as "respectable ladies" began using them to meet the beauty standard exalted by women's magazines of the day - a fresh, youthful appearance (Cross, 42). Advertisers seized this opportunity to expand the cosmetics market by offering not what women needed, but the physical beauty that they wanted, linking "material goods to immaterial longings" (Cross, 38). Advertising agency J. Walter Thompson pioneered this tactic in promoting Woodbury facial soap, promising women "a skin you love to touch" and "the greater loveliness you have longed for" through use of this product (Fox, 87). And that was just it - women already longed to be pretty, so all advertisers had to do was convince them that their product was the best way to become beautiful. Other ads, like the 1928 Palmolive soap ad that portrayed a smiling woman holding a bouquet of flowers, captioned, "He remembered - that schoolgirl complexion," blended promises of both youthful beauty and romantic success (Goodrum, 130). These ads established a causal chain that began with purchasing the product and ended with romance - i.e., he sent flowers because of her youthful complexion, and she has her youthful complexion thanks to Palmolive soap. Ad makers realized that the main reason women wanted to be attractive was so they could attract men, and appealed to this longing in their ads. Because there was no actual need for cosmetics, advertisers wove them into the fabric of the developing consumer culture by marketing them as a means for women to attain the pleasing appearance they wanted. In other markets, ad makers played on consumers' fears rather than their desires to generate demand for non-essential items.

Advertisers of the 1920s pioneered the psychological ploy of exploiting worries and insecurities to trick the public into purchasing products they didn't need. This trend began with the J. Walter Thompson agency, which conditioned consumers to want merchandise by appealing to their "irrational drives - some parlay of vanity, fear, and jealousy" instead of their needs (Sivulka, 151; Fox, 90). For instance, J. Walter Thompson advertised Lifebuoy soap by first coining the term "B.O." - body odor - and then assuring customers that Lifebuoy could "protect" them from it (Sivulka, 160). This model of advertising by fear made consumers buy products not because they needed to use them but because they were afraid of what would happen if they didn't. In what was perhaps the most successful of these fear-based campaigns, marketers of Listerine cultivated widespread fear of "halitosis," warning Americans that their offensive breath could wreck their friendships, love lives, or careers, and then capitalized on it by assuring the populace that they could avert this disaster through daily use of Listerine mouthwash (Cross, 36).
In fact, it was a 1925 Listerine ad that coined the cliché "Often a bridesmaid but never a bride," demonstrating through an elaborate narrative how one woman's halitosis had kept her from securing a husband; luckily for her, the ad went on to explain, Listerine's "breath deodorant" would end this tragedy by keeping her breath "sweet, fresh, and clean" (Sivulka, 158). Much like the cosmetic ads that had promised a successful love life through use of their product, this ad guaranteed failure to those who neglected to use mouthwash. As advertisements flooded America with anxiety over halitosis, Listerine sales soared, and the company generated a net profit of $4 million in the years between 1922 and 1928 (Sivulka, 158). Because such products lacked both utilitarian necessity and visual appeal as selling points, advertisers had to rely on psychological subterfuge to convince the public they were worthwhile purchases. Thus marketers enfolded fear into the consumer culture and made anxiety an accepted reason for consumption. Yet to guarantee the success of their consumerist crusade, ad makers had to ensure their messages reached the public.

Advertisers of the 1920s eagerly employed new methods of communication to make consumption a constant consideration in the lives of Americans. As America hit the road in newly affordable automobiles, signs bearing snippets of advertising slogans cropped up along country highways, following a trend that started in 1925 with the famous Burma Shave signs (Goodrum, 216). Like one 1920s slogan that joked, "THE BEARDED LADY / TRIED A JAR / SHE'S NOW / A FAMOUS / MOVIE STAR / BURMA SHAVE," Burma Shave slogans were written as comical, rhyming couplets separated into short segments easily digestible to the potential customers driving by (Goodrum, 216).
After the introduction of the road signs, sales of Burma Shave rose annually until 1947 as the humorous slogans transformed the product they peddled into a household name (Goodrum, 217). These catchy couplets, persistent reminders of consumption, stuck in motorists' minds and expanded advertisers' influence.

In 1922, advertising invaded the home with the first radio advertisement, broadcast by New York station WEAF (19th and 20th Century Advertising). Following the broadcast, the real estate firm featured in the ad quickly sold a number of apartments; in less than a year WEAF's sponsors totaled twenty-five, and radio advertising soon became a national phenomenon (Sivulka, 146). While many early commercial radio broadcasts were musical variety shows backed by advertisers, like "The Ever Ready Battery Hour" and the "A&P Gypsies," sponsored by Ever Ready Battery and the A&P grocery chain respectively, some companies aired programs related to their product, like Gillette's radio show on men's beard fashion (Sivulka, 146). Trying a different tactic to market their Lucky Strike cigarettes, the American Tobacco Company bombarded listeners with advertising slogans both during Lucky Strike radio shows and at other times throughout the day (Sivulka, 148). Advertisers exploited the radio to send encouragements of consumption directly into the homes of potential customers, merging the economic and domestic spheres. Whether painted on a road sign or borne on the airwaves, ads saturated American society, pulling people's attention ever more insistently toward the plethora of purchasing opportunities and fortifying the fledgling consumer culture. By the close of the decade, advertising had such great influence on the hearts and minds of Americans that marketers were able to swing majority opinion.

For instance, in the developing consumer culture of the 1920s, advertisers had such powerful influence that they were able to overturn popular opinion to build a booming cigarette industry. Though cigarette smoking had been considered an undesirable habit for men and a virtually unthinkable vice for women until then, during WWI Americans found that cigarettes were cheaper and more hygienic than chewing tobacco (Sivulka, 148). In this time of shifting attitudes, Camel, the first cigarette brand sold nationally, quickly achieved market dominance, but its reign was soon challenged by the American Tobacco Company (Sivulka, 148). Determined to win over the women's market, the American Tobacco Company poured massive funds into advertising their richer, sweeter-tasting Lucky Strike cigarettes (Sivulka, 149).
Like beauty products, cigarettes were by no means a necessity, so marketers would have to rely entirely on want to sell the smoking habit; however, unlike in the cosmetics industry, advertisers were contesting the mainstream and were hindered rather than helped by existing cultural principles. In 1926, a Chesterfield cigarette ad began to break down these boundaries. Though this billboard - depicting a romantic moonlit scene of a man smoking a cigarette and a woman requesting, "Blow some my way" - was met with widespread outrage, Chesterfield persisted and built an entire campaign on the slogan, cracking open the public taboos against women smoking (Goodrum, 196). American Tobacco Company then pushed Lucky Strikes through this breach with endorsements from female celebrities, popularizing "the image of the fashionable lady who, while she indeed smoked, still appeared stylish and respectable" (Sivulka, 149). Next, American Tobacco barraged the public with Lucky Strike slogans like "Be Happy, Go Lucky/Be Happy Go Lucky Strike Today" and "Reach for a Lucky Instead of a Sweet," which assured women that smoking would help them watch their figures (Goodrum, 197). Worried that the dark green Lucky Strike package would repel women buyers by clashing with their outfits, the company president enlisted publicist Edward Bernays to make green chic in women's fashion and neutralize the threat (Sivulka, 149). Advertisers used the same lures of fashion and physical beauty they had used to sell more conventional items like towels and soap to make women want cigarettes, transforming the smoking habit from vice to vogue. These maneuvers proved successful; by the thirties, American Tobacco had overtaken Camel as market leader, and remained so until 1955 (Goodrum, 197). The rise of the cigarette demonstrated the power advertisers had over public opinion and the skill with which they manipulated it to encourage excess consumption.
By the end of the decade, manipulation by marketers had reshaped American society so thoroughly as to make material possessions the basis of social identity. In the newly formed consumer culture, products changed from utilitarian necessities to symbols of status and identity, as Americans began making purchases based not on what they needed to live, but on how they wanted to be perceived. When old associations like family and neighborhood wouldn’t suffice, merchandise gave Americans a way of distinguishing themselves as individuals (Cross, 38). Young women were able to distance themselves from their families and establish their own separate identities with cosmetics and fashion, constructing the image of their personality from material goods (Cross, 41). As taste and style replaced social class as the characterizing factor in society, people sought to express themselves through their consumption choices; in the new consumer culture created by advertising, “everyday objects could lose their functional qualities and become objects of display, establishing the social standing of their owners” and “the act of consumption” became “a manifest sign of social status” (Friese, 11, 15). Thus material merchandise transcended the economic sphere and gained social significance, which further increased its appeal. In this consumer society, Americans could buy products not just to define their social standing but to elevate it; 1920s ads urged the flourishing middle class and nouveau riche alike to make purchases “not because they ‘needed’ the material goods, but they ‘wanted’ them to enhance their status” (Sivulka, 129). Appealing directly to consumers’ egos, marketers employed promises of social ascension to lure in customers aspiring to a higher societal standing. By joining material merchandise to abstract ideas of identity and status, advertisers transformed products from utilitarian items to social symbols and created a culture in which consumption was an expression of who people wanted to be instead of a way to satisfy basic needs.

What began as advertisers’ attempt to attract a prosperous people to the abundance of available products in the postwar boom of the 1920s blossomed into a consumer culture that has defined America ever since. By distracting customers with flashy aesthetics, convenience, and luxury, marketers made utility a secondary factor in the purchasing process. Ads manipulated consumers by appealing to psychological drives, like women’s desire to be beautiful and the general public’s fear of giving offense. Road signs and radio sets bombarded Americans with advertising, constantly reminding them of consumption and giving advertisers such great influence over public opinion that they could turn the current of the mainstream in favor of cigarettes. By the end of the decade, ad makers had managed to construct a consumer society in which consumption formed the foundation of identity and status.
And while the American consumer culture and many of the marketing methods that created it have endured and flourished to the present day, their survival is still contingent on abundant production and a wealthy populace – conditions now threatened by factors like dwindling natural resources and a rising poverty rate. In light of these threats, our culture of indulgence, begun by the advertisers of the 1920s, may be nearing its end.

Works Cited:

Cross, Gary S. An All-Consuming Century: Why Commercialism Won in Modern America. New York: Columbia UP, 2000. Print.
Fox, Stephen R. The Mirror Makers: A History of American Advertising and Its Creators. New York: Morrow, 1984. Print.
Friese, Susanne. "Coming to Live in a Consumer Society." Academia.edu, n.d. Web. 07 Mar. 2014.
Goodrum, Charles A., and Helen Dalrymple. Advertising in America: The First 200 Years. New York: Harry N. Abrams, 1990. Print.
Lears, Jackson. Fables of Abundance: A Cultural History of Advertising in America. New York: Basic, 1994. Print.
"19th and 20th Century Advertising." N.p., n.d. Web. 07 Mar. 2014.
Sivulka, Juliann. Soap, Sex, and Cigarettes: A Cultural History of American Advertising. Belmont, CA: Wadsworth Pub., 1998. Print.

The Rise of the American Celebrity: Criminals, Presidents, and All That Jazz
Sam Carbery
AP United States History

On the cover of the January 5, 1962 issue of Time Magazine, Pietro Annigoni’s John F. Kennedy broods within his rectangular frame. Less than two years after the issue’s publication, JFK’s death rocked the nation, derailing the optimism of the early 1960s, and yet his image haunts the cover of Time Magazine in issues as recent as 2013. It is more than a potent sense of nostalgia that keeps JFK and others like him alive in the memory of thousands. World War I left in its wake a generation of Americans who would take the first steps in a trend of celebrity worship that isolated and romanticized individuals like JFK into telling expressions of the society in which they lived. This inclination to preserve famous people as records of a specific time would begin with artistic icons before spreading to affect all walks of life, with a particularly important impact on the presidency. Ultimately, the American concept of celebrity would create a new set of expectations for the modern-day president that would come to characterize the office. The 1920s and 1930s took America’s interest in the individual and elevated it to a national fascination that molded popular figures into revealing reflections of society at the time, redefining the role of the president as representative of the American people.

American society in the 1920s idolized celebrities who exemplified its tenacious desire to break away from the restraints of the old world. The atmosphere of the 1920s, aptly summed up as the “Roaring Twenties,” is crucial to understanding what enabled society to turn actual individuals into grander representations of a population. In the years following World War I, life became increasingly centered in urban environments with a keen emphasis on material possessions and unrestricted entertainment (Douglas 17).
Two groups in particular would come to define the 20s: America’s new generation of women, the flappers, and its artists, the Lost Generation. The flapper was one of the earliest signs of postwar America splitting from its prewar roots, as the opportunities opened by necessity to women during World War I continued to encourage young women to pursue sexual and economic freedom after its conclusion. During World War I, female employment rates increased from 23.6% of the population in 1914 to 37.7%-46.7% in 1918 to compensate for deployed male workers (Striking Women). The war effort had a strikingly different impact on the morale of men and women in America; whereas men involved in the war “… saw the darkness of a world in disarray… torn apart by the forces of nationalism, greed, and dehumanization, women saw something more compelling: their own lights beginning to glimmer, more and more brightly” (Douglas 15). By 1929, major cities like Chicago and Philadelphia had employed a little over a quarter of their women, more than half of whom were single and living by themselves in private apartments, free from the censure of overbearing parents (Zeitz 29). This newfound independence emboldened young women of the decade and fed an energized sexual revolution that reinvented the feminine ideal.

The fictitious “Gibson Girl” that had dominated early 20th-century magazines was an archetypical paragon of virtue with “long hair, high brow, precise, anatomically narrow waist, thirty-six inch bust, broad hips, well concealed legs, [and] a maternal and wifey manner” (Douglas 6). As the late 19th century gave way to the early 20th, the flapper rose to take the Gibson Girl’s place as the feminine ideal. A flapper was defined by the following: “[She] cut off her hair, concealed her forehead, flattened her chest, de-emphasized her waist, dieted away her hips, and kept as much of her leg in plain view as possible; she smoked and drank recklessly, swore passionately and flaunted her sexuality, and, perhaps most importantly of all, the flapper inspired America’s collection of up-and-coming artists, the Lost Generation” (7).

The Lost Generation was a loose coalition of artists, a majority of whom were American men who had been drafted into World War I and then returned to America, “disillusioned by the war’s bitter fruits” (14). 1920s literature expressed the belief that “moral guideposts” were ultimately invalid and that acting virtuously had no real impact on the quality of a person’s life, a lesson learned at the knee of World War I’s hardships (Montgomery College).

It was the flappers and the Lost Generation that would produce the poster children of the Jazz Age, F. Scott and Zelda Fitzgerald. F. Scott Fitzgerald, one of the most influential members of the Lost Generation, and his flapper wife, Zelda, were an early example of how two individuals could be molded by 1920s ideology into an iconic couple that encapsulated the lifestyle of the Jazz Age. Infamous for their reckless, unorthodox behavior and tempestuous marriage, the Fitzgeralds led highly publicized lives that documented the decade in which they lived. Born in 1900, Zelda was the quintessential flapper, known for her spitfire personality and careless flouting of old-fashioned rules; in one letter she describes the number of suitors she entertained at once, writing, “… Yesterday Bill LeGrand and I drove his car to Auburn and came back with ten boys to liven things up—Of course [sic] the day was vastly exciting—and the night more so…” (Zeitz 25). As the confirmed inspiration for many of Fitzgerald’s heroines, Zelda became an “instant celebrity,” one who was often sought out for interviews where she “contributed her opinions on modern love, marriage, and childrearing to an eager media” (Curnutt). Fitzgerald, in his “greatest deed of literary license,” had turned Zelda into the “prototype of the American flapper” (Zeitz 49). He put this prototype to good use in his first breakthrough work, a novel titled This Side of Paradise.
Published in 1920, Fitzgerald’s first widely received novel anointed him almost immediately as “the recognized spokesman of the younger generation—the dancing, flirting, frizolizing, lightly philosophizing young America…” (39) The youth who thrived on late night parties and the seedy underbelly of city life flocked to his work as a representation of the life they led, recording the new values of 1920s America (39).

But the Fitzgeralds’ individual artistic accomplishments, although undoubtedly influential, are not what have so permanently immortalized them as a symbol of the Roaring Twenties. F. Scott and Zelda’s marriage became a “celebrity unto itself” as their wild antics fascinated the nation (Baker). A friend and fellow writer of F. Scott Fitzgerald described him as “a kind of king of [the] American youth,” while the queen was his “beautiful, witty, and unstable wife, Zelda. The royal couple became almost as well known for their madcap antics as for his [F. Scott’s] writing” (This Fabulous Century 49). As the appointed representatives of the youth culture of the 1920s, the Fitzgeralds acted out their part with abandon. They rode on the hoods of taxis and frequented drunken parties in a highly publicized lifestyle that emulated the scandalous expectations of the 1920s (Baker). They were a “young, handsome, exuberant, and risqué embodiment of the confident spirit of the postwar era,” and the American people flocked to the lifestyle they represented (Zeitz 53). The expectations of the public so permeated their lives that even the Fitzgeralds noticed; in a moment of frank self-awareness, F. Scott Fitzgerald wrote, “Sometimes I don’t know whether Zelda and I are real or whether we are characters in one of my novels” (Baker). Silent screen star Lillian Gish observed that the Fitzgeralds “didn’t make the twenties; they were the twenties” (Zeitz 50).
The way in which the Fitzgeralds were immortalized not so much as individuals but as icons of a time period was an early indicator of a trend that would intensify in America as the 1920s drew to a close. The Great Depression began in 1929 and brought the decadence of the Roaring Twenties to a screeching halt, as the American economic policy of the 1920s led to a devastating crash that affected millions of Americans. In the 1920s, “America’s plan for prosperity was planless… they worshipped the marketplace as a god who moved in mysterious ways” (McElvaine 29). A disproportionate focus on luxury spending, combined with industrial and agricultural overproduction and inconsistent market prices for consumer goods, undermined the stability of the economy (41). Historian Michael Harrington aptly states the problem: “The capitalist genius for production was on a collision course with the capitalist limits on consumption” (49). When the stock market crashed in 1929, it started the devastating recession known as the Great Depression. Industrial and agricultural production stalled, and over 20% of the youth population was forced to work grueling jobs with little pay (Wormser 48). Small farms subsisted on bare necessities; an article from a 1939 issue of the magazine Nation reads, “... gone are any thoughts of new cars, new clothes, new radios; the farmers are thinking in terms of food and feed for family and stock” (31). The frustration of the Great Depression and the position of helplessness it forced Americans into would find its outlet in a rising couple who, like the Fitzgeralds in the 1920s, would propagate the evolving pattern of celebrities typifying the times.

The criminal couple Bonnie Parker and Clyde Barrow appealed to the nation as a product of the desperation of the Great Depression, becoming symbols of citizens’ distress during the 1930s. Bonnie and Clyde were two young people bonded by their desire for “more out of life than the hand [they] had been dealt,” a common sentiment in the 1930s (Crime Museum). Bonnie was a pretty girl who, after her father’s early death, became a married high school dropout living in her grandmother’s home, while Clyde was born into a poverty-stricken family of farmers (Crime Museum). From the start, Bonnie and Clyde’s stories landed on sympathetic ears; Bonnie, struggling to provide for herself after being abandoned by her husband, and Clyde, an ambitious young man held back by the lack of opportunity in the nation he was born into, were clear representations of a much larger audience (FBI). Clyde expressed a bitterness at the government’s role in the Depression that echoed countrywide. The government was reluctant to provide financial relief during the Depression, convinced that it would end naturally like previous recessions (Wormser 9). As a result, citizens were left more or less alone to deal with the recession’s ramifications. One factory worker put the resentment brewing among downtrodden workers into words, stating, “There is a terrible rage in my heart. 
I want to learn what crushes out the lives of workers and what takes the children of these people and places them in stuffy factories, even before they have time to fill their lungs with fresh air” (58).

In a society so oppressed by the burdens of the Great Depression, it is unsurprising that Bonnie and Clyde sparked a national fascination as they traveled the country with their gang, leading news reporters on a wild chase across America. Suspected in numerous automobile thefts, bank burglaries, and murders (including those of several policemen), Bonnie and Clyde simultaneously thrilled and horrified America (FBI). Newspaper and radio reports kept audiences up to date on the dangerous duo’s latest exploits, turning them into criminal celebrities. Bruce Chadwick tackles the seemingly strange fascination the 1930s had with Bonnie and Clyde, writing, “People should have hated them for their lives of crime. Instead, they loved them. The country considered bank robbers not only heroes, but stars… The nation’s newspapers treated them like modern-day crusaders for justice and splattered their pictures all over their front pages” (Chadwick). The appeal lay in the duo’s obvious disdain for banks, the most visible symbol of oppression during the Great Depression. “Bonnie and Clyde… were criminals that the bank-hating public and headline-starved media of the 1930s turned into folk heroes… because they stuck it to the banks” (Chadwick). In many ways, Bonnie and Clyde were a pair of “American Robin Hoods” (Chadwick). The crimes they committed were easily viewed as expressions of a discontent that resonated with suffering Americans. Bonnie and Clyde’s actions were an extreme culmination of the distress felt by millions in America during the 1930s, preserving the spirit of the Great Depression through their highly public life of crime.
When Bonnie and Clyde were finally killed by a posse of policemen in 1934, their deaths were as highly publicized as their lifetime exploits (FBI). Footage from the film The Retribution of Clyde Barrow and Bonnie Parker shows large crowds milling about the funeral homes housing Bonnie and Clyde’s bodies. The transcript reads: “Crowds fill the sidewalk in front of the Dallas funeral home where Clyde Barrow's body has been brought for burial… people lay wreaths on the coffin, and the first shovels of earth are placed in the grave… Three miles away, at another funeral home, even larger crowds line up to view remains of Bonnie Parker” (Jamieson Film Company). The hundreds who turned up for Bonnie and Clyde’s funerals speak to the level of fascination they inspired in Americans as symbols of the 1930s. Their wide-ranging appeal as two downtrodden young people lashing out at the oppression of the Great Depression had turned them into cultural icons despite their criminal acts, immortalizing them not as individuals but as telling icons of a 1930s riddled with desperation, as popular as the Fitzgeralds had been in the 1920s.

As the concept of individual fame and celebrity developed throughout the 1920s and 1930s, it had a gradual but marked impact on the role of presidents and the ways in which they could enhance their power. President Franklin Roosevelt seized on these changes as an opportunity to establish himself as a strong president who, through his bold tactics and popularity, became a representative of the society in which he lived. FDR was energetically combating the effects of the Great Depression as the governor of New York when the Democratic Party nominated him for the 1932 presidential election. He immediately shocked the convention by breaking with tradition and flying to Chicago to personally deliver a speech thanking the party for his nomination (FDR Library). Roosevelt’s speech laid a clear groundwork for his later policies: “I have started out on the tasks that lie ahead by breaking the absurd traditions… Let it be from now on the task of our party to break foolish traditions… I pledge you, I pledge myself, to a new deal for the American people” (Leuchtenburg 8).

FDR was a master politician who was skilled at appealing to a broad audience. From his first inaugural address, for which a hundred thousand gathered in front of the Capitol and millions of others listened at their radios, he aimed to raise morale (Hamby 120). “This great nation will endure,” he promised his listeners; “the only thing we have to fear is fear itself” (Roosevelt). He attacked bankers, sympathizing with those suffering in the Depression by calling the “unscrupulous money changers” to task for “leading the country astray” (Hamby 121). FDR also endeared himself to Americans through his “relief, recovery, and reform” policies regarding the Great Depression (FDR Library).
Perhaps most importantly, FDR was highly aware that the key to success lay in setting himself apart from past presidents by aspiring to be more than just a government leader. He elevated himself into a national icon by capturing the imagination of America, providing for the “material and emotional needs of his supporters” (Hamby 3). He was more than just a policy maker focused on combating the Great Depression; he was a “magnetic leader of the people… tribune of democracy; the champion politician; the manager of a multifaceted policy initiative… without precedent in American history; and the chief diplomat entrusted with guarding American security” (3). He took advantage of the growing popularity of the radio to establish “fireside chats,” which imparted information while developing a bond between president and citizen (124). It was a speaking tactic unprecedented in any other American president (124). FDR’s masterful use of new technology to appeal to the interests of the American people gave him the power he needed to enact many of his New Deal policies. The phenomenon, rooted in the 1920s, of turning individuals into icons gave Roosevelt an influence he would otherwise have lacked, one that helped his presidency leave a deeper mark on the United States.

FDR’s career was just the beginning of the effect the celebrity ideal would have on presidents. This phenomenon, although highly influential in FDR’s career, would grow almost to dominate the presidency of John F. Kennedy. Perhaps the most iconic of recent presidents, John F. Kennedy is the epitome of the modern era’s concept of celebrity taking an individual and elevating him to a national icon, preserving JFK past his early death as an American symbol. Despite a presidency that lasted only three years before his assassination in 1963, JFK has been permanently associated with the youth-culture idealism permeating the 1960s.
Young, handsome, and full of an infectiously energetic optimism, JFK was the first of what would eventually become known as a “television president” (JFK Library). In the 1960s, television sales increased as TV became a source of information as well as entertainment (Gallagher). The president, with his easy style and friendly demeanor, was perfect for taking advantage of this new form of communication. “On television in 1960, Mr. Kennedy was a picture of grace,” explains New York Times television critic John Corry. “Television news coverage expanded during the Kennedy Administration, and in large part Mr. Kennedy inspired the expansion” (Fairlie). His screen presence endeared JFK to millions, establishing a personal connection between him and America. Henry Fairlie notes that JFK, with boyish charm, polite but relaxed mannerisms, and engaging intelligence, “had to invite from all but the most skeptical a measure of trust and reassurance” (Fairlie). Indeed, his ability to communicate through television is easily one of the most defining aspects of the United States’ memory of JFK, preserving him as a readily accessible icon of 1960s society. Even JFK’s death is better remembered as a TV drama than a presidential assassination (Fairlie).

The individual JFK, the man who had extramarital affairs and often twisted speeches to deliver a more appealing truth to the public, has only recently received scrutiny (JFK Library). More consistently he is remembered as the poster child of the idealistic youth he so inspired in the early Sixties, the thousands of Americans who rallied around the Civil Rights movement and education reform (Fairlie). An apt description of this idolization comes from historian Earl Latham, who states, “There is a real sense in which the president may be regarded as the representative of all the people… it is therefore the president who symbolizes the whole” (Latham 15). JFK was one such president, one who leveraged the new flavor of fame established in the Twenties, a fame that had matured over the decades and made him an icon for America as a whole. JFK’s transformation from man to idea, and the lasting impact he had on America as a result, can be summed up in his own words: “A man may die, nations may rise and fall, but an idea lives on” (Kennedy).

A trend that started in the 1920s would slowly redefine American culture, changing the worlds of art and politics alike. By taking real individuals and morphing them into romanticized ideals behind which a whole nation of people can gather, America has punctuated its own history with figures that stand apart from the rest as records of the time in which they lived. The celebrities who are preserved in the memory of the United States are those who epitomized the decades to which they belong.
It is this fascination with individual celebrity that keeps This Side of Paradise in continual print, new television series inspired by Bonnie and Clyde constantly on the air, and JFK forever etched on the cover of Time Magazine. By turning public figures into extraordinary symbols, Americans have created their own national icons to decorate the history of a country based on the importance of individualism. Annigoni’s JFK, meditating on the shelves of newspaper stands across the country in 1962, may not have anticipated the extent to which it would become a national icon, but it remains to this day a reminder of the unique history of the United States as an ongoing narrative of the lives of its celebrities.

Works Cited:

Annigoni, Pietro. "John F. Kennedy, Man of the Year." Time 5 Jan. 1962: n. pag. Print.
Baker, Sarah. "Beautiful and Damned: Zelda Sayre Fitzgerald." Flapper Jane. N.p., 2004. Web. 23 May 2014.
"Biography of Franklin D. Roosevelt." Franklin D. Roosevelt Presidential Library. Franklin D. Roosevelt Presidential Library and Museum, n.d. Web. 20 Apr. 2014.
"Bonnie & Clyde." Crime Library and Museum. National Museum of Crime and Punishment, n.d. Web. 29 Mar. 2014.
"Bonnie and Clyde." FBI. U.S. Department of Justice, 21 May 2010. Web. 07 Mar. 2014.
Chadwick, Bruce. "Bonnie & Clyde Shoot Up America in the Great Depression." History News Network. N.p., 13 Dec. 2011. Web. 30 Apr. 2014.
Curnutt, Kirk. "Zelda Sayre Fitzgerald." Encyclopedia of Alabama. Encyclopedia of Alabama, 15 Mar. 2007. Web. 7 May 2014.
Douglas, George H. Women of the 20s. Dallas: Saybrook, 1986. Print.
Fairlie, Henry. "Television's Love Affair with John F. Kennedy." New Republic. N.p., 21 Nov. 2013. Web. 16 Mar. 2014.
Gallagher, Mona. "The History and Evolution of Television: The 1960s and 1970s." Entertainment: Scene 360. R.R. Donnelly, 1 Sept. 2007. Web. 30 May 2014.
Hamby, Alonzo L. For the Survival of Democracy: Franklin Roosevelt and the World Crisis of the 1930s. New York: Free Press, 2004. Print.
"JFK in History." John F. Kennedy Presidential Library. John F. Kennedy Presidential Library and Museum, n.d. Web. 30 May 2014.
Kennedy, John F. Address at the University of Maine, October 19, 1963. Washington, D.C.: U.S. G.P.O., 1964. Print.
Latham, Earl. J.F. Kennedy and Presidential Power. Lexington, MA: Heath, 1972. Print.
Leuchtenburg, William E. Franklin D. Roosevelt and the New Deal, 1932-1940. New York: Harper & Row, 1963. Print.

McElvaine, Robert S. The Great Depression. New York: Times, 1984. Print.
Roosevelt, Franklin D. First Inaugural Address. 4 Mar. 1933. Web. 30 Apr. 2014.
"The Lost Generation." Montgomery College, n.d. Web. 10 Mar. 2014.
The Retribution of Clyde Barrow and Bonnie Parker. Jamieson Film Company, 1934. Transcript.
This Fabulous Century, 1920-1930. Alexandria, VA: Time-Life Books, 1969. Print.
"Women's Work in WWI." Striking Women, n.d. Web. 18 May 2014.
Wormser, Richard. Growing Up in the Great Depression. New York: Atheneum, 1994. Print.
Zeitz, Joshua. Flapper: A Madcap Story of Sex, Style, Celebrity, and the Women Who Made America Modern. New York: Crown Publishers, 2006. Print.

America’s 20th Century Interventionist Doctrine: New Opportunities, Drastic Measures
Mike Roffo
AP United States History

In his Farewell Address, George Washington advised the United States to avoid entanglement in European affairs, and America did so faithfully for over a century - until American troops landed in Europe in 1917. Whereas previously U.S. foreign policy had been primarily concerned with conquering new land, such a direct assertion of military power to impact European events indicated a sharp turn. Yet the U.S. had several powerful incentives to intervene in World War I, for the war’s momentous ramifications motivated Americans to protect their interests. On the one hand loomed an unprecedented threat: an Allied defeat would mean German domination over France and Britain; and with the democracies of Europe broken, the German monarchy would stand as the unopposed superpower of Europe, discrediting democracy and threatening the security of the democratic United States. On the other hand appeared an unprecedented opportunity: a victory of the democratic Allied Powers over the German Kaiser would be a compelling ideological validation of democracy over monarchy; meanwhile the U.S. would rise out of the post-war ashes free and uncontested, the newest political, economic, and military superpower of the world. The combination of these incentives motivated the U.S. to assert its military power to achieve a greater footprint of political and economic influence in Europe, and the United States demonstrated its new power as it exercised hands-on influence over the world’s political and economic affairs in the decades after the war. Indeed, the United States reversed its long-standing isolationist trend during World War I not merely to mitigate the German threat but also to seize an unprecedented opportunity for expanding its international political and economic influence.

First, the arrival of U.S. 
troops in Europe displayed a tectonic shift in U.S. foreign policy because it attempted to manipulate the outcomes of conflicts between European nations, whereas most of the U.S.’s previous military excursions had been motivated by the genuine American passion for outward expansion. For instance, in the War of 1812, American troops attempted to conquer Canada for the United States. So fervent was the American war fever that Thomas Jefferson called the annexation of Canada “a mere matter of marching” (Nugent 73). Speaker of the House Henry Clay boasted that the militiamen of Kentucky could do it themselves (Bailey 138). In fact, the American conquerors were repelled easily, but such confident rhetoric from the nation’s leaders displayed a deep fervor for expansion as a central objective of foreign policy.

Similarly, the U.S.’s invasion of Texas and Mexico in the Mexican War (1846-1848) sought to solidify the U.S.’s annexation of Texas; but the U.S. also claimed New Mexico, the Nueces strip south of Texas, and California in the Treaty of Guadalupe Hidalgo. Though the annexation of Texas had sparked the war, in the end Americans claimed a slice of the American Southwest larger than the 1803 Louisiana Purchase (Nugent 215). This zealous land-grabbing demonstrated that a passion for expansion still pervaded the American mind in the 1840s.

Finally, the Spanish-American War confirmed the survival of this expansionist mentality at least up to 1898. Since hearing of the native rebellion in the Spanish colony of Cuba, the American yellow press had been evoking terrible images of unspeakable horrors inflicted upon Cuban revolutionaries by Spanish oppressors. For example, an illustrator sent to Cuba to detail the massacres telegraphed home: “Everything is quiet. There is no trouble here. 
There will be no war,” to which his editor replied: “You furnish the pictures and I’ll furnish the war.” In addition, war hawks exploited the explosion of the American battleship Maine in Havana harbor by citing it as evidence of Spanish aggression. In fact, other evidence suggested that the explosion had been an accident, completely unrelated to Spanish treachery (Williamson 453-4). By painting the Spaniards as 
cruel oppressors, through shameless falsehoods and bloated exaggeration, the American presses mobilized America to fight a moral war to stop the Spanish tyranny over Cuba. The United States claimed Puerto Rico, Guam, and the Philippines while forcing Spain to recognize Cuban independence (Brinkley 560). So ingrained was expansionism in the U.S. conscience by this time that the presses could lie and exaggerate events simply to justify invasion. Why the U.S. would change the objectives of its foreign policy to European intervention just when the expansionist passion seemed to be at high tide is therefore a subject of intrigue. In short, America’s history of restricting foreign conflict to local expansionist crusades made World War I the single greatest instance of military foreign intervention to that point, marking a tectonic shift in the foreign policy of the United States.

But what unprecedented conditions in world politics merited such an unprecedented military response from the United States as an invasion of Europe? Americans perceived a colossal threat to the United States, one certainly unmatched in the brief history of the modern democratic world: if the German monarchy were allowed to win the war, then Britain and France, the only stable democracies in Europe, could be wiped off the map. First, it was disconcerting enough that the world’s democratic experiments would be significantly discredited by an Allied defeat. In his Gettysburg Address, Abraham Lincoln reminded the world that America had its roots in a great democratic experiment, “conceived in liberty and dedicated to the proposition that all men are created equal.” He described the American Civil War as a test of whether “that nation, or any nation so conceived and so dedicated, could long endure” (Lincoln). When he described a nation “so conceived and so dedicated,” he meant any nation founded upon liberty, freedom, and power derived from the consent of the governed. 
That the States remained united after the Civil War proved that democracy could “endure” a period of such great civil unrest, but arguably the aggression of monarchical Germany against democratic Britain and France once again took the “democracy v. monarchy” case back to court, asking “Is democracy as effective as monarchy in war?” on the largest scale to date. If just one monarchy were allowed to defeat the two most powerful democracies in Europe, some could blame democracy itself for being an ineffective system in times of war, too cumbersome to effectively provide for the common defense; one more argument for maintaining Europe’s monarchies would be validated, certainly to the disadvantage of the democratic U.S. Thus, a German victory would have first threatened the United States by damaging the validity of democracy as a governmental system.

Even worse, Europe might again become a realm of kings, queens, and subjects; given the expansionist tendencies of monarchs, it seemed not unreasonable to conclude further that the new German empire would seek to expand overseas, perhaps to crush the last belligerent democracy in the world. The United States army, militarily puny in 1914 compared to Europe’s armies, would crumble under the kind of large-scale European assault that a German Empire of Europe would be capable of delivering. Such a blatant threat to the U.S.’s domestic security had not been posed since British soldiers sacked Washington, D.C. and burned the White House in the War of 1812 (Brinkley 211), indicating just how novel and alarming the possibility of European invasion, or domination, was to Americans in 1914; once a monarchical German Empire covered two continents, there would certainly be no more governments “of the people, by the people, for the people” anywhere in the world. 
In addition, this possibility was not far from the German conscience: in February 1917, Wilson received from Britain an intercepted telegram sent from Germany to Mexico. The “Zimmermann telegram” proposed that in the event of war between Germany and the U.S., Mexico should invade the United States; in return Mexico would regain its “lost provinces”—Texas and much of the American southwest—after the Allies were defeated (Brinkley 609). The Zimmermann telegram granted U.S. policy-makers a haunting glimpse of Germany’s vision of the future: the continental United States cut up like a cake to be divided amongst Germany and its allies. In this sense World War I was not merely a war to make the world “safe for democracy,” as Woodrow Wilson proclaimed
(Wilson), but also a war safeguarding the democracies of the world against certain extinction by monarchical domination. Therefore a German victory and a renewed monarchical order in Europe would prove an unprecedented menace to the national security of the democratic United States.

On the other hand, the conflict presented an equally motivating and equally unprecedented opportunity for the United States. The Allies seized the chance to validate democracy, morally and militarily, by waging a strategic slander campaign while ruthlessly seeking military victory. Recall that the Allied superpowers Britain and France were democratic, while the Central Powers’ de facto leader, Germany, was a monarchy. Thus World War I was not merely a war between the Entente and the Central Powers, but also a contest between democracy and monarchy, determining the merits of each and producing a verdict: a court case with an international jury, “democracy v. monarchy.” President Wilson may have been expressing the very goal of eliminating the German monarchy when he asked Congress to declare a war which would make the world “safe for democracy” (Wilson). Such high rhetoric from the president certainly indicated a commitment to moral ideals, demonstrating how the U.S. used World War I to associate democracy with highly moral objectives. The American press as well utilized World War I to slander monarchy while exalting democracy. The sinking of the British cruise liner Lusitania by German U-boats, for instance, was exploited by the American press as proof of the Germans’ unspeakable brutality (Brinkley 608). Indeed, U.S. propaganda throughout the war painted Germany as cruel and savage. For example, one poster urging citizens to buy liberty bonds declared “Remember Belgium,” and presented the German invasion of little Belgium as a rape by depicting a brutal German soldier leading away a frightened young girl (Young). 
That the government would so specifically target Americans’ sense of morality to cultivate support for the war indicates a firm belief that its side was morally benevolent, and the other purely evil. Significantly, the German Kaiser was actually overthrown by his own people in late 1918 (Watson). That a democratic revolution occurred during the war in Germany, of all places, implies that the Allies were wildly successful in promoting their governments’ moral and military supremacy; so successful that the Germans agreed with them. Furthermore, it was the new Weimar Republic, not the old monarchy, which eventually surrendered to the Allies (Watson). This distinction fortifies Wilson’s assertion that it was the monarchies who were the aggressors in the conflict, and that America was the benevolent shield of liberty and peace alongside other republics. In brief, highly moral rhetoric and a successful propaganda campaign enabled the U.S. to use World War I as a compelling confirmation of its democratic experiment in government.

If averting domination by a German Empire were not rewarding enough, Americans saw in an Allied victory the opportunity to expand American influence over the globe. The last time the United States faced a comparable opportunity to manipulate European politics in its favor was when Britain resisted Napoleon’s bid for European empire; in fact, many New England Federalists supported sending ships overseas to assist the British constitutional monarchy against the French empire. Ironically, the United States declared the War of 1812 at nearly the same time, to force the end of impressment (Bailey 140). This was a tragic diplomatic error in several ways. Most importantly, the fledgling republic essentially allied itself with Emperor Napoleon by declaring war on Britain while the British resisted his empire. Secondly, what was the life of an American sailor tragically impressed into the British navy worth, when compared to the possibility of an expanding Napoleonic empire in Europe? 
New England Federalists in particular opposed Napoleon so strongly that their militias boycotted the war and sold significant quantities of supplies to the British invaders (Bailey 146). Once one recalls that New England had the most active ports in America and therefore supplied the most potential victims of impressment, the Yankees’ resistance to the anti-impressment war becomes all the more telling. But the Yankees’ objections did not matter: they held little influence over policy-makers on the Potomac, and so the War of 1812 was declared, eliminating any possibility of an alliance with Britain, which may indeed have proved fruitful for the U.S. in the event that the two English-speaking nations conquered France together. The War of 1812 was a disaster in that the U.S. worked against its own interests by indirectly endorsing Napoleon. In short, this complete political and factional fiasco prevented the U.S. from influencing European events to its advantage when it had the chance.

Fortunately, in World War I the United States was given an opportunity not to make such a grave diplomatic error again. Moreover, the Selective Service Act allowed the U.S. to assemble sizable armies and navies by May 1917 (Brinkley 611). With a powerful military force, the U.S. could finally seize an opportunity which it had missed by a deplorable error of diplomacy one hundred years before: to turn temporarily threatening European events to its favor. Perhaps the Spanish-American War as well had granted Americans a taste of imperial indulgence that left them craving more. Recall that the interventionist conflict with the Spanish at the turn of the 20th century had indeed produced immense territorial, economic, and political benefits for the United States, including the acquisition of the Philippines, Guam, and Puerto Rico (Brinkley 560). Puerto Rico in particular was an island of massively lucrative sugar plantations; a tariff-free sugar supply was surely welcome to the merchants of the continental U.S. If such a recent imperial crusade was so beneficial, perhaps some Americans saw another “splendid little war” in World War I, where the U.S. could reap similarly fantastic benefits while enduring a fraction of the losses that their opponents—and allies, in the case of World War I—suffered. In fact, this is almost exactly what happened. Thus it seems that World War I was viewed as a critical opportunity for the U.S. to exercise international influence to its own economic and political advantage. Yet how, specifically, would winning a world war alongside its British and French allies ensure the U.S. the position of heightened international influence which it desired? 
To put it bluntly, it did not. By an apparent accident of fate, the economic, military, and political toll which World War I took on Britain unseated it as arbiter of the world, allowing the U.S. to rise to world superpower status in its place. In truth, the U.S. was a poor ally to Britain and France, allowing Germany to batter them nearly to submission before the United States came to a heroic rescue in 1917 (Bailey 593). This particular form of Allied victory was closer to a U.S. victory, and it allowed the U.S. to vastly increase its international influence by arguably saving the Entente from German domination, simultaneously establishing itself as a world superpower in political, economic, and military terms.

In addition, it required indiscriminate warfare upon American shipping by German submarines to motivate the U.S. to finally declare war (Bailey 593). Some historians may cite this as evidence of the U.S.’s opportunity-focused self-interest. While that is a relevant and notable analysis, it misses an even more crucial detail: warfare on American shipping was not merely crippling to the U.S. economy; it was also crippling to the Allied war effort, since millions of dollars of precious war supplies headed for Britain were being sunk by German submarines. In April 1917, sinkings of Allied ships totaled almost 900,000 tons. In fact, one of every four ships sailing out of British ports did not return (Brinkley 610). In light of Germany’s latest aggression, the U.S. seemed finally to understand that the Allies were at last becoming truly defenseless and that a military response was thus necessary. To its credit, the United States did finally come to the European democracies’ aid, yet the delay underscores some inconsistencies in its ideology: if the U.S. 
was truly fighting a war for democracy, there should not have been hesitation; the very first instance of monarchical aggression should have been enough to provoke a full-scale military response in defense of its allies and in defense of democracy. Although Wilson proclaimed otherwise in his rather sanctimonious declarations of a war to make the world “safe for democracy,” it seems difficult to conclude that the war was entirely moral. Rather, the U.S.’s belated support of its allies and the disparities between its behavior and its supposed ethics suggest that its real objective in entering the war may have been a pragmatic furthering of its own interests: the manipulation of European events to its advantage, and the seizure of an international position of power. Truly, none of Wilson’s propositions granted the United States a grander seat of influence in world affairs than his League of Nations (Brinkley 621); as the home country of the League’s founder, the United States, perhaps “the nation that ended all wars,” would be guaranteed a powerful voice in all
future affairs if the League succeeded as an international peacekeeping force. World War I conveniently expanded American international influence by weakening Britain and providing moral justification for intervention in European affairs. In this way, many aspects of the U.S.’s conduct in World War I allowed it to outpace its allies’ growth in international power. Indeed, the United States was certainly the only true victor of World War I, at least in terms of intact infrastructure, comparatively few military casualties, and post-war financial stability. Historian Harold F. Williamson elaborates on the remarkable influence, particularly in terms of economics, the U.S. enjoyed after the war [emphasis mine]:

    World War I could be said to have ended the period of British leadership and predominance in the world economy and to have opened the way for the assumption of leadership by the United States. We [The U.S.] had emerged from the war unmatched in productive power, capacity for economic expansion, and ability to accumulate surpluses for investment at home and abroad. We had a skilled and adaptable industrial population, dynamic leadership in business and industrial technology, and an immense variety of natural resources. (Williamson 785)

Williamson’s testimony contributes to the assertion that only the United States truly enjoyed a victory in WWI; it was conceivably the only nation that benefitted from the war. Government investment in the economy, generous loans to European countries, and the U.S.’s uniquely intact treasury contributed to its new role as the economic leader, primary creditor, and democratic superpower of the world.

Notably, the U.S.’s surpassing of Britain and France may have involved intentional instances of inaction, even in the face of clear German aggression. For example, the nation made no military response when the Germans sank the British cruise liner Lusitania (Brinkley 608). 
Recall again that only unrestricted submarine warfare on American shipping could motivate the U.S. to declare war (Bailey 593). Recall also that while some historians argue that the U.S. was responding to threats to its shipping, the more adept realize that the U.S. was also responding to a severe drop in the Allies’ chances at victory. Given the colossal two-fold threat to the United States, discussed earlier, which would arise in the case of an Allied defeat, the U.S. needed to ensure that the British did not lose. Yet, if it was truly fighting to support the freedom-loving democracies against the evil monarchy, a two-and-a-half-year deliberation in declaring war seems absurd. Considering the U.S.’s suspicious failure to come to the immediate aid of its democratic siblings, it is not impossible that the U.S. intended to wear down British and French power so that it could subsequently save the Entente, reaping maximum benefits from its own glorious victory and enjoying sweeping gains to its international political prowess.

Furthermore, at the Versailles Peace Conference, Wilson insisted that the Allies refer to the United States as an “associate” rather than an ally (Brinkley 621). That he would so pompously keep his distance from those who were, in fact, his allies perhaps suggests that he never felt a particularly keen loyalty to them. Moreover, many leaders of the Allied powers resented “Wilson’s tone of moral superiority” (Brinkley 621); that he would project such self-aggrandizing, condescending arrogance to fellow diplomats perhaps indicates that he never intended to cooperate with them in the first place. In fact, the United States did not even join the League of Nations that Wilson himself had successfully founded: his complete refusal to compromise with the Republican party led Congress to reject the Treaty of Versailles (Brinkley 623). 
Thus Wilson was evidently quite poor at making friends; a stoic conqueror rather than a charmer, he embodied a U.S. World War I policy which harbored little convincing loyalty to its allies and perhaps even aimed to undermine them for its own gain. Intentional or not, this aim succeeded spectacularly. The United States lost only 126,000 soldiers in the war, a toll far less damaging to the workforce which powered its economy; by contrast, the warring European nations lost at least 900,000 each. France faced particularly crippling losses, with 2.5% of its population killed in the war (Gordon 293). After Britain wore down the German resolve and weakened its armies, the U.S. took the victory with comparatively
minimal loss to its own forces (Brinkley 614). Thus the U.S.’s late entry into World War I served as a means to win the war without paying the economic and political price of an extended conflict, conveniently positioning the U.S. to emerge from the war as the undisputed military victor. The U.S. was just as fortunate in economic terms: in 1914 the U.S. was the largest debtor nation in the world, with $3.5 billion in investments abroad against European investments in the U.S. worth $7.2 billion. But by the end of the war, the tables had turned: foreigners held $3.3 billion in American securities while Americans owned foreign investments worth $7 billion (Gordon 293). Undeniably, the war was immensely fruitful for the United States. Financially the U.S. went from a debtor nation to a creditor nation, certainly facilitating the massive economic boom beginning during the war and continuing into the Roaring ’20s. Thus the United States handled its international affairs in a way which maximized its own power while the extended war crumbled British and French finances and militaries.

The United States made its new power tangible to the world with its manipulation of political and economic world affairs in the following decades. For instance, when Germany was facing difficulty paying its crippling reparations costs, the U.S. drew up the Dawes Plan to assist with its payments. The Dawes Plan essentially granted Germany additional loans to pay the debts it already owed (Brinkley 710). Never before World War I did the United States claim any justification to interfere in European economic affairs, but now that it was unquestionably a world power alongside the Entente, the Allies welcomed its assistance. Furthermore, when World War I ended, Wilson’s vision of a League of Nations was organized as part of the Treaty of Versailles (Brinkley 621). 
The concept of a union of nations which was not an empire was unprecedented in modern history, yet Europeans proved keener on the novel concept than the U.S. Congress, which ironically did not vote to join Wilson’s League of Nations. That the ideas of a U.S. president could have such influence on Europe further demonstrates the new influence and respect that the United States commanded on the world stage. Economically, U.S. corporations became doubly active in their investments abroad after World War I (Williamson 786). The economic boom of the Roaring Twenties could reasonably be seen as a result of this increasing globalization, and in particular of the U.S.’s role as the primary creditor to the entire European continent; the profits from these loans surely contributed to the mounting wealth of the United States in the 1920s. Thus the United States’ new international power was made evident in its mounting influence upon the world’s political and economic affairs.

The United States dramatically reversed 125 years of isolationism during World War I certainly as a response to German aggression but also, crucially, as a means to seize a convenient and unprecedented opportunity to expand its international political and economic influence. Until World War I, the U.S. limited its foreign wars primarily to land conflicts, and exclusively to the Western hemisphere; thus invading Europe marked a tectonic shift in its foreign policy. But notably, compelling motives fueled this shift: on the one hand, a colossal threat to the integrity of democracy and to the national security of the United States; on the other hand, a colossal opportunity to validate democracy against monarchy and to elevate the United States to world superpower status. Together these incentives prompted the U.S. to exercise its military to achieve greater political and economic influence in Europe. 
America would demonstrate this newfound power again and again in the next decades, through its hands-on manipulation of the economic and political affairs of the world.

Works Cited:

Brinkley, Alan. American History: Connecting with the Past. Boston: McGraw-Hill Higher Education, 2012. Print.
Gordon, John Steele. An Empire of Wealth: The Epic History of American Economic Power. New York: HarperCollins, 2004. Print.
Lincoln, Abraham. "Gettysburg Address." Dedication of the Soldiers' National Cemetery. Gettysburg, PA. Speech.
Nugent, Walter T. K. Habits of Empire: A History of American Expansion. New York: Alfred A. Knopf, 2008. Print.
Watson, Alexander. "Stabbed at the Front." History Today 58.11 (2008): 21-27. Academic Search Premier. Web. 26 May 2014.
Williamson, Harold F. The Growth of the American Economy. New York: Prentice-Hall, 1951. Print.
Wilson, Woodrow. "Request for a Declaration of War Against Germany." Joint Session of Congress. Washington, D.C. Speech.
Young, Ellsworth. "Remember Belgium." Woodrow Wilson. N.p.: PBS, n.d. Print.


La dualidad de ser
Zach Barragán
AP Spanish Literature and Culture

En las dos obras de Jorge Luis Borges, El otro y El sur, Borges usa el tema del otro (o el doble) para que los personajes puedan usar la dualidad de ser para mejorar sus vidas de maneras distintas. Cada cuento usa la dualidad de ser como una herramienta que ayuda a hacer avanzar a los personajes y el argumento. Los personajes gozan de la dualidad de ser de formas diferentes, pero las dos formas los ayudan.

Por un lado, la dualidad de ser es muy rara en el caso de los dos Borges, el joven y el mayor. Cuando el joven y el mayor se encuentran en un banco en Cambridge y se enteran de que son la misma persona, dudan que sea la realidad. Piensan que están soñando. Es como una fantasía para las dos personas y no saben exactamente cómo gozar y aprovecharse de la situación. No es su culpa, porque ellos no quieren arruinar nada de lo que pasará en el futuro, y por eso solo pueden hablar de cosas breves, sin detalles. Tienen la oportunidad de gozar de la dualidad de ser, pero se dan cuenta de que gozar de la oportunidad completamente puede tener consecuencias en sus propias vidas, en el pasado, el presente y el futuro. Es cierto que los dos Borges no gozan de la dualidad de ser como podrían, pero es mejor para los dos personajes, para que puedan seguir viviendo las vidas que merecen y deberían vivir.

Al contrario, en El sur, Juan Dahlmann goza de la dualidad de ser de una forma más creativa y buena para sí mismo. Con su herida, Juan Dahlmann reconoce que la muerte se está acercando. Él no quiere morir como todos los otros enfermos y heridos en el hospital. Dahlmann quiere morir de una manera más honorable y por eso él usa la dualidad de ser. Con la dualidad de ser, Juan Dahlmann puede escapar del hospital y ponerse en una situación muy diferente. Él crea la escena en el bar, los hombres borrachos y el conflicto entre sí. 
Dahlmann crea esta situación porque él cree que morir en una lucha, de una herida más macha, es mucho mejor que morir en el hospital de una herida de la cabeza. Dahlmann goza de esta dualidad de ser de la mejor forma. Él la usa para escapar de la realidad y crear una escena en la que él quiere vivir y morir. Con esta forma de usarla, Dahlmann no hace daño a nadie ni a sí mismo. Solo está ayudándose a sí mismo y gozando del privilegio que es completamente suyo. Por eso, Juan Dahlmann goza de la dualidad con libertad y de la mejor forma.

La dualidad de ser juega un papel muy importante en las vidas de los personajes. En cada obra, cada personaje goza de la dualidad de ser de formas apropiadas. En El otro, los dos Borges usan la dualidad de ser de una forma sencilla y no muy detallada, porque quieren tener cuidado con una herramienta muy poderosa en sus vidas. En El sur, reconociendo que ya viene la muerte, Juan Dahlmann goza de la dualidad de ser completamente para que él pueda estar feliz en los momentos finales de su vida.

Después de su accidente, Frida Kahlo cambió como persona y como artista, y este cambio introdujo la dualidad de ser en su vida y en su arte, especialmente en su obra famosa Las dos Fridas. En la pintura se ven dos Fridas, mano en mano. Las dos Fridas son distintas pero forman una sola persona. A la izquierda se ve a Frida después de su accidente y a la derecha, a Frida antes de su accidente. Se puede ver que la Frida a la izquierda está lastimada, con heridas que nunca se irán. Esta es la Frida que está lastimada después de su accidente, que tiene cicatrices y sangre en su vestido. Es la Frida más presente, pero mano en mano y con los corazones conectados, la Frida anterior, la Frida saludable, sin heridas, todavía existe. En sus obras, Frida Kahlo usa elementos de su nueva vida y de la vida anterior. Después de su accidente, vive una vida en la que la dualidad de ser es siempre evidente. Es cierto que su vida cambió mucho con el accidente, pero todavía puede gozar de las ventajas de la dualidad de ser. Su accidente y su sangre manchan su vida, como el vestido, con una mancha que nunca se quitará, pero todavía tiene los recuerdos, los momentos y los aspectos de su vida brillante, como los colores de su otro vestido, antes del accidente. Frida Kahlo tiene acceso a los dos lados. El accidente fue un horror y una experiencia terrible, pero también única para Frida. Ella se aprovecha de la situación y de los conflictos que enfrenta. Hay una dualidad de ser dentro de Frida Kahlo y se puede ver muy claramente en su obra Las dos Fridas. Es una dualidad de ser muy difícil y distinta, pero de ella Frida goza.


Poetry Comparison In-Class Essay
Samuel Lynn
English V

“Death the Leveler” by James Shirley and “Death, Be Not Proud” (Holy Sonnet X) by John Donne are both poems that personify death. Although both argue that death should not cast a negative shadow over one’s actions, Shirley’s poem speaks of death as an eternally looming force, while Donne speaks of it as something to be ridiculed and ignored.

“Death the Leveler” is a poem about how death influences everything in our lives and “levels” them out. This is shown in the first few lines, when Shirley states, “The glories of our blood and state/Are shadows, not substantial things;/There is no armour against fate;/Death lays his icy hand on kings:” (1-4). This means that conquests in war and one’s rank are irrelevant, as it is inevitable that Death will make all individuals equal, taking even the most exalted, such as royalty. The rhyme scheme of each of the three stanzas, structured in an ABABCCDD pattern, sets the first four lines of each stanza up to be the most important, the next two to be short imagery and description, and the last two to be a conclusion. Since these are the first four lines of the poem, Shirley meant for the reader to take them as essentially his thesis. Shirley then goes on to say that even men who are fierce enough to kill and then plant laurels in the wake of battle, not honoring the men they killed, still fear death and have no standing against it. Again, they must “stoop to fate” (14). In the last stanza, Shirley uses a metaphor that compares withering garlands to the “mighty deeds” of humans, stating that mighty deeds are beautiful and spectacular in the present moment but that in the end they wither away and are forgotten (17-18). 
There is a volta before the last couplet (suggesting that the stanza is a rather mutated form of a traditional English sonnet) which changes the meaning of the poem from simply talking about how death makes everyone equal to saying that, even though rank and power do not matter, righteous people are remembered and perhaps even live peacefully in Heaven for eternity after death: “Only the actions of the just/Smell sweet, and blossom in their dust.”

“Death, Be Not Proud” is a poem about Death as well, but Donne believes the opposite: Death is a force, or a person in this case, unworthy of fear, as it is simply a means of passing from one life to another. The thesis of this poem is found in the first two lines: “Death, be not proud, though some have called thee/Mighty and dreadful, for thou art not so;” (1-2). Already, this is a complete contrast to “Death the Leveler,” for Shirley shows a great amount of respect for Death, as when he writes, “Upon Death’s purple altar now” (19); not only is he saying that Death has an altar, but the color purple has long been known as a color of royalty. Donne, on the other hand, is saying that Death should not be so arrogant because he is not so mighty and dreadful. In fact, Donne states, “For those whom thou think’st thou dost overthrow/Die not, poor death, nor yet canst thou kill me.” (3-4). He suggests that Death does not actually kill anybody. Instead, Death simply delivers people’s souls to Heaven, where they are at peace (8). This is very similar to the last line of “Death the Leveler” because it suggests that death is not purely bad, since righteous people may go to a peaceful place after they die (Shirley suggests an afterlife while Donne is certain of one). Unlike Shirley’s poem, Donne’s traditional sonnet goes on to ridicule Death for being a slave “to fate, chance, kings, and desperate men,” which is perhaps why Donne uses the word “overthrow” instead of “kill” in line three (9). 
It is almost as though Death is a lowly messenger who simply delivers souls to their place in the afterlife and doesn’t possess any real power. Then Donne compares Death to sleep, as no one is or should be afraid of sleep. Furthermore, he states that Death is no better than sleep-inducing drugs such as poppy, or charms, which make one pass out momentarily and then wake up in a high (11). In this case, the high would be an eternal utopia in the form of Heaven. The poem comes to an end much like Shirley’s poem, concluding with, “One short sleep past, we wake eternally/ And death shall be no more; Death, thou shalt die.” (13-14). It is similar in the sense that it speaks of a better place after death, but Donne’s poem is still riddled with insults. The last couplet of Donne’s poem means that, after Death takes one to Heaven, that soul exists in a more perfect world in which there is no death. It is a witty observation; Death is so lowly that he isn’t even allowed in Heaven, and he himself will die, at least in that world. In the end, both of these poems make remarkably similar implications. In Shirley’s poem, Death is the eternal elephant in the room. In Donne’s, Death is not worth acknowledging because after one dies, he is reborn in Heaven. Both suggest that one should not live afraid of death, but rather should take advantage of every moment. Shirley seems to think that it is worthless to be afraid of Death because he is always there, and one cannot know when he is going to die, hence the “Early or late/They stoop to fate,” while Donne thinks it is worthless to be afraid of death because one cannot truly die, for he will simply be reborn in Heaven, where there is no death.

Works Cited: Dore, Anita. The Premier Book of Major Poets: An Anthology. New York: Fawcett Books, 1970. Print.


Telemachus and Hamlet: The Fear of Action Jessey Bryan English V Not every hero is able to take action without hesitation. Telemachus from The Odyssey by Homer and Hamlet from Hamlet by William Shakespeare are comparable because of their fear of action. Telemachus is afraid to confront the suitors who court his mother and devour his goods because he is too weak and immature. However, with the help of Athena and Odysseus, he overcomes his fright. Hamlet, on the other hand, is hesitant to avenge his father by murdering his uncle Claudius. Fortinbras and his valiant army inspire Hamlet to finally kill Claudius. Telemachus and Hamlet are both hesitant to face their enemies, but, through the inspiration of brave warriors, they overcome their different reasons for being scared. Telemachus is not able to challenge the suitors even though they try to marry his mother Penelope, attempt to murder him, and squander his resources. Telemachus has a significant problem with the suitors, who have invaded his house in the nearly twenty years Odysseus has been gone because of the Trojan War and his long voyage home. After the suitors ridicule his plan to find Odysseus, Telemachus prays to Athena, “O god of yesterday, guest in our house, who told me to take ship on the hazy sea for news of my lost father, listen to me, be near me: the Akhaians only wait, or hope to hinder me, the damned insolent suitors most of all” (Homer 26). Telemachus realizes that the suitors will offer no help to him and may even try to stop his progress. He determines that they will be a major threat, and this scares him. That is why he prays to Athena, a goddess, for guidance. When Telemachus is explaining his quandary with the suitors to Eumaios, a swineherd, and Odysseus, who is in disguise, he tells them, “[Penelope] cannot put an end to it; she dare not bar the marriage that she hates; and they devour all my substance and my cattle, and who knows when they’ll slaughter me as well? It rests upon the gods’ great knees” (Homer 293).
Telemachus’ fear is caused by the suitors abusing his supplies, and he is terrified because they might murder him. He knows he is not brave and mature enough to stand up to the suitors. He also suffers from the fact that Penelope cannot stop the suitors from courting her. When Telemachus calls a meeting to discuss the problem of the suitors, he addresses the Ithacans, saying, “My distinguished father is lost, who ruled among you once, mild as a father . . . We have no strong Odysseus to defend us, and as to putting up a fight ourselves- we’d only show our incompetence in arms. Expel them, yes, if only I had the power” (Homer 21). Here, Telemachus points out that Odysseus was not influential as a father when he calls him “mild.” This lack of influence is not Odysseus’ fault, but, since he has been away at Troy, no one has taught Telemachus to be a brave, powerful man. Telemachus is afraid of the suitors because of their strength, and he knows he is too weak and young to compete with them. He needs a braver force to aid and inspire him. While Telemachus is fearful because his power does not match the suitors’, Hamlet is afraid to act because he has a moral conflict. Hamlet worries because he does not know whether killing Claudius to get revenge for his father’s death is justified or dishonorable. Hamlet is burdened by the ghost of his father, King Hamlet, with killing Claudius, his father’s murderer. He responds to the ghost by telling him, “The time is out of joint. O cursèd spite,/ That ever I was born to set it right!” (I, v, 210-211). Hamlet believes that everything in Denmark is currently perverse. He tells his father that he was born to set everything back in order, but he is unhappy with the burden, which is seen when he laments the “cursèd spite” of ever being born to set it right. Hamlet’s angst is best shown in his ruminations after he devises a plan to determine if Claudius is really his father’s murderer. He says, “But I am pigeon-livered and lack gall. . .” (II, ii, 604).
Deep down he knows he can’t take revenge because he is a coward (like Telemachus). He thinks he is a coward for procrastinating his vengeance as he battles over whether or not he should kill Claudius. Hamlet’s fear is not caused by others’ strengths, as Telemachus’ fear is, but is sparked by an internal moral dilemma. This dilemma is shown in all the moments when Hamlet makes excuses for why he cannot kill Claudius. For example, when he sees Claudius praying he says, “Now might I do it pat, now he is a-praying,/ And now I’ll do’t./ And so he goes to heaven” (III, iii, 77-79). Hamlet decides not to murder Claudius while he has the chance because he thinks that Claudius will go to heaven if he is killed while praying. Hamlet is inventing reasons why he cannot kill Claudius, which again shows his dilemma over whether or not to murder him. Even though their concerns are different, both characters overcome their fears with the inspiration of brave people around them. Telemachus and Hamlet both overcome their different fears through the motivation sparked by the valiant people who surround them. Hamlet’s main inspiration is Fortinbras, the Prince of Norway, because his bravery reminds him of his father’s. When he sees how brave and formidable Fortinbras is, he looks back on how his own fear of taking action has been a waste of time. He states, “How all occasions do inform against me,/ And spur my dull revenge! What is a man/ If his chief good and market of his time/ Be but to sleep and feed?” (IV, iv, 34-37). He tells himself that a man is not worth anything if all he does is eat and sleep. All the events that have transpired since his father demanded revenge have “dulled” his quest for vengeance. Fortinbras’ warrior-like demeanor sets an example for Hamlet and inspires him to act. Hamlet ends his soliloquy by saying, “O from this time forth/ My thoughts be bloody or be nothing worth!” (IV, iv, 68-69).
Hamlet takes Fortinbras’ dauntlessness to heart, and he avenges King Hamlet by stabbing and poisoning Claudius (V, ii, 352-358). Telemachus, in the same way as Hamlet, is inspired by venerable people: both Athena, the goddess of war and wisdom, and Odysseus, “the raider of cities.” Athena, disguised as a mortal, motivates Telemachus when he is feeling doubtful about finding his father by telling him, “You’ll never be fainthearted or a fool, Telemachus, if you have your father’s spirits; he finished what he cared to say, and what he took in hand he brought to pass . . . So never mind the suitors and their ways, there is no judgement in them, neither do they know anything of death and the black terror close upon them” (Homer 27). This quote demonstrates both Athena and Odysseus encouraging Telemachus to face his fears. Athena tells Telemachus that, if he is as brave and determined as Odysseus, he will never be a coward. With this support, Telemachus begins his expedition to find Odysseus even with the suitors challenging his decision. Telemachus now knows that he will have assistance when the time comes to inflict death and “black terror” upon them. On his expedition to find Odysseus, Telemachus meets Nestor and Menelaos, allies of Odysseus, who both support him by noting his similarities to the great Odysseus. Nestor compliments him by saying, “Your manner of speech couldn’t be more like [Odysseus]; one would say No; no boy could speak so well” (Homer 39). Menelaos and Helen both note that Telemachus has “Odysseus’ hands and feet; his head, and hair, and the glinting of his eyes” (Homer 57). These comparisons to his father give Telemachus confidence because Odysseus is known as one of the most intelligent and courageous men in Greece. At the end of The Odyssey, Telemachus and Odysseus, with the help of Athena, kill all the suitors who have been terrorizing Ithaca (Homer 409-425).
In this event Telemachus conquers his fear and takes revenge against the suitors with the help of his father and Athena, just as Hamlet takes revenge when he kills Claudius. Even with different fears, Hamlet and Telemachus are alike because they are given confidence by heroic role models. Telemachus and Hamlet share a fear and hesitancy to confront their enemies. Telemachus is frightened to challenge the suitors who antagonize his mother and exploit his goods because he is too young and frail to compete with them. With the support of Athena and Odysseus, he overcomes his caution and faces the suitors with Odysseus. In contrast, Hamlet is scared to kill Claudius because he does not know if it is the right thing to do. Fortinbras’ toughness and vigor motivate Hamlet to kill Claudius. Telemachus and Hamlet are both horrified at the thought of murdering their enemies, but they crush their fears with the support of valorous comrades. These characters show that many people need an inspirational leader to overcome their terrors. In life, everyone requires a person whom he or she can strive to emulate. In Telemachus’ and Hamlet’s cases, they call for paragons of bravery like Odysseus or Fortinbras. It is not shameful to look up to someone who is a tremendous inspiration, and it may be helpful, just as it is for Hamlet and Telemachus.

Works Cited: Homer. The Odyssey. Trans. Robert Fitzgerald. New York: Farrar, Straus, and Giroux, 1998. Print. Shakespeare, William. Hamlet. Folger Shakespeare Library Edition. New York: Simon and Schuster, 2012. Print.


The Road to Peace and Revenge are Similar Beasts Grant Kegel English V Is the hope for revenge or for inner peace a better motivator for action? Siddhartha in Siddhartha by Hermann Hesse and Odysseus in The Odyssey by Homer each go on an adventure toward his own goal: Nirvana in one case, revenge upon the suitors in the other. These heroes of their own journeys have many similarities. They receive supernatural aid, battle with suicide, and trek through strange lands with support from strangers toward their respective destinations. However, their drives are different: Siddhartha searches in order to achieve inner peace, a mental accomplishment, while Odysseus wishes to reach home and wreak revenge upon the suitors, a physical achievement driven by longing and anger. The trials Siddhartha and Odysseus face are similar, and therefore defining. Odysseus’s trials involve pride and survival, while Siddhartha’s involve overcoming mental obstacles and emotional turmoil. Siddhartha leaves his home to discover the key to inner peace. He departs from his family to live with a group of wandering monks, the Samanas. After months of training, he moves on, disregarding the instruction he received from the Samanas. Siddhartha’s journey is full of trials, such as the lack of sleep, food, and daily comforts allowed a son of a Brahmin, as the narrator states, “He walked the path of eradication of ego through pain, through the voluntary suffering and overcoming of pain, of hunger, of thirst, of weariness” (Hesse 14). Through the pain and the trials, he continues on his journey toward Nirvana, because he sees it as necessary to achieving his goal of inner peace: “Was not Atman within him? Did not the ancient source of all springs flow within his own heart? This was what must be found, the fountainhead within one’s own being; you had to make it your own! All else was searching, detour, confusion” (Hesse 7).
This is unlike Odysseus, because while he suffers many trials in his journey, they are not self-inflicted; rather, they are forced upon him by the gods, by others, or, inadvertently, by himself. One such accidental, self-inflicted hardship, fueled by narcissism and spite, comes when Odysseus shouts, “If ever mortal man inquire how you were put to shame and blinded, tell him [it was] Odysseus” (Homer 160). He yells at Polyphemus to remember him, but in doing so allows the Cyclops to curse him and the ship. By his own hand, then, he is pushed into hardship and a long, arduous journey. Again, this reaction is unintentional, fueled by his vengeance and pride against the Cyclops, while Siddhartha’s trials are fueled by the pursuit of peace and the eradication of his own needs and wants. Siddhartha’s goal is making things right internally (the establishment of selflessness within himself), while Odysseus’s goal is making things right externally (taking out the suitors and restoring the status quo). These trials are similar, but the mental versus physical goals define the growth in these characters. Another connecting element is that both journeys run into the obstacle of temptation. Siddhartha, when he arrives at a small city, meets Kamala, a courtesan. With her, he learns the sensory pleasures— pleasures that go directly against his overall goal of selflessness and eradication of ego. Soon, he was “sleep[ing] in vain, his heart full of misery he felt he could no longer endure” (Hesse 69). He gets caught up in the highs and lows of trade, sex, and other pleasures, which deter him from the path of inner happiness. Thus, after having been entrapped in the dismal life that Kamaswami, a successful merchant, led, he leaves the city to return to his path, despite his many wishes to stay with Kamala forever and abandon his quest for Nirvana. Odysseus’s run-in with the temptation to stay with a lover is similar to Siddhartha’s in the sense that he stays with Circe.
Odysseus, having experienced many dangerous encounters with gods and creatures, comes upon the island of Circe, a witch. He makes a deal to stay with her, as long as his men are taken care of as well. However, after months, crew members ask Odysseus to “Shake off this trance, and think of home…” (Homer 179). Odysseus could have, and would have, stayed with Circe for much of his life if not for his crew helping him to shake off the temptations, the luxuries, and the benefits of staying with her. With their help, he instead braves the seas to reach Ithaca once again. Siddhartha parallels him in this respect, as he could have stayed in the city, avoiding the thirst, hunger, and exhaustion that were prevalent on his journey, but instead decided to travel on. A final parallel is that supernatural aid comes to the two characters in different forms. In Siddhartha, Vasudeva, a ferryman, teaches Siddhartha to come to terms with his son and the world around him through the peace of the river: “Upon his face blossomed the gaiety of knowledge that is no longer opposed by any will, that knows perfection, that is in agreement with the river of occurrence” (Hesse 114). Siddhartha obtains his ultimate knowledge through an ordinary man, instead of a god, which is how Odysseus receives his supernatural aid. Athena approves of Odysseus’s journey to strike back against the suitors, and helps him along much of the way: “Son of Laertes and the gods of old, Odysseus, master of land and sea ways, put your mind on a way to reach and strike a crowd of brazen upstarts” (Homer 242). Vasudeva is a mortal man, no different from any other, except that he has found peace in his life. Athena, on the other hand, is the goddess of wisdom and strategic war, but is imperfect and known to hold grudges. Vasudeva makes sense in Siddhartha’s story, as it is the peaceful river life that allows Siddhartha to complete his goal. Odysseus’s goal of revenge is supported by Athena’s own rage and thirst for retribution.
Siddhartha’s understanding of Nirvana is achieved through a ferryman, a regular person, while Odysseus makes it home and murders the suitors with the power of Athena. Overall, Siddhartha and Odysseus go through their journeys in similar fashion, but Odysseus’s is one full of anger and longing, while Siddhartha’s is one in which he looks for peace and knowledge. Odysseus’s journey is fueled by revenge, and leads to consequences even after he makes it home. Siddhartha’s, by contrast, is fueled by the thirst for knowledge and happiness, which stays with him to the end of his days. As to which is a better motivator, it depends on the goal. If Siddhartha had tried to obtain eternal happiness through spite and anguish, there would be no resolution to his plight. If Odysseus had wished only for eternal happiness, he might as well have stayed with Kalypso forever. It isn’t their drive that leads them to the end; it’s their ability to recognize their external and internal obstacles and, in doing so, overcome them. In their supernatural aid, in their lows of temptation, and in their road of trials, they find the means to complete their tasks, and the peace of mind to accept the result.

Works Cited: Hesse, Hermann. Siddhartha. Trans. Susan Bernofsky. New York: The Modern Library, 2008. Print. Homer. The Odyssey. Trans. Robert Fitzgerald. New York: Farrar, Straus, and Giroux, 1998. Print.


New Ideas on Correlation Between Speciation and Reproductive Barriers Anja Stadelmann Honors Biology The emergence of reproductive barriers has long been associated with speciation, but two biologists from the University of Chicago found too many contradictory results when they compiled data from the first experiment to test this assumption fully. Their hypothesis was that "If genetic barriers to reproduction are a leading cause of new species, then groups of organisms that quickly accumulate these genes should also show high rates of species formation." [1] In other words, a species with genes favoring a reproductive barrier will form new species as new barriers come into play. The rate at which genetic barriers form should then predict the rate at which speciation occurs. After making estimates of the rate of speciation for fruit flies and birds based on evolutionary trees, they compared these speciation rates to genetic indicators of reproductive isolation. Their results were surprising; the rate of natural speciation was not dependent on the rate at which reproductive barriers arose. While noting that reproductive barriers are still speculative at some level, they state, "Our results question whether genetic reproductive barriers played a major role in how those species formed in the first place." [1] These findings set in motion a whole set of new questions, such as whether it is necessary to broaden the current definition of speciation, the "evolution of reproductive isolation." One of the most critical findings of this experiment was the discovery of "speciation genes" that contribute to reproductive isolation; yet, on the topic of species formation, these genes are credited with only a minuscule role. How do the biologists make an accurate estimate of the rate of speciation for the birds and insects using evolutionary trees? Do they look at the amount of time between the formation of new species in the evolutionary tree and then create a ratio?
How drastically different would the evolutionary tree estimations be for a species of bird halfway across the world when compared to the birds in this experiment? The article mentioned that only a handful of the birds and insects had speciation genes, so does this mean that a handful of individuals in other populations have speciation genes as well? Did or do humans have speciation genes? Why haven't other biologists tested the relationship between reproductive barriers and speciation? Has it simply been assumed, because it made logical sense, and never questioned? There is a chance that genetic barriers may actually predict speciation, but the biologists simply cannot catalog the species formed before those species go extinct due to genes that are not favorable or suited to their environment. Nowhere in the article do they mention the possibility of incorrect data due to immediate extinction. For example, say a species of caterpillar contains black and tan variants. They are the same species but with slightly altered genes that are not being bred out, because the gene pool is large. When a reproductive barrier emerges, such as a stream or river, more of the black than the tan caterpillars may get stuck on one side, and the smaller population may begin to filter out the tan ones. Another reproductive barrier may then occur, and only the furriest of the black caterpillars will be able to breed with each other. So far one species has split into two, then into three because of the second barrier. This process may continue to narrow each species down to as close to exact genes as possible; however, many of the newly formed species of caterpillar may not carry the genes needed for their environment and may die out before they can leave a fossil record. [1] "Genetic Reproductive Barriers: Long-Held Assumption About Emergence of New Species Questioned." ScienceDaily. ScienceDaily, 02 Sept. 2013. Web. 23 Sept. 2013.
<http://www.sciencedaily.com/releases/2013/09/130902162536.htm>.


Revolution in Russia and Egypt: History Repeats Paige Voss Asia: East and West The Russian Revolution of 1917 and the Egyptian Arab Spring are prime examples of how, despite all physical and chronological gaps, history seems constantly to repeat itself. Initially, the governments of Egypt and Russia were similar in nature: authoritarian and oppressive of their citizens. The pent-up tension eventually prompted two parallel revolutions. Yet, in each case, the resulting new government led to a downhill spiral from hope to disappointment. The final outcome was a new authoritarian government - oppressive, just as the first had been. The irony of the Egyptian Arab Spring and the Russian Revolution of 1917 is that both societies progressed and ended, politically, where they started. The preliminary foundations of the Russian and Egyptian societies at the time of the uprisings demonstrated corresponding traits. While Mubarak had previously been in the military, Tsar Nicholas was also known to pay more heed to military advice than to the advice of politicians. The military mindset of the two initial leaders affected how they reacted to threats and therefore molded the uprisings. Tsar Nicholas became the leader of Russia after the abrupt death of his father; Mubarak replaced President Sadat as president after Sadat was assassinated. Though each would later have time to settle into his position, both leaders were thrown into power unexpectedly, setting the tone of their leadership. To placate his citizens, the Tsar agreed to create the Duma in an attempt to give the people a voice. In Egypt, Mubarak tried to make the government more efficient by expelling members who had been involved in scandals. Both governments sometimes worked to appease their citizens’ complaints with decisive, positive-seeming actions. Alexander Kerensky described Tsar Nicholas, stating, “[He was not] the outcast, the inhuman monster, the deliberate murderer I used to imagine.
I began to realize that there was a human side to him” (Pipes 334). In Egypt, President Mubarak served four terms in office during which he was well supported by the public. Neither leader was exactly as he was made out to be, and each enjoyed support at some point in his reign. The beginning societies of both the Egyptian Arab Spring and the Russian Revolution echoed each other in multiple ways, including their authoritarian approach to leadership. The original governments of Russia and Egypt were authoritarian in their actions and oppressive to their people. When he first entered office, Mubarak refused to take on a vice-president. Tsar Nicholas got rid of his Commander in Chief, Grand Duke Nicholas Nikolayevich, and became Commander in Chief himself. Both strove to be singular leaders, which left more decisions in their hands, pushing them further from democracy. In Russia, ministers and high officials were appointed individually by the Tsar and reported straight to him. This made the officials, in some ways, pawns of the Tsar, which led to more decisions reflecting only the Tsar’s preferences. Cleveland writes in A History of the Middle East (2000), “After 1894, the state introduced tighter controls over oppositional political activity and used its full range of powers, from patronage to intimidation to blatant electoral fraud, to ensure the election of government candidates” (Cleveland 382). This statement demonstrates how the government of Egypt secured its continued reign through authoritarian actions. The people of Russia and Egypt were oppressed by the way both their governments relied on authoritarian actions to achieve their goals; these actions gave rise to the instigating factors of the uprisings. In Russia and Egypt similar conflicts arose between government and people as a result of government action and, ironically, inaction. In both Egypt and Russia citizens were protesting government discrimination; in Russia many


citizens of non-Russian ethnicities protested Russification, and others protested discrimination against Jews. In Egypt, citizens protested discrimination against the Muslim Brotherhood and other religious groups. Since such discrimination was often government-initiated, the already present dissatisfaction with government actions became linked to anger at the government for discriminating against its people. Another issue in Russia was rising food prices: between 78,000 and 128,000 workers went on strike to protest food shortages (Pipes 274). Food was also an issue in Egypt, as is clear from its place in the new government’s first plan. Ironically, both governments’ failure to use their resources to address the direst issues eventually led to their ousting. Additionally, Russia encountered problems with censorship. As Wade states in The Russian Revolution, 1917, “The government closely controlled the right to form organizations for any purpose, even the most innocuous” (Wade 2-3). In Egypt, too, the government controversially arrested many fundamentalists, effectively censoring its population. As is always true, rules that suppressed the citizens’ right to speak or act out against the government only served to strengthen their desire to do so. In both Russia and Egypt, the tension between government and people grew rapidly; as issues went unaddressed, they sparked uprisings in each country which also ran similar courses. The similarities of the interactions between the governments and citizens of Russia and Egypt during the first uprising demonstrate the irony of the events. In Egypt, the protests first began on National Police Day, and in Russia, on International Women’s Day. Ironically, both societies used these days traditionally set aside for celebration to make statements about their dissatisfaction.
In both movements protesters and civilians experienced police brutality: protesters were abused by police forces in Znamenskii Square in Russia, and in Tahrir Square in Egypt 40 people were killed and 40 others were wounded (Pipes 277). In the heat of the moment, the Egyptian government did not realize that, in these circumstances, police brutality would only incense the angered citizens and give them more evidence to fuel their anger. On January 28, President Hosni Mubarak gave a speech saying he would form a new government and address what the people were protesting, but the protests did not stop. At that point, the President promised to step down after his term and reform the government. Tsar Nicholas ordered the Duma to adjourn until April, but then agreed to let the cabinet re-adjourn. Then the Tsar decided to abdicate, giving power to his son Alexei, with his brother Michael Aleksandrovich as Regent - in other words, Michael would be in power until Alexei was old enough to rule. Both leaders made attempts to satisfy the protesters’ demands, yet neither realized the gravity of the situation, and as a result neither went far enough. The circumstances and actions of the two governments mirror each other, consequently leading to new, promising governing bodies. Ironically, each new government showed signs of progress, giving hope to the people. In Egypt, the new president, Morsi, worked for the re-formation of a parliament and created the Shura Council. In Russia, the Provisional Committee of the Duma was formed, which worked together with the Petrograd Soviet. These actions symbolized the new governments’ desire for a fresh start, which is exactly what the people were looking for. President Morsi created a hundred-point plan for the future, starting with addressing security, traffic, fuel, waste, and bread. The Provisional Government started with an eight-point program that promised free speech, assembly, no discrimination, and the right to strike.
Seeing the government’s quick and decisive moves to improve served as a reassurance and an inspiration to citizens after years of problems left unsolved. Morsi declared that his vice presidents would include a woman and a Coptic Christian. The leader of Egypt himself was vowing to personally address the problems of discrimination by religion and gender, an important step in a positive direction in the eyes of the people. National Public Radio’s reporter Merrit Kennedy passes on the opinion of Professor Nathan Brown, stating that “Morsi’s image and personal history are very different than previous presidents” (Kennedy). How new and different President Morsi was served as a source of hope to Egyptian citizens, because what they fundamentally wanted was change. The irony of the Russian Revolution and Egyptian Arab



Spring included that both new governments at first appeared to be positive forces of change but turned out not to be what the people were looking for. Unfortunately, the Provisional Government and Morsi’s government quickly began to disappoint the people with what appeared to be regressive actions. The leaders of the Provisional Government were described in Dziewanowski’s 1979 A History of Soviet Russia as being “increasingly inclined toward dictatorial methods but . . . too shy and unskillful in their application” (Dziewanowski 94). Meanwhile, in Egypt, the new leader Mohammed Morsi declared that he had power above that of the courts, which led some citizens to fear he was going to be an autocratic leader. On both sides, the hope of the citizens quickly waned as they saw their leaders leaning towards exactly what they had been trying to avoid. The Soviet and the Provisional Government had increasing problems cooperating with each other; often the speeches of Soviet leaders made the Provisional Government look dysfunctional and unsatisfactory in the eyes of the people. In December, the Egyptian government quickly created a new constitution even though the public complained and non-Islamists put together a charter against it. Each government’s failure to work as a cohesive group that represented the people hurt its reputation among the citizens. Since the police and gendarmerie had been dissolved, new Russian militias were formed, but they were weak and some were taken over by criminals. In Egypt, despite government promises to improve fuel prices, prices remained high, driving citizens to the streets in protest. Many of the attempted improvements, ironically, brought overpoweringly negative side effects or no change at all.
The newly instated governments of both Russia and Egypt did not live up to the people's expectations and instead disappointed them, leading to a further and final change in authority that left each country once again under an authoritarian leader. The Soviet government of Russia and the militarily instated government of Egypt returned to the authoritarian ways of their predecessors. On March 8, 1921, the Tenth Congress in Russia passed a resolution banning all factions within the Communist Party. By doing so, the government limited its opposition and avoided criticism and confrontation, further securing its own power. The new Egyptian government ruled that the Muslim Brotherhood, and all activities organized, sponsored, or financed by it, would henceforth be banned. The government was further oppressing its people by outlawing so prominent a part of society, one that until only recently had had enough support to place a member in the presidency. On June 28, 1918, the government of Russia nationalized most large industries, including approximately 1,000 joint stock companies (Dziewanowski 107). As Dziewanowski states in A History of Soviet Russia, "For the time being, all former administrative personnel were ordered to remain at their jobs or face prosecution for 'sabotage'" (Dziewanowski 107). The government took away its own citizens' rights and forcibly controlled its people, demonstrating an authoritarian style. The final governments of Russia and Egypt were once again authoritarian in their methods, marking the return of many past issues.

Issues that had previously caused the uprisings returned to Egypt and Russia when the final governments came into power. In Egypt, more than 1,000 members of the Muslim Brotherhood were killed during protests (Kirkpatrick). This statistic demonstrates further police brutality, one of the issues the citizens had wished to address by instating a new government.
Stalin’s Soviet government took over banks, insurance companies, and means of communication. These actions demonstrated the new government’s authoritarian tone in their approach to leadership, mirroring the censorship of the original government. In Fahim’s 2013 article, “Egypt General has Country Wondering about Aims” he reflects on Egyptians’ feelings on General Sisi, saying that the events “leave much of Egypt wondering whether he [General Sisi] intends to return the country to civilian rule, as he has repeatedly promised, or to capitalize on public support for him by seeking power, formally or informally, for himself” (Fahim). Fahim clearly describes the country’s fears that General Sisi will bring them back to where they started, just with a new authoritarian leader. In Russia, the new government deemed strikes illegal and workers had to join new government trade unions. By stopping the citizens from 46


Pinnacle

Volume 3, Fall 2014

speaking out against the government, the Russians were regressing further back to the authoritarian rule of the Tsars. In the end Russian and Egyptian citizens found themselves facing many of the same issues, despite two consecutive changes in leadership. Patterns of repetition can be found throughout history, with present-day countries repeating the mistakes of those before them. Both Egypt and Russia’s governments before the Arab Spring and Russian Revolution exhibited political similarities in their histories and their authoritarian styles. In both countries similar issues between the citizens and governments led to uprisings. The second governments each sparked hope but quickly failed to meet the citizen’s desires, the unfortunate truth being that as a result of the final revolution, both countries found themselves with authoritarian leaders who brought back many of the same issues. At the end of the Russian Revolution of 1917 and the Egyptian Arab Spring, both countries had followed the same path from their initial authoritarian government to a final government of the same nature. The events of the Egyptian Arab Spring demonstrate how societies throughout history fail to learn from the mistakes of those before them and are therefore doomed to repeat them.

Works Cited:
Cleveland, William L. A History of the Modern Middle East. Boulder, CO: Westview, 2000. Print.
Dziewanowski, M. K. A History of Soviet Russia. Englewood Cliffs, NJ: Prentice-Hall, 1979. Print.
Fahim, Kareem. "Egypt General has Country Wondering about Aims." New York Times, Late Edition (East Coast) ed. Aug 03 2013. ProQuest. Web. 28 Apr. 2014.
Kennedy, Merrit. "The Challenge For President Morsi: Unite Egypt." Weekend Edition Sunday (NPR) (2012). Newspaper Source. Web. 3 Apr. 2014.
Kirkpatrick, David D. "Egyptian Court Shuts Down the Muslim Brotherhood and Seizes its Assets." New York Times, Late Edition (East Coast) ed. Sep 24 2013. ProQuest. Web. 28 Apr. 2014.
Pipes, Richard. The Russian Revolution. New York: Knopf, 1990. Print.
Wade, Rex A. The Russian Revolution, 1917. Cambridge: Cambridge UP, 2000. Print.

Works Consulted:
"Abdel Fattah el-Sisi." Gale Biography in Context. Detroit: Gale, 2013. Biography in Context. Web. 5 Apr. 2014.
"Aleksandr Fyodorovich Kerensky." Europe Since 1914: Encyclopedia of the Age of War and Reconstruction. Ed. John Merriman and Jay Winter. Detroit: Charles Scribner's Sons, 2007. Biography in Context. Web. 13 Mar. 2014.
"Arab Spring." New York Times, Late Edition (East Coast) ed. Dec 25 2011. ProQuest. Web. 11 Mar. 2014.
"Arab Spring." Opposing Viewpoints Online Collection. Detroit: Gale, 2012. Opposing Viewpoints in Context. Web. 11 Mar. 2014.
"Hosni Mubarak." Newsmakers. Detroit: Gale, 1991. Biography in Context. Web. 13 Mar. 2014.
Kennedy, Merrit. "Opponents To Mark Morsi's First Year In Office With Protests." All Things Considered (NPR) (2013). Newspaper Source. Web. 3 Apr. 2014.
"Mohamed Morsi." Gale Biography in Context. Detroit: Gale, 2012. Biography in Context. Web. 13 Mar. 2014.
"Nicholas II." Encyclopedia of World Biography. Detroit: Gale, 1998. Biography in Context. Web. 12 Mar. 2014.
Siegal, Robert, and Lelia Fadel. "Morsi Supporters Fear Nearing Crackdown On Islamist Groups - U." All Things Considered (NPR) (2013). Newspaper Source. Web. 3 Apr. 2014.
Stockdale, Nancy. "Presidential Politics in Modern Egypt." American Government. ABC-CLIO, 2014. Web. 31 Mar. 2014.

"Vladimir Ilich Lenin." Encyclopedia of World Biography. Detroit: Gale, 1998. Biography in Context. Web. 13 Mar. 2014.
Watts, Tim. "Russian Revolution of 1905." World History: The Modern Era. ABC-CLIO, 2014. Web. 7 Apr. 2014.

Feeding a Hungry Mind
Anja Stadelmann
English IV

Black Boy, by Richard Wright, is a reminder of how often students take for granted the ability to read and extract relevant knowledge, unaware of how lucky they are and how influential they can be. In books, Richard finds not only an escape in times of trouble but also a sense of strength. When he begins to analyze text, Richard stumbles on racism everywhere he looks and observes the overwhelming power of words. By immersing himself in books and knowledge, Richard not only finds liberation from everyday plights but also gains a strong sense of the injustice and morality stemming from the power of literary expression.

Richard develops his imagination and sense of curiosity by reading, and he calls on both when his reality becomes too difficult to handle. Although the life that Richard has been exposed to in books is dangerously out of reach, he cannot help but yearn for the freedom that he knows he is being denied. The need for a better life is seen in Richard's comment, "The warning red lights . . . blinked all around me, the sirens and the bells and the screams . . . filled the air" (169). Richard understands that he is using the adventure and rush of pulp fiction as an escape for his hungry and often violent mind. The feeling becomes an addiction for Richard, as he is unable to cope with the segregation and immorality of the South and looks for any way to leave it all behind. While reflecting, Richard concludes, "I wanted everything to be possible . . . Because I had no power to make things happen outside of me in the objective world, I made things happen within" (72). In a state of constant craving, anything that fills Richard with a feeling of satisfaction or a sense of power attracts him. Surrendering to the urge for control, he grows apart from his family and discovers the independence he has been struggling for.
Richard considers, "My reading had created a vast sense of distance between me and the world in which I lived and tried to make a living" (253). In providing Richard with an escape, literature also instills in him an undeniable sense of curiosity. With the discovery of a new world of words, Richard begins asking endless delicate questions as soon as his mother starts reading him stories and teaching him numbers. Although clichéd, the idea holds true: books help Richard escape his confusing world and build a solid foundation for creative and free thinking.

Injustice between whites and blacks is revealed to Richard when he begins to look more closely at his passion for literature. Richard's understanding of the fears black people face during this time period develops through both external forces and internal ones (self-discovery), each a result of literacy. A white woman says to Richard, "You'll never be a writer. Who on earth put such ideas into your . . . head?" (147). By putting him back into his "place," the white woman pinpoints the racist stereotypes that Richard is just beginning to process. The key element he extracts from his reading is the idea of restriction and deprivation. Richard realizes he is living in a world of suppression and is desperate to feel emotions when the only acceptable outlets for a black person are faith and religion. To fill his void of emotion, Richard writes, even though he senses it is considered horribly wrong. He cannot cope with the fact that emotions, a basic human function, are being denied him and defined for him. Richard expresses his disgust in discussing "what being a Negro meant. I could endure the hunger. I had learned to live with the hate. But to feel that . . . the very breath of life itself was denied me, that the very breath of life itself was beyond my reach, that, more than anything else, hurt, wounded me" (250). Not only does Richard feel restricted by his youth, but he also feels restricted simply by being himself.
He believes that "I no longer felt that the world about me was hostile, killing; I knew it" (251). No one else could have given Richard the knowledge that he has derived from books: all that he has missed, all that he is going to miss, the pointlessness of his mother's suffering, the feelings that he could never have, and especially the bitter inhumanity of the humans around him. Richard feels as if he has stolen the sense of morality and humanity that white people had hidden from him through illiteracy. Reading becomes both a source of excitement and a curse for Richard as he identifies racism in the pages of the books he has come to learn from.

Richard reveals the tremendous power of words and understands the courage and risks that come with defying the odds by writing publicly. Richard had associated books and words with a unique and personal form of emotional escape that he shares only with the author. When asked to write about himself on the blackboard in front of the class, Richard cannot bring himself to do it. It is not pure stage fright that freezes Richard in place, but fear of losing his treasured written world to a room full of schoolchildren. Richard had accepted writing as a privilege reserved for the author and is terrified that, in bringing words into reality, he will not only take honor from the authors who have given him something to be passionate about but also strip words of their special connotation. Eventually, Richard yearns to convey the emotions bottled up inside him and turns to writing for himself. His first story is an innocent tale, yet it lets the reader know Richard has accepted the idea of conveying emotion through the pen. After reading his story to his awestruck grandmother, Richard reflects, "Her inability to grasp what I had done or was trying to do somehow gratified me. Afterwards whenever I thought of her reaction I smiled happily for some unaccountable reason" (121). Richard experiences his first taste of independence as the leader of a new and changing generation. Writing becomes a way for Richard to feed his sense of independence and to shield himself from the submission that he watches devour his friends and family.
The positive power of words on Richard is unimaginable, yet their negative aspects may have caused him even more pain. When Richard's article is published in the paper and receives negative feedback, he realizes that with his ability to read he has stepped out of place and threatened white people. They recognize that Richard has "stolen" their powerful weapon of literacy. After he has been selling papers in which he read only the fictional stories, a friend confronts him about the entirety of the paper's content: "Well, the paper you're selling preaches the Ku Klux Klan doctrines" (131). Richard feels a sense of betrayal and guilt after the newspaper incident. He experiences how devastating words can be and the impact they have on those who encounter them. After reading an angrily written and racist text, Richard concludes, "Yes, this man was fighting with words. He used words as a weapon, using them as one would use a club . . . I could use them as a weapon? No. It frightened me" (248). The statement demonstrates Richard's development: he acts confident in taking on the challenge of a segregated world, even while it still scares him inside. He has been bursting with violence his entire life with limited avenues of expression, and this conclusion gives him new hope and enthusiasm for the passion he has turned into a lifestyle. In experiencing the power of words, Richard discovers his independence and learns that his writing can be more than just words on a page if he has enough courage to follow what he knows is right, despite the cost.

Morals and a sense of basic humanity take shape as Richard throws himself into books and discovers the power of words. Richard escapes his confusing world through text and builds a solid foundation for creative and free thought. Literature keeps Richard aware of the suffering around him and the pervasiveness of racism.
He refuses to comply with his "position" in society after all he has read and begins his quest for independence. His family undermines Richard's urge to read and write, and had they succeeded, Richard would have turned to violence as a form of expression. By trying to stay out of danger, Richard's family unintentionally puts itself and the entire black community in danger. There seems to have been no easy solution to the problem of Richard's uncaged mind.

Works Cited:
Wright, Richard. Black Boy. New York: Harper Perennial Modern Classics, 2005. Print.
