Eruditio
Louisa Chen & Tassity Johnson, Editors
undergraduate journal for the humanities at duke university
volume 30, 2009-2010
editors-in-chief: Louisa Chen, Tassity Johnson
senior editors: Patrick Baker, Mariam Diskina, Monica Wang, Eddie Wu
staff editors: Elizabeth Beam, Rewa Choudhary, Braden Hendricks, Lauryn Kelly, Shining Li, Brandon Maffei, Nick Schwartz, Janet Scognamiglio
acquisition editors: Rewa Choudhary, Shining Li
layout editor: Patrick Baker
publisher: The Publishing Place, Hillsborough, North Carolina
front cover: Alex Sun

Special thanks to Marie Pantojan for allowing us the use of her layout design.

Copyright © 2010 Eruditio at Duke University. All rights revert to authors following publication.

Eruditio gratefully acknowledges the financial support of Duke University's Undergraduate Publications Board.

Since 1982, Eruditio has published full-length academic papers written by Duke undergraduates. The journal comes out once a year in late April, featuring papers written on a wide range of topics from an equally wide range of departments. The papers featured in Eruditio represent the extraordinary depth and breadth of academic inquiry that Duke undergraduates undertake each and every day at the university.
Eruditio Volume 30 • 2009-2010

Contributors
Can Mediation Compete? Examining the role of third party intervention in the deterrence equation – Lucy McKinstry
Humpty Dumpty Had a Great Fall(acy): Language, Sexism, and Reform – Jocelyn Streid
The Emergence of a Norm Cascade on Violence Against Women: CEDAW or Transnational Advocacy Network? – Caroline Lampen
The Leadership of the Zen Master: Phil Jackson – Anand Varadarajan
A New Gangster's Paradise: The Diglossic Traits of Tsotsitaal and the Concept of South African Unity – Joan Soskin
Rationalizing Inequality: Secularization and the Stigmatization of the Poor – Collin Kent
Who's Afraid of the Big, Bad Television? – Lauryn Kelly
Sylvia Plath on Edge: A Case for the Correlation of Bipolar Disorder and Exceptional Poetic Creativity – Elizabeth Beam
Personal Choice in Nietzsche's On the Genealogy of Morality – Cory Adkins
The Life Within "Neutral Tones" by Thomas Hardy – Robert Lehman
Pickled Dreams: Uniquely Modern, Indian Narration in Midnight's Children – Karan Chhabra
CONTRIBUTORS
Cory Adkins is a Trinity freshman from Winston-Salem, North Carolina who plans to propose a Program II major in Ethics next fall. He would like to thank Professor Peter Euben and Teaching Fellow Ali Aslam, both of whom taught Cory in an Ethics course last fall and provided him with guidance for this paper. Cory is a member of the Chronicle's Independent Editorial Board, a Benjamin Duke Scholar, and an enthusiast of idle dalliance.
Elizabeth Beam is a Trinity freshman with the intention of double majoring in Neuroscience and English. She is on the editing staff for the Archive and Eruditio, and she is a member of the Academic Affairs committee in the Collegiate Student Interest Group in Neuroscience (CO-SIGN). She recently joined the Woldorff Lab in the Center for Cognitive Neuroscience. Elizabeth ultimately hopes to conduct her own research at the intersection of the mind and language.
Karan Chhabra is a third-year English major from New Jersey. After a DukeEngage project in the South Indian rainforest and a semester studying abroad in Florence, Italy, he's back in Durham searching desperately for a way to continue swinging from trees and/or saying "Ciao, bella!" without being ostracized. Karan wrote "Pickled Dreams" for a course titled Epic in the Modern, taught by Prof. Ian Baucom in Spring 2009. He's really grateful to Prof. Baucom for, aside from this paper and getting him to read Ulysses, showing him that the study of English can break out of the privileged, Western mold and even represent something global.

Lauryn Kelly is an English major with a Spanish minor in the class of 2011. She is involved in Kappa Alpha Theta.
Collin Kent is an undergraduate junior from Tulsa, Oklahoma who is majoring in biology and minoring in both political science and chemistry. On campus he volunteers in the Duke Children’s Hospital, participates in Chemistry Outreach events for the Durham community, works in the Orthopaedic Research Lab of Dr. Farshid Guilak, and is a member of Wayne Manor selective living group. Upon graduation, he plans to take a year to work for a nonprofit organization abroad before entering medical school.
Caroline Lampen is a senior with a political science major (concentration in international relations) and a Spanish minor. She is the Co-President of Millennium Villages Project and serves as a tour guide for the Admissions Office. She spent a summer studying abroad in Barcelona, Spain and a semester enrolled in the Duke in New York Arts and Media Program. Her interest in international human rights advocacy led her to write this paper on violence against women.
Robert Lehman is a junior from Frankfurt, Germany, pursuing a Political Science major, an English minor, and a Markets and Management Studies certificate. Robert has a passion for international development projects across the globe, having worked on education and social business projects in Bangladesh, Uganda, and Brazil. When he needs to clear his mind, Robert likes to lose himself in poetry, and more often than not, finds himself returning to his favorite writer, Thomas Hardy. Beyond poetry, Robert spends his time playing soccer, running marathons, traveling into obscurity, and learning languages.

Lucy McKinstry is a senior Political Science major from Lexington, Kentucky. She is currently writing a thesis analyzing the use of foreign law in the U.S. Supreme Court. She has an interest in the impact of globalization on domestic legal systems, and plans to study law after Duke. Lucy also loves traveling and spent a summer through DukeEngage and WISER conducting an economic needs assessment in Muhuru Bay, Kenya. She is a member of the Chronicle's Independent Editorial Board, the Baldwin Scholars Program, and Delta Delta Delta sorority.
Joan Soskin majors in International Comparative Studies and English, with a minor in Spanish. She will be graduating in December 2010 and is involved in Kappa Kappa Gamma, Order of Omega, the Duke Association for Business Oriented Women (BOW), and Habitat for Humanity, and tutors at the Emily Krzyzewski Center.
Jocelyn Streid is in the class of 2013. She will possibly major or minor in English and Neuroscience. She is involved in Campus Crusade and HAND (art at the hospital). She is also off-campus coordinator of the Duke Community Garden and the program director for the Neuroengineering Research Program.
Anand Varadarajan is a senior (Class of 2010) who's majoring in History and minoring in Political Science. Next year he'll be attending law school, one step closer to reaching his dream of becoming a high-profile trial litigator. Outside of class, he's been involved extensively with Duke's competitive Mock Trial Team as both an attorney and a witness. He's also part of Duke's Career Ambassador Team, where he helps individuals make informed and strategic career choices. In his spare time, he loves to watch football and basketball, rooting feverishly for his Lakers, 49ers, and of course, Blue Devils.
letter from the editors
The humanities are often criticized for being indulgent: too tightly entrenched within the ivory tower, too out of touch with the "real" world and its innumerable problems. Yet what such criticism wrongly denies is how crucial to our future prosperity carefully chosen words, precision and depth of thought, and a deep reverence for life's complications – all requisite for the best work of the humanities, all ever-present in the essays collected in this journal – truly are. The solution to world hunger will not be found in this humanities journal; but creative thinking, thorough research, and a gift for rhetoric – without which no solution to any problem, world hunger or otherwise, is possible – are present on each and every page.

We won't bore you any longer with our thoughts on the humanities or world hunger; we simply ask that you read these essays with the same respect and care as you would the work of any scholar. For what we all do, each and every time we write a paper or step into a classroom, is actively engage, as scholars, with the larger academic community. So often the true value of the university experience is lost in the quest for good grades, impressive resumes, and well-paying jobs; let these essays act as a reminder of, and testament to, why we are really here (or why we would be here, if student debt and familial expectations were no burden) – to learn the intrinsic value of our voices, our thoughts, and our words, and to learn how best to use them.

Happy reading!
Louisa Chen & Tassity Johnson Co-Editors-in-Chief, 2009-2010
Lucy McKinstry

Can Mediation Compete? Examining the role of third party intervention in the deterrence equation
Introduction

In this work, I will conduct a fine-grained analysis of the deterrence equation, comparing two particular variables as causes of deterrence, and apply them to the case of the Beagle Channel[1] crisis between Argentina and Chile. I will test the proposition that, in some cases, V(cap_c) is a more important variable in deterrence success than is P(fight_d). V(cap_c) quantifies the value to the challenger of his own capitulation, a value which can be manipulated by the defender offering concessions, the defender attempting to compromise, or a third party intervening in the crisis.[2] P(fight_d) is the perceived likelihood that the defender will actually fight if attacked.[3] All other variables in the equation remain constant.

The Deterrence Equation:

V(cap_c) ≥ P(fight_d) × V(war_c) + [1 − P(fight_d)] × V(cap_d)

To avoid war in crisis bargaining, this equation is the keystone: "the challenger's expected utility for accepting the status quo must be greater than its expected utility for attempting to overturn the status quo through the use of force."[4] Logically, all variables of the deterrence equation are at play in every crisis situation. However, not all hold equal weight in determining the outcome of a given crisis, nor do the variables have a static relative influence in conflicts throughout history. Much has been written on the importance of P(fight_d), but less attention has been paid to the causal role of V(cap_c) in deterrence success.

After analyzing these two theories of deterrence strategy, I will apply them to a brinkmanship episode between Argentina and Chile. Specifically, I examine, within the ongoing Beagle Channel dispute, the acute period of crisis which occurred between Argentina and Chile from October 16, 1978 to January 9, 1979.[5] In this case, Argentina initiated a militarized dispute in response to an unsatisfactory arbitration award handed down by the ICJ and the British government. Argentina reasserted its more extensive demands and ratcheted up its threats of force by activating its armed forces to high alert. The movement and buildup of military forces in preparation for armed conflict by Argentina triggered Chile to respond with similar arms activation, thus creating an interstate deterrence encounter.[6] Although they reached the brink, this dispute did not escalate into a war. Instead, the Pope intervened in the conflict just in time, offered to mediate, and successfully averted war. I will compare the variables V(cap_c) and P(fight_d) as they relate to the successful deterrence outcome in this situation.

The Beagle Channel crisis provides evidentiary support for the argument that, in certain cases, increasing V(cap_c) is a more effective source of deterrence success for the defender than increasing P(fight_d). By conducting this analysis, I aim to contribute to the extant literature on the relative efficacy of these particular deterrence strategies.
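Before proceeding, it may help to see the two strategies side by side in numeric form. The short Python sketch below is purely illustrative: the utility values are hypothetical placeholders, not estimates drawn from the Beagle Channel case. It shows that the challenger can be moved from preferring attack to preferring capitulation either by raising P(fight_d), the classic coercive route, or by raising V(cap_c) while P(fight_d) stays fixed, the route this paper attributes to mediation.

    # Minimal sketch of the deterrence equation with hypothetical values.
    # Deterrence succeeds when the challenger values its own capitulation
    # at least as highly as the expected utility of attacking:
    #   V(cap_c) >= P(fight_d) * V(war_c) + [1 - P(fight_d)] * V(cap_d)

    def attack_utility(p_fight_d, v_war_c, v_cap_d):
        """Challenger's expected utility of attacking."""
        return p_fight_d * v_war_c + (1 - p_fight_d) * v_cap_d

    def deterrence_holds(v_cap_c, p_fight_d, v_war_c, v_cap_d):
        """True when the challenger prefers the status quo to attack."""
        return v_cap_c >= attack_utility(p_fight_d, v_war_c, v_cap_d)

    # Hypothetical baseline: war is costly for the challenger (-50), the
    # defender's capitulation is attractive (+40), the challenger rates its
    # own capitulation at -20, and the defender fights with probability 0.4.
    v_war_c, v_cap_d, v_cap_c, p_fight_d = -50, 40, -20, 0.4

    print(deterrence_holds(v_cap_c, p_fight_d, v_war_c, v_cap_d))  # False: EU(attack) = 4 > -20

    # Route 1: the defender raises P(fight_d), e.g. via commitment devices.
    print(deterrence_holds(v_cap_c, 0.7, v_war_c, v_cap_d))        # True: EU(attack) = -23, below -20

    # Route 2: a third party raises V(cap_c), e.g. face-saving mediation,
    # leaving P(fight_d) untouched.
    print(deterrence_holds(5, p_fight_d, v_war_c, v_cap_d))        # True: 5 exceeds EU(attack) = 4

Nothing in the sketch depends on these particular numbers; the point is structural. The inequality has two independent levers, and the remainder of this paper asks which lever actually moved in 1978.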
Theories and Hypotheses

Increase P(fight_d)

The first variable of deterrence theory I will be applying to the Beagle Channel crisis is P(fight_d). Deterrence has been defined by scholars as a "policy that seeks to persuade an adversary, through the threat of military retaliation, that the costs of using military force will outweigh the benefits."[7] Clearly, military threat is at the heart of deterrence. Increasing P(fight_d) has long been a popular strategy in the deterrence literature, given that it is the primary variable of coercive leverage. P(fight_d) can be increased in several ways: the defender commits or engages its reputation, the defender constrains itself so that it will have to fight, or the defender creates a risk of the conflict spinning out of control.[8] There are various commitment strategies to accomplish this.

Hypothesis 1: If an increase in P(fight_d) is the dominant variable leading to deterrence success, Chile must have employed one or more strategies to increase P(fight_d). Fearing the heightened imminence of war, Argentina will decide not to attack and instead capitulate, incurring some substantive concessions and/or reputational costs. Finally, if an increase in P(fight_d) is the causal factor for the challenger's capitulation, the challenger will show behavioral or verbal evidence of this causal relationship.

If deterrence success in the Beagle Channel crisis was a result of Chile's increase in P(fight_d), there are certain characteristics of the situation that we should see. Above all, Chile's deterrence strategy must include some tactics to increase P(fight_d). We will see Chile either a) engage its reputation, b) employ a commitment device, or c) create a risk that the conflict will spin out of control and be very costly. Due to Chile's effective coercion, Argentinean capitulation must entail either concessions on its demands, reputational losses, or both. Second, there should be identifiable evidence of causality between these tactics and the deterrence outcome. Specifically, Argentina should exhibit some behavioral or verbal evidence that its decision not to attack was a result of Chile's actions to increase P(fight_d).

Increase V(cap_c)

The second variable of deterrence theory I will be applying in the Beagle Channel crisis is V(cap_c). Increasing this variable should make capitulation less costly for the challenger, therefore making it a more appealing option. Often V(cap_c) is manipulated through concessions or compromise from the defender; it can also be altered through one of several types of third party interventions. Given the staunch and legally correct Chilean position subsequent to the ICJ arbitration award, I will assume that, for this case, any increase in V(cap_c) will not be the result of Chile offering concessions or attempting to compromise on pertinent issues. Therefore any Chilean strategy to increase V(cap_c) must be an appeal to an international organization or other third party.

Hypothesis 2: If an increase in V(cap_c) is the dominant variable leading to deterrence success, Chile will have employed one or more strategies to increase V(cap_c). Alternatively, a third party intervention could be the source of the variable's change in value. In light of the attractive alternative to war, Argentina will decide not to attack and instead acquiesce. This resumes the status quo, incurring little or no concessions for Argentina. Furthermore, the reputational costs for Argentina will be minimal.
If this theory holds, we should find that either Chile requested third party intervention or a third party intervened uninvited. Consequently, Argentina must react in a favorable way, recalculating its deterrence equation and deciding not to attack. If Argentina backed down on account of papal intervention, the outcome will not damage Argentina's bargaining reputation. Bargaining reputation is "the willingness of a state's foreign policy leadership to risk armed conflict in pursuit of political goals and to refuse to concede to the demands of adversaries under coercive pressure."[9] Like most states, Argentina is highly concerned about its bargaining reputation, as can be deduced from its chief negotiator's explanation of the country's objective: "a boundary that is honorable for Argentina, respects its legitimate rights to territorial integrity and protects its permanent interests in the southern region. This is the political objective and not peace… To give priority to the preservation of the peace over the national interests, would be a sign of weakness."[10] Although it will likely be difficult to find evidence that Argentina's reputation was not damaged, the absence of the evidence we would expect if its reputation had been damaged may lend passive support.

Hypothesis 3: If an increase in V(cap_c) is the causal factor for the challenger's capitulation, the challenger will show behavioral or verbal evidence of this causal relationship.

Again, we should look to the historical record to verify that mediation was the turning point for Argentina to avert its plan of attack. Comparing Argentina's behavior before and after papal intervention, we should see a corresponding change in behavior, as well as a change in rhetoric which gives credit to the Vatican. This could be a speech presenting the cancelled attacks as deference to the Pope's moral authority or other public communication explaining Argentina's behavior as a result of the intervention.

Reviewing the Case – The Beagle Channel Crisis

This crisis emerged from a century-old dispute over the demarcation of territorial and maritime boundaries in the Beagle Channel, including three small islands: Picton, Lennox, and Nueva.[11] For generations, the ambiguity of the original 1881 boundary treaty left room for an ongoing quibble between the two countries.[12] Changes in international law and economic development increased the salience of the economic and strategic value of the territory and brought the simmering issue to the fore in the early 1970s.[13] More than the islands themselves, Argentina was concerned with losing rights to whatever oil might lie under the area, as well as the strategic navigational channel along the 200-mile sea zone. In an attempt to settle the matter once and for all, it was submitted to the British government in 1971 for arbitration by a panel of judges at the International Court of Justice.[14] The arbitration award, handed down in 1977, was squarely rejected by the new military junta in power in Argentina on January 25, 1978, a clear violation of international law.[15] The Argentine foreign minister declared, "No commitment obliges a country to comply with that which affects its vital interests or that which damages rights of sovereignty."[16] After Argentina's flat rejection of the arbitral award, bilateral negotiations resumed and faltered. Towards the end of the six-month negotiating period, talks finally collapsed and "both countries immediately began a total mobilization of armed forces. Troops converged on the border, and the two navies began moving south."[17] The countries prepared for war.

In the Beagle Channel crisis, Argentina is the challenger. The Argentine junta sought a more political settlement and was not afraid to use force. One chief negotiator explained the objective: "a boundary that is honorable for Argentina, respects its legitimate rights to territorial integrity and protects its permanent interests in the southern region. This is the political objective and not peace… To give priority to the preservation of the peace over the national interests, would be a sign of weakness."[18] Given their legally justified position, Chilean leaders had high resolve for achieving their goals as articulated by the arbitral award. Argentina declared that further Chilean recourse to the ICJ would be viewed as a casus belli; therefore, mediation quickly emerged as Chile's last resort.

Argentina set the invasion date for December 22, but the attacks were thwarted by unfavorable weather. The following day, "with warships sailing just hours apart in the Straits of Magellan," Pope John Paul II declared that his personal representative, Cardinal Antonio Samoré, a career Vatican diplomat, had been dispatched to Chile and Argentina.[19] "Military activity came to a halt and both sides prepared for the papal envoy's visit."[20] Ultimately, Cardinal Samoré's shuttle diplomacy was effective in preserving peace, and the two states signed an agreement, the Act of Montevideo, on January 8, 1979, which requested the Holy See as mediator to guide them in negotiations and, additionally, committed both parties to avoid the use of force in their mutual relations.[21] Rather than either party capitulating, they both, in essence, acquiesced to a return to the status quo: negotiation. Although the state parties did not reach a resolution to the issue until 1985, for the purposes of this paper, I will touch only briefly on the long prior history of the dispute and the long subsequent proceedings of the slow-going mediation. In the end, Chile obtained the three islands, and Argentina kept most of the maritime rights in the region.[22]

Connecting the Theories and the Case

The Beagle Channel crisis was chosen because it effectively isolates these two variables as the main actors in the deterrence equation for this particular crisis period. I will go through the other variables and briefly show that they remained constant. V(war_c) represents the value to the challenger of going to war.[23] Chile's deterrence strategy did not affect V(war_c), because we see no evidence during this immediate crisis period that Chile took either of the following actions: increasing its relative military capability or threatening massive damage to the challenger. Chile did not increase its military spending during this time, nor did it reach out to form alliances. It did not have weapons of mass destruction, threaten the use of scorched-earth tactics, or pose any other threat of massive damage to the challenger. V(cap_d) is the challenger's value of the defender capitulating.[24] Chile's deterrence strategy did not affect V(cap_d); in fact, the Argentineans remained firm in their goals well into the 1980s. Argentina's rejection of the ICJ arbitral award, a binding mandate under international law, showed that there was precious little Chile could do to affect Argentina's goals for the crisis without sacrificing its arbitral award.

Evaluating Hypothesis 1:

Leading up to the crisis, Chile had been preparing for war; according to Foreign Minister Hernan Cubillos, "During that whole year, [they] reinforced all [their] military divisions at specific points along the front…"[25] Yet despite this increase in capacity and demonstration of resolve, it does not match temporally with the change in Argentinean strategy responsible for avoiding war. The president's December 14 attack command, with the December 21 invasion date, effectively nullifies any hypothesis that Chile's manipulation of P(fight_d) tipped the scale of the deterrence equation. This command proves that none of Chile's deterrence efforts up to that point had been sufficient to deter Argentina, and there is simply no historical evidence of any major changes in Chile's bargaining strategy on December 21 or 22 to support this theory. Therefore, we can conclude that P(fight_d) was not the causal determinant in defusing this crisis. This conclusion is further supported by the lack of evidence that Argentina incurred substantive concessions and/or reputational costs because of its decision to abort the invasion.
Nor are there behavioral or verbal cues in the record that Argentinean decision-makers considered Chile's likelihood and capacity of fighting back to be a determining factor in their decisions. Moreover, even without the aforementioned evidence, it is relatively unlikely that Chile would have been successful in deterring Argentina by military might alone. Although Chile did activate its military, these measures did not dramatically increase P(fight_d) to a point that would deter Argentina. In fact, the asymmetrical military capacities of the two states suggest that Argentina would not be deterred by many, if not most, of the coercive measures available to Chile. Chile has a smaller military, largely due to the fact that its population is half the size of Argentina's.[26]
Evaluating Hypothesis 2:

There exists considerable evidence in support of Hypothesis 2. On a general deterrence level, Chile had been soliciting support from the Vatican for some time.[27] Foreign Minister Hernan Cubillos explained that they were looking to certain countries so that "it would have made it very difficult for the Argentineans to go to war after the arbitration award and after the suggestion from a country with those characteristics [considerable influence in terms of either moral, political or economic power, a country that values legal tradition and legality]."[28] Although this wasn't the direct catalyst for the particular Vatican intervention in December 1978, it likely helped lay the groundwork for that intervention and is thus worth noting. More temporally salient was the Cardinal's December 21, 1978 arrival in Buenos Aires and his coordination of a three-way communication between Argentine President Videla, Chilean leader Pinochet, and the Pope. This intervention occurred almost at the exact time of Argentina's planned invasion, making it the most significant and most time-relevant event that could have impacted the deterrence equation. Although Chile did employ at least one strategy to increase V(cap_c), the variable's more significant change derived from a third-party intervention. As the theory predicts, Argentina decided not to attack, instead waiting to hear out the Cardinal's plan, and eventually signed the Act of Montevideo on January 8, 1979. This committed Argentina and Chile to peacefully seek negotiation under the oversight of papal mediation. Up to this point, Argentina had made no concessions. Moreover, due to the moral authority and prestige of the Vatican, Argentina was able to deflect any criticisms of its resolve, and thus it suffered no reputational costs in making this decision.

Evaluating Hypothesis 3:

First we will compare the behavior of Argentina before and after the papal intervention, which was the key event that altered V(cap_c) during this crisis. Argentina was aggressive from the start of the crisis, and on October 12, 1978 called one-half million Argentine military reservists up to active duty.[29] Throughout the crisis, Argentina continued to stockpile arms, including the purchase of twenty-six Dagger fighter jets from Israel and seventeen new tanks from Austria.[30] Moreover, the Argentine government employed credible commitment strategies to force itself into war. For example, it "conducted a media campaign designed to mobilize public support against the Chileans […and] galvaniz[e] public opinion in support of Argentina's claims to the islands in the Beagle Channel."[31] In November, the Economist reported that "Argentina was a country of black-out drills, troop movements and fiery speeches."[32] Immediately before the intervention, as has been stated, Argentina had literally given the attack command. The invasion date was set for December 21, 1978, but had to be postponed due to inclement weather.[33] These activities stand in stark contrast with the sharp reversal of the Argentine position between December 22 and January 8, after the Vatican intervention.[34] Within those weeks, Argentina agreed to a force withdrawal and committed to try negotiations again under the Vatican. Clearly, the only element that changed Argentina's attack command was the papal request for mediation. It is likely that the unique nature of mediation under the Vatican made up the difference between this and past bilateral negotiation attempts.
Not only was the Vatican neutral, but it commanded superior moral authority and credibility, and it was offering mediation services from its highest level, the Pope himself. Appeals to Argentina's strong reputational interests played a significant role here. As would be expected, "Neither government could retreat from its bellicose position without losing face."[35] Yet how could one lose face by accepting a gracious offer from the Pope himself?

Evaluation Summary

These evaluations of the three aforementioned hypotheses in regards to the Beagle Channel case, when considered all together, point to my central conclusion: the increase of V(cap_c) is a much more compelling explanatory variable for deterrence success in the Beagle Channel crisis than is the increase of P(fight_d). Indeed, it is widely accepted that the Vatican's timely involvement was the single "most important factor in easing tensions."[36] The case of the Beagle Channel crisis illustrates an important, if unusual, distribution of relative importance among variables in the deterrence equation.

One element of this crisis that I have not evaluated, but which may be the most likely critique of this position, is the dynamic of civil-military relations in Argentina. Argentine President Videla had little power relative to the military junta. It was the junta which pressured Videla to give the initial invasion command, but it was Videla himself who was the primary contact for the Vatican representatives. Thus the ideological differences and power struggle between the junta and President Videla may have been a strong moderating variable in this instance. Indeed, the years-long mediation by the Vatican came to a relatively quick conclusion as soon as the junta fell out of power in the early 1980s.

Conclusion

On its face, the successful abatement of conflict in the Beagle Channel crisis appears to have been divine intervention rather than a result of particularly brilliant Chilean deterrence policies. The Chilean government had no prior knowledge that the Vatican would make such a last-ditch attempt to avert war between the two countries. Nevertheless, the entry of the Vatican into the crisis altered the deterrence equation by increasing V(cap_c) just enough to tip the scale in favor of cooperation. Here I have analyzed an international crisis through the rational actor model, with a detailed examination of deterrence theory and particular variables thereof. In short, this work presents an example of a successful mediation incident in which the increase in P(fight_d) had no effect. The comparison of Argentina's behavior prior and subsequent to mediator intervention demonstrates that the papal involvement was the critical piece. The example of the Beagle Channel crisis illustrates the effectiveness of increasing V(cap_c) in achieving deterrence success.

It is important to acknowledge the numerous other factors that ripened this crisis for effective third-party mediation, such as the unique moral authority of the Vatican. Due to these circumstantial variables, it is inadvisable to generalize from these findings in broad strokes. Indeed, Thomas Princen warns that "an intermediary intervention should not be viewed as a major determinant of a conflict outcome, that marginal change… is the most one can expect."[37] Yet in some instances, marginal change is necessary and sufficient to ameliorate a volatile bargaining relationship. Indeed, the Beagle Channel example in and of itself is a reminder of the influence of mediation and of the oft-overlooked variable, V(cap_c).

notes
1. Named for the ship of Charles Darwin, which traversed the channel in 1830; Lindsley (1987).
2. Gelpi, lecture 8, February 4.
3. Ibid.
4. Huth, Gelpi, & Bennett, p. 612.
5. Wilkenfeld et al., p. 192.
6. Huth et al.
7. Huth, in Huth, Gelpi, & Bennett, p. 610.
8. Gelpi, lecture 8, February 4.
9. Huth, 1988.
10. Princen 1992a, p. 135.
11. Princen 1992b, p. 155.
12. Princen 1992a, p. 134.
13. The specific details of the parties' demands are not necessary for the purposes of this paper, but are nicely summarized in Princen 1992a, p. 134.
14. Princen 1992a, p. 134.
15. Lindsley, p. 439.
16. Princen 1992a, p. 134.
17. Princen 1992a, p. 138.
18. Princen 1992a, p. 135.
19. Princen 1992b, p. 154.
20. Ibid.
21. Act of Montevideo.
22. Princen 1992b, p. 156.
23. Gelpi, lecture 8, February 4.
24. Gelpi, lecture 8, February 4.
25. Princen 1992b, p. 137.
26. Source not recorded.
27. Princen 1992a, p. 138.
28. Princen 1992a, p. 134.
29. Princen 1992b, p. 137.
30. Princen 1992a, p. 137.
31. Lindsley, p. 440.
32. Economist, Nov. 11, 1978.
33. Lindsley, p. 441.
34. Interesting side note: the Pope had never before intervened, without invitation, in the affairs of two states (Princen 1992b, p. 172).
35. Lindsley, p. 440.
36. Wilkenfeld et al., p. 192.
37. Princen 1992a, p. 185.
works cited
1. "Act of Montevideo." United Nations. 26 Apr. 2009 <http://www.un.org/Depts/los/LEGISLATIONANDTREATIES/PDFFILES/TREATIES/CHL-ARG1979AM.PDF>.
2. "Argentina and Chile; Beagle on leash, for the moment." The Economist Nov. 1978. LexisNexis. Duke U Lib., Durham. 26 Apr. 2009 <http://www.lexisnexis.com>.
3. "Argentina and Chile; The other Horn." The Economist Nov. 1978. LexisNexis. Duke U Lib., Durham. 26 Apr. 2009 <http://www.lexisnexis.com>.
4. Brecher, Michael, Jonathan Wilkenfeld, and Sheila Moser. Crises in the Twentieth Century. Oxford; New York: Pergamon Press, 1988.
5. Corbacho, Alejandro Luis. "Predicting the Probability of War During Brinkmanship Crises: The Beagle and the Malvinas Conflicts." Universidad del CEMA Documento de Trabajo No. 244 (2003). Available at SSRN: <http://ssrn.com/abstract=1016843>.
6. F. V. "The Beagle Channel Affair." The American Journal of International Law 71.4 (1977): 733-740. JSTOR. Duke U Lib., Durham. 25 Apr. 2009 <http://www.jstor.org>.
7. Garrett, James L. "The Beagle Channel Dispute: Confrontation and Negotiation in the Southern Cone." Journal of Interamerican Studies and World Affairs 27.3 (1985): 81-109. JSTOR. Duke U Lib., Durham. 25 Apr. 2009 <http://www.jstor.org>.
8. Huth, Paul K. "Extended Deterrence and the Outbreak of War." The American Political Science Review 82.2 (1988): 423-443. JSTOR. Duke U Lib., Durham. 2 Feb. 2009 <http://www.jstor.org>.
9. Huth, Paul, Christopher Gelpi, and D. Scott Bennett. "The Escalation of Great Power Militarized Disputes: Testing Rational Deterrence Theory and Structural Realism." The American Political Science Review 87.3 (1993): 609-623. JSTOR. Duke U Lib., Durham. 28 Feb. 2009 <http://www.jstor.org>.
10. Laudy, Mark. "The Vatican Mediation of the Beagle Channel Dispute: Intervention and Forum Building." Words over War: Mediation and Arbitration to Prevent Deadly Conflict. Ed. M. C. Greenberg, J. H. Barton, and M. E. McGuinness. Lanham: Rowman & Littlefield, 2000. 293-320. Available at: <http://www.wilsoncenter.org/subsites/ccpdc/pubs/words/11.pdf>.
11. Lindsley, Lisa. "The Beagle Channel Settlement: Vatican Mediation Resolves a Century-Old Dispute." Journal of Church and State 29 (1987): 445-456. HeinOnline. Duke U Lib., Durham. 25 Apr. 2009 <http://www.heinonline.org/HOL/Welcome>.
12. Princen, Thomas. Intermediaries in International Conflict. Princeton: Princeton University Press, 1992.
13. Princen, Thomas. "Mediation by a Transnational Organization: The Case of the Vatican." Mediation in International Relations. Eds. Jacob Bercovitch and Jeffrey Z. Rubin. New York: St. Martin's Press, 1992. 149-179.
14. Young, Oran R. The Intermediaries: Third Parties in International Crises. Princeton: Princeton University Press, 1967.
15. Wilkenfeld, Jonathan, Kathleen Young, David Quinn, and Victor Asal. Mediating International Crises. New York: Routledge, 2005.
Jocelyn Streid
Humpty Dumpty Had a Great Fall(acy): Language, Sexism, and Reform

"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean – neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master – that's all."
– Through the Looking-Glass, Lewis Carroll, Chapter VI
When one thinks of sexism, images that spring to mind might include skanky magazine advertisements, sweet-talking bosses, slimy ex-boyfriends, or a hard and fast glass ceiling. All of these perpetrators of sexism, human or not, give us identifiable enemies to tear down in the quest for gender equality. Yet sometimes the most dangerous form of injustice is also the subtlest; when thinking of sexism, we rarely think of the ideology embedded in our very language. In some sense, unnoticeable linguistic sexism strikes us as counterintuitive; after all, as Humpty Dumpty argues, our words mean precisely what we want them to mean. How, then, can our words be sexist if we ourselves are not?

The answer lies in a new understanding of language, one unapproved by the doomed egg of Lewis Carroll lore. Former theories posited that language serves as a postal system of sorts, sending off packages of information from speaker to listener. Under such a framework, it makes perfect sense to assume that one's sentences contain neatly wrapped messages fully intended by the communicator. With the slow discrediting of Chomskyan linguistics and the steady rise of Developmental Systems Theory, however, we have begun to see language (or, rather, languaging) as behavior that cognitively orients each individual involved. As such, language cannot simply package up a clear and objective representation of reality; it instead conveys a certain version of reality entrenched in the conditions that created it. As Dr. B. Kodish explains,

Language is intertwined with behavior, consciousness, etc. It has a neurological base; that is, language doesn't exist entirely separately from nervous systems/persons using the words. By means of spiral feedback mechanisms, we create our language; our language affects us; we create our language; etc., ongoingly. This individual process is embedded in, influences and is influenced by, a particular culture and community of others. (2004)
Thus, one cannot view language outside of the contexts that produced it, be they cultural, historical, or ideological. Borrowing the terminology of Professor S. McConnell-Ginet of Cornell University, words carry with them "conceptual baggage." (McConnell-Ginet 2008, 497) When we use generic pronouns like he/him/his or terms such as lady or girl, we have total control over neither their meaning nor their cognitive implications. In other words, Humpty Dumpty spoke in error when he so confidently asserted, "When I use a word, it means just what I choose it to mean…" (Carroll 1871, 57; cited by Martyna 2003, 131)

If our language advances a particular social reality, we must then be wary of its ability to perpetuate the injustices of that reality. English evolved in a sexist society and therefore carries traces of the patriarchy that created it. Yet language does not merely mirror society – it also shapes it. The continued usage of sexist terms today thus exacerbates gender inequality. In making this case, however, feminists have sometimes failed to bring up substantial evidence for the psychological effects of sexist language, instead insisting that the language must be changed simply because it is unfair and discriminatory. Their strategy means that nationally published writers like William Safire can assert in all seriousness that male-gendered generics, among other forms of linguistic sexism, are just a part of proper grammar. This paper hopes to address the assertions of Safire and those like him by advancing a more persuasive, psychologically based argument. Those who deny language's effect on sexism ignore the reciprocal relationship of language and culture; not only do our words echo our society's sexist ideology, but they also, to some extent, engender it.

Masculine Generics and Terms of Identity: A Subtler Sexism

Few will deny that women's oppression has had at least some effect on the English language. In turn, the use of such language has aided in the exacerbation and perpetuation of oppression. An examination of derogatory terms reveals striking imbalances. One can think of several insults likening women to animals (shrew, cow, etc.), for instance, but few for men. In fact, one estimate counts over 220 terms in the English language for sexually immoral females, but only 20 for males. (Vahed 1994, 66) The corruption of formerly honorific female titles reveals a similar disparity; Old Master refers to a great artist, whereas Old Mistress refers to someone else entirely. (Gabhart 1972; cited by Liben 2002, 811) The same can be said of courtesan, queen, princess, and dame; all formerly honorific, these terms now burst with negative connotations. (Blagg 11) Our verbs also reflect biases; young adults, for instance, have recently begun to use rape to denote victory. After performing well on an exam, a Duke student might exclaim, "I raped that test!" S. Ehrlich of York University points out that other sexual verbs pervert our language; words like lay or screw turn heterosexual sex into something that a man does to a woman, instead of a "mutually experienced affair." (Ehrlich 1994, 60) Clearly, those who want to find an outlet for their sexist sentiments have a plethora of words to choose from.

Yet although we know that derogatory language can express sexism, whether language can unconsciously create sexist thought remains the subject of hot debate. In fact, the most powerful argument against reform comes from those willing to accept that sexism does indeed manifest itself in language. (Briere 1983, 626) They argue that certain linguistic patterns exist merely as symptoms of society's sexism, not as perpetuators of it. Grammarians like William Safire, linguists like Steven Pinker, poets like L. E. Sissman, and many laypeople themselves scorn the notion that language can so deeply shape thought. Yet psychological research, both emergent and decades old, points to tangible effects of language on the development of the individual's views on women.

This paper examines two types of subtly sexist language: masculine generics and terms of identity like girl or lady. Unlike pussy or cow, these two types of sexism often escape the notice of the many who view them as innocuous; the same person who condemns others for calling women "chicks" may use words like "mankind" or "fireman" in his or her daily communication. Indeed, an overly narrow definition of what constitutes sexist language has prevented many from recognizing, much less reforming, its usage.
In a 1978 book on nonsexist language, author B. S. Persing describes sexist language as "any verbal or nonverbal act that precasts either female or male into roles on the basis of sex alone." (cited in Cronin 1995, 820) Yet such a definition leaves out sexist language's finer forms; neither masculine generics nor terms of identity necessarily cast women into certain roles in the same way that mistress does. Nevertheless, within both forms lies a definite masculine bias with severe psychological consequences. Perhaps our definition of "sexist language" must then be amended to include not only language that explicitly degrades women, but also language that implicitly damages one's views of them.

Not-So-Generic Generics

Anyone in an English-speaking society finds it virtually impossible to avoid stumbling upon male-gendered generics. Defined as masculine pronouns and terms applied universally, they are intended by users to refer to both men and women. Effectively exposed to such terms from birth, we see masculine generics as completely normal.
After all, no one bats an eye at sentences like "When a student is studying, he will go to the library" and "A dog is a man's best friend," or at titles like fireman or layman. Yet despite their widespread acceptance, male-gendered pronouns and suffixes are not as innocuous as they appear. Professor C. Coffey of Cabrillo College best demonstrates the exclusivity of masculine generics with the following extended example:

For hundreds of years "man" has been creating a social reality that ignores one half of the human race. "He" talks about the accomplishments of "mankind," ultimately attributing them to "man's" capacity to create and then use verbal symbols. These symbols allow "him" to think and to accumulate the fruits of this thought into a body of knowledge essential to the survival of "mankind." (1984, 511)
The individual Coffey brings to mind is inevitably male, and his experiences thus represent the experiences of man alone; women clearly find no place in this passage. Nevertheless, contemporary society fails to question this clearly biased word choice, presumably under the assumption that male-gendered generics exist as rules of grammar, historically fixed and forever permanent.

Before the late 17th century, however, most did not consider masculine generics the standard. (Stanley 1978) Indeed, the singular they served as the sex-indefinite pronoun in both writing and speaking more often than not. Found in the writings of literary giants like Chaucer, the singular they was not only accepted, but also expected. (Bodine 132) Yet prescriptive grammarians like Murray in his 1795 English Grammar began to denounce the singular they at the birth of the 19th century, instead replacing it with he. (Murray 1795; cited in Stanley 1978, 805) Still, in the wake of these changes, many clung to linguistic tradition; writers like Jane Austen, Eliot, Byron, and Dickens often used they with a singular subject. (Stanley 1978, 807) Nevertheless, prescriptive grammarians soon had their way; an 1850 Act of Parliament officially replaced he or she and they with he in official documents, and over a century and a half later, many English teachers, writers, and editors alike condemn anything else. (Bodine 1975, 136)

What caused grammarians to abandon they for he? Professor A. Bodine of Cambridge argues that subject agreement was not the sole motivation. Although they is incorrect in terms of number agreement, its replacement is incorrect in terms of gender. If grammarians truly wanted to amend the language, Bodine argues, they would have declared he or she the standard, not a pronoun as equally incorrect as they. Interestingly, grammarians deemed he or she too wordy, while failing to attack other common examples of wordy precision like one or more or person or persons. (Bodine 1975, 133) Thus, Bodine asserts, such a linguistic shift must have been ideologically motivated. Indeed, grammarians often justified the creation of a universal "he" by stating that man is first in the natural order; thus, he can subsume the woman. (Chew 2007, 648) Bodine's theory ought not prove shocking; after all, in the late 18th century, grammar books were written for the only students widely thought worth educating: males. (Jacobson)

Still, many contemporary grammarians continue to support the grammatical dominance of male-gendered generics. Among them, the renowned William Safire remains most prominent. In his popular New York Times column, Safire has asserted that such pronouns exist simply as grammatical conventions; changing them can "pull the punch out of a good sentence." (Safire 1999) Many in the field echo this point on aesthetics, arguing that more neutral options like he or she mangle the English language. Yale professor David Gelernter, for instance, explains that masculine generics keep writing clear and concise. Going so far as to call feminist reformers "language rapists," Gelernter asks, "Why should I worry about feminist ideology while I write? Why should I worry about anyone's ideology?" (Gelernter 2008) Despite its venom, Gelernter's point seems legitimate; even if masculine generics were born of sexism, a writer's words ought to remain his own – if he intends "he" to encompass both men and women, then it should. It may thus strike one as petty and even patronizing to insist on reform.
Yet the claims of grammarians and writers hold only if male-gendered generics are truly generic. An abundance of evidence, however, has shown that pronouns like "he" carry definite masculine biases, toppling any beliefs about the grammatical innocence of male universals. Why do these biases exist? Professor W. Martyna estimates that we encounter masculine pronouns referring to males up to ten times more often than masculine pronouns used universally. (Martyna 1978) In effect, our language has trained us to recognize words like "he" or "man" as specifically and exclusively masculine; the generic usage of such terms thus contradicts our ingrained cognitive tendencies. As such, male-gendered generics may sometimes sound very odd to us. Sentences like "Man, being a mammal, breast-feeds his young," "Half of all men are women," or "My brother married a weatherman" expose the inadequacies of masculine universals. (Martyna 1980; Gibbon 1999, 42; Blagg) These examples startle us because two contradictory meanings collide within them; the so-called generic triggers a masculine image while the context triggers a feminine one. If man and he were truly universal, we would see these expressions as ordinary rather than odd.

Second-wave feminism witnessed the rise of research examining the cognitive consequences of male-gendered generics. The first groundbreaking study asked college students to bring in images to be used in a hypothetical sociology textbook. When researchers gave chapters within the book titles like "The Economic Human," about fifty percent of the images each individual chose contained females. When titles contained less neutral terms, like "Industrial Man" or "Economic Man," nearly sixty-five percent of students collected pictures of males only. (Schneider 1973, cited in Falk 1996) This phenomenon, known as androcentric, or male-biased, imaging, demonstrates that supposedly universal pronouns actually evoke primarily male images, excluding females from thought.

Countless other studies have confirmed the male bias in masculine generics. (Gastil 1990; MacKay 1983; Todd-Mancillas 1981) Hyde (1984), for instance, found that over eighty percent of both male and female children and college students responded to a prompt with masculine generics with stories about males. If he, his, and him were truly generic, that number would have been far closer to fifty percent. Thus, those who argue that masculine generics are generic simply because the user intends them to be so forget that communicators lack complete control over their own words; language triggers cognitive re-orienting with or without the languager's consent.

These studies examined the effects of the experimenter's usage of male-gendered generics; a later study by Professor M. Hamilton of Centre College found that the subject's own willful usage of such pronouns also causes androcentric thinking. Requests for "traditional, academic, formal" responses induced some subjects to write with male-gendered generics; in contrast to those who received prompts for more informal responses and therefore tended to use their and his or her instead of he, subjects who had been implicitly prompted to use masculine generics showed male bias in their imagery. (Hamilton 1988)
Thus, as Hamilton points out, "the masculine generic pronoun per se increases male bias, separately from either the subject's predisposition toward biased cognitions." (Hamilton 1988, 787) As such, the masculine generic carries cognitive consequences for not only the listener, but also the speaker. Sexist language thus performs a comprehensive orienting task, sculpting the conceptual framework of every individual involved in its use. In the face of such evidence, those who so confidently argue that the generic "he" is unequivocally sexless appear ridiculous.

Yet some may argue that even if masculine generics do harbor a male bias, it is of little consequence. Perhaps their use may cause us to conjure up more images of males, but imagery itself seems harmless; such a cognitive bias hardly counts as sexism. Imagine, however, if that cognitive bias were present in racism. Douglas Hofstadter explores the concept of linguistic racism in his well-known essay "A Person Paper on Purity in Language." Writing under the pen name "William Satire," Hofstadter pretends to defend a world in which terms like chairwhite, businesswhite, and mailwhite, along with phrases like "All whites are created equal" and "That's one small step for whites, a giant step for whitekind," are the norm. Well educated on the historical barbs of racism, we cringe at Hofstadter's language. To us, his argument that "it is self-evident when 'white' is being used in an inclusive sense, in which case it subsumes members of the darker race" seems ludicrous. (Hofstadter 1983, cited in Cameron 1998, 142) If we find racist language deeply distressing, why does sexist language not equally disturb us? Women, like African Americans, have historically suffered from the institutional denial of basic human and civil rights, endured countless acts of physical violence on account of their identity, and faced – and indeed, often continue to face – discrimination in their schools and workplaces. In fact, some argue that both racism and sexism stem from highly similar cognitive processes. (Morrison 1987, 40) Yet for some reason, we make distinctions about gender that we would not dare make about race. Just as Hofstadter insults African Americans by subsuming them under their Caucasian counterpart, so too do we demean women by allowing masculine generics to encompass their existence.

By offering the masculine pronoun the status of "universal," we essentially mark the male as a typical person and the female as an atypical person. This exclusion has unique cognitive effects upon females. Because male-gendered universals force women to wonder whether what they are hearing or reading applies to them, processing masculine generics may require an extra cognitive step for females. Several studies suggest that as a result, women suffer slightly lower comprehension and recall rates when reading passages written with biased language than when reading more neutral writing. (Todd-Mancillas and Meyers 1980, cited in Falk 1996, 3; Crawford and English 1984, cited in Weatherall 2002) No inherent weaknesses in women cause these deficiencies; had English-speaking society produced female rather than male generics, one would assume that males would face the same cognitive challenges. Because masculine universals prevent speakers and writers from communicating as directly with women as possible, females also find advertisements written with biased language less persuasive than those without it. (Falk 1996) Thus, not only do masculine generics force a slight cognitive handicap upon women, but they also insulate them from the influence of certain arguments. Essentially, biased pronouns hinder a woman's ability to "project herself" into language, changing and even harming the way she interacts with it.

Masculine pronouns also taint the way both men and women understand all other pronouns. Khosroshahi (1989) found that people who consistently use male-gendered generics in their writing experience androcentric imaging even when prompted with gender-neutral pronouns. For example, people accustomed to using biased language interpret the otherwise unbiased they and he or she as masculine. We cannot blame an individual's inherent sexism for such a phenomenon, since the same neutral pronouns prompted young girls to envision females instead. (Adamsky, cited by Martyna 1978) We can then infer that the same girls who consider the typical person to be female grow up into women who see the typical person as male. Thus, as individuals age, the increasing number of masculine generics to which their environment exposes them builds up a cognitive bias towards males.
As Professor R. Merritt of Purdue explains, "Children's early and consistent exposure to, and use of, generic masculine terms…may result in equating people with the male image… Once this cognitive bias is formed, it is quite resistant to change." (Merritt 1995) Thus, masculine universals train individuals to become androcentric in their thinking; by the time they reach adulthood, their training has been so comprehensive that they see even neutral language through a biased lens.

As a final point, the finding that women conjured more images of males than females when given a sex-neutral prompt is reminiscent of the famous "white doll over black doll" preferences of African American children in the 1940s. (K. Clark and M. Clark, cited in Khosroshahi 1989, 519) However, androcentric thinking persists among women even today, while later doll studies have found that African American children now often prefer darker dolls, in large part due to the successes of the advocates of racial equality. (Hraba and Grant, cited in Khosroshahi 1989, 519) Thus, in at least some respects, the push for women's equality has not made nearly as much progress as the much-lauded civil rights movement.
20
humpt y dumpt y had a great fall(acy) progress as the much-lauded civil rights movement. Cognitive Consequences Breed BehavioralBiases Although arguments pointing out the disparities in our sensitivities to racist and sexist prejudices, cognitive handicaps placed upon women, and the potency of androcentric training illustrate the harms of cognitive biases, some may still argue that the cognitive effects of sexist language alone cannot justify altering the English language. Biases that reside primarily in cognition seem relatively inconsequential when compared to seemingly more crucial issues like pay inequality that “real” feminists concern themselves with. Yet if the cognitive effects of masculine generics were truly inconsequential, then the use of feminine generics would incite little controversy. After all, if we recognize he as universal, then with little effort we should be able to see she as generic as well. On the contrary, when more and more men began to pursue teaching professions in the latter half of the 20th century, they protested the tendency of languagers to refer to teachers with the “she” pronoun. In fact, they argued that the generic feminine pronoun led to lower status and salaries. (Martyna 1980, 484) Thus, pronoun usage is not as petty as it may sound; if male teachers accused language of causing the trivialization of their efforts, imagine the possible effects of pervasive masculine generic usage on the status of women in society. Here, then, lies the most convincing argument against male-gendered generics: mental biases like androcentric imaging produce prejudiced practices in the real world. In other words, cognitive exclusion of women leads to behavioral exclusion of women. A 1992 study, for example, presented stunning evidence for the tangible dangers of sexist language. Researchers gave both male and female subjects the facts of a 1974 murder case in which the female defendant pleaded self-defense. Some subjects then read the original jury instructions given in the case; as was common, the document used malegendered generics, asking jurors to determine whether “he [the defendant] had reasonable grounds to fear from his assailant.” (Hamilton 1992, cited in Roman 1994, 342) Other subjects read altered versions of the instructions with either he or she or she replacing he and other generics. After asking all subjects to make a decision on the case as if they were jurors, researchers found that those who had read instructions with masculine generics were significantly more likely to deny the female defendant her self-defense claim. Psychologists speculate that the male pronouns hindered the subjects’ ability to “put themselves in [the defendant’s] shoes, assessing risk of death or bodily harm from her personal point of view.” (Hamilton 1992, cited in Roman 1994, 341) Thus, because of the cognitive barriers erected by masculine generics, jurors struggled to see the case from the woman’s perspective and consequently made decisions far less sympathetic to her situation. For those demanding evidence for the “real” effects of sexist language, news that a woman’s judicial fate could depend upon the pronouns of legal documents should prove sobering. Androcentric language also levies heavy damage in a realm historically plagued by sexism - the professional world. Gender-marked titles, for instance, alter our perception of females. In 1998, McConnell and Fazio asked subjects to describe the typical individual in various occupations. 
When the occupational title included a male suffix – chairman rather than chairperson, for instance – the vast majority of subjects described the “average person” within that role as male. (McConnell and Fazio 1998, cited in Chew 2007, 650) One can easily imagine the effects this bias must have on women aspiring to become chairmen or councilmen – if those around her automatically view a woman in a certain role as abnormal, how much harder must she work than a man to gain the trust and acceptance of her peers? Perhaps even more frustrating is the ability of gendered language to influence what career path a woman aspires to embark upon in the first place. A 1973 study by Bem and Bem found that “help wanted” ads containing masculine pronouns attracted far fewer women than more neutrally worded ads. (Bem and Bem 1973, cited in Mucchi-Faina 2005, 205) Later research confirmed this effect; both males and females rated psychology as a less attractive career for women after reading a description of a psychologist’s ethical responsibilities written with masculine generics. (Briere 1983) Thus, androcentric
imaging affects occupational decisions; when individuals have a harder time visualizing themselves in certain careers, they often avoid them. These studies reveal the influence that immediately available androcentric language has on both others’ opinions of professional women and those women’s own career choices. By examining the maturation of children within their androcentrically languaging community, however, we may begin to understand the development of more ingrained occupational perceptions and preferences. When McGee and Stockard asked 4th graders in 1990 which occupations, out of a list of 21, they would prefer as adults, they found that the greatest predictor of preferred occupations – above status, above salary, above difficulty – was the child’s gender. (McGee and Stockard 1990, cited in Helwig 1998) Children not only showed implicit awareness of social stereotypes surrounding occupations and gender, but also preferred for themselves careers consistent with those stereotypes. Later studies have confirmed their findings. (e.g. Sellers 1999; Liben 2001) Thus, children form their career aspirations within the framework of gender stereotypes.

Gender-marked occupational titles and male-gendered generics may explain how those stereotypes form in the first place. A 2002 study by L. Liben found that young children often believed that biased titles like postman or chairman could not be applied to women. The trend held true even for children whom prior testing had revealed to hold few stereotypic attitudes in general, meaning that we must attribute these responses to the gender-marked nouns themselves instead of to prior stereotyped thought. Thus, names with masculine suffixes cause children to develop stereotypes about various career paths; early exposure to them may then have long-term consequences. As Liben expresses with frustration, such titles could produce “children who develop into college students or publishers’ representatives who ask women professors to take messages for their male colleagues on the assumption that the former are secretaries for the latter.” (Liben 2002, 826) Keep in mind that these observations appeared in publication in 2002, long after many believed the work of the feminist movement to be largely over.

Not just gendered titles but also gendered pronouns sculpt children’s understanding of careers. For instance, children given job descriptions with masculine generics rated women as less competent in performing the job than did children given unbiased descriptions. Such a trend held true even when researchers made up a fictional occupation; when the “wudgemaker” was described as a he, elementary-age students believed that males would perform the wudgemaker’s responsibilities far more effectively than females. (Hyde 1984) Because the children could have no prior stereotype of the wudgemaker, Hyde offered clear proof that sexist pronoun usage can shape opinions even if the individual harbors no prior sexist thought. The power of pronouns in the formation of stereotypic thought is especially frightening given the overwhelming number of jobs for which masculine pronouns are generally used. Our language tendencies give only the few low-status, historically female positions like babysitter and secretary the she pronoun, thus creating explicit gender schemata for languaging children to adopt as reality.
As Liben notes, “Once schemata are in place, they filter incoming information, making it difficult for subsequent experiences to modify stereotypic beliefs.” (Liben 2002) Indeed, gender schemata virtually require language in order to develop. As one scholar notes, one of the critical periods of gender identity formation occurs from 18 to 24 months of age; during the same period, children learn basic words like he, she, man, and woman. (Constantinople 1979, cited in Hyde 1984, 689) Thus, as Professor J. Hyde of Denison University remarks, “the very fundamental learning of gender identity is language-based.” (Hyde 1984, 698) The enmeshed nature of gender development and language might thus explain the dramatic effects of biased language upon a child’s understanding of occupational stereotypes and his or her place within them; essentially, developing children inherit a system of inequality through the language they learn.

Thus, masculine universals like man or he reorient the cognitive spaces of all individuals involved in their usage. These cognitive biases then aid in the development of stereotypes that manifest themselves in prejudiced behavior against women. As such, the linguistic habits that William Safire and
his colleagues view as simple features of grammar actually perpetuate gender inequalities in cognitive, judicial, and even professional realms.

Little Ladies and Big Girls: Terms of Identity

Subtle linguistic sexism does not limit itself to male-gendered generics; terms of identity also encourage biased thought and behavior. Defined as synonyms for woman with no dominant derogatory connotation, terms of identity often slip under the radar of those listening for more obviously damaging terms. Thus, words like lady and girl are terms of identity, whereas insults like ho or bitch are not. Combating the sexism inherent in identity terms proves challenging because individuals, depending on the context and intent, can use them appropriately. Thus, we need not rebuke an elementary school teacher who refers to a girl in her class, nor a member of the British royalty who goes by the title Lady. Nevertheless, incorrect and therefore damaging usages of terms of identity abound, shaping thought and influencing behavior.

The word girl carries with it a good deal of conceptual baggage. As the 2001 Publication Manual of the American Psychological Association notes, girl refers to a female of high school age or younger; its association with childhood results in connotations of high warmth and low dominance. (Cralley 2005, 301) Thus, the term trivializes its subject when used to refer to an adult woman. Granted, we need not condemn all applications of girl as sexist; many women at Duke, for instance, often refer to their friends as girls. In such a setting, females may wish to be seen as warm and nondominating. In what context, then, might the usage of girl, whatever the intention, produce sexist thought and action? Though the term may seem fairly innocuous in informal gatherings of female friends, it proves highly dangerous in the workplace. A 1978 study, for instance, found that if researchers occasionally refer to a female applicant for an executive position as a “girl,” then subjects report her as appearing less “dignified, brilliant, and tough.” As if blows to her authority were not enough, subjects award her an average of $6,000 less in salary – worth quite a lot in the late 1970s – than she would have been given had they never heard others call her a girl. (Brannon 1978, cited in Briere 1983, 626) Thus, a simple term can radically change a woman’s professional experience and success. Given the prevalence of “cutesy” terms like career girl – as well as the absence of terms like career boy – the detrimental effects of girl in the professional realm could help to explain the vast inequalities women still face in the workplace.

Lady is a slightly more culturally complicated term of identity than girl. Contemporary society has recently condemned some of its applications while ignoring others. For instance, many now find lady doctor passé, but lady president and lady athlete more or less acceptable. Yet regardless of its specific usage, lady carries a complex and controversial history. The term once brought with it a standard of propriety; society expected its ladies to write thank-you letters and plan dinner parties, not argue politics or sacrifice family for career.
Thus, as Miller and Swift note in their book Words and Women, “the qualifier lady served as a mechanism of social control of women’s behavior.” (Miller and Swift 1977, cited in Pelak 2008, 190) Indeed, it still holds this function; a mother who urges her daughter to act “like a lady” wants to force her child into a certain social norm. Yet some current usages of lady have sullied the term, giving it a frivolous overtone that trivializes its subject. Consider the difference between the sentences “I know a woman who does yoga” and “I know a lady who does yoga.” To many, the former suggests contemplative study and the latter, eccentric behavior. To make matters worse, many low-status positions held by females contain lady in their titles. (Lakoff 1973, 59) Thus, saleslady, cleaning lady, and even bag lady sound natural to us, while businesslady does not. It is also interesting to note that lord, the masculine counterpart of lady, still retains its historic nobility, thus revealing that lady, like mistress, serves as yet another example of the corruption of formerly honorific female terms. These two different conceptions of lady – one as a social constraint, the other as a trivializing term – may contribute to the way we view women labeled as “ladies.” Adding the adjunct lady to the
occupational title of a woman, for instance, changes the opinions others have of her; thus, Glenn (1976) found that subjects gave women with lady in front of their professional titles significantly poorer reviews than women without it. Thus, we automatically view the performance of a “lady athlete” as inferior to that of an “athlete” and the abilities of a “lady president” as poorer than those of a “president.” The adjective appears to demean its subject uncontrollably. Essentially, terms like “lady doctor” and “lady athlete” do their real damage by implying that the real doctors and athletes – those that do not require some special qualifier – are male. Such language thus devalues females as “the other.”

The social consequences of “otherness” also manifest themselves in the gender inequality of collegiate sports programs. A 2008 study by C. Pelak of the University of Memphis examined the names of college basketball teams across the southern United States and found that 61% of Southern schools attached “Lady” to the name of the female team. Grambling State University in Louisiana, for instance, is home to the “Lady Tigers.” Some schools even went so far as to add “Lady” to a diminutive of the male team name; Arkansas Baptist College has both the “Buffalos” and the “Lady Buffs.” The prevalence of such practices is in itself noteworthy, but Pelak found a fascinating correlation in her data. Colleges with “lady” teams were far less likely to meet the proportionality standards of Title IX, meaning that in comparison with more compliant colleges, they not only had fewer female athletes, but also offered fewer scholarships and fewer resources to those athletes. (Pelak 2008) Granted, Pelak acknowledges that such a relationship does not necessarily mean that sexist names cause inequalities in athletic opportunities; nevertheless, sexist names may reinforce a sexist climate that allows such inequalities to persist. In fact, Pelak points out that other studies have concluded that “societal ambivalence toward female athletes is constructed and perpetuated through the…trivialization of women’s sports teams.” (Hargreaves 2000, 190, cited in Pelak 2008) Thus, a demeaning label like lady may fuel a community’s apathy towards its female athletes, in turn causing tangible losses in athletic resources for such women.

Schools often justify their use of lady by emphasizing its clarifying function; without lady, many colleges remain stumped as to how to differentiate the women’s team from the men’s. Yet clarifying terms can quickly become separatist terms, and separatist terms just as easily become hierarchic terms. In a sense, all terms of identity, including girl, do their damage through their creation of dichotomies. Labels in themselves carry weight; by providing group identities, they work to create social divisions. Individuals without sexist intentions reinforce such dichotomies all of the time. For instance, elementary school teachers often address their students as “girls and boys.” By once again applying the same practice to racial divides, we can see the harm in such seemingly innocuous statements. As Leaper and Bigler point out, “Most individuals readily predict that the routine use of racial labels (‘Good morning, Whites and Latinos’) would result in increased levels of racial stereotyping and prejudice.” (Leaper 2004, 133) Thus, labeling groups may almost inevitably lead to tension between them.
In fact, a 1995 study found that gender dichotomization leads to the development of gender stereotypes. When teachers continually categorized elementary-age children with statements such as “All the boys should be sitting down,” or “Amber, you can come up for the girls,” students displayed significantly stronger gender stereotyping beliefs after a mere four weeks. (Bigler 1995) The use of “lady” as an adjective may then have the same effect as a teacher’s greeting of “Good morning, boys and girls.” Through the creation of separate groups (i.e. athletes versus lady athletes), such language may aid in the solidification of gender stereotyping.

Though no one approves of intergroup conflict and stereotyping, their effects are somewhat mitigated if each side possesses equal power. The trouble with masculine and feminine dichotomies, however, is that females have a long heritage of victimhood. When intergroup tension occurs within a society, the less dominant group bears the brunt of its damage. Just as group prejudice harmed African Americans more than Caucasians, so too does gender stereotyping hurt females more than males. Thus, since males have historically dominated females, and because some of this oppression occurs even today, women get the short end of the stick when masculine and feminine lines are drawn. Gender separation as it manifests itself in terms of identity could thus form the foundation of sexism itself. Nevertheless, words like girl and lady have woven themselves into the fabric of our daily discourse;
eliminating them from our vocabulary proves an intimidating task indeed. Perhaps these names, then, can instead be reappropriated by women and given new meaning. The civil rights movement, for instance, rallied around the term black because it turned what once was a target for racist insults into a basis of pride. (Lakoff 1973, 61) Nigger and queer have also been the subjects of fairly successful reclamation efforts. (Cameron 1998) In light of such progress, one wonders if the same could be done for girl and lady. Feminists have tried to turn words of bias into words of empowerment before; writer Inga Muscio, for instance, argues that we ought to turn the slur cunt into a declaration of freedom. (Muscio 2002, xxii) If she dares change the meaning of so controversial an insult, perhaps the renovation of girl and lady is plausible. If so, then we hold remarkable power over our own cognitive orientation; although language may alter our cognition, we may possess the ability to redefine language and thus reconstruct cognition itself.

What Now? The Charge and Challenges of Reform

Feminists have pushed for reform for decades, approaching it in a multitude of different ways. As a result, publishers like Random House, Prentice-Hall, Harper & Row, and the American Psychological Association require manuscripts to use nonsexist language. (Briere 1983; Liben 2002) In the professional realm, the federal government’s Dictionary of Occupational Titles uses only gender-free occupational titles, and Britain has banned masculine generics in job advertisements. (Liben 2002; Cameron 1998) Indeed, several colleges – Duke University included – have even gone so far as to rename “Freshman Week” as “Orientation Week.” Yet although some reform has occurred, it has not necessarily been in a climate conducive to it.

Despite some progress, sexist language persists even in realms built upon concepts of equality. Chew (2007) found that the American legal community, supposedly a refuge for the oppressed, continues to use sexist terms. Granted, some legal terminology has changed in the past four decades; a reasonable man in a court of law is now a reasonable person. Yet those terms most associated with power and status still carry male suffixes today; Chew’s data reveal that words like chairman and congressman are used at least nine times more often in legal documents than words like chairperson and congressperson. (Chew 2007, 661) Thus, some of society’s most prestigious legal positions still remain undeniably masculine. Indeed, by charting the use of gendered titles over time, Chew discovered that the movement towards nonsexist language within the legal community has in fact stagnated since 1994. (Chew 2007, 664)

It may seem puzzling that several decades after academic research began to reveal the harms of sexist language, many remain resistant to change. Yet people often hesitate to abandon the linguistic patterns passed down to them by parents and teachers – many, in fact, see any emphasis placed on masculine generics or terms of identity as ridiculous at best, radical at worst. Indeed, reformers often endure endless ridicule; labeled the “women’s lib redhots” with “the nutty pronouns,” they find themselves explaining to a disdainful public why using alternative pronouns is not the same as changing “manhole cover” to “personhole cover.” (Martyna 1980) Surprisingly, much of this resistance comes from the demographic most historically supportive of abandoning conventions.
A 2008 study found that young adults between the ages of 18 and 22 are more likely to have neutral or negative attitudes towards sexist language reform than older women and men. (Parks 2008) In effect, demographic ideological trends – at least in the realm of gender and social justice – have flipped; the historically conservative older adults have become more progressive than the historically liberal youth. Parks explains that younger people today have grown up in a far more conservative era; whereas 30- and 40-year-olds saw the birth of Title IX and the progression of second-wave feminism, teenagers and 20-year-olds witnessed the advent of a new understanding of “political correctness.” (Parks 2008, 281) In the past couple of decades, the culturally conservative right has used PC as an insult; a politically correct policy is one that pays “excessive attention to the sensibilities of those seen as different from the norm.” (Mills 2008, 100) Thus, “PC feminists” are overly sensitive and overtly
interventionist; they wish to politicize issues that distract attention from more important matters – after all, why discuss pronouns when national security is at stake? In response to the backlash against political correctness, young adults may fear that nonsexist speech will earn them the now-suspect label of “feminist,” and thus they shun reform. The scorn surrounding political correctness and the subsequent demonization of feminism therefore stand as perhaps the most intimidating obstacles blocking linguistic change; the public must recognize that the language of discrimination is not an issue of politics, but rather a matter of justice.

When examining the reform that has been accomplished in the face of resistance, however, one realizes that some of the efforts of feminists have seemed slightly disjointed and somewhat unorganized – they have changed “Freshman Week,” for instance, but not “congressman.” In fact, in their hurry to out-argue the staunch opposition, feminists may have failed to ask some fundamental questions: what does comprehensive reform look like, and does it even work? The argument of reformers proceeds as follows: if sexist language produces sexist thought, then nonsexist language ought to produce nonsexist thought. For some reason, however, there exists a dearth of scientific research into the validity of this conclusion. Though several studies have delved into the cognitive effects of biased language, few researchers have explored the consequences of unbiased language. Those that have, however, offer us results as confusing as they are encouraging.

Perhaps the most compelling evidence for language reform lies in its effects upon women themselves. Research done in 1988 found that stories with gender-neutral pronouns influence the self-esteem of young girls more positively than stories using masculine generics. (Henley 1988, cited in Roman 1993, 340) Professor M. Hamilton found that this phenomenon holds true in older females; a 1989 study demonstrated that women who read Maine’s altered constitution without masculine generics viewed themselves more positively than women who read the original, less inclusive version. (Hamilton 1989, cited in Roman 1993, 340) At first glance, the robustness of both Henley’s and Hamilton’s findings may appear surprising; after all, we would like to think that factors as seemingly trivial as pronoun usage hold little sway over our self-regard. Yet when language has systematically denied women a place among “typical” humans for years, any move that eliminates the need for females to implicitly ask themselves if they belong among the communicator’s audience members delivers a blow to an oppressive norm.

Henley and Hamilton, however, only address the effects of neutral language upon the addressee; whether language reforms produce cognitive reforms in the communicators themselves remains another question entirely. One promising study found that high school students who read passages without masculine generics were far less likely to use them in their own writing than students who read more sexist prose. (Cronin 1995) The results bolster those advocating reform, supporting the argument that exposing people to nonsexist language causes them to produce it themselves. Since Hamilton’s 1988 study (see page 10) suggested that one’s use of nonsexist writing directly decreases one’s cognitive masculine bias relative to one’s use of sexist writing, changing language use may in fact change thought.
Nevertheless, a later study suggested that the answer may not be as clear-cut as one would like. Khosroshahi examined the effect of college students’ deliberate language reform on their own cognition. (Khosroshahi 1989) Because written language tends to be more “mindful” than spoken language, and because English educators generally instruct students to use masculine pronouns, Khosroshahi assumed that those who often used he or she instead of a generic he when writing various term papers had reformed their own language. She then asked both reformed and non-reformed students to read passages containing he or she and describe how they imagined the unspecified individuals described in the passages. If changing one’s own language produces nonbiased thought, then reformed students should not imagine male individuals more often than female individuals. Khosroshahi’s findings, however, challenge our understanding of language and cognition; only female reformed students were not androcentric in their thinking. Everyone else – including each “reformed” man – pictured more male
than female individuals. Thus, reformed language seemed to affect women more than men. Khosroshahi offers several explanations for these puzzling results. She first proposes that the androcentrism of men may really be “gendercentrism” – men envisioned males because they themselves were male. Indeed, women who used unbiased language and who consequently were not androcentric in their thinking often imagined more female than male individuals. Thus, just as white children often prefer white dolls in the aforementioned doll experiment, we tend to envision individuals of our own gender more often than not when our language does not cause us to be specifically androcentric. Khosroshahi’s second explanation, however, is not so forgiving. Since women generally have a more vested interest in language reform than men, they tend to internalize nonsexist thought. Men who use unbiased language, on the other hand, may have merely conformed to an environment that encouraged reform. Their linguistic changes occurred inorganically; consequently, reform remained superficial. (Khosroshahi 1989, 510)

In light of this second possibility, reforming biased attitudes through language may prove more complicated than originally thought. Because simply reading nonsexist pronouns like “he or she” still triggers androcentric thought in those who have not reformed their own language (see reference to Khosroshahi 1989 on page 12), changing the pronouns of every published document will not automatically abolish cognitive biases; linguistic reform must happen on an individual basis. At the same time, it is not enough to simply pressure every member of society to change their language habits; adults already have sexist language – and therefore biased cognition – ingrained into their everyday languaging. Since children learn biased language from birth, such language easily shapes the development of the initial cognitive system. Yet using unbiased language to revamp an already-established cognitive system presumably requires more effort than building the original one; it thus may not be sufficient for the languager to simply read, hear, and speak a nonsexist language. Khosroshahi’s later findings suggest that due to the complex nature of change, individuals must truly want such reform themselves if nonbiased language is to have its full effect. Indeed, some may use their own language reform to excuse their inaction in other realms of gender inequality; for those who already harbor strong sexist sentiments, it may merely serve as a symbolic concession rather than a sign of internal change. (Mills 2008, 94) Thus, the primary danger of language reform is its potential to induce apathy towards sexism as a whole.

Though the decline of masculine generics and terms like “lady” might begin to address society’s underlying injustice, such change is only the first step. Few will argue that changing our words will automatically repair the sexual, professional, and political inequalities ingrained into our society, but language reform will allow us to begin to truly recognize women as equals. Thus, reform ought to take place within a larger movement against sexism. Language serves as an essential tool in restructuring attitudes, but it cannot complete the construction alone. At the same time, linguistic change is justified in its own right.
Even if changing an individual’s internal cognitive structure requires his or her internal desire to reform, mitigating the immediate and tangible effects of sexist language does not. In other words, we can address the behavioral effects of masculine generics and terms of identity by simply avoiding them, even if mere avoidance does little to affect the cognitive orientation of individuals. After all, sexist language affects more than sexist attitudes; it produces measurable behavioral harms. Influenced by masculine universals and terms like lady and girl, some employers, clients, and voters see women as more atypical, less competent, and less worthy of high salaries, promotions, or election. Similarly, some females – children and adults alike – subconsciously avoid certain professions and positions simply due to their descriptions or titles, and some jurors find it more difficult to excuse the behavior of women simply because of the male-oriented language of the law. By removing sexist terms from workplace, political, and judicial discourse, we can directly attack the behavior they cause. In short, sexist language affects two realms of sexism: cognition and behavior. Each prong requires a separate attack. In order to revamp cognition, we must induce individuals to desire reform
by placing it within the larger context of gender inequality. In order to address behavior, we must avoid the specific terms that cause it. Thus, although there exists no quick fix for biased language and the sexism it perpetuates, a well-thought-out and comprehensive approach can begin to address the deeply entrenched linguistic and gender hierarchy of society.

Feedback Revisited: Wiping Up the Mess of Humpty Dumpty’s Myth

Because languaging – the gendered type in particular – influences the cognitive orientation of every individual involved, it is clear that the assertion that words mean just what one intends them to mean is simply a product of Lewis Carroll’s fancy. Our language may carry mental and behavioral implications beyond our control. It is thus vital to emphasize that those who use masculine generics or terms of identity do not necessarily intend to be sexist; indeed, given their normality, most of us are probably guilty of at least occasionally slipping into such habits. Thus, the impassioned need not direct their anger at individuals, but rather at the social system that teaches and reinforces such language. Nevertheless, in a reality in which women still struggle for equal pay and female politicians still fight for equal treatment, we must not underestimate the role we ourselves play in the maintenance of an unjust society. Even if most individuals cannot be labeled as “sexist,” their word choice is nonetheless both a characteristic and a cause of a cognitive framework that places males at the center of human experience and women at the periphery. Even more frightening is the fact that this internal bias manifests itself in a variety of behaviors, be it jury decisions, salary rewards, or career choices.

Those who examine the relationship between language and sexism often get caught up in a chicken-and-the-egg debate; grammarians, linguists, and lobbyists alike quickly become entangled in arguments about which influences which, which influenced first, and which influences more. Indeed, Humpty Dumpty again reveals his misunderstanding of language when he asks of the individual and his words, “Which is master?” (Carroll 1871) Yet as Dr. B. Kodish explains, “‘language,’ ‘thought’ …’behavior,’ and ‘culture’ do not function separately but rather as elements within a…unified whole… where they mutually interact in multi-dimensional and probabilistic ways.” (Kodish 2004) Ultimately, the components of sexism are inextricably intertwined in a manner far more complex than Humpty Dumpty would have us believe. When it comes to the relationship between the individual and his languaging, it is not and cannot be about “who is master”; if anything, both are. Language harbors power independent of the user simply because each individual is subconsciously shaped by his or her own phylogenetic and ontogenetic history with that language. Yet this history need not render individuals powerless over their own cognitive orientation; by choosing to neither use nor accept linguistic patterns that demean women, we may begin to reconstruct the very cognition our language has built.
Works Cited
1. Mucchi-Faina, Angelica. 2005. “Visible or influential? Language reforms and gender (in)equality.” Social Science Information 44, (1): 189. 2. Bigler, Rebecca S. 1995. “The role of classification skill in moderating environmental influences on children’s gender stereotyping: A study of the functional use of gender in the classroom.” Child Development 66, (4): 1072, http://find.galegroup.com/itx/infomark.do?contentSet=IACDocuments&docType=IAC&type=retrieve&tabID=T001&prodId=AONE&userGroupName=duke_perkins&version=1.0&searchType=BasicSearchForm&source=library&docId=A1741247&Z3950=1. 3. Blagg, Janet. “Taking the dick out of dic(k)tionary; non-sexist writing for enthusiasts.” Society of Editors: 1-11. 4. Bodine, Ann. 1975. “Androcentrism in prescriptive grammar: Singular ‘they’, sex-indefinite
‘he’, and ‘he or she’.” Language in Society 4, (2): 129-46, http://www.jstor.org/stable/4166805. 5. Briere, J., and C. Lanktree. 1983. “Sex-Role Related Effects of Sex Bias in Language.” Sex Roles 9, (5): 625-32. 6. Cameron, Deborah. 1998. The feminist critique of language. 2nd ed. London: Routledge. 7. Carroll, Lewis. 1871. Through the looking glass. Dover, http://books.google.com/books?id=bNkS3LrxZTAC&printsec=frontcover#v=onepage&q=&f=false (accessed 12/8/09). 8. Chew, Pat K., and Lauren Kelley-Chew. 2007. “Subtly sexist language.”: 643, http://find.galegroup.com/itx/infomark.do?contentSet=IACDocuments&docType=IAC&type=retrieve&tabID=T001&prodId=AONE&userGroupName=duke_perkins&version=1.0&searchType=BasicSearchForm&source=library&docId=A172777359&Z3950=1. 9. Coffey, C. 1984. “Language – A Transformative Key.” Language in Society 13, (4): 511-3. 10. Cralley, Elizabeth L., and Janet B. Ruscher. 2005. “Lady, Girl, Female, or Woman: Sexism and cognitive busyness predict use of gender-biased nouns.” Journal of Language & Social Psychology 24, (3): 300-14. 11. Cronin, Christopher, and Sawsan Jreisat. 1995. “Effects of modeling on the use of nonsexist language among high school freshpersons and seniors.”: 819, http://find.galegroup.com/itx/infomark.do?contentSet=IACDocuments&docType=IAC&type=retrieve&tabID=T001&prodId=AONE&userGroupName=duke_perkins&version=1.0&searchType=BasicSearchForm&source=library&docId=A18167424&Z3950=1. 12. Ehrlich, S., and R. King. 1994. “Feminist Meanings and the (De)politicization of the Lexicon.” Language in Society 23, (1): 59-76. 13. Falk, Erika, and Jordan Mills. 1996. “Why sexist language affects persuasion: The role of homophily, intended audience, and offense.”: 36, http://find.galegroup.com/itx/infomark.do?contentSet=IACDocuments&docType=IAC&type=retrieve&tabID=T001&prodId=AONE&userGroupName=duke_perkins&version=1.0&searchType=BasicSearchForm&source=library&docId=A19265426&Z3950=1. 14. Gelernter, David. 2008. “Feminism and the English language.” The Weekly Standard. 15. Hamilton, M.C. 1988. “Using Masculine Generics: Does Generic He Increase Male Bias in the Users’ Imagery?” Sex Roles 19, (11-12): 785-99. 16. Hyde, Janet S. 1984. “Children’s understanding of sexist language.” Developmental Psychology 20, (4): 697-706, http://search.ebscohost.com/login.aspx?direct=true&db=pdh&AN=dev-20-4-697&site=ehost-live&scope=site. 17. Jacobson, Carolyn. 2009. “Some notes on gender-neutral language.” University of Pennsylvania, http://www.google.com/#hl=en&source=hp&q=%22some+notes+on+genderneutral+language%22&aq=f&aqi=&oq=&fp=b36c7832dbb01be6. 18. Khosroshahi, Fatemeh. 1989. “Penguins don’t care, but women do: A social identity analysis of a Whorfian problem.” Language in Society 18, (4): 505-25, http://www.jstor.org/stable/4168079. 19. Kodish, Bruce I. 2004. “What We Do with Language – What It Does with Us.” ETC: A Review of General Semantics 60, (4): 383, http://gateway.proquest.com/openurl/openurl?ctx_ver=Z39.88-2003&xri:pqil:res_ver=0.2&res_id=xri:lion-us&rft_id=xri:lion:rec:abell:R04051673. 20. Lakoff, Robin. 1973. “Language and woman’s place.” Language in Society 2, (1): 45-80,
http://www.jstor.org/stable/4166707. 21. Leaper, Campbell, and Rebecca S. Bigler. 2004. “Gendered language and sexist thought.” Monographs of the Society for Research in Child Development 69, (1): 128-42, http://content.ebscohost.com/ContentServer.asp?T=P&P=AN&K=14010804&EbscoContent=dGJyMNHr7ESeprQ4wtvhOLCmrlGep65SsKa4SbaWxWXS&ContentCustomer=dGJyMPGsr0y0r7dMuePfgeyx%2BEu3q64A&D=aph. 22. Liben, Lynn S., Rebecca S. Bigler, and Holleen R. Krogh. 2002. “Language at work: Children’s gendered interpretations of occupational titles.” Child Development 73, (3): 810, http://content.ebscohost.com/ContentServer.asp?T=P&P=AN&K=6639069&EbscoContent=dGJyMNLe80Sep644wtvhOLCmrlGep69SsKe4S7aWxWXS&ContentCustomer=dGJyMPGsr0y0r7dMuePfgeyx%2BEu3q64A&D=aph. 23. Martyna, W. 1980. “Beyond the He-Man Approach – The Case for Nonsexist Language.” Signs 5, (3): 482-93. 24. Martyna, Wendy. 1978. “What does ‘he’ mean? Use of the generic masculine.” Journal of Communication 28, (1): 131. 25. Merritt, Rebecca Davis, and Cynthia J. Kok. 1995. “Attribution of gender to a gender-unspecified individual: An evaluation of the people = male hypothesis.” Sex Roles 33, (3): 145-57. 26. Mills, Sara. 2008. Language and sexism. Cambridge: Cambridge University Press. 27. Morrison, Melanie A., and Todd G. Morrison. 1999. “An investigation of measures of modern and old-fashioned sexism.” Social Indicators Research 48, (1): 39. 28. Muscio, Inga. 2002. Cunt: A declaration of independence. 2nd ed. Seal Press, http://books.google.com/books?id=fzOF61JsU3MC&dq=Muscio,+Inga+cunt&printsec=frontcover&source=bl&ots=Z5rT4ocS10&sig=V6QT6nWcaPYwiUtIlhA0xBfRDl8&hl=en&ei=IsgVSjTI4OplAfLi4DBBQ&sa=X&oi=book_result&ct=result&resnum=6&ved=0CCQQ6AEwBQ#v=onepage&q=&f=false. 29. Parks, Janet B., and Mary Ann Roberton. 2008. “Generation gaps in attitudes toward sexist/nonsexist language.” Journal of Language & Social Psychology 27, (3): 276-83. 30. Pelak, Cynthia Fabrizio. 2008. “The relationship between sexist naming practices and athletic opportunities at colleges and universities in the southern United States.” Sociology of Education 81, (2): 189-210. 31. Roman. 1993. The women and language debate: A sourcebook. Rutgers University Press, http://books.google.com/books?id=jGWYW_LVmN8C&dq=hamilton+1989+constitution+sexism&source=gbs_navlinks_s. 32. Safire, William. 1999. “On Language: Genderese.” New York Times, May 16, 1999, http://www.nytimes.com/1999/05/16/magazine/on-language-genderese.html. 33. McConnell-Ginet, Sally. 2008. “Words in the world: How and why meanings can matter.” Language 84, (3): 497. 34. Stanley, J. P. 1978. “Sexist Grammar.” College English 39, (7): 800-11. 35. Vahed, Hajira. 1994. “Silencing womyn with words.” Agenda (21): 65-70, http://www.jstor.org/stable/4065824.
Caroline Lampen
The Emergence of a Norm Cascade on Violence Against Women: CEDAW or Transnational Advocacy Network?
At the 1993 United Nations World Conference on Human Rights, the UN made a declaration that defined violence against women within the human rights framework and formally situated this norm on the international agenda. The conference solidified the emergence of the norm cascade on violence against women, as well as on women’s rights in general. Throughout the 1980s and early 1990s, the women’s rights movement gained tremendous momentum and support on an international scale. Women throughout the world had always suffered from various forms of violence, including rape, battery, sexual abuse, torture, trafficking, forced prostitution, kidnapping, sexual harassment, dowry-related violence, and female genital mutilation (Ahmed et al. 196), but it was not until 1993 that the international community officially recognized violence against women, and women’s rights more broadly, as part of the existing human rights framework. This paper seeks to examine what accounts for this variation in the acceptance of women’s rights as an international norm, termed a “norm cascade,” from 1975 to 1993.

The first hypothesis suggests that the Convention on the Elimination of Discrimination Against Women (CEDAW) significantly impacted the women’s rights movement by bringing about a norm cascade on violence against women. The existing literature on CEDAW points to its role in the 1990s in solidifying violence against women as a true human rights violation, particularly through its Recommendation 19 made in 1992, and by encouraging states to include violence against women in reports to the CEDAW committee. While CEDAW was monumental in gendering human rights law and raising awareness about discrimination against women, the emergence of the norm cascade cannot be attributed to this international law. The examination of the impact of CEDAW in propagating norms of women’s rights leads to the proposal of a revised hypothesis that more realistically reflects how the norm cascade on violence against women came into being. The second hypothesis posits that the women’s transnational advocacy network, specifically nongovernmental organizations (NGOs), instigated the women’s rights movement and propelled violence against women onto the international agenda. This paper will analyze how international normative change on women’s rights, or the norm cascade, resulted from the work of the transnational advocacy network to secure UN support, establish “norms of inclusivity” (Weldon 56), and implement the human rights issue frame. The paper concludes by assessing the effects of the women’s transnational advocacy network and CEDAW law on solidifying a norm cascade on violence against women and encouraging its internalization by member states.

Women’s Rights Campaign on Violence Against Women

The violence against women campaign served as the face of the women’s rights movement beginning in the mid-1980s and significantly shaped women’s rights within the overarching human rights context. It helped to bring tremendous attention to the efforts of NGOs that advocated on behalf of women. Women throughout the world could unify over their shared experiences of violence. The success of
the movement in attaining widespread acceptance of violence against women as a human rights violation can be attributed in part to the power of the image of violence in effectively capturing people’s attention. Keck and Sikkink explain that “campaigns against practices involving bodily harm to populations perceived as vulnerable or innocent are most likely to be effective transnationally” (Keck et al. 27). Thus, women utilized the existing “master frame” (Keck et al. 196) of violence and rights in order to cultivate recognition of their own agenda of violence against women in public and private spheres.

Defining Norms

This paper focuses on the emergence of norms on the international agenda. Norms can be defined as “collective expectations for the proper behavior of actors with a given identity” (Keck et al. 3). The analysis draws primarily on Martha Finnemore and Kathryn Sikkink’s three-stage model of global norms. They present a “life cycle” (Finnemore et al. 892) of norms to explain their creation and implementation. The first stage is “norm emergence” (Finnemore et al. 895), in which norm entrepreneurs convince norm leaders to embrace the new norm. This process of cultivating a group of leaders to support the norm leads to a “tipping point” in which a critical mass of relevant actors adopt the norm. In the second stage, the norm is widely accepted and becomes a “norm cascade.” The third and last stage, “norm internalization” (Finnemore et al. 904), occurs when states implement the norm in the national context and also internalize it as part of their culture. Finnemore and Sikkink diagram this norm “life cycle” in a figure (Finnemore et al. 896).

Finnemore and Sikkink’s model relies on states as the primary actors. Norm entrepreneurs seek to attain support of a new norm from a significant number of states and to “cascade” (Finnemore et al. 895) the norm through the rest of the population of states once it has emerged and been adopted. My application of their model differs in how I define the primary actors. I draw upon Susanne Zwingel’s description of “transnational advocacy networks [that] have emerged as new global actors with considerable success in the promotion of global norms” (Zwingel 414). Transnational advocacy networks consist of a group of actors, including individuals, NGOs, governments, and international organizations, that advocate on behalf of a cause in order to promote a norm in society and create pressure for socialization and change (Keck et al. 8). In my analysis of the primary actors that contributed to bringing about the norm cascade on violence against women, I focus primarily on the women’s transnational advocacy network, specifically the significant contribution of NGOs.

Original Hypothesis

The Convention on the Elimination of Discrimination Against Women (CEDAW) significantly impacted the women’s rights movement by bringing about a norm cascade on violence against women. Given the successful implementation of women’s rights into international law in 1979 through CEDAW, it is interesting to examine the extent to which CEDAW contributed to bringing violence against women to the international agenda and instigating a norm cascade. As one of six major UN treaties, CEDAW established universal recognition of the fundamental equality between men and women (Merry 942).
While the 1948 Universal Declaration of Human Rights mentioned the equal status of women, CEDAW signified a bill of rights for women, with particular attention to equality in marriage, employment, education, politics, the legal system, family, and health (Merry 944). In “Constructing a Global Law – Violence against Women and the Human Rights System,” Sally Engle Merry states that “one of the major ways the international human rights system endeavors to prevent violence against women is by international law, particularly CEDAW”
(942). The CEDAW committee’s adoption of general recommendations extended the language of the convention beyond discrimination. In 1989, the committee approved general recommendation 12, advocating for violence against women to be considered by member states and requiring statistics on gender violence to be included in reports. In 1992, its recommendation 19 formally defined gender-based violence as a form of discrimination. The CEDAW recommendations on violence against women provided the basis for the 1993 UN General Assembly Declaration on Violence Against Women. While the CEDAW recommendations were not legally binding like the original text of the Convention, they appeared to have been significant in legitimizing the violence against women campaign and propelling it onto the international agenda (Merry 952). In “Inside Outsiders,” Liz Kelly similarly identifies the relevance of CEDAW in empowering the violence against women movement. She argues that the UN Declaration on the Elimination of Violence Against Women in 1993 defined violence as a form of discrimination, “a concept … chosen to create a clear link to CEDAW” (Kelly 479) and its discrimination framework. Kelly suggests that the direct recognition of violence against women through CEDAW contributed to bringing attention to this issue on the international agenda. In summary, this literature supports the hypothesis that the norm cascade on violence against women can be attributed to the influence of CEDAW.

Analysis of Original Hypothesis

Despite the contribution of CEDAW in drawing attention to violence against women through its recommendations and its encouragement of member states to report on such rights violations, it does not adequately account for how the norm cascade on violence against women came into existence. In “The Global Women’s Movement: Articulating a New Vision of Global Governance,” Ellen Dorsey explains that “the passage of CEDAW was a critical step towards the protection of women’s rights, [but] it [was] only one mechanism to elevate the claims of women. Its limitations were the threshold for redefining all rights codification from a gender perspective” (Dorsey 443). The violence against women campaign necessitated the use of a human rights frame, instead of a gender-specific frame, to raise international consciousness of the issue and its acceptance as a fundamental human rights violation. Additionally, in much of the literature on the emergence of violence against women as an international norm, CEDAW is absent from the account of relevant history. The emergence of violence against women on the international agenda of women’s NGOs did not really occur until the early 1980s, after CEDAW had already been passed. CEDAW does not even include violence against women in its statement on the equal rights of women (Keck et al. 166). CEDAW’s discrimination framework was limited in its ability to capture the attention of the international community and instill agency in women’s rights NGOs. S. Laurel Weldon’s chronological account of the global movement against gender violence completely bypasses the existence of CEDAW and jumps from the First World Conference on Women in 1975 to the second and third conferences in 1980 and 1985.
It was not until the norm cascade on violence against women had already been established in the early 1990s that Weldon mentions the relevance of CEDAW, in helping to enforce norms against violence against women in its member states (Weldon 60). Specifically, CEDAW adopted an Optional Protocol in 1999 that provided individuals with the opportunity to seek justice for human rights violations that their state did not address by petitioning the CEDAW committee. CEDAW has been effective in monitoring violations of the basic rights of women in its member states and bringing these human rights violations to the public eye. NGOs are instrumental in providing reports on the conditions of women within countries and helping the CEDAW committee to assess the progress of its members (Zwingel 405). While CEDAW has made noteworthy
contributions to the acceptance of nondiscrimination and to combating violence against women, it operates at a stage of the norm “life cycle” subsequent to the emergence of a norm cascade. CEDAW plays an irreplaceable role in attaining full implementation of the rights of women, as national institutions change to reflect new rights, and eventually the internalization of these norms at the nation-state level (Zwingel 414). Thus, CEDAW was significant in reinforcing the women’s rights movement and the norm cascade on violence against women on international and national levels once it had already taken off. Overall, CEDAW does not account for the variation between 1975 and 1993 in the presence of a norm cascade on violence against women.

Revised Hypothesis

CEDAW was of limited importance in bringing about a norm cascade on violence against women. In reality, transnational advocacy networks, and specifically nongovernmental organizations, instigated the women’s rights movement and propelled violence against women onto the international agenda. The development of a transnational advocacy network throughout the 1970s and 1980s created the opportunity for the violence against women campaign to develop and attract rapid attention internationally in the 1990s. Ellen Dorsey describes the necessary step of organization among NGOs that had to occur before women could demand the emergence of new norms. NGOs translated the networks they had been building throughout the 1970s and 1980s into a strong, global women’s rights movement that formulated a shared agenda (Dorsey 452-453). In Activists Beyond Borders, Keck and Sikkink argue that transnational advocacy networks function as a “political space” (Keck et al. 198) to “mobilize information strategically to help create new issues and categories and to persuade, pressure, and gain leverage over much more powerful organizations and governments” (Keck et al. 2). Activists from NGOs were instrumental in cultivating a network that fostered cross-cultural collaboration and in leading efforts to promote widespread acceptance of violence against women within the human rights framework.

Historical Background and Analysis

In order to understand the emergence of a norm cascade, it is necessary to first examine the historical context of the women’s rights movement and the discussion on violence against women. Indeed, the emergence of a women’s rights movement began before the creation of CEDAW in 1979. The UN women’s conferences track the development of the women’s rights advocacy network and the strengthening of the movement that would eventually unify around the violence against women campaign. Women’s rights NGOs consistently played a significant role in drawing global attention to the women’s agenda.

UN Support and “Norms of Inclusivity”

In the early 1970s, women’s groups lobbied for increased recognition of women’s issues, and the UN-sponsored conferences were instrumental in supporting their efforts. The UN General Assembly declared 1975 to be the International Women’s Year, organizing the First World Conference on Women in Mexico City. Over 6,000 women participated in the NGO forum at the conference, marking a tremendous increase in NGO activism in relation to the UN. The General Assembly approved the recommendations on improving the equality of women and declared the next ten years the UN Decade for Women in an effort to globally advance the status of women.
This declaration legitimized the efforts of women’s NGOs to capture the world’s attention, marking the beginning of a true international women’s rights movement (Ahmed et al. 188-189). The 1975 Mexico City conference revealed to the participants the existing divide between activists from the North and South. The groups from the North (developed world)
advocated for a focus on discrimination and women’s inequality, while the groups from the South (developing world) pressed for development and social justice as the focus of the women’s agenda (Ahmed et al. 189). The Southern groups had difficulty separating the struggles of women from those of the nation in general, and thus concentrated on the basic inequalities that their societies as a whole suffered in comparison to the western, developed world. The topic of violence against women emerged in the 1970s on a local level. It was raised at the Mexico City conference in 1975 but did not receive much attention (Weldon 59). At the 1976 First International Tribunal on Crimes Against Women, thousands of women gathered to speak out on issues of rape, prostitution, beating, and female genital mutilation (FGM) (Keck et al. 175). The Second World Conference on Women took place in Copenhagen in 1980 to assess progress made in the realms of development, employment, health, and education for women. This time, 8,000 people attended the NGO forum (Ahmed et al. 190). Charlotte Bunch, a leading activist, helped to lay the foundation for the formation of a transnational advocacy network on violence against women by organizing panels on the issue at the NGO forum to encourage increased networking among the groups. She observed the possibilities for consensus that revolved around this issue: “[violence against women] had the potential to bring women together in a different way, and … it had the potential to do that without erasing difference … [because] there was a sense that women were subordinated and subjected to this violence everywhere” (Keck et al. 177). Despite this point of convergence, tensions among the different groups of women activists continued to exist, thereby impeding any significant progress in terms of bolstering a unified women’s rights movement. In 1975 and 1980, Northern women dominated the conferences and the agendas. In preparation for the Nairobi conference in 1985, both sides sought to create a common ground on which they could agree and progress by implementing “norms of inclusivity” (Weldon 56), which include the self-organization of marginalized groups to develop a voice and agenda, the presence of marginalized groups at the conference, and a commitment by all sides to work towards consensus while accepting disagreement on some issues (Weldon 56). Southern NGOs from Africa organized themselves prior to the conference, determined to demonstrate their equality to the Northern groups. Activists from both sides worked to increase Southern participation in the conference, provided Southern groups with the opportunity to voice their agenda and priorities independent of the North, and encouraged all of the groups attending the conference to create an inclusive agenda, accepting that disagreement would exist. At the 1985 Nairobi conference, Southern women represented the majority for the first time. The significant progress made in reframing the debate on female genital mutilation at the 1985 conference provides a concrete example of how the groups from the North and South began to converge and reconcile differences. At the previous conferences, Northern groups conveyed their belief that FGM in the developing world represented a “backward or primitive culture” (Weldon 63). The Southern groups resented this attitude of neo-colonialism and viewed FGM as a reflection of nationalism and cultural tradition (Keck et al. 71-72).
In the context of increased equality and mutual respect at the 1985 Nairobi conference, Southern women addressed the topic of FGM themselves and sought to frame it simply as a form of domestic violence, and not as an indicator of primitivism (Weldon 61, 62). The Northern and Southern groups were able to converge on this broader definition of violence against women and thus shaped the strengthening of the movement and its future focus. The NGO forum at Nairobi was an “equalizing experience” (Weldon 62), changing the tone of the previous conferences, in which western women had dominated and dictated the agenda. The 1985 Conference in Nairobi marked the end of the UN Decade for Women. Over 14,000 people attended the NGO forum and hundreds of NGO representatives participated in the actual conference (Ahmed et al. 191). It resulted in a document, “Forward Looking Strategies for the Advancement of Women”, with measures to implement equality of women at the national level, specifically with reference to employment, health, education, food, agriculture, industry, science and technology, and housing (Ahmed et al. 191).
The document also explicitly stated that “national machinery should be established in order to deal with the question of violence against women within the family and society” (United Nations). The violence against women frame helped women to overcome the divisions between the North and South, as the “continuum of violations” (Weldon 63) was expanded to include not only the previously accepted offenses of rape and mental harassment, but also the violations that were particularly important to Southern women, such as female genital mutilation (Weldon 63). Keck and Sikkink describe the instrumental role of these conferences in legitimizing women’s rights internationally, as well as in providing opportunities for thousands of women from around the world to gather together to share information and create common agendas (Keck et al. 169). The UN conferences were significant as a medium for agenda setting and consensus building. The NGO forums at each of the conferences offered a chance for women to cultivate relationships and better understand the underlying tensions between the different groups of activists (Friedman 23). By effectively seizing UN support of women’s issues as a platform to implement “norms of inclusivity” (Weldon 57), the NGOs built consensus. NGOs used the international contacts they had made at the conferences to form a transnational advocacy network. Keck and Sikkink identify the Nairobi conference as the “first step in securing agenda attention to the issue, for initiating the change in discursive positions of governments, and for strengthening the linkages among women’s groups working on the issue” (Keck et al. 179). This new cooperation among women resulted in the development of a campaign to frame violence within the human rights frame (Weldon 64), moving away from the existing discrimination frame of CEDAW.

Issue Framing

With a more unified movement, women’s groups continued to build regional networks to further develop a global conversation on violence against women. The widespread activism of NGOs throughout the world in the early 1990s reflects the overarching goal of redefining women’s rights within the greater human rights system. The groups strove to have violence against women in the public and private realms be considered not only as a fundamental women’s rights issue but also as an issue of human rights. One woman living in Sudan noted how “the language of women’s rights as human rights moved very quickly into the national and regional levels at a pace that far exceeded that of any previous movement on behalf of women internationally” (Friedman 31). The use of issue framing by women’s groups ultimately led to a norm cascade on violence against women by 1993, when the UN World Conference on Human Rights released a declaration explicitly recognizing women’s rights as human rights (Friedman 31). The following analysis of the women’s transnational advocacy network’s efforts traces the impact of issue framing in the late 1980s and early 1990s by identifying the first three phases of Finnemore and Sikkink’s life cycle of norms: norm emergence, tipping point, and norm cascade (Finnemore et al. 896).

Norm Emergence

In “Women’s Human Rights: The Emergence of a Movement,” Elisabeth Friedman argues that a global movement promoting women’s human rights coalesced between 1990 and 1993 (Friedman 18).
She points to Charlotte Bunch’s famous 1990 article, “Women’s Rights as Human Rights: Toward a Re-Vision of Human Rights,” and to the 1993 UN World Conference on Human Rights as the two significant benchmarks framing the core of the movement. Charlotte Bunch, the Founding Director of the Center for Women’s Global Leadership (CWGL), was instrumental in spreading the notion of women’s rights as human rights: “the human rights community must move beyond its male defined norms in order to respond to the brutal and systemic violation of women globally” (Bunch 492). Bunch’s article publicized and promoted a new mechanism that the women’s transnational advocacy network had already begun to use to gain influence and power. For example, in the late 1980s and early 1990s, NGOs such as GABRIELA, a women’s group in the Philippines, and International Women’s Rights Action Watch asserted that “women’s rights are human rights” (Bunch 496) in campaigns and conferences.
In addition, Bunch’s article inspired women to adopt this issue framing tool, as evidenced by the response of Susana Chiarotti, one of the founders of Indeso-Mujer in Argentina: “This theoretical piece made a great difference in our work. [It created] language [that was] irrefutable; you would have to cover yourself with shame if you didn’t accept it” (Keck et al. 165). Friedman also alludes to how the use of issue framing led to mainstream human rights groups’ increased support of women’s NGOs in 1989. Human Rights Watch began a Women’s Rights Project as a result of “women’s rights groups’ use of human rights methodology” (Friedman 26), as well as increased pressure from women’s activists within the US to address the abuse of women’s rights worldwide. Amnesty International adopted a women’s focused agenda due to internal pressures within the organization that criticized the underrepresentation of women in existing research (Friedman 25). These two examples clearly convey the importance of the women’s transnational advocacy network in introducing women’s issues, specifically violence against women, into the public eye. As relationships grew between women’s groups and mainstream human rights groups, the Center for Women’s Global Leadership took the leading role in jumpstarting the Global Campaign for Women’s Rights by organizing meetings with women activists and articulating “the possibility of using [human rights] norms… to advance women’s rights” (Friedman 27). According to Keck and Sikkink, these efforts represent an “unusually clear example of global moral entrepreneurs consciously strategizing on how to frame issues in a way likely to attract the broadest possible global coalition” (Keck et al. 185). The adoption of women’s rights by mainstream human rights organizations and the initial work of CWGL to coordinate global action represent the solidification of the “norm emergence” (Finnemore et al. 895) stage of Finnemore and Sikkink’s life cycle of norms. Norm entrepreneurs such as CWGL convinced leaders to embrace a new norm by constructing a “cognitive frame” (Finnemore et al. 897) of human rights that resonated with the international community. This activism also substantiates the claim that issue framing was the main catalyst in propelling violence against women onto the international agenda, and thus in beginning a norm cascade. Given that the human rights framework focused on violations at the state level, women’s activists faced an obstacle in including violence against women, which occurs in both public and private realms. Thus, women sought to bridge this divide by revealing the limitations of the existing human rights framework. In “Refusing to Go Away: Strategies of the Women’s Rights Movement”, LaShawn Jefferson describes one of the most important strategies of the women’s movement: the women “question[ed] the legitimacy of the mainstream human rights movement if women’s human rights were not fully integrated in it. [They] challenged the legitimacy and effectiveness of a conceptualization of human rights that emphasized violence by the state but overlooked violence by private actors” (Jefferson 34). In their efforts to broaden the rights associated with basic human rights, women drew attention to the framework’s weaknesses and confronted the preexisting understanding of such rights with the male as the norm (Bunch 492).
Through these issue framing campaigns, women took advantage of the accepted international framework of human rights and utilized it for their own purpose of furthering the acceptance of violence against women as a violation of human rights.

Tipping Point

Throughout the early 1990s, the women’s movement on violence against women gained momentum and increased attention from human rights groups. While it is logical to classify the late 1980s and early 1990s as the stage of “norm emergence” due to the catalytic effect of issue framing, it is more difficult to identify the exact “tipping point” toward the norm cascade, given the widespread activism occurring in the years preceding the UN World Conference on Human Rights. NGOs surged in activity to document violence against women, organize petition drives, and draft consensus documents on including women’s rights, specifically regarding violence against women, within human rights. As the coordinator of the global campaign, the CWGL organized the Sixteen Days of Activism against Gender Violence
as a strategic event to garner worldwide support and pressure the UN to include women’s rights on the human rights agenda at the 1993 Conference (Ahmed et al. 195). This sixteen-day period links the International Day Against Violence Against Women on November 25 to International Human Rights Day on December 10. The theme for the 1991 event was “Violence Against Women Violates Human Rights” (About the 16 Days). The accompanying petition drive asked the preparatory committee for the World Conference on Human Rights to include gender violence as a violation of human rights. The petition was instrumental as a “recruiting tool for the movement, as it helped spread the concept of women’s rights as human rights across the globe” and attained over 300,000 signatures in 123 countries and 20 languages (Friedman 28). This strategic petition drive can be classified as the “tipping point” (Finnemore et al. 895) of the norm cascade, at which a critical mass of relevant actors around the world adopted the norm condemning violence against women.

Norm Cascade

The activism of the women’s transnational advocacy network in the years leading up to and at the actual conference in 1993 resulted in a UN-adopted Declaration on the Elimination of Violence against Women stating that “gender-based violence and all forms of sexual harassment and exploitation … are incompatible with the dignity and worth of the human person, and must be eliminated” (Kelly 479). This norm cascade on violence against women was reinforced by the work of the Center for Women’s Global Leadership and other NGOs to organize a tribunal on the violation of women’s rights at the conference. The first-hand testimonies about the different forms of violence that women suffered had a significant impact on the conscience of the international community (Ahmed et al. 195). The conference also led to the establishment of a Special Rapporteur on Violence against Women (Ahmed et al. 196). These monumental achievements reflect the solidification of the norm cascade on violence against women. The decades of work by women’s NGOs and activists to coordinate efforts, form a transnational advocacy network, and frame violence against women within human rights had paid off. Specifically, the 16 Days campaign garnered global support for recognizing violence against women as a fundamental human rights violation. It is also necessary to recognize that CEDAW issued a general recommendation in 1992 that “codified the standards upon which the movement was coalescing, symbolizing the elevation of these claims to the status of globally accepted norms” (Dorsey 443). CEDAW helped to formally outline the inclusion of violence against women in international law, but it did not provide any binding law. In reality, violence against women could not have become situated on the international human rights agenda without the perseverance of the women’s rights movement in “pushing, fighting, cajoling, stigmatizing, strategizing, coalition building, and simply being steadfast and refusing to go away” (Jefferson 33). CEDAW neglected to provide an issue frame by which violence against women could attract international attention. CEDAW’s real impact was in serving as a mechanism for state accountability in consolidating the norm cascade.

Conclusion

The speed with which violence against women became present on the international agenda in the early 1990s underscores the significant variation in the international acceptance of fundamental women’s rights from 1975 to 1993.
The 1993 UN Declaration serves as the initial indicator of the tremendous progress made in bringing about a norm cascade. The violence against women agenda continued to swell through the Fourth UN Women’s Conference in Beijing in 1995 and the Beijing Plus Five Conference in 2000. NGOs gathered in unprecedented numbers at these events and shaped declarations that reiterated the women’s agenda (Ahmed et al. 198). The transformation of the women’s rights movement into a transnational advocacy network that brought about a norm cascade on violence against women represents the power of NGO activism in
bringing about international normative change. In “Constructing a Global Law: Violence against Women and the Human Rights System,” Sally Engle Merry outlines the influence of nonbinding declarations and resolutions in bringing about global consensus on violence against women. She acknowledges the role of CEDAW in providing a legal mechanism to address violations of women’s rights, but also commends the nonbinding declarations that resulted from UN conferences for their international legitimacy (Merry 968). The declarations resulting from the 1985 Women’s Conference and the 1993 Human Rights Conference, as well as the petitions that NGOs organized within the women’s advocacy network, reflect the propagation of norms on an international level in the absence of binding law on violence against women. The unification of women around the violence against women frame gave them a sense of agency to act and bring about normative change. In this sense, NGOs and activists created a newly accepted culture through these public and internationally recognized advocacy campaigns. The impact of the women’s rights transnational advocacy network challenges the rationale that law must precede a norm cascade. It is thus evident that the original hypothesis on the significant role of CEDAW in bringing about a norm cascade on violence against women cannot be corroborated, given the irreplaceable role of the women’s transnational advocacy network. My initial assumptions about the impact of CEDAW were founded on literature about CEDAW’s development throughout the 1990s to increase enforcement mechanisms and the number of abiding member states. In reality, CEDAW’s greatest impact lies in the third stage of the norm life cycle: norm implementation and potential internalization. In 1999, it created an Optional Protocol that member states could ratify to entitle individual women or groups to seek redress for rights violations. The CEDAW committee demands regular reports from its member states, which reinforces the potential for behavioral change in those countries by “forcing [them] to review domestic law, policy and practice, and to assess to what extent it is complying with the standards of the convention” (Merry 956). It has helped to bring about progress in countries like Bangladesh, where the law has recently been changed to prohibit sexual harassment (CEDAW at 30). CEDAW has the potential to capitalize on the work of the transnational advocacy network by acting as a mechanism to bring about change on the domestic level and the eventual internalization of the norm that violence against women is a human rights offense. The women’s transnational advocacy network demonstrated the power of ideas and mobilization at a level below the law. By building consensus and fostering cooperation through the UN conferences in the 1970s and 1980s, women activists and NGOs laid the groundwork for a transnational advocacy network that would create an issue frame for its agenda and a norm cascade. The emergence of a norm cascade on violence against women exposes the power of normative change in the absence of binding human rights law.

works cited
1. About the 16 Days. Center for Women’s Global Leadership. <http://www.cwgl.rutgers.edu/16days/about.html>.
2. Ahmed, Shamima, and David M. Potter. NGOs in International Politics. Bloomfield, CT: Kumarian Press, Inc., 2006.
3. Bunch, Charlotte. “Women’s Rights as Human Rights: Toward a Re-Vision of Human Rights.” Human Rights Quarterly 12 (1990): 486-98.
4. CEDAW at 30: CEDAW Success Stories. UNIFEM. <http://www.unifem.org/cedaw30/success_stories/>.
5. Dorsey, Ellen. “The Global Women’s Movement: Articulating A New Vision of Global Governance.” The Politics of Global Governance: International Organizations in an Interdependent World. Edited by Paul F. Diehl. Boulder, CO: Lynne Rienner Publishers, 2001.
6. Finnemore, Martha, and Kathryn Sikkink. “International Norm Dynamics and Political Change.” International Organization 52.4 (1998): 887-917.
7. Friedman, Elisabeth. “Women’s Human Rights: The Emergence of a Movement.” Women’s Rights, Human Rights. Edited by Julie Peters and Andrea Wolper. New York, NY: Routledge, 1995.
8. Jefferson, LaShawn R. “Refusing to Go Away: Strategies of the Women’s Rights Movement.” Human Rights Dialogue. Fall 2003. <http://www.cceia.org/resources/publications/dialogue/2_10/index.html/res/id=sa_File1/HRD_Violence_against_women.pdf>.
9. Keck, Margaret E., and Kathryn Sikkink. Activists Beyond Borders: Advocacy Networks in International Politics. Ithaca, NY: Cornell University Press, 1998.
10. Kelly, Liz. “Inside Outsiders: Mainstreaming Violence Against Women into Human Rights Discourse and Practice.” International Feminist Journal of Politics 7.4 (2005): 471.
11. Merry, Sally Engle. “Constructing a Global Law: Violence Against Women and the Human Rights System.” Law & Social Inquiry 28.4 (2003): 941-77.
12. United Nations. Forward Looking Strategies for the Advancement of Women to the Year 2000. Nairobi, 15-26 July 1985. Paragraph 258. <http://www.un.org/womenwatch/confer/nfls/Nairobi1985report.txt>.
13. Weldon, S. L. “Inclusion, Solidarity and Transnational Social Movements: The Global Movement Against Gender Violence.” (2004): 1-49.
14. Zwingel, Susanne. “From Intergovernmental Negotiations to (Sub)National Change: A Transnational Perspective on the Impact of CEDAW.” International Feminist Journal of Politics 7.3 (2005): 400.
Anand Varadarajan
The Leadership of the Zen Master: Phil Jackson
Given the highly competitive nature of sports, especially in the professional realm, teams and players are constantly searching for ways to augment their chances of success. For the owners of professional franchises, wins and losses are not the only issues at stake; the success of the team also dictates large sums of money, marketing, fan support, and in some cases, the emotional fervor of a city and a nation. In such a high-stakes environment, the need for effective leadership becomes apparent. “Head coach” figures are hired not only to bring a new perspective on team strategy, but perhaps more importantly, to nurture team chemistry and ensure that players remain motivated and focused. Instilling these intangible qualities, those that transcend the “x’s and o’s,” is perhaps the greatest challenge facing professional sports organizations today. The NBA (National Basketball Association) offers a wonderful opportunity to examine the fascinating interplay of money, personalities, and athletics. Two of the most successful franchises over the past two decades have been the Chicago Bulls and the Los Angeles Lakers. In the 1990s and 2000s, the Bulls and Lakers, respectively, were led by head coach Phil Jackson, and with his arrival came unprecedented success for both teams. Not only did Jackson lead these teams to a combined nine league championships, but he also created a unique environment that fostered team unity and maintained a high level of focus and motivation for years at a stretch. But what about Jackson’s game and personnel management makes him so successful? Why do his constituents – players, assistant coaches, and upper-level management – rave about his coaching style? A close look at Jackson’s coaching shows that he displays many of the effective leadership characteristics discussed by leadership scholars such as James MacGregor Burns and Garry Wills. And within the sports world, it is widely known that Jackson experiments with eccentric Buddhist and Native American ideas; in fact, sports announcers refer to Jackson as the “Zen Master.” While the nickname has spread in jest, it actually touches upon a dimension of understanding and strategy that defines Phil Jackson’s outlook on basketball. Phil Jackson espouses a unique spiritual philosophy about the game of basketball. His intelligence in applying that philosophy to various facets of the game – managing player personalities, motivating his players, and managing game strategies – makes him one of the most successful coaches in sports history.

The Formation of a Spiritual Philosophy

Before delving into Phil Jackson’s actual game and personnel strategies, it is critical to understand his philosophy and its underlying influences. From the outset of his career, Phil Jackson has articulated a goal that fuses team success – winning – with a transcendent spiritual satisfaction. In the opening pages of his autobiography, he writes about his mentality when he was first hired by the Chicago Bulls in 1989: “My dream was not to just win championships, but to do it in a way that wove together my two greatest passions: basketball and spiritual exploration… I sensed there was a link between spirit and sport” (Jackson 3). On the surface, the notion of a spiritual dimension in the arena of sports seems outlandish. How can one establish any meaningful connection between “spirit and sport”, and how could that connection possibly manifest itself in the actual planning and execution of the game? Answering
these questions becomes easier when analyzing the influences on Jackson’s philosophy: his Christian childhood, his exposure to the counterculture movement, and his early experiences as a player in the NBA. Moreover, understanding these influences becomes critical to any analysis of Jackson’s leadership because, according to leadership historian William Chafe, the private recesses and personal experiences in a leader’s life often manifest themselves in significant ways during the execution of leadership (Chafe 3). Thus, by delving into these three main experiences, one can begin to comprehend the intricate, spiritual philosophy that has pragmatic application in the game of basketball and in the Zen Master’s leadership. One of the primary influences on Jackson’s philosophy and his leadership style is his childhood experience growing up as a conservative Christian in North Dakota. Although today Jackson would not classify himself as a Christian, his life philosophy is undoubtedly influenced by Christian values. In particular, one can see how aspects of Jackson’s basketball philosophy trace their roots to the central tenets of his parents’ faith – Pentecostalism. This branch of Christianity stresses a personal connection with God, but more importantly, it emphasizes a dedication to certain ritual practices, such as intense study of the Bible, daily prayers, and treating peers with respect. While Jackson himself admits that the purely “religious” features of the faith were lost on him, the qualities that were engendered by the daily practice of the religion – selflessness, compassion, and commitment – remain extremely influential (Jackson, Sacred Hoops 29). In the game of basketball, Jackson carries the expectation that each of his players conduct themselves with the same selflessness, compassion, and commitment that his parents taught him to embrace. While Jackson’s conservative upbringing instilled certain characteristics, his adolescent years as a participant in the counterculture movement were also an extremely formative experience. The 1960s provided Jackson an opportunity to escape the rigidity of his provincial childhood through exposure to new ideas. In particular, the counterculture’s emphasis on intellectual exploration spurred Jackson to satisfy his curiosity about Native American culture. Growing up near an Indian reservation, Jackson was cognizant of the strange and mystical spirituality of Native American tribes, but his parents’ conservatism prevented him from truly appreciating these tribes. As biographer Roland Lazenby says, “because of the cultural divide, there was little opportunity to hold more than cursory relationships with these Indian children… but he was aware of them and filled with curiosity about them” (Lazenby 48). When he entered college at the University of North Dakota, Jackson seized the opportunity to finally satiate these intellectual curiosities by studying philosophy, psychology, and religion, with particular attention to the Lakota Sioux. Of particular interest to Jackson were the spiritual preparations of the Lakota, including battle rituals and a warrior mentality. In fact, some of these fascinations were so strong that Jackson incorporated them into his pregame “rituals” as both a player and a coach, and he continues to emphasize many aspects of Native American spirituality as the Lakers’ coach today. Perhaps the most visible counterculture remnant in Jackson’s coaching today, however, is Zen Buddhism.
This religion places heavy emphasis on aesthetics, breathing, meditation, and de-stressing. But what most lured Jackson was the religion’s exclusive focus on the present, or the “moment”. According to Zen Buddhists, proper living requires that the mind become aware of the present and unaware of the past or the future. To a certain extent, meditation required non-thinking more than thinking. This non-theistic, eccentric philosophy appealed to Jackson immensely; as he says, “to someone raised in a Pentecostal household – where attention was focused more on the hereafter than the here and now – this was a mind-boggling concept” (Jackson, Sacred Hoops 32). Despite his unfamiliarity with the practices of Zen at the time, the notions of Zen Buddhism have become embedded in Jackson’s coaching philosophy. Together, the Native American ideas and Buddhist philosophies combined to forge a strong spiritual sense within Jackson, even though the attention to “mental state” departed radically from the devotional emphasis with which his parents had reared him. With an understanding of the variety of Christian, Native American, and Buddhist influences in Jackson’s life, it is critical to comprehend how they helped generate a basketball philosophy. While Jackson had been an involved basketball player for much of his youth, it was only when playing for the New York Knicks that he found these strange philosophies beginning to gather momentum and basketball application.
Playing for an NBA franchise in a professional capacity propelled Jackson to find a competitive edge that he could hold over his opponents; diving into the spiritual and psychological aspects of the game provided such an avenue. As such, Jackson began experimenting with some of the meditation techniques he had learned about in college. After his first season as a Knick, Jackson framed the lessons he had learned over the course of an NBA season within the intellectual philosophies with which he had been grappling. In what he labels the “Holzman School of Management,” Jackson formulated three essential lessons:
Lesson one: Don’t Let Anger Cloud the Mind
Lesson two: Awareness is Everything
Lesson three: The power of We is stronger than the power of Me (Jackson, Sacred Hoops 34-35)
These lessons reveal two aspects of Jackson’s basketball philosophy. First, they elucidate the emphasis on focus and teamwork, two of the most essential elements for success according to Jackson. In particular, as a secondary player on the Knicks, Jackson understood that team success relied on a harmonious balance between the superstars and the role players. The second point these lessons articulate is the inherent connection between Jackson’s basketball philosophy and his other intellectual forays. Eliminating anger and embracing the group, lessons one and three, were at the crux of the Lakota warrior mentality and were consistent with the Christian ideals of his parents. The emphasis on awareness, lesson two, perfectly aligns with Zen Buddhist discourse. Quite clearly, Phil Jackson’s time as a New York Knick allowed these brewing intellectual concepts and sports notions to crystallize into a more coherent, spiritual basketball philosophy.

Managing Player Personalities: Taming the Superstars

One of the most daunting challenges facing NBA management and coaches today is ensuring team balance and chemistry. Achieving this entails catering to the stars of the team while at the same time compelling these stars to interact and behave cooperatively in a team environment. But as the NBA has become more market-driven and personality-dependent, these stars have become extremely difficult to manage. Ron Artest, Allen Iverson, and Rasheed Wallace are examples of players who came into the league with a sense of entitlement and utter disregard for the rules of the game and perhaps even the rules of society. Consequently, it becomes the coach’s burden to “tame the superstar” – that is to say, to mold both the player’s skills and personality so that he can flourish in a team environment. Phil Jackson has been blessed with some of the greatest talents the game has ever seen: Michael Jordan, Scottie Pippen, Shaquille O’Neal, and Kobe Bryant. And while he did inherit this talent, he should be credited with maximizing these players’ immense potential, managing their strong personalities, instilling a team-first mentality, and engaging in both transactional and transformational leadership to attain these goals. Jackson’s previous experiences as an adolescent undoubtedly influence his perspective on the importance of teamwork. Upon accepting his first head coaching job in the NBA in 1989, Jackson commented:

I vowed to create an environment based on the principles of selflessness and compassion I’d learned as a Christian in my parents’ home; sitting on a cushion practicing Zen; and studying the teachings of the Lakota Sioux. I knew that the only way to win consistently was to give everybody – from the stars to the number 12 player on the bench – a vital role on the team, and inspire them to be acutely aware of what was happening, even when the spotlight was on somebody else (Jackson, Sacred Hoops 4).
The first major task that Jackson faced when he joined the Bulls was managing their superstar shooting guard, Michael Jordan. Up until Jackson’s hire in 1989, Michael Jordan had been given free rein; he was allowed to dictate practice times, excuse himself from team events, and even bring family and friends on team planes and buses. With such leeway, the notion of a unified, cohesive “team” was
completely alien to any member of the Chicago Bulls organization. In changing this culture, Jackson “suspected that Jordan would respond to a mental challenge, if it was issued on a daily basis… suggesting paths and leaving Jordan and his teammates free to choose” (Lazenby 148). While such freedom and latitude may seem counterproductive for a team that was already struggling with chemistry issues, this approach demanded additional preparation, advance work, self-criticism, and collaborative discussion on the part of the players. For Michael Jordan in particular, this was a humbling process that required him to assess his own skills and, perhaps more significantly, to lead the team by example. By giving the players, and especially Jordan, ownership in team strategy and ultimately team success, Jackson essentially guaranteed that players would play to their fullest potential. Not surprisingly, these psychological tactics of self-correction perfectly aligned with Buddhist notions of self-awareness and introspection. While difficult for players initially, these strategies eventually led to unprecedented team success, as the Bulls won a championship within two years of Jackson’s arrival as head coach and went on to win six championships over an eight-year span. Jordan later commented on Jackson’s psychological schemes: “That type of psychological warfare sometimes can drive a person crazy, yet it can drive you to achieve, too. I like mind games, so Phil is great for me” (Lazenby 149). But Jackson’s ability to manage the superstar’s personality was not limited to these psychological games. On a different level, Jackson convinced Jordan that his legacy would be defined by championships, as opposed to individual accolades. Before Jackson’s arrival, Jordan had won every conceivable individual award: MVP, Defensive Player of the Year, Rookie of the Year, Slam Dunk Champion, etc. The only achievement missing from Jordan’s resume was a championship; Jackson made it clear that in order for Jordan to fill in that final slot on his resume, Jordan had to buy into the team concept. The team concept, as Jackson understood it and communicated it to Jordan and other stars, was deeply rooted in “the Lakotas’ concept of teamwork… A warrior didn’t try to stand out from his fellow band members; he strove to act bravely and honorably, to help the group in whatever way he could to accomplish its mission” (Jackson, Sacred Hoops 109). Thus, one can see how Jackson cleverly presented his overarching goal as an individual goal. Equating the self and the team was at the heart of Jackson’s basketball philosophy, and strategically persuading players like Jordan to embrace this concept demonstrates Jackson’s intelligence in applying his philosophy. Another superstar personality that Jackson had to manage was Shaquille O’Neal. When Jackson joined the Lakers in 1999, Shaquille O’Neal was recognized as one of the best centers in the NBA. The Lakers had awarded O’Neal one of the most lucrative contracts in NBA history, and just like Jordan, O’Neal was basking in preferential star treatment. But the Zen Master knew that he had to channel O’Neal’s charisma and talent to produce a better team product. And just as he had done in Chicago with Michael Jordan, Jackson articulated an exchange process, whereby the individual star bought into a team concept for some larger reward that served both the group and the individual.
Now Jackson began building a similar relationship with O’Neal… Shaq was motivated by the opportunity to score lots of points, so he [Jackson] fashioned a trade-off with the big center. If O’Neal would show the leadership Jackson wanted, then the triangle offense would give him the opportunity to score big numbers. (Lazenby 364)
Again, one can see the compromise that Jackson strategically accomplished. Shaq’s individual success depended on the leadership he displayed; better leadership from O’Neal would enhance both individual and team status. The final superstar that Jackson has had to deal with is Kobe Bryant. But unlike with Jordan or O’Neal, Jackson has had a rocky relationship with this superstar. Upon Jackson’s arrival in Los Angeles in 1999, Kobe Bryant was viewed as a role player – the guard who could complement Shaquille O’Neal in the quest for a championship. But as the team garnered success and Bryant’s skills matured, Bryant demanded the attention and leadership role of the primary option. This transition was an aspect of team chemistry that Jackson had never before dealt with, and the growing tensions between Bryant and the front office forced the frustrated head coach to quit in 2004. After his final season with the Lakers, Jackson writes:
Why was this relationship so difficult? Those answers will have to come another time. I do know that there were many occasions this year when I felt like there was a psychological war going on between us… Ultimately though, I don’t believe we developed enough trust between us to win a championship. (Jackson, Last Season 259)
Ironically, the Zen Master rejoined the Lakers in 2005, just a year after publicly declaring the lost trust between the coach and the superstar. Even more unexpectedly, Bryant and his former coach have worked together brilliantly since Jackson’s return, orchestrating Bryant’s first MVP award and two number-one seeds in a highly competitive Western Conference. But how was Jackson able to finally “tame” this superstar? As he did with Jordan, he put the onus for success squarely on Bryant’s shoulders. Jackson said in a recent interview that “Bryant had matured… using energy for the optimum benefit of the ball club” and “putting pressure on teammates” in a positive way (Jackson, Inside Hoops). These comments bear a striking resemblance to those made about Jordan when Jackson first joined the Bulls and challenged Jordan to improve himself and his teammates. This Zen Buddhist ethic of self-awareness and self-correction was once again implemented through a psychological “exchange” – the superstar gives leadership in exchange for individual benefits. Viewed from a purely intellectual perspective, Jackson’s handling of his superstars exemplifies what James MacGregor Burns would call transactional leadership. Transactional leadership occurs when:

One person takes the initiative in making contact with others for the purpose of an exchange of valued things. The exchange could be economic or political or psychological in nature… Their purposes are related, at least to the extent that the purposes stand within the bargaining process and can be advanced by maintaining that process. (Burns 19-20)
This theory of transactional leadership can be wonderfully applied to Phil Jackson’s coaching. Jackson encourages his players, especially his superstars, to sacrifice selfishness and to provide leadership for team-oriented basketball focused on winning. One should also see that the second part of Burns’ definition holds true in Jackson’s coaching. The high level of success, which is bargained for, can only be maintained through continuous application of this transactional process. It is important to note that the “transactions” Jackson initiated with his players were not singular ordeals; they were drawn-out, complicated, recurring processes that continuously challenged players like Jordan, O’Neal, and Bryant to maintain their leadership and perpetuate team success. Furthermore, secondary players, such as Dennis Rodman and Scottie Pippen, were also integral parts in the execution of these trade-offs, as their consent and subsequent efforts helped legitimize team harmony (Jackson, NBA at 50). All in all, the complexity of these “psychological” exchanges and the maintenance of these transactional processes clearly demonstrate how Jackson fits the mold of a transactional leader.

Establishing a Mental Advantage: Motivating Players

Basketball players and coaches know that maintaining a mental edge over an opponent often dictates the result of a game. Phil Jackson, in particular, was acutely aware of the necessity of mental preparation, focus, and motivation. Thousands of books have been written on effective motivation techniques, and the stereotypical cultural image of motivation is the locker-room halftime speech that incites players to find a new drive to succeed. But the Zen Master follows a much more subtle approach, employing a variety of unorthodox psychological and behavioral maneuvers to bring out the best in players. By looking at the application of these subdued yet highly effective psychological tactics, one can see how Jackson raises his leadership to intellectual and transformational levels. Jackson places extraordinary emphasis on the psychological well-being of his players.
As per his spiritual philosophy, it is essential for players to remain mentally sharp throughout their preparation for and execution of the game. To establish this mental acuity, Jackson has his players attend sessions with psychologist and Zen enthusiast George Mumford. While the specifics of Mumford’s work remain a “trade secret,” players have divulged that the sessions entail stress reduction through Zen, tai chi, yoga, and common sense (Lazenby 33). Most of Jackson’s players are particularly enthusiastic about these sessions, and one player, Bill Wennington, describes their effectiveness: “He tries to get your basketball life, your whole life, in a peaceful, relaxed state so that you can compete. He doesn’t want you to be stressed out about anything” (Lazenby 34). As the quotation illustrates, Mumford’s sessions align perfectly with Jackson’s Zen philosophy of upholding a calm, focused, and unstressed approach to basketball. But more importantly, these sessions demonstrate the Zen Master’s creativity and intelligence in finding an effective teacher to instill his spiritual philosophy. In addition to using Mumford’s sessions as a means to de-stress his players, Phil Jackson immerses his players in aspects of Buddhism for both motivation and focus. In particular, he encourages his players to adhere to specific elements of the Middle Path of Buddhism. According to Buddhist doctrine, strict observance of the Middle Path, or Eight-Fold Path, will lead to the Buddhist equivalent of salvation, nirvana. Jackson has extracted two elements of this path that he finds significant for the basketball world: Right Thinking and Right Action (Turner 4). Right Thinking mandates becoming completely absorbed in the present moment; Jackson finds that embracing such a concept is critical in a sport where dwelling on past mistakes – turnovers, missed shots, bad defense – can adversely impact present or future play. The effectiveness of Right Thinking was demonstrated “when Jordan spoke of playing ‘in the moment’ as he performed spectacularly in carrying the Bulls to their later championships… voicing the theme [of Right Thinking]” (Lazenby 33). Right Action, or performing to the fullest of one’s ability, is also critical to Jackson’s approach. In a league where negligible amounts of talent separate individual players and teams, playing with passion, dedication, and energy is oftentimes the difference between a win and a loss. Clearly, by connecting his players with these elements of Buddhist philosophy, Jackson has found tremendous success in the NBA. But it is important to understand the methods through which Jackson communicates these rather complex ideologies to individuals who are oftentimes resistant to the nuanced intellectual points of the game. After all, how could a basketball coach inculcate these spiritual and intellectual principles when the team’s focus is on basketball? Answering this question involves delving into Jackson’s brilliant use of media. First, the Zen Master gives out books and articles for players to read on team buses and planes. These books cover a variety of topics, but some, such as Practicing the Presence, Way of the Peaceful Warrior, and Zen Mind, Beginner’s Mind, are books that Jackson himself had read in formulating his spiritual philosophy (Jackson, Sacred Hoops 125). But Jackson’s true ingenuity is in the film room.
Understanding that players spend inordinate amounts of time analyzing and scrutinizing game footage, Jackson oftentimes finds it productive to take a break and watch poignant clips from movies to drive home certain messages. For example, in preparation for a series against Detroit in 1990, Jackson said:

I came up with the idea of using the Wizard of Oz as a teaching device. The Pistons had been waging psychological warfare against us – and winning. I needed to turn the tables by making the players aware of how Detroit’s roughneck style of play was affecting the team as a whole. So I mixed vignettes from the Wizard of Oz with clips from Pistons games for our next tape session (Jackson, Sacred Hoops 107).
This quotation reveals Jackson’s psychological insight and his overwhelming emphasis on the mental aspects of the game, but it also illustrates Jackson’s ability to find innovative and crafty means of communicating rather complex notions. Jackson’s most infamous use of “propaganda” came in a 2000 playoff series between the Los Angeles Lakers and the Sacramento Kings. Part of Jackson’s philosophy, and of the discourse he had been imparting to his team, was “identifying the soul of the opponent,” much like the Lakota warriors did
before engaging in battle. It was necessary to understand the purpose and energy with which the opponent came into battle. Thus, in preparation for this playoff series, Jackson spliced clips of Edward Norton’s racist character from American History X alongside clips of the Kings’ starting point guard, Jason Williams, whose tattoos and stature bore a striking resemblance to Norton’s character. Jackson also displayed pictures of Adolf Hitler alongside the Kings’ coach Rick Adelman, whose stature and mustache bore a striking resemblance to the German dictator’s (Lazenby 31). Granted, such tactics seem inappropriate and possibly even malicious, but the underlying psychology is undoubtedly ingenious. Jackson’s tactic injected a moral element into what was initially a basketball game of insignificant moral consequence. He motivated his players by appealing to a higher conscience, by compelling them to “identify a soul” within the opponent that was consumed by hatred and evil. The Lakers went on to win the series, but one can only speculate to what extent they were motivated by these video clips. Nevertheless, Jackson’s use of propaganda in this instance displays an uncanny ability to motivate players using unconventional and provocative psychological tactics. Jackson’s implementation of creative tactics to focus, motivate, and psychologically engage his players demonstrates intellectual leadership as described by both Burns and scholar Garry Wills. As Burns argues, an intellectual leader is one who shows the “capacity to conceive values or purpose in such a way that ends and means are linked analytically and creatively” (Burns 143). The Zen Master’s cerebral approach to the game, combined with his creativity in implementing that approach, testifies to Jackson’s skill in “analytically and creatively” linking his spiritual thinking with the game of basketball. Even though Burns and Wills discuss intellectual leadership in a political context, Jackson possesses that quality shared by all intellectual leaders, a quality that allows him not only to conceive of revolutionary ideas and approaches, but also to adapt and execute those ideas in a specific realm – basketball. But the Zen Master’s appeal to a “higher morality”, as in the case of the Sacramento Kings, shows that he strives for a higher, transformational leadership as well. Transformational leadership results in “mutual stimulation and elevation that converts followers into leaders and may convert leaders into moral agents” (Burns 4). This definition certainly seems applicable to Jackson, whose coaching has augmented the leadership of players like Jordan, O’Neal, and Bryant. Moreover, by elucidating an extremely spiritual and morality-centric philosophy, Jackson indeed makes his players “agents of morality”. While some may argue that Jackson’s constituency is not large enough for him to be considered an intellectual and transformational leader, it is important to focus on his behavior within his context. Within the context of a basketball team and the framework of an NBA franchise, Jackson has undoubtedly instilled a revolutionary philosophy and initiated cultural transformations that invite comparison to the political and historical figures that Burns and Wills discuss.

Managing the Game

Up to this point, the focus has primarily been on how Jackson has intelligently applied his philosophies before and after game situations.
But it is essential not only to probe the abstract psychological tactics of the Zen Master, but also to see how these Eastern and Native American philosophies manifest themselves in actual game strategy and management. By looking more closely at the techniques and strategies that Jackson preaches, along with his conduct during games, one can begin to put together the Zen Master’s comprehensive basketball philosophy and better observe the tangible effects of his leadership. The first game strategy that should be addressed is the triangle offense. The triangle offense’s architect is Jackson’s long-time assistant Tex Winter, but Jackson was the first coach to install the system with any success at the professional level. The triangle offense, unlike “run n’ gun” and “iso-based” offenses, relies on flexibility, finesse, and timing. According to Jackson, the system is designed to compete with opponents who may have physical advantages but who would be vulnerable to the precision and intelligence demanded by the system. Perhaps more importantly
to the Zen Master, however, this system epitomizes the spiritual philosophy he espouses:

It embodied the Zen Christian attitude of selfless awareness. In essence, the system was a vehicle for integrating mind and body, sport and spirit in a practical, down-to-earth form that anyone could learn. It was awareness in action. The triangle offense is best described as five-man tai chi. (Jackson, Sacred Hoops 87)
Through this quotation one can see the tangible impact that Jackson’s intellectual philosophies have on the game: his meticulous attention to the Buddhist aesthetic informs his actual execution of basketball strategy. Perhaps the most intriguing aspect of Jackson’s behavior can be observed during the actual game. Throughout the course of a game, Jackson remains unusually calm and in control; rarely does he bicker with players, other coaches, or even the referees. During stretches of the game when his team struggles, Jackson oftentimes refuses to call timeouts to break the opponent’s momentum. These strategies are baffling for opponents and sometimes even for Jackson’s own players, but despite apparent indifference on Jackson’s part, this hands-off approach is in fact extremely calculated and consistent with his spiritual philosophy. His refusal to call timeouts during stressful portions of the game stems from the Zen self-correcting and self-empowering strategies he implements in practice. According to Jackson, if players are expected to devise solutions in practice, they carry a similar burden during the game; it is not the coach’s responsibility to bail them out with a timeout. And even during timeouts, Jackson is never seen chastising his players or feverishly reprimanding past mistakes. Instead, he uses timeouts as an opportunity for players to practice the Zen routine of “visualization.” The exercise is designed to “help them cool down mentally as well as physically… I [Jackson] encourage them to picture themselves someplace where they feel secure. It’s a way for them to take a short mental vacation before addressing the problem at hand” (Jackson, Sacred Hoops 120). Thus, Jackson’s psychological discourse and meditation tactics find relevance and application even during tense moments of the game. Jackson’s demeanor on the sidelines is part of a larger strategy to instill self-awareness in his players. Jackson’s teams’ unparalleled success in close-game situations is a testament to the effectiveness of these methods, and one critic comments on Jackson’s approach:

A major benefit of this method is that it reduces the chances of coach dependency, and transfers more independence and responsibility to the athletes, who are invariably in a better position than the coach to recognize potential solutions to game related problems anyway. (Turner 3)
All in all, Jackson's hands-off approach equips his players with a spirituality that helps them tackle game situations with poise and focus. To better understand Phil Jackson's refined in-game methodologies, it is useful to compare his tactics to those of other notable basketball coaches. In fact, given the highly unorthodox manner of Jackson's approach, one can set up an opposing dichotomy, or what Wills refers to as an "anti-type." Anti-types allow one to understand a leader by presenting another leader who "represents the same qualities by contrast" (Wills 20). The NBA coach who best epitomizes the Zen Master's anti-type is former New York Knicks and Houston Rockets head coach Jeff Van Gundy. Van Gundy is the perfect foil to Jackson; he contests every foul, reprimands his players incessantly, and uses timeouts at every available opportunity. Despite lauding the intellectual aspects of the game, Van Gundy does not engage in any moral or spiritual exercises while coaching. Moreover, Van Gundy contrasts with Jackson in his success: Jackson has won nine championships, while none of Van Gundy's seasons has culminated in a league title. But the greater benefit of comparing Van Gundy to Jackson is that it allows basketball observers to pin down the factors behind success and to see how coaching styles manifest themselves in the players' actual execution. For example, several NBA critics have noted how Van Gundy's teams adopt his frenetic, sometimes chaotic energy on the court. Players like Latrell Sprewell sometimes projected Van Gundy's energy in a negative fashion by picking up technical fouls and flagrant penalties, and even by entering into fights. One can contrast this uncontrolled energy with the moderated aggression of Jackson's player Dennis Rodman. Like Sprewell, Rodman was known to have a belligerent personality before joining the Zen Master's team in the mid-1990s. Yet Jackson was able to subdue Rodman's temper through calm, psychological strategies that channeled Rodman's pent-up aggression into positive energy on the basketball court. Thus, looking at Jackson alongside an anti-type reveals how coaches can adversely or positively affect the game and its players, allowing fans, critics, and analysts to assess what truly makes a great coach.

Assessing the Zen Master's Legacy: Blessed by Circumstance or Pioneer of Fortune?

After acquiring a sense of the Zen Master's personality, ideologies, and strategies, it is imperative to place him in the larger context of the basketball world. He has already been inducted into the Hall of Fame, but what is Phil Jackson's legacy as a legendary coach? The debate over Jackson's legacy often comes down to that of an event-taking person versus an event-making person. Jackson's critics quickly point out that his success can be attributed to coaching three of the transcendent talents of this generation: Jordan, O'Neal, and Bryant. In essence, they argue that Jackson was an event-taking person, a person who was blessed by circumstance and simply nudged elite players into the echelon for which they were already destined. But Jackson's supporters respond by asserting that in the fifteen combined seasons these players played before Jackson's arrival, none of them won a championship. Thus, these supporters claim that Jackson was an event-maker, actively manipulating and altering circumstance in pursuit of a goal. The reality probably lies somewhere in the middle: Jackson was fortunate to work with superb players, but he displayed extraordinary intelligence in applying a spiritual philosophy to refine that talent and maximize his team's potential. What is more astounding about Jackson's nine championships, however, is the era in which they have been won. The league has come to be defined by transient player loyalties, shifting player contracts, and volatile financial logistics. Amidst all this turmoil, how has a strange, unorthodox coach managed to capture nine of the past eighteen league titles? Perhaps Jackson's long-time friend and assistant Tex Winter put it best when he said, "Phil recognizes that there are a whole lot of things more important than basketball… He doesn't take himself too seriously… I think this is his strength, the way he handles players and his motivation, his personal relationship with players" (Lazenby 22). The quote captures in a few sentences how Jackson's success can be attributed to his intelligence, his philosophies, and his superior interpersonal skills. Jackson is undoubtedly one of the strangest, yet most fascinating, head coaches in the history of professional sports. His spiritual philosophy, and his astute application of that philosophy in managing player personalities, motivating players, and handling in-game situations, make him a leader worth studying. Few have been able to emulate his success, either in terms of winning or by imitating his coaching style. In fact, one could argue that Jackson is the most successful type-B-personality head coach in the history of professional basketball.
Ultimately, a careful look at Jackson's leadership allows for an analysis of the Zen Master's skills on a sports level and, perhaps more impressively, on an intellectual level as well. Jackson's coaching displays qualities of both a transactional and a transformational leader, and his creative and analytic articulation of an extremely complex ideology also places him in the sphere of an intellectual leader. While the event-taking and event-making nature of his legacy will always be disputed, his continuing success and adherence to such an unorthodox ideology make him a leader for fans, players, and scholars to appreciate.

works cited
1. Burns, James M. Leadership. New York: Harper & Row, 1979.
2. Chafe, William H. Private Lives/Public Consequences: Personality and Politics in Modern America. Cambridge, Mass.: Harvard UP, 2005.
3. Jackson, Phil. "Phil Jackson Interview." Inside Hoops.
4. Jackson, Phil. Sacred Hoops: Spiritual Lessons of a Hardwood Warrior. New York: Hyperion, 1995.
5. Jackson, Phil. "The NBA at 50: Phil Jackson." National Basketball Association.
6. Lazenby, Roland. Mindgames: Phil Jackson's Long Strange Journey. New York: Bison Books, 2007.
7. Turner, David. "Phil Jackson: Zen and the Counterculture Coach." 2 May 2009 <https://uhra.herts.ac.uk/dspace/bitstream/2299/1346/1/900740.pdf>.
8. Wills, Garry. Certain Trumpets: The Nature of Leadership. New York: Simon & Schuster, 1995.
Joan Soskin
A New Gangster’s Paradise: The Diglossic Traits of Tsotsitaal and the Concept of South African Unity
The voice of the gangster so often goes unheard in academia: it is undeniable that we generally associate the urban vernacular with violence, sexuality, and the intrusive bass beats of hip-hop. "Slang" tends to lie beyond the parameters of the academic, surrounded by a stigma that cannot be washed off. And yet this academically ostracized gangster has provided us with one of the most riveting linguistic circumstances of our time. For within the field of linguistics, we are often confronted with evidence of social phenomena: language can provide the unifying gravity of communication as well as the catalyst for societal explosions. The development of Tsotsitaal (or "gangster speak") in South African society exemplifies both the unities and conflicts brought about by language. In fact, this exact occurrence has been referred to as "one of the most important sociolinguistic developments in the 20th Century" (Msimang) due to its rapid establishment as a lingua franca in urban areas alongside English. Academics are just beginning to recognize the wide-reaching effects of this developing language on its speakers, the South African youth of Johannesburg and the Soweto Township. Its demographic is reflected in its many different names; categorized as a "pidginized variety of Afrikaans" (Holm) or an "urban mixed code" (Rudwick), Tsotsitaal operates under the guises of isiCamtho, Fly taal, or Flaaitaal, among others. In the course of less than fifty years, Tsotsitaal has become the vernacular of the youth, a language under which they can unite against the persistent forces of apartheid that are only now beginning to fade into the background. As the gravity of this contribution to South African history and society evidences itself in this divided country, we must wonder if gangster speak is really so bad after all.

A Brief History of South Africa

Since the early days of colonization, South Africa has been under the influence of racial tensions, segregation, and outright violence. The history leading up to apartheid (Afrikaans for 'separateness': apart plus the suffix -heid, '-hood') is a long and sordid one, but a general outline of its social groups and languages is crucial to a full understanding of the current social situation and pervading tensions. The story of South Africa begins in the 4th century with the settlement of the Bantu people alongside the indigenous San and Khoikhoi. Their languages, the Bantu languages, belong to the Niger-Congo language family and are still spoken in a number of different varieties today. Although eventually a number of different tribes settled the area and contributed to the African population, the word 'bantu' degenerated into a pejorative term for Blacks during the apartheid regime; the Oxford Dictionary of South African English describes its use as 'obsolescent' and 'offensive' because of its strong negative associations during this time ("Bantu"). The literal definition, however, simply means 'people'. Likely, this etymological transformation can be attributed to the introduction of white settlers to South Africa in the 17th century. In 1652, Jan van Riebeeck founded the Cape Colony on behalf of the Dutch East India Company. In 1795, a struggle began between British forces and the Netherlands, and in 1806 the colony was officially ceded to the British. It was in 1816 that the native people, the Zulu, first formed forces under Shaka Zulu and began fighting back. They eventually drove the Boers out of the Cape Colony, and in 1852 won limited self-government for themselves. Unfortunately, the discovery of gold and diamonds in the Transvaal spurred a new surge of white colonization and violence, and fighting resumed until the 1902 Treaty of Vereeniging. In 1910, the Union of South Africa officially came into being, consisting of the British colonies in addition to the Boer republics of the Transvaal and the Orange Free State. It is here that the development towards apartheid began to snowball. In 1913, the Land Act was introduced to prevent blacks from buying land, and in 1948 apartheid officially began at the hands of the National Party (NP). Under the Group Areas Act, whites and blacks were legally segregated in 1950; that same year, the African National Congress (ANC) under Nelson Mandela began a campaign of civil disobedience. In the 1970s, over three million blacks were forced to resettle in areas designated 'black homelands', the ANC was banned, and apartheid was fully entrenched. Likewise, the unbanning of the organization in 1990 marked the official end of the apartheid era, though racial and societal tension persists throughout South Africa still today.

Accordingly, the linguistic structure of South Africa must be viewed against the backdrop of these societal and racial tensions. The social strains that are remnants of apartheid stratify language breakdowns along racial and ethnic boundaries. Today South Africa has eleven official languages: isiXhosa, isiZulu, Siswati, isiNdebele, Sepedi, Sesotho, Setswana, Tshivenda, Xitsonga, Afrikaans, and (clearly dominant) English. The majority speak isiZulu or isiXhosa in their home environments, and yet English and Afrikaans, the languages of the white settlers, continue to serve as the lingua franca in trade and government. In a state already suffering from a people divided, language barriers such as these perpetuate segregation and inequality. The predominant use of English and Afrikaans in higher socioeconomic circles is thus a tool to preclude involvement from the lower classes. In his Gramática Castellana, the famous humanist Antonio de Nebrija writes, "Language has always been the perfect instrument of empire." His words refer to the notorious standardization and nationalization of language in divided 15th-century Spain, a country that experienced a marked increase in national unity from this linguistic manipulation. And, as history so often repeats itself, perhaps South Africa needs to utilize this "instrument" as Spain did, to forge national unity through homogenization of language. To this end, South Africa's youth have taken charge of their national destiny in a linguistic sense. Tsotsitaal became a protest language during the apartheid era in the famous chants and songs sung by Sophiatown protestors: "ons dak nie! ons phola hie!" (Miller). A dictionary of Flytaal defines the operative word in this sentence, dak, as a term for leaving or departing (Molamu); the sentence thus translates roughly to "We're not leaving! We are staying here!" With phrases like these, demonstrators voiced their opposition to the government during apartheid protests in their own words and language. This tactic further cemented unity in their collective cause, making their demonstrations that much more effective. Amazingly, they have created their own sort of lingua franca, a linguistic conglomeration of many facets of South African society and history.
Tsotsitaal combines elements of many of the languages with which its speakers had contact: many high-frequency words and phrases from Afrikaans, but also elements from isiZulu and isiXhosa. Loanwords from all these languages have been adopted with slight or extreme semantic modifications, reconstructions, or redefinitions. In this way, the language has grown to represent a large portion of the South African population, uniting them under one linguistic umbrella. Tsotsitaal is now the language used most often in social interaction among the youth, the new lingua franca of South Africa, and yet it originated as a street language: "Those who created it were also motivated by participation in common activities, particularly crime" (Msimang). Due to its delinquent roots, the language continues to be viewed in a negative light by academia and by society's elders. This virtual barrier between the nationally recognized languages—for the purposes of this comparison, Afrikaans—and Flytaal bears resemblance to the constructs of a diglossia. Diglossia is defined as a linguistic situation "where two varieties of a language exist side by side throughout the community, each having a definite role to play" (Ferguson). In many ways, this linguistic phenomenon represents the cultural stratification between the accepted upper class and the rejected lower class that exists in South African society.
A Discussion of Diglossia

The societal phenomenon of diglossia, from the Greek for 'two tongues', exists in cultural settings of apparent social stratification. In his groundbreaking 1959 article, Ferguson defines the term as follows:

Diglossia is a relatively stable language situation in which, in addition to the primary dialects of the language (which may include a standard or regional standards), there is a very divergent, highly codified (often grammatically more complex) superposed variety, the vehicle of a large and respected body of written literature, either of an earlier period or in another speech community, which is learned largely by formal education and is used for most written and formal spoken purposes but is not used by any section of the community for ordinary conversation.
Since its early introduction, the concept of diglossia has evolved and come to include a number of characteristics defined by other linguists. Considering that the term bears such close relation to bilingualism, these specifications have been developed in order to differentiate between the two. The variables are function, prestige, literary heritage, acquisition, standardization, stability, grammar, lexicon, phonology, distribution, and the conditions that favor its development (Schiffman).

Function—In regards to function, diglossia differs from bilingualism in that the H and L languages are used for different purposes and in different social settings. In Ferguson's initial investigation of Haiti, he created a table of categories denoting which language (High or Low) would be used in each situation. The different circumstances follow a relatively stable pattern: academic contexts require the use of the H language, whereas social settings enable speakers to use the L vernacular. In the case of Afrikaans versus Tsotsitaal, this differentiation is solidified even in the name of the L language—defining it as 'gangster speak' sets boundaries between the two languages and establishes them in their social purposes. Moreover, as Tsotsitaal is a vernacular exclusively of the youth, language barriers prohibit inter-generational conversation in this tongue.

Prestige—In addition to simply being used in different settings, diglossia requires that the H language be used in contexts of greater social stature, such as literature, academia, or public speaking. Conversely, the L-variety is the language used only in slang and felt to be degenerate. This latter description applies directly to Tsotsitaal, which is defined linguistically as a degenerate Afrikaans.

Literary Heritage—A corollary to the concept of prestige is the issue of literary heritage. In most diglossic languages, the literature is composed in the H-variety, whereas the L-variety is reserved for restricted genres of poetry, music, or in-character dialogue. For example, in Gavin Hood's Oscar-nominated Tsotsi, the gangster characters converse amongst themselves in Fly-taal with English subtitles, although the other characters speak in English rather than the more realistic Afrikaans. In this case, the director made the conscious choice to use the gangsters' dialect as a way to reinforce their degeneracy.

Acquisition—Although Tsotsitaal is the language of the youth, its speakers learned the language of their parents' generation as their first (L1) language. This is one of the aspects in which Tsotsitaal breaks from the general construct of the diglossic language. Generally, the L-variety is the variety learned first, whereas the H-variety is learned through schooling as a second (L2) language. Because the current generation of youth are the first to speak Tsotsitaal fluently, it has not yet begun to mimic this aspect of diglossia. However, it is highly likely that when they begin to have children, they will not switch back to speaking Afrikaans, and their children will acquire the L vernacular as their L1 language.

Standardization—The next criterion is that the H language is standardized through dictionaries, texts, et cetera, but the L language is not. This holds true for Tsotsitaal, which originated as a spoken language and as such does not yet have a written standard.
Molamu's dictionary includes a note in its introduction regarding the difficulty this created in the writing of such a dictionary, admitting that he took liberties in choosing which spellings and pronunciations to include. Although incomplete at only some 3,000 entries, this dictionary takes the first step towards the standardization of the language.

Stability—The stability criterion states that diglossias are generally stable, persisting for centuries or longer.
Again, the newness of the language means that it does not quite meet this criterion; this is a test it will have to pass with time.

Grammar—According to the general rules of diglossia, the grammatical structure of the H-language is supposed to be much more complex than that of the L-variety. In this case, isiCamtho derives much of its grammatical structure from colloquial Afrikaans. For this reason, it leaves out the difficult grammatical structures that are used only in Afrikaans literature or academia. Without the more complex tense systems, gender systems, and syntax, the grammatical structures of Tsotsitaal are significantly less complex than those of Afrikaans.

Lexicon—The only criterion for lexical systems in diglossia is that they are generally shared. Lexically, Flytaal is a melting pot of phrases taken from isiZulu, isiXhosa, English, and Afrikaans. The second lexical criterion—that words or phrases be adopted with slight modification—is also fulfilled in this case. For example, the word chomie (n.) in Tsotsitaal means 'friend, pal'; this term may have been derived from English 'chum', referring to a 'friend' (Mesthrie). One observation after paging through Molamu's dictionary: many of the words, especially those derived from other languages, refer to criminal acts, a reminder of the origins and initial intent of the language. A number of instances of this appear in the dictionary. Derived from isiXhosa comes bangalala (n.), a term for the Civic Guards of Western Native Township, thought to derive from isiXhosa ukubangulula, meaning 'to search out, to discover, to interrogate or examine closely'. isiZulu contributes baqa, a term which specifically means 'to be caught in the act of something'; it is thought to derive from the isiZulu bhaqa, which means 'torch', and baqa, which means 'to squash'. From Afrikaans we have mamiere (n.), referring to official documents of identification and deriving from papier.

Phonology—Phonemically, Tsotsitaal continues to be a hodgepodge of the languages from which it was born. In many cases, however, it adopts the phonemic structure of Afrikaans when spoken, continuing to follow the pattern of diglossia. Small variations include the lengthening of [ʌ] to [a] and the loss of the initial [u] that often occurs in Afrikaans (Mesthrie).

In light of these specific criteria, the case of Afrikaans and Tsotsitaal appears to generally follow the pattern of diglossia, but this analysis is limited by the fact that Tsotsitaal is such a young language and cannot yet be defined strictly under these rules. For this situation to truly be defined as a diglossia, the H language could not be the mother tongue of any of its speakers; in this case, Afrikaans is the L1 of the parent generation, and thus Afrikaans remains the L1 language for the youth. Considering that it meets the other requirements, this examination of Tsotsitaal and Afrikaans postulates that, with time, Tsotsitaal's current speakers will pass the language on to their children as their L1 language, thus molding it to the other criteria listed here. This process is socially significant in that, within the span of one generation, Tsotsitaal will transition into a language in which the formerly oppressed peoples of South Africa can communicate.
As in the aforementioned case of Spain's linguistic homogenization, the completed transition into a common language fitting the diglossic mold will create an intensified sense of unity among the displaced and previously oppressed peoples of South Africa, particularly those living in the impoverished townships. Although the specificities of the diglossic framework are not themselves crucial to this linguistic phenomenon, they represent past successes of language as a tool for society, and in this way they become crucial to South Africa's development.

Social Effects of the Youth Vernacular

Tsotsitaal can be described as an urban vernacular, or a "pidginized urban variety of Afrikaans used exclusively by Africans" (Sebba). An urban vernacular is a type of language that is specific to an urban setting, often developed by the youth. As mentioned before, Tsotsitaal was developed in the years of apartheid as a means of secret communication among youth. Since that time, it has evolved to become the vernacular of the young generation as a whole, losing most of its negative connotations and redefining the Tsotsi identity into a modern image of youth and progressivism (Rudwick). Contributing to this image of modernity are two main factors: the opacity of the language and the unity that it creates. Speaking to the former, although the language has much in common with Afrikaans and other languages spoken in South Africa, its "innovative character, not to say multilingual virtuosity, has given Fly Taal the aura of urban cool and the advantages of a language that can be totally opaque to outsiders" (Sebba). Tsotsitaal forms a language barrier between its speakers and others, creating an atmosphere of simultaneous exclusivity and unity. This latter concept of unity is the most defining effect that Tsotsitaal has, and could continue to have if its standardization and diffusion amongst South Africa's youth continue. In terms of diffusion, Tsotsitaal continues to spread among the youth through pop music such as Kwaito. This new type of music, a form of expression that the youth can claim entirely as their own, is "…the soundtrack of these aspirations, and its future form is…the slang called tsotsi-taal, gangster talk" (Houghton). Through their language, the youth are able to communicate with one another and to form a cultural identity separate from apartheid, a privilege that their parents did not have: "the young South Africans [that are] now hitting drinking age are the first to grow up without the mental segregation that came with apartheid" (Houghton). Through their language and music, they are fully able to create a unity that does not rely on the constructs of apartheid, a process facilitated by this language that is unique and universal to them.

Conclusion

As South Africa works towards a national identity, its residents struggle with the burden of linguistic barriers. What the nation really needs is a common language to unite its people in a universal cultural identity. Perhaps this language already exists in Tsotsitaal, the gangster prose, a ready-made language waiting to be adopted by the greater society. This urban vernacular, although it originates in the dingy ghettoes of Soweto and their criminal activities, has come to represent so much more. So let us not disregard Tsotsitaal as some degenerate slang, but see it as the nucleus of a new language. Let's start by taking those words which are truly South African—babalaas, braavileis, tshaila—and proudly integrate them into the language that we speak most often (Walton).
For centuries, South Africa has remained a nation ethnically and linguistically divided, searching for a sense of national unity amongst its eleven official languages. Today, through Tsotsitaal, the youth have found a solution of their own creation, uniting their generation under one linguistic umbrella. And so it may be that this language of the delinquent teen, the slang of Soweto's slums and the apartheid hoodlum, will be a force that betters society; perhaps, through the voice of the gangster, South Africans can find paradise in linguistic unity.

works cited
1. Deumert, Ana. Language Standardization and Language Change: The Dynamics of Cape Dutch. Boston: John Benjamins Company, 2004. 63-67.
2. Ferguson, Charles A. "Diglossia." Word (1959).
3. Freschi, Federico. "Postapartheid Publics and the Politics of Ornament: Nationalism, Identity, and the Rhetoric of Community in the Decorative Program of the New Constitutional Court, Johannesburg." Africa Today (2007): 27-44.
4. Holm, John A. An Introduction to Pidgins and Creoles. Cambridge, UK: Cambridge UP, 2000. 81-85.
5. Houghton, Edwin. "Post Apartheid Pop." Utne July-Aug. 2008: 32-34.
6. Kamwangamalu, Nkonko M. "The Language Planning Situation in South Africa." Current Issues in Language Planning 2 (2001): 361-445.
7. Lubliner, Jacob. "Reflections on Diglossia." Thesis. Berkeley, CA: University of California P, 2004.
8. Meierkord, Christine. "Black South African Englishes - Towards a Variationalist Account." Thesis. Erfurt, Germany: ESSE, 2005.
9. Miller, Andie. "From Words into Pictures: In Conversation with Athol Fugard." Eclectica Magazine. Oct.-Nov. 2006.
10. Molamu, Louis. Tsotsi-taal: A Dictionary of the Language of Sophiatown. Pretoria, University of South Africa: Unisa P, 2003. 22.
11. Norris, Shane A., and Robert W. Roeser. "South African-ness Among Adolescents: The Emergence of a Collective Identity within the Birth to Twenty Cohort Study." The Journal of Early Adolescence 28 (2008): 51-68.
12. Rudwick, Stephanie. "Township Language Dynamics: isiZulu and isiTsotsi in Umlazi." Southern African Linguistics and Applied Language Studies 23 (2005): 305-17.
13. ----. "'Zulu, we need [it] for our culture': Umlazi adolescents in the post-apartheid state." Southern African Linguistics and Applied Language Studies 22 (2004): 159-72.
14. Schiffman, Harold. Diglossia. Philadelphia, PA: University of Pennsylvania P, 1999.
15. Sebba, Mark. Contact Languages. New York, NY: Macmillan, 1997.
16. Van Rooy, Bertus, and Marné Pienaar. "Trends in Recent South African Linguistic Research." Southern African Linguistics and Applied Language Studies 24 (2006): 191-216.
17. Walton. "Let's Make Tsotsitaal the National Language." Weblog post. Red Star Coven. 31 Jan. 2005. 8 Dec. 2008 <http://redstarcoven.blogspot.com/2005/01/lets-make-tsotsitaal-national-language.html>.
Collin Kent
Rationalizing Inequality: Secularization and the Stigmatization of the Poor
The discourse on poverty reverberates throughout social, political, and religious debate today, as welfare critics champion economic independence and sometimes label the poor irresponsible and lazy citizens. Most of these cavilers cast their arguments in ignorance of past frameworks and earlier notions of poverty. Modern conceptions took centuries to craft, moving away from traditional Christianity towards a secularization that branded the poor as a sore on the body politic. Economic theorists such as Thomas Malthus and Adam Smith turned the human understanding of society towards a rational, calculative end, which culminated in the project of evolutionary progress in Herbert Spencer's 'social Darwinism,' a cause that condemned the poor as inferior beings counterproductive to the human potential.

Saint Francis, the Protestant Work Ethic, and the English Poor Laws

"Blessed are the poor in spirit: for theirs is the kingdom of heaven." (Matthew 5:3)

The modern notions that vilify poverty would be almost unfathomable to citizens of the fourteenth-century world, since it was the post-reformation era that brought about a radical shift in religious interpretation and drastically altered previous conceptions of the poor. Saint Francis of Assisi, the apostle of poverty, most clearly represents the dominant pre-reformation Christian interpretation of scripture, which consequently defined life for believers. He writes to friars that their first rule is to "live in obedience, in chastity and without property" (Francis 31). Saint Francis dedicated his writings, and indeed his life, to advocating this last point: life without possessions. He quotes Matthew 19:21 as a foundation for his claims, demonstrating the unimportance of material goods in Christian life: "If thou wilt be perfect, go, sell what thou hast, and give to the poor, and thou shalt have treasure in heaven." For these Christians, life on earth paled in consequence to the afterlife, and because Jesus preached that the poor shall be rich in the kingdom of heaven, the best believers necessarily aspired to maintain the ideal of poverty. Moreover, Francis condemns social mobility and preaches, "Everyone should remain at the trade and in the position in which he was called…you shall eat the fruit of your handiwork; happy shall you be, and favored" (Francis 37). This starkly contrasts with the glorification of the pursuit of wealth in capitalist societies, and even suggests that such enterprise may oppose God's will. For Francis and his contemporaries, to serve God in 'poverty and humility' defined the "pinnacle of the most exalted poverty"; those humble followers were to be "kings of the kingdom of heaven, poor in temporal things, but rich in virtue" (Francis 61). This respect for, and even exaltation of, the poor undergoes, to a great extent, a radical transfiguration in the modern era of the eighteenth and nineteenth centuries. Max Weber, in his seminal Protestant Ethic and the Spirit of Capitalism, outlines the birth of what he calls the Protestant work ethic, which, arising during the reformation, arguably constitutes the first major shift in attitudes towards poverty and begins the transition towards economic secularization. Weber focuses on the role of Protestantism, particularly that of Calvinism, in facilitating the emergence of capitalistic forms of production. The core value Weber identifies in Calvinism is the completely enigmatic nature of the divine. God is no longer rational or intelligible to humans, thus leaving the individual bereft in the attainment of salvation in the face of a predetermined destiny. Calvinism, however, turns this despair into productivity by shifting the focus from influencing one's salvation towards verifying it through hard work and prosperity:

The attainment of [wealth] as a fruit of labour in a calling was a sign of God's blessing. And even more important: the religious valuation of restless, continuous, systematic work in a worldly calling as the surest and most evident proof of rebirth and genuine faith, must have been the most powerful conceivable lever for the expansion of that attitude toward life which we have here called the spirit of capitalism. (Weber 172)
Work and prosperity became "the most suitable means of counteracting feelings of religious anxiety" that arose from predestination (Weber 112). This anxiety reflects the quintessential state of the modern individual, but here individuals do not indulge or wallow in their anxiety; they channel its psychological energy into work. Now, for the first time, religious ethos found an expression within the material world. Weber argues that in Calvinism the traditionally antithetical relationship between the world and religion is reversed. Believers now ought to take possession of the world, to turn it into a profitable account. The concept of the 'vocational calling' underlies Weber's argument, in which it is the spiritual duty of every good believer to work continuously for the glory of God. This ties a transcendent view to the physical and creates a world in which one pursues material gain while simultaneously performing God's work (Weber 160). This shift required a reinterpretation of Christian beliefs away from the traditional Franciscan ideals of poverty. For the Calvinists, "unwillingness to work is symptomatic of the lack of grace" (Weber 159), and the poor thus became morally deficient individuals damned by God; to be poor was to be a bad Christian. Weber explains this already shifting theological perspective on poverty, but he also chronicles the secularization of the work ethic. This secularization, though not itself the topic of this inquiry, allowed other notable theories to emerge that greatly amplified the stigmatization of the poor. Once the project of capitalism had advanced to a certain point, the religious sources that had set the very process in motion were gradually dismissed as obsolete historical scaffolding, paving the way for more modern and secular economists such as Malthus and Smith. These thinkers' novel ideas, perhaps more than anything, constituted the poor as a sore on the body politic, as illustrated by the changing eighteenth-century English poor laws and the criminalization of poverty. The Elizabethan Poor Law of the sixteenth and seventeenth centuries, like the teachings of Saint Francis, stood contrary to capitalist rationality concerning relief and received sharp criticism from later political economists. The Act of 1601, for example, became a comprehensive measure of relief, mandating that "all classes of the Poor were provided for, that those who were able be set to work, that the sick be relieved, and that children receive an education" (Dean 21). In 1796 in the English House of Commons, William Pitt, a leading advocate of economic relief, proposed a Poor Bill intended to "make relief in cases where there are a number of children a matter of right and honour, instead of a ground of opprobrium and contempt"; yet within just a few years political economists held such a view to be almost incomprehensible, for in 1798, two years after Pitt's proposition, the English scholar Thomas Malthus published his seminal work, An Essay on the Principle of Population, which would lead the charge against poor relief and transform modern notions of poverty (Dean 18).

Malthus and the Principle of Population
"The true and permanent cause of poverty" -Thomas R. Malthus

Malthus premises his theory of population on a fundamental disequilibrium between population and subsistence. He suggests that the birth rate has the ability to increase geometrically, while food supplies can grow only arithmetically. This disparity creates an omnipresent, natural tendency towards overpopulation; but since population growth requires resources, 'checks' exist to limit population to the natural carrying capacity (Malthus 4-6). These checks take two distinct forms, the first of which Malthus calls 'positive checks.' Famine and starvation are the obvious and immediate consequences of overpopulation, which inevitably spawn disease that ravages overpopulated societies. These positive checks force the population below the maximum level that the means of subsistence allow. The second form, Malthus explains, is the 'preventative check', which "is peculiar to man, and arises from that distinctive superiority in his reasoning faculties, which enables him to calculate distant consequences" (Malthus 6). Humans have the unique ability to examine their savings, earnings, hopes for employment, and the state of society in order to determine whether they possess the ability to support a family. "These considerations," says Malthus, "are calculated to prevent, and certainly do prevent, a great number of persons in all civilized nations from pursuing the dictate of nature in an early attachment to one woman" (Malthus 7). This refraining from marriage and reproduction Malthus labels 'moral restraint.' For Malthus, moral restraint is the only acceptable action for the man who fears he will not have the means to provide for his family; it thus represents a problem not for the rich man, but for those living in poverty. He writes:

This duty is intelligible to the humblest capacity. It is merely, that he is not to bring beings into the world, for whom he cannot find the means of support […] If he cannot support his children, they must starve; and if he marry in the face of a fair probability that he shall not be able to support his children, he is guilty of all the evils, which he thus brings upon himself, his wife, and his offspring. (Malthus 272)
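The arithmetic behind this disequilibrium is worth making concrete. A minimal worked illustration, using the twenty-five-year doubling period Malthus borrows from the American colonies and treating the growth of subsistence as one fixed increment per period (the notation here is ours, not Malthus's own):

$$P(t) = P_0 \cdot 2^{t/25}, \qquad F(t) = F_0\left(1 + \frac{t}{25}\right)$$

After two centuries ($t = 200$), population has multiplied by $2^8 = 256$ while subsistence has multiplied by only $9$, reproducing the ratio of "256 to 9" that Malthus himself projects in the Essay for two centuries of unchecked growth. It is against this runaway ratio that he weighs the duty quoted above.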
For the poor, then, failure to practice moral restraint constitutes a crime upon humanity. Starvation and death, writes Malthus, are the just punishment for those 'guilty of all the evils' they have committed. Through the principle of population, Malthus created a rational, secular understanding of society. The Malthusian world, unlike that of Weber's Calvinists, is intelligible to the human capacity and capable of being judged and altered. Fate is no longer predetermined; the ability exists to exercise moral restraint against adversity. These principles present a new light in which to view the poor, one that moves away from Franciscan ideals and from the rhetoric of the old poor laws. First, Malthus' description of society as a constant struggle between population growth and food supply simultaneously constructs an inherent competition between individuals for those resources. Although Malthus does not render humans as animalistic beings fighting each other for survival, his theory leaves all individuals as usurpers of subsistence, and the poor as less deserving than the rest. The poor, specifically, are a drain on the sustenance of the rest of society, condemning humanity to the positive checks of 'misery and vice.' Secondly, the proposed duty of moral restraint condemns the poor as those who have failed to fulfill their obligation. They must therefore be incapable of calculating their economic situation, a task that Malthus says is comprehensible even to the 'humblest capacity.' He continues:

The labouring poor, to use a vulgar expression, seem always to live from hand to mouth. Their present wants employ their whole attention, and they seldom think of the future. Even when they have the opportunity of saving they seldom exercise it; but all that they earn beyond their present necessities goes generally speaking to the ale-house. (Malthus 343)
If not incompetent, the poor are simply irresponsible individuals who consciously choose to bring upon themselves, their families, and humanity all the calamities of hunger and disease that accompany poverty. The concept of moral restraint creates a self-responsibility in which individuals are masters of their own fate, able to prevent the despair of poverty. Malthus contributes more directly to the dialogue on the poor in his writings on the poor laws, whose abolition he advocated. He argued that welfare hinders moral restraint and economic foresight because aid diminishes the will to save and thus encourages the worst tendencies of the poor, especially drunkenness. He claims that virtues among the poor such as responsibility and frugality are prerequisites for any true progress, and that assistance discourages such attributes (Malthus 279). Put more bluntly by Dean, Malthus believed that "poor relief destroys the spirit of independence and industry of the poor, weakens their willingness to save, removes the restraint from improvident marriage, and fails to discriminate between the proper and improper objects of charitable benevolence" (Dean 83). Moreover, unlike Smith and the later theorist Spencer, Malthus couches his argument in theological terms, writing that "natural and moral evil seem to be the instruments employed by the Deity in admonishing us to avoid any mode of conduct which is not suited to our being. […] If we multiply too fast, we die miserably of poverty and contagious diseases" (Malthus 258-259). Pain from these checks, believed Malthus, comes from God in an attempt to prevent us from engaging in corrupt behavior. Failure to heed this admonition results in our "justly incur[ring] the penalty of our disobedience," and "our sufferings operate as a warning to others" (Malthus 259). This seems to suggest that the poor deserve the poverty to which they are condemned as part of God's plan to eliminate 'idle and sluggish existence'; the population principle and its effects help foster an industrious and moral lifestyle (Dean 89-90). These theological arguments, absent in later social theories, gauge the extent to which secularization had not yet fully permeated society. Malthus was pivotal to the process of the rationalization of society, but he was not yet ready to abandon all reference to the divine. Perhaps some explanation in Christian terms was required in order to make his argument apprehensible, but later scholars such as Darwin and Spencer would extend Malthus' argument while simultaneously discarding the sacred. In The Constitution of Poverty, Mitchell Dean further examines the effect of Malthus on Christian practices. He traces the post-Malthusian history of philanthropy and poor laws to substantiate his claim that Malthus significantly swayed the poor law debate of the early nineteenth century towards reducing aid. Magazines such as The Christian Observer and The Philanthropist identified strongly with Malthus, and the former even referred to him as an 'enlightened philosopher' (Dean 92-94). Moreover, Frankland Lewis, who would become the chairman of the Poor Law Commission of 1834, was likely the anonymous author of the 1817 Report from the Select Committee on the Poor Laws, which actually incorporated the Malthusian science of population into its findings.
This supports Dean's claim that Malthus directly influenced the 1834 Poor Law Amendment Act, which exhibited the Malthusian criteria of self-reliance and the abolition of aid to able-bodied men. This obliterated reforms such as William Pitt's previously mentioned proposal, among others, and changed the entire dynamic of the poor law system. Prior to the amendment, the able-bodied poor received the majority of poor aid in the form of money doles, work, and child allowances. After 1834, however, relief was restricted to the sick, the old, widows with children, and the insane. Healthy, capable men and their dependents were cut from the system (Dean 99). Bishop Otter expresses the influence of Malthus on this legislation, and represents the opinion of his contemporaries, summarizing in his 1836 Memoirs: "This act is founded upon the basis of Mr. Malthus' work. The essay of Population and the Poor Law Amendment Bill will stand or fall together. They have the same friends and the same enemies" (Dean 100). This illuminates Malthus's tangible contribution to the discourse on poverty in society, even beyond the theoretical judgment his theory facilitates.
Adam Smith and the Secular Economy

"In everything there is need for improvement" -Adam Smith

Like Malthus, Adam Smith radically altered economic thought with his An Inquiry into the Nature and Causes of the Wealth of Nations. While neither this book nor Smith's other works speak extensively on the issue of poverty or the poor laws, Smith secularized vocation away from the Weberian "calling" and created a framework in which the poor would be judged. Smith begins by elucidating the principles of exchange and the division of labor, which establish an underlying philosophical framework of human nature, labeled the 'moral economy of exchange' by Dean (Dean 130). The principles underlying the majority of Smith's assumptions are that exchange universally arises between rational individuals and that such transactions are made not for the general welfare but out of self-interest: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest" (Smith 13). For Smith, this moral economy of exchange constitutes a system of natural justice in which proportional transactions of labor and exchange should occur. "What is usually the produce of two days or two hours labour, should be worth double of what is usually the produce of one day's or one hour's labour" (Smith 41-42); anything else is not exchange but rather 'extortion' or 'plunder.' Applying these two premises, Smith contends that wage-labor offers the hope of civil status and happiness to those without property. Poor laborers possess the same freedom and ability to participate in exchange as the rich man or landlord, trading their only property, their labor, in this system to maximize opportunity. So for Smith, the poor are no longer a distinct group but simply laborers in society who deserve the same treatment as anyone else capable of labor exchange (Dean 133-135). Smith labels this faction of the population the 'laboring poor' yet by no means considers them indigent; as Geoffrey Gilbert observes, for Smith, "the notion of a destitute wage earner in an economy like Britain's would have been almost a contradiction in terms." Real poverty requires an inability to earn a wage income sufficient to acquire the 'necessities of life'; the truly destitute were those who must beg or steal, or starve, not those capable of labor exchange (Gilbert 283-284). Smith's discussion "Of the Accumulation of Capital, or of Productive and Unproductive Labor" can be applied more directly to the discourse of poverty. Smith distinguishes between two types of labor: productive labor, which produces capital for society and adds value to its subject, and unproductive labor, which adds no value. The paradox that Smith highlights is that all laborers, productive and unproductive, must be maintained by the same finite produce, yet only productive labor can replace capital and resources. Smith anticipates Malthus' concern regarding scarcity and the poor:

Such [unproductive] people, as they themselves produce nothing, are all maintained by the produce of other men's labour. When multiplied, therefore, to an unnecessary number, they may in a particular year consume so great a share of this produce, as not to leave a sufficiency for maintaining the productive labourers, who should reproduce it next year.
The next year’s produce, therefore, will be less than that of the foregoing, and if the same disorder should continue, that of the third year will be still less than that of the second (Smith 306).
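The dynamic Smith describes here lends itself to a simple worked formalization (the notation is ours, offered only as a sketch of the passage's logic, not Smith's own). Let $Y_t$ be the produce of year $t$, let $u$ be the share of it consumed by unproductive hands, and let $g$ be the gross rate at which productive laborers reproduce what maintains them. Then

$$Y_{t+1} = g\,(1-u)\,Y_t,$$

and whenever $g(1-u) < 1$, the produce shrinks year after year, each harvest smaller than the last: exactly the "disorder" the quotation traces from the first year to the third.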
Though Smith's argument concerns capital produce rather than food supply, the quasi-Malthusian premise of a population outrunning its dwindling resources still holds true. These consuming, unproductive poor suck sustenance from society without recompensing material wealth. Furthermore, Smith holds lack of capital responsible for decreased industry and poverty in a society and indolence among its citizens: "the proportion between those different funds [that support productive or unproductive labor] necessarily determines in every country the general character of the inhabitants as to industry or idleness" (Smith 299). Thus, for Smith, the balance of productive and unproductive labor actually shapes the nature of the individual. By this cyclical logic, the root of the problem can be traced back to those unproductive individuals responsible for diminished capital in a society, which in turn creates more indolence. Although Smith classifies the servant, the clergyman, and the physician, none of whom produce material goods, as unproductive laborers, surely the most unproductive for Smith are the idle, unemployed, poor consumers. Geoffrey Gilbert's Adam Smith on the Nature and Causes of Poverty provides a broader examination of Smith's regard for the poor, yet Gilbert ultimately concludes, somewhat anticlimactically, that poverty remained relatively unimportant in Smith's writings. Gilbert, however, fails to extend his analysis of Smith into the broader realm of discourse on poverty, in which Smith's theories carry significant implications. Gilbert writes that in his Theory of Moral Sentiments, Smith explores the human psyche and contends that the poor suffer not from lack of physical comfort or health but from psychological pain in the form of social isolation and shame. Moreover, Smith proposes, somewhat ironically, a "deeply ambivalent…view of goods, wealth, and individual economic achievement" (Gilbert 276), in which the condition of the propertyless equals that of the rich insofar as 'real happiness' can be acquired (Gilbert 275-277). This echoes the conclusion of the principle of exchange, in which the poor possess the same means for contentment as the wealthy. While Smith perhaps proposes this anti-materialistic theory as a counterpoise to the framework established by the Wealth of Nations, Gilbert overlooks the extent to which it perpetuates the modern economic dogma that discounts the poor. Although Gilbert admits that Smith does present a 'rather weak' case for benevolence and the desire to relieve the misfortune of others (Gilbert 278), the moral theory still engenders the ethic of self-reliance found in the Wealth of Nations. Synthesizing these premises yields a three-part argument for how Smith's writings changed conceptions of poverty in the modern era to foster this ethic of self-responsibility. First, applying Gilbert's analysis of the Theory of Moral Sentiments, Smith creates a framework in which there is no need for concern for the poor because they are capable of happiness without material goods. Their suffering stems from personal causes within their own psyche, which they themselves have the ability to alter. Even if humankind possesses some sense of benevolence or duty to aid those in misery, there is no longer a motive to help the poor. Secondly, even if the poor desire property to satiate their material craving, they possess their own labor, which enables them to participate fairly in exchange. In the moral economy of exchange they are capable of achieving just compensation for their efforts, which diminishes the necessity for the wealthy to assist, or even pity, the poor; each individual assumes responsibility for his or her own welfare. By consequence, these people choose their roles as unproductive, idle persons. Lastly, in a Malthusian fashion, the poor deplete the subsistence and resources needed by productive laborers and breed torpescence and sloth by diminishing capital. The poor not only fail to fulfill their personal responsibility but are also a burden to the rest of society.
Smith thus creates a secular, economistic moral philosophy in which individual success and happiness rely only on one's own actions, and the poor exemplify citizens who do not sufficiently inhabit this ethic.

Spencer and Social Darwinism

"Survival-of-the-fittest"…

Together, the writings of Smith and Malthus created a framework from which the polymath Herbert Spencer would synthesize ideas and postulate an argument that would advance secularization and further transform conceptions of poverty. Spencer exemplifies key principles of modernity through his application of evolutionary principles to society and his explanation of the human condition in purely scientific terms, in the theory that has become known as 'social Darwinism.' Gregory Claeys, however, labels the phrase 'social Darwinism' a misnomer and traces the ideological foundations of its principles, noting that this modern shift began with the advent of Malthusian theory and Smith's political economy, decades before Darwin wrote in mid-Victorian Britain (Claeys 228). Spencer, like Darwin, utilized the writings of Malthus as a foundation of his theory. Malthus viewed society in the naturalistic terms of an 'organic metaphor' and labeled life a constant struggle, although unlike Spencer he still maintained some acknowledgement of the sacred. These two ideas are central for Spencer. Malthus's remonstrance against the poor laws, which he argued further entrenched the character flaws responsible for the condition of poverty, was, as Claeys notes, "but a step, though a substantial one, from the evolutionary view that aiding the unfit may undermine the organic improvement of the race." Malthus had already championed the idea that "competition-as-natural-selection dictated the survival of the 'fittest,' and the starvation of the less successful, unless other factors intervened," so Spencer merely had to construe the argument in more evolutionary terms to create his own theory of progress (Claeys 232). Besides Malthus, Adam Smith helped publicize the view of life as a competitive struggle and created an understanding of society that allowed Spencer to follow with his claims. Political economy provided an application of Malthusian principles to social class, which created a 'near-seismic shift' in notions of individual and class competition within a society. In a manner similar to biological evolution and the competitive requirements of natural selection, Smith constituted nations as "fit" if they maintained a 'competitive edge' over others, and "fit" individuals were productive laborers where others were not. While Malthus characterized humans as biological beings, Smith applied scientific principles to society itself. Spencer even invokes Smith's distinction between productive and unproductive labor: every citizen should "perform such function or share as is of value equivalent at least to what he consumes," while countenancing "the poverty of the incapable, the distress that comes upon the imprudent, the starvation of the idle" (Spencer qtd. in Claeys 234). Without the synthesis of claims and shifts in modern thought provided by both Smith and Malthus, Spencer's Principles of Biology and Principles of Sociology could not have contributed the same arguments and perspective on human society. Spencer defines this society as an amorphous being capable of constant evolution, and this idea, along with that of "survival-of-the-fittest," a phrase actually coined by Spencer, eventually resulted in sharp judgment of the poor. In his Principles of Sociology, Spencer begins with the metaphor of society as an organism and describes different classes as evolving species, similar to biological speciation and even paralleling Smith's division of labor:

So it is with the parts into which a society divides. A dominant class arising does not simply become unlike the rest, but assumes control over the rest; when this class separates into the more and the less dominant, these again begin to discharge distinct parts of the entire control. With the classes whose actions are controlled it is the same. These various groups into which they fall have various occupations: each of such groups also, within itself, acquiring minor contrasts of parts along with minor contrasts of duties. (Spencer 4)
This description of social evolution refers to certain, undoubtedly wealthier, classes as ‘dominant’ in society, connoting that they are better than, and evolutionarily favored over, the rest. Moreover, the simultaneously progressive and retrogressive nature of social evolution outlined by Spencer created a system in which the most “fit” and “desirable” in society ought to survive, while in reality the poorer classes reproduced the most and thus stood to determine the course of human evolution (Claeys 236). In response, Spencer argued for a hands-off approach towards the poor; to “maintain mankind’s ascent to perfect humanity, he also recommended applying a ‘stern discipline of nature’ to the social sphere” (Gondermann 28). For him, sympathy and aid for the poor, feeble individuals of society hindered the progress of social evolution and would undermine the condition of future generations. The language of society as an organic being, which cast the poor as “unfit” individuals somehow surviving in the world to contravene human potential, reverberated through social and political discourse throughout the nineteenth century and even into the early twentieth, perpetuating the stigma of poverty as equating indolence, inferiority, and even crime against the progress of humanity (Claeys 240).
The Secularization of the Stigma
Throughout the age of modernity, notions of poverty and treatment of the poor shifted constantly as new social theorists such as Spencer, Smith, and Malthus crafted their definitions of society and the role of humans within it. Weber’s account describes the movement away from Franciscan ideals and the beginning of the stigmatization of the poor with post-Reformation Protestantism, yet Weber’s description, unlike those of the other thinkers, characterizes a still-theological foundation of society. For the Calvinists, the poor are criticized not for external economic or behavioral elements but because their poverty reflects predetermined and inevitable damnation by God; the poor are sinners whom God has clearly demonstrated do not deserve His grace. Verification of salvation required the establishment of a work ethic, which, although stemming from religion, resembles the frameworks later extended in more rational terms by both Malthus and Smith. Though the poor were already damned by God, Malthus’s Essay on the Principle of Population established the foundations for a secular critique of poverty. Malthus created, for the first time, a theory in which society parallels nature and is described in organic terms. In the Malthusian world, the poor are parasites of society who siphon limited subsistence and irresponsibly reproduce. As in Weber’s narrative, the principle of population demands an ethic of work and responsibility, this time, however, to prevent overpopulation, famine, and disease. Smith, too, and the implications of the Wealth of Nations, represent a step away from the sacred in the disparagement of the poor. Modern capitalism secularized vocation, which was no longer a divine calling, as it had been for Weber’s Calvinists, but instead constituted economic reason. Now the poor could be labeled indolent citizens of society who refuse to participate in the economy of exchange as productive laborers. Finally, Spencer perhaps best exemplifies the modern, rational critique of the poor. While Malthus maintained some reference to the sacred and Smith simply laid the foundation for disparagement, social Darwinism truly casts the poor as a sore on society. Society, for Spencer, becomes a completely intelligible entity equivalent to a biological organism, capable of being studied, predicted, and altered. Now, within humanity, the poor were construed as the lowest of all species. Yet despite the professed ‘survival-of-the-fittest,’ these beings continued, with the help of charity and economic aid, to impede the progress of social evolution, an evident disease on the body politic. Today, cries of “personal responsibility” and “self-reliance” echo Malthus’s call for ‘moral restraint,’ Smith’s moral economy of exchange, and even Spencer’s ‘survival-of-the-fittest.’ Recent decades in American politics have seen a rise in advocacy of the “hands-off approach” that Spencer called for in order to maintain the ascent to ‘perfect humanity.’ The welfare system has its supporters, but the attitudes of St. Francis and the Elizabethan Poor Laws have largely been replaced. The stigmas crafted over the past centuries continue to shape society’s conception, and indeed judgment, of the poor.
works cited
1. Claeys, Gregory. “The ‘Survival of the Fittest’ and the Origins of Social Darwinism.” Journal of the History of Ideas 61 (2000): 223-40. JSTOR. Duke University Library. 23 Feb. 2009 <http://www.jstor.org/stable/pdfplus/3654026.pdf>.
2. Dean, Mitchell. The Constitution of Poverty: Toward a Genealogy of Liberal Governance. New York: Routledge, 1991.
3. Gilbert, Geoffrey. “Adam Smith on the Nature and Causes of Poverty.” Review of Social Economy 55 (1997): 273-91. EBSCOhost. Duke University Library. 23 Feb. 2009 <http://web.ebscohost.com/ehost/detail?vid=1&hid=115&sid=002250ff-0ea24224a19e-6405c2cfbdbb%40sessionmgr104&bdata=JnNpdGU9ZWhvc3QtbGl2ZSZzY29wZT1zaXRl#db=eoh&AN=0437577>.
4. Gondermann, Thomas. “Progression and Retrogression: Herbert Spencer’s Explanations of Social Inequality.” History of the Human Sciences 20 (2007): 21-40. Sage Journals. Duke University Library. 23 Feb. 2009 <http://hhs.sagepub.com/cgi/reprint/20/3/21>.
5. Malthus, Thomas R. The Principle of Population. Ed. Lloyd Reynolds and William Fellner. Homewood, IL: Richard D. Irwin, Inc., 1963.
6. Smith, Adam. The Wealth of Nations. London: J.M. Dent and Sons Ltd., 1910.
7. Spencer, Herbert. The Evolution of Society. Ed. Paul Bohannan. Chicago & London: The University of Chicago Press, 1967.
8. Weber, Max. The Protestant Ethic and the Spirit of Capitalism. Trans. Talcott Parsons. New York: Charles Scribner’s Sons, 1958.
Lauryn Kelly
Who’s Afraid of the Big, Bad Television?
Technology surrounds the everyday occurrences of our lives. For some, it has become so integrated that we hardly take notice of it any longer; we take it for granted and never question the complicated series of wires and chips beneath the surface. The most modern of these technologies would be the internet and the computer itself, but we cannot forget that nearly eighty years ago the television was a novel apparatus, one that the general public knew little about other than that it transmitted moving images from some unknown source to their living rooms. Even though the television will celebrate its centennial during our lifetime, most people still remain ignorant of what goes on behind the screen, even though it has changed greatly over time. To some, this kind of information falls by the wayside next to what is on TV that night; others who understand the technology pay it no mind; yet this same information scared many people in the 1940s and 1950s nearly to insanity. Where were the TV signals coming from, and who was transmitting them? Now we know that the signals come from satellites orbiting in outer space, but sixty years ago the idea of a seemingly omnipotent, disembodied source broadcasting several different stations at once led to uneasiness, paranoia and, in the most extreme cases, madness. Contrary to what we may believe, television still holds some sort of power over us, even the power to haunt us. Some may argue that besides its addictive qualities, television does not scare us any more than our refrigerator would. However, the fears that existed regarding the television during its debut have merely transformed along with contemporary fears regarding technology.

With the exception of new digital televisions, both televisions from the past and those of modern times have similar inner workings. Almost all TVs today rely on an apparatus known as a cathode ray tube, or CRT. The description of how exactly a CRT works reads like a glossary of esoteric electrical terms, so to offer the abridged version: the “cathode,” a heated filament, emits three streams of electrons, one for each of the red, green, and blue phosphors, that are attracted to a “focusing anode.” The beams then pass through a thin metal screen called a shadow mask, which is perforated with tiny holes aligned with the red, green, and blue phosphors on the screen at the other end of the CRT. The beams are directed by steering coils, sets of copper windings that create magnetic fields to move the electrons horizontally and vertically to hit the correct spot on your television screen. The picture shown on your screen depends on which channel you are watching, and your television extracts both the sound and the picture from radio waves transmitted to its antenna from a broadcasting station (Brain 4-13). With all the complicated terminology and unfamiliar devices involved, it is no wonder Americans did not, and still do not, take the time to get to know their television a little more personally. However, it was this unfamiliarity that stirred an uneasiness in the minds of Americans and left them wondering if, just as easily as they could watch television programs, someone else might be watching them at the same time.

During the Cold War the government did all that it could to keep everything under its control in order to curb the spread of Communism.
These actions eventually came together under a policy known as “containment.” However, besides restoring the balance of power between the dueling nations and diminishing communist influences, containment also subjected U.S. citizens to domestic surveillance (“Containment”).
The National Security Agency, “[created] by secret presidential [directives] in 1952,” began collecting all kinds of data from Americans’ private lives to ensure that no pertinent information went unaccounted for (Hafetz). For the most part, the government tended to observe those it considered threatening to the democratic stability of the country. COINTELPRO was a 1960s federal surveillance program that targeted civil rights leaders such as Martin Luther King Jr. COINTELPRO not only resorted to wiretapping the reverend, but also aimed to “disrupt, discredit, and defame perceived political radicals” (Chideya). Similarly, a subunit of the CIA named MHCHAOS was responsible for spying on and harassing the anti-Vietnam War movement (Lindenfeld). There were programs that targeted innocent civilians as well. An NSA program known as Operation Shamrock “intercepted millions of telegrams to and from the United States” (Hafetz). Though most of these operations were unknown at the time, their later unveiling shows that Americans of the Cold War era had every right to suspect that the government had the ability to violate their privacy. This suspicion eventually transformed into a fear: a fear that they were being listened to, watched, followed.

Where does the television play into all of this paranoia? During this time the United States and the Soviet Union were involved in what has come to be known as the “space race.” The competing nations contended for stratospheric domination to prove their technological superiority (“Space Race”). However, there was a frightening underlying motive: launching satellites would also enable the nations to “[survey] the world below to the smallest detail” (Sconce, 144). Due to the growing concentration of satellites orbiting in space, Americans began to worry that their brand new television set was actually an extension of the government’s espionage. Television seemed the most feasible candidate to “serve as a ‘window to the home’” (Sconce, 145). Because of the hype surrounding this topic, television programs of the time played on it to attract their audiences. One program in particular, titled The Outer Limits, began each episode with an “ominous series of commands and assertions” (Sconce, 136). As the narration listed each aspect of the broadcast that “they” controlled, the pictures on the screen were tampered with to match each manipulation. This instilled in the audience a fear that there could actually be someone on the other end of the broadcast exerting omnipotence via the airwaves. Another television show that aired during the Cold War era was The Twilight Zone. Though much less menacing than The Outer Limits, one episode hit especially close to home regarding the surveillance scare. In “What’s in the Box,” which aired March 13, 1964, a TV repairman, angered by the insults of his customer, fixes the television to broadcast the past, present, and future events of his customer’s life (“What’s in the Box?”). When Joe, the customer, goes to turn on the television, he finds that it is able to receive channel 10, which was previously inaccessible. To his astonishment, he watches himself in a cab with his mistress, an event that had taken place hours before he returned home, where he now sits. At first, he dismisses what he has seen, calmly changing the channel, convinced he must be seeing things.
However, after realizing the wrestling match he had hoped to watch is no longer being aired, he switches back to channel 10, only to find himself watching the opening scene of the episode. Moments later, he witnesses a fight between him and his wife that will occur later on in the episode. Finally, he watches a judge condemn him to death for killing his wife and quivers at the sight of himself in an electric chair (“What’s in the Box?”). This half-hour episode embodies a central fear of Americans of the Cold War era. Who knew for sure whether or not the government had rigged each and every television set to survey the unknowing families who brought them into their homes? Perhaps live images of their families, gathered around in front of the television, were being broadcast on some government agent’s screen. Knowing how easily the government could wiretap its citizens on a whim, Americans were fearful of the power the government had over television signals.

Besides the fear of being watched, there existed an entirely separate fear regarding the television and its potential impact on the lives of Americans. The initial excitement over the television stemmed from the idea that it had “the fantastic ability to teleport the viewer to distant realities” (Sconce, 128). However, critics soon began to think of these “distant realities” more as a zone of oblivion. They believed Americans had better ways of spending their time
and fretted over the fate of “high culture and refined audiences”; while Americans had previously been active, critics watched, horrified, as they fell subject to “passive reception” (Sconce, 132). By having televisions in their homes, Americans became more detached from the real world and instead spent more and more time fixated on their favorite programs. Some critics even worried about the values of the American youth and how television might corrupt them (Sconce, 132). These two fears represent the way in which the television “haunted” its audiences in a figurative sense.

However, there were instances when people thought they had witnessed supernatural activity going on inside their televisions. Even before the invention of the television, there had been instances in which people thought they had heard ghostly voices on radio broadcasts that could not be accounted for by anything but an occult source (“A Brief History of EVP’s (Electronic Voice Phenomena)”). Because television broadcast signals were thought to be similar to radio broadcast signals, people often thought that the television might serve as another medium for “paranormal contact” (Sconce, 126). Jeffrey Sconce cites several of these instances in his book Haunted Media. In one case, the Mackey family reported seeing their recently deceased grandfather on the screen and immediately transported the television to the police station, where visitors stopped by frequently to see the apparition before it disappeared. On another occasion, a woman living in Wisconsin claimed to have seen a couple arguing on a balcony and the call letters of an obsolete radio station on her television screen. Even stranger, the same call letters had appeared on the television of a man in London twelve years earlier (Sconce, 142-143).

The television is nearing the hundredth anniversary of its integration into the American lifestyle, and yet there is still something about this seemingly harmless piece of technology that frightens us. One of the more obvious threats that television poses is its ability to draw its audience in to watch program after program, day after day; there is something so addictive about television that the average time a television is turned on in a typical household has risen by an hour over the past ten years (Associated Press). There have even been health hazards linked to watching excessive amounts of television (“TV poses 15 different health risks to our children”). Besides these evident and rational fears, there still exist underlying fears regarding the mystical powers of television. Television has become so incorporated into our lives that reaching for the remote is second nature. Over its long history its look has also changed significantly: it has transformed from a boxy, space-consuming apparatus into a sleek wall accessory. Its newer, slimmer figure can be attributed to television manufacturers’ switch from the previously mentioned cathode ray tubes to plasma screens. This type of screen, as opposed to the phosphor-coated CRT screen, relies on xenon and neon gases contained within tiny cells in the television set. Again, the in-depth description of how a plasma-screen television works requires a dictionary and an extensive knowledge of chemistry; a summarized version, however, suffices: the plasma display’s computer charges electrodes that intersect with each of the red, green, and blue subpixel cells, which make up a pixel.
This excites an electric current in each cell, which causes charged particles to collide and stimulate the release of ultraviolet photons. When an ultraviolet photon hits the phosphor material that coats the inside of each cell, visible light is given off as the phosphor atom returns from its excited state to its normal energy level (Harris, 2-3).

Just as the technology behind a television has changed and evolved, so has the wariness that surrounds it. We no longer worry about being watched through our television sets, having realized the fear was groundless. However, the fear of paranormal contact by way of broadcast signals has not completely died down. This theory, likely to be ridiculed by many, is sustained by movies like Poltergeist, White Noise, and The Ring, each of which treats the television as a portal for paranormal beings. Poltergeist, made in the eighties, was thought to be one of the scariest films of its time. It tells the story of a family whose television serves as a medium of communication between the youngest daughter of the family and a series of spirits who have “left this life but have not gone into the spectral ‘Light’” (“Synopsis for Poltergeist”). What is most frightening about this film is the idea that within the television, which tends to blend into the background of everyday life, lies a supernatural power, waiting to escape and terrorize the family.
The ambiguity of the static that appears before the “Beast” takes over the house has become an ominous sign of something terrible to come in recent horror films. Within the first five minutes of Poltergeist we see the daughter, Carol Anne, talking to the television screen, which is displaying only static. However, it is obvious that she is carrying on a conversation with people on the other side, whom she calls “TV people” (Poltergeist). It is an interesting, and evidently terrifying, concept that static, which we ordinarily think of as a harmless, televisual nothingness, could carry such a mystical power, one merely waiting for someone to discover it and set it free.

The screenwriter of White Noise, Niall Johnson, became fascinated with E.V.P.’s through research for his movie, yet he took the concept one step further by also using static as a portal to “the other world” (White Noise). What seems most frightening about static, and what has consequently been perpetuated by White Noise along with other horror films, is its abruptness and the mystery behind it. During the Cold War years, the sudden interruption of a television broadcast “could signify imminent nuclear annihilation” (Sconce, 137). Though in contemporary society we would not take static to be a sign of nuclear warfare, it still elicits a sense of suspicion, since we still read it as a sign of television broadcasting gone awry; its deafening sound creates uneasiness to boot. Perhaps before movies like Poltergeist and White Noise we could have walked past our televisions with ease, or attributed static to poor weather conditions or some technological complication among the broadcasting signals. However, after many years of film directors building upon the idea of spirits coming out of television sets, it seems we have begun to walk a bit more quickly whenever we pass a television set, or feel a wave of disquietude wash over us when our television suddenly cuts to static.

Just as technology and our fear of television have evolved, so has the manner by which occult beings come through the television. In Poltergeist, once Carol Anne has established mutual communication between the human world and the paranormal realm, an efferent, supernatural current injects itself into the walls. This current releases the spirits of ancient tribal members, angered that the house had been built over their burial ground. We see the spirits as hovering bursts of light that faintly resemble human figures, which are anything but frightening. Twenty years later, in White Noise, we again see a “chosen” gatekeeper unknowingly opening the doors between these two worlds. While in Poltergeist we are barely able to make out a human form beyond the flare of the light, the spirits in White Noise are completely embodied. E.V.P.’s were originally heard through radio static; in this film, however, the voices of the spirits are heard through both radio and television static, and, once again, the television serves as the gateway for them to pass through. What’s more, the spirits appear as still shots of an image lost behind a screen of static, almost like detached extensions of the television screen. In this way, the television becomes even more of a villain in that, without ever moving, it has extended its limits beyond the cast glow of the screen. The Ring, arguably the most horrifying of the three, represents the most extreme way in which the dead can contact us via the television.
By incorporating the technological aspects of our daily lives, such as missed phone calls, borrowed movies, and television, The Ring becomes far more frightening than the other two films, for it suggests that one day it may not be entirely ridiculous to think that someone could come through what we now regard as an impenetrable television screen. This idea, coupled with our limited knowledge of the dead, makes us wonder whether E.V.P.’s could ever evolve to the point of transmitting not only the voice of a spirit but its entire being, as happens in The Ring. Similar to the spirits in White Noise, Samara’s “image” glows and wavers, as does static on a television screen. Yet she is also a tangible, three-dimensional form, as we can see by the way she crawls and the water that drips off of her (The Ring). Perhaps the degree of terror can be attributed to special effects, but if the underlying themes had proven irrelevant to our own lives, would these films be able to haunt us even half as much? Special effects do in fact play a large role in all three of these films, and with each one the quality of these effects has improved, paralleling technology’s progression year after year. However, it is the depiction of modern society in any horror film that allows us to relate to it, and consequently frightens us more. During the late 70s, ghost hunting became a popular hobby for Americans, but the fad truly took off
in the 80s (“Ghost hunting”). It is possible that the creators of Poltergeist used this to their advantage while brainstorming movie ideas. The Ring, being the most recent, applies most directly to our lives today. A videotape, while thought of as a primitive form of technology at this point, can be passed easily from person to person, as is shown in the movie; this resonates with the idea of YouTube videos being viewed by millions of people a day. These three films demonstrate the way in which the fear of television evolves with current events, changes in cultural lifestyles, and the progression of technology.

The first time anyone questioned the true motives behind the invention of the television, it was attributed to the espionage frenzy of the Cold War era. Nowadays, we can dismiss this as “the product of misguided communist hysteria” (Hafetz). We cannot, however, dismiss the extant fear of the television, as recent horror films have made evident. With the little technology that existed in the United States in the 50s and 60s, it was not at all ludicrous to question the extent of the technological powers of the television and the broadcasting signals that bounced around outer space. Decades later, the amount of time Americans spend watching TV has increased significantly, as has the number of television sets found within a home. However, there is still much most people do not know about how television works, and even less about life beyond death. As technology advances, perhaps one day it will advance upon our authority over it, and as fascination with E.V.P.’s grows, it could come to pass that contact with the dead will reach beyond the mere conveyance of voices.

works cited
1. “A Brief History of EVP’s (Electronic Voice Phenomena).” Long Island Paranormal Investigators. Web. 21 Apr. 2009.
2. Associated Press. “Time Spent Watching TV Continues Growing.” The New York Times 25 Nov. 2008. Web. 23 Apr. 2009. <http://tvdecoder.blogs.nytimes.com/2008/11/25/time-spent-watching-tv-continues-growing/>.
3. Brain, Marshall. “How Television Works.” HowStuffWorks. Web. 19 Apr. 2009. <http://electronics.howstuffworks.com/tv.htm>.
4. Chideya, Farai. “COINTELPRO and the History of Domestic Spying.” NPR 18 Jan. 2006. <http://www.npr.org/templates/story/story.php?storyId=5161811>.
5. “Containment.” NuclearFiles.org. Web. 20 Apr. 2009. <http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/history/cold-war/strategy/strategy-containment.htm>.
6. “Ghost hunting.” Wikipedia 8 Apr. 2009. Web. 25 Apr. 2009. <http://en.wikipedia.org/wiki/Ghost_hunting>.
7. Hafetz, Jonathan. “History’s Lesson about Domestic Surveillance.” Brennan Center for Justice 23 May 2006. Web. 20 Apr. 2009. <http://www.brennancenter.org/blog/archives/historys_lesson_about_domestic_surveillance/>.
8. Harris, Tom. “How Plasma Displays Work.” HowStuffWorks. Web. 23 Apr. 2009. <http://electronics.howstuffworks.com/plasma-display2.htm>.
9. Lindenfeld, Frank. “Book Review: Secrets: The CIA’s War at Home.” Nothingness.org. Web. 20 Apr. 2009. <http://library.nothingness.org/articles/SA/en/display/270>.
10. Poltergeist. Dir. Tobe Hooper. Perf. Craig T. Nelson, JoBeth Williams. MGM, 1982.
11. The Ring. Dir. Gore Verbinski. Perf. Naomi Watts, Martin Henderson. DreamWorks, 2002.
12. Sconce, Jeffrey. Haunted Media. Durham & London: Duke University Press, 2000.
13. “Space Race.” Everything2 29 May 2001. Web. 21 Apr. 2009. <http://everything2.com/title/space%2520race>.
14. “Synopsis for Poltergeist.” IMDb.com. Web. 23 Apr. 2009. <http://www.imdb.com/title/tt0084516/synopsis>.
15. “TV poses 15 different health risks to our children.” Mail Online 23 Apr. 2007. Web. 23 Apr. 2009. <http://www.dailymail.co.uk/news/article-450162/TV-poses-15-different-health-risks-children.html>.
16. “What’s in the Box?” The Twilight Zone. Dir. Rod Serling. 1964.
17. “What’s in the Box?” Wikipedia 26 Mar. 2009. Web. 21 Apr. 2009. <http://en.wikipedia.org/wiki/What%27s_in_the_Box>.
18. White Noise. Dir. Geoffrey Sax. Perf. Michael Keaton, Chandra West. Universal Pictures, 2005.
Elizabeth Beam
Sylvia Plath on Edge: A Case for the Correlation of Bipolar Disorder and Exceptional Poetic Creativity
Edge

The woman is perfected.
Her dead

Body wears the smile of accomplishment,
The illusion of a Greek necessity

Flows in the scrolls of her toga,
Her bare

Feet seem to be saying:
We have come so far, it is over.

Each dead child coiled, a white serpent,
One at each little

Pitcher of milk, now empty.
She has folded

Them back into her body as petals
Of a rose close when the garden

Stiffens and odors bleed
From the sweet, deep throats of the night flower.

The moon has nothing to be sad about,
Staring from her hood of bone.

She is used to this sort of thing.
Her blacks crackle and drag.
(Plath 1999:93-94)
In bipolar disorder and in poetry, there is a fundamental tension that gives rise to art. Sylvia Plath’s poem “Edge,” the grand finale of a final fit of hypomania, burst forth from the friction of manic and depressive moods. When the mind thrusts itself against itself in states of euphoric agitation, when cognitive and emotional extremes put all meaning and worth at stake, Plath suggests that “the woman is perfected” only in death. In conveying this, though, her creative expression proves exceptional; she is
perfected in poetry. By representing the way in which the mentality unique to bipolar disorder parallels the creativity unique to poetic prowess, Plath’s life and work show how it is no coincidence that eminent poets are so often so close to bipolar break. In the following case study, the poem “Edge” by Sylvia Plath is assessed as exemplary of bipolar linguistic style and expressive of bipolar cognition. Where Plath signs off her last masterpiece, the story of creativity in madness begins to unfold.

Composed on February 5, 1963, less than a week prior to her suicide, “Edge” would prove to be Plath’s last poem (Bundtzen 2001:25). At its climax, Plath preempts actual death by poetically envisioning herself as a stoic, stone-cold Greek statue—and the mention of Greece proves especially ironic. It was there, as early as 500 B.C., that philosophers first considered the correlation between poetic creativity and mental instability (Goodnick 1998:27). Aristotle was perhaps the first to bring attention to this phenomenon, inquiring, “Why is it that all men who have become outstanding in philosophy, statesmanship, poetry, and the arts are melancholic?” (qtd. in Jamison 1993:44). In Plato’s dialogue Phaedrus, Socrates speaks on the necessity of divine madness in reaching the pinnacle of poetic achievement. Like Plath in the opening line of her last poem, Socrates suggests that poetic “perfection” exists at the peak of affective disturbance: “If a man comes to the door of poetry untouched by the madness of the Muses, believing that technique alone will make him a good poet, he and his sane compositions never reach perfection, but are utterly eclipsed by the performances of the inspired madman” (51).

The correlation between bipolar disorder and poetic creativity has continued to surface throughout literary history. In Touched with Fire, Kay Redfield Jamison speaks from experience as a bipolar writer. Researching the psychopathology of eighteenth-century British and Irish poets, she found this group to be thirty times more likely than the general population to be bipolar (1992:61-72). She quotes Lord Byron, a prominent poet at the turn of the nineteenth century who noted a peculiar trend among his colleagues: “We of the craft are all crazy. Some are affected by gaiety, others by melancholy, but all are more or less touched” (qtd. in Jamison 2). Though Byron implies the occurrence of depression and mania in separate populations, both of these opposing dispositions converged in him and a plenitude of other poets. In fact, Jamison cites no fewer than 84 prominent poets as having been “touched” with bipolar disorder (267-268). Research compiled by Alice Flaherty, an expert on hypergraphia and herself a bipolar writer, indicates that the occurrence of bipolar disorder in writers is tenfold that of the rest of society (2004:30). In poets, the rate is fortyfold.

These statistics and retrospective diagnoses strongly suggest bipolar disorder as the mental illness that plagued Plath. Her psychiatrist Dr. Horder acknowledged after her death that she “was liable to large swings of mood, but so excessive that a doctor inevitably thinks in terms of brain chemistry” (Stevenson 1989:298). John Maltsberger, a Harvard psychiatrist specializing in suicidology, presents a case based on biographical evidence and accounts from her friends:

Sylvia Plath died of suicide on 11 February 1963, roughly four months after she was separated from her husband, Ted Hughes. Almost certainly she suffered from a bipolar disorder.
She was depressed, sometimes furiously angry, excited, perhaps sometimes briefly ecstatic, and preoccupied with suicidal images of metamorphosis in the months before she put her head in the gas oven (1997:293).
Not only in her death, though, did Plath substantiate psychiatrists’ claims; her literary legacy is vivid with elation and despair. In The Bell Jar, Plath’s novel on psychotic collapse, she speaks for herself through Esther Greenwood: “If neurotic is wanting two mutually exclusive things at one and the same time, then I’m neurotic as hell” (1971:140). More direct evidence of Plath’s bipolar tendencies exists among the insights preserved in her published journals. “I have the choice of being constantly active and happy or introspectively passive and sad,” she reflects on the poles of her moods. “Or,” she continues with an ultimatum on which she would follow through, “I can go mad by ricocheting in between” (2000:59). Illustrating her choice of the latter in “Edge,” Plath depicts a convergence of moods by juxtaposing light and dark images, a statue’s pure white marble and the “blacks” of a hooded moon. Indeed, the entirety of the collection leading up to this last poem is proof of both madness and genius. Brian Cooper, a professor
of psychiatry at King’s College, remarks on the co-occurrence of Plath’s mental and professional breaks:

The appearance of a posthumous volume of verse, Ariel, established Sylvia’s reputation, and slowly the recognition grew that this young woman, whilst engaged in a life-and-death struggle with depression, had in the last months of her life achieved a literary breakthrough, producing some forty remarkable poems in an intense burst of creative energy (Cooper 298).
Although the correlation seems inversely proportional, Plath’s mental well-being and professional success were very likely related. Modern science, however, is only beginning to recognize the qualities of bipolar cognition that have a causal connection with creativity. The current cognitive research draws conclusions from generalized tests, providing preliminary indications of a propensity for creativity in bipolar individuals. In one study ascertaining long-term inclinations for professional and artistic creativity, the Lifetime Creativity Scales assessed subjects through an extensive interview process (Richards et al. 1988). Mean peak creativity was found to be elevated in manic-depressive versus control subjects. For those with “cyclothymia,” a mild form of bipolar disorder in which moods alternate less jarringly between hypomania and depression, increased creativity was likewise reported. Another study measured creativity with the Barron-Welsh Art Scale (BWAS), a test which confers higher scores for preference of asymmetric and complex figures over symmetric and simple ones (Santosa et al. 2007). Bipolar and creative-discipline groups scored roughly fifty percent higher than controls. In a study investigating temperament-creativity relationships, scores from the BWAS test were combined with several temperament evaluations to find that higher creativity scores were associated with neuroticism and cyclothymia (Strong et al. 2006). These results suggest that obsessive cognition and variability of affect, both characteristic of bipolar disorder, have a positive influence on creativity. And yet, while bipolar diagnosis and independent symptoms converge in pointing towards general creativity enhancement, scientists have yet to probe the cognitive origins of distinctively poetic creativity. Philosophy, history, and science have thus far deduced, rather ontologically, a tenuous association between the label of bipolar disorder and the label of poetic creativity. What remains to be articulated is the way in which poetry is particularly conducive to the expression of bipolar cognition, and the particular way in which bipolar cognition induces and influences poetic creativity. With “Edge” as an example of exceptional poetic creativity and with insight into Sylvia Plath’s mind through her journals and behavioral accounts, this case study will draw forth more precise linguistic and cognitive analyses.

Bipolar Disorder and Linguistic Style

Glancing back to the Greeks, classical poets had anticipated an important distinction. Even before mental illnesses were clinically delineated and bipolar disorder identified, the type of madness induced by mercurial moods was worshipped as a wellspring of poetic inspiration. Many a magnum opus of Greek poetry was attributed to Dionysus, god of the epiphany, who not coincidentally grappled with euphorias and melancholias (Jamison 1993:50). This implicit coupling of abnormality in moods and creativity in poetry is now being confirmed as a veritable correspondence between bipolar linguistic style and the linguistic style particular to poetry. Psychological research links bipolar disorder to the definition of cognitive creativity, but it has not yet addressed why the bipolar cognitive style seems to have a particular affinity for poetic expression. An ideal finding would be a mechanism of bipolar disorder that predisposes creative individuals to expression through poetry.
What has so far been established, with promising implications, is a prevalence of bipolar disorder in poets as compared to other types of writers or artists. In a study of 291 world-famous men, poets were reported as the most likely to have bipolar disorder (Post 1994). Kay Redfield Jamison observed a similar trend among British writers and artists of national acclaim (1989). Half the poets, a rate significantly higher than that of novelists, playwrights, biographers, and visual artists, had previously been treated for mood disorders. Finally, in an extensive psychopathological examination of more than a thousand creative professionals that used biographical
review in The New York Times as a standard for popular renown, poets were reported to have higher rates of depression, mania, and psychosis than artists of any other genre (Ludwig 1992:342). Beyond being affiliated with mood disorder, the works of bipolar poets point directly to aspects of psychopathology. Antoon Leenaars, recipient of the International Association for Suicide Prevention’s Stengel Award for research in suicidology, has investigated several poems from Plath’s book Ariel according to a list of protocols for suicide risk (1998). “Edge” exhibits evidence of no fewer than 11 of these 35 protocols. The first line, for instance, in which Plath casts perfection as possible only in death, suggests grief from “losing an ideal” and a “frustrated need” for perfection (Leenaars 1998:635, 634). Plath exemplifies “pitiful forlornness” and the wish “to flee” when lamenting having “come so far, it is over” (Leenaars 1998:632, 635; Plath 1999:93). All in all, the protocols evident in “Edge” can be summarized as follows: inability to adjust, emotional detachment, hopelessness, fixation on grief, introversion, manic-depressive mood disturbance, self-punishment, unmet expectations, inversion of murderous impulses on the self, unwillingness to accept loss of an ideal, and desire to egress. While it is apparent that “Edge” is the product of an unstable mind, Leenaars does not expand on how the presence of these protocols affects the creative quality of the poem.

Having described poetry in terms of bipolar disorder, it is now necessary to define creativity in terms of poetry. Alice Flaherty, while stressing the inseparability of social context from analyses of creative works, cites the essential elements of creativity:

Creativity requires novelty because tried-and-true solutions are not creative, even if they are ingenious and useful. And creative works must be valuable (useful or illuminating to at least some members of the population) because a work that is merely odd is not creative (2004:51).
Applying these parameters to linguistic expression, creativity can be pinned down as a new and useful command of language. Grasping a sense of poetry, however, proves more difficult, as it flits among various tenuous definitions. On the one hand, according to poet G. Burns Cooper, poetry distills the metaphors and images of literary prose into a denser form (1998:1). Linguist and literary theorist Roman Jakobson, on the other hand, claims that the significance of sound and shape distinguishes poetry as a fundamentally different mode of writing (1987:225). Unlike prose, in which the pronunciations of words and the line breaks of passages are left to arbitration, poetry maintains a deliberate relationship between the way it is written and what it means. In poetry, then, meaning can be derived from both the words themselves and the way in which they are arranged on the page. A synthesis of these definitions deems poetry creative when the resonance of its content and construction arises in a novel and valuable way.

Considering the high rate of bipolar disorder in poets, it comes as no surprise that there is profound similarity in the type of creativity characteristic of poetry and of bipolar cognition. Qualitative psycholinguistic analyses reveal the way in which bipolar individuals effectively poeticize their cognitive states. To hold stylistic studies flush to the definition of poetic creativity, separate frameworks for content and construction are applied here to the bipolar linguistic expression represented by “Edge.” First, peering through the lens of creativity in content, cognitive distortions identify certain styles of tone and voice that are often used by writers with mood disorders. From “magnification” to “personalization,” these rhetorical devices are both conducive to creativity and prevalent in the works of poets with mood disorder. In a study of poets labeled depressed or nondepressed, two Emory professors found a higher incidence of cognitive distortion in the former group (Thomas and Duke 2007). Their diagnosis of depression, however, is up for debate; the “depressed” group consisted entirely of poets listed by Jamison as bipolar (1993:267-268). Another study ranked cognitive distortions as a better predictor of suicidal ideation than self-report measures (Wedding 2000:140). “Edge” is no exception to these findings, exhibiting numerous distortions and exemplifying poetic creativity (see the sketch following this list):

(i) Overgeneralization refers to the extension of a rule from one isolated set of circumstances to other unrelated sets. This device points to the predilection for inductive reasoning characteristic of bipolar cognition (Johnson and Leahy 2005:143). In the first line of “Edge,” Plath
overgeneralizes her own death by casting it upon “the woman” as an everywoman figure, thus declaring the universality of her novel experience and expanding its relevance to a wider range of readers. Plath’s suicide was an extreme consequence of affective disturbance, but there is nonetheless popular emotional value in the idea of seeking stasis from the turmoil of everyday life.

(ii) Magnification reflects manic grandiosity through an overstatement of significance, much like hyperbole. Through her mention of “a Greek necessity,” likely alluding to the mythic Medea, Plath magnifies herself to divine proportions. Just as Medea was abandoned by her husband, so Plath separated from Ted Hughes due to suspicions of philandering—but the novel twist on classical mythology is that in reality, Plath kills herself rather than her children as a form of vengeance. Contemporary value exists in how this illuminates the woman’s too-often tragic role in the dissolution of romantic relationships.

(iii) Conversely, minimization denotes depressive feelings of worthlessness in the way it demeans items of significance. With the metaphor of “a white serpent” for “each dead child,” human life is reduced in value to that of an animal—a snake, no less, with all the negative connotations it evokes. Plath continues to minimize in the next line by describing the pitchers as “little.” By diminishing the objects at the foot of the statue, Plath elevates the woman into a reification risen larger than life through death, thereby supporting the overarching goal of the poem with a novel combination of images.

(iv) Finally, dichotomous thinking divides ideas into “all-or-none” terms, much as bipolar disorder splits moods along opposite poles. The moon, with “nothing” to lament, displays a total apathy that usefully reinforces Plath’s calm composure in the face of death. The novel juxtaposition of the moon and a Grecian statue, representing the dysphoric extremes of lost emotion and lost will to live, aptly captures a depressive state of mind. Ultimately, Plath’s poetry has gained appeal for the way it taps into the negative affect which all people experience to some extent.
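Thomas and Duke coded their corpus by hand; to make the idea of tallying distortions concrete, a minimal Python sketch follows. The marker lists, function name, and scoring below are hypothetical conveniences for illustration only, not part of any instrument cited in this paper; a real study would rely on trained raters or validated dictionaries rather than substring matching.

```python
# A toy illustration of distortion-tagging in a poem: each line is flagged
# when it contains a lexical marker loosely associated with a distortion.
# The marker lists below are invented for demonstration purposes only.

DISTORTION_MARKERS = {
    "overgeneralization": ["the woman", "we have"],
    "magnification": ["perfected", "necessity"],
    "minimization": ["little", "empty"],
    "dichotomous thinking": ["nothing", "it is over"],
}

def tag_distortions(lines):
    """Return (line, distortion) pairs wherever a marker phrase appears."""
    hits = []
    for line in lines:
        lowered = line.lower()
        for distortion, markers in DISTORTION_MARKERS.items():
            if any(marker in lowered for marker in markers):
                hits.append((line, distortion))
    return hits

edge_excerpt = [
    "The woman is perfected.",
    "One at each little",
    "Pitcher of milk, now empty.",
    "The moon has nothing to be sad about,",
]

for line, distortion in tag_distortions(edge_excerpt):
    print(f"{distortion:>22}: {line}")
```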
By distorting her cognition, bipolar disorder directed Plath down novel paths of thought. Guiding those ideas to the page through the cognitive distortions, she expressed her eccentric perception so that others could relate, infusing her work with value to society. In this way, the cognitive distortions identified by Thomas and Duke are not only associated with affective disturbance in eminent poets, but are furthermore alimental to the creativity in the content of “Edge.”

Second, in order to understand the bipolar influence on linguistic construction, University of Chicago linguist James Goss has proposed a new application of the Growth Point model (2006). Defined as “the smallest unit of language and thought that contains the properties of the dynamic whole,” the Growth Point allows linguistic expression and cognition to be studied as an interaction rather than as parallel tracks. For those with bipolar disorder, who tend to pivot on a Growth Point and bound off in unexpected directions, both the psychological goal and the linguistic form of one Growth Point affect the next. With implications as to why bipolar individuals seem so predisposed to writing poetry, Goss argues that an overemphasis on linguistic structure in those with bipolar disorder results in frequent use of poetic devices such as punning, rhyming, and alliteration. When his analyses of discourse are applied to the written word, the poem “Edge” embodies several of the features he lists as bipolar idiosyncrasies (a schematic illustration follows the list):

(i) Perhaps most pertinent to poetry is the tendency for bipolar writers to select Growth Points by words similar in sound, called clang associations. In lines 12 through 16, Plath’s comparison of the statue to a flower seems to be driven as much by the purpose of the poem as by the musicality of the words. The idea of folding petals sparks the rhyme “rose close,” while “bleed” initiates the assonant descriptions “sweet” and “deep” in the next line. These sound associations infuse Plath’s writing with a high degree of lyricism,
which some might argue is the critical factor distinguishing poetry from prose.

(ii) The paragrammatical segment departs from proper syntax, allowing the bipolar writer to manipulate semantic meaning. In lines 2 and 6, Plath breaks lines where there would not ordinarily be pauses in speech, causing her audience to dwell on the words “dead” and “bare.” By slowing down the reader, this construction reinforces the tone of stasis that Plath evokes with the image of a statue. The stark isolation of these words reveals their key role in the lines to follow; through death, Plath attains “accomplishment,” and she is so “bare,” so stripped of motivation, that her journey must now be “over.” Plath thus adjusts the typical structure of language to support the meaning of its content.

(iii) Tangentiality is digression from a linguistic goal. Driven by associative leaps from Growth Point to Growth Point, it reflects the looseness of association in bipolar cognition. In “Edge,” Plath makes two swift shifts in images. First, she steps away from the central woman figure and into a profusion of garden imagery, her change in topic hinging on the minor metaphor of folding “as petals.” Next, moving from her image of the “night flower,” Plath enters into a description of the moon in the last two stanzas. The cold and bare statue is made vivid in death through the metaphor of a closing rose, and the poem is projected onto a vaster context of coldness and apathy through the image of the moon. Thus, the use of tangentiality makes for effective poetry by recombining seemingly unrelated images so that the emotional effects of the poem are heightened.

(iv) Especially present in manic discourse is the overproduction of intentions, which consolidates the bipolar writer’s racing thoughts into a word or semantic structure with a multiplicity of meanings. In “Edge,” for example, Plath pursues two metaphors in the same poem. First, the “illusion of a Greek necessity” suggests the statue is the mythic Medea, with the children at the foot of the statue representing her spiteful act of infanticide upon discovery of her husband’s infidelity. Alternatively, the “Greek necessity” might represent Plath’s muse, now only an “illusion.” She regards her previous works of poetry, the “scrolls of her toga,” as falsely inspired, and she is “perfected” in her vow to write no more, leaving herself and her works to rest as a stone-cold monument. The construction of “Edge” compacts Greek concepts of myth and muse into a dense metametaphor of the statue as Plath, reinterpreting them in a novel and clever way as the personal and professional factors that converged in her choice of suicide. Relating the events of Plath’s life to the poem, readers are sympathetic both towards her Medea-esque desperation after splitting from husband Ted Hughes and towards her worries, oft expressed in her journals, that her writing was to no avail.
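To give the notion of clang association a concrete gloss, the sketch below tests word pairs from “Edge” for shared endings, a crude spelling-based proxy for rhyme. The heuristic is an assumption made purely for illustration, not a method from Goss’s paper; as the assonant pair “sweet”/“deep” shows, spelling alone misses sound patterns that a true phonological analysis would catch.

```python
# A crude, illustrative rhyme check: two words "clang" here if their final
# letters match. Spelling is only a stand-in for pronunciation, so assonant
# pairs like "sweet"/"deep" slip through this toy test.

def clangs(word_a, word_b, tail=3):
    """Return True if the two words share the same final `tail` letters."""
    return word_a.lower()[-tail:] == word_b.lower()[-tail:]

pairs = [("rose", "close"), ("sweet", "deep"), ("bleed", "deep")]
for a, b in pairs:
    verdict = "clang" if clangs(a, b) else "no clang (by spelling)"
    print(f"{a} / {b}: {verdict}")
```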
Through these techniques in structuring language, characteristic of both poetry and bipolar linguistic expression, Plath infuses creativity into the construction of “Edge.” Plath is creative not only in her choice of images but also in the way she presents them. The flow of contextual elements according to sound and structure results in a novel combination of images that illuminate her theme of stasis from an array of different angles. Looking through the lenses of cognitive distortion and the Growth Point model, it becomes clear how bipolar idiosyncrasies in content and construction contribute to poetic creativity.

Zooming out from line-by-line analyses of “Edge,” the genre in which Plath wrote is also an indicator of bipolar disorder. Her spotlight on the statue as a metaphor for herself denotes a focus on the self that has frequently been observed in the linguistic style of bipolar individuals. Stirman and Pennebaker found, through text analysis of 300 poems from nine suicidal poets, that use of the first person was significantly greater for experimental subjects than for nonsuicidal controls (2001). (A toy version of such pronoun counting is sketched near the end of this section.) All members of the suicidal group, which notably contained Sylvia Plath, are also listed by Jamison as having bipolar disorder (1993:267-268). Not surprisingly, in the mid-twentieth century, the confessional movement was pioneered by Plath and other bipolar poets such as John Berryman, Anne Sexton, Theodore Roethke, and Robert Lowell. In a review of Lowell’s Life Studies, literary critic M. L. Rosenthal was the first to recognize “confession” as an aim of poetry reflecting the widespread urge “to build a great
poem out of the predicament and horror of the lost Self” (1959:155). Indeed, in her journals, Plath articulates her compulsion while acting on it: “I will write until I begin to speak my deep self” (286). While self-centeredness tends not to be lauded as a personality trait, it makes for effective poetry. In Aesthetics, a philosophy of art focusing on the relation of form and matter, the German philosopher Hegel contends that the poet is meant to be his own subject:

As the center and proper content of lyric poetry there must be placed the poetic concrete person, the poet…. External stimuli, express invitations, and more of the like are not in any way excluded. But in such a case the great lyric poet soon digresses from the proper topic and conveys himself (1998:1129).
This sort of poetic aesthetic is exceptionally capable of captivating readers for several reasons. First, poets command greater authority in their poems by seizing the world from their own viewpoints. Before confessional poetry, it was ambiguous whether the speaker of a poem was the author or a persona; poets like Plath eliminated any doubts by explicitly referring to their own lives. Confessional poets can be sure that they are experts on their subject matter, and readers will more readily accept the truths of crafted creative expressions that have been extracted from real events. Additionally, poets who speak from personal experience are better able to engage their audiences. It feels more natural for a reader to identify with one speaker, to experience something vicariously through a focused beam of sensory stimuli, than to extrapolate personal relevance from general statements. Finally, this writing technique rouses emotional identification in readers. Jakobson asserts that, in lyric poetry, the first-person voice functions to evoke emotion (1982:26). Paradoxically, as a poet narrows the lens of a poem’s worldview to encompass only his or her own perspective through the slim pronoun “I,” the poem delves deeper into the visceral responses of readers who relate it to themselves. The poet digs past passive observations of an environment towards the heart and guts with which humans react to and interact with problems and people and themselves, pulling readers into the tangle.

These advantages of the confessional style help account for why “Edge” has risen to eminence. The noteworthy departure from Plath’s typical employment of the first person, which constitutes the voice in 37 of the 42 poems in Ariel, is as if to emphasize its absence. The “woman” at the center of the poem is continuous with the “golden child,” the “virgin,” the “mother” who speaks in previous poems, but the leap that the reader must make is to realize that the narrating observer is still that same speaker (Plath 1999:52, 63, 1). Existing in the chasm between the “dead body” being observed and the disembodied observer, as if Plath were eulogizing herself from her own casket, the poet disrupts the balanced footing with which readers guilelessly approach the poem. Instead of leading decorous chassés about the subject matter, Plath drags her audience into clashes of selves split between speaker and subject matter, between depression and mania colliding in suicide, between death and a living poetic legacy. And yet, despite the dynamism arising from this tension, she maintains calm composure from a detached distance. Facing Plath’s stone-cold emotional numbness, the reader overcompensates with shock and horror. In the end, the prophecy in “Edge” having been validated in Plath’s death, the poet tragically earns authority for her suicidal ruminations. The public, aware that suicide was the consequence of a mind clinically split, nevertheless identifies with the desire for stasis.
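The pronoun counting mentioned above can be made concrete with a minimal sketch. The word set and rate formula below are simplified stand-ins for Stirman and Pennebaker’s computerized text analysis, assumed here purely for illustration; fittingly, the opening of “Edge” scores zero, since it contains no first-person pronouns at all.

```python
# A toy approximation of first-person pronoun counting in a poem. The pronoun
# set and the rate formula are illustrative stand-ins, not the instrument
# Stirman and Pennebaker actually used.
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    """Fraction of word tokens that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

edge_opening = "The woman is perfected. Her dead Body wears the smile of accomplishment,"
print(f"first-person rate in 'Edge' opening: {first_person_rate(edge_opening):.3f}")
```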
The question remains, however, as to just what it was about Plath’s bipolar cognition that caused her to write in such a way. Next, the abnormal cognition underlying the composition of “Edge” will be explored more thoroughly.
Bipolar Disorder and Cognitive Style
Although recent research establishes the prevalence of bipolar disorder in eminent poets, mental illness alone does not guarantee talent in poetry. When Socrates refers to the “madness of the Muses,” he suggests that the passion induced by mental illness, when combined with existing poetic skill, may enhance a poet’s chances of expressing extraordinary creativity (qtd. in Jamison 1993:51). To better understand why the bipolar mind is so predisposed to and proficient at writing poetry, it is crucial to comprehend how bipolar disorder affects both ability and motivation. Independent investigations of manic and depressive moods in terms of creative drive and skill begin to reveal why bipolar disorder is so common among poets of uncommon renown. The case of Sylvia Plath demonstrates how variation along the bipolar spectrum is correlated with variation of creative drive and skill, of the quantity and quality of poetic creativity. The most florid display of bipolar disorder—and the most common mood associated with creative writing—is most certainly mania. According to the Diagnostic and Statistical Manual of Mental Disorders, mania presents in patients with feelings of euphoria, elevated self-esteem, talkativeness, increased energy, insomnia, paranoia, and psychosis (2000:357-359). It may be that the hyperactive cognition of bipolar individuals spurs them to get a handle on their thoughts in writing. Or, for those already predisposed to take up the pen, the euphoria and eagerness of manic states may inspire a fever for writing. Psychological research supports this speculation. In a study examining how creativity varies with the spectrum of moods expressed in bipolar disorder, subjects classed as euthymic, hyperthymic, dysthymic, and cyclothymic were assessed for creativity (Shapiro and Weisberg 1999). Scores on the Adjective Checklist Creative Personality Scale, which rates subjects according to endorsement of traits associated with creativity, were highest for hyperthymic subjects. This suggests that moods accompanied by high energy levels and general happiness are most conducive to creativity in those with bipolar disorder. Although Plath has been cited as exemplary of unipolar depression, the evidence for mania in the last period of her life is overwhelming. Maltsberger illustrates Plath’s manic qualities during the time that Ariel, her final and most feted volume of poetry, was composed: She had been subject to angry, sometimes violent, paranoid outbursts for years, but now friends found her “distraught” and noticed that sometimes she talked hysterically. She had difficulty sleeping.... She often seemed paranoid. In January 1963 she seemed excited and “ecstatic.” A friend noticed she had a quality of “incandescent desperation.” The night before her death she was found standing motionless in a freezing cold hall; she claimed she was having “a wonderful vision.” During October 1962 she was feverishly and brilliantly creative, sometimes writing several poems in one day (Maltsberger 1997:293-294).
In Plath and other writers, mania has been associated with increased rates of cognition and creative productivity. Plath’s pressured speech and prolific writing during her manic episode are features of hypergraphia, a medical condition that causes individuals to write excessively and compulsively (Flaherty 2004:24-25). Furthermore, her poetry casts light on the way in which creative drive was essential to her being. In one interpretation of “Edge,” Plath presciently laments the loss of her creative drive with the metaphor of a muse, that “Greek necessity.” Although divine inspiration floods through “the scrolls of her toga,” which represents her collected works, Plath’s font of passion for poetry is nearly dried up. Without her muse, her burning desire to write, Plath loses her ability to write poetry and, so it happened, to live. In addition to overproducing words, manics tend to overinfuse meaning into what they write. Cognitive distortions such as minimization and magnification are examples of how extremist thinking is manifested on the page. Likewise, many features of bipolar writing identified by Goss, such as clang association and paragrammatical segments, are exploitations of sound and structure that inject deeper levels of meaning into the way thoughts are expressed. Flaherty describes how the characteristics of manic linguistic expression are also part of the reason why manics write: Manics write because what they are writing about seems vitally important to them, worth preserving. Manics write because one topic reminds them of another, not an
uncommon method of composing for nonmanics, but one that, when taken to a manic extreme, psychiatrists call flight of ideas. Manics write because the sounds and shapes of words entrance them. Hence their characteristic rhyming and puns (known as clang associations), and hence also the high frequency of manic-depressive poets (2004:38).
Perhaps the elevated belief in the value of their words is why so many bipolar individuals are writers—it takes a megalomaniac to believe that one has the authority to speak to wide audiences and be forever preserved in print. And perhaps this is why so many bipolar poets have taken to the confessional style. Having discovered words as a way to organize all the meanings flying about in their heads, they write in search of the meaning lost amidst the chaos of their lives. They write because their perception of the world is so intense, like sanity only “much, much more so,” as Flaherty puts it, that their mind requires some outlet (2004:13). Just as adamantly as bipolar writers believe in the worth of their words, they believe that their lives are worthy of print. Plath’s last poem can be seen as her means of commemorating her life through poetry and elevating it to epic proportions. “‘Edge’ reads not only like an obituary,” Leenaars and Wenckstern observe, “but like a Greek tragedy” (1998:628). Indeed, in an investigation of the psychopathological difference between everyday and eminent creativity, Harvard professor Ruth Richards implies that the grandiosity of bipolar thought is related to the motivation to create: Extracreativity factors, related to manifest bipolar disorders themselves—including a driven, “obsessoid,” work-orientation; ability to think in broad if not grandiose terms; a sense of “standing apart” from the mainstream; and a need for more publicly recognized achievement to validate a fluctuating sense of self—might raise the odds for eminent level creativity when creative talent is already present (Richards 1993:213).
It is important to note that “creative talent” is not necessarily elevated directly by bipolar disorder. Although a fiery desire to write is often correlated with mania, not all hypergraphics are eminent poets. Nonetheless, it may be that the very drive to write is one way in which good poets are elevated to greatness by “rais[ing] the odds for eminent level creativity.” As a deluge of ideas lets loose into a flood of words on the page, Flaherty suggests that manic individuals take advantage of their wealth of thoughts according to the Darwinian theory of creativity (2005:148-149). With thoughts falling along a Gaussian distribution, an increase in idea generation results in a proportional increase in the number of those that are novel and useful. A similar technique is used by mentally stable writers who brainstorm many ideas before selecting the most creative for translation into writing. Hypergraphics furthermore benefit from the practice effect as they improve through task repetition. For Plath, extensive journaling likely served as an overexpression of everyday musings from which she could select the most provocative for her poems. In this way, it seems that passion for writing not only initiates the taking up of the pen, but also continues to add creative flourish to what is written. It is not just creative drive that is increased in manic individuals. Research shows that their thought and linguistic expression are processed not only at an increased rate, but also with increased diversity. In a study of artistic and bipolar temperaments, Strong et al. found that subjects who scored higher on the BWAS test for creativity also frequently exhibited “openness” on tests of temperament (2007). Richards proposes an “overinclusive” cognitive style for bipolar individuals (1993:214). The bipolar mind, by these models, is overwhelmed with emotions and ideas due to an overall inability to inhibit unnecessary information. Flaherty argues that creative individuals are those who are able to make sense of these excesses, to extract novel ideas from “loose, cross-modal associations” (2005:149). Shifting suddenly between images throughout “Edge,” Plath follows the flow of cognitive associations in a linguistic display of what Goss identifies as tangentiality. The juxtaposition of a statue, flower, and moon might seem random and startling if it were not for the fluidity with which Plath connects them through clang associations, drawing calm and creativity from the chaos. Her overinclusive cognitive style during a state of mania may thus be responsible for this instance of linguistic creativity.
Although mania is more directly implicated in the picking up of the pen, depression also plays a role in the translation of thoughts to words. A Janusian process, characterized by the simultaneous conception of equally valid and yet antagonistic ideas, has been proposed to describe bipolar cognition (Rothenberg 2001). Flaherty postulates that the creative cycle rotates between Freud’s primary and secondary thought processes, between thinking that is emotionally charged and divergent and thinking that is logical and focused (2004:60). Jamison reiterates that the circle of mood states turns in sync with the circle of writing and revision. “Work that may be inspired by, or partially executed in, a mild or even psychotically manic state,” she explains, “may be significantly shaped or partially edited while its creator is depressed” (1993:6). Whereas mania abounds in surpluses—of energy, drive, productivity, thoughts, words—depression slogs in a poverty of interest, ambition, and even the will to live. The DSM-IV identifies the symptoms of depression as feelings of sadness or apathy, reduced energy, increased need for sleep, trouble concentrating, loss of pleasure in activities once enjoyed, and suicidal ideation (2000:349-352). “Edge” illustrates how Plath, imagining bleak images of a “night flower” and a moon robed in “blacks,” accessed negative affect amidst a wildly raging manic state. In her descriptions of the children and vases, employment of the cognitive distortion minimization suggests a sense of worthlessness invading her pressured linguistic expression. Idealizing suicide by describing her dead self as “perfected,” she divulges the depressive symptoms that, within a week, would take her life. In Plath’s dysphoric mania, elevated energy laden with negative affect was directed towards self-destruction only after it had been successfully channeled through self-creation in poetry. Considering how bipolar disorder severs and slaps together severe extremes, it is clear why this mental illness is so often associated with the drive to write poetry. “It is the interaction, tension, and transition between changing mood states,” Jamison writes, “that is critically important; and it is these same tensions and transitions that ultimately give such power to the art that is born in this way” (Jamison 1993:6). When those with bipolar disorder lose coherence in their sense of self, writing is a means of pulling out a single identity and pinning it to the page. Indeed, a typical motivation for all writers is to escape from the mind. “What do prisoners do?” Flaherty inquires while relating the writer to a jailbird singing of its wish to fly free. “Write, of course; even if they have to use blood as ink, as the Marquis de Sade did” (2004:36). In her journals, Plath seeks relief from an unstable mental state, vowing in one passage to “immerse self in characters, feelings of others” (2000:519). Likewise, in “Edge,” Plath flees her mind by writing from the indifferent distance of third person. She is a statue, a metaphor for herself rather than herself, because she yearns for escape from the reality of her own mind. Through these poems, Plath escapes her abnormal neurochemistry by inventing an idyllic alternate reality. The second component of this model regards that which is sought. If writing serves as an escape from the commotion and incongruence of many selves, then what poets write towards is calm and coherence. Poet G.
Burns Cooper considers this a purpose of poetry in general: The mind yearns for order, for a sense that it understands, at least partly, how things are put together. One of the functions of poetry is to satisfy that yearning: poetic meaning connects and organizes seemingly disparate ideas. The ear also yearns for order. One of the functions of poetic rhythm and meter is to satisfy that craving, too; they help to organize the potentially chaotic stream of sound we perceive (1998:2).
Plath expresses this wish in “Edge,” employing lyrical techniques such as assonance and embodying a sense of “classical harmony and proportion” in the image of a stone Greek statue (Lim 1997:87). As an inanimate monument no longer burdened by life and its affective complications, Plath feels at last as if she is “perfected.” Poetry is an especially effective form of scriptotherapy in this manner because, unlike fiction, which requires characters to be sustained for pages on end, each poem need only convey one tight, distinct meaning; each poem achieves unity in its representation of just one identity. Whereas language generally functions to orient speakers to one another, writing poems is a way for poets to orient themselves to themselves.
The creative process is driven not only by separate manic and depressive states, but also by their oscillation and, in dysphoric mania, their combination. In addition to writing through her moods, Plath wrote directly of them, testifying to the divided mind from which her exceptional poetic expression originated. “Edge” thus serves not only as an example of bipolar linguistic style, but also as substantiation of bipolar cognition.
Art from Antithesis
In conclusion, this case study of Sylvia Plath reveals the infrastructure of parallel tensions between bipolar linguistic expression and cognition. The poem proves a prime example of bipolar linguistic expression in the way it arises as art from the tension of content and construction. Likewise, it is the psychological tension of oscillating and overlapping oppositional moods and selves that drives bipolar individuals to compose poetry as a means of escaping their minds and arriving at sanctuary on the page. Although these mental tensions feel disconcerting to those with bipolar disorder, propelling Plath so far over the edge that she took her life, they resonate with the standards for poetic creativity so that works of what Plath might term “perfection” arise from psychopathological suffering. At the edge of mania and depression, driven to the very edge of her life, Plath pushed through emotional chaos and onto the page in her last masterpiece, “Edge.”
works cited
1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-IV-TR. Washington: American Psychiatric Association, 2000.
2. Andreasen, N. C. “Creativity and Mental Illness: Prevalence Rates in Writers and Their First-Degree Relatives.” American Journal of Psychiatry 144 (1987): 1288 – 1292.
3. Braun, Claude M. J., et al. “Mania, Pseudomania, Depression, and Pseudodepression Resulting from Focal Unilateral Cortical Lesions.” Cognitive and Behavioral Neurology 12.1 (1999): 35 – 51.
4. Bundtzen, Lynda K. The Other Ariel. Amherst: University of Massachusetts Press, 2001.
5. Chen, Yuan-Who, and Steven C. Dilsaver. “Lifetime Rates of Suicide Attempts among Subjects with Bipolar and Unipolar Disorders Relative to Subjects with Other Axis I Disorders.” Biological Psychiatry 39.10 (1996): 896 – 899.
6. Cooper, Brian. “Sylvia Plath and the Depression Continuum.” Journal of the Royal Society of Medicine 96 (2003): 296 – 301.
7. Cooper, G. Burns. Mysterious Music. Stanford: Stanford University Press, 1998.
8. Flaherty, Alice W. “Frontotemporal and Dopaminergic Control of Idea Generation and Creative Drive.” Journal of Comparative Neurology 493 (2005): 147 – 153.
9. ---. The Midnight Disease: The Drive to Write, Writer’s Block, and the Creative Brain. Boston: Houghton Mifflin, 2004.
10. Drevets, Wayne C., et al. “Subgenual Prefrontal Cortex Abnormalities in Mood Disorders.” Nature 386 (1997): 824 – 827.
11. Goodnick, Paul J. Mania: Clinical and Research Perspectives. Washington, DC: American Psychiatric Press, 1998.
12. Goss, James. “The Poetics of Bipolar Disorder.” Pragmatics and Cognition 14.1 (2006): 83 – 110.
13. Haldane, Morgan, and Sophia Frangou. “New Insights Help Define the Pathophysiology of Bipolar Affective Disorder: Neuroimaging and Neuropathology Findings.” Progress in Neuro-Psychopharmacology and Biological Psychiatry 28.6 (2004): 943 – 960.
14. Hegel, Georg Wilhelm Friedrich. Aesthetics. Trans. T. M. Knox. Vol. 2. Oxford: Oxford University Press, 1998.
15. Jakobson, Roman. Selected Writings: Poetry of Grammar and Grammar of Poetry. Netherlands: Mouton de Gruyter, 1982.
16. ---. The Sound Shape of Language. Berlin: Mouton de Gruyter, 1987.
17. Jamison, Kay Redfield. “Mood Disorders and Patterns of Creativity in British Writers and Artists.” Psychiatry 52 (1989): 125 – 134.
18. ---. Touched with Fire: Manic-Depressive Illness and the Artistic Temperament. New York: Simon & Schuster, 1993.
19. Johnson, Sheri L., and Robert L. Leahy. Psychological Treatment of Bipolar Disorder. New York: Guilford Press, 2005.
20. Leenaars, Antoon A., and Susanne Wenckstern. “Sylvia Plath: A Protocol Analysis of Her Last Poems.” Death Studies 22.7 (1998): 615 – 635.
21. Lim, Sandra. Double Consciousness and the Protean Self in Sylvia Plath’s Ariel. Stanford: Stanford University Essays in Humanities, 1997.
22. Ludwig, Arnold M. “Creative Achievement and Psychopathology: Comparisons among Professions.” American Journal of Psychotherapy 46 (1992): 330 – 356.
23. Maltsberger, John T. “Ecstatic Suicide.” Archives of Suicide Research 3.4 (1997): 283 – 301.
24. Migliorelli, R., et al. “SPECT Findings in Patients with Primary Mania.” Journal of Neuropsychiatry and Clinical Neurosciences 5 (1993): 379 – 383.
25. Norden, M. J., and D. H. Avery. “A Controlled Study of Dawn Simulation in Subsyndromal Winter Depression.” Acta Psychiatrica Scandinavica 88.1 (1993): 67 – 71.
26. Peet, M., and S. Peters. “Drug-Induced Mania.” Drug Safety 12.2 (1995): 146 – 153.
27. Plath, Sylvia. The Bell Jar. New York: Harper & Row, 1971.
28. ---. Ariel. New York: Harper Perennial, 1999.
29. ---. The Unabridged Journals of Sylvia Plath. Ed. Karen V. Kukil. New York: Anchor Books, 2000.
30. Post, Felix. “Creativity and Psychopathology: A Study of 291 World-Famous Men.” British Journal of Psychiatry 165 (1994): 22 – 34.
31. Richards, Ruth, et al. “Creativity in Manic-Depressives, Cyclothymes, Their Normal Relatives, and Control Subjects.” Journal of Abnormal Psychology 97.3 (1988): 281 – 288.
32. ---. “Everyday Creativity, Eminent Creativity, and Psychopathology.” Psychological Inquiry 4.3 (1993): 212 – 217.
33. Rosenthal, M. L. “Poetry as Confession.” The Nation 189.8 (1959): 154 – 155.
34. Rothenberg, Albert. “Bipolar Illness, Creativity, and Treatment.” Psychiatric Quarterly 72.2 (2001): 131 – 147.
35. Santosa, Claudia M., et al. “Enhanced Creativity in Bipolar Disorder Patients: A Controlled Study.” Journal of Affective Disorders 100 (2007): 31 – 39.
36. Shapiro, Pamela J., and Robert W. Weisberg. “Creativity and Bipolar Diathesis: Common Behavioral and Cognitive Components.” Cognition and Emotion 13.6 (1999): 741 – 762.
37. Steinberg, Hannah. “Exercise Enhances Creativity Independently of Mood.” British Journal of Sports Medicine 31 (1997): 240 – 245.
38. Stevenson, Anne. Bitter Fame: A Life of Sylvia Plath. Boston: Houghton Mifflin, 1989.
39. Stirman, Shannon Wiltsey, and James W. Pennebaker. “Word Use in the Poetry of Suicidal and Nonsuicidal Poets.” Psychosomatic Medicine 63 (2001): 517 – 522.
40. Strong, Connie M., et al. “Temperament-Creativity Relationships in Mood Disorder Patients, Healthy Controls and Highly Creative Individuals.” Journal of Affective Disorders 100 (2007): 41 – 48.
41. Thomas, Katherine M., and Marshall Duke. “Depressed Writing: Cognitive Distortions in the Works of Depressed and Nondepressed Poets and Writers.” Psychology of Aesthetics, Creativity, and the Arts 1.4 (2007): 204 – 218.
42. Wedding, Danny. “Cognitive Distortions in the Poetry of Anne Sexton.” Suicide and Life-Threatening Behavior 30.2 (2000): 140 – 144.
43. Yamadori, Atsushi, et al. “Hypergraphia: A Right Hemisphere Syndrome.” Journal of Neurology, Neurosurgery, and Psychiatry 49 (1986): 1160 – 1164.
Cory Adkins
personal choice in nietzsche’s on the genealogy of morality
At the height of his first essay in On the Genealogy of Morality, Nietzsche enters into a dialogue with the reader by inviting him to “have a look down into the secret of how ideals are fabricated on this earth” (27).1 The reader descends into a “dark workshop” where he encounters Nietzsche’s moral critiques made manifest — he hears and smells, but does not see, the self-deceiving proclamations and putrid sickness of the men of ressentiment who blithely consume a self-destructive morality. This descent into moral darkness and putridity, at once frightening and intriguing, becomes paradigmatic both for Nietzsche’s critique of an institutionalized Judeo-Christian slave morality, which he characterizes as being a reaction, a ressentiment, against powerlessness, and for his suggestion of a “reverse experiment” in value making which promises to elevate man out of the darkness and “into the light again” (66). However, Nietzsche remains silent on exactly this reverse experiment. Although he praises several life-affirming and virile moralities, famously those of the Greeks, he resists constructing his own moral system, abstaining in favor of “one stronger than [him]” (67). Because of this seeming moral abdication, we might easily be deceived into labeling Nietzsche as the very sort of nihilist he protests against: one who violently destroys moral values without rebuilding. Nietzsche, however, is no nihilist. We must not forget that in the Genealogy Nietzsche engages the reader in an ethical dialogue, both literally when he invites the reader to “speak!” about the dark workshops and figuratively in the rest of his essays through the use of interpretable aphorisms (26). Through this exchange, Nietzsche not only tears down moral illusions but also challenges the reader as an individual to create a new, self-affirming myth, a set of basic values that justify him in the same way the Greco-Roman myths justified the greatness of those civilizations that Nietzsche so admires. In fact, I will show that, while Nietzsche has no qualms with violently destroying the institutionalized moral illusions that he claims alienate modern man from self-realization, he distinguishes between critiquing moral illusions and moral values. Indeed, Nietzsche recognizes the centrality of such values to the human experience, and encourages the creation of new values, derived from individual will, to reinvigorate the human condition. Nietzsche thus becomes a thinker who recognizes both the elevating potential of moral values and the role of individual choice, rather than institutional influence, in selecting those values. Make no mistake, Nietzsche unabashedly assaults what he perceives to be the pervasive moral system of his time, the Judeo-Christian ethic of ressentiment that he dubs a “slave morality” because of the mediocrity and the self-loathing that it instills in its practitioners (20). However, Nietzsche does not attack this ethic of ressentiment simply because it is a moral construct that he deems damaging to the psyche, but also because it is institutionalized; its practitioners are no longer aware of the roots from which it sprang. The reader’s “look down” into the “dark workshop” where “ideals are fabricated” elucidates Nietzsche’s position. In guiding the reader through this dark workshop, Nietzsche invokes all the tropes he comes to associate with the “men of ressentiment” (27).
The reader descends into a blackness, in which he “cannot see anything,” breathing deleterious, putrid air as he listens with only Nietzsche to guide him (27). This putridity, Nietzsche claims, “incarcerates within itself” man’s intrinsic “instinct” to power, creating a self-loathing that results in mediocrity and stagnancy (59). However, Nietzsche finds
fault not primarily with the effects of this slave morality on the human psyche, but with the manner in which it is adopted. In looking down, Nietzsche presents the reader with beliefs that are “fabricated… in dark workshop[s],” or rather, with beliefs that are produced in the same way a factory disseminates goods, adopted with as little inquiry into their roots as a can of factory-made food (26). This blithe consumption of beliefs frustrates Nietzsche. Transforming a verse from Luke into a description of the misguided passivity of those below, he exclaims that “They [the men of ressentiment] know not what they do”; they are blind to the falsity of the beliefs they readily consume and are helpless to remedy the illnesses that stem from such beliefs (28). Nietzsche, then, does not critique moral values on the whole, but specifically values that are unquestioningly inherited and that allow man to unknowingly debase himself — values which come to control man’s will rather than the other way around. It is this confused and passive adoption of moral illusions that Nietzsche critiques and that he claims makes modern man so mediocre. Nietzsche has no tolerance for such moral illusions and seeks relentlessly to expose them as the “lies” that the reader encounters in looking down into the dark workshop (27). As violently as Nietzsche criticizes these moral illusions, however, he just as ardently embraces self-affirming moral principles so long as they are actively and knowingly created by man. Indeed, for Nietzsche such basic moral values, as opposed to illusions, can elevate modern man into a new state of healthiness and virility. In presenting an alternative to the ressentiment he so violently critiques, Nietzsche praises the Greek conception of the deity in a passage worth quoting at length: That the conception of gods does not, as such, necessarily lead to that deterioration of the imagination which we had to think about for a moment, that there are nobler ways of making use of the invention of gods than man’s self-crucifixion and self-abuse…this can fortunately be deduced from any glance at the Greek gods, these reflections of noble and proud men in whom the animal in man felt deified, did not tear itself apart and did not rage against itself! These Greeks, for most of the time, used their gods to…carry on enjoying their freedom of the soul. (65)
Here Nietzsche not only acknowledges the possibility of salubrious moral values which express “the animal in man” rather than turn it “against itself,” but actually links the success of Greek civilization to its adoption of such basic values. The basic moral principles of the Greeks justified and encouraged the virility of their civilization by encouraging the exercise of the will and by providing excuses for failure, banishing both the “self-torture” of ressentiment and the “turning-away from existence” of nihilism (63). However, for Nietzsche, the fact that the Greeks “used their gods” is just as critical as the purpose to which the gods were used. The Greek pantheon does not control or punish its followers (if they can be called followers) in the way the Judeo-Christian God, the supreme creditor and “Hangman,” does (64); rather, the Greek people create and manipulate their gods to use “as a reason for much that was bad and calamitous” (65). Greek morality becomes an extension of man’s will rather than the other way around, a value rather than an illusion. Nietzsche’s description of his redeemer, a modern man strong enough to carry out the “reverse experiment” of reinvigorating our myths and values, illustrates the contrast between value and illusion (66). The redeemer, a modern-day Grecian, ascends into the “thinner air of higher up,” relying upon his “great health” to “emerge into the light,” a moral characterization that lies in stark contrast to the descent into the dark and putrid workshop where ideals are fabricated (66). We see then that, although Nietzsche rejects fabricated, institutionalized illusions that allow man to unknowingly turn against his intrinsic nature, he just as readily turns to moral values to redeem the human condition—values which man interrogates and controls become central to the “great health” that Nietzsche prophesies (66). The question becomes how to create such great health in the sick and mediocre age that Nietzsche bewails, an age in which the greatness of the Greeks has long been replaced by human complacency and stagnancy. In seeking an answer to this question it is important to note that so far Nietzsche has left us with two seemingly opposed ideas. Might we not object to Nietzsche on the basis that the self-affirming values he embraces may just as easily become passively adopted institutions as the ressentiment that he rejects? Nietzsche’s solution to this problem is critical to his overall philosophy. By ending his essay, and indeed
the dialogue between the reader and him, with silence, Nietzsche asserts the primacy of individual choice in the selection of value systems—moral values are no longer adopted but created by the individual will. Once again, an analysis of the paradigmatic “look down” into the dark workshop elucidates this point. Nietzsche engages the reader in a destructive dialogue, tearing down moral illusions by providing dialogic cues to the reader. When the reader seems content to admit that “this workshop…seems to me just to stink of lies,” Nietzsche forcefully prompts him further, and the reader obediently responds: [Nietzsche:] No! Wait a moment! You haven’t said anything yet about the masterpiece of those black magicians who can turn black into whiteness, milk, and innocence… [Reader:] I understand, I’ll open my ears once more…Now at last I can hear what they have been saying so often: “We good people—we are the just” —what they are demanding...[is] the victory of God, the just God, over the Godless… “the last judgment”, the coming of their kingdom (29)
Nietzsche seems able and willing to encourage the reader to tear down any illusions that may deceive him: Nietzsche prompts him to understand not only that Judeo-Christian morality is “lies” but also that those lies have their origins in the same will to power, in this case the need for judgment, that Judeo-Christian morality reacts against. The reader’s obedience facilitates this understanding. However, when the polemic is complete and it comes time to ascend out of the darkness of the workshop and into the light, Nietzsche offers only silence: “Enough! Enough!” he cries (29). It would be easy enough to assume that this silence shows Nietzsche has merely run out of ideas or lacks the philosophical acumen to explicitly describe the sort of morality meant to save Europe from cultural decline. On the contrary, Nietzsche’s silence highlights his belief in the individual will as the mechanism for individual salvation. The disappointing silence, following a guided moral descent and anticipating the antithetical ascent, parallels Nietzsche’s essays on the whole. After unveiling moral illusions, the descent portion of his essays, Nietzsche offers relatively little in the way of moral ascent. He prophesies a redeemer who will “[give] earth its purpose and man his hope again, this Antichrist and anti-nihilist, this conqueror of God and of nothingness” (67). However, this prophecy is followed by the same silence that truncates the reader’s ascent from the dark workshop: Nietzsche exclaims, “Enough! Enough! At this point just one thing is proper, silence” (67). Nietzsche’s silence asserts that, for ascent, unlike for the destructive descent, mere obedience will not do. Nietzsche’s redeemer is not one man but many, a host of independent “spirits who are…acclimatized to thinner air higher up, to winter treks… [who possess] a very self-assured willfulness of insight” and “whose solitude will be misunderstood…as a flight from reality” (66). These spirits create values “in solitude” through their individual wills, relying on their own insight to create meaningful and affirming value systems, thus ascending “higher up…[and] into the light,” in accordance with the tropes Nietzsche attaches to those men who see through illusions and will their own myths and values (66). Just who are these spirits? One answer might be Nietzsche’s readers themselves. Nietzsche, in invoking silence at both the end of the paradigmatic passage and the second essay, has not abandoned us, the readers, to be guided along by some future redeemer but rather gives us the means to redeem ourselves. We cannot merely follow Nietzsche, or any redeemer, to reach the redemptive moral ascent. Nietzsche may pull back the veil, but the only way for us as readers to ever ascend is to create our own values that affirm our natures as distinct individuals. Nietzsche thus emerges not as a thinker who eschews all moral values or who justifies cruel tyranny but as a thinker who embraces a very liberal morality which, he hopes, has the potential to reinvigorate a Europe in cultural decline. Nietzsche does insist that every ethical veil ought to be deeply interrogated and ripped aside, but he just as fervently recognizes the centrality of basic, guiding values in affirming and invigorating human existence. The distinction between veils and values, for Nietzsche, is the distinction between institutional influence and individual will. We may worry
about the possibility of moral relativism stemming from this emphasis on individual will, but that is an issue for another time. For now, it is safe to say that we all stand to learn a little from Nietzsche. When Nietzsche asks, “Who has enough pluck?” before guiding the reader through the dark workshop, he does not solely refer to the descent, the tearing away of illusions, but also to the ascent, that unsettling journey into the thin air and blinding light of higher up (27). After all, there is nothing pluckier than daring to create our own life-affirming values, than daring to will our own redemption.
notes
1 Nietzsche, On the Genealogy of Morality (New York: Cambridge UP, 2006); all pages cited in parentheses are from this edition.
works cited
1. Nietzsche, Friedrich. Nietzsche: ‘On the Genealogy of Morality’ and Other Writings, Revised Student Edition (Cambridge Texts in the History of Political Thought). New York: Cambridge UP, 2006. Print.
Robert Lehman
the life within “neutral tones” by thomas hardy
Neutral Tones

We stood by a pond that winter day,
And the sun was white, as though chidden of God,
And a few leaves lay on the starving sod,
—They had fallen from an ash, and were gray.

Your eyes on me were as eyes that rove
Over tedious riddles solved years ago;
And some words played between us to and fro—
On which lost the more by our love.

The smile on your mouth was the deadest thing
Alive enough to have strength to die;
And a grin of bitterness swept thereby
Like an ominous bird a-wing....

Since then, keen lessons that love deceives,
And wrings with wrong, have shaped to me
Your face, and the God-curst sun, and a tree,
And a pond edged with grayish leaves.
In his poem “Neutral Tones,” Thomas Hardy constructs a movingly familiar world in which the rules of reality melt in a reflection on lost love. On the surface this is a tidy poem in which an emotionally distant speaker remembers a defining interaction with a love from the past. Seeing each other demonstrates to each of them that the love they once shared is now irrevocably cold and lost. A closer analysis of the poem, however, reveals that Hardy is remarking on the great extent to which the rules of reality are subverted by faded love. “Neutral Tones” is an expression of the enduring power of love to rewrite the fundamental dichotomies that define our lives, tearing down the conventional walls of human experience. We are introduced to an external world that exists only as an interpretation of an internal temperament, we experience an interaction in the present in which every moment is defined by the past, and we are told a story that is screaming with unresolved emotion in ripple-less, neutral tones. The first dichotomy that Hardy experiments with throughout “Neutral Tones” is the distinction between the internal and the external world of experience. The very structure of Hardy’s poem works to depict an experience in which the speaker’s internal and external worlds become intertwined in the face of this memory of lost love. Hardy works a powerful symmetry into the poem, framing the speaker’s reminiscence over the nature of his interaction with his love in the opening and closing stanzas by the image of the pond, the sun, the tree, and the leaves. This overall symmetry helps to
melt the division between the external and the internal world. It is the particular image of the external world on this winter day that inspires the journey into the speaker’s internal world in the middle stanzas. This fusion of realities, that the speaker accesses his internal world by interpreting the world he sees around him, is emphasized when we reach the final stanza. The speaker’s solemn internal realization that “love deceives” is now irrevocably connected to that first image of the winter day. The extent to which his internal thought and the external world become one is emphasized through the enjambment in the final stanza, which fuses “your face,” the center of the speaker’s internal world in this poem, with “the God-curst sun, and a tree, and a pond edged with grayish leaves,” the recurring representation of the outside world, which now is no longer distinct from his internal being. The sense that the boundaries of the internal and the external world of the speaker fall away in this poem can be pushed further when considering how Hardy develops two levels of an objective correlative within the poem. An objective correlative is a technique in which the poet describes an objective situation in order to awaken an emotional response in the reader without directly describing the subjective emotion; it uses the outside world to describe the inside world. The first level of Hardy’s objective correlative helps the reader learn about the speaker’s now cold and regretful internal world through the connection to the lifeless setting on this winter day. The second level is that the speaker himself uses the external world to understand the life within him. His reflection on his time with the woman he loved manifests, “is shaped to me,” in the form of the drab image that he creates. The external world gains new meaning through the speaker’s perspective, as it exists only in the lens of his internal life. The speaker relates to and understands the emotion of lost love only in the context of the image that has been dulled to represent his inner temperament. Hardy illustrates, in this sense, how personal perspective defines our interpretation of our physical surroundings. By delving deeper into how the speaker connects with his inner temperament, we start to see another fusion of dichotomous facets of our reality, as the past and the present dissolve into each other. The poem starts with a representation of the present that appears irreconcilable with the past. The stasis of the grey and white winter day, heightened by the dual alliteration in the description of the pond in which “a few leaves lay on the starving sod,” is disconnected from any memory of a greener, once fertile past. This fissure is expanded in the second stanza when the woman’s “eyes” of the present rove distantly over “riddles solved years ago.” At this point in the poem, reality as we know it is intact – the passion of these past lovers’ relationship has now cooled to a dull winter; the present stands distinct from the past. In the third stanza, however, Hardy frames the speaker’s descriptions in repeated paradoxes that illustrate the elastic relationship between the past and the present in the context of lost love. The way in which the speaker engages with the past through these paradoxes continually retraces the trajectory of a relationship that can only be understood in the context of a collision between the past and the present.
The initial images of the first three lines of the stanza produce a positive, hopeful sentiment, akin to what would have been felt in the days in which the speaker was still in love. The happiness of the past, however, is intrinsically tied to the harsh reality of the present. As such, we essentially see the relationship end three times in the third stanza, as hope is struck down time and again, continually undercutting the expectation of the reader. It appears that the speaker is tormenting himself by bringing the relationship back to life in memory just to be able to feel it die again. After the “smile on your mouth” is torn down to be the “deadest thing,” we are given a momentary lift when we think the smile is “alive enough.” This, however, is immediately qualified with the desperate image “to have strength to die.” The “grin” in the next line is quickly tied to “bitterness” and “swept thereby.” These draining paradoxes are another expression of the enduring power of love, as the past love comes surging back to take hold of the present moment. This powerful demonstration of the link between the past and the present, and the description in the final stanza of a form of love that “deceives,” that “wrings with wrong,” points towards a speaker coming to terms with anguishing, tumultuous emotions. The title of the poem, however, is “Neutral Tones,” and the speaker’s distanced descriptions don’t match the apparent intensity of the emotions of his past. We see the last of Hardy’s dichotomies between neutral stoicism and vibrant emotion begin
to blur as they interact with each other in the speaker’s attempt to process this love that was lost. We observe the love of two individuals being vigorously pulled apart in a poem that is utterly devoid of action. In processing this fission, the speaker finds no release for this emotionally taxing memory except to smother it with indifference, with neutrality. The winter day is not described as white, but rather the sun, generally an image of warmth in color and sensation, is blanketed by the feeling of being “white as though chidden of God.” In this very same sense, the speaker is torn with tumultuous emotion, but he suppresses this tension by blanketing himself with an indifferent tone. The fact that the speaker’s vibrant emotion manifests in neutrality gives rise to feelings that are bubbling fiercely and never released. The fact that a situation brimming with emotion is described with monosyllabic words in a persistently neutral tone makes the smothered emotions all the more potent. Hence, once again, conventionally separate elements of neutral and powerful emotions are fused together. We are led to the intensity of feeling through the speaker’s neutrality, which blankets the emotions that stir beneath. It is this sense that the traditional notion of reality is discarded as disparate elements connect to give each other meaning that defines the world that Hardy creates in “Neutral Tones.” The poem, at once static and stirring, passionate and resigned, an introduction and a conclusion to emotion, fashions symmetries and paradoxes to relate to impossibly complicated emotions. The inside world and the outside exist only in relation to each other, the green of the past is only understood through the grey of the present, and it takes a neutral tone of voice to let the audience hear the poem’s silent screams.
Karan Chhabra
Pickled Dreams
Uniquely Modern, Indian Narration in Midnight’s Children
Saleem Sinai and the India he represents in Midnight’s Children are both quite confused. They are torn between the classical and the modern, the Eastern and the Western, the brown and the white. What else to expect from a land with millennia of native history punctuated only by bouts of foreign conquest (the latest by commercial England) and the epic character through which Salman Rushdie gives that land a voice? The irruption of modern Western capitalism, and the social, aesthetic, and political structures it brought, cannot be undone as easily as some Indians would like. After Nehru hails the moment “when an age ends; and when the soul of a nation long suppressed finds utterance” (Rushdie 129), and Saleem is born, the nation and the novel that find utterance are much more complex than the immaculate images of Kashmir in Midnight’s Children’s first few pages. The narration in Rushdie’s novel is equally amalgamated—it is alternately, often simultaneously, novelistic, filmic, didactic, and mythic. But must such a fusion be grotesque? Is there an aesthetic that can accommodate the multitudes “jostling and shoving” (4) inside Saleem, Midnight’s Children, and all of India? To be truly representative, such a form would have to be at once subcontinental and continental, simultaneously conservative and current. Saleem uses the word “pickling” to describe his work. But how does that explain Saleem’s schizophrenic array of voices? Creating an epic for India is a bit complicated—for India’s unique problems, a uniquely Indian aesthetic is needed. Midnight’s Children suggests that with Bollywood film, India has already created its national narrative form. As Rushdie explores the potential of pickling to represent India, then, he tests Bollywood as well. Rushdie begins experimenting with narrative forms in the novel’s very first passage: I was born in the city of Bombay… once upon a time. No, that won’t do, there’s no getting away from the date: I was born in Doctor Narlikar’s Nursing Home on August 15th, 1947. And the time? The time matters, too. Well then: at night. No, it’s important to be more … On the stroke of midnight, as a matter of fact. Clock-hands joined palms in respectful greeting as I came. Oh, spell it out, spell it out: at the precise instant of India’s arrival at independence, I stumbled forth into the world. (3) The first phrase confirms that one is reading a novel, perhaps with a typical first-person start-to-finish life story. But with “once upon a time,” the reader is nudged into the realm of fairy tales. And the reader finds yet another form when the narrator becomes visible and says “No, that won’t do”; that is, the form of oral storytelling. Immediately one feels like a child on an elder’s lap, excited to inherit the stories of their shared background. The child presses for each detail, each meaning to be spelled out; the storyteller obliges. Saleem continues, “Now, however, time (having no further use for me) is running out. I will soon be thirty-one years old” (3). The narrator is far younger than a grandfather, and his task is more urgent than a mere bedtime story. But then the narration becomes less self-referential and more novelistic:
One Kashmiri morning in the early spring of 1915, my grandfather Aadam Aziz hit his nose against a frost-hardened tussock of earth while attempting to pray. Three drops of blood plopped out of his left nostril, hardened instantly in the brittle air and lay before his eyes on the prayer-mat, transformed into rubies. (4) Although Saleem is present in the possessive “my grandfather,” his voice is no longer as recursive or as conversational as before. He uses symbolic language, as before, but he no longer feels the need to explain every ambiguity or every symbol. It falls to the reader to figure them out—which points toward a novelistic tradition different from the spelled-out oral narration above. Elsewhere, Saleem’s narrative voice is even less personal: Time is slowing down for Amina once more; once again, her eyes look through leaded glass, on which red tulips, green-stemmed, dance in unison; for a second time, her gaze lingers on a clocktower which has not worked since the rains of 1947; once again, it is raining. The racing season is over. A pale blue clocktower: squat, peeling, inoperational. It stood on blacktarred concrete at the end of the circus-ring—the flat roof of the upper storey of the buildings along Warden Road, which abutted our two-storey hillock, so that if you climbed over Buckingham Villa’s boundary wall, flat black tar would be underneath your feet. (166) These moments are photographic: they describe scenes, not actions. The language is not figurative, as in the previous passage (“brittle air,” “rubies”), but tangible, confined to colors and shapes. Diction is factual and declarative, relying unusually on forms of “be” (“flat black tar would be underneath your feet”) rather than the active, whimsical verbs of the previous passage, like “plopped” and “transformed.” There is no obvious metaphor for the reader to pick out and decipher. And whatever feeling this narration evokes comes not from the narrator’s attitude, but rather the details presented. In the terms of Seymour Chatman’s paper “What Novels Can Do That Films Can’t (and Vice Versa),” the device of depiction in this third passage is characteristically filmic. He explains, “in its essential visual mode, film does not describe at all but merely presents, or better, it depicts [emphasis author’s own]” (Chatman 128). This passage is seen; the novel’s opening, by contrast, is spoken: “Such is the character of speech: it usually tells us something about the speaker. … The camera, poor thing, is powerless to invoke tone” (Chatman 132). Rather than depicting like a film, each picture containing a thousand words, novels, with authorial particularity, assert detail. The oral narration in the first passage is even farther along that continuum, not merely asserting detail but spelling it out so that nothing is missed. But the novel contains multitudes of such contrasting narrative forms, resisting characterization under any one Western mode. Which, if any, can bring meaning to the whole? As we know, Saleem is in a rush to deliver that meaning: “I must work fast, faster than Scheherazade, if I am to end up meaning—yes, meaning—something” (4). His syntax is, perhaps intentionally, ambiguous. Is he worried about his own meaning, or that of his story? (If he is the story, is there a difference?) And what is “meaning,” after all? He later hints at an answer: “by day amongst the pickle-vats, by night within these sheets, I spend my time at the great work of preserving.
Memory, as well as fruit, is being saved from the corruption of the clocks” (37). The task of his storytelling is to defeat the clocks, to remember. But our usual, Western modes of doing so, we see, fall flat. Saleem’s uncle Hanif tries to write a naturalist film preserving the story of women running and working a pickle-factory, but his efforts are futile because they’re not what India wants. His wife Pia complains to Saleem, “All the world wants Pia to be in rags! Even that one, your uncle, writing his boring-boring scripts! O my God, I tell him, put in dances, or exotic locations! Make your
villains villainous, why not, make heroes like men! … Now he must write about ordinary people and social problems! And I say, yes, Hanif, do that, but put in a little comedy routine, a little dance for your Pia to do, and tragedy and drama also; that is what the Public is wanting!” (277) Pia is open to Hanif’s idea of remembering people otherwise to be lost, but before any of that, for her a film must be easy to enjoy. Hanif’s “boring-boring” films probably require effort—Pia feels no need for that. But this doesn’t sound like Chatman’s conception, in which “it requires special effort for films to assert a property or relation. … Filmmakers and critics traditionally show disdain for verbal commentary because it explicates what, they feel, should be implicated visually” (128). Pia, though, wants villainous villains—she wants entertainment, not a hermeneutic task complicated by the pressure of narrative time. The Western model of film will not preserve India, as Saleem must, because no one is interested in it. Sheila Nayar explains (less melodramatically than Pia) why Hanif’s films were never produced: Visual narrative is like written narrative, in the sense that it too is a text and requires a kind of “reading.” … a “writerly” mindset. But what if an individual does not possess such a mindset? What if, due to his or her functioning in a non-literate or low-literate or oral-privileging environment, s/he does not have the cognitive skills required? (15) Perhaps the subaltern audiences of India want visual assertion—the clarity of tone and attitude that spoken and written narratives have, but with the universality of the concrete and visual, plus some entertainment mixed in. Pia’s pleas would then represent those of a nation: to preserve India, but in a modern way. It is the twentieth century, so oral narrative will not do. A 500-page novel written in English will probably not work either, in a land with several dozen major languages and 300 million non-literate citizens (Nayar 15). And it is more than clear what Pia thinks of modern Western film. So, Nayar asks, “where does one ‘reside’ when life is inhabited without the benefit of text? What is history when it exists bereft of documented and verifiable facts? What can one remember—indeed how must one remember—in order not to forget?” (15). Written text will not preserve India; it is not pickling. The solution she presents to the deficiencies of Western narrative forms is one uniquely Indian: Bollywood film. Nayar claims that “the conventional Indian popular film possesses clear characteristics of oral performance and orally transmitted narratives, conspicuously sharing characteristics with, for example, Homeric epic and the Indian Mahabharata” (14). Bollywood is the form of India’s past epics, like the Mahabharata, thrust forward into Saleem’s and India’s present. Midnight’s Children’s brand of magical realism, then, alludes to Bollywood epic as Rushdie strives to construct India’s written epic. India is usually the setting and the topic of Bollywood narrative, but more importantly, India is its audience. Bollywood directors are not so much in the business of making movies of India as for India. Nayar asks, “can we rightly say that a media image is ‘representing,’ if the spectator it is representing is from the outset noetically excluded from comprehending it?” (22). Perhaps Rushdie asked himself the same question before setting out to write the novel of India.
Indeed, Saleem refines his goal thus: “What I hope to immortalize in pickles as well as words: that condition of the spirit … in which an overdose of reality gave birth to a miasmic longing for flight into the safety of dreams” (415). Bollywood, too, is the genre that will preserve those who get enough reality in everyday life; Bollywood is the genre of dreams. Nayar compresses the qualities of Bollywood narrative—which, she maintains, are circumscribed by orality—into the following criteria:
(i) non-contemplative
(ii) non-realist
(iii) “Manichean”
(iv) loosely plotted (and of kaleidoscopic variety)
(v) kinesthetically arousing
(vi) flashback-using
(vii) tradition-refining (as opposed to originality-seeking)
(viii) favoring formulas and clichés
(ix) brutal in their violence
(x) plagiaristic
(xi) possessing a tendency to “swerve into a happy ending” (21)
With the above characteristics as Bollywood’s residue of orality in mind, we can reconcile much of Midnight’s Children’s narrative variety, and perhaps reveal even more about it when it breaks Bollywood’s mold. One of Midnight’s Children’s most striking qualities is its temporality. The entire novel is a collage of flashbacks, constantly telescoping between the voice of the pickler Saleem in the present and his point in the story, back and forth within that pattern. According to Nayar, this haphazard temporality characterizes oral and Bollywood narrative as well: Without writing, meticulously sculpting a sentence—let alone an entire plot—is quite impossible. Oral narratives are hence, by noetic necessity, episodic, sequential, and additive in nature … constructed around techniques like the use of flashbacks, thematic recurrences, and chronological breaks. … It is pastiche—but quite without the postmodern self-consciousness. (16) Of course, Midnight’s Children is decidedly postmodern and self-conscious—this temporality is created for a reason. There is little sense of a linear narrative; Saleem’s persona as oral narrator (like that in the introductory passage) enters the text repeatedly and unexpectedly. Saleem generalizes this irregular notion of time to the whole of India when he jokes, “no people whose word for ‘yesterday’ is the same as their word for ‘tomorrow’ can be said to have a firm grip on the time” (119). Linear time does not satisfy the needs of oral narrative; the flashback has the advantage of “facilitating a movement between data more easily transmittable in separate containers” (16). Similarly, Saleem compartmentalizes his time into thirty pickle-jars, each with a dominant flavor often different from those of its components. He also later admits that, because of the tempo of his narrative, his entire story is subject to inaccuracy. He says, “Because I am rushing ahead at breakneck speed; errors are possible, and overstatements, and jarring alterations in tone … in autobiography, as in all literature, what actually happened is less important than what the author can manage to persuade his audience to believe” (310). Saleem’s sense of purpose trumps any concern for literal accuracy—indeed, it seems not to need accuracy to be successful. He is in a rush because he is trying to preserve. But pickling—by virtue of its spice—will always distort, regardless of the haste with which it is performed. Saleem does not apologize for the flavor he adds, though; he defends his methods as follows: In the spice bases, I reconcile myself to the inevitable distortions of the pickling process. To pickle is to give immortality … a certain alteration, a slight intensification of taste, is a small matter, surely? The art is to change the flavor in degree, but not in kind; … to give it shape and form—that is to say, meaning. … They may be too strong for some palates, their smell may be overpowering, tears may rise to eyes; I hope nevertheless that it will be possible to say of them that they possess the authentic taste of truth. (531)
Saleem suggests that one must intensify in order to preserve a memory. “Tears may rise to eyes”—melodrama may result—but that does not make his stories any less truthful, he says. That he does not change reality “in kind” is dubious, but Bollywood cinema works on the same assumption:

Amplification and polarization—and their inevitable by-product, melodrama—are also part and parcel of the characters who inhabit orally inscribed narrative. Colorless personalities—characters who are quiet, still, delicately nuanced—cannot survive in such a world. They must, like the mnemonic phrases of an oral epic, be organized into some kind of form that will render them permanently memorable. Hence Bollywood is populated by one-dimensional, oversized, and inflated personalities … They are big, they are brash, they are epic. (Nayar 20)

Preserving requires generating shape and meaning from a reality that seems to lack both. Thus Midnight’s Children relies on a profusion of types, from the orthodox Padma to the forward-looking Aadam to the aboriginal Tai. So that they can constitute an epic of India, these personalities are often distended and “inflated,” just like the novel’s narrative forms and plot events—each escapade, each pickle jar bursts with spice and color: “Melodrama piling upon melodrama; life acquiring the coloring of a Bombay talkie; snakes following ladders, ladders succeeding snakes; in the midst of too much incident, Baby Saleem fell ill” (Rushdie 168). As Saleem’s brush with typhoid shows, those who make up the heart of India suffer from an “overdose of reality” (415). Nayar suggests that by “speaking to an audience’s desire for escape from a much less fanciful existence” (21), preservation by intensification also serves Saleem’s miasmic “flight into the safety of dreams” (Rushdie 415).

Of course, Bollywood’s spice makes it too simple for many modern viewers. Hindi films advance a stark moral dualism, in part because heavily polarized characters are easier to remember than the nuanced psychological conflicts we prefer in Western film (Nayar 21). Saleem’s uncle Hanif, a Western-minded realist, “was fond of railing against princes and demons, gods and heroes, against, in fact, the entire iconography of the Bombay film; in the temple of illusions, he had become the high priest of reality” (Rushdie 279). But because “the ambiguities of existence, the nuances of the psychological self, the grayness of the moral universe, the ordinariness of human life—all those characteristics to which texts circumscribed by literacy are so rigorously devoted” (Nayar 21) do not lend themselves to memory and survival, literate thinkers will forever call Bollywood a temple of illusions. Saleem would disagree, however, because he says Bollywood audiences are closer to reality than anyone else:

Suppose yourself in a large cinema, sitting at first in the back row, and gradually moving up, row by row, until your nose is almost pressed against the screen. Gradually the stars’ faces dissolve into dancing grain; tiny details assume grotesque proportions; the illusion dissolves—or rather, it becomes clear that the illusion itself is reality. (189)

In this model, the back row is the comfortable detachment of people like Hanif and the Western critic. As one gets closer, though, reality becomes distortion—what was elegant from afar is now distended and fantastical. One such scene is Amina’s clandestine encounter with Lal Qasim. From afar, a man and woman are having a banal, casual meal together.
But “through the dirty, square, glassy cinema-screen of the Pioneer Café’s window,” Saleem found something profuse. He uses—perhaps because he must—cinematic yet sensational terminology to describe what he sees:

Unable to look into my mother’s face, I concentrated on the cigarette-packet, cutting from a two-shot of lovers to this extreme close-up of nicotine. But now hands enter the frame … hands outstretching tensing quivering demanding to be—but always at last jerking back, fingertips avoiding fingertips, because what I’m watching here on my dirty glass cinema-screen is, after all, an Indian movie, in which physical contact is forbidden lest it corrupt the watching flower of Indian youth … I left the movie before the end, to slip back into the boot of the unpolished unwatched Rover, wishing I hadn’t gone to see it, unable to resist wanting to watch it all over again. (248-9)

Through the magnifying lens (funhouse mirror?) of Bollywood film, their meeting is magical. It is highly intense and highly dramatic, but it also serves to preserve: the encounter upholds the strict Indian moral code by preventing physical contact outside of wedlock. For contrast, consider the praise a magazine lavishes upon Commander Sabarmati when he shoots his adulterous wife and kills her lover: “the noble sentiments of the Ramayana combine with the cheap melodrama of the Bombay talkie, but as for the chief protagonist, all agree on his upstandingness” (Rushdie 301). Like this story, Hindi cinema justifies gratuitous violence when it helps preserve Indian tradition. “Hindi films, like the oral epics that preceded them, manipulate public stories” (Nayar 18) like the Ramayana to give their audiences the pleasure of commonly held morality, culture, and identity. Whereas for literate audiences “long-term exposure to print has engendered an anxious need to be original, to shun clichés” (Nayar 18), clichés and allusions to public stories, as memorable and commonly held repositories of wisdom, help Bollywood cinema attain its purposes. They consolidate and reinforce pre-existing wisdom into national ideals.

One must wonder, though, against what change Bollywood wants to defend India. Preservation is a struggle against time—against modernity—a constant threat to the Indian way of life. As we can infer from the age of India’s epics, the nation has struggled against modernity for quite some time. Saleem shows how even the Hindu calendar reflects a long-standing fear of time:

That inescapable date is no more than one fleeting instant in the Age of Darkness, Kali-Yuga, in which the cow of morality has been reduced to standing, teeteringly, on a single leg! Kali-Yuga—the losing throw in our national dice-game; the worst of everything; the age when property gives a man rank, when wealth is equated with virtue, when passion becomes the sole bond between men and women, when falsehood brings success (is it any wonder that I too have been confused about good and evil?) … will last a mere 432,000 years! (223)

Time—at least for the lengthy Kali-Yuga—brings about the erosion of classical Indian values; even more fundamentally, “the corruption of the clocks” (37) obscures the distinction between good and evil. In the face of such a threat, Indian cinema’s reliance on classical morality and “Manichean” ethics is intentional. And because fearing modernity is so fundamental to Indian culture, only an aesthetic that affirms the classical can triumph and sustain India.

Like filmmaking, though, preserving culture is not inaction. The task of India is generative—to preserve is to create. So Saleem’s pickles have new, spicy flavors, and Indian cinema is dreamlike and fantastical. Dreams, Saleem says, are in fact the condition of India’s existence as a nation:
When India achieved independence, it transformed into a mythical land, a country which would never exist except by the efforts of a phenomenal collective will—except in a dream we all agreed to dream; … India, the new myth—a collective fiction in which anything was possible, a fable rivaled only by the other two mighty fantasies: money and God. I have been, in my time, the living proof of the fabulous nature of this collective dream. (Rushdie 124)

Perhaps because India arose as a modern nation-state only after the irruption of the British, that nation-state must now be perpetually dreamed into existence. Modernity, in the form of Indira Gandhi’s government, sought to deprive Saleem of his right to produce and continue himself by sterilizing him. But what the government could not do was deprive him of his ability to imagine. This is the generative ability with which Saleem is left. The potential of sterility to produce is a hallmark of Bollywood’s “indirect kiss,” and Saleem recognizes its power: “how much more sophisticated a notion it was than anything in our current cinema; how pregnant with longing and eroticism!” (Rushdie 162). How can the absence of romantic contact be “pregnant”? Precisely because, Saleem suggests, it gives birth to imagination. Saleem explains the odd connection between pickling and fertility at the novel’s conclusion:

Symbolic value of the pickling process: all the six hundred million eggs which gave birth to the population of India could fit within a single, standard-sized pickle-jar; six hundred million spermatozoa could be lifted on a single spoon. (529)

Pickling works not by hermetically sealing or refrigerating its contents so that no life remains within the jar. Pickling means overwhelming the intrusion of the foreign with the jar’s own spice and intensity. In that sense, even though pickling prevents change, it does not mean sterility—the process that has continually given birth to the identity of India is procreative. Sterilization campaigns, Saleem suggests, cannot take that power away.

The concept of pickling, then, is optimistic at its core. And we know that Bollywood movies tend to “swerve into a happy ending” (Nayar 21). Nayar says:

This kind of storytelling is, to literate minds, profoundly ahistorical, exhibiting a tendency to fly in the face of “realism,” to revert to fantasy endings. … Each tale in the telling must be a repository of the past, and “a resource for renewing awareness of present existence”—then Hindi film’s sameness, its repetition, its aforementioned telescoping of temporalities, makes complete sense. (18)

The task of the storyteller, of Saleem Sinai as well as the Bollywood director, is to package the truths of the past into concoctions that will be interesting and enjoyable today. Saleem’s story seems to be headed toward a Bollywood ending (“I shall reach my birthday, thirty-one today, and no doubt a marriage will take place”), but then “the crowd, the dense crowd, the crowd without boundaries … will make progress impossible,” and Saleem and the midnight’s children are “sucked into the annihilating whirlpool of the multitudes” (Rushdie 532-3). That is quite an un-Bollywood conclusion.

Bollywood cinema is a hybrid, to be sure, between Indian oral tradition and the visual flash of modern cinema. But Midnight’s Children is a hybrid of hybrids: it is an oral legend that draws on cinema and is ultimately packed into 533 pages of exquisite English prose. “Midnight’s children can be made to represent many things, according to your point of view; they can be seen as the last throw of everything antiquated and retrogressive in our myth-ridden nation … or as the true hope of freedom” (Rushdie 230). So at the novel’s close, after feeling Saleem’s and India’s pain for so long, will readers be satisfied with a Bollywood ending?
Bollywood’s audiences are not typically the readers of Booker Prize-winning postmodern fiction; for those readers, are a birthday and a wedding a satisfying conclusion? That audience expects not spice and dreams but a superb work of English-language literature. So perhaps the story is crushed under the burden of its own hybridity, the weight of telling a story that is authentically Indian to an audience that is not.
Works Cited
1. Chatman, Seymour. “What Novels Can Do That Films Can’t (and Vice Versa).” Critical Inquiry 7.1, On Narrative (1980): 121-40. <http://www.jstor.org/stable/1343179>.
2. Nayar, Sheila J. “Invisible Representation: The Oral Contours of a National Popular Cinema.” Film Quarterly 57.3 (2004): 13-23. <http://www.jstor.org/stable/3185938>.
3. Rushdie, Salman. Midnight’s Children. New York: Random House Trade Paperbacks, 2006.