Ampersand III



Ampersand

Journal of the Bachelor of Arts and Science
Volume III, 2010

McGill University
Montreal, Quebec, Canada
www.mcgillbasic.com


Ampersand

Journal of the Bachelor of Arts and Science
Volume III, 2010

Editor-in-Chief
Kathleen Gollner

Editorial Board
Ian King
Jennifer Leavitt
Anna O’Kelly
Elena Ponte
Jonah Schermbrucker
Qurrat-ul-ain (Annie) Tayyab
Kartiga Thiyagarajah

Design and Layout
Emily Coffey
Anastassia Dokova
Darren Haber
Robert Rabie
Qurrat-ul-ain (Annie) Tayyab
Michael Tong

Ampersand is supported by the Bachelor of Arts and Science Integrative Council, the Arts Undergraduate Society, the Dean of Arts Fund, and SSMU Campus Life Fund.

The moral rights of the authors have been asserted

Printed on 100% recycled paper




Contents

From the Editors

About the Contributors

Introduction

Entomological Warfare: Insects as Weapons of War
David Whyte

Too Precious to Lose: Crimes Against Coral
Molly Krishtalka

Diagrams, Proofs, and Diagrammatic Proofs
Andy Yu

Music: Stimulating emotional, neurological and physiological reactions
Catherine Roquet, translated from French by Elena Ponte

Land Use, Development, and Livelihood in the Peruvian Amazon
Nicholas Moreau

The Black Death in Europe
Rachel Li

The Safety and Efficacy of Prozac from 1987-2008
Yael Smiley

Nip, Tuck: The Medicalization and Demedicalization of Plastic Surgeries
Todd Plummer

The Trees are Dancing
Kyle Teixeira-Martins

The Etiology of Schizophrenia: A Gene-Environment Interaction Explanation
Laura Hickey


On the Necessity of Integration

One root, one trunk, one foundation supreme,
Links the many leaves that call themselves “I;”
Dispersing wildly as an autumnal stream
That forgets its source in pursuing the sky.
Numerous masks conceal a common face,
Echoing madness from a single sound;
The pursuit of man is an endless chase
For the trivial – missing the profound.
All disciplines from one source derive,
Like many branches of a single tree;
Nothing but honeycombs of the great hive
Of knowledge and human inquiry.
“Many are we” the lost leaves blindly state;
Common are your roots. Why not integrate?

Ian Gerald King




From the Editors

The initiation of the B.A. & Sc. program in September 2004 provided students with a means to unite the study of the arts and the sciences. Though integrative discourse has a long tradition in academia, discussion during the last century has been nearly muted. The trend towards specialized fields, with strict subject demarcations and stringent course outlines, effectively polarized the arts and the sciences. Fortunately, the subsequent silence was not bound to last. In January 2004, Professor Morton Mendelsohn1 commended the combined arts and science program as an opportunity “to achieve a diverse knowledge base, to gain competence in different methods of scholarship, to hone intellectual flexibility, and to integrate material across disciplines.” These, the merits of integrative study, are increasingly recognized and promoted. Our small but maturing Ampersand library, with issues one through three, is an illustration of this.

The featured papers in Ampersand realize the strengths of integrative study. There are no limits to the knowledge our peers are willing to integrate: from efforts to contrast views between disciplines, as in a comparison of biological and sociological approaches to schizophrenia, to explorations of topics where the demarcation blurs, as in an account of the plague within the history of medicine. They negotiate combinations requiring different methodologies and, in so doing, their research attains either original insight or thorough review. Consider our introductory article: it features a McGill student, Graham McDowell, who uses methods from sociology and geography to conduct original research on climate change. Throughout the journal, our contributors combine disciplines to provoke thought and to reconsider conclusions, emphasizing achievement in integrative study.

In our efforts to assemble Ampersand, we consistently challenge each other to consider and then reconsider the nature and the merits of integrative study. We hope our discussion promotes yours. We hope our featured papers lend to your consideration of how the arts and sciences, together, inspire provocative and meaningful achievements.

The Editors

1  Associate Dean, Academic and Student Affairs, in the Faculty of Science (at time of announcement)



About the Contributors

Laura Hickey is a U3 student in the Faculty of Arts and Science, currently pursuing a double major in Biomedical Sciences and Sociology. This essay was written in SOCI 508: Social Psychiatry and Medical Sociology. Laura cites an interest in schizophrenia stemming from a friend’s mother’s diagnosis with the disease. Laura is coordinator of the SSMU Best Buddies club, volunteers at the Royal Victoria Hospital, and plays intramural soccer.

Molly Krishtalka is currently completing her Bachelor of Arts in Honours International Development Studies with a minor in Hispanic Languages. Her academic interests include Arctic sovereignty and environmental security. After graduation, she plans to pursue a degree in law. Her paper was written for ECON 318: The Criminal Economy.

Rachel Li decided to study at McGill in order to take advantage of its rich selection of courses and majors. She values the accumulation of (occasionally arbitrary) pieces of knowledge and loves learning about anything that catches her fancy. She is currently in the U1 year of her B.A. & Sc. program.

Graham McDowell is in his U2 year at McGill, where he is pursuing an Honours degree in Geography and a minor in International Development Studies. He works for Dr. James D. Ford as a research assistant for the Iqaluit Land-Use Mapping Project. This summer he will be traveling to Nepal, where he will be conducting research for his Honours thesis: “The Human Dimensions of Climate Change in the Nepal Himalaya”. In addition to his academic interests, Graham has worked professionally as an ice climbing guide, and in the course of his travels he has climbed in every major mountain range in North America.

Nicholas Moreau is a U3 B.A. & Sc. student pursuing an Honours in Environmental Studies with a minor in Anthropology. He is a member of the McGill men’s lacrosse team, but his interests also include collecting (and watching) VHS tapes, reading, fiddling with photography, and working with his hands. His paper on land usage in the Peruvian Amazon was written for HIST 366: Topics in Latin American History, on the environmental history of Latin America, taught by Professor Daviken Studnicki-Gizbert.

Todd Plummer is a U2 English Literature student. His minor concentration in Social Studies of Medicine has allowed him to explore his fascination with the intersection of the humanities and the sciences. When Todd is not busy working on the executive of the Department of English Students Association or practicing with the Varsity Alpine Ski Team, he writes for both his own blog and Leacock’s magazine. He is a classically trained pianist, avid reader, and Eagle Scout. Todd ultimately plans to pursue a career in magazine editorial.

Catherine Roquet is a U3 B.Sc. student in Psychology. She is an Arts and Music school girl gone indie. She hopes to bring her love of music into her science and research, to understand every detail of her obsession. She also wishes to bring music to others and has always been fascinated by music therapy, on which she also writes. She hopes to continue her studies in auditory neurocognition (music perception) at the graduate level and possibly become a better pianist.

Yael Smiley is in U3, studying Anatomy and Cell Biology and Social Studies of Medicine. Her paper was written for Professor Andrea Tone’s class on the history of psychiatry. She is an advocate for Students Supporting Disabilities, McGill Health Promotions, and Lois and Clark: The New Adventures of Superman.

Kyle Teixeira-Martins is a U2 B.A. & Sc. student majoring in Interfaculty Environment and minoring in Drama and Theatre. He is a native of the Island of Montreal, currently living in the West Island city of Beaconsfield as well as the Domaine Vert in Mirabel, QC. This essay was written as a take-home exam for BIOL 355: Trees Ecology and Evolution, where students were asked to discuss the most puzzling aspect of trees.

Andy Yu hails from Hong Kong and is pursuing a Bachelor of Arts at McGill with an Honours in Philosophy and minors in Mathematics and Economics. “Diagrams, Proofs, and Diagrammatic Proofs” was written for Dirk Schlimm and Brendan Gillon in PHIL 481: Language, Symbols, and Thought. Andy’s interests range from logic and the philosophy of mathematics to law and behavioral economics. Last summer, he participated in a summer school program in logic and formal epistemology at Carnegie Mellon. This summer, he will be working and traveling in Ghana. After graduation, he hopes to pursue a master of science in logic as well as a law degree.

David Whyte is a fourth-year Arts and Science student pursuing dual majors in Biology and Humanistic Studies. As a member of the Bachelor of Arts and Science Integrative Council, acting as a representative for third- and fourth-year students, he takes a special pride in being able to contribute to Ampersand. He maintains an especially strong interest in animal biology and has a newfound interest in insect biology since taking the course for which his essay was originally written. This being said, David has always had a sympathy for insects and always apologizes before moving bugs out of his apartment.



Introduction

Integrative research is a prerequisite for understanding and addressing some of today’s most pressing issues. But much to the disservice of intellectual curiosity––and ultimately to the communities whom we academics are obliged to serve––such work has too often been ignored due to the division between the arts and the sciences. This is not to deride either: both domains have made essential contributions to knowledge. But insofar as their division has precluded inquiry into the swath of terrain between them, the division must be seen as démodé. Integrative research explores the middle ground; it is about recognizing unasked questions, seeking new perspectives, and developing progressive approaches to emerging topics.

As an undergraduate researcher working with Dr. James D. Ford, I have had the unique opportunity to explore the merits of integrative research. Through our research in the Canadian Arctic, we are working to understand how climate change––and its interaction with social, economic, and political change––is affecting Inuit hunters in the Iqaluit region of Baffin Island. When on Baffin Island, I accompany hunters “on the land,” where I interpret, document, and discuss the climatically induced hazards, like unstable sea ice and whiteout conditions, as they are encountered. I also interview hunters about non-climatic factors affecting their land-use decisions; high gas prices and time constraints are frequently mentioned.

In a rapidly changing Arctic, understanding the effects of climate change has required not only an assessment of risks directly associated with environmental changes but also the explicit investigation of how concurrent changes like economic globalization and political integration are changing Inuit culture. Understanding climate change through the lens of cultural change is the type of “progressive approach” made possible when disciplinary blinders are removed.

Our research draws on scientific and Inuit knowledge to understand the multifaceted nature of conditions faced by Inuit hunters. For example, the hunters carry global positioning systems (GPS), which provide empirical data about their land-use. This data is correlated with information from the hunters about landscape and biotic anomalies as well as other social, economic, and political factors that may have affected their land-use. The resulting maps have greatly expanded our understanding of how land-use is changing, and at what cost culturally and economically. In addition, the maps have been integrated with Google Earth to improve land-use visualization and to facilitate knowledge sharing about how change in the Arctic is affecting Inuit hunters.

From our work, it is clear that while climate change is increasing the occurrence and severity of a number of land-use hazards, parallel cultural change has led to an unequal pattern of vulnerability to amplified hazards. For example, hunters with strong traditional knowledge and access to economic means are relatively less affected; these hunters are very familiar with the local environment and are more capable of sensing dangerous situations. They are also able to afford adaptive responses like boats for the lengthening open-water hunting season. However, those less well-equipped—young hunters and economically or socially marginalized hunters—are being tangibly impacted. Challenges faced by affected hunters are propagated through the community, especially in terms of food security.

A necessary condition for deriving these conclusions has been the willingness to look beyond disciplinary boundaries. Climate change, which is commonly regarded as the domain of the physical sciences, has proven to be a phenomenon whose impacts cannot be meaningfully understood within a purely scientific framework. Likewise, cultural change in Inuit communities cannot be sufficiently examined without considering how climate change is affecting traditional livelihoods. In my own research, then, insights from the arts and sciences have provided the integrative framework necessary for understanding how global change is affecting Inuit hunters in the Canadian Arctic.

But mine is only one of many examples where McGill students are asking questions that do not fit within the discourse of the arts or sciences alone: questions that explore the middle ground. The thought-provoking contributions to this volume of Ampersand make the vitality of McGill’s integrative research community abundantly clear. Take for example Rachel Li’s The Black Death in Europe, which traces how the European experience with the plague influenced Western views of medicine, disease, and death; or Nick Moreau’s paper on the Peruvian Amazon, which investigates what can be learned from indigenous practices of sustainable agriculture in Peru. David Whyte’s Entomological Warfare: Insects as Weapons of War reveals how insect biology has been manipulated by humans in warfare, while Yael Smiley’s paper on Prozac critically examines the role of the anti-depressant in shaping North Americans’ conceptions of and relations to depression. These authors––and all of those contributing to this volume of Ampersand––demonstrate the possibilities of integrative research as well as the fruitful and often unexpected discoveries enabled by it.

As we embark headlong into the twenty-first century, removing artificial barriers to inquiry is imperative. This necessity is made clear by the range of complex issues young scholars are now called upon to address. That we are willing and able to respond is evident in this third volume of Ampersand.

Graham McDowell
Iqaluit Land-Use Mapping Project
Department of Geography
McGill University


Entomological Warfare: Insects as Weapons of War
David Whyte

The very nature of insects, that is, their inherent attributes and instinctual behaviours, makes them ideal weapons of war. Considering social insects (such as bees, wasps, and hornets) and non-social insects (specifically the Paederus, Diamphidia, and blister beetles), David Whyte explicates the roles of insects in warfare. He traces how they were gathered, transported, and then used as offensive or defensive weapons, vectors of disease, destroyers of crops, and implements of torture.

Following the domestication of animals, which took place roughly ten thousand years ago, humans quickly discovered that many large mammals could be used in warfare. Horses, camels, dogs, elephants and, more recently, dolphins have all been used by humans to gain advantage on the battlefield. Although the military exploits of these species are, historically, the most well known, the first organisms that humans implemented in military disputes were not mammals at all, but insects. The first use of biological warfare is hypothesized to have occurred as early as a hundred thousand years ago, in the Upper Paleolithic period (Lockwood, 2009). The earliest uses of insects in this manner employed them both as weapons to attack enemies directly and as defence mechanisms. Later, as human knowledge accumulated and evolved, more sophisticated tactics were devised that involved insects being used to transmit disease and destroy crops. To this day insects are used in a number of ways, and entomological research of a military nature continues to push the use of insects in warfare and defence to its limits (Lockwood, 2009).




Insects as the Perfect Weapon

The Nature and Benefits of Social Insects

Social insects like bees, ants, wasps, and termites have been revered by military strategists since the time of the Egyptians—if not earlier—as revealed by hieroglyphic evidence (Lockwood, 2009). These insect societies, with their complex social orders, were considered models of ideal armies and “became the earliest zoological conscripts of warring peoples” (Lockwood, 2009, p.9). Unlike some animals with more famous military legacies, such as the horse, social insects are not likely to desert battle in the aim of self-preservation. When faced with danger, these insects certainly do act to preserve their genes—but in a violent way that doubly serves as a method of attack. Female worker bees, for example, are sterile sisters; their ability to pass on their genetics lies with their mother, the queen. As a result, the worker bees will defend the queen to the death, even though the act of stinging brings about their own demise (Lockwood, 2009).

The way that some insects instinctively act to defend themselves can be paralleled to modern-day weapon technology. Therefore, it is only logical to consider using insects as weapons themselves. An example of this parallel can be found within the bombardier beetle, whose defensive techniques mirror American binary weapons and munitions such as nerve gas stores. Binary weapons have two moderately harmless chemicals stored in partitioned compartments, which during launch are combined to create an incredibly lethal substance. The exact same conditions exist for the bombardier beetle, which has two separate glands at the end of its abdomen, each of which has two different chambers: one of hydrogen peroxide and a phenol mixture, and one of enzymes. When the beetle goes into defence or attack mode, it “opens a valve between the two chambers, resulting in a toxic mixture,” which it excretes at its target (Cole, 1998).

Moreover, the way that humans used and continue to use insects is distinct from the way humans use other organisms in warfare practices. Fundamentally, this distinction rests on the fact that humans were able to utilize insects as weapons without expending much energy in training them. This was certainly not the case for animals like dogs and horses, which needed to undergo extensive training before their roles as guard dogs or battle horses could be properly integrated into a military company. Although precautions were in fact taken and practices were certainly developed to control insects before their release on an enemy, insects could be effective “simply by doing what they could do naturally” (Mayor, 2004, p.176). Considering their incredibly small size, insects possess the ability to create a relatively large amount of “damage and chaos far beyond their bodily dimensions” (Mayor, 2004, p.176). Arguably, their very nature can be considered warrior-like and militaristic, as many are equipped with “sharp stingers, chemical poisons” and an innate predisposition to both defend and attack (Mayor, 2004). All of these characteristics make insects an extremely valuable tool in warfare.

Symbolism and Stigmatization

The tenacious nature that many insects possess has allowed them to be used not only physically but also symbolically by several cultures to invoke fear or respect in enemies. For example, the Egyptian King Menes, also known as the ‘Scorpion King’, used the hornet as the symbol of his rule, likely to represent the pain he could inflict upon his opponents (Lockwood, 2009). Insects have had the ability to instil fear in people for a long time, and this stigma continues to exist today. The Bible provides a wealth of information concerning entomological warfare, most notably in the narrative of the Ten Plagues, in six of which insects play a key role. The Bible demonstrates the long-standing fear people generally have of insects; a “fear factor […] put to symbolic military use among the ancient Greeks, who painted scorpion emblems on their shields to frighten foes” (Mayor, 2004, p.182). The Roman Praetorian Guard, who acted as the emperor’s personal guards, had a scorpion as its official insignia (Mayor, 2004). This symbolic motif continues today, as the US military uses weapons named “scorpion, stinger, [and] hornet” in an attempt to both increase soldier morale and “inspire fear among the enemy” (Mayor, 2004, p.183).

Risks and Precautionary Practices

Weapons are most effectively used when employed with a sense of purpose and direction. In warfare, humans needed to ensure that they would not “[become] a victim of [their] own weapon” (Lockwood, 2009, p.11). Therefore, the integration of insects into a war machine required the development of several precautionary measures and practices. The dangers and hazards associated with using insects were endless, making these methods not only valuable but extremely necessary (Mayor, 2004).

Gathering & Transport

Early humans are thought to have “gathered the insects at night when [the insects] are slowed by cooler temperatures and unable to see their abductor’s approach” (Lockwood, 2009, p.11). Since the early Neolithic period, smoke had been used, often by shamans (Mayor, 2004), as a tool for bee control (Lockwood, 2009). These shamans also used toxic dust to control the creatures (Mayor, 2004). Essentially, the smoke or dust would pacify the bees and prevent any “misdirected stings” (Mayor, 2004, p.180). After the insects were gathered, the next step was to transport them. In transporting entire hives or nests, there existed a huge danger of “premature explosion” (Mayor, 2004, p.179). Therefore, any kind of stinging insects “had to be kept peacefully in their nest before the ammunition was used against the foe” (Mayor, 2004, p.179). To do this, it is believed that people would seal the openings with grass or mud, or place the hives or nests into some sort of vehicle, such as a sack or a basket (Lockwood, 2009). Pots were also used to transport bee nests, and sometimes bees were even made to colonize special containers specifically intended for transport (Mayor, 2004).

Insect Usage

Finally, risk persisted in the actual use of these insects as weapons, and practices were also developed to mitigate these dangers. An example of such a risk is the potential for ‘blow-back’ (or backfire) involved in the use of beehives as bombs (Mayor, 2004). To minimize this potential, beehive bombs were thrown forcefully but carefully at the enemy so that the nest would burst, “[releasing] hundreds of very nervous hornets, [bees, or wasps] on the target” (Mayor, 2004, p.179). Some armies even developed special instruments for transporting and propelling beehive bombs. For example, the Tiv people, an ethnic nation of West Africa, “kept their bees in special large horns, which also contained a toxic powder” (Mayor, 2004, p.179). The horn’s shape and length directed the bees towards the enemy, while the toxic powder both intensified the bees’ venom and calmed them while in the horn (Mayor, 2004). Essentially, the horn was a “bee cannon” (Lockwood, 2009). As technology advanced, beehive bombs were frequently used during war, and advancements allowed for improved risk management. One of these technological advancements was the catapult, which constituted “a very effective delivery system for launching… hornets’ nests while avoiding collateral damage” (Mayor, 2004, p.180). What is more, soldiers of World War I would “set up hives with trip wires along the enemy’s route,” thus avoiding the risks of propelling the bees altogether (Mayor, 2004).

Insects as Weapons

Historical evidence has demonstrated that insects of all kinds “were important military agents in tactics of ambush, guerrilla raids and flushing out primitive strongholds” (Mayor, 2004, p.178). Much of this evidence is found in ancient Hebrew and Arabic sources (Mayor, 2004). As these descriptions could be about any of the many poisonous insects endemic to the Near East, contemporary entomologists have done their best to interpret these sources carefully (Mayor, 2004). Although some records have been confirmed to describe specific insect species, the majority offer little more than educated speculation.

Beetles

The Paederus Beetle

In these Hebrew and Arabic sources, there are specific references to “hordes of unidentified flying insects that were summoned to attack [enemy eyes] with acrid poison fluids, blinding or killing them” (Mayor, 2004, p.178). Some scientists believe that these sources are describing the gadfly, or eye fly. The majority, however, believe poisonous beetles to be the insect in question. In regards to sources that describe insects used to attack the eyes of the enemy with poison, entomologists believe that this evidence refers specifically to the Paederus beetle (Mayor, 2004, p.178). The Paederus beetle belongs to the Staphylinidae family of rove beetles (Mayor, 2004). Approximately an inch in length, these insects can be classified as “predatory flying insects” (Mayor, 2004, p.73). Paederus beetles excrete a very potent poisonous liquid called pederin, which upon contact results in “suppurating sores and blindness” (Mayor, 2004, p.178). Pederin is found within the hemolymph, the liquid analogous to blood in beetles, and is one of the most effective animal toxins in existence—even more fatal than cobra venom (Mayor, 2004). It is believed that pederin was collected from the beetles to fashion poison arrows (Mayor, 2004). Therefore, it seems that these beetles were used both directly and indirectly as war weapons: as soldiers themselves and in weapon production. The story of the dikairon bird in India, whose toxic droppings were used to make a lethal poison, is now thought to have been about the Paederus beetle, which is similar in description and often lives in birds’ nests. The dikairon poison was used by the Kings of India and Persia in their endless political exploits (Mayor, 2004).

The Diamphidia Beetle

A beetle which is still used as a weapon of war is the Diamphidia beetle, whose larvae the San Bushmen of the Kalahari Desert use to make poison arrowheads (Mayor, 2004). This is done by creating a poison out of the “Larvae of Diamphidia simplex Paringuey” (p.52), which some consider to be toxic as a result of microorganism growth within the decomposing larvae (Hall, 1927). Even though some argue that these arrows actually result in infection rather than direct poisoning, the fact that these arrows are ultimately fatal is nevertheless evident (Hall, 1927).

The Blister Beetle

Other species of beetles that were used for their poison during ancient times include the blister and Staphylinus beetles, which excrete poisons powerful enough to kill animals as big as cattle (Mayor, 2004). Blister beetles are known to reflex-bleed, which means that any slight physical agitation results in a release of blood from their knee joints (Eisner, 2003). This blood contains cantharidin, a toxic compound that is incredibly lethal to humans (Eisner, 2003). In fact, as little as 100 milligrams is sufficient to kill a person (Eisner, 2003). These beetles, as well as other poisonous beetles, are considered to have been used in antiquity in the making of insect bombs. Insect bombs were earthenware jugs filled with these poisonous beetles, wasps, scorpions, and other noxious insects, which were thrown at the enemy (Mayor, 2004). Employing these insect bombs was a very effective battle tactic: regardless of how many enemy troops were actually stung by the insects, the terror that they would incite had an undeniably distracting and overall negative impact on the battle morale and performance of the enemy forces (Mayor, 2004).

Bees, Wasps, and Hornets

An Offensive Weapon

As previously mentioned, bees were also widely used as weapons of war. In fact, they were used so frequently that the historian Ambrose has hypothesized that the “Romans’ extensive use of bees in warfare” may have been the cause of the decline in the number of beehives within the late Roman Empire (Mayor, 2004). The use of bees in warfare is both direct and indirect, as they are employed as stinging agents meant to both distract and injure and as the producers of toxic honey. Their direct use existed and continues to exist in the form of beehive bombs and booby traps. Remarkably, beehive bombs are considered to be one of the first types of projectile weaponry. The Mesopotamian scholar Edward Neufeld hypothesized that during Neolithic times, these so-called beehive bombs were simply hornets’ nests that were thrown towards enemies who were hiding in caves (Mayor, 2004). The use of beehive bombs continued to evolve over time and adapted to new technological advancements. For instance, in the eleventh century, Henry I used catapults to propel beehives over long distances at enemy troops. A more recent example involves the use of beehive bombs and catapults by the Hungarians in their war against Turkey in 1929 (Mayor, 2004).

A Defensive Weapon

In addition to being a valuable offensive tool, bees have also been very effectively utilized in defensive strategies (Mayor, 2004). In fact, all sorts of stinging insects were a major defensive strategy of forts during ancient times (Mayor, 2004). During the fourth century BC, Aeneas the Tactician wrote a how-to book entitled How to Survive under Siege, which stated that those under attack should discharge bees and wasps into tunnels dug under protective walls so that they might help stop the attackers (Mayor, 2004). In 72 BC, this strategy was used by King Mithridates of Pontus, who released bees, as well as other wild animals, into tunnels being invaded by the Romans (Mayor, 2004). In medieval times, the guards of a castle in Astipalaia, an Aegean island, thwarted attacking pirates by “dropping their beehives from the parapets” (Mayor, 2004, p.187). In 1642, during the Thirty Years’ War in Germany, soldiers were able to stop attacking Swedish knights by using beehive bombs, which, although unable to penetrate the knights’ armour, caused their horses to run wild (Mayor, 2004). In 1935, when Ethiopia was invaded by Mussolini, the Ethiopians dropped beehives onto Italian tanks, which scared the drivers and caused them to crash.

Using bees as booby traps to ward off trespassers constitutes another major defensive strategy. Evidence for the use of booby traps in antiquity can be found within sacred Mayan texts like the Popol Vuh (Mayor, 2004). These sacred descriptions explain how “dummy warriors outfitted in cloaks, spears, and shields were posted along the walls of the citadel” (Mayor, 2004, p.177). These warriors were constructed with what looked like war bonnets on top of their heads, which were in reality “large gourds filled with bees, wasps, and flies” (Mayor, 2004, p.177). If a trespasser attempted to climb the city walls, these gourds were broken, the insects were released, and the intruders “were sent stumbling and falling down the mountainside” (Mayor, 2004, p.177). Bee booby traps continued to be used into modern times; one example is the booby traps left for American soldiers by the Vietcong during the Vietnam War in the 1960s (Mayor, 2004). As a result of these attacks, the Pentagon created its own bee weapon to use against the Vietcong, which has its foundations in bees’ use of pheromones to “[mark] victims for a swarming attack” (Mayor, 2004, p.180). Interestingly enough, this weapon is still in development by the Pentagon (Mayor, 2004).




Toxic Honey

Bees were also used indirectly as weapons of war as a result of their honey production, which can result in a toxic honey supply. It can be said that the ancients used toxic honey in a way very similar to how poison gas is used today (Mayor, 2004). Essentially, toxic honey, because of its incredibly sweet taste, could easily be used as “a secret biological weapon to disable or kill enemies” (Mayor, 2004, p.147). In fact, throughout ancient times, military leaders would purposefully leave toxic honeycombs in abandoned camps, which they knew the enemy would eventually inhabit (Mayor, 2004). Soldiers were often unable to resist consuming the sweet honey in large quantities, which would then result in sickness and sometimes death (Mayor, 2004). Therefore, this honey would debilitate troops so that they could be defeated with less trouble (Mayor, 2004). Colchis, an area with an exceptionally large bee population, had abundant amounts of toxic honey, as the bees there would collect nectar from rhododendron blossoms, which are extremely poisonous (Mayor, 2004). Interestingly, bees are completely immune to the strong neurotoxins in rhododendrons and are thus able to produce such toxic honey without harm (Mayor, 2004). There were several instances in Colchis where groups of soldiers would set up camp and ingest this toxic honey (Mayor, 2004). Consequently, they were often unable to fight effectively in battle, and many died (Mayor, 2004). Theoretically then, the enemies’ consumption of this toxic honey would ensure victory in battles, making this honey a rather valuable weapon of war. Toxic honey was also used in the making of antidotes, which were extensively used in Europe throughout the Middle Ages and Renaissance. Obviously, these antidotes were very important to military commanders who used poison weapons (Mayor, 2004).

Use of Insects in Torture

As well as being used in combat, insects have been applied in practices of torture and interrogation that come as a result of war. The earliest people to use insects as torture devices were the Persians, who made use of a gruesome practice called sending the prisoner to “the boats” (Lockwood, 2009). A condemned individual was first force-fed milk and honey to induce severe diarrhoea and then stripped and tied to a hollow log. Next, he was smeared with honey and set adrift in a boat on a stagnant pond. The honey would attract a swarm of wasps that would sting the captive repeatedly. That was not the worst of it; flies would breed in the diarrhoea and lay eggs inside the victim’s anus and the flesh of their gangrenous body. The person would eventually succumb to septic shock and die (Lockwood, 2009). It is also reported that Native Americans made use of arthropods—particularly ants—to punish those who were deemed deserving of a painful death (Lockwood, 2009). Apaches would stake captives over anthills and smear honey on the eyes and lips of the victim, or simply hold their mouth open with sharp objects so that ants could enter (Lockwood, 2009). Employed by the emir of Bukhara, Uzbekistan, assassin bugs of the carnivorous family Reduviidae have also been used to torture jailed prisoners for information or as a punishment (Lockwood, 2009). Their curved piercing beak is reported to feel like a hot needle, and the toxic saliva and digestive enzymes they secrete cause festering sores. Using insects for the purpose of torture is a perfect example of how they not only inflict physical damage but also instil fear, thus “defeating both the bodies and the minds of one’s opponent” (Lockwood, 2009, p.16).

Insects as Vectors of Disease and Destroyers of Crops

Early uses of insects in warfare and torture were straightforward extrapolations of an individual’s encounters with them, common experiences of being bitten or stung. By harnessing this power, many factions have gained military advantage in a number of circumstances, but it was not until insects were used as agents of disease transmission that their full potential as assassins was unleashed. This is not a new occurrence, however; insects have brought disease onto the human battlefield for more than two thousand years, turning the tide of a number of historical battles (Lockwood, 2009). In the fifth century B.C., an invasion of Sicily by the Athenians was thwarted when the Athenian force was devastated by malaria. Historians are unsure of how the Sicilians drew the invaders into the marshlands infested with the vector of this disease, the mosquito, but this tactic turned the tide of the siege (Lockwood, 2009).

Clearly, the existing presence of insects in a landscape can be and has been harnessed to achieve victory, but insects have also been used to spread disease in a more directly controlled way. Catapults and trebuchets were used in medieval warfare to launch decomposing carcasses of animals, primarily bovines and horses, infested with disease-carrying insects into enemy strongholds or cities. With the advent of cannons, attention shifted to exploding ammunition; regardless, insect vectors carrying biological payloads played a decisive role in several conflicts in the eighteenth and nineteenth centuries (Lockwood, 2009). Military strategists also realized that crop destruction could weaken an enemy nation by reducing the food supplies or stockpiles necessary to feed their armed forces and by damaging that nation’s economy. In the Cold War, the Cuban government formally charged the U.S. with intentionally releasing a species called Thrips palmi and other insects over their agricultural land in order to damage their crops (Lockwood, 2009). In the Second World War, the Germans ran an entomological warfare program and stockpiled millions of Colorado potato beetles with the intention of releasing them into British potato fields. Whether or not these were released is unknown, but they were certainly cultivated by the Germans, as well as by the French, who had similar aspirations to use the beetles against the Germans (Lockwood, 2009).

Future Uses of Insects in Warfare and Conclusion

It should be noted that the use of insects in warfare is not merely a historical legacy of the past. Insects are still used today, and efforts to bring their military potential to fruition are pursued by governments and perhaps other militaristic factions all over the world. Entomological terrorism is a concept unfamiliar to most, but insects must be regarded, especially by western nations, as a possible tool of bioterrorists to attack both agriculture and people (Lockwood, 2009). It has been speculated that the reintroduction of certain pest species, such as the screw-worm (larva), to the U.S., where it was eradicated in 1966, could cost the nation’s government tens of millions of dollars (Lockwood, 2009). It has been stated that a particular beetle found in Brooklyn, New York, has the potential to cause $669 billion in damages, compared to the $27.2 billion in direct economic losses associated with the terrorist attacks in New York on September 11th, 2001 (Lockwood, 2009). Another current development in entomological military research is the creation of insect “cyborgs”: insects attached to and integrated with technology (Lockwood, 2009). The most prevalent use of such insect-machine hybrids thus far has employed them as enemy detection systems, monitoring the insects’ own detection systems and relaying that information back to military officials. It is clear that the future of entomological warfare is full of possibilities and, considering the fact that insects were used both extensively and effectively in ancient warfare, it is surprising that knowledge of its existence is not more mainstream. Ultimately, one can only hope that the future of insects in warfare will yield positive results by helping to protect people, as opposed to being used to spread fear and put human populations at risk.

References

Cole, L. A. (1998). The poison weapons taboo: Biology, culture, and policy. Politics and the Life Sciences, 17(2): 119-132.

Eisner, T. (2003). For Love of Insects. United States of America: President and Fellows of Harvard College.

Hall, I. C. (1927). A pharmaco-bacteriologic study of African poisoned arrows. The Journal of Infectious Diseases, 41(1): 51-69.

Lockwood, J. A. (2009). Six-Legged Soldiers: Using Insects as Weapons of War. New York, New York: Oxford University Press, Inc.

Mayor, A. (2004). Greek Fire, Poison Arrows and Scorpion Bombs. New York, New York: The Overlook Press, Peter Meyer Publishers, Inc.


Too Precious to Lose: Crimes Against Coral
Molly Krishtalka

Why is the world’s coral facing extinction? Why does it matter? Examining ecology through an economic lens, Molly Krishtalka investigates the threats to coral and discusses what must be done to avert a global catastrophe.

Like most issues of its ilk, the impending worldwide extinction of coral is entirely due to human civilization. Over the past two centuries, ever-increasing greenhouse gas emissions have severely damaged the atmosphere, raised temperatures worldwide, and acidified the oceans to levels too toxic for many marine organisms. Warmer, more acidic oceans cause coral to bleach and die, leaving swaths of dead zones where fish and other marine organisms can no longer live. Decreasing fish populations result in decreased catches for local fishermen, and thus an increased need to use higher-yielding but more destructive fishing techniques in order to maintain their livelihoods. Such high-yield, destructive fishing techniques strain the already vulnerable marine ecosystem, leading to overfishing and further coral destruction and death. As coral populations decrease, the effects of coral harvesting become more significant. This has led many countries and international bodies to ban or regulate the harvest and trade of coral, as well as the destructive fishing techniques. However, neither effective law enforcement nor programs to aid fishermen in realizing sustainable livelihoods accompany these bans and regulations. As a result, coral harvesting, coral trading, and destructive fishing are able to continue. As global warming worsens, the effects of these now-criminal activities on coral populations increase, leading to even fewer viable fish and coral stocks. Thus, the cycle is allowed to continue. By examining this cycle in detail and analyzing relevant cases, this paper will illustrate the criminal activities that have created this dire situation. A discussion of the future of coral, including both scientific predictions and potential solutions, will conclude the paper.

Coral: Not Your Average Animal

Despite common perceptions of coral as a plant, it is in fact an animal. Tiny coral polyps live in colonies and “derive nourishment and energy from a symbiotic relationship with zooxanthellae algae” (Butler, 2005). Here, a symbiotic relationship refers to one in which the existence of one species depends on the existence of the other, and vice versa. The algae use photosynthesis to process nutrients into energy. Coral reefs are formed by the slow build-up of the calcified skeletons of coral polyps. Coral reefs grow on average half an inch per year, and it can take a thousand years for a reef to add a metre to its height (Dean, 2005; Butler, 2005).

Coral species can be divided into two categories: hard and soft. Species of hard coral include stony coral, black coral, and red coral. The precious and semi-precious species of hard coral are generally used to make jewelry or tourist souvenirs. Less valuable hard coral species provide lime and construction materials for homes, roads, and sewage treatment (Debenham, year unknown). Additionally, many species of hard and soft coral are bought live by coral enthusiasts to be kept and displayed in home aquariums. Although it was widely believed for many years that coral only thrived in warm, shallow waters, in recent years scientists have discovered previously unknown coral species growing in extremely cold waters as deep as six thousand three hundred metres beneath the surface (Kirby, 2005). Although these corals are not yet common in the commercial coral trade, they face other dangers, including unsustainable fishing practices, oil and gas exploration and production, and waste disposal (Kirby, 2005). Some of the recently discovered colonies are up to eight thousand years old and contain species previously thought to be extinct. These corals have extremely slow growth and reproduction rates, meaning that it could take centuries for a colony to recover fully from a bottom-trawling episode, if at all (Kirby, 2005).

The Wide Use of Coral in Industry and Ecology

Coral reefs play extremely important roles both in sustaining the marine ecosystem and in providing humans with a major source of food protein, environmental protection, commerce, tourism, and pharmaceuticals. One-third of all marine life lives in coral reefs—approximately nine million species—and even more species depend on reef-dwelling organisms as sources of food (Debenham). Half a billion humans depend on species associated with coral reefs for their livelihoods and food. In terms of fishing alone, coral reefs are ten to one hundred times as productive per unit area as the open sea; in the Philippines, ten to fifteen percent of the revenue of the fishing industry comes from coral reefs alone. According to a survey conducted in twenty-seven coastal communities in Papua New Guinea and Indonesia, the majority of households in all the communities engage in fishing, which is the primary or secondary occupation of roughly half the respondents (Cinner, 2009).

However, coral reefs provide much more to humankind than just abundant, rich fishing grounds. Coral reefs are instrumental in “buffering adjacent shorelines from wave action, erosion, and the impact of storms” (Butler, 2005). Studies have shown that had the coral surrounding the Indonesian coast been healthy, the 2004 tsunami would have been half as destructive (Too Precious to Wear, 2009). When the Maldives was forced to build a concrete breakwater to replace a damaged coastal reef, it cost ten million US dollars (USD) per kilometre, whereas it costs just seven hundred seventy-five USD to protect a square kilometre of coral in marine parks (Doyle, 2006). Coral reefs are also key elements of the tourism industry in many equatorial and coastal countries. In the Caribbean, for instance, coral reefs near resorts are worth as much as one million USD (Doyle, 2006).

As mentioned before, coral has found many uses in the jewelry and tourism industries. For instance, black coral is a semi-precious hard coral most commonly found in the warm, shallow waters off the coast of Hawaii, although colonies have recently been found in the Mediterranean Sea (Richard, 2009). In Hawaii alone, scuba divers harvest roughly five thousand to six thousand pounds of raw black coral annually. The coral sells for between thirty and three hundred USD per pound and is used primarily in the jewelry and tourism industries (Lum, 2005). The seven coral species of the genus Corallium, also known as red or pink coral, are precious hard corals. Most commonly found in the Mediterranean Sea and the western Pacific Ocean, these corals are harvested primarily for use in jewelry. Corallium species comprise the most valuable coral jewelry market, with the raw coral worth up to nine hundred US dollars per kilogram and the finished coral worth up to twenty thousand US dollars per kilogram (Too Precious to Wear, 2009). Major exporting countries include China, Taiwan, Indonesia, and Italy; the United States of America is the largest importer, importing twenty-six million pieces between 2001 and 2006 (Too Precious to Wear, 2009). Due to the protracted overharvesting of Corallium species, the remaining colonies have lost considerable reproductive output and genetic diversity, thus making the species even more vulnerable to extinction through harvesting and destruction (Too Precious to Wear, 2009).

Additionally, coral and chemicals found in coral are increasingly used in pharmaceuticals. Pharmaceutical companies already use the chemicals in coral in AZT, a compound used to treat HIV; Curacin A and Bryostatin-1 are both anti-cancer drugs derived from coral (Too Precious to Wear, 2009). Bamboo corals and other porous corals have been used in orthopedic bone implants for many years as well. The current and projected value of coral to the pharmaceutical industry is one billion USD per year (Too Precious to Wear, 2009). Scientists and non-governmental organizations estimate the financial value of coral reefs to humankind at approximately four hundred billion USD per year. However, many view this number as far too low, arguing that “the true cost of losing coral reefs is incalculable”, as coral reefs are “dynamic life-support systems of the most concentrated biodiversity on our planet” (Heimbuch, 2009). Based on current trends in greenhouse gas emissions, these projected losses may become a reality in the near future.

Climate Change and Destructive Fishing Decimate Coral

The rapid and continuing climate change of the past century is caused by the release of massive volumes of greenhouse gases, notably methane and carbon dioxide, into the air and water. These gases are generated by human activities, such as the burning of fossil fuels, agriculture, and deforestation (Intergovernmental Panel on Climate Change, 2007). As greenhouse gas emissions increase, world temperatures rise, warming the oceans and changing their chemical balances, and in turn negatively affecting coral populations. Elevated water temperatures cause the tiny algae living on the coral to lose pigment, causing the coral to lose its color, bleach white, and die (Cayman Compass, 2009). Secondly, as the ocean becomes more acidic through higher levels of dissolved carbon dioxide, “the availability of the chemicals that corals and other animals need to build their limestone skeletons” changes. Hence, coral is prevented from “[secreting its] calcium carbonate [skeleton]”, making it “more vulnerable to storms and disturbances” (Debenham). A third effect involves the influence of warming oceans on other organisms. Populations of crown-of-thorns starfish, also known as coral-eating starfish, explode when ocean temperatures and nutrient levels rise (UnderwaterTimes News Service, 2007). One starfish can eat six square metres of coral reef in a year, and as overfishing has reduced the populations of crown-of-thorns starfish predators, these starfish regularly decimate coral colonies (UnderwaterTimes News Service, 2007). These three ecosystem impacts of warming oceans cause coral populations to become extremely stressed and self-destruct, withdrawing “to a smaller colony size in the hope that they can then survive to the next summer and then start to regrow” (Australian Broadcasting Corporation, 2007).

In turn, the reduction of coral reefs has a cascading impact on other marine organisms. The chevroned butterflyfish, historically a very stable species with numerous populations worldwide, is now becoming extremely rare due to the disappearance of its main food source, the coral Acropora hyacinthus (Science Alert, 2008). In the Great Barrier Reef, reductions in coral populations have led to corresponding reductions in the populations of sea turtles and dugongs, which have plummeted to twenty percent and three percent of their 1960s levels, respectively (Debenham, year unknown). More generally, numerous studies have shown that “mortality rates of juvenile fish are greatly reduced” when coral reefs are healthy and abundant; high levels of species diversity are also correlated with a dense distribution of coral (Too Precious to Wear, 2009).

Unfortunately, the negative effects extend beyond reef-dwelling organisms. As was mentioned above, coastal communities depend on coral reefs for their basic livelihoods, generally fishing. Each hectare of coral reef that is lost causes a corresponding loss of at least twenty tons of fish (UnderwaterTimes News Service, 2007). For subsistence fishermen, this loss is devastating. In the Philippines, local fishermen are experiencing such poor catches that they have resorted to dynamiting their coastal waters in attempts to salvage scrap metal from shipwrecks. This scrap metal sells for only four Filipino pesos per kilo—approximately nine Canadian cents—yet the fishermen regularly use a gallon of dynamite per trip; this serves as strong evidence of their extreme need and desperation (UnderwaterTimes News Service, 2007). However, other fishermen, also faced with decreasing yields, instead resort to higher-yielding but more destructive fishing techniques, such as cyanide poisoning, dynamiting, muro-ami, and bottom trawling.

Cyanide is generally used when the fishermen want to keep the fish alive: “divers squirt cyanide into the holes and crevices of a reef,” stunning the fish for easy capture (Haessy, 2008). Although the cyanide will kill the fish in three to five days, the fish remain alive long enough to be sold to chic restaurants and eaten by unsuspecting customers. The effects on the reefs, however, are both more destructive and more immediate. The cyanide poisons the tiny algae living on the coral polyps, disrupting photosynthesis and causing the coral to bleach and die (World Wildlife Fund and the University of Queensland, 2009).


Dynamiting is considerably more destructive, both to the fish and to the coral. Dynamite bombs are made out of beer bottles and thrown directly into the reefs. The subsequent explosion causes the reef-dwelling fish to float to the surface, dead and easy to collect, and leaves the reef completely destroyed—a one-kilogram bomb can easily destroy two square metres of reef (Haessy, 2008). Equally destructive is muro-ami, a practice involving the pounding and crushing of coral by divers in order to scare fish to the surface. Not only does this practice completely destroy entire sections of reef, but the act of destruction also stresses the surviving coral, causing it to lower its reproductive and growth rates (Dolan, 1991). The most destructive fishing technique is undoubtedly bottom trawling. This practice involves a fishing boat pulling a large net that drags along the ocean floor, crushes anything obstructing its path, and catches massive quantities of fish in the process. According to the World Wildlife Fund, a single trawl can destroy twenty-five percent of corals in the trawled area; bottom trawling “is the marine equivalent of clear cutting old-growth forests” (World Wildlife Fund, 2002). A single cable that was dragged through a Florida reef left a swath of destruction that was two football fields long and eighty feet wide; the effects of bottom trawling nets are considerably worse (Stapleton, 2008).

Beyond the immediate effects, these fishing techniques result in long-term harm to coral populations. Studies have shown that not only do coral populations experience extreme stress due to severe and sudden decreases in fish diversity, causing the coral to grow and reproduce at slower rates and be more susceptible to bleaching, but these decreases in fish diversity can also make coral populations more vulnerable to certain diseases (Debenham). Moreover, coral species have symbiotic relationships with many species of fish. Species such as the parrotfish eat the seaweed that grows on reefs; as parrotfish populations decrease, so do the checks on seaweed populations, allowing the seaweed to “quickly take over and smother reef corals” (Debenham).

Laws and Regulations Fail to Protect Coral

Simultaneously, but not in conjunction with the above activities, coral is harvested and traded around the world for use in jewelry and home aquariums. This trade preceded both global warming and the destructive fishing techniques, yet it suffers now as a result of both. Not only have coral harvests been steadily declining since the 1980s, but many countries have also now imposed either blanket bans or stiff regulations on the destruction, harvest, and trade of coral in attempts to protect the few remaining coral colonies. Red and pink coral, once primarily harvested in the western Pacific Ocean, are now mainly harvested in the Mediterranean Sea, as the Pacific colonies are completely exhausted (Too Precious to Wear, 2009). The Philippines and Timor-Leste are among the few countries to have banned the harvest of all species of coral; many coastal countries either ban the harvesting of certain coral species or of coral in certain regions, or heavily regulate coral harvesting (Timor-Leste, 2004; Australia, 1983; World Wildlife Fund, 2003; UnderwaterTimes News Service, 2007; Jayasinghe, 2009; Lum, 2005; Too Precious to Wear, 2009). Additionally, some species of coral are protected from harvesting on an international level, both by international trade conventions, such as the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), and by regional environmental and marine commissions (CITES, 2009; Too Precious to Wear, 2009).


14

Ampersand

As the consequences of fishing techniques such as muro-ami, bottom trawling, and cyanide poisoning have become more evident, many countries have issued bans on or harsh regulations of these activities (WWF 2004, 2009, 2006, 2004, 2007, 2003, 2005; McAvoy, 2009; Pemberton, 2006; Kirby, 2003; OSPAR Commission, 2003). Notably, the United Nations (UN), after holding a conference to discuss a proposed worldwide ban on bottom trawling, opted not to ban bottom trawling globally. Instead, the UN opted only to ban bottom trawling temporarily in areas not covered by fisheries management groups and to ask all member nations to establish regional marine governing bodies to determine the necessity of banning bottom trawling in those regions (WWF, 2006). Laws and regulations regarding coral harvesting and destruction are only as effective as their monitoring and enforcement. In terms of coral destruction, whether due to certain fishing practices or to other human activities, the very size and quantity of coral reefs can make monitoring difficult, especially in developing countries with strained budgets. While some countries, such as the United States, regularly charge individuals and issue fines for the destruction of coral, other countries, such as the Philippines, have less effective enforcement (McAvoy, 2009; UnderwaterTimes News Service, 2006; WWF, 2007). A Sri Lankan wildlife official was quoted as saying that his department did not have the time or money to enforce laws protecting coral, as it was too busy protecting the elephants (Jayasinghe, 2009). Law enforcement aside, it is difficult to expect poor fishermen facing severely reduced catches to abide by laws essentially prohibiting them from making their livings. In the case of the Filipino fishermen previously alluded to, dynamiting reefs to find scrap metal was their last resort, as they had already lost their fishing livelihoods. In these economic "life or death" situations, no amount of laws or law enforcement will prevent the banned activities from occurring.

Laws regulating and/or banning coral harvesting and coral trading are enforced even less than laws prohibiting coral destruction. In part, this is due to the nature of coral reefs and the relevant laws. Scuba divers harvest live coral by hand, and the illegal specimens are mixed in with legally harvested specimens for shipping purposes. The relative lack of lay knowledge concerning different species of coral, combined with the relative ease of forging export permits, makes it extremely easy for smugglers to falsely declare protected coral as unprotected coral, or to fake the CITES-required or nationally mandated paperwork. On the demand side, the illegal coral is sold in aquarium shops or jewelry stores next to legally caught and imported coral. As the trade in coral is not banned, just regulated, once the coral is smuggled past customs it is as if it had been harvested and imported legally, and aquarium and jewelry shop owners and coral aficionados are none the wiser. Thus, as the following cases demonstrate, law enforcement is not sufficiently effective at protecting coral. In a 2007 case in the Philippines, customs officials accidentally discovered a twenty-foot-long container van containing twenty tons of semi-precious corals (UnderwaterTimes News Service, 2007). A similar case of fortuitous discovery occurred in the United States, where a customs inspector's sensitive nose led him to over forty tons of illegally harvested and imported live coral (Kansas City InfoZine, 2009). A third accidental discovery also occurred in the United States, with the Coast Guard finding five hundred pounds of illegally collected live rock, coral, and sea fans on a boat initially stopped for lacking proper navigation lights (Coast Guard News, 2007). A 2007 case in the United Kingdom makes the failure of law enforcement even more evident. After successfully evading marine law enforcement and customs officials in Indonesia and Malaysia, an aquarium supply wholesaler was finally caught by UK customs officials attempting to smuggle into the UK an amount of endangered coral estimated to be worth fifty thousand British pounds (Brown, 2008; Manchester Evening News, 2008; Clarke, 2008). In summary, these laws do little to quell the destruction, harvest, or trade of coral; these activities continue unabated despite their being criminalized and regulated. Thus, coral populations continue to be decimated by illegal fishing activities and by the ostensibly illegal and/or regulated coral industry. Also unabated is the cycle of continued global warming. Human destruction of and trade in coral now exacerbates the effects of global warming on coral: overfishing, cyanide, and outright coral destruction stress surviving coral populations, make them more vulnerable to the effects of global warming, such as bleaching, and decrease the number of surviving coral colonies (Debenham).

Is it Too Late for Coral?

The future seems even more dire. During the past three years, scientific predictions on the fate of coral reefs have become increasingly pessimistic. In 2006, a scientific panel claimed that, due to increasing greenhouse gas emissions, the world had at most sixty years until ninety percent of the world's coral would be permanently lost (Television New Zealand Limited, 2006). In 2008, a panel of scientists warned that the world had already permanently lost nineteen percent of its coral, and that most coral would be gone by 2048 due to warming oceans and water acidification if emissions were not drastically cut immediately (McDermott, 2008). Most recently, in 2009, a panel of scientists proclaimed that "even if the world acted to put the toughest regulations on greenhouse gas emissions, and climate change was halted [by 2059], the oceans would still simply be too warm", and that the majority of the world's coral would be lost by 2100 (Merchant, 2009). The question remains: is it too late for coral? To save the world's coral, policymakers must confront two separate but intertwined problems. The first involves human destruction and harvest of coral. The World Oceans Conference 2009 has produced a new program, the Coral Reef Initiative, aimed at using the existing system to maintain coral reefs (Heimbuch, 2009). While its ideas are admirable, it does not address the ever-present issue of enforcement. A second proposal involves returning to traditional marine management models (UnderwaterTimes News Service, 2007). Though these models have proven successful, it is possible that fish stocks have been so depleted in certain areas that sustainable fishing and community livelihoods centered on fishing are no longer compatible.
A third policy, already in place in much of the world, asks national governments or regional bodies to create Marine Protected Areas (MPAs), in which fishing and coral harvesting are either banned outright or strictly regulated. Despite the popularity of this mechanism, recent studies have suggested that coral in MPAs is not necessarily more likely to regrow after bleaching episodes, or to be protected from fishing, than coral outside MPAs (UnderwaterTimes News Service, 2008; Asia Is Green; Jayasinghe, 2009). A fourth policy proposal, also already in place in parts of the world, bans the use of certain fishing gear. Although in theory this policy would prevent destructive fishing practices from continuing, in practice it would be, as it already is, extremely difficult to enforce, and it would further disadvantage the poor (UnderwaterTimes News Service, 2009). A fifth option, the most radical of the proposals, involves regrowing coral in nurseries or artificial environments (Dean, 2007; Agence France Presse, 2008). These projects are too recent to conclude whether the cost and time involved will make them viable options. However, the best way to protect coral does not involve implementing a new policy; rather, it involves effectively implementing, monitoring, and enforcing the existing policies and laws. Until people destroying coral are prosecuted, until people harvesting coral are caught, and until effective customs controls no longer depend on happenstance and sensitive noses, policies will not succeed and coral populations will continue to decline. Moreover, it is important to remember that even with effective law enforcement, these policies cannot succeed unless the world adequately addresses global warming—the second problem. Global warming has considerably fewer proposed solutions, as it is a more multi-faceted problem. It requires significant reductions in greenhouse gas emissions, which the world has tried and failed to achieve for the past twenty years—as the toothless Kyoto Protocol evidences. Even discounting the recent scientific predictions as overly pessimistic, the greatest threat to coral populations worldwide starts and ends with global warming. The world will not succeed in preserving and sustaining its natural reserves of coral if it does not stem global warming and its deleterious consequences, both natural and anthropogenic. Global warming sets the cycle in motion and keeps coral-protecting policies from succeeding. Ironically, the coral-destroying greenhouse gas emissions are caused predominantly by developed countries, yet the effects of a mass coral extinction will be felt first and most acutely by developing countries. Not only would the loss of the world's coral result in the loss of a third of all marine life and the destruction of coastal livelihoods, but it would also endanger the very existence of coastal communities. This loss of coral could create a mass migration of increasingly impoverished people to already overcrowded cities, further straining the already-wobbling social security and poverty-assistance schemes. Just as the threat to coral starts with greenhouse gas emissions in developed countries, so too must the salvation of coral and coral-based economies start with the curbing of these emissions in developed countries. Increased law enforcement in developing countries, as well as the further outlawing and regulation of certain fishing and harvesting activities, is certainly necessary. But without severe and immediate reductions in greenhouse gas emissions, it is safe to say that no amount of law enforcement will keep the world's coral alive for much longer than this century.

References

Atlantic corals now protected. World Wildlife Fund. (2007, February 6). Retrieved November 1, 2009 from <http://www.panda.org/ wwf_news/?93700/Atlantic-corals-now protected>.
Atlantic fisheries commission protects cold-water corals from trawling. World Wildlife Fund. (2004, November 14). Retrieved November 1, 2009. <http://www.panda.org/what_we_do/how_we_work/ conservation/marine/news/?16611/Atlantic-fisheries-commissionprotects-cold-water-corals-from-trawling>.
Australia. Great Barrier Reef Marine Park Regulations. (1983). Retrieved November 1, 2009. <http://www.austlii.edu.au/ cgibin/sinodisp/au/legis/cth/consol_reg/gbrmpr1983366/s15. html?query=coral>.
Brown, D. (2008, January 19). Rare coral reefs plundered for fish tanks. Times [on-line]. Retrieved November 8, 2009. <http://www. timesonline.co.uk/tol/news/uk/article3213245.ece>.
Butler, R. A. (2005, November 17). Coral reefs decimated by 2050, Great Barrier Reef's coral 95% dead. Mongabay.com. Retrieved November 8, 2009. <http://news.mongabay.com/2005/1117-corals. html>.


Butterflyfish may go extinct. Science Alert. (2008, February 25). Retrieved November 8, 2009. <http://www.sciencealert.com.au/ news/20082502-16948.html>.
Cayman coral reefs bleached. Cayman Compass. (2009, September 25). Retrieved November 8, 2009. <http://www.caycompass.com/cgibin/CFPnews.cgi?ID=10385778>.
Cinner, J. Unpublished data. James Cook University. Found in: "The Coral Triangle and Climate Change." World Wildlife Fund and the University of Queensland. Accessed November 12, 2009.
Clarke, M. (2008, January 15). UKs largest ever illegal coral seizure. Practical Fishkeeping Magazine. Retrieved November 8, 2009. <http:// www.practicalfishkeeping.co.uk/pfk/pages/item.php?news=1528>.
Conservation Group: Starfish Invasion Threatening Philippines' Coral Reefs; 'Far From Normal. UnderwaterTimes.com. (2007, April 4). Retrieved November 9, 2009. <http://www.underwatertimes.com/news. php?article_id=83264109501>.
Convention on International Trade in Endangered Species of Wild Fauna and Flora. Accessed 12 November 2009. http://www.cites.org/
Coral self destructs under stress: expert. ABC News. Australian Broadcasting Corporation. (2007, March 21). Retrieved November 8, 2009. <http://www.abc.net.au/news/newsitems/200703/s1877317. htm>.
Coral smugglers foiled. Manchester Evening News. M.E.N. Media. (2008, January 15). Retrieved November 8, 2009. <http://www.manchestereveningnews.co.uk/ news/s/1032053_coral_smugglers_foiled>.
Dean, C. (2007, May 1). Coral Is Dying. Can It Be Reborn? New York Times. Retrieved November 8, 2009. <http://www.nytimes. com/2007/05/01/science/earth/01coral.html?pagewanted=1&_r= & ei=5087&em&en=2dbf bf b813112cda&ex=1178251200>.
Debenham, P. Corals in the Red: The State of Corals and Recommendations for Recovery. Too Precious to Wear. Web. Retrieved November 12, 2009.
Doyle, A. (2006, January 24). Coral reefs cheaper to save than neglect - UN. Independent. Retrieved November 8, 2009. <http://www.int.iol.co.za/index. php?set_id=14&click_id=143&art_id=qw11381182206B251>.
EU bans Canary and Azores bottom trawling to save coral reefs. World Wildlife Fund. (2005, September 22). Retrieved November 1, 2009. <http://www.panda.org/wwf_news/?23501/EU-bans-Canary-andAzores-bottom-trawling-to-save-coral-reefs>.
EU throws lifeline to Scotland's coral reefs. World Wildlife Fund. (2003, August 20). Retrieved November 1, 2009. <http://www.panda.org/ wwf_news/?8407/EU-throw lifeline-to-Scotlands coral-reefs>.
Florida Man Sentenced for Illegal Coral Importing. Coast Guard News. (2007, December 20). Retrieved November 8, 2009. <http:// coastguardnews.com/florida-man-sentenced-for-illegal-coral importing/2007/12/20/>.
Haessy, J. P. (2008, September 15). Illegal fishing and coral destruction in the Philippines. Qondio Global. Retrieved November 8, 2009. <http://www.qondio.com/ illegal-fishing-and-coral-destruction-in-the-philippines>.
Heimbuch, J. (2009, May 1). Coral Reef Alliance Talks About What's Stressing Out Coral Reefs. TreeHugger: A Discovery Company. Retrieved November 8, 2009. <http://www.treehugger.com/ files/2009/05/coral-reef-alliance-talks-about-whats-stressing-outcoral-reefs.php>.
Heimbuch, J. (2009, May 12). 6 Steps to Saving the World's Coral Reefs. TreeHugger: A Discovery Company. Retrieved November 8, 2009. <http://www.treehugger.com/files/2009/05/6-steps-tosaving-the-worlds-coral-reefs.php>.
Kirby, A. (2003, June 21). Norway lauded for saving coral. BBC News. Retrieved November 1, 2009. <http://news.bbc.co.uk/2/hi/ science/nature/3006616.stm>.
Japan’s biggest coral reef artificially restored. Cosmos: The Science of Everything. (2008, November 18). Retrieved November 8, 2009. <http://www.cosmosmagazine.com/news/2333/ japans-biggest-coral-reef-be-artificially-restored>.

Jayasinghe, H. P. (2009, January 18). Sri Lanka’s largest coral region uncared for and neglected. The Sunday Times. Retrieved November 8, 2009. <http://www.sundaytimes.lk/090118/News/ sundaytimesnews_25.html>. Lum, C. (2005, June 2). Black-coral harvest rules please all. The Honolulu Advertiser. Retrieved November 1, 2009. <http://the. honoluluadvertiser.com/article/2005/Jun/02/ln/ln11p.html>. McAvoy, A. (2009, August 2). Hawaii protecting coral reefs with big fines. San Francisco Chronicle. Retrieved November 1, 2009 <http:// www.sfgate.com/cgibin/aritcle.cgi?f=/n/a/2009/08/02/national/ a100155D11.DTL>. McDermott, M. (2008, December 12). Many of World’s Reefs Will Be Gone By 2050: 25% of Marine Species Too, and Half a Billion People Without a Job. TreeHugger: A Discovery Company. November 8, 2009. <http://www.treehugger.com/files/2008/12/ many-coral-reefs-gone-by-2050.php>. Merchant, B. (2009, October 26). Dying Coral Reefs to be Frozen, Preserved for the Future. TreeHugger: A Discovery Company. November 8, 2009. <http://www.treehugger.com/files/2009/10/ dying-coral-reefs-to-be-frozen.php>. Ocean’s 30: Poachers arrested in Philippine marine park. World Wildlife Fund. (2007, January 5). Retrieved November 8, 2009. <http:// www.panda.org/wwf_news/?91300/Oceans-30-Poachers-arrestedin-Philippine marine-park>. OSPAR Commmission Biological Diversity and Ecosystems. OSPAR Recommendation 2003/3 on a Network of Marine Protected Areas. Bremen: OSPAR, 2003. Retrieved November 1, 2009. <http://www.ospar.org/v_measures/browse.asp?m enu=00750302260124_0001_000000>. Pemberton, M. (2006, June 30). A helping hand for Alaska’s coral gardens. IOL. Retrieved November 1, 2009. <http://www.iol. co.za/index.php?set_id=14&click_&art_id=qw115161606722B2 26>. The Philippine Fisheries Code of 1998. (1998). AsianLII. Retrieved November 1, 2009.<http://www.worldlii.org//cgibin/disp.pl/ph/ legis/republic_act/ran8550126/ran8550126.html?query=coral>. Report: Filipino Fisherman Having a Blast, Scaring Whales, Destroying Coral. (2006, March 19). UnderwaterTimes. Retrieved November 8, 2009. <http://www.underwatertimes.com/news. php?article_id=10429537018>. Research: Coral Bleaching Disturbs Structure of Fish Communities; MPA’s Have Little Impact On Recovery. (2008, October 28). UnderwaterTimes.com Retrieved November 8, 2009. <http://www.underwatertimes.com/news. php?article_id=61041085923>. Richard, M. G. (2009, April 16). The World’s Largest Forest of Rare Black Coral Found in Mediterranean. Treehugger: A Discovery Company. Retrieved November 12, 2009. http://www.treehugger. com/files/2009/04/rare-black-coral-world-largest-mediterranean sea.php>. Ronald, E. D. (Ed). (1991). Philippines: A Country Study. Washington: GPO for the Library of Congress. Retrieved November 28, 2009. <http://countrystudies.us/philippines/>. Scientists, Conservationists Call on Congress to Renew A Strengthened Coral Conservation Act. (2007, July 23). Underwater Times. Retrieved November 1, 2009. <http://www.underwatertimes.com/ news.php?article_id=81504761039>. Scientists: Fishing Gear Ban Could Help Save World’s Coral Reefs from Climate Change. (2009, June 18). UnderwaterTimes.com. Retrieved November 8, 2009. <http://www.underwatertimes.com/news. php?article_id=23651107408>. Scientists warn of coral reef demise. (2006, March 29). TVNZ: Television New Zealand Limited. Retrieved November 8, 2009. <http://tvnz.co.nz/view/page/488120/691223>. Sensitive deep sea coral reefs protected for the first time in the Mediterranean. (2006, January 30). 
World Wildlife Fund. Retrieved November 1, 2009. <http://www.panda.org/ wwf_news/?57840/Sensitive-deep-sea-coral-reefs-protected-for the-first-time-in-the-Mediterranean>.



Shipping Containers Full of Illegal Coral in Oregon Leads to Indictment. (2009, October 16). Kansas Info Zine. Retrieved November 1, 2009. <http://www.infozine.com/news/stories/op/storiesView/ sid/37931/>. Stapleton, C. (2009, October 18). Coral reef suffers major damage. Palm Beach Post. Retrieved November 1, 2009. <http:// www.palmbeachpost.com/localnews/content/local_news/ epaper/2008/11/18/1118reef.html?imw=Y>. Study: Indonesia Coral Reefs Survive Tsunami But Not Cyanide Bombs. (2007, September 27). UnderwaterTimes.com. Retrieved November 8, 2009. <http://www.underwatertimes.com/news. php?article_id=91014805672>. Theft of ‘More than a Hectare of Coral Reefs’ Uncovered in the Philippines. (2007, February 8). UnderwaterTimes.com. Retrieved November 8, 2009. <http://www.underwatertimes.com/news. php?article_id=10189654327>. Too Precious to Wear. A SeaWeb Program. Retrieved November 8, 2009. <http://www.tooprecioustowear.org/index.html>. UN fails to protect deep sea life from bottom trawling. (2006, November 24). World Wildlife Fund. Retrieved November 1, 2009. <http://www.panda.org/what_we_do/ how_we_work/conservation/marine/news/?87880/ UN-fails-to-protect-deep-sea-life-from-bottom-trawling>. Western Australia’s Ningaloo coral reef given more protection. (2004, November 30). World Wildlife Fund. Retrieved November 1, 2009. <http://www.panda.org/wwf_news/?16851/ Western-Australias-Ningaloo-coral-reef-given-more-protection>. WWF report shows seafloor trawling is destroying Reef biodiversity. (2002, August 16). World Wildlife Fund. Retrieved November 8, 2009. <http://www.panda.org/wwf_news/?2647/WWF-reportshows-seafloor-trawling-is-destroying-Reef-biodiversity>.



Diagrams, Proofs, and Diagrammatic Proofs

Andy Yu



In this paper, I argue for the existence of diagrammatic proofs. After discussing the goals and considerations of mathematical proof in general, I outline the received view, which criticizes diagrammatic arguments in proofs for being unreliable, too particular, or unable to be formalized in the same way verbal arguments can be. I suggest that with a more charitable approach, the existence of diagrammatic proofs becomes much more plausible. This plausibility is established in large part by Mumma's successful formalization of diagrammatic arguments in Euclid's Elements.*

Introduction

It is striking that although diagrammatic arguments were once respectable as parts of proofs, they have now fallen into disrepute. Most famously appearing in Euclid's Elements, diagrammatic arguments occur frequently not only as illustrations, but as parts of purported proofs. Of course, few will claim that diagrammatic arguments have no role whatsoever in mathematics. But according to the received view, diagrammatic arguments cannot be parts of proofs. My aim in this paper is to challenge the received view by presenting a modest thesis: I argue that diagrammatic arguments can be (essential) parts of proofs. In other words, I argue for the existence of diagrammatic proofs. For the purposes of this paper, I define diagrammatic proofs as proofs that depend on diagrammatic arguments, and diagrammatic arguments as arguments that depend on one or more diagrams. Similarly, I define verbal proofs as proofs that depend on verbal arguments, and verbal arguments as arguments that depend on natural or formal language. Proofs are distinguished from other arguments in that proofs are necessarily sound arguments—valid arguments with true premises. With this in mind, the existence of diagrammatic proofs reduces to the existence of sound diagrammatic arguments. This thesis is modest for three reasons. First, I do not claim that all diagrammatic arguments are parts of proofs, as this seems quite obviously false.

As I will show, there are many examples of diagrammatic arguments that are not parts of proofs, since diagrammatic arguments can be valid or invalid like all other arguments. Second, I do not claim that diagrammatic arguments can by themselves constitute proofs. It suffices that diagrammatic arguments can be proper parts of proofs. Third, I do not claim that diagrammatic arguments are in general parts of proofs. It suffices that diagrammatic arguments can be parts of proofs for one or more claims, not that those claims must be proved with diagrammatic arguments. In the second section, I will discuss the nature of mathematical proof in general. In the third section, I will outline the received view on diagrammatic arguments in proofs, including the views of Leibniz, Pasch, and Hilbert. The concerns raised by this view are that diagrammatic arguments are unreliable, too particular, and not formalizable. In the fourth section, I will discuss attempts by Barwise, Etchemendy, and Brown to argue for the existence of diagrammatic proofs. I will show, however, that these attempts fall short. In contrast, I will suggest that Mumma's formalization of Euclid's Elements is a successful attempt to establish the existence of diagrammatic proofs.

Nature of mathematical proof

Two goals: knowledge and understanding

Before delving into a more involved discussion of diagrammatic proofs, it is instructive to provide some context concerning the goals and considerations of mathematical proof in general.

* I would like to thank Melissa Gail Cabigon, William Gillis, Sierra Robart, Rachel Rudolph, and Theodore Widom for providing comments on a draft of this paper.


There are important pedagogical and theoretical goals we want to achieve with proofs, and among the most important of these is the epistemological one (Tieszen, 1992; Resnik, 1992). The most important goal of proof is to show that claims are true, and thereby help us acquire knowledge of them (Avigad, 2008). Still, there seems to be a difference between knowing that claims are true and understanding why they are true. To illustrate the difference between knowledge and understanding, consider a person who has memorized an algebraic proof of Pythagoras' theorem and in some sense knows that the theorem is true, but cannot explain to a student why the theorem is true with, say, a geometrical illustration. This person knows that the theorem is true, but does not understand why it is true. On another note, if the only goal of proof were to show that claims are true, then we would be content with a single proof for every claim. Yet even a cursory review of the history of mathematics shows that there are often many proofs for the same claims. For example, there are over half a dozen proofs each for Pythagoras' theorem and for the claim that 0.999 ... = 1. This suggests that we have different proofs for the same claims because different proofs help us understand the significance of a single claim in different ways. For example, one proof of the claim that 0.999 ... = 1 depends on only elementary algebra, another depends on the convergence of infinite series, and yet another on Dedekind cuts.1

1 Dedekind cuts are a way of partitioning the rational numbers. These cuts are used in one of the proofs of the claim that 0.999 … = 1.
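To make the contrast concrete, here are informal sketches of two of the standard proofs (reproduced for illustration; both are textbook arguments rather than anything specific to the works cited above):

Elementary algebra: let x = 0.999… . Then 10x = 9.999… = 9 + x, so 9x = 9 and x = 1.

Infinite series: 0.999… abbreviates 9/10 + 9/100 + 9/1000 + …, a geometric series with first term 9/10 and ratio 1/10, whose sum is (9/10)/(1 − 1/10) = 1.

The first manipulation is merely computational; the second locates the claim within the theory of convergence, and so offers a different kind of insight.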

They offer different insights, ranging from the merely computational to sophisticated proofs based on mathematical analysis. Thus, I take it that the two goals of proof in general are, first, to show that claims are true (or help us know that they are true), and second, to show why claims are true (or help us understand why they are true).

Two considerations: proofs and better proofs

The two goals of mathematical proof just mentioned lead to some related considerations (Avigad, 2006). First, what does it take for arguments to be parts of proofs? Mathematical arguments are distinguished from other arguments in that they are deductive rather than inductive (Auslander, 2008).2 But even this distinction is not precise enough, since deductive arguments used in everyday life are often not mathematical proofs. What is distinctive about mathematical arguments is that they are rigorous, and they are rigorous in that they are formalizable. In principle, they can be symbolized and verified by computers. For arguments to be parts of proofs, then, they must be formalizable. Second, what, if anything, makes some proofs "better" than others? The fact that we have different proofs for the same claims suggests that some proofs are better than others. While all proofs must show that claims are true, different proofs for the same claims may show why those claims are true in different ways. Some proofs may be better than others, then, according to how they help us understand claims, perhaps in conjunction with other considerations such as simplicity and intuitiveness.
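As a minimal illustration of what "symbolized and verified by computers" can mean, here is a sketch in the Lean proof assistant (my own example, not one drawn from the literature discussed here); the checker accepts each line only if it follows from the system's definitions and inference rules:

-- A claim verified purely by computation.
example : 2 + 3 = 5 := rfl

-- A general claim discharged by appeal to a library lemma.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

Formalizability, in the sense relevant here, is the in-principle possibility of rewriting an informal argument in some such machine-checkable form.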

2  Given any argument, which consists of a set of premises and an associated conclusion, the truth of the premises may or may not guarantee the truth of the conclusion. When the truth of the premises guarantees the truth of the conclusion, the argument is deductive. When the truth of the premises only suggests, but does not guarantee, the truth of the conclusion, the argument is inductive.



Informal proofs, rigor, and formalizability

A distinction G. H. Hardy finds in Hilbert between two kinds of proofs helps us better position where verbal and diagrammatic proofs fit in (Hardy, 1929). Proofs are either "official" formal proofs inside mathematics, or "unofficial" informal proofs in the metamathematics. On the one hand, formal proofs are precisely defined as strings of symbols in axiomatic systems, but they are rarely encountered in everyday mathematics. On the other hand, informal proofs are guided more by intuition, and are more often encountered in everyday mathematics. Still, they are just as rigorous as formal proofs in that both kinds of proofs are formalizable and show that their claims are true. Clearly, verbal arguments encountered in everyday mathematics—strictly speaking, the metamathematics—can be parts of informal proofs. My claim is that diagrammatic arguments can be as well. I take it that verbal proofs are formalizable, but not necessarily formal. I think diagrammatic arguments are formalizable as well, though again, not necessarily formal. We should accept both verbal and diagrammatic arguments as parts of rigorous, legitimate proofs.

Received view on diagrams and proofs

History of the received view

As I suggested in the introduction, the received view we have today on diagrammatic arguments in proofs is a relatively recent development. For almost two thousand years after Euclid's Elements, diagrammatic arguments were the "paradigm of rigor" as far as arguments were concerned (Avigad, 2008). They were used to introduce students to proofs, as they still are today, except that today they are mere stepping stones to learning legitimate proofs. Beginning in the nineteenth century, diagrammatic arguments lost their prestige and became viewed as "imperfect, lacking sufficient mathematical rigor, and relying on a faculty of intuition that has no place in mathematics" (Avigad, 2008). The part played by diagrammatic arguments in proofs was reduced and eventually eliminated, as axiomatizations by Pasch and Hilbert were viewed as corrections or improvements to the shortcomings of Euclid's arguments. According to the received view, legitimate mathematical proofs are formalizable as logical proofs: strings of symbols, where each sub-string is either an assumption of the proof or derived from preceding sub-strings via sound inference rules, and where the last sub-string is the claim to be proved (Avigad, 2007). Of course, the formal proofs of Hilbert are logical proofs and therefore legitimate. And although the verbal arguments encountered in everyday mathematics are parts of informal proofs, they are formalizable, and so they too are legitimate. In contrast, because diagrammatic arguments are not formalizable, they are not legitimate.

Leibniz, Pasch, and Hilbert on the received view

On the received view, diagrammatic arguments cannot be parts of proofs for one of two reasons (or both): either diagrammatic arguments are only heuristic devices to help us understand claims once they have been proven by other means, or they must first be corrected or improved upon and then translated into verbal arguments to earn legitimacy, and even then it is only really the verbal arguments that are legitimate. In either case, since diagrammatic arguments are not formalizable and cannot show that claims are true, they cannot be parts of proofs at all. In his New Essays Concerning Human Understanding, the seventeenth-century German philosopher and mathematician Leibniz attests to the former view, that diagrammatic arguments are only heuristic devices:

[G]eometers do not derive their proofs from diagrams, though the expository approach makes it seem so. . . . It is universal propositions, i.e. definitions and axioms and theorems which have already been demonstrated, that make up the reasoning, and they would sustain it even if there were no diagram (Leibniz, 1704/1981).

Similarly, Pasch, a late-nineteenth-century German mathematician interested in the foundations of geometry and among the first to criticize Euclid's Elements, agrees with Leibniz on the former view. But Pasch also attests to the latter view, that diagrammatic arguments must be repaired to earn legitimacy:

[T]he process of inferring must . . . be independent of diagrams. . . . In the course of the deduction, it is certainly legitimate and useful, though by no means necessary, to think of the reference of the concepts involved. If it is indeed necessary to so think, the defectiveness of the deduction and the inadequacy of the . . . proof is thereby revealed unless it is possible to remove the gaps by modification of the reasoning used (Pasch, 1912).

Even Fomenko, a philosopher of mathematics in our own time, attests to the need to repair diagrammatic arguments:

It happens rather frequently that the proof of one or another mathematical fact can at first be 'seen,' and only after that (and following the visual idea) can we present a logically consistent formulation, which is sometimes a very difficult task requiring serious intellectual efforts (Fomenko, 1994).

Hilbert, one of the most influential mathematicians of the early twentieth century, who was also interested in the foundations of geometry, joins Leibniz and Pasch in warning us against diagrammatic arguments:

Nevertheless, be careful, since it [the use of figures] can easily be misleading. . . . The making of figures is [equivalent to] the experimentation of the physicist, and experimental geometry is already over with the [laying down of the] axioms (Hilbert, 1894).

So from the received view we can extract two main criticisms of diagrammatic arguments in proofs (Detlefsen, 2008). The first is that diagrammatic arguments are unreliable. Klein, for example, cited the diagrammatic "proof" that all triangles are isosceles to substantiate this criticism (Klein, 1939). In fact, it is impossible to construct a diagram according to the specifications required by this diagrammatic argument, but with some manipulation, the diagram misleads us into thinking that it is possible. Similarly, we might think, after having drawn many curves with pen and paper, that continuous curves can fail to be differentiable at only a finite number of points, since we can only ever draw a curve with a finite number of jagged corners. But Weierstrass' everywhere continuous but nowhere differentiable curve shows that this diagrammatic argument is invalid. Or again, we might try to draw a curve to illustrate the intermediate value theorem, which states that a continuous function f defined on an interval [a, b] takes on all values between f(a) and f(b), since any curve we draw that passes through a horizontal line must intersect the line at a point.



But in fact nothing in the diagrammatic argument indicates what the field under consideration is.3 The diagrammatic argument is a proof only if the field is the real numbers, and not, say, the rational numbers (over the rationals, for instance, f(x) = x² − 2 is negative at 1 and positive at 2 yet never takes the value 0, since √2 is irrational).

3 A field is a mathematical structure satisfying axioms that roughly correspond to the arithmetical operations. The diagrammatic argument in question works only if the field under consideration is the real numbers, and not if the field is, say, the rational numbers. The problem is that the argument does not make this assumption explicit.

But diagrammatic arguments are said to be unreliable not just because, for example, poorly drawn diagrams and the appeal to implicit assumptions can mislead us into making invalid inferences. They are unreliable also because they are too particular. So the second main criticism is that diagrammatic arguments are too particular. An example often cited to substantiate this criticism is the diagrammatic argument for the general claim that 1 + 3 + 5 + ... + (2n − 1) = n² (Brown, 2008; Resnik, 1992). The argument depends on a diagram starting out at, say, the bottom corner, with one dot representing 1, then adding a dot above it, to the right of it, and on its upper-right corner to represent 3. For each n, we represent the (n + 1)th case by adding a successive layer to this diagram, each time placing dots above, to the right, and on the upper-right corner, so that each new layer contains two more dots than the last. For each n, we get an n × n array, which represents a square. But even granting that the diagrammatic argument is reliable enough to be part of a proof for a particular case, say n = k, with k² dots in a k × k square array, we can still be suspicious as to whether the argument is part of a proof for the general case, where n is arbitrary. While mathematical claims are often general in nature, arguments that depend on particular diagrams seem unable to establish such general claims. At their very best, diagrammatic arguments can invite us to draw analogies or serve as inductive arguments for more general claims.
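For comparison, the verbal argument that the critics would accept is a routine induction (the standard derivation, given here for illustration); the question at issue is whether the dot diagram conveys anything less general than this:

Base case: for n = 1, the sum is 1 = 1².
Inductive step: if 1 + 3 + … + (2k − 1) = k², then adding the next odd number gives
1 + 3 + … + (2k − 1) + (2k + 1) = k² + 2k + 1 = (k + 1)².
Hence 1 + 3 + 5 + … + (2n − 1) = n² for every n ≥ 1.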

Towards a more charitable approach

Even with these criticisms in mind, however, I think the received view understates the fact that diagrammatic arguments in proofs, as exemplified by Euclid, had a respectable and stable position for many centuries. Perhaps all there is to say is that we now have more mature and sophisticated views of mathematical proof. But with a more charitable approach, there might be more to say. As Avigad suggests, "We humans use diagrams because we are good at recognizing symmetries and relationships in information so represented" (Avigad, 2008). Further, "reflection on the Elements shows that there are implicit rules in play, and norms governing the use of diagrams that are just as determinate as the norms governing modern proof" (Avigad, 2008). In the next section, I will explore attempts to establish the existence of diagrammatic proofs.

Diagrammatic proofs

First attempt: Barwise and Etchemendy

A few, though not many, have argued against the received view, some taking more extreme views than others. On the extreme end, C. S. Peirce thinks that almost all arguments, including logical and mathematical ones, are diagrammatic (Peirce, 1898). Taking a less extreme position, but one that is still controversial, the logicians Barwise and Etchemendy claim that diagrammatic arguments in proofs "can be important, not just as heuristic and pedagogical tools, but as legitimate elements of mathematical proofs" (Barwise & Etchemendy, 1991).


Further, "diagrams and other forms of visual representation can be essential and legitimate components in valid deductive reasoning" (Barwise & Etchemendy, 1991). In response to the criticism that diagrammatic arguments are unreliable as parts of proofs, they counter that the fact that diagrammatic arguments are sometimes unreliable does not entail that diagrammatic arguments are always unreliable. To support this claim, they cite examples in which diagrammatic arguments are present in proofs, including a proof of Pythagoras' theorem that depends on both diagrammatic and verbal arguments. Further, they take diagrams in diagrammatic arguments to represent the claims they are about by exhibiting structural similarity: "a good diagram is isomorphic, or at least homomorphic, to the situation it represents" (Barwise & Etchemendy, 1991). However, as Detlefsen suggests, these examples are not entirely convincing. Barwise and Etchemendy barely provide an argument for their claim. They use phrases such as "one easily sees," and just assert that the diagrammatic argument in the proof of Pythagoras' theorem mentioned above is clearly "a legitimate proof" (Detlefsen, 2008). The diagrammatic arguments they cite seem to have gaps, since notions such as replication and transparency are unclear. The precise role of diagrammatic arguments is also unclear, since sometimes it seems as though parts of proofs that are supposed to depend on diagrammatic arguments are in fact independent of them. In the same proof of Pythagoras' theorem, Barwise and Etchemendy credit the diagrammatic argument for allowing us to infer the straightness of lines in a certain shape. But it seems as though the inference is really based on geometrical facts independent of the diagrammatic argument (Detlefsen, 2008). And Brown, a defender of the existence of diagrammatic proofs, suggests that in general diagrams are not isomorphic or homomorphic to the claims they represent, and so diagrams cannot represent claims in the way Barwise and Etchemendy take them to.

In short, Barwise and Etchemendy's attempt to establish the existence of diagrammatic proofs falls short in that the examples of diagrammatic proofs they cite are unconvincing.

Second attempt: Brown

There is an interesting way to get around the criticism that diagrammatic arguments are unreliable and too particular. The basic idea is this: although all proofs—whether verbal or diagrammatic—are visual in some respect, what matters is not their appearance or what they resemble, but rather what they represent. On this view, all proofs are mere stepping stones that guide us towards that "Aha!" moment where we see that a claim is true. In a series of lectures he gave on the foundations of mathematics, Wittgenstein seems to share a view similar to this:

The figure of the Euclidean proof as used in mathematics is just as rigorous as writing—because it has nothing to do with whether it is drawn well or badly. The main difference between a proof by drawing lines and a proof in writing is that it doesn't matter how well you draw the lines, or whether the r's and l's and m's and e's are written well. [Referring to a sketched figure] This is perfectly all right. It really is a prejudice that these figures are less rigorous; partly because the role of such a figure is mixed up with the construction of a measurable pentagon—mixing up drawing used as symbolism with drawing as producing a certain visual effect (Wittgenstein, 1976).

In other words, just as what matters is not how neatly symbols are inscribed, what matters is not how accurately a diagram is drawn (within reason) or how detailed it is. Instead, what matters is what those symbols or diagrams lead us to see.



Brown, a philosopher of mathematics, adopts this line of reasoning and makes a bold claim (Brown, 2005): “Some ‘pictures’ are not really pictures, but rather are windows to Plato’s heaven” (Brown, 2008). He agrees that on the face of it, diagrammatic arguments are too particular and thus lack the generality desired in mathematics. But for him, this misses the point. The key point for Brown is that diagrammatic arguments work as instruments to help the “unaided mind’s eye.” Since both verbal and diagrammatic arguments do this, both can be parts of proofs. Two analogies illustrate this point.

First, Brown compares his view with representation in aesthetics, in which a representation X represents Y as a picture and Z as a symbol, and Y may or may not be identical to Z. Pictorial representation is similar to denotation (explicit representation), and symbolic representation is similar to connotation (implicit representation). A diagram, then, represents only a particular claim as a picture, but represents the more general claim as a symbol. I think this is rather similar to Goodman's account of representation, so using Goodman's terminology (Goodman, 1976), I think Brown is saying this: a diagram represents a particular claim, but the diagram is also a general-claim-representing-diagram. Accordingly, diagrammatic arguments that, strictly speaking, only represent particular claims can also represent general claims, though in a more abstract way and independently of the fact that diagrammatic arguments do not "really" represent general claims. This, of course, does not mean that the general claims themselves do not exist. It means only that diagrammatic arguments do not necessarily represent the general claims pictorially, only symbolically.

Second, Brown compares his view with representation in differential geometry, in which there is a distinction between intrinsic and extrinsic features. While intrinsic features, such as curvature and arc length, are independent of any particular coordinate system, characterizations of intrinsic features called extrinsic features, such as parametrizations of curves, depend on particular coordinate systems. But just as particular parametrizations of curves depend on particular coordinate systems to describe the curves, diagrammatic arguments can depend on particular diagrams to describe general claims.4 Brown pushes this analogy further in suggesting that, in fact, most of mathematics as we know it is extrinsic, and intrinsic features are only "seen by the mind's eye." This applies to both verbal and diagrammatic proofs, so that knowledge and understanding of claims is attained only in the "mind's eye."

However, the attempts by Barwise, Etchemendy, and Brown to establish the existence of diagrammatic proofs all seem to fall short. Barwise and Etchemendy's attempt falls short in that examples of proofs that are supposedly diagrammatic do not seem to be diagrammatic at all. Meanwhile, Brown's attempt falls short in that it appeals to a "Platonic heaven" and the "mind's eye," ideas that are unclear at best and mystical at worst (Detlefsen, 2008). This is not to say, however, that all attempts to do so are bound to fail. Barwise and Etchemendy might be right after all, even if they fail to convince us using their examples. And the points made by Wittgenstein and Brown suggest that the reliability of diagrammatic arguments is not an issue if they can inspire understandings that go beyond the face value of the diagrams themselves.

4 In response to a skeptical point as to whether we can prove that the curves we describe are really there: this is precisely Brown's point. Brown thinks we cannot do so, because any such proof can only take place in the "mind's eye."


At the very least, once we recognize that poorly drawn diagrams in diagrammatic arguments are no more of an obstacle to proof than bad handwriting is in verbal arguments, diagrammatic arguments can be reliable proofs of particular claims. But Brown's defense of the generality of diagrammatic arguments thus far seems unconvincing. Perhaps we can do better.

Third attempt: Mumma

Mumma has an interesting and, it seems, promising defense of the existence of diagrammatic proofs. In his doctoral thesis, "Intuition Formalized," and his article, "Proofs, Pictures, and Euclid," he introduces the formal system Eu to analyze diagrammatic arguments in Euclid's proofs with the help of Manders' distinction (Mumma, 2006; Mumma, 2008). I will give a rough sketch of how this system works later in this section, but let us first turn our attention to what Manders' distinction is and why it is helpful.

Manders' examination of Euclid suggests that both verbal and diagrammatic arguments can be parts of proofs, provided that we make a crucial distinction between exact and coexact properties of diagrams (Manders, 1995). While exact properties are relations between magnitudes—lengths, areas, and angles—of the same type, coexact properties are topological relations between these magnitudes—the regions they define, points of intersection, and so on (Mumma, 2008). Diagrams, then, vary with respect to their exact properties in that their sides, angles, and areas vary. But they do not vary with respect to their coexact properties in that the existence of intersection points and the location of certain points in certain regions do not vary. The suggestion, then, is that diagrammatic arguments can be parts of proofs, provided that these arguments depend only on the coexact properties of diagrams. In particular, diagrammatic arguments must forbid the inference of exact properties except when they are inferred from coexact properties or from separate verbal arguments. This view of Euclid's diagrammatic arguments is consistent with the fact that Euclid takes pains to prove seemingly simple claims, such as the triangle inequality. After all, such claims would require little to no proof if he could make unrestricted inferences from diagrams. The fact that Euclid restricts himself in the inferences he makes suggests that he himself distinguished, at least implicitly, between the valid and invalid inferences we can make from diagrams. He wanted to ensure that the diagrammatic arguments in his proofs were reproducible, and failure to adhere to the distinction between coexact and exact properties would make his arguments difficult to reproduce.
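A rough computational gloss may help fix the distinction. The sketch below is my own illustration, not part of Manders' or Mumma's apparatus, and every name in it is hypothetical: it reads one particular drawn diagram as coordinates and extracts a coexact property (whether two circles meet at all) alongside an exact property (whether their radii are equal). Perturbing the drawing slightly changes the exact answer but, within limits, not the coexact one, which is why only the latter is a safe basis for diagrammatic inference.

import math

def circles_intersect(c1, r1, c2, r2):
    # Coexact property: do the two circles share at least one point?
    # This survives small inaccuracies in how the diagram is drawn.
    d = math.dist(c1, c2)
    return abs(r1 - r2) <= d <= r1 + r2

def radii_equal(r1, r2):
    # Exact property: a relation between magnitudes of the same type,
    # destroyed by the slightest inaccuracy in the drawing.
    return math.isclose(r1, r2)

# One particular diagram: two unit circles, as in Euclid's first construction.
A, B = (0.0, 0.0), (1.0, 0.0)
print(circles_intersect(A, 1.0, B, 1.0))    # True  (they meet)
print(radii_equal(1.0, 1.0))                # True  (equal radii)

# A slightly "badly drawn" version of the same diagram.
print(circles_intersect(A, 1.02, B, 0.97))  # still True
print(radii_equal(1.02, 0.97))              # now False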

With the distinction Manders makes between exact and coexact properties, Mumma suggests that diagrammatic arguments can be reliable. In particular, he suggests that despite their particularity as diagrams, they can still be reliable as arguments for general claims. I have already alluded to the implicit rules and norms governing diagrammatic arguments adhered to by Euclid. The formal system Eu makes these rules and norms explicit with precisely defined, sound rules of inference. In the system, a well-formed atomic claim has the form Δ, A, where Δ is the diagrammatic argument and A is the verbal argument. While the diagrammatic argument specifies the coexact properties of diagrams and consists of labeled points with coordinates, the verbal argument specifies relations between magnitudes in diagrams and consists of a syntax similar to that of first-order logic. Propositions, then, are conditionals of the form Δ1, A1 ⇒ Δ2, A2. If we prove Δ2, A2 from Δ1, A1, then we prove the conditional.



Where Δ1 = Δ2, the conditional represents a theorem, and where Δ2 contains additional elements, the conditional represents a problem (to solve). However, we should note that the diagrammatic argument is rarely obvious. Often, we need to extract diagrammatic arguments from parts of proofs that specify rules for the construction of diagrams and what valid inferences can be made from those diagrams.5 I will not be delving too deeply into the specifics of Eu, but the point I hope to make here is that diagrammatic arguments are formalizable in the same way verbal arguments are. Since this formalization is done in a way faithful to Euclid, we have excellent reason to think that diagrammatic arguments can be parts of proofs. The distinction between the valid and invalid inferences we can make from a diagram corresponds to the distinction between coexact and exact properties. And since Eu is a formal system of proof, it allows us to verify the inferences (Avigad, 2008). Of course, as the examples mentioned earlier suggest, Mumma recognizes that not all diagrammatic arguments are parts of proofs. Even Euclid is guilty of being misled by diagrammatic arguments: when he assumes that two circles intersect at a point based on a diagrammatic argument alone, he fails to invoke the axiom of continuity that we now realize is necessary for this inference (in the rational plane, for example, the two unit circles centered at (0, 0) and (1, 0) have no point of intersection, since the candidate points (1/2, ±√3/2) have irrational coordinates). But the problem is less serious than often thought: "Euclid's proofs still have gaps, but they appear much smaller than they [sic] those which open up when a modern axiomatization serves as the ideal for rigorous proof in elementary geometry" (Mumma, 2002). This suggests that modern axiomatizations make it appear as though there are large gaps in Euclid's arguments precisely because these axiomatizations ignore the diagrammatic arguments and focus only on the verbal ones. Further:

A critique which takes Eu as the ideal is much less damning. What Eu has, and Euclid does not, is an explicit method for judging what is and isn't general in a constructed diagram. The extra steps that these rules require fit in naturally between the steps Euclid actually makes. Eu actually fills in gaps in Euclid's proofs. It does not alter their structure completely (Mumma, 2008).

What is remarkable about Eu is that it gives a precise account of how diagrammatic arguments in general can be parts of proofs. Diagrammatic arguments can be reliable parts of proofs for general claims, provided that we take care to make only valid inferences corresponding to coexact properties. In light of Eu, Mumma can substantiate the claim that diagrammatic proofs exist.

5 The need to adhere to construction rules and the fact that there is a distinction between coexact and exact properties suggest that the context in which diagrammatic arguments appear is vital.

Free rides

Before concluding, I want to make a brief remark about free rides, a term borrowed from Shimojima (Shimojima, 1996). Recall that on the received view, verbal proofs—the only legitimate proofs—are strings of symbols. Since they are linear, each sub-string is inferred directly from the previous sub-string, and each sub-string of symbols is a step in a proof. In contrast, diagrammatic proofs are visual images. Since they are cumulative, each step in the construction of a diagram depends not only on the previous step, but on all previous steps. Diagrams, then, can provide a free ride in that we can automatically infer from them claims that are not immediately apparent from the set of assumptions. To use Mumma's example, consider the ordering of a set of points on a line (Mumma, 2008).6 Given the assumptions that B is between A and C, and that C is between B and D, we can infer that C is between A and D. Using a verbal argument, we can invoke an axiom or theorem of the form (∀x)(∀y)(∀z)(∀w)((Between(xyz) ⋀ Between(yzw)) → Between(xzw))7 to infer from Between(ABC) and Between(BCD) that Between(ACD).
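Written out, the verbal route is a short explicit derivation; nothing beyond instantiating the axiom and applying modus ponens is required, but every step must be stated:

1. Between(ABC) and Between(BCD)  (assumptions)
2. (Between(ABC) ∧ Between(BCD)) → Between(ACD)  (the axiom instantiated with x = A, y = B, z = C, w = D)
3. Between(ACD)  (from 1 and 2 by modus ponens)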

6  Interestingly, Sierra Robart informs me that students preparing for the Law School Admissions Test are taught to draw diagrams like these to help answer questions in the logic games section. This bolsters the claim that diagrammatic arguments can be reliable and efficient.
7  This axiom of first-order logic simply “says” that for all x, y, z, and w, if y is between x and z, and if z is between y and w, then z is between x and w.


But using a diagrammatic argument along the lines of Eu, we can simply invoke the following diagram:8

°       °       °       °
A       B       C       D

From the very construction of the diagram alone, in accordance with the given assumptions, we can infer that C is between A and D for free. It should be clear that this happens no matter how we draw the diagram, provided that we follow the construction rules as specified.9

8  The little circular dots represent points labeled respectively by A, B, C, and D.
9  Mumma points out the importance of linkage in the construction of a diagram: we must follow the construction rules carefully and in the correct order. Although a given diagram can be constructed according to many different construction rules, only a few of them will allow us to draw the relevant valid inferences for a given claim.

Conclusion

I hope by now to have established the existence of diagrammatic proofs. Diagrammatic proofs show how claims are true and why they are true. And they do so because they are formalizable and can reveal different aspects of the same claims. Mumma’s formalization of Euclid’s arguments shows that diagrammatic arguments can be reliable and can be parts of proofs for general claims. We should remind ourselves, however, that the renewed interest in and defense of diagrammatic arguments as parts of proofs is relatively recent. The literature on diagrammatic arguments is small but growing. So the acceptance of diagrammatic arguments alongside verbal arguments in proofs still leaves some ends untied. Many interesting questions remain. We have seen, as with free rides, that diagrammatic proofs can be more intuitive than verbal ones. But granting that diagrammatic proofs are clearly distinct from verbal ones, what distinguishes them from each other? Also, Mumma provides a

precise account of arguments that depend on geometric diagrams. But what about those that depend on Venn diagrams, category-theoretic diagrams for showing commutativity, and arrays used to prove the denumerability of the natural numbers? Can they be formalized in the same way Euclid’s arguments have been in Eu? Answers to these questions and more will offer promising new insights into diagrams, proofs, and diagrammatic proofs as old as Euclid’s own.

References

Auslander, J. (2008). On the roles of proof in mathematics. In B. Gold & R. A. Simons (Eds.), Proofs and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America.
Avigad, J. (2006). Mathematical method and proof. Synthese, 153, 105–159.
Avigad, J. (2007). Philosophy of mathematics. In C. Boundas (Ed.), The Edinburgh Companion to Twentieth-Century Philosophies. Edinburgh University Press.
Avigad, J. (2008). Understanding proofs. In P. Mancosu (Ed.), The Philosophy of Mathematical Practice. Oxford University Press.
Barwise, J. & Etchemendy, J. (1991). Visual information and valid reasoning. In W. Zimmermann & S. Cunningham (Eds.), Visualization in Teaching and Learning Mathematics. Mathematical Association of America.
Brown, J. R. (2005). Naturalism, pictures, and platonic intuitions. In P. Mancosu, K. F. Jørgensen, & S. A. Pedersen (Eds.), Visualization, Explanation and Reasoning Styles in Mathematics. Springer.
Brown, J. R. (2008). Philosophy of Mathematics, Second Edition. Routledge.
Detlefsen, M. (2008). Proof: Its nature and significance. In B. Gold & R. A. Simons (Eds.), Proofs and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America.
Fomenko, A. (1994). Visual Geometry and Topology. Springer-Verlag.
Goodman, N. (1976). Languages of Art, Second Edition. Hackett Publishing Company, Inc.
Hardy, G. H. (1929). Mathematical proof. Mind, 38(149), 1–25.
Hilbert, D. (1894). Grundlagen der Geometrie. Unpublished lectures.
Klein, F. (1939). Elementary Mathematics from an Advanced Standpoint. Dover Publications.
Leibniz, G. (1704/1981). New Essays Concerning Human Understanding. Cambridge University Press.
Manders, K. (1995). The Euclidean Diagram. Ph.D. thesis, unpublished draft.
Mumma, J. (2006). Intuition formalized. Ph.D. thesis, Carnegie Mellon University.
Mumma, J. (2008). Proofs, pictures, and Euclid.
Pasch, M. (1912). Vorlesungen über neuere Geometrie, Second Edition. Teubner.
Peirce, C. S. (1898). The logic of mathematics in relation to education. Educational Review, 8, 209–216.
Resnik, M. D. (1992). Proof as a source of truth. In M. Detlefsen (Ed.), Proof and Knowledge in Mathematics. Routledge.
Shimojima, A. (1996). Operational constraints in diagrammatic reasoning. In G. Allwein & J. Barwise (Eds.), Logical Reasoning with Diagrams. Oxford University Press.
Tieszen, R. (1992). What is a proof? In M. Detlefsen (Ed.), Proof, Logic and Formalization. Routledge.
Wittgenstein, L. (1976). Wittgenstein’s lectures on the foundations of mathematics, Cambridge 1939. From the notes of R. G. Bosanquet, Norman Malcolm, Rush Rhees, and Yorick Smythies. University of Chicago Press.



Translated by Elena Ponte

Music conveys emotions—joy, sadness, anger, and fear—that manifest on the psychological, physical, and neurological levels with various degrees of intensity; from pleasant and unpleasant emotional reactions, to physical responses that range from tears, chills, or shivers, to variations in heart and breathing rhythms. These complex neurological reactions are presented in detail in this paper. The article considers both the cognitivist and the emotionalist approaches to this issue as it reviews those characteristics of music that are capable of communicating a particular emotion to the listener.

Music is an art form known for its emotional power. At one time or another, most people have been moved, more or less intensely, by music. These emotional reactions can manifest themselves in diverse ways, on the psychological, physical, and neurological levels. This raises questions about the causes and mechanisms that allow music to make us feel emotions that are often very intense. Most literature on this subject considers only one aspect of this phenomenon; here we will try to integrate the different facets in order to better understand the phenomenon as a whole. We will first consider the characteristics of music that allow it to convey emotions. Can music by itself convey an emotion


clearly enough for it to be understood by the majority of listeners? We will then consider what this musically transmitted emotion consists of for those who experience it, before turning to the neurological mechanisms that come into play when emotions are perceived through music. Do the different emotions induced by music affect the brain in different ways? Which regions of the brain are implicated in the perception of emotions provoked by music?

The musical characteristics implicated in the stimulation of emotions

Research seems to support the idea that certain intrinsic characteristics of music can actually communicate a particular emotion or feeling to the listener. We will first review several studies that use the arousal and pleasure characteristics of music to determine what emotions it expresses. We will then turn to a second group of studies, focused on specific musical structures that can give rise to strong emotions in the listener.

Several authors work with the Circumplex Theory of Emotions to support their studies. This is a general theory of emotions that proposes two dimensions of a stimulus closely related to the emotion being expressed: its arousal potential and its positive–negative valence (pleasant–unpleasant, like–dislike). The two dimensions are placed on two continua: a vertical one for arousal potential, running from excited to sleepy, and a horizontal one for valence, running from pleasant to unpleasant. The theory is conceptualized as a circle with four quadrants in which most emotions can be placed. Each quadrant represents a specific combination of the two dimensions, as shown in Figure 1 (Posner, Russell, & Peterson, 2005).
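To make the geometry of the model concrete, here is a minimal sketch in Python. It is not taken from Posner, Russell, and Peterson; the coordinates assigned to each emotion are illustrative guesses, and only the sign conventions follow the description above (valence on the horizontal axis from unpleasant to pleasant, arousal on the vertical axis from sleepy to excited).

```python
# A hypothetical illustration of the circumplex layout, not a published model.
# Each emotion is placed at an assumed (valence, arousal) coordinate and the
# quadrant is read off from the signs of the two dimensions.

def quadrant(valence: float, arousal: float) -> str:
    """Name the circumplex quadrant for a (valence, arousal) pair."""
    side = "pleasant" if valence >= 0 else "unpleasant"
    level = "high arousal" if arousal >= 0 else "low arousal"
    return f"{side}, {level}"

# Illustrative placements of the four emotions discussed in the text.
emotions = {
    "joy":     (0.8, 0.6),    # pleasant and fairly arousing
    "sadness": (-0.7, -0.5),  # unpleasant and low in arousal
    "anger":   (-0.8, 0.7),   # unpleasant and high in arousal
    "fear":    (-0.6, 0.4),   # unpleasant; its arousal level is less well defined
}

for name, (v, a) in emotions.items():
    print(f"{name}: {quadrant(v, a)}")
```

On this reading, an emotion is predicted from nothing more than where a stimulus falls on the two axes, which is the logic the studies reviewed below exploit.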

Figure 1: Representation of the “Circumplex Theory of Emotion” from Posner, Russell and Peterson (2005)

North and Hargreaves (1997) use an adaptation of this theory to confirm that the emotion expressed by a musical piece can be correlated with its qualities of arousal and pleasure. Their results indicate that the emotion expressed by music can be predicted as a function of its levels of arousal and pleasure, each rated by independent subjects, one group having noted the emotion expressed and the other the arousal level. Furthermore, a significant combination of the levels of the two dimensions predicts the emotion expressed. For example, a musical piece noted by one group as having a low arousal level and a high pleasure level is also noted as more relaxing and calm. In addition, the North and Hargreaves study suggests an inverted-U relation between arousal and pleasure, with a moderate level of arousal being the most pleasant and preferred over the extreme levels. This suggests a relation between an individual’s preference for a musical piece and the emotional reaction provoked by that music, as both are related to the same characteristics: the level of arousal determines the level of pleasure, and the levels of arousal and pleasure together determine the emotion expressed (North & Hargreaves, 1997). Researchers Mark Meerum Terwogt and Flora Van Grinsven (1991) have also based their approach on the Circumplex Theory of Emotion. They tried to



determine whether subjects from different age groups were capable of recognizing the emotions expressed by classical music pieces, selected by a professional musician and further approved by an independent jury of eight non-musician adults, as expressing single emotions: joy, anger, fear, and sadness. The objective of the study was to determine whether certain emotions were easier to express with music than others. Their results show that the majority of listeners of all ages, including children not yet five years old, easily recognize whether the emotion expressed is positive or negative; they can thus differentiate joy from the three other emotions. The recognition of sadness also appears to be easy, but fear and anger seem to be more difficult to identify. A possible explanation for this phenomenon, according to the authors, is confusion among the subjects, particularly the youngest, between the emotion expressed by the music, which they are asked to identify, and the emotion that they are feeling. An aggressive piece of music expressing anger can generate fear in the subject, hence the confusion. Of all the emotions, fear appears to be the most difficult to express through music. This could be explained by the lack of a precise arousal level for this emotion: fear can either paralyze or arouse a person, in contrast to emotions such as joy, sadness, and anger, which are well defined both on the arousal axis and on the positive–negative valence axis (Terwogt & Grinsven, 1991). Beyond the dimensions of arousal and pleasure in music, certain musical characteristics seem to give rise to “chills” or “thrills”: very intense emotional reactions often accompanied by shivers or goose bumps and sometimes even by tears or other pleasant physical responses (Sloboda, 1991). These physical signs allow researchers to locate the exact moments when a piece of music induces a certain emotion and to determine whether certain elements in the music cause these emotional states. It is by combining physiological measures such as skin conductance response, psychological measures (surveys), and the pressure applied to a mouse when the subject shivers that researchers Grewe, Nagel, Kopiez, and Altenmüller (2005) reached the conclusion that certain structural elements in music can be

linked to strong emotions in the listener. Shivering, however, is not a common reaction and is not activated in the same manner in different subjects. The authors have not been able to discern a specific musical structure leading to shivering. Nevertheless, they have identified several factors that might contribute to its activation, such as harmonic sequences, the entrance of a voice, and the beginning of a new part, that is, violations of expectancies (Grewe, Nagel, Kopiez, & Altenmüller, 2005, p. 447). These results are supported by Steinbeis, Koelsch, and Sloboda (2005), who found that unexpected harmonic events spur an emotion in the listener. Physiological measures (electrodermal and electroencephalogram activity) and psychological measures (emotional self-reports) suggest that these violations predispose subjects to an increased emotionality (Steinbeis, Koelsch, & Sloboda, 2005, p. 460). Both studies are based on the idea that certain musical characteristics, in this case the violation of the listener’s expectations, can induce a certain emotional state in the listener. Nonetheless, Grewe, Nagel, Kopiez, and Altenmüller specify that these emotional responses “do not occur in a reflex-like manner, but as the result of attentive, experienced, and conscious musical enjoyment” (2005, p. 448). A study by John A. Sloboda (1991) also supports this idea, although he goes further. He not only associates shivers with new or unexpected harmonies, but also attributes tears to sequences or appoggiaturas, and a faster heartbeat, although rare, to an acceleration in cadence (Sloboda, 1991). It is interesting to note that although the reaction itself is innate, the author suggests that the association of this response with music is a learned ability. Three observations support this: these responses are not shared by young infants or by other musical cultures, musical structures have to be learned in order to be perceived, and the emotional response increases with increased exposure (Sloboda, 1991). He puts forth the hypothesis that “certain musical structures represent significant emotion-provoking events at a rather abstract level” (Sloboda, 1991).


Physiological and neurological mechanisms in the perception of music

The two most popular theories in the field of music perception are the Emotivist theory, which holds that music actually gives rise to emotional reactions in the listener, and the Cognitivist theory, which states, on the contrary, that listeners do not feel the emotion transmitted by the structure of the musical piece but merely perceive or recognize it. This second part of the paper focuses on studies of neurological and physiological reactions to musical stimuli in which the researchers side with the emotivist approach.

To link the emotions perceived through music with physiological reactions, Krumhansl (1997) classifies musical pieces according to three distinct emotions: sadness, fear, and joy. Stimuli of sad music give rise to stronger physiological reactions such as changes in heart rhythm, blood pressure, and body temperature. Stimuli of fearful music generate stronger reactions in the variation of blood flow velocity and amplitude. Stimuli of joyful music cause stronger variation in breathing patterns. Sad pieces thus have the strongest effect on the heart and electrodermal systems, fear-inducing music affects blood pressure, and joyful music affects mostly the respiratory system (Krumhansl, 1997).

Beyond these easily quantifiable variations in physiological reactions, certain researchers have extended the investigation to the neurological changes underlying physiological changes, notably “chills” or “shivers”. In a study at McGill University by Blood and Zatorre (2001), brain reactions to shivers induced by pleasant music were observed with the aid of PET (positron emission tomography) scans, and changes in blood flow in the brain in reaction to the stimulus were noted. These changes were associated with very positive emotional reactions. In this study, a certain piece of music was chosen because it was considered very important to the subjects. These changes took place in specific areas of the brain recognized as strongly correlated with emotions, such as the amygdala, the midbrain, and the prefrontal cortex. These areas are also stimulated by drugs such as cocaine and heroin, by sexual activity, and by food intake (Blood & Zatorre, 2001). These reactions are accompanied by changes in heart and breathing rhythms. More precisely, the subjects shivered in seventy-seven percent of the cases where the piece was emotionally important to them; heart rate and breathing increased significantly. When researchers took scans using a piece of music that was neutral for the individual, no shivering was observed. The subjects showed a tendency towards a higher level of emotional intensity compared to the intensity of the shivers; this was explained by the fact that emotional intensity has to be quite high before shivering occurs. The average level of emotional intensity was 7.4 out of 10, compared to an average shiver intensity of 4.5 out of 10. During the scanning, researchers noticed an activation and increase in rCBF (regional cerebral blood flow) in the left ventral striatum, the dorsomedial midbrain, the bilateral insula, the thalamus, the SMA (supplementary motor area), and the bilateral cerebellum. The increase in rCBF involves the paralimbic regions and areas related to arousal and motor processing. It is also related to the brain’s reward circuitry; activity in this region is usually related to the production of dopamine as well as other neurotransmitters. Furthermore, rCBF decreases with increasing shiver intensity in the right amygdala, the left hippocampus, and the posterior bilateral neocortical regions, all usually related to negative emotions (Blood & Zatorre, 2001). The study concludes that



in activating the reward circuit, not only is pleasure increased, but brain activity in regions associated with negative thoughts and emotions decreases.

Physiological and neurological reactions are undeniable, but it is necessary to evaluate their importance by comparing their intensity with that produced by stimuli of different kinds representing the same basic emotion. In a study at the University of Zurich, Baumgartner, Esslen, and Jäncke (2005) used very simple stimuli (musical pieces representing three precise emotions: fear, joy, and sadness; or fixed images representing the same emotions) as well as combined stimuli (musical pieces and images presented at the same time). The researchers found that emotional, physiological, and neurological reactions were stronger and more precise with the combined stimuli (Baumgartner et al., 2005). Not only were reactions generally stronger to the images alone than to the music alone, but the scores for the combined stimuli largely surpassed both simple stimuli. This is to say that emotional reactions are present in the perception of music, but are not the most intense reactions in comparison with the other stimuli tested. The study also considered alpha power density (APD), obtained from analysis of the alpha band of the electroencephalogram, a measure inversely related to brain activity. The highest APD scores were found for the music stimuli, implying that brain activity was weaker than for the other simple or combined stimuli (Baumgartner et al., 2005). These results seem to confirm that the intensity of emotional reactions to music is the lowest of the stimuli tested, which can be explained by the fact that musical stimuli activate internal modes of brain function in which the amygdala, the striatum, and the thalamus are recognized as inducing strong emotions, and that these reactions cannot be measured or detected by the electroencephalogram (Baumgartner et al., 2005).

As for the purely neurological reaction to music, a dominant tendency towards left-hemisphere activity is generally observed with positive reactions to pleasant music. More precisely, such music activates the primary auditory cortex, the medial temporal gyrus, and the cuneus (Flores-Gutiérrez, Diaz, Barrios, Favila-Humara, Guevara, Del Rio-Portilla, & Corsi-Cabrera, 2007). In the study by Flores-Gutiérrez et al. (2007), all musical stimuli activated the superior temporal gyrus in both hemispheres, but only positive emotional responses involved the left gyrus and only negative emotional responses involved the right gyrus. Activations in the right regions of the brain were observed only for negative emotions, with both analysis methods. These regions are important elements of the paralimbic system. The researchers noticed that cerebral activity was more intense for positive than for negative emotions (Flores-Gutiérrez et al., 2007). In most cases, music activates regions found in the right hemisphere, even if the music evokes negative emotional reactions, which are related to regions in the left hemisphere (Boso, Politi, Barale, & Emanuele, 2006). In the majority of research with EEG (electroencephalography), frontal left asymmetries are associated with positive emotional reactions, or with the decrease of negative emotions during musical stimuli, and frontal right asymmetries are associated with negative emotional reactions or the decrease of positive emotions (Boso et al., 2006).


In accordance with the modular theory of music perception, certain musical characteristics can be associated with precise neurological regions in both hemispheres during the process of music perception (Boso et al., 2006). Positive emotional reactions activate the frontal brain regions, whereas negative emotional reactions activate temporal brain regions (Boso et al., 2006). More specifically, the regions activated by negative reactions are the amygdala, the hippocampus, the parahippocampal gyrus, and the temporal lobes. For positive reactions, the inferior frontal gyrus, the inferior Brodmann region in the neocortex, the anterior insula, the ventral striatum, the rolandic operculum, and Heschl’s gyrus are activated (Boso et al., 2006).

Besides activating specific brain regions, the perception of music and the emotional reactions that it induces can also implicate certain neurotransmitters and other biochemical mediators. A positive reaction to music is partly due to the release of dopamine in the ventral striatum (Boso et al., 2006). The presence of endorphins and endocannabinoids in the blood has also been detected in reaction to musical stimuli (Blood & Zatorre, 2001).

Conclusion

To conclude, we believe that the cognitivist and emotivist theories are not necessarily opposed but, on the contrary, complement each other. As we have seen, several studies indicate that certain characteristics of music are indeed capable of communicating emotions that are identifiable by the listener. The

levels of arousal and pleasure (pleasant–unpleasant) would allow music to express a specific emotion, whereas a violation of the listener’s expectations, which can take various forms, can give rise to a very strong emotion accompanied by physical reactions. Furthermore, the studies of physiological and neurological reactions support the emotivist approach, and establish that the perception of music entails real changes that are related to the concrete emotion felt by the listener. We can thus conclude that these theories are not contradictory, but represent two different approaches to research. To achieve a view of the field as a whole, it seems necessary, in our opinion, to integrate both approaches.

References

Baumgartner, T., Esslen, M., and Jäncke, L. (2006). From emotion perception to emotion experience: Emotions evoked by pictures and classical music [Electronic version]. International Journal of Psychophysiology, 60, 34-43.
Blood, A. and Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion [Electronic version]. Proceedings of the National Academy of Sciences, 98(20), 11818-11823.
Boso, M., Politi, P., Barale, F., and Emanuele, E. (2006). Neurophysiology and neurobiology of the musical experience [Electronic version]. Functional Neurology, 21(4), 187-191.
Flores-Gutiérrez, E. O., Diaz, J-L., Barrios, F. A., Favila-Humara, R., Guevara, M. A., Del Rio-Portilla, Y., and Corsi-Cabrera, M. (2007). Metabolic and electric brain patterns during pleasant and unpleasant emotions induced by music masterpieces [Electronic version]. International Journal of Psychophysiology, 65, 69-84.
Grewe, O., Nagel, F., Kopiez, R. and Altenmüller, E. (2005). How does music arouse “chills”? Investigating strong emotions, combining psychological, physiological and psychoacoustical methods [Electronic version]. Annals of the New York Academy of Sciences, 1060, 446-449.
Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology [Electronic version]. Canadian Journal of Experimental Psychology, 51(4), 336-352.
North, A. C. and Hargreaves, D. J. (1997). Liking, arousal potential, and the emotions expressed by music [Electronic version]. Scandinavian Journal of Psychology, 38, 45-53.
Sloboda, J. A. (1991). Music structure and emotional response: Some empirical findings [Electronic version]. Psychology of Music, 19, 110-120.
Steinbeis, N., Koelsch, S. and Sloboda, J. A. (2005). Emotional processing of harmonic expectancy violations [Electronic version]. Annals of the New York Academy of Sciences, 1060, 457-461.
Terwogt, M. M. and Grinsven, F. V. (1991). Musical expression of moodstate [Electronic version]. Psychology of Music, 19, 99-109.



Are poverty reduction and sustainable development mutually exclusive? Through a historical look at land use in the Peruvian Amazon, Nicholas Moreau identifies a variety of factors which shape the relationship between people and the land. In doing so, he traces the motivations behind current policy aimed at poverty reduction and sustainable land use in order to question their legitimacy.

Concern with environmental degradation as a result of overpopulation and land misuse is an increasing influence on development initiatives. Population alarmists argue that the drastic nature of environmental degradation and climate change requires that poverty reduction and cooperative measures take a back seat or be set aside altogether (Sen, 1994). Certain areas in the Peruvian Amazon seem to confirm this hypothesis. The ribereños, the largest group in the area, are subsistence farmers left over from the rubber boom of a hundred years ago (Coomes, 2000). In one village, increased land use in the last thirty years has reduced the amount of primary forest from fifty percent to about one percent. Furthermore, the time allotted for land regeneration has been reduced by more than half

(Coomes, 2000). But what has the historical relationship between local people and the landscape been, and how does this compare to more recent history? This paper will address the historical development of land use in the Peruvian Amazon. Particular attention will be given to external factors such as government policy, technology, and demographic trends; and to the impact each has on land use. It is necessarily the locals who use the land, so it is reasonable to assume that they must be primarily responsible for its condition. But what motivates or manipulates their interaction? This paper will seek to show how external forces and actors not only


have had substantial roles in shaping the landscape through time, but are also the primary drivers of the very degradation they now seek to stop. Two common themes will emerge from this line of reasoning. The first reflects the historical relationships between exploitation, poverty, and development: an analysis of the history of the Peruvian Amazon evidences the blurred line between exploitation and development. The second theme relates to peasant wellbeing and the sustainability of the landscape, and the connection of both with land use. Historical land use changes in this area will be described and presented as evidence that local modes of production are both efficient and sustainable at certain levels. This will be relevant for future initiatives concerned with impacting the ribereños and their land, whether in the name of development or of conservation.

Swidden Agriculture: Land Use Until 1492

Until recently, academic literature of the New World described the pre-contact Americas as uninhabited and pristine. Contrary to this ‘pristine myth’, the Americas were densely populated and their landscapes were actively managed (Denevan, 1992). This is the case even in the Peruvian Amazon which, while not a territory of intensive agriculture, was widely used by swidden-fallow agriculturists (Denevan, 1992).1 There is a wide body of evidence demonstrating active forest management. This includes the presence of terra pretta2, rich soils up to fifty centimeters deep built up by careful land management; artificial charcoal deposits, providing evidence of burning practices; and specialized forests, which contain unnaturally high densities of useful plant species (Denevan, 1992). In total, it is estimated that forty percent of currently uninhabited ‘pristine’ tropical forest is actually a direct product of historic human clearing and management (Denevan, 1992). Indeed, swidden-fallow agriculture has been utilized in the Peruvian Amazon for thousands of years by indigenous inhabitants who actively managed and transformed their environment (Denevan, 1992).

1  Swidden-fallow agriculture—also known as slash and burn, swidden, or shifting agriculture—uses multiple plots of land held in various stages of growth and re-growth. Primary forest is initially cleared and burned to add nutrients to the soil. The area is then farmed for up to 20 years before being allowed to fallow (or regrow) for up to 40 years. Then the cycle starts anew.
2  Terra pretta, which translates to ‘black earth’, is the result of an anthropogenic process of building up rich soils by mixing charcoal, bone, manure, and other nutrients.

Academic discourse on the usage of swidden agriculture provides the ideal background for considering its environmental impacts. Early literature regarding Amazonian prehistory wrote off swidden-fallow agriculture as an environmentally destructive and economically inefficient land usage scheme (Brookfield & Padoch, 1994). More recently, it has been recognized how effective swidden agriculture is as an adaptive response to certain environments which cannot support intensive agriculture (Coomes & Burt, 1997). For instance, while the Amazonian inhabitants practiced swidden agriculture, they were in contact with neighbouring populations outside the Amazon who practiced more intensive and productive modes of agriculture (Denevan, 1992). The fact that swidden farmers did not adopt their neighbours’ more productive practices, despite thousands of years of contact, is a testament to the efficiency of swidden agriculture in the Amazonian environment.



In addition to its efficiency, the environmental benefits of swidden agriculture, when adapted to local conditions and practiced within certain limits, are also gaining recognition in academia (Brookfield & Padoch, 1994). For instance, swidden agriculture provides a mosaic of land cover, as patches of land are kept in various stages of growth (Kleinman, Pimentel, & Bryant, 1995). This can enhance the landscape through a heightened level of floral biodiversity—providing more habitats and, thereby, more niches for species to occupy (Fairhead & Leach, 1995). This practice increases the resilience and decreases the vulnerability of the area, because diverse landscapes are less vulnerable to disturbances such as forest fires (Fairhead & Leach, 1995). Moreover, swidden agriculture is a closed system; no external inputs, such as fertilizer, are necessary for its continuation. Finally, rather than causing erosion, the evidence of terra prettas demonstrates how swidden agriculture can actually increase soil depth and quality over time (Denevan, 1992). This point is particularly salient because erosion is a pressing problem in the Amazon, given the area’s extremely shallow soil depth. The archeological evidence in the Peruvian Amazon confirms the recent consensus in academic literature: swidden fallow is an appropriate response to certain environments. And, if well-practiced, it can be more than just sustainable; it can be a land enhancement tool.

‘Pristine’ Landscape: European Contact, 1492 – 1860

Before European contact, land use in the Peruvian Amazon was less intensive, but much more widespread than today (Denevan, 1992). There was an enormous loss of lives when newcomers arrived because the indigenous populations lacked immunity to European diseases (Weinstein, 1983). In 1520, Peru had an estimated population of nine million. But, a hundred years later, the population had declined ninety-two percent, to approximately 670 thousand (Weinstein, 1983). Furthermore, the remaining population was concentrated along the more densely populated coast and Andean highlands, not in tropical areas. With so few people left working the land, the intensity and scale of interaction either decreased or ceased altogether. This also contributed to the misconception of pristine landscapes, since areas once populated were now empty and lands once actively managed seemed untouched. When Europeans explored these lands, they saw a snapshot of an uninhabited landscape, but they took it to represent the characteristic state of the land. Following the population decline, the Amazon remained relatively untouched by Europeans for three and a half centuries. The area lacked appeal because it had few easily extractable natural resources, such as gold, and few crops suitable for plantations, such as sugar cane. Elsewhere in the Americas, meanwhile, development focused on areas which offered these commodities or, at least, eligible labour pools. This reveals how, from an early period, European presence in the Americas was based on, and motivated by, the exploitation of the environment. Throughout Latin America land use shifted from traditional practices to Western forms set on extracting products for foreign markets.


Commodity Extraction: Rubber Boom, 1860-1910

In the mid-nineteenth century, the commoditization of rubber ignited foreign interest in the Peruvian Amazon (Barham & Coomes, 1996). European explorers were aware of the natural rubber in the Amazon as early as 1743, and indigenous tappers3 were familiar with its extraction (Weinstein, 1983). Natural rubber was used in products such as raincoats and erasers, but it had several undesirable qualities—it was extremely sticky and broke down easily—that limited its value (Weinstein, 1983). This changed drastically with the discovery of the vulcanization process in 1839 (Barham & Coomes, 1996). Vulcanization removed the undesirable qualities of rubber, providing the Western world, then in the early years of the industrial revolution, with an ideal material for manufacturing items such as tires and gaskets (Weinstein, 1983). This discovery was to have direct effects on land use in the western Amazon. With the commoditization of rubber, the Peruvian Amazon became a prime target for Western extraction. Rubber barons claimed land, investment poured into the area, railroads were constructed, and native tappers were conscripted as the principal extractors of rubber (Weinstein, 1983). The local populations in the Amazon had never recovered from disease, so the influx of workers migrating to the area contributed to a growing population. However, foreign interest was restricted to rubber; it was not concerned with the land or the labourers. Indigenous workers were used, not to develop the area, but because “Amazonian settlers simply did not have access to large numbers of African slaves” (Weinstein, 1983). European and US prospectors’ interest in the area was based solely on profit. With no way to hold them accountable for their actions, prospectors often exploited their workers to the fullest extent possible.

3  Rubber tapping involves a process similar to maple syrup extraction. Rubber trees are located and incisions are made into the tree such that rubber latex flows out into a receptacle attached to the tree.

Since the Americas lacked both regulation and understanding of the environment, areas with natural commodities were unsustainably exploited. Mahogany was completely exhausted in most areas, as was seabird guano in the Chincha Islands. Rubber, however, was an exception because of the Amazon’s inhospitable environment. Rubber trees are naturally spread sporadically throughout the forest (Barham & Coomes, 1996). Attempts to create plantations failed because rubber trees are extremely susceptible to fungal leaf diseases when clustered at significant densities (Sharples, 1936). Rubber tappers had to travel through the forest finding and tapping individual trees for the duration of the boom (Barham & Coomes, 1996). In effect, the inhospitable nature of the Amazon led to a reduced ecological impact compared with other areas of the Americas containing valuable resources (Barham & Coomes, 1996).

Recession: Post Boom, 1910 – World War II

The hostile physical environment of the Amazon was both the environment’s saving grace and the initiator of the boom’s collapse (Weinstein, 1983). Henry Wickham smuggled some seventy thousand rubber seeds out of the Amazon in the late nineteenth century; by the early twentieth century, numerous plantations had been set up in and around Asia (Trager, 1876/2006). In these Asian climates, rubber trees fared extremely well. They were not subject to the diseases which prevented rubber plantations from being established in the Amazon (Barham & Coomes, 1996). Compared with the Amazon, Asian plantations offered a much more efficient means of extraction. Supply increased exponentially and the world market was flooded with rubber (Barham & Coomes, 1996). By 1910, the price of rubber had fallen ninety-five percent from the peak reached just years before (Barham & Coomes, 1996). This abundance meant that Amazonian rubber was no longer competitive—the boom was over (Barham & Coomes, 1996). Investment in the Peruvian Amazon had been restricted to the one commodity. Little had been done to improve the area, excepting the development of



infrastructure necessary for extraction. Rubber was the only product that generated substantial revenue in the area. After world rubber prices plummeted, the rubber barons quietly left, taking the labourers’ wages with them (Weinstein, 1983). Many workers left in search of employment, but a significant number stayed and engaged in small-scale subsistence agriculture (Coomes, 2000). These people were largely Mestizos. They would become known as ribereños, or peasants, who would work the land for the next century (Coomes, 2000). During World War II, Japan withheld Asian rubber from the Allies (Rohter, 1996). The demand for Amazonian rubber skyrocketed and, once again, the locals were exploited as labourers. The Rubber Development Corporation, financed by the US under the Board of Economic Warfare, even paid South American governments one hundred dollars per worker delivered to rubber extraction areas. This was necessary because the areas had mostly been abandoned after the first collapse (Board of Economic Warfare, Records, 1942-1944). The Brazilian government took up this initiative, and some thirty thousand workers were delivered. But half of the workers died from disease and exhaustion (Rohter, 1996). After the war, Asian markets were reopened to the West. This, coupled with the introduction of synthetic rubber, caused demand for Amazonian rubber to crash once more. Again, the lack of wages forced the labourers elsewhere. The government was supposed to cover the expense of workers’ return transportation, but it was never provided (Rohter, 1996). Only six thousand of the original workers made it home, and at their own expense (Rohter, 1996). Four decades after the first boom, no lesson regarding the treatment of labourers or sustainable development had been learned or applied.

Livelihood vs. Conservation: Development Initiatives, Post–WWII

Before World War II, the exploitation of the people and the environment had defined land usage in the Peruvian Amazon. However, after WWII, there was a shift in discourse regarding exploitation. Independence was granted to colonies all over the

world. Former colonial powers were denied direct control over these areas, and public policy could no longer be openly exploitative. As such, any interaction in the newly named ‘third world’ had to be dressed in terms such as ‘development’ and ‘poverty reduction’. Involvement, or interference, in developing areas had to be for the locals’ benefit (Escobar, 1995). But who has development really benefited? To address this question, the history of the ribereños after WWII will be traced. Examining the human and environmental impacts of government and international policy calls into question the motivation behind ‘development’. The end of the second rubber boom paralleled that of the first. Once again, cheaper supply was available in Asia, so investment in the Peruvian Amazon ceased. The newly unemployed either migrated to the city, moved in with family elsewhere, or remained to practice subsistence agriculture (Coomes, 2000). From then until the late 1980s, the ribereños continued to practice swidden agriculture (Coomes, 1996). To produce surplus cash crops for local and foreign markets, they also diversified their productive practices among fishing, forest extraction, swidden agriculture, and floodplain cropping (Coomes, 1996). One advantage they had over the urban poor was that they controlled their own food supply. Materially, however, the ribereños remained amongst the poorest and most neglected groups in Peru (Coomes, 1996). Forty years would pass before they were targeted by government development initiatives (Coomes, 1996). In the 1980s, the government passed PRESA4, the first development program in the area (Coomes, 1996). The government stated its primary objectives as increasing production, decreasing poverty, and protecting the environment. The ribereños are, and have been, among the poorest and most neglected people in Peru, with incomes around three hundred and fifty dollars a year (Coomes, 1996). Why, then, were they suddenly targeted for development initiatives? At the time, cities were becoming densely populated (Coomes, 1996). For example, the nearby city

4  Programa de Reactivacion Agropecuaria y Seguridad Alimentaria


of Iquitos had grown more than sixty-five percent in twenty years as people migrated there in search of economic opportunities (Coomes, 1996). The ribereños were the principal food suppliers in the area, and the government wanted to increase this supply (Coomes, 1996). Protecting the environment is hard to accomplish while increasing production, unless advances in technology or technique improve efficiency and reduce impact. It seems that, contrary to the officially stated goals, PRESA simply aimed to increase production and, thereby, increase the food supply in urban areas.

PRESA was designed to increase production by increasing the amount of credit available to the ribereños (Coomes, 1996). Loans tripled and harvested land doubled during this time (Coomes, 1996). PRESA sparked a land rush, consisting mostly of urban poor returning to the land and newcomers leaving their urban jobs (Coomes, 1996). However, hyper-inflation—nearly a thousand percent—ensued, and the peasants could not generate the cash required to make interest payments on PRESA loans (Coomes, 1996). To hedge against inflation, agrarian associations purchased cattle, but the area was poorly suited for cattle and the land quickly degraded (Coomes, 1996). Yet each head of cattle was the equivalent of approximately thirty-five years of income. Cattle, therefore, were maintained even as they destroyed surrounding fields. Then, by 1990, only four years after the loan program began, a new government completely reversed policy towards peasants. Loans were terminated and social programs ended. In effect, money ceased to flow into the area (Coomes, 1996).

Examining PRESA provides several insights regarding the interaction between local and foreign factors. The first concerns how policy affects land use. With more capital for labour and equipment, the ribereños initially cleared more land and acquired more fields. Later, to fight inflation, they switched to cattle grazing, changing their land use again. When money became scarce, some people migrated to the cities, while those who remained returned to swidden agriculture. From a historical perspective, the issue of scale is worth noting. For thousands of years, the Peruvian Amazon was used relatively constantly through subsistence swidden agriculture. Then, within four years, land use intensified, shifted altogether, and returned to its original use. Policy that changes so frequently in so short a time cannot be effective, especially given the scale of activities like swidden fallow, where cycles can last up to forty years. An incongruity is seen here between the time scale of the environment (thousands of years), the locals’ land use (decades), and government policy (years). The fact that external actors—largely misinformed historically, for instance in their negative views of swidden agriculture—can drastically affect agricultural practices and local populations must be noted by governments and NGOs as they continue to impact local areas.

Conclusion

Tracing the history of land use in the Peruvian Amazon provides three insights. Firstly, swidden agriculture has been practiced in the area for centuries; it is appropriate and sustainable if practiced at suitable levels and customized to the landscape. While it is obvious that increasing population places increasing pressure on the land, as more fields are acquired and fallow periods decrease,



there are still opportunities for increased production and efficiency (Coomes, 2000). This includes experimenting with plant mixtures to increase land productivity and maintain soil quality. Here conservation groups could play a crucial role. By combining science with local knowledge, rather than using science as a means to invalidate local practices, the interests and goals of both groups can be supported. Secondly, history has shown how self-interest has governed the involvement of outsiders. Before the rubber boom there was virtually no interaction because the area contained no valuable resources. Meanwhile, during the rubber booms, involvement was based solely on extracting rubber and maximizing profits. And finally, in between and after the booms, external investment completely dried up. In the case of PRESA, the government prioritized increasing food surpluses in urban areas over increasing the ribereños’ well-being. Even the agendas of NGOs and development organizations often focus on environmental concerns rather than the welfare of the local peoples. Coomes highlights how traditional groups have historically been considered the antithesis of forest preservation (Coomes & Barham, 1997). Conservation groups have questioned local practices, urging locals to change or abandon them altogether. Only recently have they attempted to work within local production systems. Increasingly, conservationists are incorporating locals into projects by instituting considerations and incentives for local cooperation (Coomes & Barham, 1997). For sustainable development to be implemented in areas of environmental concern, the trend towards cooperation rather than coercion must be encouraged.

Thirdly, we see that government policy aimed at development has actually impeded the ribereños. A lack of land rights, a vulnerability to volatile government policy, and a position ‘outside the system’ have caused the ribereños great difficulty. However, though PRESA failed to alleviate poverty, it has provided some potentially lasting benefits as well as lessons for future initiatives (Coomes, 1996). The loan programs helped secure some land rights where the ribereños had little more than squatters’ standing before. Also, newcomers from the city who took up agriculture during the PRESA campaign exhibited superior management skills and familiarity with bureaucracy, making them more competitive farmers (Coomes, 1996). This reveals how education could potentially do a great deal to enhance the ribereños’ interaction with the landscape, as well as with the market and the government bureaucracy with which they are increasingly in contact. Finally, the formation of agrarian collectives during this time has helped the peasants organize a larger political voice. While the ribereños have had a hard time, hopefully the seeds of improvement have been sown.


This historical review demonstrates a complex interplay between external and local actors. This interaction has been governed by the environment and what each party sought to obtain from it, be it rubber or, simply, food. Changes in technology, policy, and market demand have altered the interaction’s appearance but not its composition. Considering how pronounced the interaction has been in the past, its influence on the future cannot be doubted. Previous development initiatives which neglected this interaction, by abandoning poverty reduction and cooperative measures, have not adequately addressed environmental degradation. Hopefully, future initiatives will give due consideration to the knowledge acquired from the past.


References

Barham, B. L. & Coomes, O. T. (1996). Prosperity’s Promise: The Amazon rubber boom and distorted economic development. Boulder, CO: Westview Press.
Brookfield, H. & Padoch, C. (1994). Appreciating agrodiversity: A look at the dynamism and diversity of indigenous farming practices. Environment, 36.
Coomes, O. T. & Barham, B. L. (1997). Rain forest extraction and conservation in Amazonia. The Geographical Journal, 163.
Coomes, O. T. (1996). State credit programs and the peasantry under populist regimes: Lessons from the APRA experience in the Peruvian Amazon. World Development, 24.
Coomes, O. T. (1996). Income formation among Amazonian peasant households in northeastern Peru: Empirical observations & implications for market-oriented conservation. Yearbook, Conference of Latin Americanist Geographers, 22.
Coomes, O. T. & Burt, G. J. (1997). Indigenous market-oriented agroforestry: Dissecting local diversity in western Amazonia. Agroforestry Systems, 37.
Coomes, O. T., Grimard, F., & Burt, G. J. (2000). Tropical forests and shifting cultivation: Secondary forest fallow dynamics among traditional farmers of the Peruvian Amazon. Ecological Economics, 32.
Denevan, W. (1992). The pristine myth: The landscape of the Americas in 1492. Annals of the Association of American Geographers, 82.
Escobar, A. (1995). The problematization of poverty: The tale of three worlds and development. In Encountering Development: The Making and Unmaking of the Third World. Princeton, NJ: Princeton University Press.
Fairhead, J. & Leach, M. (1995). Local agro-ecological management and forest-savanna transitions: The case of Kissidougou, Guinea. In T. Binns (Ed.), People and Environment in Africa. West Sussex: John Wiley & Sons Ltd.
Kleinman, P. J. A., Pimentel, D., & Bryant, R. B. (1995). The ecological sustainability of slash-and-burn agriculture. Agriculture, Ecosystems, & Environment, 52.
Perrault-Archambault, M., & Coomes, O. T. (2008). Distribution of agrodiversity in home gardens along the Corrientes River, Peruvian Amazon. Economic Botany, 62.
Rohter, L. (1996, November 13). Brazil ‘rubber soldiers’ fight for recognition: Harvesters pressed into WWII service are still in Amazon. The New York Times.
Sen, A. (1994). Population: Delusion and reality. New York Review of Books, 1-2.
Sharples, A. (1936). Diseases and Pests of the Rubber Tree. London: MacMillan.
Trager, J. (2006). The People’s Chronology: A Year-By-Year Record of Human Events from Prehistory to the Present. New York: Holt, Rinehart, & Winston.
United States Board of Economic Warfare. Records, 1942-1944, Office of Imports Section.
Weinstein, B. (1983). The Amazon Rubber Boom 1850-1920. Stanford: Stanford University Press.



The Black Death is remembered as one of the most devastating pandemics in human history. As it cut through Europe, it greatly altered the landscape of the population and the quality of life of the people. Rachel Li investigates the short- and long-term effects of this tragic disease.

The few millennia that make up the history of mankind are really nothing more than a hiccup in the lifespan of the world. However, no other creature has changed the face of the Earth as drastically as humans have. That is not to say that nature cannot occasionally present a serious challenge. The plague remains one of the most devastating calamities to challenge mankind. The disease cut swaths through the population of Europe. As Guillaume de Machaut summarized in his poem, as quoted by Herlihy (1997):

For many have certainly
Heard it commonly said
How in one thousand three hundred and forty-nine
Out of one hundred there remained but nine.

Despite the enormous numbers that the contagion killed, the populace persevered. The plague forced people to change the way they thought about life, about death, and about disease. In turn, it was the impetus for them to institute measures for healthier and hardier societies. Furthermore, the drastic decrease in population in Europe actually created a better standard of living for those who survived. It can be argued that, in the long run, the plague improved health in Europe. The resilience of the disease is perhaps one of its most remarkable aspects. The main culprit behind this major bout of human suffering is generally thought to be a bacterium, discovered by Alexandre Yersin and named Yersinia pestis in his honour (Perlin & Cohen, 2003). This bacterium absorbed genetic material from many different viruses and other


Pathogens & History   45 bacteria throughout the years, finally mutating into an incredibly infectious and deadly strain. The genome of this bacterium was mapped in 2001, which allowed scientists to further understand the nature of Yersinia pestis (Lavelle, 2004). This unique strain cannot survive outside of a host body, and furthermore does not have the ability to invade a host’s cells. However, it compensates extremely well for these weaknesses by quickly and efficiently producing toxins that knock out the host’s defense cells, allowing it to breed incredibly rapidly. The highly destructive and infectious nature of the bacterium is most probably the reason the Black Death managed to swiftly kill such a huge portion of the population of Europe.

The changes that the plague enacted over Europe are substantial. The post-plague attitudes of the general population towards death and disease, for example, were completely different from the ideas expressed in pre-plague Europe. Death became a terrifying entity that needed to be avoided rather than simply another step towards eternal rest in Heaven (Herlihy, 1997). People tried to keep death, illness, and anything related to these as distant as possible. The way people viewed disease also changed in the centuries following the plague, mainly because the behaviour of the plague was incongruent with their medical beliefs (Byrne, 2004). In short, the contradictory nature of the plague planted a seed of confusion from which new ideas developed in the minds of its victims and witnesses. As such, the people of Europe started to consciously and subconsciously take measures that would keep them healthier.

Changing Conceptions of Death and Disease

Before the plague struck Europe, the Catholic Church had succeeded in mollifying the fear of death for its followers. Death, they claimed, was not the abrupt and terrifying end that it seemed to be. Through rituals and rites of passage, the Church urged people to accept the inevitable and move on with their lives. With the onset of the plague, however, this changed. Many times, citizens died far too quickly and far too often for the priests to perform necessary ritualistic ceremonies, such as the confession and absolution of sins, which had previously helped them accept their deaths. As Samuel Pepys, an aristocrat of seventeenth century England, noted in his journal: "The Chaplin, with whom but a week or two ago we were here mighty high disputing, is since fallen into a fever and dead" (Pepys in Streissguth, 2004). Funeral arrangements became less concerned with ceremonious burials and more concerned with removing the infected body from the home. Comforting traditions such as elaborate burials to honour the dead that used to set people's minds at ease were no longer available (Herlihy, 1997). It is not surprising, therefore, that people changed rapidly from accepting and welcoming death to fearing it.

A poem written during the plague speaks of death as if the author were disputing with the worms that feast upon the body:

The most unkind neighbours ever made are you,
Food for dinner and supper all too little,
Now arguing, now eating, you have searched me through,
With a completely insatiable and greedy appetite,
No rest-since always you suck and bite,
No hour or time of day do you abstain,
Always ready to do violence to me again (Byrne, 2004).

The works of art and literature of the time reflect the population’s changing attitudes. The general conception of death and dying transformed during the plague, as it became ugly and violent—a thing to be spurned.

Attitudes towards disease were also affected by the onset of the plague. In the early fourteenth century, Galenic ideas described disease as an imbalance of humours within the body; this conception



had survived and prospered for well over a thousand years (Campbell, 1931). Each disease was seen as a separate event, a distinct incident that occurred due to the changes of humours within a body. Of course, the plague did not fit neatly into such a simplistic explanation; it struck people of all kinds, without distinguishing between young or old, rich or poor, man or woman (Moote, 2004). It became obvious that the contagion had little to do with humoural imbalances. Indeed, people started noticing that the disease seemed to be infectious, spreading from person to person. As Byrne (2004) writes, "The Irish friar John Clynn, who succumbed to the disease, noted that it was so contagious that anyone who merely touched the dead died himself". Thus, there came to be a divide between the traditional conceptions of illness, as established by Classics such as Galen and Avicenna, and the actual experiences people had during the plague. It is important to note, however, that although the idea of contagion was introduced, the unshakable faith that people had in the absolute knowledge of the ancients did not suffer greatly—there was no immediate medical or scientific revolution. In fact, researchers did not start to develop medically sound ideas, at least by modern standards, about the cause of the plague until about a century ago (Byrne, 2004). Nevertheless, the plague definitely planted in people's minds the idea of contagion, and of communicable disease spreading between people.

Because of these uneasy ideas about illnesses, death, and the dying, there arose "a new tension between the living and the dead, even between the living and the sick" (Herlihy, 1997). People were inclined to segregate infected members of society, partially because they wanted to distance themselves from the death and destruction, and partially because they were starting to believe that disease could be transmitted from person to person. As such, the dead were often buried outside the boundaries of the cities to prevent the contagion from spreading to the living (Byrne, 2006). The sick were, quite often, sequestered as well, either within their own homes or outside city walls (Byrne, 2006). Consequently, these new attitudes were factors in improving people's health. The need to separate themselves from death and illness protected them, at least slightly, from any contagious diseases that the sick were vectors of. The altered regard for death, no longer something to be welcomed as the Church had once encouraged, may have contributed to improving the general health of the people of Europe. After all, people were more likely to take care of themselves if they were less prepared to go to their graves.

Developing Strategies for Public Health

The ideas and beliefs of each individual helped protect him from disease by influencing his quotidian activities. However, society adopted these new ideas and instituted measures to maintain its population through the times of the plague. For example, Health Boards were created; they were given the


freedom to do whatever they saw fit to keep contagion at bay (Herlihy, 1997). Cities also implemented quarantines for foreigners and for indigenous citizens who were suspected carriers of the disease (Naphy, 2004). The aim was to regulate society in an attempt to maintain order during the inevitably chaotic years. The focus of the time was to keep society functioning through plague years, and as such, the need to stay organized was pressing. Birth and death rates were tracked scrupulously. London in the 1660s, for example, kept records of the total number of burials as well as the number of plague burials that occurred on a weekly basis (Moote, 2004). In addition, intercity relations became more organized. Cities were forced to open up lines of communication. If every locale announced whether or not it had fallen prey to the pestilence, then it would be easier to issue embargoes that kept infected people and goods out of healthy areas. Furthermore, once neighbouring cities established lines of communication, they had the means to implement health certificates. Issued by the last city or town that a person had visited, these certificates were a guarantee that neither that person nor any of his goods had come in contact with the plague (Naphy, 2004). Of course, all of these measures required large amounts of organization and cooperation; it "was a tremendous burden on any state and was dependent upon a well-developed magistracy and civil service" (Naphy, 2004). Nevertheless, once these safety regulations were developed, they clearly contributed towards a society that was much more organized and prepared to defend itself against disease. Thus, because of the plague, societies were pushed to create more sophisticated and effective protections against contagions. The Boards of Health were yet another measure instituted in response to the plague. These managerial bodies were given the sole responsibility of doing whatever they thought necessary to either keep the plague out of the city or to regulate the plague within the city, and they were given almost infinite power in this regard (Herlihy, 1997). In Pistoia, Italy in 1348, a Health Board of wise men chosen from the community issued four ordinances that

sought to keep the plague from raging out of control within the city. The ordinances included regulations of how deeply the dead must be buried, what funeral processions were permitted for plague victims, and who was allowed to comfort the widow (Campbell, 1931). Other regulations made by the Boards of Health included: how sewage and garbage were disposed of, that the sick were segregated, and that the city was protected against foreigners (Streissguth, 2004). As Byrne (2006) states: They hired doctors and gravediggers, guards and corpse-inspectors, nurses and pest-house administrators. They sanitized, quarantined, shut in, regulated, prosecuted, isolated, fumigated, and incinerated. They created mortality records, built hospitals, instigated religious rituals, and encouraged the repopulation of their cities as the pestilence subsided. And when the spectre appeared again after a few years' respite, so did the cycle of civic responses.

Obviously, these Boards of Health were an integral part of the cities’ defences, time and time again. Even as the plague slowly released its grip over Europe, the Health Boards remained cautious. In England, a special column designated specifically for plague deaths was kept on the Bill of Mortalities until 1703, fully twenty-four years after the last recorded incidence of the plague in Europe: a single death recorded in 1679 (Moote, 2004). Therefore, even after the plague had left Europe, the measures enacted by the Health Boards remained. They were considered valid innovations that made societies healthier units. Quarantine is an example of a method used to this day to prevent the spread of contagious disease. The first documented quarantine occurred in Ragusa, Italy—though it was actually considered a “trentina”, or a thirty day isolation period. The idea was to isolate those suspected of carrying pestilence in Old Ragusa for a month before they were allowed to enter the city and mix with the citizens (Campbell, 1931). Later, cities such as Venice “moved, at the first sign of pestilence to impound all incoming vessels for a full forty days” (Naphy, 2004). Because



of the prolongation of the isolation period, the term was changed to “quarantine”. Occasionally, a quarantine period could get shortened to about twenty days if a person could present a health certificate to city officials (Naphy, 2004). Within the city, another form of isolation was instituted, though it was not called quarantine at the time. Cities would often confine infected people either to their homes or to pest houses removed a ways from the bulk of the community’s population (Byrne, 2006). Cloistering the sick was an idea that was widely practiced during the plague. And, as is true of the other methods, it helped maintain the health of the general population, bolstering the overall levels of health in Europe.

Reducing Population, Improving Health

The physical health of the people was also influenced by population size. The drastic reduction in numbers was, perhaps, the most striking effect of the plague on the people of Europe. Pre-plague Europe was grossly overpopulated—the land was already sustaining the maximum number of people (Herlihy, 1997). The plague relieved Europe's oversaturated economy. This allowed resources to be distributed among fewer people, and it allowed more money per capita to circulate (Herlihy, 1997). In general, people in post-plague Europe enjoyed a better standard of living, and therefore, better health. "The population [of Europe] had reached the maximum number capable of surviving on the land available. Undoubtedly, this left many, if not most, people struggling on a subsistence existence" (Naphy, 2004). Poverty was rampant in Europe before the plague, so people did not enjoy decent living conditions. Furthermore, much of the population of the time lived in rural farmsteads (Dyer, 1978); when the plague reduced the population of cities, more people could migrate to urban areas (Dyer, 1978), where life was much easier. City life provided for "higher wages, low rents, poor relief, and generally better living conditions available within the walls" (Byrne, 2006). As the pressure of the num-

bers in Europe was relieved, and as a larger portion of the population moved into the cities, the overall quality of life of the people increased. One of the contributing factors to the general health of the population was how, due to the drastic decrease in the number of people, the land was no longer over-saturated. There were more resources to go around, so fewer people were starving. Before the plague, nearly all of the land in Europe was used to produce wheat, as it was most efficient in feeding a great many people (Naphy, 2004). However, after the plague, the landscape of Europe was transformed. Because there were fewer people, the demand for grain dropped; therefore, what was previously farmland could be used for other things, such as cattle rearing (Naphy, 2004). In effect, people were not only getting more to eat, they were eating better. The readiness and availability of a variety of resources contributed to an improved standard of living in the post-plague days. Because people enjoyed more well-rounded diets, they were stronger and healthier. After the plague, poverty became much less of an issue. In particular, wages in the post-plague era were incredibly inflated. With the sudden drop in population, human labour became a valuable com-


modity. Even women or unskilled workers could demand wages that were higher than levels of pay during the thirteenth century (Herlihy, 1997). In his article, Pamuk (2007) compares wages across Europe in the years directly following the first plague: "The large decreases in population and the labour force also resulted in dramatic changes in relative factor prices and in the sectoral terms of trade. Real wages doubled in most countries and cities during the century following the first occurrence of the plague." Money was no longer an issue for most people in the years following the plague. In fact, there were times in the late Middle Ages when silk was purchased and worn more frequently than wool (Herlihy, 1997). This indicates that people were on average well off, as they could afford such luxuries. Furthermore, records also show that a specific type of law was enacted often in post-plague Europe. These regulations, called sumptuary laws, attempted to stop the poorer class from acting as if they were rich. It was said that: Members of the elite were shocked to find that 'mere shopkeepers' were able to give lavish banquets, afford extravagant wedding feasts, be buried with immense pomp, and, worse, dress their wives and daughters in great finery (Naphy, 2004). The inflated wages allowed people to eat better food, to dress in higher quality and warmer clothing, and to live in cleaner, sturdier houses—much better standards of living than before the plague (Herlihy, 1997). All of these factors contributed to healthier individuals. For, if a person's life is comfortable and easy, she will experience less stress, be exposed to less filth, sustain a stronger immune system, and be an overall healthier individual. Furthermore, people with money had the means to afford doctors, medicines, and treatments for whatever illnesses they may succumb to. Overall, the reduction in population increased wages and, thereby, improved health.

Conclusion

The plague tore through Europe and laid claim to thirty to fifty percent of its population (Cambridge, 2003). Mortality rates were steep and terrifying. Nevertheless, the survivors enjoyed better overall health than they had before the plague struck. The quality of life during the seventeenth century was considerably better than that of the thirteenth century because resources were more plentiful and money more abundant. Furthermore, the plague also changed the mindset of the people of Europe. It gave them more cautious attitudes towards death, and incited conceptions of disease as contagion rather than an imbalance of humours. This new perspective allowed people to take measures that were more effective in ensuring their health. The pestilence also forced the populace to scramble frantically to organize itself. To fight off the disease, societies established public health innovations that succeeded in making communities healthier as a whole. Although "plague" is not generally associated with the concept of good health, it can be said that as the Black Death ravaged Europe, it actually improved general health. This no longer seems contradictory given the factors traced in this paper. It is now evident that mankind managed to benefit from the devastation of the plague. Therefore, even though the destruction caused by the plague is saddening, it is important to remember the lessons that have been learned and the advances that have been made. The plague stands as a testament to the remarkable tenacity of mankind.

References

Byrne, J. P. (2006). Daily life during the Black Death. London, England: Greenwood Press.
Byrne, J. P. (2004). The Black Death. Greenwood guides to the historic events of the medieval world. London, England: Greenwood Press.
Campbell, A. (1931). The Black Death and men of learning. New York, NY: Columbia University Press.
Dyer, A. D. (1978). The influence of bubonic plague in England, 1500-1667. Medical History, 22, 308-326.
Herlihy, D. (1997). The Black Death and the transformation of the West. Cambridge: Harvard University Press.
Lavelle, P. (2004, January 22). On the trail of the Black Death. Retrieved from http://www.abc.net.au/science/features/blackdeath/default.htm
Moote, A. L., & Moote, D. C. (2004). The Great Plague. Baltimore: The Johns Hopkins University Press.



Naphy, W., & Spicer, A. (2004). Plague: Black Death and pestilence in Europe. Stroud, England: Tempus Publishing Limited.
Pamuk, S. (2007). The Black Death and the origins of the 'Great Divergence' across Europe, 1300-1600. European Review of Economic History, 11(3), 289-317.
Perlin, D., & Cohen, A. (2003). The Complete idiot's guide to dangerous diseases and epidemics. United States of America: Alpha Books.
Plague: The Black Death. (n.d.). Retrieved from http://science.nationalgeographic.com/science/health-and-human-body/humandiseases/plague-article.html
Streissguth, T. (2004). The Black Death: History firsthand. United States of America: Greenhaven Press.
The Cambridge historical dictionary of disease. (2003). "Black Death." Retrieved from http://www.credoreference.com/entry/5169715/
Zietz, B. (2000). The history of the plague and the research on the causative agent Yersinia pestis. International Journal of Hygiene and Environmental Health, 207(2), 165-178.



The Safety and Efficacy of Prozac: 1987-2008
Yael Smiley

“Prozac” by Beth Page. (2006). http://www.antistockart.com



Consider Prozac, the Wonder Drug, and the Prozac era: from its enthusiastic embrace by the public and the scientific community to its decline in reputation and demise. This article takes a detailed look at major academic journals as well as newspaper and magazine articles to trace the history of this psychopharmaceutical. The importance of developments in clinical psychology, biological psychiatry, the Amine Theory of Depression, and 'tranquilizers' such as Valium is examined in relation to the history of Prozac.

In his book Prozac on the Couch, Jonathan Metzl (2003) presents Mickey Smith's Law of the Wonder Drug. Smith originally developed this theory to explain the rise in popularity and fall from grace of minor tranquilizers (e.g. Valium), but Metzl adopts the theory to discuss the history of Prozac use in terms of individual experiences (although the Law could also be applied to cultural patterns of use). The Law of the Wonder Drug defines three sequential and distinct phases. Phase one follows the launch of the drug. The public embraces the drug with "wild enthusiasm" and older forms of treatment fall out of fashion. Professionals and those in the scientific community endorse the new drug, leading to a period of popular euphoria where it is "overvalued, over-requested and overprescribed." In phase two, flaws of the Wonder Drug are exposed. This period of disenchantment leads to undervaluing and overcondemnation of the drug. Finally, phase three is a period of resolution, which is characterized by "ap-

propriate evaluation of the comparative worth of the drug." This paper will question whether or not these phases are chronologically distinct in the history of Prozac by analyzing both academic and public opinion. I will evaluate these phases in academic psychiatry by an analysis of psychiatric literature on Prozac in major academic journals. To discern the public perception of these issues, I will evaluate newspaper articles that mention Prozac between 1987 and 2008. I will suggest that despite significant safety issues, Prozac experienced immense public and professional popularity. Only when questions were raised about Prozac's efficacy did its reputation decline. The analysis of Prozac is relevant for a number of reasons. First, Prozac experienced unique fame in the United States of America (USA) in both academic and popular literature. Second, Prozac was the first selective serotonin reuptake inhibitor (SSRI) launched in the USA, and it subsequently monopolized the market for four years (Healy, 2004). Thus, Prozac came to represent all of the SSRIs in the public mind, leading to titles of books like Prozac Nation and Listening to Prozac, and discourse citing the "Prozac era" (Herzberg, 2009).1 Third, the debates about the safety and efficacy of Prozac were widely publicized. Finally, questions regarding the safety and efficacy of Prozac have had lasting impacts in the discipline of psychiatry.

History and Biology of Prozac

Prozac was developed and marketed as an antidepressant by Eli Lilly and Co. just as depression was gaining recognition as a major illness (Metzl, 2003). In the late 1980s the US National Institute of Mental Health headed a campaign to supply

1 I will not discuss the marketing of Prozac or the personal experience of Prozac. For further reading on these topics, see Let Them Eat Prozac by David Healy and Listening to Prozac by Peter Kramer.


physicians and patients with facts about depression (Healy, 2004). Pharmaceutical discovery and nosology helped spark the recognition of depression. As historian and psychiatrist David Healy (2004) explains, "depression was all but unrecognized before the antidepressants; only about fifty to one hundred people per million were thought to suffer from what was then melancholia. Current estimates put

that figure at one hundred thousand per million." Here, Healy makes reference to the first class of antidepressants: the monoamine oxidase inhibitors (MAOIs) and the tricyclic antidepressants. MAOIs and tricyclics were both discovered in 1957 and in the following decades committees of psychiatrists and patient groups organized to educate the public about depression (Healy, 2004). The one-thousand-fold increase in prevalence cited by Healy is also attributable to the creation of a new diagnostic category. The third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) published in 1980 by the American Psychiatric Association was the first to include a specific diagnosis for depression (Hirshbein, 2009). Other scholars have shown that the deletion or addition of a diagnosis to the DSM is a major factor in disease prevalence. For example, Conrad (1975) published an article which reported the discovery of hyperkinesis, later known as attention-deficit hyperactivity disorder (ADHD). Conrad's article compels the reader to consider whether or not the condition of hyperkinesis always existed and what it meant for it to become medicalized. Once hyperkinesis was given a medical definition and incorporated into the DSM-II, prevalence skyrocketed (Conrad, 1975).

Prozac was released on the heels of another blockbuster drug, Valium (Healy, 2004). Valium, a drug used to ease anxiety, enjoyed immense popularity in the 1960s. It was regularly prescribed as a minor tranquilizer and widely discussed in public circles (Tone, 2009). Valium's success set the standard of fame that a Wonder Drug could enjoy; the public embrace of a psychopharmaceutical was part of the

essential groundwork that made Prozac a success. In 1991, just as Prozac was taking off, the historian of pharmacology Mickey Smith wrote that “with the exception of the oral contraceptives, no class of drugs has had the social significance and impact of the minor tranquilizers.” The public was primed by the “tranquilizer bonanza” to embrace Prozac as the next wonder drug upon its launch in 1987 (Tone, 2009). Valium also represented a shift in the psychiatric community, in which using drugs to treat mental illness had become standardized practice. The shift towards drugs and biological explanations of psychiatric illness is captured in the DSM-III. Historian Tone (2009) writes that the publication of the DSM-III displayed “rhetoric and findings… ineluctably yoked to biological psychiatry, particularly the views of a new breed of psychopharmacologists who believed that clinicians could and should use drugs to treat patient’s symptoms.” Prozac was discovered precisely at the time when organized psychiatry sought to explain mental illness in neurological terms. For example, the biological origins of schizophrenia were published in 1987: schizophrenia resulted from too much dopamine in the brain (Seeman, 1986). The academic community was ripe to embrace theories that tied a disease to a



neurotransmitter (e.g. dopamine) or other biological process, so that a drug could be prescribed to act on the identified target (Herzberg, 2009). The Amine Theory of Depression was another appealing explanation of mental illness in terms of brain chemistry (Herzberg, 2009). The Amine Theory was supported by the proposed biological mechanisms of MAOIs and tricyclics. Communication in the brain is regulated by neurons. Neurons, at their terminal end, release neurotransmitters into an inter-neuron space. The released neurotransmitters then bind to receptors at the proximal end of an adjacent neuron. The place where this neuronneuron communication occurs is called a synapse. Serotonin and norepinephrine are the two monoamine neurotransmitters that MAOIs and tricyclics affect. These drugs increase the levels of the neurotransmitters in the synapse, letting them signal for a longer period of time (Kramer, 1993). Prozac, the trademarked name for fluoxetine hydrochloride, was developed in the context of this model of brain chemistry. The scientists at Eli Lilly and Company sought a compound with antidepression properties that lacked the side effects of the MAOIs and tricyclics. These side effects were known to be caused by the norepinephrine system (Kramer, 1993). Pro-

zac and all SSRIs were developed to do just this by raising the amounts of serotonin in brain synapses without affecting the norepinephrine system. The advent of Prozac and the serotonin hypothesis of depression was another success for biological psychiatry. This hypothesis was widely accepted by the psychiatric community and indoctrinated into psychiatric vernacular, with countless articles citing a "chemical imbalance" in the brains of depressed patients (Herzberg, 2009).

Methodology

To analyze the phases of euphoria, disenchantment, and resolution that Prozac experienced, I conducted a search of the online archives of the New York Times for all articles containing the word "Prozac". The search yielded over ten thousand results. The first match, "Times Topics: Prozac", took me to a database of 126 articles that the New York Times compiled as "a free collection of articles about Prozac (Drug) published in the New York Times". I charted the results by year and determined based on the title and review of the article if it related to safety, efficacy, or both (Figure 1). This methodology is limited for various reasons. The New York Times selected only 126 articles to include in the database and therefore excluded a great deal of material relating to the history of Prozac. However, by compiling the database the New York Times has provided a view of what it believes to be the essential elements of its coverage of the history of Prozac. This is a unique means to gauge public understanding of the drug. In instances where the database did not cover significant events, I supplemented the database with articles that were not included or that came from other news sources. This analysis is also limited by my subjective determination of which articles qualify as an article about safety or an article about

efficacy. Broadly, if an article mentioned the words safe, suicide, danger, warnings, or harmful, I classified it as an article about safety. If an article mentioned the words effective, working, or cure, I classified it as an article about efficacy. Despite its limitations, this methodology is useful to discern the general trend in popular opinion, which can be compared to the academic literature regarding the safety and efficacy of Prozac.
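
The keyword rule above is mechanical enough to be expressed in a few lines of code. The Python sketch below is illustrative only and is not the procedure actually used for this paper; the keyword sets come from the rule just described, while the function names and the assumption that each article is available as a (year, text) pair are hypothetical.

    import re
    from collections import Counter

    # Keyword sets taken from the classification rule described above.
    SAFETY_TERMS = {"safe", "suicide", "danger", "warnings", "harmful"}
    EFFICACY_TERMS = {"effective", "working", "cure"}

    def classify_article(text):
        """Label one article as 'safety', 'efficacy', 'both', or 'other'."""
        words = set(re.findall(r"[a-z]+", text.lower()))
        about_safety = bool(words & SAFETY_TERMS)
        about_efficacy = bool(words & EFFICACY_TERMS)
        if about_safety and about_efficacy:
            return "both"
        if about_safety:
            return "safety"
        if about_efficacy:
            return "efficacy"
        return "other"

    def tally_by_year(articles):
        """Count labels per year for an iterable of (year, text) pairs."""
        counts = {}
        for year, text in articles:
            counts.setdefault(year, Counter())[classify_article(text)] += 1
        return counts

    # Example (hypothetical inputs):
    # tally_by_year([(1990, "Is the new drug safe? Reports tie it to suicide"),
    #                (1994, "It's good, but is it really working as a cure?")])

Exact keyword matching of this kind is cruder than the title-and-content review described above, which is part of why the original classification remained a judgment call.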


I used the National Center for Biotechnology Information (NCBI) website to access academic articles. I conducted searches of the PubMed database for the words "antidepressants", "Prozac", "fluoxetine", "safety", and "efficacy" between the years 1987 and 2008 and selected articles based on the relevance of the abstracts. I also retrieved articles that were presented in Healy's book Let Them Eat Prozac.

Phase I: Euphoric Reception

Prozac experienced a phase of euphoria in which it was the “most prescribed and most profitable psychopharmaceutical in history” (Herzberg, 2009). In this phase, Prozac was heralded as a safe and effective drug in the treatment of depression in academic journals such as the Journal of Clinical Psychiatry, International Clinical Psychopharmacology, and The American Journal of Psychiatry (Mendels, 1987; Feighner et al., 1989; Rickels & Schweizer, 1991; Beal et al., 1991). In addition to the explicit testaments in the academic literature, the prescribing trends of office-based psychiatrists suggested a medical culture whereby Prozac was believed to be safe and effective. A paper in the American Journal of Psychiatry in 1993, titled “Trends in the Prescription of Antidepressants by Office-Based Psychiatrists” found that the number of antidepressant prescriptions by office-based psychiatrists nearly doubled between 1980 and 1989 (Olfson, 1993). This growth

was attributed to the availability of new drugs, such as Prozac, which accounted for 29.6 percent of the antidepressant prescriptions in 1989 (Olfson, 1993). Prozac enjoyed major success in the professional world in the late 1980s and early 1990s. The prescribing habits of office-based psychiatrists and the lauding of Prozac in the academic literature suggests that Prozac was well received by academics in its early years. During the first phase, Prozac, which was a success in the medical community, was nothing short of a sensation to the American public. Healy writes that “right from the time of its launch in America, patients were lining up asking for Prozac by name, an experience new to American psychiatrists” (Healy, 2004, p.38). Prozac first appeared in the business section of the New York Times on September 12, 1987, in a short article announcing its approval by the Food and Drug Administration (FDA) (Reuters, 1987). In December 1989, Prozac was the subject of a New York Times Magazine piece titled “Bye Bye, Blues: A New Wonder Drug for Depression.” (Schumer, 1989) The seven page spread contains an image of the pill and an enlarged claim that “on Prozac, Rachel felt better within days. Within weeks, she was in a good mood.” This article exemplifies how the public embraced Prozac



in the phase of euphoria. Concerns regarding safety and efficacy are not raised. Rather, the article reinforces the popular belief about Prozac’s efficacy in the treatment of depression: Prozac “offers many advantages. It works in small doses, it’s fast, and it’s relatively easy to use.” In the next few years, Prozac appeared on the cover of Newsweek magazine, in two dozen more New York Times articles, and as the centerpiece of Listening to Prozac by psychiatrist Peter Kramer (1993). Herzberg (2009) credits Listening to Prozac with launching Prozac from moderately famous to a blockbuster drug. In Listening to Prozac, Kramer illustrates the transformative effect that Prozac has on the lives of his patients, describing moderately depressed patients who do “better than well” on the drug. Kramer’s book played a major role in establishing Prozac as a safe and effective drug and spent four months on the New York Times bestseller list. The New York Times ran an article in the midst of the book’s success titled “With Millions Taking Prozac, a Legal Drug Culture Arises” (Rimer, 1993). The article parrots Kramer’s (1993) assertions, and solidifies the common belief about Prozac in the phase of euphoria, that “it is considered to be generally safe and nonaddictive.” New York Times coverage between 1987 and 1994 demonstrates the public response to Prozac in the first phase. Prozac was considered to be a safe, effective drug that could dramatically transform the lives of people suffering from depression.

One fascinating element of the Prozac story is that its public fame, as seen in Newsweek and the New York Times, occurred concurrently with a major debate regarding its safety in the specialist literature. The debate began in 1990 with a publication by Martin Teicher, Carol Glod, and Jonathan Cole (1990) in the American Journal of Psychiatry. The paper describes six patients who become obsessively preoccupied with suicide and in some cases make a suicide attempt after being treated with Prozac (Teicher et al., 1990). The publication led to a flurry of reports and letters in the Journal of the American Academy of Child and Adolescent Psychiatry, the New England Journal of Medicine, the Journal of Clinical Psychiatry, and the Archives of General Psychiatry from authors describ-

ing similar cases of patients who became suicidal while taking Prozac (King et al., 1991; Rothschild & Locke, 1991; Wirshing et al., 1992). In response, the scientists at Eli Lilly and Co. argued that suicide was a possible consequence of depression, not Prozac (Beasley et al., 1991). Despite the contentions of the drug manufacturer, two reports published in 1991 provide evidence that suicidal ideation was caused by the drug, rather than the illness it treated. One report, authored by King et al. (1991), notes the effects that Prozac had on patients with obsessive-compulsive disorder (OCD). Since suicidal ideation is not an expected side effect of OCD, the development of such thoughts can be attributed to the drug used in treatment, rather than the illness. The authors describe six children who developed serious self-destructive behavior after receiving Prozac to treat OCD (King et al., 1991). The evidence is limited: only six patients are reported, and three were diagnosed with depression in addition to OCD. However, the cases suggested the possibility that Prozac had the specific ability to cause or exaggerate suicidal behavior and thoughts. The second study, by Anthony Rothschild and Carol Locke (1991), reports the results of a challenge-dechallenge-rechallenge2 study on three depressed patients. The patients had been previously treated with fluoxetine, and had all made serious suicide attempts. When the patients were re-exposed to Prozac the suicidal symptoms returned and the patients' reactions were severe. One patient claimed, "I tried to kill myself because of these anxiety symptoms. It was not so much the depression…" (Rothschild & Locke). Another patient who felt like "jumping out of his skin" recognizes "this is exactly what happened the last time I was on fluoxetine, and I feel like jumping off a cliff again." This study is also limited by the small group of patients. The authors propose a potential mechanism mediating the negative

2 The patient is put on a drug (challenge), taken off the drug (dechallenge), and then put back on the drug (rechallenge). This type of study is typically done to confirm side effects of a drug. It is understood that if the side effects discontinue during the dechallenge phase but then re-emerge in the rechallenge, they are due to the drug.


effects of Prozac; akathisia, "which literally refers to an inability to sit still," is cited by the authors as the primary cause of the patients' desires to commit suicide (as cited in Healy, 2004). The severe agitation, or akathisia, was a known side effect of Prozac since its early clinical trials (Healy, 2004). Furthermore, akathisia has been implicated in the development of suicidal and homicidal thoughts as well as violent intentions (Rothschild & Locke, 1991). This paper presents more evidence linking suicidal ideation to Prozac, rather than depression, calling the safety of Prozac into question.

In response to the articles proposing a connection between Prozac and suicidal ideation, Eli Lilly and Co. published data in the British Medical Journal titled "Fluoxetine and suicide: a meta-analysis of controlled trials of treatment for depression" (Beasley et al., 1991). The article concludes that data based on 3,065 patients shows no link between fluoxetine and a heightened risk of suicidal acts or thoughts (Beasley et al., 1991). Healy (2004) presents flaws in the Beasley article, namely that patients were co-prescribed anti-anxiety medication, that suicidal ideation is improperly measured, and that the analysis omits patients who dropped out because of anxiety and agitation. Despite the flaws, the Beasley et al. (1991) report served as a major scientific study in contrast to the anecdotes and smaller studies of only a handful of practitioners (Healy, 2004).

These safety problems, which might indicate the beginning of the second phase, were first reported in the New York Times on March 29, 1990. The article, "New Antidepressant is Acclaimed but Not Perfect", reports that experts were starting to see "an increased number of patients on Prozac who suffer from intense agitation…or an apparently medication-induced preoccupation with suicide" (Angier, 1990). In August of the same year, an article was published about lawsuits that were being brought against Eli Lilly and Company due to the claims that Prozac could induce suicidal behavior. The article references the Teicher (1990) publication and brings to light the confounding factor that "depressed patients are at strong suicide risks" (Angier, 1990b). The New York Times published another report on February 7, 1991 titled "Suicidal Behavior Tied Again to Drug" (Angier, 1991). This article demonstrates further publicity of the link between Prozac and suicide. It is surprising that these reports surfaced in the same time period that Prozac was enthusiastically embraced. Between 1987 and 1994 fourteen articles advertised safety concerns about Prozac. However, another fifteen were unrelated to safety. This overlap demonstrates a blurring of the first and second phases of Prozac's wonder drug career. In the same time period, major problems were raised about Prozac in the New York Times, yet its popularity continued. How did Prozac manage to overcome reports about its questionable safety?

One answer lies in the results of a September 21, 1991 vote. The FDA convened a hearing to vote on “whether there was credible evidence that antidepressants increased the risk of suicide” (Healy 2004, 61). Patient groups, Eli Lilly and Co. representatives, and other senior figures in psychopharmacology made presentations. The panel voted not to support the position that antidepressants increase the risk of suicide. Healy (2004) remarks that this vote “became a defining moment, one that somehow let Lilly off the hook.” This is further supported by The New York Times coverage of the FDA hearings. Five articles appeared between April 27 and September 21 of 1991 about the upcoming meeting. The article presenting the outcome of the hearing, “Warning Label on Antidepressant is Opposed”, was the last article published about the safety or efficacy of Prozac until 1993 (Associated Press, 1991). The phase of popular euphoria continued until 1993 after the FDA exonerated Prozac at this hearing. Prozac continued in the first phase of Wonder Drugs, despite known flaws of the drug (usually characteristic of Phase two) due to the approach taken by its makers regarding the safety concerns.



The champions of Prozac, including Eli Lilly and Co. and the American Psychiatric Association, could refute the suicide connection with a few points: Prozac was the most studied drug in history, suicide was a consequence of depression, not Prozac, and depression was still under-diagnosed and undertreated (Healy, 2004). Prozac only shifted into the second phase, the phase of disenchantment, when questions were raised regarding its efficacy.

Phase II: Flaw Exposure

Questions of Prozac's efficacy were first published in a 1994 meta-analysis in the Journal of Nervous and Mental Disease. The meta-analysis examined data from thirteen double-blind, placebo-controlled studies conducted between 1985 and 1992, including the trials presented by Eli Lilly and Co. to demonstrate Prozac's efficacy to the FDA (Greenberg et

al. 1994). The authors concluded that Prozac's therapeutic effects are modest and comparable to other, older antidepressant drugs. Greenberg, the lead author, had previously published another paper with Seymour Fisher (1993) questioning the double-blind design as an effective means of studying psychotropic drugs. They suggest that the double-blind study can be biased based on the expected side effect profile of the drug being tested. When investigators evaluate patients they suspect to be on the drug of interest, they are likely to inflate the desired outcome. This analysis was followed by reviews in the Journal of Clinical Psychiatry and Drugs in 1994 and 1996 respectively (Nemeroff, 1994; Moller, 1996). The authors assert in both papers that Prozac is no more effective than tricyclics in the treatment of depression (Nemeroff, 1994; Moller, 1996). There were other indications in the literature that Prozac only demonstrates moderate efficacy. In the same two journals, in 1996 and 1999 respectively, papers compared efficacy and discontinuation data in trials comparing Prozac to other SSRIs (Zarate, 1996; Edwards & Anderson, 1999). The second set of papers use new indicators: when patients discontinue Prozac or switch to another SSRI, it implies that the drug is not working. The articles questioning Prozac's efficacy were essential to launching Prozac into the phase of disenchantment. The second phase in Smith's Law of the Wonder Drug is characterized by the public discovery of problems with the drug. Though safety problems had been raised in 1990, Prozac continued to experience the wide prescription and acclaim characteristic of the first phase of a wonder drug. The phase of disenchantment only began publicly when efficacy questions were raised about Prozac. The New York Times picked up on the Greenberg analysis in an article titled "New View of Prozac: It's Good, but Not a Wonder Drug" that ran on October 19, 1994 (Goleman, 1994). The article presents publicly what was emerging in academic literature, that Prozac is only as effective as the tricyclics and only moderately effective in treating depression. This was the first article to question Prozac's efficacy outright. A 1993 article, "Drug Works, but Questions Remain", had raised Prozac's side effects such as sleep disturbances and sexual dysfunction, but the words "Drug Works" assert the public perception of Prozac's ef-


ficacy (Angier, 1993). After the October nineteenth article exposed the concern within the academic world about the efficacy of Prozac, suspicion regarding Prozac's safety and efficacy was prevalent in the New York Times. Forty-five articles mentioning Prozac appeared between 1994 and 1999, and sixteen of them mentioned the safety or efficacy of Prozac. The language in the articles reflects a feeling of disenchantment. On October 28, 1994, one New York Times contributor noted the critics who contend that drug companies may not release unedited trial data (Foderaro, 1994). In June 1995, in a review about an upcoming television special on Prozac, a journalist writes that Prozac "apparently" works by increasing the brain's serotonin (Goodman, 1995). An article published on February 23, 1996 references the constant medical reports and scientific articles that connect Prozac to suicide (Rosenthal, 1996). By 1998 the disenchantment was growing: in an article titled "With Prozac, the Rose Garden Has Hidden Thorns", the author explores the idea that even when Prozac is helpful, patients still experience social unease and marital and professional crises (Fox, 1998). The article links these problems back to Prozac and implies popular dissatisfaction with its therapeutic effects. The New York Times ran an article and editorial on March 19 and March 21 of 1999 demonstrating explicit dissatisfaction. The article, "New and Old Depression Drugs are Found Equal", covers a government-sponsored study finding SSRIs and tricyclics equally effective in treating depression (Goode, 1999). The editorial "Placebo Nation" references Prozac's marginal superiority to placebos in clinical trials (Horgan, 1999). These articles demonstrate that the language used to deal with Prozac in popular discourse changed in the middle of the 1990s, reflecting the second phase of the wonder drug.

Phase III: Resolution

The second phase continued until 2004, when the FDA issued a public health advisory note titled “Worsening Depression and Suicidality in Patients Being Treated With Antidepressants” on March 22

(FDA). The note warns health care providers to carefully monitor patients on the basis that Prozac (among other antidepressants) may worsen the depression or suicidal preoccupations. The New York Times covered the recommendation in a series of articles. After the advisory note was released, the article opens with the claim “patients taking antidepressants can become suicidal in the first weeks of therapy, and physicians should watch patients closely when first giving the drugs or changing dosages, federal regulators said yesterday” (Harris, 2004). An article titled “Overprescribing Prompted Warning on Antidepressants” cites a number of reactions to the warning (Grady & Harris, 2004). Some express concern that the warning will prevent doctors from prescribing the drugs, limiting patients’ access to treatment. Others contend that the warning is too limited. The variety of opinions reflects entry into the third phase: Prozac was no longer popularly overvalued or condemned, but discussed in terms of its benefits and risks. The language in these articles indicates a shift into the third phase of the Wonder Drug. Popular disenchantment was replaced with a dialogue about Prozac’s risks and benefits, indicating appropriate evaluation of Prozac. The advisory note led to a committee vote within the FDA to recommend a black-box warning on antidepressants. On October 15, 2004 the FDA issued another public health advisory note, instructing manufacturers to label SSRIs and other antidepressant drugs with a warning indicating the possibility for increased suicidal tendencies. Twelve articles in the New York Times database were published about Prozac in 2004, the most in any year. Nine articles were published in 2005, followed by one to three articles in each of the next three years. The New York Times coverage of Prozac tapered off after the start of the third phase. This lack of interest in Prozac between 2005 and 2008 may indicate stable attitudes towards Prozac in these years. The academic papers published in the third phase question the safety and efficacy of Prozac. Publications in the Annals of Internal Medicine in 2005 and 2008 conclude that second generation antidepressants, including fluoxetine (Prozac), display minimal differences in efficacy (Hansen, 2005; Gartlehner,



2008). Prozac and other SSRIs are compared to determine positive responses and adverse events. These papers indicate Smith's third, resolution phase, in which the benefits of the drug are rationally assessed.

Impact of the Prozac Experience

These phases had a significant impact on how psychiatrists viewed the etiology of the disease. Historian Edward Shorter (2009) concludes in his book Before Prozac that issues of safety and efficacy of Prozac played a major role in the evolution of SSRIs and the discipline of Psychiatry. While the impact of Prozac's dangerous consequences has already been discussed by Healy (2004), the impact of Prozac's efficacy must be further explored. One important impact of the efficacy of Prozac was on how biological psychiatry articulates mental illness in terms of the brain. Herzberg (2009) writes that in the first phases of the wonder drug, the serotonin model of depression was readily embraced because it suggested that brain chemistry might be a relatively straightforward system. It is likely that if Prozac had worked with staggering efficacy, this academic mentality would have persisted. After the major efficacy questions raised in the second phase, psychiatrists coped with Prozac's lack of efficacy by changing the conversation about brain chemistry. Rather than relying heavily on the serotonin hypothesis to explain depression, the psychiatrist Mimi Israel explains that "depression is unlikely to be caused by a single gene, brain region, or neurotransmitter" (2009). Prozac's lack of efficacy significantly affected this discourse. Like Prozac's third phase, biological psychiatry has now entered an era with a more nuanced understanding of the brain and psycho-pharmaceuticals.

Mickey Smith's Law of the Wonder Drug has served as a useful framework to analyze the history of the safety and efficacy of Prozac. The analysis of the New York Times articles has shown how popular coverage has reflected the trends of academic psychiatric discourse regarding Prozac. In its first years on the market, Prozac was championed by practitioners, academics and journalists. How-

ever, Prozac is unique in that it experienced problems indicative of the second phase without being launched into a phase of total dissatisfaction. Problems regarding Prozac's safety were insufficient to curb the euphoria of Prozac's first phase. Only upon reports of Prozac's inefficacy did the problems associated with the second phase cause public disenchantment. The FDA's action and inaction helped to shape this unique entry into the second phase by exonerating Prozac during the 1991 hearing. Prozac was criticized for having poor safety and efficacy in the second phase, but transitioned into a period allowing for rational assessment of its risks and benefits. Prozac impacted thousands of patients and the practice of psychiatrists and doctors across a range of specialties. This analysis sheds light on how these actors reacted to a new agent and on the different receptions Prozac received over time. How academics and laypeople grapple with the intersection of mental health and scientific discovery will continue to be of relevance as each field advances.

References

Angier, N. (1990, Mar 29). New Antidepressant Is Acclaimed but Not Perfect. The New York Times. Retrieved from http://www.nytimes.com.
Angier, N. (1990, Aug 16). Eli Lilly Facing Million-Dollar Suits On Its Antidepressant Drug Prozac. The New York Times. Retrieved from http://www.nytimes.com.
Angier, N. (1991, Feb 7). Suicidal Behavior Tied Again to Drug. The New York Times. Retrieved from http://www.nytimes.com.
Angier, N. (1993, Dec 13). Drug Works, but Questions Remain. The New York Times. Retrieved from http://www.nytimes.com.
Associated Press. (1991, Sept 21). Warning Label on Antidepressant Is Opposed. The New York Times. Retrieved from http://www.nytimes.com.
Beal, D. M., et al. (1991). Safety and Efficacy of Fluoxetine. American Journal of Psychiatry, 148, 12.
Beasley, Charles M. Jr., Dornseif, B. E., Bosomworth, J. C., Sayler, M. E., Rampey Jr., Alvin H., Heiligenstein, John H., Thompson, Vicki L., Murphy, David J., and Masica, Daniel N. (1991). Fluoxetine and Suicide: A Meta-Analysis of Controlled Trials of Treatment for Depression. British Medical Journal, 303, 685-92.
Boyer, W. F., and Feighner, J. P. (1989). An Overview of Fluoxetine, a New Serotonin-Specific Antidepressant. Mount Sinai Journal of Medicine, 56, 136-40.
Conrad, Peter (1975). The Discovery of Hyperkinesis. Social Problems, 23, 12-21.
Edwards, J. G., and Anderson, I. (1999). Systematic Review and Guide to Selection of Selective Serotonin Reuptake Inhibitors. Drugs, 57, 507-33.
Feighner, J. P., et al. (1989). A Double-Blind Comparison of Fluoxetine, Imipramine and Placebo in Outpatients with Major Depression. International Clinical Psychopharmacology, 4, 127-34.
Fisher, Seymour, and Greenberg, R. (1993). How Sound Is the Double-Blind Design for Evaluating Psychotropic Drugs? Journal of Nervous and Mental Disease, 181, 345-350.


Foderaro, Lisa W. (1994, Oct. 28). Whose Fault When the Medicated Run Amok? The New York Times. Retrieved from http://www.nytimes.com.
Fox, Margalit (1998, Oct 4). Ideas & Trends; With Prozac, the Rose Garden Has Hidden Thorns. The New York Times. Retrieved from http://www.nytimes.com.
Gartlehner, G., et al. (2008). Comparative Benefits and Harms of Second-Generation Antidepressants: Background Paper for the American College of Physicians. Annals of Internal Medicine, 149, 734-50.
Goleman, Daniel (1994, Oct. 19). New View of Prozac: It's Good But It's Not a Wonder Drug. The New York Times. Retrieved from http://www.nytimes.com.
Goode, Erica (1999, March 19). New and Old Depression Drugs Are Found Equal. The New York Times. Retrieved from http://www.nytimes.com.
Goodman, Walter (1995, June 6). Television Review; About Drugs and the Happiness High. The New York Times. Retrieved from http://www.nytimes.com.
Grady, Denise and Harris, Gardiner. (2004, Mar 24). Overprescribing Prompted Warning on Antidepressants. The New York Times. Retrieved from http://www.nytimes.com.
Greenberg, R. P., et al. (1994). A Meta-Analysis of Fluoxetine Outcome in the Treatment of Depression. Journal of Nervous and Mental Disease, 182, 547-51.
Hansen, R. A., et al. (2005). Efficacy and Safety of Second-Generation Antidepressants in the Treatment of Major Depressive Disorder. Annals of Internal Medicine, 143, 415-26.
Harris, Gardiner (2004, Sept. 15). FDA Panel Urges Stronger Warning on Antidepressants. The New York Times. Retrieved from http://www.nytimes.com.
Healy, David (2004). Let Them Eat Prozac: The Unhealthy Relationship between the Pharmaceutical Industry and Depression. In Andrea Tone (Ed.), Medicine, Culture, and History. New York and London: New York University Press.
Herzberg, David (2009). Happy Pills in America: From Miltown to Prozac. Baltimore: The Johns Hopkins University Press.
Hirshbein, Laura D. (2009). American Melancholy: Constructions of Depression in the Twentieth Century. Critical Issues in Health and Medicine. New Brunswick, N.J.: Rutgers University Press.
Horgan, John (1999, March 21). Placebo Nation. The New York Times. Retrieved from http://www.nytimes.com.
Israel, Mimi (25 Nov 2009). Thinking and Feeling: Drug Therapies and the Brain. Speech at McGill University. Montreal, QC.
King, Robert A., Riddle, Mark A., Chappell, Phillip B., Hardin, Maureen T., Anderson, George M., Lombroso, P., and Scahill, L. (1991). Emergence of Self-Destructive Phenomena in Children and Adolescents During Fluoxetine Treatment. Journal of the American Academy of Child and Adolescent Psychiatry, 30, 179-86.
Kramer, Peter D. (1993). Listening to Prozac. New York, N.Y., U.S.A.: Viking.
Mendels, J. (1987). Clinical Experience with Serotonin Reuptake Inhibiting Antidepressants. Journal of Clinical Psychiatry, 48, 26-30.
Metzl, Jonathan (2003). Prozac on the Couch: Prescribing Gender in the Era of Wonder Drugs. Durham: Duke University Press.
Moller, H. J., & Volz, H. P. (1996). Drug Treatment of Depression in the 1990s. An Overview of Achievements and Future Possibilities. Drugs, 52, 625-38.
Montgomery, S. A. (1996). Efficacy in Long-Term Treatment of Depression. Journal of Clinical Psychiatry, 57, 24-30.
Nemeroff, C. B. (1994). Evolutionary Trends in the Pharmacotherapeutic Management of Depression. Journal of Clinical Psychiatry, 55, 16-7.
Olfson, M., and Klerman, G. L. (1993). Trends in the Prescription of Antidepressants by Office-Based Psychiatrists. American Journal of Psychiatry, 150, 571-7.
American Journal of Psychiatry, 150, 571-7. Reuters. (1987, Sept 12). Eli Lilly drug passes a test. The New York Times. Retrieved from http://www.nytimes.com.

Rickels, K., and Schweizer, E. (1990). Clinical Overview of Serotonin Reuptake Inhibitors. Journal of Clinical Psychiatry, 51, 9-12.
Rimer, Sara (1993, Dec 13). With Millions Taking Prozac, A Legal Drug Culture Arises. The New York Times. Retrieved from http://www.nytimes.com.
Rosenthal, Elisabeth (1996, Feb 23). Little Evidence on Effects Of Combinations of Drugs. The New York Times. Retrieved from http://www.nytimes.com.
Rothschild, A. J., and Locke, C. A. (1991). Reexposure to Fluoxetine after Serious Suicide Attempts by Three Patients: The Role of Akathisia. Journal of Clinical Psychiatry, 52, 491-3.
Schumer, Fran (1989, Dec 18). Bye Bye, Blues: A New Wonder Drug for Depression. New York Magazine. Retrieved from http://www.books.google.com
Seeman, Philip (1986). Dopamine Receptors and the Dopamine Hypothesis of Schizophrenia. Synapse, 1, 133-52.
Shorter, Edward (2009). Before Prozac: The Troubled History of Mood Disorders in Psychiatry. Oxford; New York: Oxford University Press.
Smith, Mickey (1991). A Social History of the Minor Tranquilizers: The Quest for Small Comfort in the Age of Anxiety. Binghamton, New York: Haworth Press, Inc.
Solomon, Andrew (2004). A Bitter Pill. The New York Times. Retrieved from http://www.nytimes.com.
Teicher, M. H., Glod, C., and Cole, J. O. (1990). Emergence of Intense Suicidal Preoccupation During Fluoxetine Treatment. American Journal of Psychiatry, 147, 207-10.
Tone, Andrea (2009). The Age of Anxiety: A History of America's Turbulent Affair with Tranquilizers. New York: Basic Books.
Wirshing, W. C., et al. (1992). Fluoxetine, Akathisia, and Suicidality: Is There a Causal Connection? Archives of General Psychiatry, 49, 580-581.
Zarate, C. A., et al. (1996). Does Intolerance or Lack of Response with Fluoxetine Predict the Same Will Happen with Sertraline? Journal of Clinical Psychiatry, 57, 67-71.





Medical & Cosmetic Surgery

Innovations in plastic surgery over the past century have created a dichotomy wherein the medical and cosmetic aspects are distinct. In tracing the developments in twentieth-century plastic surgery, Todd Plummer elucidates how advances in surgical technique and changes in conceptions of the human body have led to a new genre of surgery—cosmetic surgery. He argues that despite cosmetic surgery's origins as a medical procedure, it can no longer be considered one.

Over the past century, the field of plastic surgery has evolved and diversified. Although many of its developments have been medical in nature, there has also been an unprecedented rise in non-medical elective procedures: plastic surgery has transformed from the crude reconstruction of World War veterans' injuries into a variety of sophisticated reconstructive and cosmetic procedures. While reconstructive surgeries aim to restore the medically damaged body to its healthy state, cosmetic surgeries—especially in the past ten years—attempt to improve the body purely for aesthetic purposes. The specialty of cosmetic plastic surgery has become increasingly problematic throughout its development, for its critics question whether it is "medical" to use surgery as a tool to enhance or improve something which poses no threat to an individual's health. It would be unwise, however, to generalize that plastic surgery should not be considered "medicine". Dr. Dax Guenther, plastic surgeon, says of the history of plastic surgery, "Neurosurgeons own the brain, and orthopedic surgeons own the bone. What plastic surgeons own is innovation" (Guenther, 2008). Twentieth- and twenty-first-century plastic surgery still aims to redress problems arising from the human body. In this way, plastic surgery can be viewed as a branch of medicine, and—in particular—one which has evolved more than any other in response to changing perceptions of the body over time. Whereas reconstructive plastic surgery has remained medical and academic in nature, however, cosmetic surgery has largely strayed from what should be considered purely medical practice.

Plastic surgery before the twentieth century was rarely practiced, and so there existed no clear division between it and general surgery. Although surgery did not truly become practical until the discovery of anesthesia and asepsis in the later nineteenth century (Bankoff, 1952), there are records of plastic surgery procedures dating as far back as the first century BC in Ancient India, where many people had nasal reconstructions (Ohana, 2006). In Ancient Indian culture, the nose was a "marque de l'honneur" (a mark of honour), and nasal deformities were therefore a great source of shame (Ohana, 2006). One particularly popular reconstructive technique, devised by the Hindu surgeon Susruta, "is the basis of a very frequently used method today—one still known as the Hindu method" (Bankoff, 1952). The procedure involved lowering a flap of skin from the forehead over a sunken nose and manipulating it so as to minimize the appearance of deformity (Bankoff, 1952). While such surgeries were physically unnecessary, they were arguably invaluable as methods of emotional healing. Dr. Joseph E. Murray, a champion of the field of plastic surgery during the mid-twentieth century, describes in his memoir Surgery of the Soul an inexorable link between deformity and the human psyche. He explains that a simple procedure to repair a cleft palate in a child, or a head-to-toe skin graft to save a burn victim, will "truly heal wounds of the soul" (Murray, 2001).




Indeed, throughout history, plastic surgery has been a response to human perceptions of the body. Plastic surgeons maintain that although they are trained in essentially the same way as any other surgeon, their art is one which is, in its nature, both medical and cultural (Guenther, 2008). At the beginning of the twentieth century, the physical traits which today might warrant a desire for cosmetic surgery, such as "big noses, small breasts, and wrinkles of all sizes[…]were simply facts of life, and the dignity with which one bore them testified to the strength of one's character" (Haiken, 1999). A lack of surgical techniques inevitably meant a lack of desire for cosmetic procedures. The lineage of modern cosmetic surgery can be traced back to the field of reconstructive surgery, which developed in reaction to the array of injuries inflicted on soldiers in World War One. Wartime injuries frequently resulted in grotesque disfigurements, more so than in any previous war. In such a "mechanical and warring" era, the field of reconstructive surgery flourished (Byars & Kaune, 1944). The bulk of the procedures pioneered and refined during this era included grafts of skin, cartilage, and bone (Santoni-Rugiu & Sykes, 2007). One of the most important leaders of the reconstructive surgery movement was Paul Louis Tessier. Tessier discovered how to remove the ocular orbits from the rest of the skull and move them into new locations secured by bone grafts as a way of reshaping and reconstructing the face (Jones, 2008). The impact of this procedure was immense for the branch of plastic surgery known as craniomaxillofacial surgery, which includes the reconstruction of bones, skin, and cartilage in the face and jaw area (Edgerton & Gamper, 2001). Today, craniomaxillofacial surgeons deal with the repair of severe facial trauma and congenital birth defects like cleft palates. Since the original craniomaxillofacial work of Tessier, the field has advanced significantly, allowing for "the anatomical restoration and stable fixation of the craniofacial skeleton" (Edgerton & Gamper, 2001). The developments in the field of craniomaxillofacial surgery, born in a tradition of academic and scientific study, reflect that much of plastic surgery is still medical in nature.

Compared to those of World War One, the injuries inflicted on soldiers in World War Two were often more disfiguring. In World War Two there was a much greater use of aircraft in battle, and many pilots whose planes crashed suffered severe burns over large portions of their bodies, resulting in extreme pain, susceptibility to infection, and the psychological stress which accompanies such traumatic disfigurements (Ohana, 2006). One such pilot was Charles Woods, who flew cargo for the United States Army Air Corps. After sustaining burns over seventy percent of his body, Charles was sent to Valley Forge General Hospital, where he was placed in the care of Dr. Joseph Murray. As a temporary way of protecting Charles' weakened body, Dr. Murray used the novel technique of allografting skin from a donor. The allografts protected Charles' body from infection and allowed it to slowly heal. When some of Charles' own, unburned skin was deemed healthy, Dr. Murray began the slow process of autografting, whereby a patient's healthy skin is removed and transplanted to a wounded part of the body. After twenty-four surgeries, Dr. Murray had transformed Charles Woods from a burned, nearly skinless soldier into a healed man with healthy skin and working lips, eyelids, and hands (Murray, 2001). Plastic surgery during World War Two crossed boundaries that no other form of surgery had approached. Not half a century earlier, a burn as severe as Charles Woods' would have certainly meant death; thanks to innovators such as Dr. Murray, plastic surgery became capable of healing in ways that had never before been possible.

The work of surgeons such as Dr. Murray was novel and progressive, but it was not without its challenges: one of the greatest problems met by plastic surgeons was that of rejection (Santoni-Rugiu & Sykes, 2007). Skin grafts taken from a donor and given to a recipient tended to last only briefly before losing moisture and shriveling. It was the work of Peter Medawar and Thomas Gibson which at last unlocked the mystery of rejection: combining data from their studies on allografting and autografting in humans and animals, the two men concluded that tissues applied to a patient from a donor initiate an antigenic response from the recipient's immune system (Santoni-Rugiu & Sykes, 2007). To overcome this obstacle, plastic surgeons again needed to devise new ways of approaching unprecedented problems; they began to explore the medical concept of immunosuppression, a means of slowing and even stopping a patient's immune response, usually with drug therapy, in order to decrease the likelihood of tissue rejection. These therapies have become so advanced, and transplant surgeries so sophisticated, that facial transplantation has become a reality. In 2005, the first partial facial transplant was performed on a woman in France whose face had been mauled by a dog. The surgery was successful, and the woman has since commented on how her quality of life has been restored as a result of decreased facial disfigurement. There are, however, many ethical considerations involved in this type of procedure—and in other plastic surgery transplant procedures—concerning immunosuppression. Because her new face came from a donor and would otherwise trigger an antigenic response and rejection, the recipient of the partial facial transplant must take immunosuppressants for the rest of her life. As a result, transplant "patients on life-long immunosuppressive regimens suffer from significant side-effects that include an increased incidence of cancer, infections, and nephrotoxicity" (Pomahac et al., 2008). Plastic surgery must deal with such complicated medical issues and ethical considerations just as other fields of medicine do.

During the nineteen-fifties, after plastic surgeons had gained extensive experience in reconstructive surgery in the wake of World Wars One and Two, and after scientific discoveries such as the understanding of rejection, plastic surgeons looked for new frontiers in which to practice their art. Their desire to continue expanding the field of plastic surgery coincided with a confluence of larger historical forces which, taken together, have contributed to the boom in elective plastic surgery:

Early in the twentieth century, the interrelated processes of industrialization, urbanization, and immigration and migration transformed the United States from a predominantly rural culture, in which identity was firmly grounded in family and locale, to a predominantly urban culture, in which identity derives from 'personality' or self-presentation. The ethos of acquisitive individualism that emerged from this brave new world encouraged Americans to rethink their attitudes toward cosmetic surgery. Today the stigma of narcissism that once attached to cosmetic surgery has largely vanished... (Haiken, 1999)

Around the nineteen-fifties and nineteen-sixties, an important change occurred in cosmetic surgery which began to differentiate it more clearly from reconstructive surgery: elective cosmetic surgery began to be marketed in advertisements, and plastic surgeons offered competitive pricing for certain procedures, even payment plans for people who could not readily afford cosmetic surgery. This was, of course, driven by the consumer culture that was rampant in mid-century America (Haiken, 1999). For the first time in history, surgeons were advertising their trade and trying to convince people to undergo surgery for non-medical reasons. Instead of being responsible for helping disfigured veterans regain some quality of life after war injuries, plastic surgeons now began to persuade people to go under the knife to enhance quality of life. As more and more plastic surgeons came to see elective cosmetic surgery as "a most profitable and satisfactory specialty" (Haiken, 1999), a divergence of cosmetic surgery from the rest of plastic surgery became apparent.

The results of this divergence are highly apparent today. Whereas the bulk of plastic surgery is still highly medical and reconstructive in its nature, cosmetic surgery, although its roots are medical, has de-medicalized. Today, much of cosmetic surgery involves injections of fillers to shape the face and lips, chemicals to tighten the skin, and simple surgical procedures to alter the shape and appearance of the face. An important development which supports this difference is the change in standards of who can practice cosmetic surgery. Plastic surgeons who focus on reconstruction, such as skin flap procedures and tissue grafting, are required to go through a prodigious amount of training in order to be legally certified to practice; from start to finish, a plastic surgeon must hold an undergraduate degree and a medical degree, successfully complete a residency program in general surgery, undertake one or more fellowships in order to specialize in plastics, and pass rigorous examinations for the American Board of Plastic Surgery (Guenther, 2008). In order to administer Botox or collagen injections—which today account for the bulk of cosmetic surgery procedures (Liu & Miller, 2008)—one need only be a Registered Nurse (Guenther, 2008). This drop in the standards governing who can perform what are today the majority of cosmetic procedures highlights the medical field's own recognition of the de-medicalization of cosmetic surgery. Standards for who can practice plastic surgery, by contrast, have not dropped, and are still roughly equivalent to those of other, more traditional branches of surgery such as orthopedics or cardiothoracics (Guenther, 2008).

There are several noteworthy differences between contemporary plastic surgery and other branches of medicine. According to Dr. Dax Guenther, a physician with experience in both general surgery and plastic surgery, there is a significant difference in the doctor-patient relationship between these two branches of surgery. "The relationship with my patients [now that I am practicing plastic surgery] is by far more personal than it was when I was in general," he says, "and for the first time in my medical career, it is strange to see patients coming to me and asking for surgery versus having to convince someone why I need to amputate their leg or something like that" (Guenther, 2008). Indeed, Dr. Guenther's observation reflects the historical change in the doctor-patient relationship in plastic surgery over the twentieth and twenty-first centuries. During the era of World Wars One and Two, plastic surgery was largely focused on the reconstruction of disfigured soldiers; plastic surgeons' work was considered to be focused on restoring quality of life. In the later part of the twentieth century, especially with the offshoot and boom of cosmetic surgery, many people began to approach doctors to request elective surgery. During the rise of plastic surgery after World War Two, surgeons also began to refuse to operate on people requesting plastic surgery on the grounds that the requested procedure was unnecessary or would risk the health of the patient. "I have had to reject women requesting facelifts," says Dr. Guenther, "particularly those who have already received multiple cosmetic operations and have had literally gallons of Botox injected into their skin…at some point the plastic surgeon needs to draw the line between helping improve someone's self-esteem with a procedure or two and refusing patients who have literally become addicted to plastic surgery" (Guenther, 2008). The change in doctor-patient relations between plastic surgery in the earlier twentieth century and that of the later twentieth and twenty-first centuries indicates how advances in the field of elective cosmetic surgery have exacerbated people's insecurities about body image. In other words, when the medical knowledge did not exist or was only nascent, people did not desire cosmetic surgery; after medical knowledge increased, people began to see cosmetic surgery as a way of alleviating self-perceived problems with the human body. In answering the question of whether or not plastic surgery is still medicine, it must be stated that cosmetic surgery is what its recipients have made it; the development of cosmetic procedures such as liposuction and breast augmentation has been driven by people's desire for surgeons to pioneer them.

Especially in the twenty-first century, cosmetic surgery has truly become driven by consumer culture. "Cosmetic non-surgical procedures increased by 3158 percent [since the late twentieth century], with a compound annual growth rate of 27.1 percent. Spending for cosmetic surgery in 2005 exceeded ten billion (US)" (Liu & Miller, 2008). It has even been suggested that the rise of reality television programs featuring patients undergoing cosmetic surgery has desensitized the average person to cosmetic surgery, making them more likely to undergo it themselves (Liu & Miller, 2008). The field of cosmetic surgery is still expanding, and it has been forecast that, in order to meet the rising demand for non-surgical cosmetic procedures, there will be a surge in the number of practitioners who are not certified by the American Society of Plastic Surgeons (Liu & Miller, 2008).

Lowering the standards of who can practice cosmetic procedures will only lead to further demedicalization of the field of cosmetic surgery.

Cosmetic surgery, since its standards are lower than those of the rest of plastic surgery and it is driven by consumer culture, must not be considered medicine. Cosmetic surgery must, however, be considered in context—if it were not for the unprecedented medical breakthroughs of plastic surgery in the earlier twentieth century, cosmetic surgery would not exist. Thus, although cosmetic surgery as it is known today hardly resembles medicine, the much larger field of reconstructive plastic surgery from which it is derived is still largely medical in nature, as it deals with the alleviation of problems arising from the human body.

References

Bankoff, G. (1952). The Story of Plastic Surgery. London: Faber and Faber Limited.
Byars, L. T. & Kaune, M. M. (1944). Plastic surgery: Its possibilities and limitations. The American Journal of Nursing, 44(4): 334-342.
Edgerton, M. T. & Gamper, T. J. (2001). "Plastic Surgery." In S. Lock, J. M. Last, & G. Duena (Eds.), Oxford Companion to Medicine. Oxford Reference Online, Oxford University Press. Retrieved October 29, 2008, from http://www.oxfordreference.com
Guenther, Dax, M.D. (2008). Interview by author.
Haiken, E. (1997). Venus Envy: A History of Cosmetic Surgery. Baltimore, MD: The Johns Hopkins University Press.
Jones, B. M. (2008). Paul Louis Tessier: Plastic surgeon who revolutionised the treatment of facial deformity. Journal of Plastic, Reconstructive & Aesthetic Surgery, 61(9): 1005-1007.
Liu, T. & Miller, T. (2008). Economic analysis of the future growth of cosmetic surgery procedures. Plastic and Reconstructive Surgery, 121(6): 404-412.
Murray, J. E., M.D. (2001). Surgery of the Soul: Reflections on a Curious Career. Canton, MA: Science History Publications.
Ohana, S. (2006). Histoire de la chirurgie esthétique: De l'Antiquité à nos jours [History of aesthetic surgery: From antiquity to the present day]. Paris: Flammarion.
Pomahac, B., Aflaki, P., Chandraker, A., & Pribaz, J. J. (2008). Facial transplantation and immunosuppressed patients: A new frontier in reconstructive surgery. Transplantation, 85(12): 1693-1697.
Santoni-Rugiu, P., & Sykes, P. J. (2007). A History of Plastic Surgery. New York: Springer.




The Trees are Dancing
Kyle Teixeira Martins



Trees: pillars of plywood or trans-millennial dancers? This essay considers the paradoxical nature of tree movement over three time scales: across evolutionary time, at a single instant between conspecifics, and over a tree's lifetime. In order to understand subtler aspects of tree evolution and ecology, a metaphorical approach is taken which views trees as explorers, dancers, and long-distance runners.

The early nineteenth-century view of trees was of exploitable cellulose for the production of manufactured goods. In fact, at any given instant, trees appear in themselves inert, stationary, and ready to be cut into plywood. In terms of movement, it might then seem puzzling to view trees as explorers, dancers, or runners outside of a purely poetic sense. A more holistic ecological and evolutionary perspective reveals that what was once considered relatively immobile exhibits considerable movement, first across evolutionary time, then at an instant between conspecifics,[1] and finally over the single lifetime of a tree. In consideration of the above, the central discussion will focus on the anomalous spatio-temporal mobility of trees from a scientific and metaphorical perspective across the aforementioned three scales, held in contrast against their apparent stillness.

When plotting total net primary productivity against the leaf area index[2] of many tree species, a considerable amount of scatter appears in the data set. This not only suggests that there are many different ways of being a tree, but also that a phylogenetic exploration of branching architectures took place, one that maximizes fitness. A pursuit of optimal design took place, a gradual development governed by a set of tradeoffs: maximizing sunlight interception (minimizing self-shading) while minimizing the risk of stem breakage. Accordingly, the solution is expressed by altering a set of three variables: branching angle, rotation angle, and the probability of branching at all. Metaphorically speaking, a fossil-record flipbook would demonstrate a stepwise tweaking of these three factors, an exploratory movement through three-dimensional space. The early Devonian predecessors of trees were primitive vascular land plants with simple linear systems. As time progressed, the Gilboa forests of the middle-late Devonian already towered nearly ten metres in height and captured sunlight with a fan-like structure that appears to have been shed seasonally. By 362–380 MYA, the genus Archaeopteris utilized a leaf-like tissue with varying degrees of webbing attached directly to woody branchlets. From a modern perspective, Niklas and Kerchner (1984) made a set of predictions as to which branching architectures would optimize sunlight interception, then reviewed the fossil record for verification across evolutionary time. These predictions were in line with the progression from linear vascular ancestors to the Gilboa and then Archaeopteris trees, in which there was an evolution toward a trunk-and-crown[3] architecture and planation[4] of the branching system. Altogether, this exploration of photosynthetic efficiency in tree design reveals a great deal of movement across evolutionary time.

1. Individuals that are members of the same species.
2. Ratio of the total upper leaf surface of a given species divided by the surface area of the land on which the said species grows.

3. For a tree, its crown consists of the leaves, branches, and reproductive structures stemming from the trunk.
4. Dichotomies in two or more planes, flattening out into one plane.
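The three-variable search described above lends itself to a toy simulation. The sketch below is purely illustrative and is not drawn from the essay or from Niklas and Kerchner's (1984) model: the branching rule, the light-interception proxy (distinct horizontal tip positions, so stacked tips "self-shade" one another), the breakage proxy (the summed horizontal lever arm), and the parameter ranges are all invented assumptions, and the rotation angle is collapsed into simple left/right alternation in two dimensions.

    # Toy exploration of branching parameters (illustrative only; not the
    # Niklas & Kerchner model). Each tree is grown in two dimensions from a
    # branching angle and a probability of branching; "light" is approximated
    # by the number of distinct horizontal tip positions, and "breakage risk"
    # by the summed horizontal lever arm loading the stem.
    import math
    import random

    def grow(x, y, heading, depth, angle, p_branch, rng, tips, lever):
        """Grow one segment; record tip positions and accumulate lever arms."""
        length = 1.0 / (depth + 1)                 # shorter segments higher up
        nx = x + length * math.sin(heading)
        ny = y + length * math.cos(heading)
        lever[0] += abs(nx)                        # horizontal offset loads the stem
        if depth == 4 or rng.random() > p_branch:  # stop branching: this is a tip
            tips.append(round(nx, 2))
            return
        grow(nx, ny, heading + angle, depth + 1, angle, p_branch, rng, tips, lever)
        grow(nx, ny, heading - angle, depth + 1, angle, p_branch, rng, tips, lever)

    def fitness(angle, p_branch, seed=0):
        """Light proxy (distinct tips) minus a penalty for bending load."""
        rng = random.Random(seed)
        tips, lever = [], [0.0]
        grow(0.0, 0.0, 0.0, 0, angle, p_branch, rng, tips, lever)
        return len(set(tips)) - 0.5 * lever[0]

    best = max((fitness(math.radians(a), p), a, p)
               for a in range(5, 90, 5) for p in (0.5, 0.7, 0.9, 1.0))
    print("best score %.2f at branch angle %d deg, branch probability %.1f" % best)

Scanning branching angle and branching probability in this way mimics, in miniature, the kind of exploratory "movement" through parameter space that the fossil-record flipbook metaphor describes.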




In keeping with the metaphor of the fossil-record flipbook, if one were to take a set of pictures of different mature white birches of similar size, then flip through them like a stop-motion film, it would seem as if they were dancing. Each tree reflects a different movement in a set of choreography, in part informed by the constraints imposed by resource availability and a set of tradeoffs to cope with these limitations. For example, the posture of the bending bough is informed by wood strength and thus wood density. A study by Laurance et al. (2004) on Amazonian trees showed that growth rate tended to decrease with increasing wood density, which represents a greater construction cost for the tree; pioneer trees, with a shorter estimated longevity than emergent trees, had correspondingly lower-density wood. Wood density, in turn, is largely a function of the percent investment in xylem vessels,[5] fibres, and parenchyma.[6] Each of these three cell types is involved, respectively, in the transport of water, structural support, and the storage and cushioning of other cells. As the total amount of cross section invested in xylem vessels increases, the dry wood density tends to decrease.

Less dense wood also tends to be more susceptible to wind-induced cavitation, as a bending branch poses a greater angular strain on the soil-plant-air continuum than an erect one. Simply stated, although less dense wood allows for faster-growing trees and a greater water-carrying capacity, it also makes the tree more susceptible to embolism.[7] Increased wood density tends to permit increased variability in the height-diameter wood ratio. Therefore, a wide variety of tree forms is possible through differential wood densities, with these tradeoffs in mind. Moreover, there is a certain degree of scatter between wood density and the modulus of elasticity[8] and of rupture[9] in both dry and green woods, reflecting the niche requirements of the tree. A tree exposed on a cliff, for instance, is said to require a greater modulus of elasticity than one of the same species further down the slope, reflective of phenotypic plasticity.[10] Altogether, there is a continuum of cellular considerations in vascular transport and wood density that is expressed in the macroscopic variability in trunk and crown architecture. It is this set of tradeoffs that engenders the conspecific stop-motion waltzing of birches, the music to which they dance.

A stop-motion film would express only an illusion of a moving tree, however. In this sense, one might consider the veritable caterpillar crawl of Dicksonia antarctica as a more literal representation of horizontal displacement. D. antarctica grows to a height of about five metres and falls over. The plant survives, and the apex continues to grow upwards. Consequently, it leaves a trail of decomposing trunks, with one tree reported to have left a trail ten metres in length. Considering D. antarctica's slow growth rate (0.5–8.8 cm/year), this specimen may have originated up to two thousand years ago (a ten-metre trail at the slowest rate of 0.5 cm per year corresponds to roughly 2,000 years of growth), though five hundred to a thousand years seems more likely (Mueck et al., 1996).

5. Mixed vascular tissue that conducts water and mineral salts, taken in by roots, throughout the plant.
6. A plant tissue consisting of roughly spherical and relatively undifferentiated cells, frequently with air spaces between them. The cortex and pith are composed of parenchyma cells.
7. When a blockage in a circulation vessel of a tree is caused by an air bubble.
8. The ratio of the stress applied to a tree to the strain produced.
9. A tree's modulus of rupture is the highest degree of stress experienced by a tree at its moment of rupture.
10. Phenotypic changes in response to alterations in the environment.


One might also reflect on the clonal populations of Populus tremuloides as an illustration of a 'single' tree's horizontal growth, through their system of interconnected underground stems which poke back up into the air over hundreds of years. These two examples speak to the considerable longevity of trees. Be it growth in diameter, height, root, or branch length, horizontal or vertical, it is this capacity to create structures that can be expanded, and behaviors that can be repeated year after year, that in part allows for a tree's relatively long lifespan (Lanner, 2002). During their lifetime, trees can be subject to a slew of different environments. Using D. antarctica again as an example, it is a fire-resistant fern that, within its lifetime, can occupy a humid environment beneath a forest canopy and then an exposed environment following a wildfire—one with high light intensities and dry atmospheric conditions (Hunt et al., 2002). This movement through time to disparate microenvironments is analogous to walking a considerable distance from a dense, shrouded forest to an open plain. Although a tree may appear to be frozen in time at any given instant, trees like D. antarctica can be more like temporal long-distance runners.

Be that as it may, there is nothing peculiar in an arborescent century-long jog, a slow-motion waltz, or an exploration of branching architecture that extends across millennia, considering that trees exist on long timescales. One might expect that which lives long to move slowly. It may then be peculiar that those biotic or abiotic processes upon which a tree depends for its dispersal, or guards against in its defense, often exist at much faster temporal scales. This anomaly is resolved when deliberating upon a tree's capacity to react to its environment. For instance, in a study conducted by Chmielewski and Rötzer (2001), birch trees responded to a period of short leaf duration in previous years by leafing considerably earlier in 1990, extending the growing season in order to compensate. Similarly, pecan masting in alternate years suppresses unruly squirrel populations, while seasonal variations in oak leaf tannins control winter moth caterpillars (Feeny, 1970). Regardless of the example, whether it is herbivory defense, tree phenology, or dispersal, there is a direct interaction between the slow processes of a tree's growth, maturity, and evolution and these considerably faster ones. Therefore, the question becomes: how does one inform the other? The aforementioned study conducted by Niklas and Kerchner (1984) regarded tree branching architecture as influenced by photosynthetic efficiency. The exploration that took place, especially in the Carboniferous period, when much of the world's landmass was located at the equator and thus allowed considerable trial and error, can be inferred to have taken place over an expansive time frame and according to the tempo of the tree species involved. It is an open question how the aforementioned faster processes influenced this evolution in branching architecture. For instance, depending on plant attributes, certain dispersal modes can be excluded. In particular, sub-canopy trees depend less on wind dispersal than those in the over-story, relying predominantly on zoochory.[11] Their branching architecture would be less shaped by the biomechanical strain of strong winds over many generations, and more so by immediate biotic interactions. As a case in point, New Zealand tree branches were structured so as to make vulnerable fruit less accessible to predating birds. In deciphering how slower and faster scales interact in terms of tree evolution, it is then important to incorporate leaf fossil records demonstrating herbivory, or the evolutionary framework of fruit dispersal vectors, into models like that of Niklas and Kerchner (1984).

11. Seed dispersal by an animal.




In light of the metaphors used in this discussion, it may in fact seem unconventional to view trees as explorers, dancers, or long-distance runners from an ecological or evolutionary perspective. This is not to say that the scientific literature surrounding trees is devoid of metaphorical constructs; the "arms race" between herbivores and primary producers is a fitting example. Metaphors are important inasmuch as they allow for a better understanding of complex systems. Bateson (1994) writes that "[all] thought relies on metaphor, on ways of noticing similarity so that what has been learned in one situation can be transferred to another." This is rather germane to our understanding of tree evolution, form, and function, as trees live at a different tempo than human beings, and it is often too easy to see them as inanimate objects, or as a given in a particular environment. A metaphor drawn from human experience allows the paradoxical nature of their stationary movement to be brought into better light. In fact, an increasingly trans-disciplinary approach may be needed to understand the more subtle aspects of their anatomy and abiotic-biotic interactions.

By using a metaphorical approach, a tree fossil-record flipbook revealed a trans-millennial exploration of three-dimensional space. From the simple linear systems of the Devonian to the complex angiosperm branching architecture of today, there was an evolutionary movement towards the most efficient capture of sunlight, counterbalanced by biomechanical stress. Moreover, a stop-motion film of many individual trees of comparable size created the illusion that they were dancing. One bends its bough like so, the other is sterner, while each movement in the choreography is a unique expression of tradeoffs made to maximize photosynthetic efficiency.

Finally, a tree within its single lifetime, at times extending thousands of years, will be subject to a variety of microenvironments. A fire-resistant tree beneath a forest canopy, once damp and shaded, will be subject to intense light when the rest of the forest burns down. This is analogous to a movement from inside the forest to beyond its treeline over a much shorter time period. Taken together, the three metaphors describe different ways of being a tree over expansive temporal scales, and the puzzling nature of their movement.

References

Bateson, M. C. (1994). "Turning into a Toad." In Peripheral Visions: Learning Along the Way (pp. 127-143).
Chmielewski, F. M., & Rötzer, T. (2001). Response of tree phenology to climate change across Europe. Agricultural and Forest Meteorology, 108, 101-112.
Feeny, P. (1970). Seasonal changes in oak leaf tannins and nutrients as a cause of spring feeding by winter moth caterpillars. Ecology, 51, 565-581.
Hunt, M., Davidson, N., et al. (2002). Ecophysiology of the soft tree fern, Dicksonia antarctica Labill. Austral Ecology, 27(4), 360-368.
Lanner, R. (2002). Why do trees live so long? Ageing Research Reviews, 1(4), 653-671.
Laurance, W. F., et al. (2004). Inferred longevity of Amazonian rainforest trees based on a long-term demographic study. Forest Ecology and Management, 190, 131-143.
Mueck, S. G., Ough, K., et al. (1996). How old are wet forest understories? Austral Ecology, 21(3), 345-348.



The Etiology of Schizophrenia: A Gene-Environment Interaction Explanation
Laura Hickey




As of yet there is no clear consensus on the cause of schizophrenia, but there is strong evidence pointing towards environmental causation. This essay weaves together the pertinent arguments vying to explain the causation of the disease, and discusses and elaborates on the evidence scientists are using to support their claims. It does a strong job of highlighting the inextricable link between a mental disease such as schizophrenia and our environment.

Schizophrenia is a chronic, debilitating brain disorder characterized by hallucinations, delusions, cognitive deficits, and social withdrawal. It is also a disease with many adverse social effects, preventing those affected from thinking clearly, managing their emotions, and relating to others. Patients with schizophrenia may sometimes hear voices that do not exist, and develop paranoia and a fear of others. This sense of unease causes them extreme agitation and may frighten others. The underlying cause of schizophrenia is still unknown, but treatments are available to help alleviate most of the symptoms associated with the disorder. Although schizophrenia has no cure to date, these treatments enable patients to live relatively normal lives (National Institute of Mental Health, 2009).

The statistical prevalence of schizophrenia is much greater than most are aware. In a study by Wu et al. (2006), schizophrenia was estimated to afflict 5.1 per 1,000 people in the United States in 2002, with a slightly higher prevalence in men. Men have also been shown to have an earlier onset of schizophrenia than women (between 46 and 55 years of age in men, and 56 to 65 in women). In 2002, the highest prevalence of schizophrenia was seen in the Medicaid and uninsured populations, with the lowest prevalence of the disorder in the privately insured population. It was also reported in 2002 that thirty percent of schizophrenic patients had no health insurance coverage. In addition, the onset of schizophrenia was shown to occur most often during adults' productive work years—the resulting burden on both the economy and the individual is therefore quite evident.

The main question regarding the etiology of schizophrenia is encompassed in two competing theories, the social causation and the social selection theories. These theories are analogous to the well-known "nature versus nurture" debate in their attempts to understand whether genes and/or underlying brain conditions cause individuals to be vulnerable to schizophrenia, or whether the disorder is more directly related to adverse societal influences. However, the strongest theory explaining the cause of schizophrenia is one linking it to genetic predisposition and/or biological abnormalities, as well as to adverse environmental factors surrounding susceptible individuals.

Schizophrenia is most prevalent in lower socioeconomic groups regardless of which theory one supports. Link et al. (1986) attempted to further understand this prevalence by exploring the conflicting social selection and social causation theories. The social selection theory asserts that having schizophrenia causes individuals to be at risk of falling into, or failing to rise out of, lower socioeconomic status groups as a result of their disorder. The social causation perspective, however, states that a low socioeconomic status exposes an individual to socio-environmental risk factors that may cause them to develop schizophrenia. Evidence that low socioeconomic status may be a result of the disorder is captured by the social drift hypothesis. Some studies have supported this view by seeking to show that the symptoms associated with schizophrenia prevent individuals from remaining in their current jobs. Such individuals are forced to move into lower socioeconomic jobs, if they are even able to secure a job at all after the onset of disease. The study by Link et al. (1986), however, focuses on statistics about individuals' first employment prior to developing schizophrenia. The findings demonstrate that individuals who developed schizophrenia were first employed in positions exposing them to noisome work conditions involving intense stimulation. These adverse work conditions involved not only a significant level of noise, but also intense heat or cold, fumes, and the physical hazards associated with blue-collar jobs. Link et al. (1986) therefore propose that, since these work exposures precede the onset of schizophrenia, class-linked stress could be a causal factor in developing schizophrenia.

Further evidence of the link between low socioeconomic status and increased risk for schizophrenia can be seen in a study by Werner et al. (2007), in which risks starting at birth were examined. Socioeconomic status at birth was measured based on parents' education level and occupational prestige. Results of the study indicated a correlation between certain "risk factors" at birth and the future development of schizophrenia, such as lower parental education level, lower job status, and lower residential socioeconomic status. It is important to note that even when the father's occupation is held constant, there is still an inverse relationship between the community socioeconomic level into which an individual is born and the risk of developing schizophrenia. This indicates that both familial and community socioeconomic deprivation can elevate stresses that may contribute to the onset of schizophrenia. Werner et al. (2007) suggest that these stressors may include fewer educational opportunities, harsher living environments, poorer social networks, greater social isolation, and greater exposure to crime and violence. The study leaves open the possibility that social factors may be acting on a pre-existing genetic predisposition to schizophrenia.

Compounding these prior findings, a study by Corcoran et al. (2009) found a forty percent increase in the risk of developing schizophrenia for those within the lowest social class category. This relationship persisted even when controlling for an individual's sex, parental age, and parental country of origin. However, the authors did not find a socioeconomic gradient in the risk of schizophrenia, which suggests that genetic influences may also be playing a role in an individual's risk for schizophrenia.

Another population observed to be at higher risk for schizophrenia is the immigrant population. Weiser et al. (2008) performed a study on the risk of developing schizophrenia among immigrants to Israel who arrived at the age of seventeen or younger, controlling for socioeconomic status and gender. The study was performed in Israel because a large proportion of its population, approximately thirty-five percent in 2005, consisted of immigrants. Results indicated that both first- and second-generation immigrants from all countries were at a significantly higher risk for developing schizophrenia. In addition, the immigrants at greatest risk of developing the most severe forms of schizophrenia came from countries with distinct ethnic populations, such as Ethiopia, whose immigrants' physical appearance was distinct from that of the majority of the host population. Similarly, Cantor-Graae and Selten (2009) reported that Blacks had a greater risk of schizophrenia than their White counterparts, likely as a result of discrimination. The previously discussed immigrants from Ethiopia also had the highest hospitalization rates for schizophrenia. The authors suggest that immigrants may have an increased risk of psychosis due to the additional stresses associated with cultural differences such as dress, family structure, education, and employment. An earlier study by Mortensen et al. (1997) also confirmed that immigrants had an increased risk for all psychiatric illnesses observed, in both sexes; of these, schizophrenia was the most prevalent. The immigrants who developed schizophrenia were not representative of the more socially disadvantaged groups, downplaying the role of socioeconomic status in the risk of schizophrenia among immigrants. The authors hypothesize that negative selection factors—i.e., the negative aspects of an immigrant's native country that had initially forced them to leave—could already have been operating on their risk of developing schizophrenia. Cantor-Graae and Selten (2005) also report strong correlations between immigration and risk of schizophrenia. However, they observed an additional finding: migrants from developed countries show a greater increase in risk of schizophrenia than immigrants from developing countries.

More studies supporting the theory of social causation have also classified which immigrants are most prone to developing schizophrenia. Immigrants are classified into categories of those that have an integrated, separated, assimilated, or marginalized identity (Veling, 2009). Immigrants considered to be integrated are those that have both a strong ethnic identity and a strong national identity. A separated identity refers to those that retain a strong ethnic identity while having a weak national identity. An assimilated identity refers to those that have a strong national identity at the expense of having given up their ethnic identity. A marginalized identity refers to immigrants that have neither a strong ethnic nor a strong national identity.

Assimilated and marginalized immigrants are considered to be at the greatest risk for schizophrenia, since they have the weakest and most negative ethnic identities. In addition, second-generation immigrants had a higher risk for schizophrenia than first-generation immigrants, which researchers attribute to their greater assimilation. Stresses due to adverse social experiences of ethnic discrimination, or of isolation in areas where the ethnic minority group is small, are therefore strongly correlated with the development of schizophrenia. Characteristics linked with onset of the disorder include single marital status, unemployment, low education level, low self-esteem, and use of marijuana.

Another study by Corcoran et al. (2009) further attempted to shed light on the roots of this underlying causation. She found that second-generation immigrants were particularly at risk for schizophrenia primarily because of the discrimination directed at small ethnic minority groups and the struggles of acculturation; low socioeconomic status, meanwhile, seemed to play a less important role. Corcoran et al. (2009) failed to find a link between the low socioeconomic status of poor immigrants in Israel and an increased incidence of schizophrenia. However, this may be explained by the low socioeconomic status of immigrants who fled anti-Semitic countries after World War II to come to Israel. This "ethnic density", in which immigrants came as entire villages and communities, seemed to serve as a protective factor against discrimination. In addition, these immigrants were fleeing to a Jewish territory, allowing them to embrace their own Jewish identity and integrate more easily into Israeli society. In contrast, Moroccan immigrants, a predominantly Muslim population moving into Christian territories in the Netherlands, received a significant amount of discrimination. Thus, the authors find there is a strong association between the degree of interpersonal and institutional discrimination and the risk of developing schizophrenia among second-generation immigrants. They assert that a negative social and cultural identity among immigrants is a stronger determinant of schizophrenia onset than social class.

Studies provide evidence against the social selection theory in immigrants by demonstrating that siblings of schizophrenics have not been found to be at any greater risk for schizophrenia than unaffected individuals in the general population. Genetic vulnerability, it would seem, may therefore not play as strong a role as environmental factors. The effect of environmental factors is further evidenced by studies on animals. According to Veling et al., "Results from animal experiments suggest that social stress may induce changes in the brain that resemble those in schizophrenia" (Veling, 2009, p. 6). In addition, Caribbean and Surinamese immigrants have relatively high rates of schizophrenia when they migrate, even though rates of schizophrenia are normal in their countries of origin. As such, this prevalence cannot be attributed to genetics (Cantor-Graae & Selten, 2005).

Another environmental factor with a strong correlation with schizophrenia is geographical location. Studies have shown that the prevalence of schizophrenia increases with increasing geographical latitude and with increasingly colder climates (Kinney et al., 2009). The lowest rates of schizophrenia are found near the equator, with a prevalence of 0.9 per 1,000 recorded for Accra, Ghana, and Jakarta, Indonesia. This is in large contrast to a prevalence of 28 per 1,000 in Oxford Bay, Canada, near the Arctic Circle. The physical area of the regions with the highest rates of schizophrenia has been shown to be ten times greater than that of the areas with the lowest rates, suggesting that environmental rather than genetic influences play the stronger role in the risk of developing the disorder (Kinney et al., 2009).

In addition, latitudes near the equator are usually associated with developing countries, while developed countries tend to be found at more northern latitudes. Even though developing countries have higher infant mortality rates and weaker health care infrastructure, they have lower rates of schizophrenia than developed countries. Kinney et al. (2009) noted that disadvantaged groups (which would be associated with developing countries) were at a greater risk of schizophrenia, but that the increase in prevalence was even higher at more northern latitudes. These relationships are clarified by the observation that, at the same latitude, there is a greater prevalence of schizophrenia in areas with higher infant mortality. These findings suggest that the adverse effects of higher latitudes on the risk of schizophrenia override the protective factor of better health care in developed countries.




[Figure omitted: diagram from Kinney et al. (2009) demonstrating the effect of level of fish intake in relation to latitude, and the exacerbated effect of fish intake in Scandinavia due to its high latitude.]

Higher latitudes and colder climates are thought to affect schizophrenia risk through increased vitamin D deficiency. Vitamin D is an important vitamin for cognitive performance, and a deficiency can result in detrimental brain and behavioral development. Cherniak et al. (2009) state that "Vitamin D activates receptors on neurons in regions implicated in the regulation of behavior, stimulates neurotrophin release, and protects the brain by buffering antioxidant and anti-inflammatory defenses against vascular injury and improving metabolic and cardiovascular function." Vitamin D comes from exposure to UVB radiation in sunlight as well as from food sources in the diet (e.g., fish). Higher latitudes are associated not only with weaker sunlight intensity but also with reduced exposure to sunlight, thus preventing individuals' skin from receiving UVB radiation. The greater the severity of the winter climate, it would seem, the greater the risk of schizophrenia. Further evidence of perinatal or prenatal risk exposure for schizophrenia is that higher latitudes increase the chances of a winter birth, owing to the length of the winter season, and thereby increase the chances of vitamin D deficiency (Kinney et al., 2009). Studies have shown that in the United States, children born in the winter and early spring are more likely to develop schizophrenia. It is also interesting to note that the risk of developing schizophrenia in Caribbean immigrants was seen to increase the farther north they moved (Cherniak et al., 2009).

In tandem with vitamin D deficiency, diets low in fish also tend to be associated with higher risks of schizophrenia. Fish is the most important dietary source of vitamin D, and as such a high consumption of seafood is associated with a lower rate of schizophrenia. Scandinavian countries, at particular risk for schizophrenia due to their northern latitude, provide strong evidence for this inverse relationship. Lower socioeconomic groups at higher latitudes have an additional disadvantage in that they do not have access to vitamin D supplements and may not be able to afford healthier food choices, such as often-expensive fish (Kinney et al., 2009). The increase in the prevalence of schizophrenia with latitude is also greater for darker-skinned groups and for groups with prenatal exposure to infectious diseases. At the same latitude, the prevalence of schizophrenia is greater among darker-skinned groups. Darker-skinned individuals cannot produce vitamin D as effectively when exposed to sunlight, and are therefore more likely to be vitamin D deficient.



This is supported by studies indicating that non-white immigrants to Europe have increased rates of schizophrenia compared to white immigrants because they lack sufficient vitamin D levels. Higher latitudes and colder climates are also associated with an increased risk of prenatal exposure to infectious diseases like influenza and toxoplasmosis. These infectious agents are well adapted to colder environments and are associated with higher levels of schizophrenia. In the first half of pregnancy, exposure to the influenza virus was seen to increase the risk of developing schizophrenia threefold (Corcoran et al., 2009).

Urbanicity has also been established as having a strong link to higher rates of schizophrenia. Faris and Dunham observed in 1939 that the rate of schizophrenia in urban areas is potentially double that in rural areas. Krabbendam and van Os (2005) have shown that one-third of schizophrenia cases are associated with environmental factors of urban areas affecting children and adolescents. The social selection theory explains this correlation by arguing that people with schizophrenia are more prone to move into urban areas. However, Krabbendam and van Os (2005) claim that studies actually support the theory of social causation, since changes in urbanicity exposure in childhood affect later risk for schizophrenia in adulthood. Therefore, they suggest that living in an urban environment may be a causal risk factor for schizophrenia, since it precedes schizophrenia onset. They also report evidence for a dose-response relationship between urbanicity and schizophrenia, and propose a strong gene-environment interaction in the etiology of schizophrenia. Studies that control for potential genetic risks only slightly decrease the strength of the urbanicity-schizophrenia relationship, indicating that there is still an underlying environmental association. But because only a small fraction of the population in urban environments actually develops schizophrenia, there may also be some pre-existing genetic vulnerability to schizophrenia (Krabbendam & van Os, 2005).

A study by Pedersen and Mortensen (2006) looked further into the relationship between urbanicity and schizophrenia to examine whether higher incidences of schizophrenia were caused by environmental factors acting on families or only on individuals. Their study revealed that the more urban the location of an individual's older sibling's birth, the greater the individual's risk of schizophrenia, showing the important familial role in schizophrenia risk. The authors elaborate, stating that this relationship indicates that a family's urban residence prior to an individual's birth can cause an increase in the incidence of schizophrenia. They hypothesize that this may be due to risk exposures like maternal lead concentration, maternal toxoplasma infection, or sibling infections, which accumulate and are transmitted to the individual during fetal life or upbringing. These risks work in conjunction with a pre-existing family history of schizophrenia to increase the likelihood of onset (Pedersen & Mortensen, 2006). However, these findings are speculated to be related to an underlying genetic predisposition and not simply to environmental factors.


80

Ampersand

though the causes are still unknown, Pederson and Mortensen (2006) state that they are “hypothesized to include toxic exposures, diet, infections, stress, or an artifact due to selective migration.” In addition, the authors identify both a Danish study and an American study that link a greater risk of schizophrenia with a greater exposure to air pollution from traffic and a greater prenatal exposure to lead. Pederson and Mortensen (2006) looked at the relationship of traffic exposure to risk of schizophrenia, measuring participant’s exposure based on their residential distance to the nearest main road each year of their life from birth to fifteen years of age. They measured for the first fifteen years of life because past studies have shown that the most influential period of urbanization on the risk of schizophrenia is repeated exposure during upbringing. Pederson and Mortensen (2006) found that people living between fifty to a thousand metres of the nearest major road had the greatest risk of developing schizophrenia. The researchers do note that distance to the nearest main road is more associated with degree of urbanization than actual geographical distance. These findings provide additional credence for the social causation theory of schizophrenia onset.

On the other hand, studies have also yielded evidence that the causes of schizophrenia may have an important biological basis. Childhood trauma is one such basis. Read et al. (2005) identify studies relating the adverse effects of childhood abuse to biological abnormalities in the brain. Abused children were seen to have an overactive HPA axis, which is implicated in the body’s stress response. This overactive HPA axis causes the release of higher levels of dopamine, a neurotransmitter presumed to be associated with a higher risk of schizophrenia. The stressors of childhood trauma therefore have a strong effect on the risk of subsequent brain abnormalities.

Another childhood risk factor that has been identified is sexual and physical abuse. Read et al. (2005) identified studies reporting that survivors of child sexual and physical abuse scored higher on schizophrenia and paranoia scales. One study of a sample of schizophrenic adults reported that eighty-five percent had experienced childhood abuse or neglect. Another significant finding reported by Read et al. (2005) states, “An in-patient study found one or more of the DSM’s five characteristic symptoms for schizophrenia in seventy-five percent of those who had suffered CPA [child physical abuse], seventy-six percent of those who had suffered CSA [child sexual abuse], and a hundred percent of those subjected to incest.”

The intrinsic biological difference between genders also seems to be correlated with differences in schizophrenia onset. Women are suspected to have a later onset of schizophrenic symptoms due to the protective effects of oestrogen before menopause. Hafner (2003) demonstrated that oestrogen decreases the sensitivity of dopamine receptors in ovariectomized rats supplemented with oestrogen treatment. Further evidence indicates that oestrogen has moderate neuroleptic (antipsychotic) effects on the risk of schizophrenia in humans. Hafner (2003) acknowledged a study by Riecher-Rossler of thirty-two schizophrenic women and twenty-nine women with depressive symptoms, which found that the greater the plasma oestrogen levels, the smaller the schizophrenia symptom score. Hafner (2003) provided more support for the oestrogen hypothesis by citing a study by Seeman and Cohen which found that “an earlier onset of functional oestrogen secretion with puberty might be associated with a later onset of schizophrenia in women” (p. 29). The oestrogen hypothesis therefore assumes that more women experience schizophrenic symptoms after menopause, when oestrogen levels significantly decrease. It also assumes that a longer exposure to oestrogen, due to earlier puberty, would delay the onset of schizophrenia in adulthood. Hafner (2003) noted that studies support a theory of genetic vulnerability in the onset of schizophrenia when the effects of oestrogen are examined in relation to an individual’s familial history of schizophrenia. He noted that “oestrogen is capable of warding off schizophrenia onset the more strongly, the weaker the individual’s vulnerability or strength of predisposition” (p. 31). Therefore, the effects of oestrogen were found to be more protective if there was no previous family history of the disease.

Diagram from Hafner (2003) demonstrating the oestrogen hypothesis effect in the presence of familial history of schizophrenia.

The genetic predisposition to developing schizophrenia is further corroborated by findings from two twin studies. Picchioni et al. (2009) say, “The neurodevelopmental model suggests that the origins of the illness lie in early life and that patients manifest signs of abnormal neural function well before overt psychotic symptoms.” In the study by Picchioni et al. (2009), it was observed that adults with schizophrenia were reported by their parents to have had abnormal social development and personality characteristics during childhood.

However, evidence of a familial genetic influence comes from the observation that in discordant monozygotic twins, in which one twin had schizophrenia and the other did not, the healthy twin still exhibited abnormalities in all three developmental ratings used in the study. Compared to their healthy counterparts, the schizophrenic twins were abnormal not only in social development but also in personality, academic ability, and subclinical non-psychotic symptoms. The heritability of schizophrenic traits in twins is observed to be between twenty-seven and sixty percent, and schizophrenic traits are strongly linked to the later development of schizophrenia. However, the authors also assert that environmental influences contribute approximately one-third of the variability observed between twins, suggesting an interaction between genetic and environmental risk factors.

Another topic that challenges the social causation theory of schizophrenia is childhood schizophrenia: how could environmental factors cause such a profound and rapid impact in children? Evidence suggests that genetic rather than environmental factors may provide a more feasible explanation for childhood-onset schizophrenia. A study by Addington and Rapoport (2009) on a sample of childhood-onset schizophrenics revealed that ten percent of the sample had cytogenetic abnormalities. In addition, many of the copy number variations were observed to be inherited and were far more prevalent in younger-onset schizophrenia cases. Some of the abnormalities observed in children with schizophrenia are a deletion at chromosome 22q11.21 and a duplication at chromosome 16p11.2. Furthermore, a few studies have associated deletions at the NRXN1 locus with schizophrenia as well as with autism and mental retardation. The multiplicity of neurological disorders arising from this specific mutation suggests that this locus has an important effect on early neurodevelopment. The genetic copy number variations most likely to affect schizophrenia were those associated with some form of developmental delay. The authors state, “If we allow ourselves to assume that all the novel CNVs [copy number variations] that impact genes discovered in the COS [childhood-onset schizophrenia] cohort to date are pathogenic, then we can explain the etiology of almost forty percent of our patients.” However, there is still much more research to be done to determine the specificity of these suspected genetic pathologies.

Diagram from Addington and Rapoport (2009) representing the percentages of various developmental histories in schizophrenic patients.

In concordance with these biological associations, another study points out brain abnormalities found in schizophrenic patients (Eisenberg & Berman, 2009). One area of abnormality is the dorsolateral prefrontal cortex, which is implicated in higher-order cognitive processing. Within this area there is decreased N-acetylaspartate, which may be related to the working memory impairment seen in affected schizophrenics. Another finding in schizophrenic patients is an abnormally elevated level of activation in the ventrolateral prefrontal cortex, which seems to compensate for the decreased activation and impaired function of the dorsolateral prefrontal cortex. The anterior cingulate cortex is yet another brain area found, in postmortem studies, to have abnormalities in its neurons. MRI scans have also shown reductions in anterior cingulate volume, gray matter quantity, and N-acetylaspartate. Other areas that have shown abnormalities include the inferior parietal lobule, medial temporal cortex, neostriatum, and thalamus. Eisenberg and Berman (2009) assert that schizophrenia is highly heritable. Their claim rests on neuroimaging findings showing that healthy family members share similar phenotypic abnormalities in the brain, but in an attenuated form. One of the genes associated with risk of schizophrenia is catechol-O-methyltransferase, which encodes an enzyme involved in cortical dopamine catabolism. In addition, a single nucleotide polymorphism has been identified in the GRM3 gene that reduces prefrontal excitatory mRNA expression. The PPP1R1B gene haplotype in schizophrenic patients is thought to be correlated with decreased IQ, verbal fluency, and working memory. Other abnormalities are seen in the genes PRODH, AKT1, DISC1, NRG1, and ZNF804A. All of these structural and genetic abnormalities are thought to adversely affect an individual’s executive functioning in schizophrenia.


Overall, Lim et al. (2009) present the view of schizophrenia as a “progressive developmental pathophysiology of the illness that may result from several factors working individually or in combination.” They suggest that the risk of illness requires the presence of risk factors throughout the life course. Lim et al. (2009) suggest that early risk factors, including prenatal stress, child abuse, and an urban environment, may have a role in adversely altering neural networks in the brain. The authors also introduce the idea that late psychosocial factors may provide the tipping point in determining which individuals actually develop schizophrenia. One such late psychosocial factor is disruptive or very influential life events. Lim et al. found that one study reported forty-six percent of the sample to have experienced such a life event at some point during the three weeks prior to illness onset, possibly contributing to the triggering of the disorder. Other late psychosocial factors thought to contribute to schizophrenia risk include minority status during migration and high expressed emotion. These psychosocial stressors are believed to have a neurobiological effect, damaging the hippocampus and contributing to abnormalities in the HPA axis, mentioned earlier, both of which are associated with schizophrenia onset.

Although several studies offer greater insight into the risk factors for schizophrenia, these studies are still limited. In terms of the relationship between low socioeconomic status and greater risk for schizophrenia, more research is needed on the actual mechanisms by which a lower family socioeconomic status results in greater risk for psychosis. Various aspects of low socioeconomic status at the community level adversely affect the individual, and the exact mechanisms through which these effects occur should also be explored. When looking at socioeconomic status and immigrants, it has been observed that socioeconomic status does not have as great an impact on schizophrenia risk as other social factors such as ethnic identity. However, most studies focusing on immigrants look at the treated population, which may consist of immigrants of higher socioeconomic status. It is therefore likely that lower socioeconomic groups are underrepresented (Cantor-Graae and Selten, 2005). To further elaborate on risk caused by immigration, more research should be done while controlling for the social role of discrimination in the increased risk of being diagnosed with schizophrenia. Medical professionals may more readily diagnose a minority patient with a stigmatizing illness than someone from the majority population. Weiser et al. (2008) noted that “Young, white, middle-class patients are often diagnosed with psychotic depression or psychotic bipolar disorder [versus schizophrenia], which is less stigmatizing, in the early stages of illness.” Therefore, studies of schizophrenia prevalence must distinguish between actually higher rates among immigrants and possible diagnostic biases due to discrimination.

Another possible bias in schizophrenia prevalence relates to geographical distribution. It should be investigated whether the lower prevalence in developing countries near the equator could be attributed to under-diagnosing. Developing countries have fewer resources and less access to mental health services and specialists. People suffering from mental disorders within such weak healthcare infrastructures may be treated as if their illness were actually a physical manifestation, if they are treated at all.



Along with studies on the geographical distribution of schizophrenia, caution should be taken when looking at the relationship with urbanization: there are many varying methods of measuring urbanization, limiting comparability between studies. Another interesting topic that could be explored further is schizophrenics’ causal perceptions of the origin of their own mental illness. Understanding what those affected believe to be the causes of their disorder may point towards more specific causative agents to focus on and, if these are environmentally based, could provide opportunities for defensive strategies against the disorder.

Where possible, interventions to establish preventative measures against schizophrenia should be sought. One area in which this could occur is immigrant groups: efforts should be made to empower ethnic minorities so that they do not associate negative feelings with their ethnic identity, thereby avoiding the stress of this negative self-perception. Another area for intervention could be health care, motivating physicians to ensure that pregnant patients receive an adequate amount of vitamin D throughout pregnancy and offering vitamin D supplements when necessary. A study done in Finland showed that infants who received vitamin D supplements had a decreased risk of developing schizophrenia compared to those not given these supplements (Cherniak et al. 2009). This demonstrates that vitamin D intervention strategies could be successful in lowering schizophrenia risk. It could also be effective to educate patients about the importance of vitamin D so as to ensure they take a proactive role in consuming it in their diets. In addition, Read et al. (2005) call for prevention by suggesting that more help be given to child abuse survivors, believing that schizophrenia resulting from childhood trauma can be prevented. Further research into genetic impacts on schizophrenia may also eventually allow for prenatal screening and possible early treatment strategies for those identified as being at significant risk of developing schizophrenia.

The social causation theory draws substantial support from studies on socioeconomic status in childhood, immigration, exposure to infections, geographical location, vitamin D, urbanicity, childhood trauma, and more, indicating that environmental factors definitely play a role in the risk of developing schizophrenia. However, no single environmental factor is likely to be the sole explanation for schizophrenia development; these factors must act in tandem. Many authors argue against the social causation theory and instead use social selection theories of social drift to account for differences in socioeconomic status and urbanicity. There is also convincing evidence from studies of biological effects on the risk for schizophrenia, such as the beneficial influence of oestrogen in delaying schizophrenia onset, abnormalities in genes and brain structures, and familial heritability as shown in twin studies. Schizophrenia is very difficult to study: many variables must be controlled to assess the true relationships of suspected causal factors, since these factors cannot be studied in isolation. Improved study designs that attempt to control for confounding risk factors will help provide more verifiable evidence of causal effects and may narrow the focus of schizophrenia studies. A future explanation of the origin of schizophrenia likely lies in the interaction between environmental and biological or genetic risk factors, combining the social causation and social selection theories. The etiology of schizophrenia is perhaps best captured by Krabbendam and van Os (2005), who state, “It is increasingly likely that genetic effects in schizophrenia are to a large degree conditional on the environment and vice versa, that environmental effects are conditional on genetic risk” (p. 795). The exact causes of schizophrenia remain largely unknown, and neither the social causation nor the social selection theory provides sufficient evidence to determine that either environmental or genetic and biological influences act alone in increasing an individual’s risk of developing schizophrenia. This idea is further reinforced by Read et al. (2005) and their concept of synergism:


“Some environmental factors may show synergism with genetic risk, i.e. genes and environment reinforce each other so that, for example, their separate weaker effects become a joint strong effect.” It is important to understand the multifaceted nature of schizophrenia so that improved prevention strategies and treatment options may yield a cure in the future.

References

Addington, A. M., and Rapoport, J. L. (2009). The genetics of childhood-onset schizophrenia: When madness strikes the prepubescent. Current Psychiatry Reports, 11:156-161.

Cantor-Graae, E., and Selten, J. P. (2005). Schizophrenia and migration: A meta-analysis and review. Am J Psychiatry, 162:12-24. (Retrieved from http://ajp.psychiatryonline.org on November 20, 2009.)

Cherniak, E. P., Troen, B. R., Florez, H. J., Roos, B. A., and Levis, S. (2009). Some new food for thought: The role of vitamin D in the mental health of older adults. Current Psychiatry Reports, 11:12-19.

Corcoran, C., et al. (2009). Effect of socioeconomic status and parents’ education at birth on risk of schizophrenia in offspring. Soc Psychiatry Psychiatr Epidemiol, 44:265-271.

Corcoran, C., Perrin, M., Harlap, S., Deutsch, L., Fennig, S., Manor, O., Nahon, D., Kimhy, D., Malaspina, D., and Susser, E. (2009). Incidence of schizophrenia among second-generation immigrants in the Jerusalem perinatal cohort. Schizophrenia Bulletin, 35:596-602.

Eisenberg, D. P., and Berman, K. F. (2009). Executive function, neural circuitry, and genetic mechanisms in schizophrenia. Neuropsychopharmacology Reviews. (Retrieved from advance online publication at www.neuropsychopharmacology.com on November 22, 2009.)

Hafner, H. (2003). Gender differences in schizophrenia. Psychoneuroendocrinology, 28:17-54.

Kinney, D. K., Teixeira, P., Hsu, D., Napoleon, S. C., Crowley, D. J., Miller, A., Hyman, W., and Huang, E. (2009). Relation of schizophrenia prevalence to latitude, climate, fish consumption, infant mortality, and skin color: A role for prenatal vitamin D deficiency and infections? Schizophrenia Bulletin Advance Access.

Krabbendam, L., and van Os, J. (2005). Schizophrenia and urbanicity: A major environmental influence – conditional on genetic risk. Schizophrenia Bulletin, 31:795-799.

Lim, C., Chong, S. A., and Keefe, R. S. E. (2009). Psychosocial factors in the neurobiology of schizophrenia: A selective review. Ann Acad Med Singapore, 38:402-407.

Link, B. G., Dohrenwend, B. P., and Skodol, A. E. (1986). Socioeconomic status and schizophrenia: Noisome occupational characteristics as a risk factor. American Sociological Review, 51:242-258.

Mortensen, P. B., Cantor-Graae, E., and McNeil, T. F. (1997). Increased rates of schizophrenia among immigrants: Some methodological concerns raised by Danish findings. Psychological Medicine, 27:813-820.

Pedersen, C. B., and Mortensen, P. B. (2006). Are the cause(s) responsible for urban-rural differences in schizophrenia risk rooted in families or individuals? Am J Epidemiol, 163:971-978.

Pedersen, C. B., and Mortensen, P. B. (2006). Urbanization and traffic related exposures as risk factors for schizophrenia. BMC Psychiatry, 6:2.

Picchioni, M. M., Walshe, M., Toulopoulou, T., McDonald, C., Taylor, M., Waters-Metenier, S., Bramon, E., Regojo, A., Murray, R. M., and Rijsdijk, F. (2009). Genetic modelling of childhood social development and personality in twins and siblings with schizophrenia. Psychological Medicine.

Read, J., van Os, J., Morrison, A. P., and Ross, C. A. (2005). Childhood trauma, psychosis, and schizophrenia: A literature review with theoretical and clinical implications. Acta Psychiatr Scand, 112:330-350.

“Schizophrenia.” (2009). National Institute of Mental Health. (Retrieved from http://www.nimh.nih.gov/health/topics/schizophrenia/index.shtml on November 13, 2009.)

Veling, W., Hoek, H. W., Wiersma, D., and Mackenbach, J. P. (2009). Ethnic identity and the risk of schizophrenia in ethnic minorities: A case-control study. Schizophrenia Bulletin Advance Access.

Weiser, M., Werberloff, N., Vishna, T., Yoffe, R., Lubin, G., Shmushkevitch, M., and Davidson, M. (2008). Elaboration on immigration and risk for schizophrenia. Psychological Medicine, 38:1113-1119.

Werner, S., Malaspina, D., and Rabinowitz, J. (2007). Socioeconomic status at birth is associated with risk of schizophrenia: Population-based multilevel study. Schizophrenia Bulletin, 33:1373-1378.

Wu, E. Q., Shi, L., Birnbaum, H., Hudson, T., and Kessler, R. (2006). Annual prevalence of diagnosed schizophrenia in the USA: A claims data analysis approach. Psychological Medicine, 36:1535-1540.



