B E R K E L E Y  S C I E N T I F I C  J O U R N A L  /bûrk'lē sī'ən-tĭf'ĭk/
the journal of young scientists

Death and Dying • Spring 2013 • Volume 17 • Issue 2

c o n t e n t s  /kŏn'těnt's/

[ R E S E A R C H ]  /rē'sûrch'/

- Activity Patterns of Golden Eagles in San Benito County, CA Taichi Natake

- Orientation-Dependent Neuronal Degradation Resulting from Axonal Strain Experienced in Football-Realistic Acceleration Evan Lyall, Spencer Scott, Jason Silver, and Samantha Smiley

research, reviews and theses

[ F E A T U R E S  &  I N T E R V I E W S ]  /fē'chərs/

- Nuclear Power Jonathan Melville
- Medical Robots and Devices Jing Chen
- Asteroid Impacts Sharath Reddy
- Embalming Alvin Huang
- Antibiotic Resistance Rohini Behl
- Water Intoxication Nithya Lingampalli
- Radiocarbon Dating Sean Purcell
- Interview with Professor Alexei Filippenko Prashant Bhat, Kuntal Chowdhary, Jingyan Wang, and Ali Palla


- Interview with Professor Kathleen Collins Prashant Bhat, Kuntal Chowdhary, Jingyan Wang, and Ali Palla




[ C O N T A C T ]  /kŏn'tākt'/

Mailing Address: Berkeley Scientific Journal, 5 Durant Hall #2940, Berkeley, CA 94720-2940
Phone Number: (510) 643-5374
Email: berkeleyscientific@gmail.com
Online: http://www.ocf.berkeley.edu/~bsj/contact.htm

[ B S J  S T A F F ]  /bē ěs jā stāf/

Editor-in-Chief / Managing Editor
Kapil Gururangan, Manisha Rai

Features Editors
Emily Low, Jessica Robbins, Michael Sadighian

Interview Editors and Team
Kuntal Chowdhary, Prashant Bhat, Ali Palla, Julianne Bozzini

Research Editor, Design & Layout Editor, Features Writers, Research, and Design & Layout
David Ding, Malone Locke, Matthew Miranda, Victoria Nguyen, Alvin Huang, Jing Chen, Jonathan Melville, Nithya Lingampalli, Rohini Behl, Sean Purcell, Sharath Reddy, Brenda Luna, Hee Soo Kim, Tommy Shi, Emily Domanico, Julian Zhu, Krystal Smith, Lucy Zhang, Spring Chau



The Decline and Death of Nuclear Power

Jonathan Melville

Nuclear power is history. That is to say, nuclear power is deeply steeped in history. The atomic nucleus itself was found to be a heterogeneous mass of protons and neutrons in 1932, but it was a mere 6 years after the composition of the nucleus was determined that Lise Meitner and Otto Hahn discovered that bombarding heavy elements with neutrons could crack their nuclei in two -- a process they called nuclear fission. Only 4 years after that, the first nuclear reactor, Chicago Pile-1, went critical, achieving the first self-sustaining nuclear chain reaction ever. At 1:50 PM on December 20th, 1951, in the tiny town of Arco, Idaho, Experimental Breeder Reactor I powered on for the first time. For 22 short minutes, the light bulbs above the heads of the scientists were lit not by inspiration, but by nuclear power. For the first time in history, the power of the atom was constructively harnessed. In 19 short years, nuclear chemistry had evolved from a fledgling concept to a science that altered the balance of power in the world forever.


The rapid development of nuclear chemistry was due largely to the political and economic forces that acted upon it in its formative years. Because of the unparalleled pools of energies waiting to be tapped in the nucleus, the new technology was developed not just as a tool, but as a weapon -- a fact exacerbated by the era its maturation coincided with: World War II and the Cold War. It is this historical baggage that holds back nuclear power today, bearing the ire of a sensationalist media and an uninformed populace, while governments refuse to relinquish the nuclear arms that continue to define warfare -- and hence international politics -- today. While fossil fuels pump our atmosphere full of greenhouse gases and we desperately scramble to find alternative-energy solutions, neglecting nuclear power as a viable energy source is an imprudent move.

The most powerful driving force behind both the growth and decline of nuclear power has always been public sentiment. When nuclear power first came into the public eye in the 1960s (and up until the mid-1970s), nuclear chemistry was a highly regarded field. Support for the construction of nuclear power plants held a 2:1 majority among the general population, especially in the context of an Arab oil embargo and the first hints of a burgeoning "energy crisis" (Rosa & Dunlap, 1994). Even in the immediate aftermath of Three-Mile Island, the first and only nuclear energy disaster on US soil, nuclear power retained a plurality of popular support. In the early 1980s, however, public opinion suddenly flipped, as voters now opposed the continued growth of nuclear power by a 2:1 ratio; support for nuclear power has never held a plurality since (Ramana, 2011). A major factor in this shift was the crystallization of opinion against nuclear power: a steady stream of voters went from being "unsure" or "ambivalent" about nuclear power to firmly against it. Nuclear power bottomed out at the height of the Cold War, when paranoia of global nuclear annihilation reached its peak. It is this unspoken association between nuclear weaponry and nuclear power that is responsible for much of the fear and mistrust of nuclear power, even today.

Scientifically, this premise is fundamentally flawed; nuclear weaponry and nuclear power are as dissimilar as two subjects that both contain the word "nuclear" can be.

“Because of the unparalleled pools of energies waiting to be tapped in the nucleus, [nuclear power] developed not just as a tool, but as a weapon.”


Figure 1 Nuclear power is the most-used non-fossil-fuel energy source in the US, and contributes more than all forms of renewable energy combined (Energy Information Administration, 2012).

Nuclear reactors in power plants are intrinsically distinct from nuclear bombs, not merely in application or even in construction, but in that they utilize completely different radioactive fuel sources; the fuel used in nuclear power plants is almost completely useless as weapons-grade radioactive material, due to the presence of adulterating plutonium-240 that greatly impedes the ability of fissile plutonium-239 to be weaponized (Sutcliffe & Trapp, 1997). However, this has not severed the perceived connection between all things nuclear, which causes the public to look with greater disdain upon nuclear power the more nuclear weaponry is on the world's stage. With the aid of antinuclear watchdog groups, nuclear power has been warped into a political talking point by people who do not fully understand the science behind it. Without public support, nuclear power loses government support, and with that goes research and expansion funding, causing nuclear power to simply fall off the energy map.

That is not to say that the disappearance of nuclear power is a foregone conclusion. Nuclear power may be dying or in decline, but it is far from dead. In the US, at least, nuclear power is at a crossroads: no new reactors have been built on US soil since the Three-Mile Island incident in 1979. At the same time, however, the US Nuclear Regulatory Commission has approved the first two nuclear reactors in 35 years, to be constructed in Georgia and expected to begin operation in 2016 (Tracy, 2012). While a majority (71%) of US citizens favor the use of nuclear power as an energy source, only 43% believe that more nuclear power plants should be constructed ("The Thirty-Year Itch", 2012). Nuclear power faces intense opposition in the future, mostly due to public interest groups rooted in deep-seated misconceptions, but it is possible that in the next decade or two we may see a resurrection of the nuclear power industry in the US.

Sadly, it is not so easy to make the same claim for many other countries worldwide. In Europe, nuclear energy has been a highly competitive power source for decades, but many countries are uneasy about continued nuclear development and several have made motions to phase it out completely. Even in France, where 80% of all electricity is produced by nuclear power plants, 83% of the public is opposed to the building of new reactors to meet rising energy demands.


In Germany, 88% of the population voted against the renewal of nuclear power plants for 12 more years; along with Switzerland and Belgium, they have passed movements to phase out nuclear power completely in the next 10-20 years (Phillips, 2011). In Canada, a majority of the population opposes nuclear power as an energy source; the entire province of British Columbia has declared itself a nuclear-free zone. In fact, the government-owned electricity company BC Hydro has gone so far as to state that they "[reject] consideration of nuclear power in implementing [their] clean energy strategy" (BC Hydro, 2010). In Japan, every single nuclear power plant has been shut down, the result of a firestorm of anti-nuclear rhetoric in the aftermath of the Fukushima Daiichi disaster. In fact, of all the G8 countries, only the US, UK, and Russia have not made motions toward phasing out nuclear power as an energy source, as compared to Germany, France, Canada and Japan (Italy has no reactors, yet recently scrapped a plan to construct some). However, with the energy demands of all these countries rising, and because nuclear power provides at least 15% of the total energy supply in each of these countries, it is unlikely that they will be able to completely replace nuclear power with renewable sources of energy without resorting to fossil fuel sources.

While these statistics do illustrate an overlying trend in the decline of nuclear power, a majority of the more recent motions to phase out nuclear power can be traced back to the Fukushima Daiichi nuclear crisis. Prior to Fukushima, nuclear power was holding relatively steady in opinion polls -- still a minority position but, having largely faded from the public consciousness, not a major political talking point (Ramana, 2011; Harvey, Vidal, & Carrington, 2012). When the March 2011 earthquake and tsunami hit Japan, it caused the six reactors at the Fukushima Daiichi Nuclear Power Plant to shut down, while flooding prevented auxiliary generators from keeping the emergency coolant pumps running. The disaster was worsened by poor communication and the general incompetence of many officials; it has been described as a "snowballing disaster" with poor disaster response, characterized by a lack of government action.



The plant itself was built in an unsafe region, next to the ocean on a tsunami-prone coast. When the threat of reactor meltdown was recognized, plant officials delayed a final attempt to cool the reactors by flooding them with seawater, because doing so would damage them irreparably. By the time the government ordered that the plant be flooded, it was too late to prevent the reactors from melting down. After the plant melted down, Japanese officials consistently underestimated the magnitude of the disaster, and neglected to make the severity of the incident clear to the public or the media. When the US Department of Energy provided data on radiation levels showing that the radiation danger zone stretched far outside the evacuation radius, Japanese officials failed to act. It was not until a week later, when the US maps were published, that the Japanese government released similar findings and expanded the evacuation efforts. Despite terrible damage control and abysmal public communication (at one point evacuees were recommended to move from an irradiated area to a zone with higher radiation levels), epidemiologists estimate on the order of only 0-100 potential radiation casualties due to the incident (Funabashi & Kitazawa, 2012). Despite the small direct damage of the event, it has led many countries to reevaluate their nuclear programs, and is the direct cause of Germany, Belgium, and Switzerland's movements to phase out nuclear power entirely.

One of the major claims by opponents of nuclear power is that nuclear power plants are inherently dangerous, releasing radioactive material into the environment and presenting a regional threat in the form of a potential nuclear meltdown.

“...the potential danger a nuclear power plant poses is greater than any other source of energy, and no safety measures are perfectly preventative.”


Image 1 In the aftermath of Chernobyl, hundreds of thousands of “liquidators” scoured the area around Chernobyl, isolating radiation pockets. The vehicles they used lie untouched, still dangerously radioactive.

It is mostly for these reasons that the Nuclear Regulatory Commission was founded in the US, to supervise and regulate the construction and maintenance of nuclear power plants (US Nuclear Regulatory Commission [US NRC], 2012). The NRC mandates strict safety regulations regarding containment at nuclear power plants, as well as physical security to deter theft, sabotage, or acts of terror, in addition to requiring a stringent application process before any reactor construction is approved (US NRC, 2013). The best example of the success of these safety and containment protocols is the 1979 Three-Mile Island incident in Pennsylvania, when operator error and a core meltdown resulted in the release of fission byproducts to the environment via a stuck release valve.

Image 2 The Chernobyl Plant explosion released around 40 GJ of energy -- equivalent to about 10 tons of TNT (Dubasov & Pakhomov, 2009).
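As a quick arithmetic check on the conversion in the caption above: one ton of TNT is defined as 4.184 GJ, so dividing the quoted 40 GJ by that factor gives just under ten tons. A minimal sketch, using only the caption's figure and that standard conversion factor:

```python
# Check the caption's conversion: 1 ton of TNT is defined as 4.184 GJ.
energy_gj = 40.0                       # energy release quoted in the caption
tons_tnt = energy_gj / 4.184
print(f"{energy_gj:.0f} GJ ≈ {tons_tnt:.1f} tons of TNT")  # ≈ 9.6, i.e. about 10 tons
```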

Because of the containment structures put in place, only gaseous xenon and krypton were released in any significant quantity; areas near the reactor were exposed to approximately 1.4 mrem of radiation (for context, a typical dental x-ray is about 3 mrem). The day-to-day environmental effects of nuclear power plants are not much higher, either. Studies have shown that coal power plants, counterintuitively enough, release more radiation into the environment than nuclear power plants, due to the concentration of trace uranium and thorium in coal when it is burned -- radiation levels of crops grown near coal plants have been found to be 50-200 times higher than those of crops grown near nuclear power plants (Hvistendahl, 2007). Notably, neither level is high enough to be biologically harmful, but the belief that nuclear reactors release significant amounts of dangerous radiation into the environment is fundamentally mistaken.

Image 3 While the Three-Mile Island nuclear incident resulted in a core meltdown and the release of radioactive isotopes, effective control mechanisms meant that the epidemiological effects of the disaster were minimal.


The catch, of course, is that when these precautionary measures founder and a nuclear reactor does fail, the potential results are catastrophic. The prime example of a cataclysmic nuclear accident is the 1986 Chernobyl disaster in Ukraine -- mostly because it is the only disaster of that level ever to occur. Due to an engineering oversight, the control rod reactor shutdown systems did not function perfectly, and after a routine experiment they caused the reactor to overheat and explode. Radioactive fallout spread across Eastern Europe, triggering radiation alarms in nuclear power plants as far away as Sweden. The Soviet disaster response was relatively prompt: teams of volunteer "liquidators" were sent in to clear radioactive debris and a hasty concrete "sarcophagus" was erected to isolate the reactor; the total cost of cleanup came to about $37 billion in today's dollars, functionally bankrupting the USSR. An estimated 200,000 people were evacuated; the nearby (and now iconic) towns of Pripyat and Chernobyl still lie abandoned as a testament to the calamitous event (International Atomic Energy Agency [IAEA], 1992).

Despite their immediate and efficient actions (which doubtless saved thousands of lives), there were still innumerable casualties. Various epidemiological studies have estimated between 5,000 and 50,000 premature deaths by cancer due to the incident. To prevent further contamination, a 30 kilometer "exclusion zone" was established around the plant, which is not expected to be habitable for hundreds of thousands of years (IAEA, 2006; González, 1996). These saddening statistics underlie a simple fact about nuclear power: the potential danger a nuclear power plant poses is greater than any other source of energy, and no safety measures are perfectly preventative. In the event of a disaster, damage control can be unreliable due to the potential magnitude of the incident; as such, the best we can do is reduce the likelihood of a mishap, both by learning from and adapting to past mistakes, and by exercising constant vigilance in nuclear reactor maintenance and security. However, when a nuclear disaster does occur -- which it inevitably will -- even the best disaster control could leave anywhere from dozens to millions of lives up in the air. Despite these caveats, nuclear power is certainly a viable source of energy for an advancing world.

Image 4 The Fukushima Daiichi disaster, despite being exacerbated by bureaucratic incompetence, was orders of magnitude less damaging than Chernobyl due to successful containment structures (Funabashi & Kitazawa, 2012).

Compared to traditional fossil fuels, nuclear power is clean, sustainable, and far less polluting on a day-to-day basis; compared to renewable energy sources, it is more efficient and has a greater maximum energy potential in regions where geothermal, wind, or hydroelectric energy is not geographically optimal. While nuclear disasters are, to say the least, catastrophic, they are few and far between. Ultimately, it is this constant fear of catastrophe that is responsible for public mistrust of nuclear power. It is common knowledge that coal power plants are filthy and polluting, but because their environmental and societal impact is not immediate, they are exposed to far less public scrutiny. Nuclear power's negative effects are not cumulative: they are short, sudden, violent, and easily headlined by the media, lingering in the public consciousness for years. By learning and adapting from past disasters, we can make nuclear power plants iteratively safer. Of the three major nuclear power disasters that have defined the science -- Three-Mile Island, Chernobyl, and Fukushima -- only Chernobyl caused significant casualties and had deep economic and environmental ramifications. Three-Mile Island and Fukushima, by comparison, were nuclear containment success stories, resulting in orders of magnitude less radiation released and hardly any radiation casualties. While all three were serious radiation breaches and any loss of life is terrible, to continue to judge all nuclear power by a single decades-old worst-case scenario is shortsighted.

In the future, a movement away from nonrenewable, polluting fossil fuels to clean, sustainable alternative energy sources is inevitable; ignoring nuclear power as an important intermediary in this transition only makes such a transition more difficult and less likely. Nuclear power is the largest non-fossil-fuel source of energy in the US, producing 19% of total electricity generated, while every form of renewable energy combined comprises only 13% (US Energy Information Administration, 2012). An attempt to phase out both nuclear energy and fossil fuels at the same time would take decades at the least and could overload the US energy market with unrealistic wind, solar and hydroelectric energy demands that vastly outstrip these sources' capacities. To push away from nuclear power now would only increase US dependence on unsustainable sources of energy and increase the difficulty of tackling the energy crisis.
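To put the two shares quoted above in perspective, here is a minimal sketch of the substitution arithmetic. The 19% and 13% figures are the ones from the text; the rest is an illustration of scale, not a projection of actual grid capacity:

```python
nuclear_share = 0.19      # US generation share from nuclear (quoted above)
renewable_share = 0.13    # US generation share from all renewables combined (quoted above)

# If renewables alone had to absorb nuclear's share on top of their own output:
required_share = nuclear_share + renewable_share
growth_factor = required_share / renewable_share

print(f"Renewables would need to supply {required_share:.0%} of generation,")
print(f"roughly {growth_factor:.1f}x their current output.")   # about 2.5x
```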

"Nuclear power's negative effects are not cumulative: they are short, sudden, violent, and easily headlined by the media, lingering in the public consciousness for years."

Nuclear power is history; it has been defined by its history ever since the first atom bombs were dropped on Japan. It has been slowly dying for decades, wrongly maligned for an implicit yet completely nonexistent association with nuclear weaponry and for preconceived notions based on a single historical worst-case scenario. Rather than learn from the past and improve upon it, there has been a push to abandon nuclear power entirely. While nuclear power is far from perfect, it is a definite improvement upon polluting fossil fuels, and a powerful ally in the transition away from them toward ultimately renewable sources like wind, hydroelectric, and solar energy. While in some countries, like Germany and France, the anti-nuclear movement has taken such a hold that its salvation is increasingly unlikely, in the US there is still a glimmer of hope for future development and research. For the first time since the Cold War, nuclear power plants are being planned and constructed. Only time will tell if these reactors will pave the way for the next generation or are merely the dying gasps of a doomed industry.

References

BC Hydro (2010). New Act powers B.C. forward with clean energy and jobs. BC Hydro - For Generations. Retrieved from http://www.bchydro.com/news/press_centre/press_releases/2010/new_act_powers_bc_forward.html
Dubasov, Y. V., & Pakhomov, S. A. (2009). Estimation of Explosion Energy Yield at Chernobyl NPP Accident. Pure and Applied Geophysics, 167(4-5), 575. doi:10.1007/s00024-009-0029-9
The Economist (2012). Nuclear power: The 30-year itch. The Economist. http://www.economist.com/node/21547803
Energy Information Administration (2012). US electricity generation by energy source. US Energy Information Administration. Retrieved from http://www.eia.gov/tools/faqs/faq.cfm?id=427&t=3
Funabashi, Y., & Kitazawa, K. (2012). Fukushima in review: A complex disaster, a disastrous response. The Bulletin of the Atomic Scientists, 0(0), 1-13. doi:10.1177/0096340212440359
González, A. J. (1996). Chernobyl -- Ten Years After. IAEA Bulletin, (38), 2-13. Retrieved from http://www.iaea.org/Publications/Magazines/Bulletin/Bull383/38302740213.pdf
Harvey, F., Vidal, J., & Carrington, D. (2012). Dramatic fall in new nuclear power stations after Fukushima. The Guardian. http://www.guardian.co.uk/environment/2012/mar/08/fall-nuclear-power-stations-fukushima
Hvistendahl, M. (2007, December). Coal Ash Is More Radioactive than Nuclear Waste. Scientific American, 11-13. http://www.scientificamerican.com/article.cfm?id=coal-ash-is-more-radioactive-than-nuclear-waste



The International Atomic Energy Agency (1992). INSAG 7: The Chernobyl Incident: Updating of INSAG-1. The International Nuclear Safety Advisory Group (Safety Series), 75. Retrieved from http://www-pub.iaea.org/MTCD/publications/PDF/Pub913e_web.pdf
The International Atomic Energy Agency (2006). Chernobyl's Legacy: Health, Environmental and Socio-Economic Impacts and Recommendations to the Governments of Belarus, the Russian Federation and Ukraine. The Chernobyl Forum: 2003-2005. Retrieved from http://www.iaea.org/Publications/Booklets/Chernobyl/chernobyl.pdf
Phillips, L. (2011). Europe divided over nuclear power after Fukushima disaster. The Guardian. http://www.guardian.co.uk/environment/2011/may/25/europe-divided-nuclear-power-fukushima
Ramana, M. V. (2011). Nuclear power and the public. The Bulletin of the Atomic Scientists. Retrieved from http://www.thebulletin.org/web-edition/features/nuclear-power-and-the-public
Rosa, E. A., & Dunlap, R. E. (1994). Poll Trends: Nuclear Power: Three Decades of Public Opinion. The Public Opinion Quarterly, 58(2), 295-324. Retrieved from http://www.jstor.org/stable/2749543
Sutcliffe, W. G., & Trapp, T. J. (1997). Nonproliferation and Arms Control Assessment of Weapons-Usable Fissile Material Storage and Excess Plutonium Disposition Alternatives, pp. 37-39. Lawrence Livermore National Laboratory, UCRL-LR-115542. Excerpt retrieved from http://www.ccnr.org/plute.html
Topf, A. (2011). Public's support of nuclear power waning; Brits and Americans buck the trend. Mining.com. http://www.mining.com/publics-support-of-nuclear-power-waning-brits-and-americans-buck-the-trend/
Tracy, R. (2012). U.S. Approves Nuclear Plants in South Carolina. The Wall Street Journal. http://online.wsj.com/article/SB10001424052702303816504577313873449843052.html
United States Nuclear Regulatory Commission (2012). NRC: Nuclear Security and Safeguards. nrc.gov. Retrieved from http://www.nrc.gov/security.html
United States Nuclear Regulatory Commission (2013). NRC: New Reactors. nrc.gov. Retrieved from http://www.nrc.gov/reactors/new-reactors.html
World Nuclear Association (2010). Radioactive Waste Management: Storage and Disposal Options. World Nuclear Association. Retrieved from http://world-nuclear.org/info/Nuclear-Fuel-Cycle/NuclearWastes/Appendices/Radioactive-Waste-Management-Appendix-2-Storage-and-Disposal-Options/

Image Sources

http://cdn.theatlantic.com/static/infocus/chernobyl25/c09_11025038.jpg
http://inapcache.boston.com/universal/site_graphics/blogs/bigpicture/chernobyl_25th_anniversary/bp2.jpg
http://neutrontrail.com/wp-content/uploads/2011/03/Three-MileIsland-djc-dot-com.jpg
http://3.bp.blogspot.com/-RHg5lRUAvF0/TYggY1kVQ2I/AAAAAAAADRg/-4Ewe2SQjlQ/s1600/japans-power-plantexplosion.jpg
http://puu.sh/2jeOL



Collision Course: The Threat and Effects of an Asteroid Impact

Sharath Reddy


In the winter of 2012, believers in the Mayan prophecy predicting the cataclysmic destruction of the Earth on December 21st, 2012 were disappointed when life continued on the 22nd. Although that may have been a "near miss" for the planet's inhabitants, a new threat is just upon the horizon: asteroids, comets, and meteoroids, all poised for a path dangerously close to our home. With the recent Russian meteor impact on February 15th, 2013 (Wall, 2013), and a host of other nearby impact candidates, an asteroid impact is an important global threat that must be investigated. These asteroids, comets, and meteoroids which pose a significant threat to the Earth are collectively known as near-Earth objects (NEOs), defined as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighborhood (LSST, 2003). Among these was asteroid 99942 Apophis, which in 2004 was predicted to collide with the Earth in the year 2029. NASA has since determined that there is no longer any significant probability of impact; however, considering the sheer number of NEOs in our solar system, investigating the possible threats and effects of such an impact is a necessity.

When an asteroid collides with another massive object (e.g. the earth), the kinetic energy of the asteroid is converted into heat and sound, creating pressure waves which travel radially outwards from the impact center, similar to that of an atomic bomb. However, an asteroid has the potential to be much more devastating than an atomic bomb: an asteroid just a few tens of meters across can yield the energy equivalent of ten to fifteen megatons of TNT (Institute of Physics). An asteroid of this size was the perpetrator of the 1908 Tunguska event, during which an asteroid exploded midair in a sparsely populated region of Siberia, releasing pressure waves that felled trees for 30 miles, an impact which was more powerful than the atomic bomb dropped on Hiroshima (Phillips, 2008). Villagers living nearly 40 miles from the blast epicenter were eyewitnesses to the event, communicating what they saw to Russian geologists and astronomers. The effects of this impact resulted from an asteroid only a few tens of meters across.

"When an asteroid collides with another massive object (e.g. the earth), the kinetic energy of the asteroid is converted into heat and sound, creating pressure waves which travel radially outwards from the impact center, similar to that of an atomic bomb."
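To make the kinetic-energy claim above concrete, the sketch below estimates the impact energy of a Tunguska-sized body from E = 1/2 mv². The diameter, density, and entry speed are illustrative assumptions added here (a roughly 50 m stony asteroid arriving at about 15 km/s), not figures from the article; the only fixed constant is the definition of a megaton of TNT as 4.184 x 10^15 J.

```python
import math

# Illustrative assumptions (not from the article): a stony body roughly
# the size of the Tunguska impactor.
diameter_m = 50.0        # "a few tens of meters across"
density_kg_m3 = 3000.0   # typical stony-asteroid density
speed_m_s = 15_000.0     # typical atmospheric entry speed

# Mass of a sphere: m = rho * (4/3) * pi * r^3
radius_m = diameter_m / 2.0
mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3

# Kinetic energy E = 1/2 m v^2, converted to megatons of TNT
# (1 Mt of TNT = 4.184e15 J by definition).
energy_joules = 0.5 * mass_kg * speed_m_s ** 2
energy_megatons = energy_joules / 4.184e15

print(f"mass ≈ {mass_kg:.2e} kg, energy ≈ {energy_megatons:.1f} Mt of TNT")
# With these inputs the answer is about 5 Mt -- the same order of magnitude
# as the 10-15 Mt usually quoted for Tunguska.
```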

What NASA and other space agencies are searching for are asteroids which are a kilometer long, much larger than a few tens of meters. The destructive capabilities of such an asteroid are difficult to imagine, but there is at least one such impact which scientists have studied. The archetypal example of an asteroid causing the extinction of entire species is the Cretaceous mass extinction, which occurred about 65 million years ago. Although fossils of dinosaurs had already been found in rock strata across the globe, the reason as to why they were there was not as well understood. Luis Alvarez, a physicist from UC Berkeley, and his son Walter (currently a professor at UC Berkeley), began studying rock strata using radioisotopes present in the different layers. While studying the layer of strata formed between the Cretaceous and Paleogene geological periods, they found levels of the element iridium to be much higher than in regular rock strata. Correlating this fact with the knowledge that asteroids were known to contain high concentrations of iridium, Alvarez posited that an asteroid must have made impact between those periods, depositing sediment containing high levels of iridium on rock strata. Since fossils of dinosaurs were found only in Cretaceous strata (66 million years ago) and older, this led to the formulation of the Alvarez hypothesis, which theorizes that the cause of the extinction of dinosaurs was an asteroid impact. One valuable piece of evidence in support of this hypothesis is a large crater on the Yucatan peninsula of Mexico, Chicxulub, that formed about 65 million years ago, precisely when the K-Pg (Cretaceous-Paleogene) extinction occurred. Asteroid impacts are point events, which lead to longer-term effects such as a decrease in temperature and death of vegetation, which would eventually cause the extinction of dinosaurs (Alvarez, 1983). In more recent years, with the advent of radiometric dating, the formation of the Chicxulub crater has been placed to be within 33,000 years of the time of the K-Pg extinction. General acceptance of the Alvarez hypothesis by the scientific community has also followed, cementing the notion of asteroids as the harbingers of mass extinction. An NEO such as that which collided with the Earth 65 million years ago poses just as great a threat to humanity as it did to the dinosaurs.

In order to visualize the threat an asteroid poses, we have to consider two things: the magnitude of the event, and the probability of it actually occurring. Comparing the threat of an asteroid impact to other geological threats allows us to gain a better understanding of the relative threat an NEO poses. Megatsunamis caused by massive undersea earthquakes and volcanic supereruptions such as the Yellowstone Caldera comprise some such possible geological threats. Both supereruptions and megatsunamis occur much more frequently, about once every 50,000 years; the last major asteroid impact event was 65 million years ago, although smaller asteroids hit every couple of decades (such as the Tunguska event and the recent Russian meteor). In fact, the earth is constantly bombarded by smaller particles originating in the asteroid belt. Objects a meter across impact on a yearly basis. However, more catastrophic events are much rarer, occurring tens of millions of years apart (Figure #2).

Figure #2. The frequency of asteroid impacts on Earth.


Asteroid impacts would have a much larger range of devastation relative to a megatsunami or volcanic supereruption; for instance, the landslide of the La Palma island in the Atlantic Ocean could trigger a megatsunami which would devastate the eastern seaboard of the US, but few other places of the world would be affected. An asteroid a kilometer across would almost certainly have global effects (McGuire, 2006). While a 1 kilometer asteroid impact only happens once every 600,000 years, the magnitude of the asteroid impact is very great compared to the other two possible geologic events. Even though impact rates have been stable for 50,000 years, the impact of an object greater than 1.5 kilometers in diameter would have a global effect, causing an estimated 1.5 billion deaths (Bland, 2005). Compared to the other possible geological threats the earth faces, an asteroid would affect the largest area and cause the most deaths. Furthermore, the statistical probability of dying due to an asteroid is 1 in 20,000; this relatively high number is due to the very high magnitude an impact would have. If an asteroid were to collide, even if it happens very rarely, you would have a higher chance of being killed. One reason the threat of impact is diminished is that discovery of a potentially dangerous near-Earth object usually occurs a decade or more in advance, giving us ample time to mount a defense against such an impact. Considering all of these factors, the overall threat of an asteroid impact still comes out to be lower than a megatsunami or supereruption (Bland, 2005).
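The 1-in-20,000 lifetime risk quoted above can be reproduced, to within a factor of about two, with a rough expected-value calculation. The recurrence interval and death toll are the article's figures; the 75-year lifespan and 7-billion world population are assumptions added here for illustration.

```python
# Figures from the article: a ~1 km impact roughly every 600,000 years,
# causing an estimated 1.5 billion deaths.
recurrence_years = 600_000
deaths_per_event = 1.5e9

# Illustrative assumptions (not from the article):
lifetime_years = 75
world_population = 7e9

# Chance such an impact occurs within one lifetime, times the chance a
# given person is among the victims if it does.
p_event_in_lifetime = lifetime_years / recurrence_years
p_death_given_event = deaths_per_event / world_population
lifetime_risk = p_event_in_lifetime * p_death_given_event

print(f"lifetime risk ≈ 1 in {1 / lifetime_risk:,.0f}")   # ≈ 1 in 37,000
```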

Although the probability of an NEO impact is small compared to other geologic catastrophes, the massive magnitude an impact would have calls for investigation and detection of these near-Earth objects. Most NEOs come from the Kuiper belt, an icy area exterior to Pluto with abundant comets, and from the asteroid belt, between Mars and Jupiter. NASA has been attempting to catalog the majority of the large NEO population in a reasonable amount of time, setting their initial estimates at about a decade (Pilcher, 1998). Following an increase in funding from only $1.5 million per year (1998) to $16 million (2012), the rate of discovery of new NEOs has increased exponentially (Figure #1). Currently, there are multiple systems in place ranging from ground-based optical telescopes to infrared space telescopes with the purpose of observing and predicting the movements of potentially hazardous asteroids and comets (LSST, 2003). Some are capable of predicting the orbital paths of an asteroid for the next century; some probes are being sent to already detected comets in order to collect samples (Evans et al., 2003).

Figure #1. The discovery of known near-Earth asteroids.

A problem which persists, even with advanced asteroid detection technologies in place, is the recurring impact of smaller asteroids, like the one that recently hit Russia. They are too small to be detected by current technologies put in place to find NEOs, meaning they could continue to collide with the Earth with little to no warning (Wall, 2013). The NASA Near Earth Object Program is charged with finding only objects which are 1 kilometer or larger; the recent Russian meteor was only about 15 meters wide, yet it was still able to injure hundreds. Less than one percent of asteroids that are 40 meters or larger have been detected, meaning the vast majority are still out there without our knowledge (Wall, 2013).

Knowing the threat posed by NEOs allows astrophysicists to predict the effects of an impact. These effects are categorized by the Torino Impact Scale, which ranks impact candidates based on the kinetic energy and the probability of the impact. The global threshold size of an asteroid hitting the earth and causing a catastrophic fall in global temperatures (like during the Cretaceous mass extinction) is about 1 kilometer.

Figure #3. The Torino Scale is for categorizing the impact hazard associated with near-Earth objects.
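The Torino Scale combines exactly the two quantities named above -- the probability of collision and the kinetic energy of the object -- into a single 0-10 hazard number. The toy function below is only meant to illustrate that idea; the cutoff values are invented for the example and do not reproduce the official Torino boundaries.

```python
def toy_torino_rating(probability: float, energy_mt: float) -> int:
    """Toy hazard rating in the spirit of the Torino Scale.

    Combines a collision probability with a kinetic energy in megatons of
    TNT. The cutoffs are illustrative only, not the official boundaries.
    """
    if probability <= 0.0 or energy_mt < 1.0:
        return 0                               # negligible hazard
    expected_energy = probability * energy_mt  # crude probability-weighted energy
    if expected_energy < 0.01:
        return 1                               # routine discovery, no public concern
    if expected_energy < 10.0:
        return 4                               # merits attention by astronomers
    if probability > 0.99:
        return 10                              # certain, potentially global impact
    return 8                                   # threatening: likely major damage

# Hypothetical example: a ~300 m object (~2,000 Mt) with a 1-in-10,000 chance.
print(toy_torino_rating(probability=1e-4, energy_mt=2_000.0))   # -> 4 with these cutoffs
```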

At the impact site, multiple events will occur. First, a massive crater will form where the NEO collides, leading to a massive release of heat and sound in the form of a firestorm; this will lead to the complete obliteration of flora and fauna in the immediate impact area, which can be up to ten times the size of the actual asteroid. Wildfires around the impact site would lead to more loss of life and smoke blocking the sun. Earthquakes could also occur up to Richter magnitude 13, potentially altering the orbit of the Earth.

There would also be long-term effects on the climate: within hours of the impact, a global firestorm would start due to the reentry of ejecta back into the Earth's atmosphere (Nelson, 2011). Dust caused by the impact would block sunlight, leading to continual darkness, death of vegetation, food shortage, and death. The ozone would also be affected by a large impact, leading to increased exposure of UV radiation to Earth (Edwards, 2010). This is because an impact in the ocean would eject water vapor containing chlorine and bromine high into the atmosphere, leading to reactions similar to those caused by CFCs, resulting in the creation of a massive ozone hole. The depletion of ozone leads to another serious effect: global warming, possibly for centuries. This would affect not only the climates of areas, but also the flora and fauna that live there, possibly changing entire ways of life. Another possible consequence of an ocean impact is a large tsunami which could flood the interior of continents, causing devastating short-term damage, but also long-term changes in the populations that live near these areas (Paine, 2011). Clearly, the threat and effects of an asteroid impact are very real, and have potentially disastrous consequences.

"For example, some astrobiologists hold that meteors were actually the vector of life, bringing microorganisms formed elsewhere in the universe to earth, a theory known as panspermia."

Although asteroids and other NEOs may have a reputation as civilization-threatening, they also have life-bringing capabilities. For example, some astrobiologists hold that meteors were actually the vector of life, bringing microorganisms formed elsewhere in the universe to earth, a theory known as panspermia. Furthermore, asteroids played a key role in the development of modern civilization and the rise of humans. As Luis and Walter Alvarez discovered, the impact of a large asteroid during the Cretaceous period led to the extinction of dinosaurs, allowing the rise of mammals as the dominant animal life form on the planet. This in turn would lead to the evolution of apes, and eventually humans (Institute of Physics). Of course, this doesn't really help out the current human population. However, steps are being taken to protect us from NEOs: the United Nations is currently discussing possible international defense systems and missions to predict and protect humanity from such a threat (David, 2013).


And although there are thousands of other asteroids circling around the solar system, with the advent of new technologies to identify NEOs, we are giving ourselves more time to react in the case that an asteroid is headed straight for us -- a collision which we'll be able to meet head on. Thus, the potentially catastrophic effects of an asteroid impact have made NEOs a global priority, which is exactly what we need to face such a threat.

References

Alvarez, L. (1983). Experimental evidence that an asteroid impact led to the extinction of many species 65 million years ago. Proceedings of the National Academy of Sciences, 80, 627-642.
Bland, P. (2005). The impact rate on earth. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 363(1837), 2793-2810.
Chamberlin, A. (2012). NEO discovery statistics. Retrieved from http://neo.jpl.nasa.gov/stats/
David, L. (2013, February 17). United Nations reviewing asteroid impact threat. Retrieved from http://www.nbcnews.com/id/50840661/ns/technology_and_science-space/t/united-nations-reviewing-asteroid-impact-threat/
Edwards, L. (2010, October). Asteroid strike into ocean could deplete ozone layer. Retrieved from http://phys.org/news/2010-10-asteroid-ocean-deplete-ozone-layer.html
Evans, J., Shelly, F., & Stokes, G. (2003). Detection and discovery of near-earth asteroids by the LINEAR program. Lincoln Laboratory Journal, 14(2), 199-220.
Institute of Physics. (n.d.). Meteor and asteroid impacts. Retrieved from http://www.iop.org/resources/topic/archive/meteor/index.html
LSST. (2003). Near earth objects. Retrieved from http://lsst.org/lsst/science/scientist_solar_system
McGuire, W. (2006). Global risk from extreme geophysical events: Threat identification and assessment. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 364(1845), 1889-1909.
Nelson, S. (2011, November 16). Meteorites, impacts, and mass extinction. Retrieved from http://www.tulane.edu/~sanelson/geol204/impacts.htm
Paine, M. (2011, February). Source of the Australasian tektites? Retrieved from http://users.tpg.com.au/horsts/paine_indochina.pdf
Phillips, T. (2008, June 30). The Tunguska impact -- 100 years later. Retrieved from http://science.nasa.gov/science-news/science-at-nasa/2008/30jun_tunguska/
Pilcher, C. (1998, March 21). US congressional hearings on near earth objects and planetary defense. Retrieved from http://impact.arc.nasa.gov/gov_asteroidperils_3.cfm
Wall, M. (2013, February 18). Russian meteor won't be the last. Retrieved from http://www.weather.com/news/russia-meteor-explosion-20130218

Image Sources

1) http://neo.jpl.nasa.gov/stats/images/web_total.png
2) http://www.tulane.edu/~sanelson/images/impactrecurrence.gif
3) http://upload.wikimedia.org/wikipedia/commons/thumb/8/8a/Torino_scale.svg/400px-Torino_scale.svg.png
4) http://upload.wikimedia.org/wikipedia/en/1/1b/Lutetia_closest_approach_(Rosetta).jpg
5) http://upload.wikimedia.org/wikipedia/commons/thumb/9/97/The_Earth_seen_from_Apollo_17.jpg/599px-The_Earth_seen_from_Apollo_17.jpg
6) http://static.guim.co.uk/sys-images/Books/Pix/pictures/2009/4/7/1239100063941/Asteroid-002.jpg


The Technology of Sustaining Life

Jing Chen

Death is a biological reality concomitant with life, but the way that humans react to death is a social reality. The ideal of Western medicine has been to combat death, and the constant development of novel technologies has assisted physicians with actualizing this ideal. Humans today are able to live significantly longer than ever before; this is both an accomplishment of preventative medicine, as well as a prelude to medical technologies that will continue to be developed and enhanced. However, with the advent of these groundbreaking inventions came an increased dependency of humans on technology that was nonexistent only a few years ago. Surgical robotic systems serve as assistants that allow doctors to perform operations with an ease and precision that were previously inconceivable. While human-robot interactions have been sensationalized in science fiction, countless scientists in this field today predict that such interactions will become commonplace in hospitals and daily life, perhaps making the clinician's current skills seem superfluous. As it stands now, many Americans have come to expect or rely on the use of current life-sustaining technologies that rush to the rescue when the body fails to perform its duties. In these cases, the body cannot carry on without the help of some novel technology, such as dialysis for people with severe kidney diseases like renal cancer. And in the most extreme case, the human becomes a hybrid with technology when it is simply impossible to live in the absence of the devices. Thus, the designation "life sustaining" is quite literal in meaning. While these specific advancements only represent a small portion of the growing field of biotechnology, they adequately suggest the potential for humans to displace their own manual craftsmanship with the introduction of increasingly helpful and reliable forms of medical technology -- all for the sake of postponing the death and dying experience.


Surgical Robots

“The mutualistic relationship between humans and robots generally improves the effectiveness of medical procedures.”

Catherine Mohr, surgeon and inventor, astutely observes that any discussion concerning the frontiers of surgical technologies necessitates a historical survey of life's ever-shifting technological landscapes. Mohr displays an image of a trephinated skull from ten thousand years ago, showing the crudest type of surgery from ancient times. Fascinatingly enough, the skull showed evidence of long-term healing around the border of the hole, meaning the surgery was actually successful in keeping the patient alive. Since prehistoric days, surgical techniques have been utilized to prolong lives, and new methods consistently emerge over time. Mohr describes the first laparoscopic surgery and how doctors were thrilled with this minimally invasive, safer style of operation, but failed to anticipate the frustration and difficulties that would accompany the tricky procedure (Mohr 2009). This is where surgical robots come into play.


Laparoscopic surgery provides vast benefits over laparotomy, which involves completely opening a portion of the body; laparoscopy requires only two small incisions and therefore cuts down on recovery time and risk of infection. Although seemingly ideal, doctors soon encountered obstacles -- the procedure requires an exceptional amount of skill and coordination to operate counterintuitive equipment. Robots, however, can cover the ground that humans struggle with. The Puma 560 was the first of its kind, and was capable of performing neurosurgeries with high levels of precision. It was soon followed by the ROBODOC, the first surgical robot approved by the FDA. A rise in popularity caused NASA and the U.S. Army to begin researching the possibilities of using the technology more commercially. Current devices in use are the AESOP, a voice-activated endoscope, and the Zeus and da Vinci systems, which are operated on a master-slave basis.

A common misconception is that these surgical robots serve as substitutes for doctors, but this is far from the truth -- robots are simply highly precise assistants that can be controlled and supervised by human doctors. The mutualistic relationship between humans and robots generally improves the effectiveness of medical procedures; the robot works under the control and supervision of the doctor. However, technology always comes with a price. The robots lack haptic feedback, a touch sensation that doctors are accustomed to and sometimes rely on during surgical procedures. With no gauge for force, doctors have trouble performing surgeries even with the advanced technology the robots provide. The devices are also expensive and require further study before they can be used on a larger scale (Lafranco et al. 2004).
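To picture the "master-slave" arrangement described above, here is a minimal, hypothetical control-loop sketch: the surgeon's hand motion at the master console is scaled down and low-pass filtered before being sent to the instrument ("slave") arm. The function names, scaling factor, and filter are illustrative inventions, not the interface of the da Vinci or any other real system.

```python
from dataclasses import dataclass

@dataclass
class Motion:
    """A small hand or instrument displacement, in millimeters."""
    x: float
    y: float
    z: float

def scale(m: Motion, factor: float) -> Motion:
    """Scale the surgeon's hand motion down for finer instrument movement."""
    return Motion(m.x * factor, m.y * factor, m.z * factor)

def smooth(current: Motion, previous: Motion, alpha: float = 0.3) -> Motion:
    """Simple exponential low-pass filter to damp hand tremor (illustrative)."""
    def blend(c: float, p: float) -> float:
        return alpha * c + (1.0 - alpha) * p
    return Motion(blend(current.x, previous.x),
                  blend(current.y, previous.y),
                  blend(current.z, previous.z))

def teleoperation_step(hand: Motion, previous_cmd: Motion,
                       motion_scale: float = 0.2) -> Motion:
    """One master-slave cycle: scale the hand motion, then smooth it against
    the previous command to produce the next instrument command."""
    return smooth(scale(hand, motion_scale), previous_cmd)

# A 10 mm hand movement, scaled by 0.2 and filtered against a zero previous command.
command = teleoperation_step(Motion(10.0, 0.0, 0.0), Motion(0.0, 0.0, 0.0))
print(command)   # Motion(x=0.6..., y=0.0, z=0.0)
```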


The U.S. Army has shown special interest in the prospects of this technology and implemented the da Vinci in the Walter Reed Army Medical Center (WRAMC). It was used over one hundred times in the first year, mostly for cardiothoracic surgery. In a military context, surgical robots could potentially be major lifesavers; they can be sent into "hostile environments" with "clear applicability." For soldiers, surgical robots could be a real breakthrough in aiding the wounded without putting others in danger. The Army did, however, encounter issues, as with any new technology: the da Vinci requires a large team of dedicated operators, and operators experienced physical limitations similar to those of previous users. Encouraged by the relative success of these pursuits, a team of scientists under Professor Jacques Marescaux managed to perform the first telerobotic surgery across the ocean, exemplifying a new level of medical revolution to look for in the future (Marohn et al. 2004).

UC Berkeley's own Professor Ken Goldberg studies problems in robotics, and currently works on the RAVEN surgical robotic system in conjunction with researchers at other universities. He predicts that robots in the medical field are reaching a "tipping point" and will soon become "more and more acceptable" in society (Goldberg 2013). Research in the field of surgical robotics is young, so many scientists and robot enthusiasts like Goldberg are optimistic about their prospects. In Goldberg's TED Talk at Berkeley, he claimed that robots make us better humans for a variety of reasons (Goldberg 2012). On an even broader level, robots allow us to remain human by keeping us alive.


Dialysis

Many victims of kidney failure, particularly cancer patients, rely on a technology that can perform functions when their bodies have ceased to do so on their own. Kidneys are meant to clean blood like a filter and produce required hormones, and there are two types of dialysis that can be implemented to accomplish these tasks. Peritoneal dialysis uses the abdominal lining membrane as a filter for blood, bodily fluids, and other dissolved substances, while hemodialysis implements a machine for the same purpose. Peritoneal dialysis is more commonly used for people who are still comparatively self-sufficient and prefer the convenience, or cannot handle the strength of the hemodialysis machine. On the other hand, hemodialysis occurs in hospitals and requires minor operations to gain access to blood vessels. Basically, waste products are filtered out of the blood system, while the major blood cells and proteins are left intact. Both can cause side effects such as infections and physical weakening; the ethical dimensions of dialysis become acute when trying to weigh the benefits against the costs. For some people, dialysis is a vital part of their survival, and the choice to forego dialysis treatment is, oftentimes, simultaneously a decision to allow oneself to die. Such a decision is never easy to make, and patients as a result are forced to speculate as to the quality of life on a strict dialysis regimen.

More advanced technology in recent times has shown that dialysis can significantly reduce deaths in patients with severe renal diseases. Hemodiafiltration is essentially a more precise version of dialysis that removes larger toxins than the older techniques could detect, and therefore reduces the risk of infection. Researchers think that "larger toxins could play a role in inflammation and cholesterol buildup," which can lead to death, especially for older patients at high risk of succumbing to other diseases. Many people in the world depend on dialysis as they wait for a kidney donor; according to the article, about 350,000 patients use dialysis. To test the effectiveness of a higher-precision technology, a team in Spain tested hemodiafiltration against conventional dialysis methods and found that the death rate of dialysis patients dropped from 27 to 18-19 percent. The calculations showed that for every eight people that switched over, one death would be prevented per year (Pittman 2013).

To understand how completely critical dialysis is to certain patients, many researchers have conducted experiments regarding their survival rates. It is important to note that data is scarce since these patients are difficult to track, which may cause a bias in the study despite careful methods and procedures. Barring these issues, however, the results are quite striking. It seems that, especially for older patients, life spans increase with dialysis due to better normalized blood pressures. The length of time for each dialysis treatment is of great importance as well; if the time is reduced significantly, the patient may suffer negatively from the adjustment, possibly fatally. Patients must follow a strict schedule and remain highly dependent on this machine that keeps them alive (Charra et al. 1992).

Technology like dialysis can be seen as simply a lifesaver -- how could anyone possibly argue that it causes harm? Another study by scientists on the rates of kidney transplantation patient survival, depending on the levels of dialysis given before the operation, illustrated an interesting point. After doing extensive studies on a group of adults receiving cadaveric kidney transplants who underwent varying degrees of dialysis, it turns out that patients who used no dialysis at all had the lowest relative mortality rates. In fact, the people who never had to rely on technology managed to live the longest on average. Those who used dialysis consistently, even for years on end, had the lowest survival rates due to side effects of the dialysis machinery such as severe infections, though the technology might have been instrumental in saving their lives at the time (Cosio et al. 1998). This situation is the perfect example of one that begs the question: do the benefits of new medical technology really trump increased dependency? Even when ignoring the shocking mortality statistics, one must consider the disparate lifestyles of a patient who relies on regular dialysis treatments and a patient who can function independently. The issue can be more easily explored in the most extreme case of reliance on technology -- that is, when human and technology completely fuse together, as seen with artificial hearts.


“This situation is the perfect example of one that begs the question: do the benefits of new medical technology really trump increased dependency?”


Mechanical cardiac assistants in the past could only be used to the extent of supplying oxygen to the heart, but modern times have introduced highly advanced innovations. Now, various types of cutting-edge blood pumps and completely artificial hearts have been introduced into hospitals. Scientists believe that "mechanical cardiac support systems have reached the threshold of long-term applicability," meaning that these machines are really turning the tides of the medical field (Akdis et al. 2005). In the past, the only artificial hearts available were those that could serve as a bridge between heart failure and heart transplantation, but now there is a solid possibility of a permanent piece of machinery that a patient could go home with, not just as a temporary stabilizer. While this device would reduce hospital time enormously and create the illusion of increased autonomy, it raises a question of independence. Despite being out of clinical care, the patient's heartbeat would not be his own. This would be the epitome of human dependency on technology, both a frightening and exhilarating prospect.

About 5 million people in the United States alone suffer from terminal cardiac failure, which is also the leading cause of death. In the past, the only method of treating this problem was a complete heart transplant, which was not readily available for all patients. Ever since the first successful cardiac support system appeared in 1966, cardiac assistance technology has advanced at a swift pace. The first total artificial heart was implanted in 1982, and since then, all of the devices have been used to keep the patient alive during the short time between removal and transplantation. The research evolved from simple pumps to pumps that provide continuous flow, and now to safe, small, and sophisticated devices that might be able to replace the others altogether (Leprince et al. 2008).

Today, cardiac pumps can be manufactured to maintain a decent quality of life for a range of several months to a couple of years. Researchers are experimenting with different types of devices, including rotary pumps, to test which technology is best at allowing the patient to return to a normal lifestyle. One of the primary causes of death after intense cardiac surgery is low cardiac output, but micro-axial pumps, which are minimally invasive and quick, are capable of solving this issue (Akdis et al. 2005). The various types of heart-sustaining devices represent the most extreme case of dependency on technology by humans. If artificial hearts ever become fully self-sustainable and can serve as transplants into the human body, the ultimate form of hybridization between human and machine would occur. The bioethics of this now-plausible future becomes complex and multifaceted; on one end of the spectrum, being able to save lives with this technology would be a discovery on the grandest scale, but it also introduces the highest form of reliance to date. With the rapidly developing technology in current times, this future may not be very far off.

The Big Picture

There exists a dichotomy when it comes to discussing medical technology – new innovations bring vast benefits, possibilities, and hope when they swoop in to save the day, but a high dependency on biotechnology can also be less than ideal. Medical technologies are keeping people alive, but do they have a positive impact on the quality of their lives? Researchers conducted psychological studies on men with Duchenne muscular dystrophy who had to rely on mechanical ventilators for their entire lives. They suggest that assistive technology may be beneficial, but it has a “paradoxical effect” because it can also lead to social “stigmatization.” The men they interviewed began to take the technology for granted, to the point that it became a part of their self-identities (Gibson et al., 2007). On the surface, new technologies are positives in the medical field; they are the strongest weapons researchers are equipped with to grapple with death, and they have served their purpose very effectively since they first emerged. However, as they have advanced and evolved from helpful but peripheral tools to essential, life-altering devices, the concept of bioethics has arisen to question technology’s impact on human independence and quality of life. The debate about the ramifications of this dependency verges on fusing ethical philosophy with science, but the issue is nevertheless vital for all to consider in the rapidly developing field of medical technology.

References

1. Akdis, M. & Reul, H. (2005). Mechanical blood pumps for cardiac assistance. Applied Bionics and Biomechanics, 2(2), 73-80.
2. Charra, B., Calemard, E., Ruffet, M., Chazot, C., Terrat, J., Vanel, T., & Laurent, G. (1992). Survival as an index of adequacy of dialysis. Kidney International, 41, 1286-1291.
3. Cosio, F. G., Alamir, A., Yim, S., Pesavento, T. E., Falkenhain, M. E., Henry, M. L., Elkhammas, E. A., Davies, E. A., Bumgardner, G. L., & Ferguson, R. M. (1998). Patient survival after renal transplantation: I. The impact of dialysis pre-transplant. Kidney International, 53, 767-772.
4. Gibson, B. E., Upshur, R. E. G., Young, N. L., & McKeever, P. (2007). Disability, Technology, and Place: Social and Ethical Implications of Long-Term Dependency on Medical Devices. Ethics, Place & Environment: A Journal of Philosophy & Geography, 10(1), 7-28.
5. Goldberg, K. (2012, February). 4 lessons from robots about being human. TEDxBerkeley. Lecture conducted from Berkeley, California.
6. Goldberg, K. (2013). Science Today [Interview audio file]. Retrieved from the Science Today SoundCloud website: https://soundcloud.com/science-today/the-future-of-robotics
7. Lafranco, A. R., Castellanos, A. E., Desai, J. P., & Meyers, W. C. (2004). Robot Surgery: A Current Perspective. Annals of Surgery, 239(1), 14-21.
8. Leprince, P., Martinez, N., Viguier, C., Pavie, A., & Nogarede, B. (2008). New technologies for mechanical circulatory support. Computer Methods in Biomechanics and Biomedical Engineering, 11(1), 13-14.
9. Marohn, M. R. & Hanly, E. J. (2004). Twenty-first Century Surgery Using Twenty-first Century Technology: Surgical Robotics. Current Surgery, 61(5), 466-473.
10. Mohr, C. (2009, February). Surgery’s past, present, and robotic future. TED. Lecture conducted from Long Beach, California.
11. Pittman, G. (2013, February 14). More thorough dialysis may reduce deaths. Reuters Health Information.
12. Wedmid, A., Llukani, E., & Lee, D. I. (2011). Future perspectives in robotic surgery. BJU International, 108, 1028-1036.



The STEP Between Life and Death: Embalming


People die. This is a fact of life. When people do die, there are many options when it comes to disposing of the body. Some bodies are buried, some are cremated, some are left to rot, and some are embalmed. The latter option of temporarily preserving the body for aesthetic funeral purposes is relatively new, but body preservation is steeped in centuries of history. The practice of body preservation began with the preservation methods commonly used for mummification. The act of embalming a body primarily consists of replacing bodily fluids with embalming fluids, removing various organs, and sealing the body. Formaldehyde is the primary chemical agent that is applied to the body, and its function and effects are chiefly responsible for momentary preservation of the body. However, the copious amount of formaldehyde required for each embalmed body brings about health concerns and risks of exposure to dangerous chemicals. Though there is no conclusive evidence linking formaldehyde to various health issues, there are still some concerns regarding its safety in the workplace and the environment. Given the prevalence of embalming practices in our society, it is pertinent to be aware of the possible consequences.

“While the Egyptian civilization is most commonly associated with mummification, there have been many different cultures that have practiced body preservation.”

The human practice of preserving a body after death spans many centuries and civilizations. Mummification is perhaps the most famous form of body preservation. While the Egyptian civilization is most commonly associated with mummification, there have been many different cultures that have practiced body preservation throughout history. The ancient Egyptians practiced mummification because they believed that the person continued on to an afterlife, which required an unmarred body. Consequently, the Egyptians contrived various means to preserve the bodies of their dead. One of the simplest and earliest methods of body preservation was to wrap the body in linen, dig a hole in the desert, and place the body inside to allow the dry environment and scorching sun to do the rest; desiccation of the body is essential for natural preservation because “dehydrated bodies tend to decompose more slowly, as water is necessary for decomposition” (Quigley, 1994). Later on, more sophisticated methods were developed for nobility and aristocrats, such as removal of all bodily fluids, removal of the brain and viscera, filling the body cavities with stuffing material, desiccation by dry natron, covering the skin with resin, and bandaging the body. The removal of the internal organs along with various bodily fluids was another way to delay decomposition, because these parts were the first to undergo the decomposition process (Quigley, 1994). In other parts of the world, such as 12th-century Japan, Buddhist monks followed similar practices in regard to body preservation. Japanese monks would have their bodies mummified as a way of “entering into Nirvana” or becoming a “Buddha of the body” and being worshipped in their death. These mummies underwent a variety of procedures, including smoking the body to dry it, using lime powder instead of resins as varnish, no removal of the brains or viscera, and placing the body in a sealed chamber for three years immediately after death before any further actions were taken. Later, in Renaissance Italy, the bodies of monarchs were mummified through removal of the brain and other body organs, stripping of flesh from the limbs, and the use of resins, wool, clay, lime, and other substances for embalming. It is clear that body preservation has been present in many civilizations throughout time, and the methods used to accomplish a mummified body have been constantly changing and developing (Cockburn, 1998).

Modern embalming practices in the United States are based on contemporary medicine, physiology, and chemicals. The practice of embalming a body was picked up in the United States during the Civil War, when the bodies of soldiers needed to be sent home. Physicians initially used arsenic as the chemical agent responsible for delaying decomposition and killing bacteria on the surface. However, embalming did not become popularized until after Abraham Lincoln was embalmed and his body placed on display around the country (Chiappelli, 2008). Around the turn of the 20th century, the physician Carl Lewis Barnes developed the embalming practices that are still used in the contemporary United States. Barnes experimented with the use of chemicals in the human circulatory system and blood vessels. He also studied human physiology as a whole, in separate pieces, alive, and dead. Barnes sought to create a standard manner in which bodies could be made to look “alive” even after death through embalming. Eventually, Barnes formulated a procedure for embalming whereby the body’s blood is replaced with embalming fluids, most commonly formaldehyde. Barnes’ choice of formaldehyde as the ideal embalming fluid came about from his experience using formaldehyde as a disinfectant during times of epidemics, as well as efforts to circumvent the legal prohibition against other agents such as arsenic (Podgorny, 2011).

“The goal of embalming today remains the same as it was one hundred years ago and that is to make the body look respectable in the final moments before being permanently removed.”

Over time, the purposes of embalming, the process of embalming, and the materials used in embalming have changed little. The goal of embalming today remains the same as it was one hundred years ago, and that is to make the body look respectable in the final moments before being permanently removed. Unlike mummification, modern embalming is only meant to keep the body looking “alive” for a short period for the mourning process and is not meant as a method for long-term preservation. Instead, modern embalming employs chemicals, usually formaldehyde, to kill off bacteria in the body and delay the decomposition of the body. This is achieved by replacing the bodily fluids with embalming fluids. The entire embalming process occurs as follows: First, the body must be cleaned and disinfected of all microorganisms that may be found on the surface. Then, the neck is cut and the body is massaged to help the outflow of blood through the arteries. Next, the embalmer goes over the body and cleans, trims, and sets the body straight. Afterwards, the eyes and mouth are sewn shut while the vagina and/or anus are stuffed with cotton to prevent leakages. The embalmer will then inject three to four gallons of formaldehyde into the body, with the intention of pumping formaldehyde into the neck, groin, and upper arm arteries. Formaldehyde serves as the main preservative chemical by acting on the cell proteins to prevent decomposition. Additional compounds such as methyl alcohol, humectants, and anticoagulants keep the body from decaying by halting desiccation, preventing blood clots, and other effects. All of these fluids are injected intravenously via delicate syringes or powerful pumps. Immediately after the embalming fluids have been injected, the body is placed into its final position as it quickly stiffens and becomes unalterable. Once this last step is completed, the body may be placed for funeral purposes. However, it should be noted that not all bodies are successfully embalmed. The process of embalming falls prey to the same troubles that plague all work done by human hands. The most prevalent causes of failure are the use of too little or too strong an embalming fluid, carelessness about the speed at which blood is drained, or even leaving the visceral organs to rot. Modern embalming purposes and practices have much in common with those of one hundred or even one thousand years ago, illustrating the similar steps taken to preserve a body (Iserson, 1994).

As alluded to earlier, there have been health concerns regarding the safety of embalming and the chemicals used in the process. Early anxieties were based on a belief that the corpse itself was a vehicle for bacterial infection: the pioneers of modern embalming practices in the late 19th century assumed that embalming a body would contain the bacteria and kill off the microorganisms responsible for disease. The practice was viewed as a public health responsibility, and was the only recourse amidst fears of epidemic contagion. However, it soon became clear that embalming a body would not prevent the contraction of diseases, simply because dead bodies are not likely to spread diseases. Instead, a new caution arose concerning the use of embalming fluids – namely formaldehyde, which is the primary chemical agent used in the embalming process (Iserson, 1994).

“The classification of formaldehyde as a carcinogen is debatable from an experimental point-of-view.”

In the past few decades, there have been concerns about the negative health effects associated with formaldehyde. There have been claims that formaldehyde is a carcinogen and that prolonged exposure to it is connected to an increased risk of nasopharyngeal cancer, leukemia, and lung cancer (Collins, 2004; McLaughlin, 1994). While experiments with lab animals have shown formaldehyde to be a carcinogen for rats, it is not considered to be a carcinogen for mice or hamsters (McLaughlin, 1994). Thus, the classification of formaldehyde as a carcinogen is not universally accepted from an experimental point of view. Multiple studies have been conducted to determine the influence of formaldehyde on human health by observing and collecting data from certain occupations that have an above-average exposure to the chemical. Several longitudinal studies have tracked the health of embalmers, doctors, workers in the metal industry, workers at industrial plants, and workers regularly exposed to plastics over the course of many years. Many of these studies conclude that the rates of cancer among these professions do not deviate significantly from those of the general population and refrain from making any outright claims of association, much less causation, between the two variables. A study by Coggon et al. on chemical workers exposed to formaldehyde concludes that “the evidence for human carcinogenicity of formaldehyde remains unconvincing [and whether] formaldehyde exposure is associated with a small increase in the risk of sino-nasal and/or nasopharyngeal cancer cannot be ruled out from the results of our study.” Furthermore, another study by Marsh et al. on metal workers concludes that “the results of our […] study suggest that the large nasopharyngeal cancer mortality excess in the […] cohort may not be due to formaldehyde exposure, but rather reflects the influence of external employment in the ferrous and non-ferrous metal industries of the local area that entailed possible exposures to several suspected risk factors for upper respiratory system cancer.” These studies express caution around chemicals, especially when their long-term effects on human health have yet to be thoroughly established.

In conclusion, embalming a body is a unique step taken by humans after death with the goal of preserving one’s physical appearance. The practice of body preservation goes back millennia to Egypt, where bodies were mummified. However, body preservation was not geographically limited to the banks of the Nile, but was also prolific from one end of Europe to the other end of Asia. Various techniques were developed among various cultures, but modern American embalming practices originate from Carl Lewis Barnes. His studies on human physiology and the circulatory system and his use of formaldehyde as an embalming fluid form the basis for modern embalming methods. However, recent health concerns linking formaldehyde exposure to cancer may have consequences for the industry, but more research is required before anything can be made certain.

References

Chiappelli, J., & Chiappelli, T. (2008). Drinking Grandma: The Problem of Embalming. Journal of Environmental Health, 71, 24-28. http://ucelinks.cdlib.org:8888/sfx_local?sid=Entrez:PubMed&id=pmid:19115720

Cockburn, A., Cockburn, E., & Reyman, T. A. (Eds.). (1998). Mummies, Disease & Ancient Cultures. Cambridge: Cambridge University Press.

Coggon, D., Harris, E. C., Poole, J., & Palmer, K. T. (2003). Extended Follow-Up of a Cohort of British Chemical Workers Exposed to Formaldehyde. Journal of the National Cancer Institute, 95, 1608-1615. doi: 10.1093/jnci/djg046

Collins, J. J., & Lineker, G. A. (2004). A review and meta-analysis of formaldehyde exposure and leukemia. Regulatory Toxicology and Pharmacology, 40, 81-91. doi: 10.1016/j.yrtph.2004.04.006

Iserson, K. V. (1994). Death to Dust: What Happens to Dead Bodies?. Tucson: Galen Press.

Marsh, G. M., Youk, A. D., Buchanich, J. M., Erdal, S., & Esmen, N. A. (2007). Work in the metal industry and nasopharyngeal cancer mortality among formaldehyde-exposed workers. Regulatory Toxicology and Pharmacology, 48, 308-319. doi: 10.1016/j.yrtph.2007.04.006

McLaughlin, J. K. (1994). Formaldehyde and cancer: a critical review. International Archives of Occupational and Environmental Health, 66, 295-301. doi: 10.1007/BF00378361

Podgorny, I. (2011). Modern embalming, circulation of fluids, and the voyage through the human arterial system: Carl L. Barnes and the culture of immortality in America. Nuncius, 26, 109-131. doi: 10.1163/182539111X569784

Quigley, C. (1998). Modern Mummies: The Preservation of the Human Body in the Twentieth Century. Jefferson: McFarland & Company, Inc.



The Economic and Clinical Implications of Antibiotic Resistance
Rohini Behl

There is no doubt that antibiotic resistance is a growing phenomenon. As Nobel Prize winner and famed molecular biologist Joshua Lederberg put it, “We are running out of bullets for dealing with a number of bacterial infections. Patients are dying because we no longer in many cases have antibiotics that work” (2007). This issue can be attributed to over-prescription of available antibiotics, their currently limited diversity, and a lack of incentives to encourage investment in the development of new classes of antibiotics. Antibiotic resistance is a problem that afflicts both affluent and impoverished countries, and the outlook it generates appears bleak due to rapidly rising costs of treatment, the threat of cross-resistance, and increased morbidity.

The Problem

The dilemma of antibiotic resistance has evolved into a multifaceted issue exacerbated by a series of molecular, operational, psychological, and economic factors. On the molecular level, antibiotic resistance is the result of bacteria changing in ways that reduce the effectiveness of antibiotics to cure or prevent infections. There are three main mechanisms through which resistance is acquired in bacteria: 1) natural selection, the gradual process by which biological traits such as resistance to antibiotics become common in a population of bacteria; 2) plasmids, independent circular pieces of DNA that may carry genes for antibiotic resistance and can be transferred between bacteria; and 3) mutations, permanent changes in the DNA sequence of a gene that can lead to the formation of new traits such as resistance (Ramanan, 2012). Furthermore, cross-resistance, the possession of a resistance mechanism by a bacterial strain that enables it to survive the effects of several antibacterial molecules, may persist even after antibiotic use is halted or reduced. As such, antimicrobial-resistant bacteria may emerge under the selective pressure of antibiotics and become the dominant flora.

There are a series of operational factors, such as the role of setting, modes of transmission, and lack of regulation enforcement, that appear rather simple to remediate, but in practice this has not been the case. An increasing number of cases of antibiotic resistance occur within hospital and institutional settings; in these situations, antibiotic-resistant flora may live within the institution and be transferred to the patient (Sipahi, 2008). Transmission may occur through contact with the soiled hands of staff as well as contaminated surfaces and equipment, in addition to passing from patient to patient, which is why effective infection control and hygiene are essential to inhibit their spread. The severity of the disease being treated, the length of the current hospital stay, exposure to other ill patients, invasive surgical procedures, the intensity of clinical therapy, and advanced age further increase the odds of susceptibility to antibiotic-resistant bacterial strains. Over-prescription of antibiotics by physicians further increases the likelihood that particular strains will develop resistance, and it can be simplified into two main causes: 1) consumer insistence that antibiotics are a magic cure-all for flu, infections, and unrecognizable conditions, and 2) doctors’ fear of malpractice lawsuits that impels them to err on the side of over-prescription. For example, the flu comes from a virus rather than a particular bacterial strain; therefore there is no reason to take an antibiotic as treatment for the flu. Taking antibiotics more frequently than necessary results in natural selection within the body for the most resistant bacteria and contributes to higher levels of resistance; consumers incorrectly believe “better safe than sorry” and thus err on the side of overconsumption of antibiotics when in reality, by doing so, they are increasing their odds of susceptibility to antibiotic-resistant bacteria. For this reason, associations have provided and implemented regulations that restrict the supply of antibiotics created in the first place. However, these appear to be largely ignored by antibiotic manufacturers in order to promote antibiotic sales and the resulting profits.
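To make the selection dynamic described above concrete, the following is a minimal, purely illustrative simulation (not drawn from the article; the growth rates, kill rates, and starting counts are invented): a drug that removes susceptible cells far more efficiently than resistant ones quickly leaves the resistant strain as the dominant flora.

    # Illustrative toy model of selection for antibiotic resistance.
    # All rates and starting population sizes are invented for demonstration.
    def simulate(days=10, susceptible=1e8, resistant=1e2,
                 growth=0.8, kill_susceptible=0.95, kill_resistant=0.10):
        """Each day both strains grow, then the antibiotic removes a fraction of each."""
        for _ in range(days):
            susceptible *= (1 + growth) * (1 - kill_susceptible)
            resistant *= (1 + growth) * (1 - kill_resistant)
        return susceptible, resistant

    s, r = simulate()
    print(f"susceptible: {s:.2e}  resistant: {r:.2e}  resistant fraction: {r / (s + r):.1%}")
    # After ten days of treatment the surviving population is almost entirely resistant.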





The role psychological factors play in contributing to growing antibiotic resistance should not be underestimated. For instance, in many cases, patients fail to take antibiotics according to specifications, stopping treatment early when symptoms cease to appear or ignoring timing instructions. Ingesting, for example, only three of the five doses prescribed for a particular condition generates problems for the following reasons: 1) the three doses clear the body of the least-resistant bacteria, making it easier for the more resistant, powerful bacteria that remain to proliferate faster than they otherwise could in the presence of competitors; 2) by default, patients possess leftover doses they may choose to take at a later date for a condition that antibiotics are not designed to treat; and 3) it increases consumer complacency, making it likely that they will repeat the behavior in the future and put themselves at risk of contracting a similar infection again. Antibiotic self-treatment is especially common in countries where antibiotics may be obtained without a doctor’s prescription. This is one reason why U.S. hospitals are increasingly distributing antibiotics only in the exact dose required per infection, rather than in fixed packets (Sipahi, 2008).

“Direct-to-consumer” advertising represents the shift toward targeting patients directly rather than physicians in the purchase or request of antibiotic prescriptions. This is evidenced by the fact that drug companies are increasingly spending more on advertisements in newspapers and popular magazines than in medical journals (Woloshin, 2001). While the original purpose behind such a transformation was to enable individuals to engage in more rational and informed healthcare decision-making, it instead led to inappropriate antibiotic usage through the proliferation of artificial demand (Sipahi, 2008). According to research published by the Center for the Evaluative Clinical Sciences at Dartmouth Medical School, pharmaceutical companies spent US $1.8 billion on direct-to-consumer advertising for prescription drugs in 1999, and this number is rapidly rising, with no signs of stagnation.

“…pharmaceutical companies spent U.S. $1.8 billion on direct-to-consumer advertising for prescription drugs in 1999, and this number is rapidly rising...”

Figure 1: The five states with the greatest antibiotic use can be especially targeted to reduce excess and unnecessary antibiotic prescriptions in order to limit the threat of growing antibiotic resistance.

Economically speaking, the increase in antibiotic resistance is not solely dependent on whether physicians fail to take the negative externalities of increased cost and decreased effectiveness into account, but is also due to underinvestment in other means of infection control such as immunizations and good management practices (Ramanan, 2012). A negative externality is a common term used in economics to describe a behavior of an individual (or individuals) that leads to a negative result impacting society and others not involved in the behavior or action. To “internalize the externality” is to require the individual perpetrating the behavior to take into account the additional costs he or she imposes on society. Generally, this occurs in the form of a tax or penalty imposed by the government, or in this case, by some higher regulatory medical, pharmacological, or epidemiological authority best suited to deal with antibiotic resistance.
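As a purely hypothetical illustration of internalizing that externality (the price and the estimated societal cost below are invented, not taken from the article or any real market), a corrective surcharge simply folds the estimated resistance cost into the price the purchaser faces:

    # Hypothetical numbers only; no real drug prices or cost estimates are implied.
    private_cost = 20.00      # what the purchaser currently pays for one course
    resistance_cost = 12.50   # assumed future cost that the course imposes on society

    # Internalizing the externality: the purchaser faces the full social cost,
    # with the difference collected as a tax or penalty by a regulator.
    social_cost = private_cost + resistance_cost
    print(f"corrected price: ${social_cost:.2f} (includes a ${resistance_cost:.2f} surcharge)")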

Having expounded on the factors that lead to antibiotic resistance, it is important to delve into its epidemiological consequences for death and dying. For one, the ineffectiveness of current antibiotics leads to longer and more frequent hospitalizations, which may result from a higher frequency of surgical interventions required to control infection. The most obvious and dangerous consequence is increased risk of infection; many surgical procedures such as transplants and bypass operations depend on effective antibiotics to keep patients free of infection at times when their vulnerability is particularly amplified. Increased morbidity typifies a common result of infections with nosocomial bacteria, or hospital-acquired infections, which carry a mortality rate of thirty-five percent and add on average twenty-four additional days of hospitalization as well as an excess cost of $40,000 per survivor (Niederman, 2001). According to Sipahi, antibiotic acquisition costs and increased length of stay are well-described parameters that have been studied widely and thoroughly (2008). However, necessary control measures, impaired hospital activity, increased morbidity, and higher mortality rates are poorly researched and described in the medical literature. More information and research regarding these latter categories is vital to thoroughly assess the penetration of resistance in the community.

Increased antibiotic resistance leads to elevated costs associated with using more expensive antibiotics in stronger potencies, or even simply larger doses, in order to combat the stronger, more resistant bacteria. For instance, the cost difference between amoxicillin and the combination of amoxicillin plus clavulanic acid is on the order of a factor of two (Sosa, 2010). Amoxicillin is a penicillin antibiotic that may be used to treat urinary tract infections, Lyme disease, bacterial skin disease, and ulcers, among others. However, due to bacterial resistance to amoxicillin, the combination of amoxicillin plus clavulanic acid may be necessary to more effectively combat the bacteria. Reported additional costs of MRSA (methicillin-resistant Staphylococcus aureus) versus MSSA (methicillin-susceptible Staphylococcus aureus), VRE (vancomycin-resistant Enterococcus) versus VSE (vancomycin-susceptible Enterococcus), and ESBL+ (positive extended-spectrum beta-lactamase) versus ESBL (extended-spectrum beta-lactamase) infections range between US $7,212 and $98,575, and the additional length of a hospital stay ranges between 2 and 15.3 days (Sipahi, 2008). These examples contrasting normal strains of bacteria with resistant strains exhibit the fatal consequences of the loss of an affordable antimicrobial and the need to supplant it with a more expensive replacement. On a larger scale, antimicrobial compounds account for more than thirty percent of hospital pharmacy budgets due to specialized equipment, longer hospital durations, utilization of stronger antibiotics, and isolation procedures that involve private patient rooms, dedicated personnel, and gowns and gloves, all of which inevitably increase the time and cost of treating infections (Sipahi, 2008). The Forum on Emerging Infections of the U.S. Institute of Health discovered in 1998 that hospital cases associated with antibiotic-resistant bacteria generate a minimum of $4 to $5 billion in costs to U.S. society and individuals yearly (Horowitz, 2004).

The international outlook similarly portrays trends of increasing resistance and morbidity, but for vastly different reasons. In developed countries, adequate housing and support for personal hygiene minimize interpersonal exchange of resistant bacteria, and clean water limits their ingestion. However, many developed countries also feed substantive amounts of resistance-selecting antimicrobials to food animals, and these countries’ residents ingest antibiotics found in both the food and water supply. Developed countries, primarily found in North America, Eastern Asia, and South America, thus suffer from over-prescription of antibiotics. On the other end of the spectrum, developing countries face the opposite problem; individuals suffer from under-prescription of effective antibiotics coupled with a lack of sanitation and hygiene, leading to increased morbidity from curable infections (Sosa, 2012). At the same time, in such underdeveloped countries, it is not uncommon for many communities to operate under the false notion that the drugs they utilize retain their full potency, when in reality many have lost their efficacy due to resistance. One notable example involves data from Africa indicating that antimalarial drugs lose their effectiveness long before the population recognizes their failure. Accordingly, preventative measures would ensure that individuals become sick with lower frequency and are thus less likely to pass resistant infections on to others.

Response to antibiotic resistance greatly varies between the two environments as well. In prosperous nations, disposable items arrive in truckloads from warehouses and well-organized infection control teams stymie the spread of resistant bacteria. Impoverished nations lack these effective responses, as afflicted individuals struggle with shortages of reusable items and a single sink for an open ward.

Solutions

When determining potential solutions to alleviate the challenges associated with antibiotic resistance, the main purpose should be to consider both the rate of infection and the decreasing effectiveness of antibiotics with use. Accordingly, when two antibiotics are available, the optimal proportion and timing of their usage depends on the “difference between the rates at which bacterial resistance to each antibiotic evolves and on the differences in their pharmaceutical costs” (Laxminarayan, 2011). Potential solutions need to harmonize epidemiological consequences with economic factors and cost-benefit analysis. To reduce the need for antimicrobial compounds in the first place, the healthcare community can increase immunity through vaccinations, improved nutrition, and minimizing the time for which a patient is immunocompromised. Improving the spacing between beds in hospitals can limit the spread of resistance as well, since many cases are endemic to hospitals and institutions. On a larger scale, policymakers may consider employing economic instruments such as taxes, subsidies, and redesigned prescription drug insurance programs to ensure that the incentives facing doctors and patients align with society’s needs and interests. Further research on and implementation of alternative treatment options such as antiseptics, probiotics, and cranberry juice (for urinary tract infections) can further reduce the U.S.’s current dependence on antibiotics (Sipahi, 2008).

“…antimicrobial compounds account for more than 30% of hospital pharmacy budgets due to specialized equipment, longer hospital durations, utilization of stronger antibiotics, and isolation procedures...”

With regard to assessing antibiotic effectiveness, tools called antibiograms provide information on the susceptibility of common bacteria to varying antibiotics, serving as a useful guide for physicians when prescribing antibiotics. Their use arose from the preparation of cumulative antimicrobial susceptibility data for trend analysis and clinical decision-making. However, there are challenges involved with the usage of antibiograms, including 1) maintaining accurate and current susceptibility ratings, 2) applying them to facilities with only a minute number of bacterial isolates, 3) distributing them in accordance with access restrictions that limit the streamlined dissemination of information, and 4) analyzing susceptibilities to predict the best initial drug combination in the face of unpredictable cross-resistance across antibacterial classes. The final challenge can be mitigated to a degree by utilizing cross-table susceptibility analysis, which permits selection of dual regimens with greater odds of being effective (Fox, 2010).
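As a rough sketch of the kind of lookup an antibiogram enables (the organisms, drug names, and susceptibility percentages below are invented, and the simplified calculation treats susceptibilities as independent, ignoring exactly the cross-resistance the text warns about; it is not the cross-table method of Fox (2010) itself), one can tabulate local susceptibility rates and screen candidate dual regimens:

    # Hypothetical antibiogram: percent of local isolates susceptible to each drug.
    antibiogram = {
        "E. coli":       {"drug_A": 62, "drug_B": 88, "drug_C": 95},
        "K. pneumoniae": {"drug_A": 55, "drug_B": 80, "drug_C": 90},
        "P. aeruginosa": {"drug_A": 70, "drug_B": 40, "drug_C": 85},
    }

    def dual_regimens(table, threshold=90):
        """List drug pairs whose combined coverage (at least one drug effective,
        naively assuming independence) meets the threshold for every organism."""
        drugs = sorted(next(iter(table.values())))
        pairs = []
        for i, a in enumerate(drugs):
            for b in drugs[i + 1:]:
                worst = min(100 - (100 - row[a]) * (100 - row[b]) / 100
                            for row in table.values())
                if worst >= threshold:
                    pairs.append((a, b, round(worst, 1)))
        return pairs

    print(dual_regimens(antibiogram))  # e.g. [('drug_A', 'drug_C', 95.5), ('drug_B', 'drug_C', 91.0)]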

To remediate over-prescription, the use of formularies can restrict the menu of antibiotics available for the physician to prescribe from at the hospital level. While traditionally a formulary contained a collection of formulas for the compounding and testing of medication, today its main function involves specifying which medicines are approved to be prescribed under a specific contract, based on evaluations of efficacy, safety, and cost-effectiveness. Setting and enforcing more stringent standards for the supply of antibiotics can prevent leftovers when prescribing, remediate reliance on antibiotics as the go-to prescription, and limit the exorbitant profits pharmaceutical companies reap from flooding the market with antibiotics. Also, educating physicians regarding tactics to employ when working with patients may potentially halt consumer insistence and physician complacency, if promoted on local, regional, and national levels. Moreover, since resistance patterns vary from country to country and hospital to hospital, physicians and pharmacists greatly benefit from being informed about resistance patterns in their geographic locations at various points in time (Rapp, 2011).

Figure 2: Greater diversity of antibiotics will decrease the development and spread of antibiotic resistance.

Consumer education embodies a more difficult and tedious process, but it must be addressed. For example, if patients understand that incorrect dosages of antibiotics increase the resistance of bacteria in their bodies and reduce the effectiveness of antibiotics to be consumed in the future, they may be more concerned about knowing what the proper dosage is. Usage of the same antibiotics for a multitude of conditions remains a key contributor to why antibiotic resistance is developing at an accelerated pace. As a result, it is increasingly important to advocate for diversity in antibiotic use in hospitals and pharmacies. However, as depicted in Figure 2, there are limited options to prescribe from compared to the past. As such, from a long-term perspective, creating incentives for investing in research and development for the synthesis of new antibiotics is essential so that a larger range of antibiotics, in different classes, may become available for prescription. Patents encourage investment in such efforts, as they signify protection of intellectual property rights as well as the ability to sell one’s innovation for use; accordingly, they possess key advantages and disadvantages and should be weighed accordingly. On one hand, extending the duration of an effective patent life could increase incentives for a company to minimize resistance, as the company would potentially reap the benefits of a longer period of monopoly over the antibiotic’s effectiveness. Pharmaceutical companies would then effectively exercise their market power, or ability to charge higher prices than the competitive market, and reap greater profits as a result. Yet a pharmaceutical company under the protection of a specified patent on a drug that is cross-resistant may have little concern about future resistance, because when different antibiotics are employed, the benefits of reducing current production transfer to other companies. Assigning broad patents that cover a class of antibiotics as opposed to a single one may prevent firms from competing inefficiently for the same range of effectiveness embodied in a class of antibiotics, as well as further drive the incentive for investment in the development of new antibiotics. From the insurance perspective, a single buyer such as a national or private health insurer may have an incentive to reduce antibiotic resistance, since it will likely bear the costs of future resistance in the form of more expensive prescription coverage.

Doctors, pharmacists, consumers, and regulatory agencies all possess roles to play in addressing increased antibiotic resistance. Of these actors, pharmacy’s involvement and role in the hospital system makes it a viable candidate for leadership in antimicrobial stewardship (Rapp, 2011). Core strategies for antimicrobial stewardship would involve identifying the optimal selection, dose, and duration of an antibiotic that results in the best clinical outcome for the treatment of infection, with minimal toxicity to the patient and minimal impact on the development of resistance (Fox, 2010). To propagate global coordination, the World Health Organization has already released “Global Strategy for Containment of Antimicrobial Resistance,” a document designed to urge governments to take substantive action aimed at containing antibiotic resistance. Along these lines, experts agree that a global system for tracking developments in antibiotic resistance trends would prove useful by serving as an indicator for recognizing “hot spots” and determining whether prevention programs contribute to positive results. Prominent organizations such as the Alliance for the Prudent Use of Antibiotics and programs like the Global Antibiotic Resistance Partnership further seek to identify weaknesses in how antibiotics are developed, regulated, and maintained, and the degree to which countries track antibiotic use and resistance globally. It is evident that in order to effect meaningful change, implementation of policy at the government level, cost-effectiveness analysis on the business platform, analysis of impacts on morbidity and mortality from the epidemiological setting, and direct targeting of consumers in an attempt to alter their behavior are all required in tandem to make substantive progress and protect a future in which antibiotics still possess the ability to cure illness.

References

Fox, B. & Qutaishat, S. (2010). Using the Antibiogram to Guide Antimicrobial Therapy and Reduce Resistance. Premier Inc. Advisor Live Presentation.

Horowitz, J. B. & Moehring, H. B. (2004). How property rights and patents affect antibiotic resistance. Health Economics, 13, 575-583. doi: 10.1002/hec.851

Laxminarayan, R. (2012). A Matter of Life and Death: The Economics of Antibiotics. Milken Institute Review. Retrieved from: http://www.milkeninstitute.org/publications/publications.taf?function=detail&ID=38801345&cat=mir.

Laxminarayan, R. (2001). Economics of Antibiotic Resistance: A Theory of Optimal Use. Journal of Environmental Economics and Management, 42(2), 183-206. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S0095069600911562.

Laxminarayan, R. (2011). Fighting Antibiotic Resistance: Can Economic Incentives Play a Role? 143, 1-4.

Niederman, M. S. (2001). Impact of antibiotic resistance on clinical outcomes and the cost of care. Crit Care Med, 29, 114-120. PubMed PMID: 11292886.

Rapp, R. P. (2011). Antimicrobial Resistance and Antibiogram Evaluation: A New Practitioner’s Preparation for Antimicrobial Stewardship. American Society of Health-System Pharmacists 2011 Midyear Clinical Meeting Presentation.

Sipahi, O. R. (2008). Economics of Antibiotic Resistance. Expert Review of Anti-Infective Therapy, 6(4), 523-539. Retrieved from: http://www.medscape.com/viewarticle/580479

Sosa, A. de J., Byarugaba, D. K., Amabile-Cuevas, C. F., Hsueh, P. R., Kariuki, S. & Okeke, I. N. (2010). Antimicrobial Resistance in Developing Countries. New York: Springer.

Woloshin, S., Schwartz, L. M., Tremmel, J. & Welch, H. G. (2001). Direct-to-consumer advertisements for prescription drugs: what are Americans being sold? The Lancet, 358(9288), 1141-1146.



Water Intoxication
Nithya Lingampalli

Water constitutes up to 75% of a human body’s composition during infancy, and although this percentage declines to 45% in old age, it is still a large component of your body mass (Hall et al., 2011). Given the enormous amount of water that is present in a human body, it would be expected that an increase in water consumption would have little to no impact on its functions. However, it turns out that the body’s homeostatic balance is significantly more delicate than anticipated, and even a few extra ounces of water can manifest as negative physical symptoms. Hyponatremia, also known as hyperhydration and water intoxication, is a state in which the body’s water levels are disrupted by an excess of water retention, and this imbalance is exhibited through various physiological symptoms including strokes, coma, and death in severe cases (Water intoxication symptoms; Coco Ballantyne, 2007).

Figure 1. The human body is approximately 75% water during infancy; this percentage drops to about 45% in adulthood.

Hyponatremia is a word translated from its Latin and Greek roots to mean “insufficient salt in the blood,” which is an apt name given that it is a condition characterized by excessive levels of water retained in the body, which then dilute salt and electrolytes to the point where they are no longer functional. This leads to the disruption of the kidneys’ normal functioning and thereby causes many physical problems (Coco Ballantyne, 2007). There are multiple degrees of severity and persistence, and the symptoms associated with these levels vary as well. Chronic hyponatremia occurs when the blood sodium level drops over time. Acute hyponatremia, or water intoxication, occurs when it drops over a shorter period of time; it is thus much more dangerous and has more severe symptoms (Stöppler). Medically, this disease is defined as a serum sodium level of less than 135 mEq/L and is characterized as an acute neurological disturbance due to the fact that brain cells swell and disrupt normal functioning (Hyponatremia (water intoxication)). However, as we will examine later on, the exact procedure for diagnosis varies with the population in question.
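A minimal sketch of how that cutoff might be applied in code (illustrative only, not a clinical tool; the 135 mEq/L threshold comes from the definition above, while the 48-hour acute-versus-chronic boundary is a commonly used convention assumed here):

    # Illustrative only; not medical advice.
    def classify_serum_sodium(serum_na_meq_per_l: float, onset_hours: float) -> str:
        """Flag hyponatremia below 135 mEq/L and label it acute or chronic by onset time."""
        if serum_na_meq_per_l >= 135:
            return "serum sodium within or above the normal range"
        course = "acute" if onset_hours < 48 else "chronic"  # 48 h boundary is an assumption
        return f"{course} hyponatremia ({serum_na_meq_per_l} mEq/L < 135 mEq/L)"

    print(classify_serum_sodium(128, onset_hours=12))    # acute hyponatremia ...
    print(classify_serum_sodium(131, onset_hours=120))   # chronic hyponatremia ...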


The biological basis for this condition is the abnormal retention of water by the kidneys due to various neurological and gastrointestinal problems. The extra water within the body dilutes the electrolytes and salt that are also present to a very weak concentration. As a result, the body’s cells are forced to absorb this water in order to restore this concentration and return to homeostasis. However, the inflammation of cells continues and disrupts other bodily functions until eventually the swelling reaches brain cells and causes cerebral edema, which can then result in strokes, coma, or death (Farrell et al., 2003; Coco Ballantyne, 2007).

Although over-retention of water is the main trigger for this condition, there are multiple mental and physical factors that can expedite its occurrence. This is especially a concern among patients who exhibit severe depression, as they also have strong, recurrent thoughts of suicide. Water is a relatively accessible material, and as a result, they may attempt to overdose by over-consumption (Water intoxication alert). Schizophrenic patients are also observed to have a higher risk of contracting this condition due to their imbalanced mental state. Studies show this to be true especially among inmate populations, where other methods of suicide may be inaccessible (Schoenly, 2012). There are also biological conditions that can put you at a higher risk for water intoxication, including hypothyroidism, cirrhosis, and cortisone deficiency (Addison’s disease), as all these conditions affect the water and electrolyte balance within your body (Water intoxication symptoms). Furthermore, taking medications such as antidepressants, diuretics, and sulfonylurea drugs also has a negative impact, as they decrease blood sodium levels when prescribed to treat other symptoms (Stöppler; Water intoxication symptoms). A condition known as Syndrome of Inappropriate Antidiuretic Hormone Secretion (SIADH) also plays a role: it occurs when the kidneys cannot function normally in excretion and thus begin to accumulate an excess of water (Water intoxication alert). This happens because the posterior pituitary gland in the brain is stimulated to increase the secretion of an antidiuretic hormone known as vasopressin, which causes the kidneys to increase the amount of water they conserve (Coco Ballantyne, 2007). SIADH is directly linked to hyponatremia, and as a result, physicians often examine a patient’s medical history for cases of hyponatremia when diagnosing SIADH (Thomas, 2013).

Figure 2. Drinking more than 72 oz of water a day is above the recommended limit if the body is not undergoing activity that requires water to be replenished.

Water intoxication is very hard to detect in its early stages, as it presents fairly ubiquitous symptoms such as headaches, confusion, nausea, and vomiting that can be linked to a multitude of diseases (Farrell et al., 2003). Symptoms of water intoxication are actually very similar to those of alcohol intoxication in terms of the nausea and altered mental state that are caused (Julia). The first symptom is always nausea, as the stomach is unable to hold a greater than average amount of water. Next follow slurred speech, weakness, headache, bloating, hallucinations, and muscle cramps as the body attempts to regain homeostasis but fails, since there is no way to excrete the excess water (Water intoxication symptoms). At this point, in contrast to the other common signs, a unique characteristic of the disease appears in the form of psychotic symptoms such as the consumption of even more water in a crazed state (Farrell et al., 2003). After this point, the excess water begins affecting brain cells and causing them to swell in a condition known as cerebral edema. Apart from the impaired brain functioning, this condition is made more dangerous by the fact that the skull acts as a stiff constraining mechanism, so the swelling increases overall pressure on the soft tissue of the brain (Stöppler). As a result of the ever-increasing cranial pressure and impaired cerebral functioning, the body soon shuts down in a coma or experiences severe seizures that can eventually lead to death. It is for this reason that severe hyponatremia has a mortality rate of 50% among those who contract it, due to the cerebral edema that causes the nervous system to fail if the cerebral inflammation is not relieved immediately (Bhananker et al., 2004).

Diagnosis of this problem generally consists of monitoring the body’s sodium and salt content as well as the body’s fluid levels to determine to exactly what extent the balance has been disrupted. The medical history is also examined for instances of prolonged vomiting, excessive sweating, previous blood tests, and urine tests (Stöppler). Treatment depends mainly on the amount of excess water consumed, as that correlates with the amount of homeostatic imbalance present and the extent to which the body’s systems have begun to fail. Early detection, as hard as it is, is necessary for these reasons in order to prevent the fatal onset that almost always results in life-threatening events such as seizures and comas (Farrell et al., 2003). If the condition is caught in the early stages, treatment with an IV fluid containing electrolytes can rectify the problem, as it restores the normal salt concentration in the blood and reestablishes homeostasis (Julia). This can also be accomplished through the increased consumption of salty foods (Water intoxication symptoms). Treatment for more severe cases also includes the use of vasopressin receptor antagonists, which inhibit the release of vasopressin from the posterior pituitary gland. As a result, the amount of vasopressin that acts on the kidneys is reduced and the body conserves less water, allowing the excess to be naturally excreted (Water intoxication symptoms).

One form of this condition, self-induced hyponatremia, generally ends up being an acute case that can only be treated through hospitalization and rapid intervention with hypertonic saline. It is generally a side effect of mental conditions such as severe depression and schizophrenia that include suicidal tendencies leading to over-consumption. Any prolonged decrease in blood sodium levels increases the risk of permanent cerebral damage, and the effects should be counteracted as quickly as possible. Induced urine output is also considered during treatment to restore the body to homeostasis as rapidly as possible without causing too drastic a change (Sterns et al., 2009).

Until recently this condition was thought to be most prevalent among athletes, especially in long-distance or endurance-based sports, who often overcompensated for their need to hydrate and thus experienced the negative effects of hyponatremia (Julia). Before the 1970s, athletes had prided themselves on feats such as running marathons without drinking any water. Starting in the 1970s, however, athletes were advised to overcompensate for their thirst during training, because previously they had been told that drinking fluids during exercise was harmful to their athletic performance. The negative impact of the instruction to ingest “as much as tolerable” soon became apparent in 1981, when the first case of exercise-associated hyponatremia (EAH) occurred. Following this, more than 10 documented deaths from EAH and the resulting encephalopathy have been reported since 1991. Subsequent research has found that EAH is an entirely preventable condition and that there is always a chance to prevent fatal outcomes provided that there is an early diagnosis. There is also a risk during treatment, because the rapid infusion of electrolyte solutions can increase intracerebral pressure, which can then cause coma, respiratory arrest, and brain death (Parrish).

“Johns Hopkins Pediatric Center reported that they see at least three to four cases of water intoxication among infants every summer”

Another population among which this condition is prevalent is infants; an article based on case studies at Johns Hopkins Pediatric Center reported that they see at least three to four cases of water intoxication among infants every summer, generally involving multiple seizures. Although these seizures are not harmful in the long run to the baby’s health, they are severe and can be prevented. An infant is particularly susceptible to seizures from water intoxication because it does not have enough sources in its diet to replace the lost salts (Pesheva, 2008). Sometimes children with specific chronic illnesses can meet the medical definition for a diagnosis of hyponatremia; however, they do not actually have the disease, and the measurement is a side effect of their pre-existing chronic condition. Serum sodium increase is the generally prescribed treatment, especially for infants, and at a specific rate of less than 1 mEq/L (Hyponatremia (water intoxication)).

Hyponatremia came into the popular view through cases of deaths that were highly publicized due to their unexpectedness. One of the most recent is the story of a woman who competed in a contest of who could drink the most water before needing to urinate. The prize for the challenge was a free Wii that the woman attempted to win for her children. Although she reported feeling normal at the end of the competition, she had severe nausea and headaches while driving back home. Because she was not hospitalized in time, the symptoms escalated until she died from an extreme seizure. As a result of the negative publicity the competition received after her death, the channel canceled the competition and warned viewers about the possibility of hyponatremia.

Although from the context of this article it may seem that hyponatremia is a highly prevalent disease, it is actually relatively rare within the general population. However, there are always precautions that can be taken to ensure that even the most negative symptoms are not felt. Anyone who is drinking more than 72 oz. of water a day is above the recommended limit, unless they are involved in a situation where bodily fluids must be replenished rapidly. Intake should be restricted to about 1-1.5 quarts a day, but based on height, weight, and other bodily factors, some people may still be at risk for hyponatremia at that amount (Water intoxication alert).

REFERENCES 1. Bhananker, S. M., Paek, R., & Vavilala, M. S. (2004). Water intoxication and symptomatic hyponatremia after outpatient surgery. Anesthesia and Analgesia, 98(5), 1294-1296. doi: 10.1213/ 2. Coco Ballantyne (2007, June 27). scientificamerican.com. Retrieved from http://www.scientificamerican.com/article.cfm?id=strange-but-truedrinking-too-much-water-can-kill 3. Farrell, D. J., & Bower, L. (2003). Fatal water intoxication. Retrieved from http:// www.ncbi.nlm.nih.gov/pmc/articles/PMC1770067/ 4. Hall, John E., and Arthur C. Guyton. Guyton and Hall Textbook of Medical Physiology. Philadelphia, PA: Saunders/Elsevier, 2011. Print. 5. Hyponatremia (water intoxication). (n.d.). Retrieved from http://easypediatrics. com/hyponatremia-water-intoxication 6. Julia, L. (n.d.). How stuff works. Retrieved from http://science.howstuffworks. com/life/human-biology/water-intoxication.htm 7. Parrish, Carol R., R.D., M.S. “Water Intoxication - Considerations for Patients, Athletes and Physicians.” Nutrition Issues in Gastroenterology 66th ser. (2008): 46-53. School of Medicine at the University of Virginia. Web. 18 Mar. 2013. <http://www.medicine.virginia.edu/clinical/departments/ medicine/divisions/digestive-health/nutrition-support-team/nutritionarticles/NoakesArticle.pdf>. 8. Pesheva, E. (2008, May 14). Too much water raises seizure risk in babies. Retrieved from http://www.hopkinschildrens.org/newsdetail. aspx?id=4844 9. Schoenly, Lorry. “Water Intoxication and Inmates: Signs to Watch out for.” Correctionsone.com. Correctionsone.com, 8 May 2012. Web. 11 Mar. 2013. 10. Sterns, R. H., Nigewekar, S. U., & Hix, J. K. (2009). The treatment of hyponatremia. Semin Nephrol, 29(3), 282-299. doi: 10.1016/j.semnephrol.2009.03.002 11. Stöppler, M. C. (n.d.). Hyponatremia (low blood sodium). Retrieved from http://www.medicinenet.com/hyponatremia/article.htm 12. Thomas, Christie P. “Syndrome of Inappropriate Antidiuretic Hormone Secretion.” Ed. Vecihi Batuman. Medscape Reference. WebMD, 25 Feb. 2013. Web. 11 Mar. 2013. <http://emedicine.medscape.com/article/246650overview>. 13. Water intoxication alert. (n.d.). Retrieved from http://www.pwsausa.org/ support/water_intoxication_alert.htm 14. Water intoxication symptoms. (n.d.). Retrieved from http://www.buzzle. com/articles/water-intoxication-symptoms.html



RADIOCARBON DATING
Applications of Accelerator Mass Spectrometry
By Sean Purcell

At this very moment, cosmic rays are penetrating the Earth's atmosphere and colliding with atoms in the stratosphere to form secondary cosmic rays of highly energetic neutrons. These energized subatomic particles then collide with the nitrogen-14 that in large part makes up our atmosphere; the subsequent neutron capture produces both hydrogen and radioactive carbon-14 atoms (subsequently referred to as C-14). These radioactive carbon atoms go on to form carbon dioxide molecules that are absorbed by plants, and eventually consumed by humans. This, however, should not be cause for alarm, but rather reason to




celebrate. Because C-14 undergoes beta decay, and C-14 absorption ceases at death, this process serves as a molecular timestamp, effectively putting a date on the death of biological organisms. The exploitation of this chemical phenomenon allows scientists to conduct what has come to be known as radiocarbon dating.

In late 2011, scientists in Europe used a modified radiocarbon dating process to arrive at the startling realization that Neanderthals must have coexisted with anatomically modern humans, a claim that was once widely contested among historians and anthropologists because it defies generally accepted human evolutionary and migratory patterns (Higham, 2011). Research conducted at the University of Oxford's Radiocarbon Accelerator Unit (ORAU) under scientist Thomas Higham dated a human jawbone discovered at Kent's Cavern, UK, to 44.2-41.5 kyr cal BP (between 41,500 and 44,200 years old), filling a "key gap between the earliest dated Aurignacian remains and the earliest human skeletal remains" (Higham, 2011). Similarly, research conducted by Stefano Benazzi at the University of Vienna dated two teeth discovered at Grotto de Cavallo, Italy much earlier than previously thought, to between 43,000 and 45,000 calendar years before present (Benazzi, 2011). These scientists utilized a specialized form of carbon dating known as AMS radiocarbon dating, which relies upon accelerator mass spectrometry and specialized sample preparation techniques to date particularly old and small samples with remarkable precision.

[Figure: A fossilized jaw: this section of jawbone was analyzed by the Higham group at the Oxford Radiocarbon Accelerator Unit, Oxford University.]

[Figure: One of the infant teeth from the Grotto de Cavallo, examined by Stefano Benazzi of the University of Vienna.]

[Figure: Archaeologists recently identified the newly uncovered skull of English king Richard III using modern carbon dating and genetic techniques.]

Carbon dating as a process is relatively new, developed by Nobel Laureate and one-time UC Berkeley professor Willard Libby in the late 1940s (Libby, 1967). Knowing that organic matter, after death, is unable to absorb significant levels of C-14, and that the atmospheric concentration of C-14 is relatively stable, Libby's model claimed that because the radioactive isotope C-14 undergoes beta decay with a half life of approximately 5,730 calendar years, organic matter could be dated by determining the concentration of the isotopic carbon in fossils and remains (Libby, 1967).
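Libby's model reduces to a simple exponential decay law: a sample that retains a fraction N/N0 of its original C-14 has an age t = -(t_half / ln 2) ln(N/N0). The short Python sketch below is an illustrative back-of-the-envelope calculation, not part of the analysis pipelines cited in this article; real laboratories also convert such raw radiocarbon ages to calendar ages with calibration curves, which is why dates are reported in "cal BP."

```python
import math

HALF_LIFE_YEARS = 5730.0                    # approximate C-14 half-life used in Libby's model
DECAY_CONSTANT = math.log(2) / HALF_LIFE_YEARS

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving fraction of C-14.

    fraction_remaining = (C-14/C-12 in the sample) / (C-14/C-12 in the modern atmosphere),
    i.e. N/N0 in the decay law N(t) = N0 * exp(-lambda * t).
    """
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must lie in (0, 1]")
    return -math.log(fraction_remaining) / DECAY_CONSTANT

if __name__ == "__main__":
    # A sample retaining 1/128 of its original C-14 is about seven half-lives old,
    # roughly the age range of the Kent's Cavern and Grotto de Cavallo specimens above.
    print(round(radiocarbon_age(1 / 128)), "years")   # ~40,110 years
```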


Scientists have since developed a wide array of techniques to measure the radioactive decay of C-14. Two early methods, gas proportional counting and liquid scintillation counting (both radiometric methods, which rely on monitoring the decay of individual C-14 atoms over time), while effective, presented scientists with a unique problem: they produced results with significant statistical uncertainty (due to the long half life of C-14) and required large sample sizes. In April of 1977, Richard Muller, while conducting research at the Lawrence Berkeley Laboratory, found that by using a cyclotron as a high energy mass spectrometer, the maximum determinable age could be increased while simultaneously decreasing the required sample size, bringing much headway to the problem of radioisotope dating that had so long troubled scientists researching trace element detection (Muller, 1977). This discovery resulted in the subsequent use of accelerator mass spectrometry as a means to isolate radioactive isotopes by bringing energy levels to amounts capable of removing interferences from other isotopes and atoms present (namely atomic nitrogen). The use of accelerator mass spectrometry to isolate radioactive C-14 has since transformed the carbon dating process, primarily because AMS radiocarbon dating not only removes much of the statistical uncertainty of radiometric counting methods but also requires far smaller sample sizes.

Accelerator mass spectrometry is remarkable in its ability to detect extremely low concentrations of an isotope. In the case of C-14, the AMS technique detects quantities as low as one C-14 atom per trillion C-12 atoms (Nelson, 1995). To carry out such a process, a cesium sputter ion source accelerates cesium ions to between 3 and 10 keV; these ions physically knock atoms from the sample and contribute electrons to a fraction of the ejected particles, forming negative elemental and molecular ions (Nelson, 1995). These ions are selected at a single mass unit by a magnetic dipole and then injected into a tandem electrostatic accelerator. The ions are first accelerated toward a highly charged positive potential, passing through a carbon film that strips off electrons and leaves the carbon atoms as positive ions. The positively charged carbon ions then accelerate through a second stage of the accelerator that brings their kinetic energies to between 5 and 150 MeV, effectively eliminating interference from other isotopic atoms in the sample, including N-14 (Nelson, 1995). Magnetic quadrupole lenses then focus the selected C-14 charge state into the entrance of a second dipole mass spectrometer, which removes further interference before the beam is filtered by a Wien system (Nelson, 1995). In combination with a multianode gas ionization chamber or a solid-state detector, both the ion's total energy and its rate of deceleration as it passes through the detector can be measured for low- to medium-mass isotopes, allowing C-14 to be identified (Nelson, 1995). Extracting and counting C-14 through the use of particle physics eliminates both the large sample requirements and the long timetable of radiometric counting methods while also determining the concentration of the stable carbon isotopes (C-12, C-13) in a sample.
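To see in rough terms why a magnetic dipole can select ions at a single mass unit, note that an ion's bending radius in a uniform field is r = p/(qB). The sketch below is a simplified, non-relativistic illustration with arbitrary example values (it is not drawn from Nelson, 1995): C-12 and C-14 ions of equal charge and kinetic energy differ in radius by a factor of sqrt(14/12), about 8 percent, which is what lets the dipole pass one mass while rejecting the other.

```python
import math

AMU_KG = 1.66053906660e-27    # atomic mass unit in kilograms
E_CHARGE = 1.602176634e-19    # elementary charge in coulombs

def bending_radius_m(mass_amu: float, kinetic_energy_ev: float,
                     charge_state: int, field_tesla: float) -> float:
    """Bending radius r = p / (qB) of an ion in a uniform magnetic field (non-relativistic)."""
    momentum = math.sqrt(2.0 * mass_amu * AMU_KG * kinetic_energy_ev * E_CHARGE)
    return momentum / (charge_state * E_CHARGE * field_tesla)

if __name__ == "__main__":
    # Illustrative values only: singly charged ions at 10 keV in a 0.5 T dipole.
    r12 = bending_radius_m(12.0, 10e3, 1, 0.5)
    r14 = bending_radius_m(14.0, 10e3, 1, 0.5)
    print(f"C-12: {r12:.3f} m  C-14: {r14:.3f} m  ratio: {r14 / r12:.3f}")
```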


The process of AMS radiocarbon dating, while statistically more accurate, does have a major drawback. Due to the high costs of purchasing and operating a nuclear particle accelerator and its components, access to AMS radiocarbon dating methods is often limited. Even so, AMS radiocarbon dating offers scientists remarkable capabilities that surpass those of radiometric counting methods. Current research suggests using C-14 as a biomedical tracer for labeling when fast analysis of a large number of samples is desired (with less demanding accuracy than carbon dating requires), by using low-voltage compact AMS facilities (Suter, 2000). AMS is sufficiently sensitive to detect C-14 at levels so low that much of the hazard and most significant interference has been eliminated. Recent studies show that AMS has a sensitivity advantage over radiometric decay counting for long-lived radioisotopes and common radiotracers, which will allow for smaller samples and lower radioisotope concentrations. New methods are being developed to exploit this selectivity and sensitivity in biochemical laboratories interested in pharmacokinetics and biomolecular interactions. These remarkable characteristics show great promise to researchers studying metabolism, macromolecular binding of candidate drugs and toxins, and even the pathology of bacterial and viral infection.

The combination of accelerator mass spectrometry and radiocarbon dating demonstrates the remarkable capabilities of science at disciplinary boundaries. Research conducted at the junction of chemistry and physics has led to unprecedented discoveries relevant to both the biological and anthropological fields. For instance, the ORAU group's discoveries serve to correct historical and anthropological beliefs and introduce the possibility of two waves of modern human migration and Neanderthal extinction, all on a timescale significantly different from what the academic community had previously recorded and accepted. It is instances like these that demonstrate the remarkable capabilities of AMS radiocarbon dating and, more generally, of interdisciplinary science.

References

Benazzi, S., Douka, K., Cocquerelle, M., & Condemi, S. (2011). Early dispersal of modern humans in Europe and implications for Neanderthal behaviour. Nature, 479, 525-529.

Gaisser, T. K. (1990). Cosmic rays and particle physics (pp. xv-21). Cambridge, England: Cambridge University Press.

Higham, T., Jacobi, R., & Ramsey, C. (2006). AMS radiocarbon dating of ancient bone using ultrafiltration. Radiocarbon, 48(2), 179-195.

Higham, T., Compton, T., Stringer, C., Jacobi, R., & Collins, C. (2011). Earliest evidence for anatomically modern humans in northwestern Europe. Nature, 479, 521-524.

Libby, W. (1967). History of radiocarbon dating. Los Angeles, USA: Department of Chemistry and Institute of Geophysics, University of California, Los Angeles.

Muller, R. A. (1977). Radioisotope dating with a cyclotron. Science, 196(4289), 489-494.

Nelson, D. E., Vogel, J., Turteltaub, K., & Finkel, R. (1995). Accelerator mass spectrometry: Isotope quantification at attomole sensitivity. Analytical Chemistry, 67, 353A-359A.

Suter, M. (2000). Tandem AMS at sub-MeV energies - status and prospects. Nuclear Instruments and Methods in Physics Research B, 172, 144-151.

Table of isotopic masses and natural abundances. (n.d.). University of Alberta, Chemistry. Retrieved March 10, 2013, from www.chem.ualberta.ca/~massspec/atomic_mass_abund.pdf

Vogel, J. S. (1990). Application of AMS to the biomedical sciences. Nuclear Instruments and Methods in Physics Research, 52(3-4), 524-530.



Alex Filippenko
By Prashant Bhat, Kuntal Chowdhary, Jingyan Wang, and Ali Palla

BSJ had the exciting opportunity to interview UC Berkeley's nine-time "Best Professor" winner, Professor Alexei Filippenko, an astrophysicist and professor of astronomy. His highly acclaimed research on progenitor stars and the explosion mechanisms of different types of supernovae has appeared in numerous TV shows, documentaries, and textbooks. Stemming from our topic of Death and Dying, we discussed the phenomenon of star death and the formation of brilliant supernovae.

BSJ: To begin, we wanted to know how you got involved in research on the cosmos and what led you to focus on supernovae?

F: As a graduate student at Caltech, I was doing a survey of the five hundred brightest, nearest galaxies in the northern hemisphere to find evidence for giant black holes that are swallowing material. Little miniature quasars. Quasars are bright, luminous bodies very far away. We think that they are big black holes swallowing lots of material at the centers of galaxies. So in nearby parts of the universe there should be descendants of quasars. In other words, the black holes should still be

there, swallowing material at a lower rate. I was doing a survey at the 200-inch (5.1 meter) Hale Telescope at the Palomar Observatory with my former thesis advisor at Caltech, Wal Sargent. This is now February of 1985, I’m a post-doctoral scholar now at Berkeley and at the end of the fifth night of the five night observing run I had time left for just two more galaxies to observe. I had a hundred possibilities because the survey was still in its early stages—I chose a galaxy almost at random because the picture of it looked interesting. So I said, “let’s survey that one.” When we pointed the telescope to that galaxy, we



BSJ: How do you determine accurate measurements of supernovae and other stellar particles from millions of light years away?

hydrogen it’s called Type II and if it doesn’t show hydrogen it’s called Type I. Now it turns out that Type I supernovae have several different subtypes: Ia, Ib, Ic. Ia are the classical Type I supernovae that are thought to be the thermonuclear runaway of a white dwarf star at the end of its life when it gets enough material from a companion star. The Ib and Ic supernovae are thought to be more related to Type II supernovae. The Type IIs are massive stars whose iron core collapses at the end of its life. That launches a rebound, which is then the explosion of the outer most parts. Core collapse versus thermonuclear runaway. The Ib and Ic supernovae I found in 1985 [mentioned earlier] helped solidify this idea that some Type Is are not the thermonuclear runaway of a white dwarf, but rather are the core collapse of a massive star that lost the outer envelope of hydrogen prior to an explosion. It spectroscopically looks like a Type I because it doesn’t

what parameters do you look at?

F: What we do is take photographs of thousands of galaxies each week and then we repeat the process, because even at the rate of only one or two supernovae per century per galaxy, if you're looking at enough galaxies some of those will produce a supernova. And once we find the supernova we study it in detail. We start taking more detailed measurements of it. We'd love to be able to predict which star will become a supernova, and we can predict it sort of in a general way, like Betelgeuse, the left shoulder of Orion. I can say with a lot of confidence that it will blow up sometime in the next half a million years. But I don't know when. It could be tonight, it could be half a million years from now. We would love to be able to predict that. We can't yet.

BSJ: Going back to the theme of death and core collapse, there are many descriptions and definitions

It wouldn’t keep the star hot inside and pressurized. What happens is the iron core builds up and then reaches a sufficiently big mass that it can no longer hold itself up against gravity, so it collapses. And the protons and electrons combine to form neutrons and neutrinos and you get a ball of neutrons, a neutron star, which overshoots its equilibrium point.. becomes smaller than it normally would, and then it rebounds a little bit. And the rebound that hits the surrounding layers launches them outward. And then the neutrinos that are also produced also help push on this material and that creates a successful explosion. So that in a simple way is kind of, you know, what happens. Imagine the basketball rebounding off the floor. That’s like the neutron star rebounding off of itself because the core collapses, overshoots like on a trampoline and then you rebound. So that’s that. And then if you have material around it, whose pressure support has suddenly vanished, it will fall, for example, a bouncing

“Imagine the basketball rebounding off the floor. That’s like the neutron star rebounding off of itself because the core collapses, overshoots like on a trampoline and then you rebound. So that’s that.” F: Well, we might be millions or even billions of light years away, but with a big telescope we can collect quite a bit of light— that’s what a telescope does. A gigantic eyeball that’s gathering light. We can pass that light through a prism or reflect it off of a grating and produce a spectrum. And with the spectrum we can study the chemical composition, the speed of the ejecta, the density of gases, etc. It is really through spectroscopy that we learn about the physics of the object and also through repeatedly taking pictures of the supernova and recording how fast it brightens and fades with time. BSJ: What are the different types of supernovae and how do you distinguish them? F: The major classification is based on whether the spectrum shows obvious hydrogen or not. If it shows

have hydrogen, but a bit of a weird Type I because it’s not a white dwarf. There are various observable characteristics from which we then try to get a physical understanding of what’s going on. And in the physical understanding there’s the thermonuclear runaway of a white dwarf versus the collapse of the iron core of a massive star. Those are the two main mechanisms. BSJ: It’s amazing how the most primitive element like hydrogen that we rely on most is absent from Type I supernovae. F: Right. Massive stars have gotten rid of their hydrogen prior to the explosion. They can do that through winds of their own and also by transferring material to a companion star. They can get rid of the hydrogen that way as well. BSJ: So how do you know which stars to research and


of how massive stars can undergo core collapse. How do you describe star death/collapse to your students? F: Near the end of a star’s life, there’s been a sequence of nuclear reactions where the ashes of one set of nuclear reactions becomes the fuel for the next set. So our sun right now is fusing hydrogen to helium and it does that for ten billion years, but later on it will fuse helium to carbon and oxygen. And our own sun will stop at that point because it’s not massive enough. But much more massive stars (say eight or ten times the mass of the sun or above) can fuse carbon and oxygen into things like neon and magnesium. And then silicon and sulfur and then finally iron. And there might be a few steps in between there. But the point is you build up an iron core. At each stage, the star releases energy through this nuclear fusion. Now iron nuclei fusing together would require energy rather than liberate energy. Iron is the most tightly bound of the atomic nuclei; fusing iron together wouldn’t do the star any good.
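The point that iron sits at the top of the nuclear binding-energy curve, so that fusing it absorbs rather than releases energy, can be illustrated with the semi-empirical (Bethe-Weizsäcker) mass formula. The sketch below uses common textbook coefficients and is only a rough illustration added here for readers (the formula is known to be inaccurate for very light nuclei); it is not something discussed in the interview itself.

```python
def binding_energy_mev(Z: int, A: int) -> float:
    """Semi-empirical (Bethe-Weizsaecker) estimate of total nuclear binding energy in MeV."""
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18   # common textbook coefficients
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = a_p / A ** 0.5          # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A ** 0.5         # odd-odd nuclei are less bound
    else:
        pairing = 0.0
    return (a_v * A - a_s * A ** (2 / 3)
            - a_c * Z * (Z - 1) / A ** (1 / 3)
            - a_a * (A - 2 * Z) ** 2 / A
            + pairing)

if __name__ == "__main__":
    # Binding energy per nucleon climbs toward iron and falls beyond it,
    # which is why fusing iron costs energy instead of releasing it.
    for name, Z, A in [("C-12", 6, 12), ("O-16", 8, 16), ("Si-28", 14, 28),
                       ("Fe-56", 26, 56), ("U-238", 92, 238)]:
        print(f"{name:6s} B/A ~ {binding_energy_mev(Z, A) / A:4.2f} MeV")
```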

tennis ball. Now these balls normally don’t go up to the height where they started because some energy has dissipated due to the collision. But, if I put them one top of each other, the tennis ball goes shooting off. There is this rebound which in the sense chemically launches the explosion. And just as in the case of Earth’s gravity, the ball, if it could go high enough, wouldn’t come down. So, two, this mechanism fails to explode the star completely. That’s where you need these neutrinos, which are not very interactive, but some of them do interact and help push the material out. That, in some cases, gives you a successful explosion. That’s the basic idea of core collapse of a supernova. What’s left over is a neutron star, in some cases a black hole because the neutron star is sometimes too massive and it continues to collapse to form a black hole. But the rest of the material is ejected away. BSJ: So as you were saying, once a star collapses, it becomes either a supernovae or a black hole. What are




noticed a bright star that seemed not to be in the right place. In other words, it wasn’t the bright central part of the galaxy. So I said, well look as long as we getting a spectrum of the nucleus of the galaxy, let’s get a spectrum of this other thing that’s near the nucleus in case it’s something interesting. We got a spectrum and it turned out to be an exploding star. So I kind of found one almost by accident without looking for it specifically. And it turned out to be a particularly interesting type: a new kind of stellar explosion, which I studied in the course of the next few weeks and we published a paper on it—I became really interested in stellar explosions as a result of that chance discovery. A message I can give is to be on the lookout for opportunities and take full advantage of them.


the factors that ultimately determine a star's fate? In your papers, we had seen several references to the Chandrasekhar limit, so we were hoping you could expand on that.

F: In the case of a core-collapse supernova, normally you would get a neutron star. But if the collapsing core is too massive, then either the whole star can collapse to form a black hole, or you get a rebound if it stops temporarily as a neutron star. You get this rebound, you get a bunch of neutrinos, and you can get a successful explosion, but the neutron star will continue to collapse to form a black hole. With a very massive core you can go directly to a black hole, or go through the supernova explosion and end up as a black hole. Most of the time, it ends up as a neutron star. Now the Chandrasekhar limit is technically the limit beyond which a white dwarf cannot grow. A white dwarf is what the Sun will become in about 7 billion years. And if it were to gain material from a companion, it could not exceed 1.4 solar masses; it would explode or collapse. In the case of the iron core of a massive star, there is an iron-core counterpart to the Chandrasekhar limit: the limiting mass beyond which the so-called electron degeneracy pressure, which is what holds these things up, can no longer do so. Basically, electrons don't want to be in the same state because they are fermions. They obey the Pauli Exclusion Principle. They don't want to be in the same state, yet they are being crammed into a smaller and smaller volume. So to be in that state, some electrons have to have a tremendously high momentum and tremendously high energy. This is not thermal random energy; it is an energy based on the Pauli Exclusion Principle and on the Heisenberg Uncertainty Principle. You have this quantum mechanical pressure holding the thing up, and beyond 1.4 solar masses, the Chandrasekhar limit, the degeneracy pressure is insufficient to hold something up against the pull of gravity.

BSJ: So taking a step back, you are in charge of the Katzman Automatic Imaging Telescope (KAIT) down near San Jose. Can you tell us a bit about the project and how you became involved?

F: In 1989, I got an award from the National Science Foundation called the Presidential Young Investigator Award. It gave me money with which to research, and they would give me more money matching what I would get from private donors, industry, and so on. I got a telescope company


to donate a fraction of a telescope, which was worth some money, and then I got money from the NSF and bought an equivalent amount of equipment from the same company, so I effectively got everything at half cost. KAIT takes CCD images (digital images) of galaxies. Typically a thousand per night, maybe several thousand (7,000-10,000) a week, and it repeats the process. I had a research associate, Weidong Li, who unfortunately passed away in December of 2011, program this whole thing to take images of galaxies and automatically compare the new images with the old images of the same galaxies. Out of the 1,000 images a night, we would get 15 candidate supernovae, but not all of them were for sure supernovae, because sometimes cosmic rays and charged particles interact with the detector and look like a star, or an asteroid may be passing through the field of view and it would look like a star. Then, a team of mostly undergraduate students looked at the few dozen images that the software tagged on the previous night as being potentially interesting, and with their superior eye-brain combination (laughs), would decide which ones were genuine supernovae and worthy of follow-up observations. For about a decade, we led the world in terms of the total number of new exploding stars, relatively nearby ones (within a few hundred million light years), discovered each year. We would typically find 80-90 each year and we would study some of them in detail. Now there are bigger telescopes with wider-angle cameras that are able to scan a bigger fraction of the sky. More galaxies in a shorter time. But for 10 years, we were the undisputed leaders in finding them. Now we are evolving in the sense that, since other groups are finding more supernovae now, we have turned our attention to finding younger ones. We look at fewer galaxies, but we look at them more frequently. For example, each night we will look at the same galaxies, instead of once a week. So if we discover a supernova, we are likely to discover it at an earlier stage of its explosion, when a lot of the interesting physics is being revealed. We are also spending more of our time following up on supernovae that we or other people discovered. It is no longer the world's most prolific discovery machine, but it is still at the leading edge of research.

BSJ: How long did it take the newer, wider-angle telescopes to surpass KAIT?

F: For 10 years we were told, we will blow you out of the water pretty soon, and I kept waiting. But more power to them, the science goes forward faster, that's great. But in fact it took other groups 10 years to achieve what we achieved. So we had a pretty good run, and we're still relevant, we're still doing good stuff. If I were to start a new project, I wouldn't build the exact same thing I built 15 years ago.

BSJ: What are the broader implications of your supernovae research? How does it help us understand the universe?

F: That's a good question; people might ask why spend any money on this kind of research. But there are a number of issues: first, we are learning our origins better; we see the elements of which we consist being created by stars and by the explosions themselves, and ejected into space. Over many generations of stellar birth and death you get this gradual enrichment of the primarily hydrogen and helium gases with which the universe was born, and you get an enrichment of heavier elements.
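The image-comparison step Filippenko describes above (new CCD frames of a galaxy checked against older reference frames, with people vetting the flagged candidates) can be caricatured in a few lines of NumPy. The sketch below is a deliberately crude stand-in, not the actual KAIT software, which also handles image alignment, PSF matching, cosmic-ray rejection, and moving-object checks.

```python
import numpy as np

def candidate_pixels(new_frame: np.ndarray, old_frame: np.ndarray, n_sigma: float = 5.0) -> np.ndarray:
    """Boolean mask of pixels that brightened by more than n_sigma times the
    noise of the difference image: a toy transient detector."""
    diff = new_frame.astype(float) - old_frame.astype(float)
    noise = np.std(diff)            # crude noise estimate; real pipelines do far better
    return diff > n_sigma * noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = 100.0 + rng.normal(0.0, 5.0, size=(64, 64))    # older image of a galaxy field
    tonight = reference + rng.normal(0.0, 5.0, size=(64, 64))  # new image of the same field
    tonight[40, 22] += 200.0                                   # inject a bright new point source
    print("candidate pixels:", np.argwhere(candidate_pixels(tonight, reference)))
```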






The long-term benefits from science are harder to


quantify or predict, but it was Newton sitting around trying to understand why the moon is in orbit that led to the development of much of classical physics. He didn't do it to build a better toaster. It was Einstein sitting around trying to understand the motion of objects at high speeds, or the nature of gravity itself, that led to special and general relativity, which are now used in technology. GPS wouldn't work if we didn't take into account the equations of special and general relativity. Quantum physicists a century ago, like Max Planck and Einstein, Schrödinger and Bohr, and many others, again didn't have any practical applications whatsoever in mind at the time. They were just trying to understand the nature of the atom and radiation at a deeper level. And now it's very difficult to conceive of the high-tech world, based on computers and microchips and lasers and so forth, without that understanding; it is hard to conceive of our modern world without understanding microscopic details, in particular quantum physics. That was a century ago, and if you had ever told those physicists that in 2013 the world would be the way it is based on quantum physics, they would have said, "Let's lock you up in the funny farm, you're insane." So it's difficult to predict what the long-term benefits will be of this kind of research.



And then eventually clouds of gas can form that are sufficiently enriched in heavy elements that once they collapse to form stars and planetary systems, some of those planets will be rocky earth like planets, and this clearly happened in our own solar system, and so we arose as a result of all these previous generations of stellar birth and death through explosions. So in a sense, Carl Sagan used to say that we are made of “star stuff” or star dust; quite literally, the carbon in your cells, the oxygen that you breathe, the calcium in your bones, the iron in your red blood cells were formed through nuclear reactions in stars. The realization that we came from stars is just one of the most amazing discoveries in the history of science. So we do it because we want to know, and in science of course there always is or there often are unanticipated spin-offs of a more practical nature. At the very least, we get kids excited about science and they go into technical fields. But, the hook was this cool stuff that kids hear about in the news, and they then study math and science and technology and most of them go on into more practical fields like applied physics, or engineering, or computer science, and that’s a direct benefit to society.

Another aspect of the supernovae, and why they're important and interesting, is that they're very powerful; they're very luminous and we can see them at very large distances. If we know how luminous, how powerful they really are (by calibrating nearby ones like the ones we find with KAIT), you can determine the distance of that supernova and hence the distance of the galaxy in which it's located. And by studying these supernovae in galaxies at progressively bigger distances, we're studying them progressively farther back into the past. We can therefore examine the history of the universe, and in particular, we can study the expansion history. Supernovae are interesting in and of themselves, but they also tell us about the birth and evolution of the elements and the evolution of our universe as a whole.
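The "standard candle" logic Filippenko sketches can be written down compactly: a calibrated absolute magnitude plus an observed apparent magnitude gives a distance through the distance modulus m - M = 5 log10(d / 10 pc), while the host galaxy's redshift z gives the expansion factor a = 1/(1 + z). The numbers in the sketch below are illustrative placeholders; real cosmological analyses also correct for light-curve shape, dust, and K-corrections.

```python
def luminosity_distance_mpc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance from the distance modulus: m - M = 5*log10(d_pc) - 5."""
    d_pc = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_pc / 1.0e6

def expansion_factor(redshift: float) -> float:
    """Relative size of the universe when the light left the supernova: a = 1 / (1 + z)."""
    return 1.0 / (1.0 + redshift)

if __name__ == "__main__":
    # Illustrative values: a Type Ia supernova seen at m = 23.0, assuming M = -19.3.
    print(f"distance ~ {luminosity_distance_mpc(23.0, -19.3):.0f} Mpc")
    print(f"expansion factor at z = 0.5: {expansion_factor(0.5):.2f} of today's size")
```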


BSJ: Amongst all the achievements you’ve had over your years as a researcher, what do you consider to be your proudest achievement? F: Well I’m enormously proud of my contributions to the research that led to the Nobel Prize. My main job on both teams was to get spectra of the distant

supernova candidates, making sure that they really are supernovae, and we wanted the Type Ia supernovae, the exploding white dwarfs. I was also responsible for getting the redshift of the galaxy in which they're located, that is, the amount by which the universe has expanded during the time that the light has been traveling toward us. So the supernova brightness tells us the distance, and the redshift tells us the expansion factor. By plotting the distance, and hence lookback time, versus the expansion factor, you get the expansion history of the universe. And from that we concluded that it's expanding faster now than it was five billion years ago, leading to this conclusion about acceleration driven by dark energy. And that was what was recognized by the Nobel Prize, so I'm very proud of my contributions to that project, because without the redshifts, and without knowing that these were Type Ia supernovae, we would have been dead in the water. I was the one who was primarily in charge of that aspect of it. In terms of something that I did myself and not as a team, I am proud that I took advantage of the opportunity that landed in my lap in February of 1985 and immediately started looking at the data, analyzing the data, and trying to understand what the data meant. I've seen other cases and in fact even in



So it was a lucky break that I just chose that galaxy and it happened to have this weird supernova, but I didn’t just sit around and not do anything. I was energized into motion and within two weeks we had submitted a paper to Nature on this discovery and its implications and that then led me down this path of studying supernovae. I was still interested in black holes and quasars and things but a whole new avenue of research opened up because I was ready to make this change and noticed that we had an interesting result on our hands. BSJ: The concept of Carl Sagan’s quote about how we are made from star stuff was very pertinent to our topic of death and dying, and how it relates to, literally, the death of the star bringing about new life. F: You might wonder that all these elements were there to begin with. But they weren’t. There was hydrogen, helium and a little bit lithium. That’s basically it. The elements have to come from somewhere, and it’s almost mind-boggling that we now know that the heavy elements in our bodies were generated in stars long ago. So, in other words, we definitely used to be part of a star. Nuclear reactions build up heavy elements from light ones.

we came from the stars. And that’s one of the key ideas that I tell my students in Astronomy C10. They have to know and remember that fact throughout their lives. Some day they might come back and if I ask them some obscure questions, okay if they don’t remember. But, if I ask them where the elements came from, and they don’t correctly answer, then I will retroactively fail them! They will lose the jobs that they got as a result of their good GPA at Cal! Obviously I’m joking, but it’s such an important concept. In the context of your topic, which is a really interesting one, coincidentally, it turns out that in June, I will be near Rome at a small gathering composed of philosophers, theologians, and scientists, discussing this very issue. The event is sponsored by the Templeton Foundation and there is a little seminar entitled “The Role of Death in Life.” My job is to talk about this very issue and philosophers and theologians will talk about other aspects. But yes, the topic is concerning the astrophysical role of death in life. BSJ: It’s somewhat fascinating how physics is seemingly starting to replace philosophy and the explanation of the origin of the universe. F: Philosophy and science have a very interesting love-hate relationship. Quite a few scientists really like philosophy, and quite a few really despise it, and say it has no business in any rational discourse on the observable, experimentally-verifiable universe. I personally think scientists and philosophers can coexist and can have fruitful conversations with one another. I don’t agree with one of my mentors, Dr. Dick Feynman at Caltech, who often belittles philosophy, actually saying there is no reason for it; there is no room for it.

Some stars have to explode, to get these synthesized elements out into space. It’s not enough to synthesize them through nuclear reactions; you need to get them out. The supernovae are important in getting them out and in producing (either directly or indirectly) all of the heavy elements.

But there are other scientists, like Einstein, who are deeply interested in philosophy and meaning of things at some ultimate level. You will get a diversity of opinions that I’m usually on the fence about. I run the middle line on that issue.

Studying this process of stellar death, especially violent death in the form of a supernova, informs us on the process of how clouds of gas get enriched in heavy elements and subsequently go through a new generation of star formation, followed by stellar death, and so on.

BSJ: Thank you very much for your time!

So by the time our solar system formed four and a half billion years ago, at least in some pockets of our Milky Way galaxy, enrichment up to a level of two percent by mass had occurred. So our sun is about two percent heavy elements. Earth is not a good representative of the composition of the universe; the sun is much more so. It's mostly hydrogen and helium, with two percent heavy elements. That took billions of years to get up to that point. We understand that process pretty well, at least in its simplest form, so I can tell people without any real doubt





my own case sometimes there have been exciting data I did not capitalize on because I didn’t work on them right away and didn’t realize that they were trying to tell something interesting. Sometimes you’re lucky and sometimes you’re not, but part of the key to success is in capitalizing on your lucky breaks and recognizing them when they’re in the process of happening.


Telomerase: The Elixir of Life?
An Interview with Professor Kathleen Collins
By Prashant Bhat, Kuntal Chowdhary, Jingyan Wang, and Ali Palla


Telomeres are short, repeating units of DNA located on the ends of chromosomes to protect genetic information during DNA replication. Telomere loss has been frequently associated with aging, suggesting that longer telomeres correlate with longer life span. Although not quite literally an “elixir of life,” components of the telomere complex have the exciting possibility to lead the path in creating new therapies for diseases, such as those involved in tissue failure. The enzyme responsible for generating telomeres, telomerase, is necessary in actively dividing cells like skin and blood cells, but also poses a danger when left uncontrolled in cancer cells. BSJ had the privilege to speak with Professor Kathleen Collins about her work, ideas, and remarks about the current state of telomere research.

BSJ: Our topic this semester is on death and dying.

We interviewed one of the astrophysicists here, Alexei Filippenko, on death of stars and core collapse. We thought your research in telomeres would be a nice parallel since we started our journey in astrophysics and now are working our way down to the inner workings of a cell. Primarily, to know more about you, can you tell us a bit about your background and how you first got involved in telomere research?

Collins: What science exactly you wind up studying

is often not something you predict because if you knew what research was going to be interesting, then you wouldn’t have to do that research. You would already know the answer. The way I wound up studying telomeres is slightly random. I had been in graduate school working on myosin and actin, so I was very interested in cell architecture and how cells build different kinds of surfaces. A leading edge of a cell looks very different than its contracting back

edge, and I was working on polarized intestine epithelia, which have to transport nutrients from the gut epithelial cells. The surface area that faces their gut is highly contoured and the surface area that faces where the nutrients are going to pass is not. So I was studying how that surface area gets highly contoured. And it turned out that there were molecular motors, these myosins, that were taking actin filaments and remodeling this whole area. From there I got interested in how molecular motors work, because in order to move, these actin filaments had to exert force. They had to translocate along their polymer. The idea is that you can translocate along a substrate and these enzymes do it incredibly efficiently, much more than any car anyone has ever built. And furthermore, these enzymes know which direction to go. Although there is a whole family of myosin proteins, there’s just one that knows to do



this surface remodeling. So how do different motors get specialized for their cellular task? How do motors work and how do motors get specialized? I would go to conferences and there were people who had worked for decades on actin and myosin, muscle contraction and I would offer some new hypothesis. And they’d say, “Yeah, I heard that twenty years ago!” So I was thinking, I want to study motors, but I want to study something that no one can get up and say, “Oh I heard that twenty years ago!”

move on nucleic acid is very different than how you move on a protein. So I thought about what to do as a post-doc, and I interviewed in a myosin lab and a kinesin lab, along the lines of what I was doing. But a friend of mine had shown me a paper about this new polymerase, telomerase, and there were maybe five papers on it. It seemed to be a very simple polymerase in the sense that it carried its own template, it bound one sequence-specific single-stranded DNA primer, and it added simple sequences of DNA. I thought, that's much simpler than RNA polymerase binding a double-stranded DNA and making an RNA. I'll study how polymerases move by studying telomerase. And there were two people working on telomerase, Elizabeth (Liz) Blackburn and Carol Greider. So I visited Liz at UCSF and Carol at Cold Spring Harbor, and I just realized, "Boy, this is cool. I am going to be able to do this because there is no biochemist working on this at all. I am going to purify this enzyme and then I am going to do this single-molecule assay where I'll add this template and I'll watch it step. And it will translocate each time it makes a repeat and I'll watch." And so I wrote my post-doc proposal on this: what is the mechanism? People came to me (I was at the Whitehead Institute at MIT) and asked, "What is telomerase?" And it makes sense, there were just a handful of papers on it, but they read the proposal and they liked the proposal, and so I got a post-doc fellowship and I went to Cold Spring Harbor. That was the incredibly naïve choice, one of many, because we didn't know anything about telomerase. Right now, we are still reconstituting that enzyme. The very first single-molecule assay was done just about five years ago by a former Berkeley graduate student, Michael

BSJ: Telomerase is an RNP

(ribonucleoprotein) complex. Is there a hypothesis or advantage for using RNA as template rather than a DNA template?

Collins: That’s a great question. So I could have thought to ask myself that twenty years ago, but I didn’t. It turns out that now we know the answer to that question. We spent quite a while trying to get telomerase to work as a protein with a little RNA and it would never work. It would never take a little template, whereas other reverse transcriptases, like viral reverse transcriptases take any RNA template base-paired to a DNA primer. They don’t need a big RNA component. The just need any ol’ RNA. But telomerase wouldn’t do that. A student, Michael Miller, performed an experiment with the following question: is it the template that is the problem or are there factors that the telomerase RNA provides, other than a template? So we took the template region as an RNA oligonucleotide, and we took the rest of the telomerase RNA as the RNA body. The template alone wouldn’t work, but as two physically separate molecules, he put back that rest of RNA. Now telomerase could copy an external exchangeable RNA template. So the template doesn’t need to be internal within the telomerase RNA. Telomerase needs the non-template motifs as a cofactor for its basic activity. We begin to understand that this constitution experiment lets us ask separately what is needed for template recognition and what is needed from the non-template motifs of the RNA. Protein-RNA interactions help fold the active site of telomerase, so the RNA is providing allosteric modulation of the protein. It’s bringing together different protein domains. One positions the template and one binds the primer and it’s doing other things we haven’t figured out yet. But the reason telomerase has to be a ribonucleoprotein


(RNP) with an RNA template is that it needs the non-template RNA portions for its activity. And this may be a part of the bigger question of evolution. There was an RNA world: RNA can fold, and RNA can have many functions due to its ability to fold. The way that RNA folds is very different from how proteins fold. So the RNA world may have given rise to the early RNP world, where catalytic RNAs were helped by proteins. Protein active sites came to dominate. But now we're in a world where, in evolution, there is an explosion of non-coding RNA function. These new non-coding RNAs are not catalytic, but they still can fold and have protein interactions. And that combination of RNA and protein folding gives a much greater structural and functional repertoire than a protein alone. So telomerase is just expanding from the protein world into this new RNP renaissance, where these non-coding RNAs can give the protein new functionality.

a tertiary structure fold. Another reason it's difficult to find telomerase RNAs is that they may be 150 nucleotides in one species and 1,500 in others. Yet amidst all of this, there are a few motifs that are conserved. One is called a pseudoknot, which is a base triple pairing (three strands coming together). The other is an RNA stem-loop. The stem-loop is the binding site that binds to the reverse transcriptase protein and somehow allosterically gives it an active conformation. We are pretty sure why the stem-loop is conserved, but we have absolutely no idea what the pseudoknot motif does, despite studying it for a very long time.
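Collins's point that these RNAs keep their structure while their sequences diverge can be made concrete: a stem is conserved as long as the two strands still base-pair (Watson-Crick pairs plus the G-U wobble), whatever the particular letters are. The snippet below is a hypothetical illustration, not a tool used in telomerase research.

```python
# Watson-Crick pairs plus the G-U wobble pair commonly tolerated in RNA helices.
RNA_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def forms_stem(strand5: str, strand3: str) -> bool:
    """True if the two strands can pair along their full length in antiparallel orientation."""
    if len(strand5) != len(strand3):
        return False
    return all((a, b) in RNA_PAIRS for a, b in zip(strand5, reversed(strand3)))

if __name__ == "__main__":
    # Different sequences, same paired structure: compensatory changes preserve the stem.
    print(forms_stem("GGCAU", "AUGCC"))   # True
    print(forms_stem("GCGAU", "AUCGC"))   # True, despite sequence divergence
    print(forms_stem("GGCAU", "AUGGC"))   # False: a mismatch breaks the stem
```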

BSJ:

Telomerase is commonly associated with aging, and although there are other factors that influence lifespan, what is the potential application for prolonging life by controlling telomerase?

Collins: That’s a good question. When

I was working on telomerase because of this link of telomere shortening and aging, people started saying, “Oh! It’s the fountain of youth! If we could activate telomerase, we would cure aging.” And I would always said, “Oh, come one! That’s ridiculous!” So, I was very skeptical. This idea came about because in the lab we were trying to understand the composition of the enzyme and we discovered that human telomerase had a protein in it that had been previously cloned as the locus of a disease in humans. Through that, we were able to show—to great skepticism in the community—that mutations in a particular protein called Dyskerin gave rise to disease by decreasing the amount of telomerase. So if you reduce the amount of telomerase you have, you die of bone marrow failure. If you decrease your amount of telomerase by half, you die of bone marrow failure in your 30s and 40s. And if you reduce it by ¼ its normal amount, you die in your teens. What this forced me and many other people to think about is if you have a little reduction, you run out of renewal, but only in certain highly proliferative tissues. Blood cells, for example, are turning over constantly—you need to renew them all the time in addition to the cells of the intestinal and epithelial tracts. If you look at people who inherit telomerase deficiency, they have many epithelial problems. So then, now you can go and look in the population and ask these questions. There are new population studies that try to correlate telomere length with either longevity, which is life span, or something we like to talk about more, health span, which is how healthy your tissues are. There is a correlation, in fact, between telomere length and cellular renewal capacity and lifespan. Telomerase is one of many things that would determine telomere length. We can’t do this experiment in humans (we can’t activate telomerase

BSJ: That perfectly transitions to our next question. In a review article you authored, you discussed the evolutionary conservation of various telomerase RNAs (TERs). Could you explain the evolutionary advantage of these conserved motifs?

Collins:

So evolutionary conservation is a way to look at important regions of a molecule because if they can’t mutate, they are selected for their function. What’s surprising about telomerase is that it is evolving very rapidly. If an enzyme is essential for chromosome replication, it could not tolerate much change. But in fact, over evolution, you can’t even recognize telomerase RNAs of yeasts versus vertebrates versus plants. And that’s abnormal for an enzyme that is highly conserved in DNA replication. But RNA has an interesting property that comes from its folding; if there is a region of secondary structure, like a base-paired stem, it doesn’t necessarily matter what the sequence is. All that matters is that it base pairs. So the sequence can diverge and the structure can stay the same. And you can’t predict structure from sequence. But the other difference for RNA is because small sequences of RNA can fold independently. RNA tolerates insertions and deletions very well compared to a protein, where a much longer amount of protein is required to make




So I went to a conference on molecular machines, and in addition to all the actin, myosin, kinesin, dynamin talks, there was a talk about RNA polymerase and how it moved along DNA. And I thought, “Great! Instead of cytoplasm, I’ll study a polymerase because there are far fewer people who have thought about this and it must be different.” For example, how you

Stone who had decided to do a post-doc in a single molecule lab at UC Santa Cruz. He decided, ‘I’m going to study telomerase and I am going to do singlemolecule method.’ And you know, that’s twenty years after from when I was working on myosin and actin. So this idea that I am going to use this simple system to study translocation events and molecular motors got me into telomerase. However, we have had to solve so many other questions before we can even think about this translocation mechanism. A current student in my lab, Alex Wu, is doing that right now. He is at the point where he is doing a collaboration with Ahmet Yildiz. He is going to watch this translocation event and he may accomplish it. So that’s the longwinded story of a really simple answer, which is, not at all for the reasons that we actually wound up studying telomerase.
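The repeat-addition cycle described here (the enzyme copies its internal RNA template onto the DNA end, translocates, and copies it again) can be caricatured in a few lines. The sketch below is a toy model for intuition only, using the canonical human telomeric repeat TTAGGG; it says nothing about the real kinetics or processivity that single-molecule assays measure.

```python
HUMAN_TELOMERE_REPEAT = "TTAGGG"   # canonical human telomeric DNA repeat (5'->3')

def extend_telomere(dna_3prime_end: str, cycles: int) -> str:
    """Toy model of telomerase action: each cycle copies the internal RNA template
    onto the DNA 3' end (adding one repeat), then the enzyme translocates."""
    extended = dna_3prime_end
    for _ in range(cycles):
        extended += HUMAN_TELOMERE_REPEAT   # one template copy + one translocation per cycle
    return extended

if __name__ == "__main__":
    overhang = "TTAGGGTTAGGG"               # a telomeric 3' overhang to be elongated
    print(extend_telomere(overhang, cycles=3))
```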


But for some of them, in particular things that compromise our health span in the setting of chronic infection, like hepatitis or HIV, or environmental exposure to toxins that force high rates of cellular renewal in a certain tissue, it will help. But there is a downside. Just recently, there was a publication showing that people who have two times the normal amount of telomerase in certain tissues inherit a higher risk of developing melanoma. So it is going to be a fine balance between anti-aging and anti-cancer, a tissue-by-tissue, person-by-person choice, and I think we are going to need to check telomere length before we consider treating a patient with a telomerase activator.

genome instability, or convert to cancer. How to interpret a telomere length test is a very interesting thing because longer lengths are not better; too little is bad, but everything else is okay.

BSJ:

Elizabeth Blackburn gave a talk that we attended and through her data, she brought up the idea that women’s telomere lengths are substantially longer, and that telomere length correlated perfectly with age . It was outstanding to see that striking correlation.

Collins: I

think that is a useful indicator of the amount of stress, potentially, or the load of replication stress on a person. And I think what Liz was trying to say was you can use the rate of loss of telomere length as a warning sign, just like we use cholesterol as a warning sign. Now, you have an intermediate range of cholesterol; one number doesn’t necessarily tell you how to do therapy. If your numbers are increasing you might want to modify your lifestyle – if you’re old and your numbers are high you might want to modify your lifestyle. So if you’re young and your telomeres are short, Liz would say you want to modify your lifestyle because you need those telomeres for the rest of your life. Or if your rate of loss of telomeres is very high, again that might be a thing to concern yourself with because if you continue that rate, your telomeres are going to wind up too short. So as a monitor of lifestyle, I believe that would be the best theory for that.
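The idea of watching the rate of telomere loss, rather than a single length, can be pictured with a toy simulation of shortening over successive cell divisions. The starting length, loss per division, and critical threshold below are illustrative ballpark placeholders, not clinical values, and the model ignores everything else Collins cautions about.

```python
def divisions_until_critical(start_bp: int = 10_000,
                             loss_per_division_bp: int = 70,
                             critical_bp: int = 4_000,
                             telomerase_add_bp: int = 0) -> int:
    """Count cell divisions until telomere length drops below a critical threshold.

    telomerase_add_bp crudely models partial compensation by telomerase
    in highly proliferative tissues."""
    length = start_bp
    divisions = 0
    while length > critical_bp:
        length += telomerase_add_bp - loss_per_division_bp
        divisions += 1
        if divisions > 10_000:      # guard: with enough telomerase the telomere never runs out
            break
    return divisions

if __name__ == "__main__":
    print("no telomerase:     ", divisions_until_critical())                      # ~86 divisions
    print("partial telomerase:", divisions_until_critical(telomerase_add_bp=40))  # ~200 divisions
```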

BSJ: Is there a happy medium between too much and too little telomerase activity? Is there a known "on-and-off" switch?

Collins:

No, not really. So what’s interesting is that this telomere phenotype is unlike any other genetic basis of disease. If you have inherited sicklecell anemia, every cell in your body has a mutant in this gene, and that gene product is functional or dysfunctional; you genetically test for the disease. Telomere length is not that way. You can have long telomeres, medium telomeres, short telomeres— everybody is equal until the telomeres become critically short. Cancer doesn’t need telomerase until it wants to go metastatic, right? It can grow and grow and grow and then it hits telomeres that are too short and if it doesn’t have a telomere, it stops. Benign cancers don’t have to have telomerase. And likewise, you can fight off all the infection you want if your telomeres are short, or long, or intermediate. So no, there’s no way to say what’s a perfect telomere length, because a whole range of telomere lengths are just fine. It’s the risk of having very short telomeres that’s the problem, because it’s the very short telomeres that will cause the cell to die, cause

BSJ: The broader implications of telomere research might relate to your earlier statement about “the fountain of youth,” but that is a goal for the long-term future. What do you think is the next step for the telomere field in the short-term? Collins:

I think there are implications that are relevant to clinical therapy right now. So, for example, bone marrow failure patients, aplastic anemia, or any disease remotely associated with tissue failure, the choice of therapy is important, because we know that not all therapies are successful. Many people get the same drug, and ten people recover and ten people will not recover. And the great leap forward is going to be predicting. It would be great if we could tell in advance which people are going to respond positively to the chemotherapy and whose lives are going to be made worse by taking that chemotherapy than if they hadn’t taken it to begin with. You could both use treatments more selectively, but you could also greatly improve quality of life by treating the people who are going to benefit from it and not giving the side effects to people who aren’t. So for example, if I had any anemia, I would get my telomeres tested, because the standard therapy


is to give someone a hormone that will stimulate blood stem cells to make more blood. But if you have exhausted the telomere length in your blood stem cells, that hormone is pointless. So why wait for that to fail? The standard way to do a bone marrow transplant is to ablate your entire existing bone marrow and then put in the new marrow, but if your existing bone marrow is not going to proliferate anyway, why radiate somebody and cause more harm that needs to be repaired, right? And also, if you're going to do a bone marrow transplant, studies have shown very clearly that the proliferative demand on the transplanted cells is very high. We lose telomeres in our immune system very rapidly, from ages about 1 to 4, and then it levels off, which is good, because we couldn't continue to lose telomere length at the rate we lose it at that age. But if you transplant bone marrow, telomeres are lost at a very high rate, because it's replicating a lot. So you transplant bone marrow from someone who is healthy, but maybe their telomeres are long enough that they are going to be fine with the rate of telomere attrition needed to support their immune system. But maybe the marrow transplanted doesn't have long telomeres, so I would type all bone marrow transplant donors for telomere length before taking cells to transplant. And I would only take a transplant if I knew those cells were going to have the capacity to renew. I think there are immediate therapies for these things, with medical precedent, where short telomeres are going to determine the outcome of the therapy. And I think in broader cases, there may be ways to evaluate the toxicity of therapies based on that. In the choice of future therapy, we should not rely on just using telomerase, but on using whole realms of information to be able to pick and individualize therapy. I'm not saying to sequence everyone's genome and design a life for them based on that, but if you have a certain cancer, and we want to treat it, I think there needs to be a way, including telomeres and other genetic tests, to make that choice of treatment.

BSJ: With the telomere field growing so rapidly, and knowing that you've been following it since the beginning, how do you think the field has changed since you first began your research?

Collins: Wow, it's a good question. I think early on this aging idea took over, and it was just too early. There was a phase when every question I got at a seminar would be about aging. People realized that we didn't know enough yet to ask that, so that kind of died down. Then there was an "oh my gosh, we can cure cancer" phase, and a lot of studies surveyed different kinds of cancer for how many telomeres there were. People realized it was still too early to do anything about that, because we still didn't know enough about the basic mechanisms of how telomerase works. You are not going to cure cancer if you don't actually know what step to treat. So that sort of died down. Right now, much more of the field is doing very fundamental biology about what a telomere looks like and how it is dynamic over the cell cycle. Not just what a telomere looks like in an average cell, but what telomeres look like at different points. How does it cause—at a molecular level—cell death or cancer? How is telomerase brought to a telomere, and how does the cell know how much telomerase to make? I think it's also really important for any field to bring in new ways of thinking and new expertise. For example, we are now seeing people who are interested in high-resolution imaging and new model systems, which will be helpful.

BSJ: Our last question is about HeLa cells. In regard to HeLa cells, which are considered immortal, what is the basic mechanism of telomerase in these cells that renders them immortal?

Collins: We like to say proliferatively immortal when we are talking about telomerase, because you can destroy HeLa cells with a little bit of bleach, a little bit of cold, or a little bit of heat. So it's not really like Tuck Everlasting, but it's true that they have the capacity to be immortal. And what gives them that is a de-regulation of the limit on telomerase production. HeLa cells are a little different, also, in that they have more telomere ends than normal, because cancer cells are often amplified in their chromosome content. So, probably in order to support those amplified chromosomes, the cell had to dramatically up-regulate telomerase, more than it would in a normal cell. Of course, this would not occur in any normal cell development, and it therefore had to mutate to do that. We don't know what causes that. And we all use HeLa cells as a canonical model system. In fact, Dirk Hockemeyer, a new MCB faculty recruit, is studying human embryonic stem cells. He is going to look at telomerase regulation in those cells, and that will be a very interesting model, because he can ask: how are expression levels of the components controlled? We have a collaboration to help him ask: what controls telomerase getting to a telomere in human embryonic stem cells? I think only in comparing those cells to HeLa cells will we know what makes HeLa "HeLa," because right now "HeLa" is all we know. If it were not for HeLa cells, there would be no human telomerase work.

BSJ: I think that is a great place to end. Thank you so much for your time!



Activity Patterns of Golden Eagles in San Benito County, CA
Taichi Natake
Department of Environmental Science, Policy, and Management, College of Natural Resources, University of California at Berkeley
Keywords: Golden Eagle, Aquila chrysaetos, activity pattern, habitat preference, San Benito County, California


Abstract
The golden eagle (Aquila chrysaetos) is a top predator of lagomorphs and ground squirrels in open, mountainous habitats in western North America. It is currently listed as a species of concern by the U.S. Fish and Wildlife Service. Understanding the diel and seasonal use of water could enhance conservation of this important species. I quantified the visits (N = 402) of golden eagles to 13 water sources using camera trap photo data obtained at the Ventana Ranch, San Benito County, California.

Frequency of occurrence was analyzed with chi-square tests to test for diel and seasonal activity patterns. Golden eagle activity peaked between 10 am and 5 pm PST; there were no nocturnal visits to water sources. Bathing and drinking were noted at 32% and 14% of visits, respectively. Visits were rare during the spring breeding season and peaked in the hot months of July and August, when both adults and juveniles were detected.

Introduction
The golden eagle (Aquila chrysaetos) is a raptor found in mountainous regions (elevation 300-1000 m) with sufficient open areas (Snow 1973, Whitfield et al. 2007, Watson 2010). It prefers remote, interior landscapes with minimal urban development (Whitfield et al. 2007). Apex predators such as the golden eagle can strongly influence species at lower trophic levels through direct predation or by modifying the behavior of other species (Roemer et al. 2002). Golden eagles are characterized by large territories and a low tolerance for human activity (Snow 1973). Due to these characteristics, golden eagles are considered a focal species and a "space-demanding habitat-quality indicator of the ecosystem" (Beazley and Cardinal 2004). Golden eagles are experiencing a decline in numbers due to loss of habitat and are currently listed as a "species of concern" by the U.S. Fish and Wildlife Service under the Bald and Golden Eagle Protection Act. The diel activity pattern of an animal is a fundamental aspect of its ecology, and it is influenced by both biotic and abiotic factors (Vieira et al. 2010).

Ambient temperature affects the daily activity pattern of eagles, and warmer temperatures induce higher activity (Bozinovic et al. 2000, Vieira et al. 2010, Zalewski 2000). Raptors are affected by high temperature more than other birds (Schleucher 1993). A severe winter can hinder eagles' reproductive activity if prey availability decreases (Steenhof et al. 1997). Nonetheless, the effects of temperature on the diel activity of the golden eagle have not been quantified.
Objective
In this study I quantified the effects of season, hour of the day, and habitat on the activity of golden eagles at waterholes. I hypothesized that higher temperatures would increase the need for water and thus the frequency of visits to water sites. Therefore, I predicted that I would observe the highest number of visits in the early afternoon in summer. Higher coverage of short vegetation should increase the visibility of prey to golden eagles searching from the sky, so I also predicted that areas with the highest grassland coverage would have the maximum number of golden eagle visits.




Materials and Methods
Study Site
This study was part of a larger one designed to document the status of terrestrial vertebrates on the Ventana Ranch. The Ventana Ranch (VR) is privately owned rangeland in southern San Benito County with an area of 2,540 acres (36°22'20" N, 120°55'26" W). The elevation ranges from 550 to 1200 m, and the area is characterized by rugged topography. The climate at VR is Mediterranean, with cool, moist winters (November through March) and hot, dry summers (April through October). The VR comprises three main vegetation types: grassland, chaparral, and oak woodland. Livestock have been excluded for a decade to allow regeneration of native grasses (Voelker 2010).
Survey
The documentation of vertebrates started in 2006, and as of December 2010 there were 13 camera trap sites continuously monitoring water sources (springs, ponds, troughs) on the Ventana Ranch. RECONYX camera traps (Models RM30, PM35T, and most recently Model PC900; http://www.reconyx.com) were mounted on trees or posts approximately 2 m high. All cameras were set to "high sensitivity," "no delay," "continuous operation," and "one photo per trigger." These settings provided approximately one photo every other second while an animal was active at the water source; the more active the animal was at the site, the more photos were taken. Cameras were checked monthly, and bait (~30 lbs of Hog Grower Pellets, 16% protein) was provided in view of the camera at each check. Golden eagles showed no interest in the bait. Photos taken in 2009 and 2010 were used in this report. Photos were downloaded with MapView 3.1, a software program provided by RECONYX. The photos were sorted by species, and all photos with golden eagles were separated by camera location. Information on "bouts" of eagle activity (visits) was then entered into an Excel spreadsheet (http://microsoft.com). This information included the location, ambient temperature, time (month, Julian date, and time of day) of the bout, the duration of the bout, and the number of golden eagles observed during each bout. A bout was considered to be the duration of stay of one or more individuals of a species at a trap site: it starts when the first individual shows up at the site and ends when the last individual of a group leaves. A bout was considered ended when the last eagle appeared in a photo, or when more than 15 minutes elapsed between photos. All photos too vague for reliable identification were removed from the analysis. Duration was determined to the nearest minute, and if there was just one photo for a bout, the duration was assumed to be one minute.
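For readers who want to reproduce the bout definition, the sketch below groups photo records into bouts in R, the platform used for the analysis that follows. The data frame and column names (photos, site, timestamp) are hypothetical placeholders rather than the original spreadsheet layout; only the 15-minute cutoff and the one-minute minimum duration come from the text above.

```r
# Minimal sketch of the bout-grouping rule described above.
# `photos` with columns `site` and `timestamp` is an assumed layout.
library(dplyr)

group_bouts <- function(photos, gap_minutes = 15) {
  photos %>%
    arrange(site, timestamp) %>%
    group_by(site) %>%
    mutate(
      gap_min  = as.numeric(difftime(timestamp, lag(timestamp), units = "mins")),
      new_bout = is.na(gap_min) | gap_min > gap_minutes,  # a >15 min gap starts a new bout
      bout_id  = cumsum(new_bout)
    ) %>%
    group_by(site, bout_id) %>%
    summarise(
      start    = min(timestamp),
      end      = max(timestamp),
      n_photos = n(),
      # single-photo bouts are assigned a one-minute duration, as in the text
      duration_min = pmax(1, round(as.numeric(difftime(max(timestamp),
                                                       min(timestamp),
                                                       units = "mins")))),
      .groups = "drop"
    )
}
```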

Statistical Analysis
The Excel spreadsheet was converted into a CSV file and analyzed with the R software platform (http://www.r-project.org/). Chi-squared tests were used to test whether the frequency of bouts varied by time of day and by month of the year. Patterns were similar between years, so all data were pooled. The number of individuals captured in each photo was also recorded, and its seasonal variation was examined with a scatterplot. The vegetation cover of each site was determined by analyzing a satellite image of VR, and linear regression was used to analyze the relationship between eagle occurrence and grassland cover.
Results
After the removal of vague pictures, I recorded 402 bouts in which golden eagles visited one of the 13 camera trap locations from January 1, 2009 to December 31, 2010. The longest bout lasted 129 minutes. In this analysis, three individuals were identified: an adult male, an adult female, and a juvenile. The adults were hard to distinguish from each other, but the juvenile was easy to identify due to the white feathers in its tail. Bathing and drinking were noted at 32% and 14% of visits, respectively. There was a clear diurnal pattern of bouts (chi-squared test: χ² = 201.5, df = 13, p < 2.2 × 10⁻¹⁶), with 80% of detections occurring between 10 am and 5 pm PST. Only 4% of all bouts occurred between 7 and 9 am, and only 1.6% occurred between 6 and 8 pm. No bouts were observed between 9 pm and 6 am (Figure 1A). There was strong seasonality in the number of eagle visits to the camera sites (chi-squared test: χ² = 280.3, df = 11, p < 2.2 × 10⁻¹⁶). The number of bouts peaked in July (21% of bouts), August (19%), and September (19%). Golden eagles were rarely detected from winter through early spring (December through April), and March had the lowest number of detections (0.77% of bouts) (Figure 1B). The largest number of golden eagles observed in one picture was three (an adult male, adult female, and juvenile), and this occurred only twice, both in September. This number stayed low throughout winter and spring (January through June) (Figure 2). A linear regression between grassland cover and the logarithm of golden eagle occurrence showed a positive relationship (y = 0.238 + 4.313x, p = 0.08) (Figure 3). The grassland coverage at the site with the largest number of bouts was 60%.
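A minimal R sketch of the two tests and the regression reported above is given below. The data frame and column names (bouts$hour, bouts$month, sites$n_bouts, sites$grass_cover) are assumptions for illustration, and the natural logarithm is used because the paper does not state the log base.

```r
# Sketch of the analyses described above (assumed column names).

# Goodness-of-fit tests for non-uniform diel and seasonal distributions of bouts
chisq.test(table(bouts$hour))    # reported: chi-squared = 201.5, df = 13
chisq.test(table(bouts$month))   # reported: chi-squared = 280.3, df = 11

# Habitat relationship: log(eagle occurrence + 1) versus proportion grassland cover
fit <- lm(log(n_bouts + 1) ~ grass_cover, data = sites)
summary(fit)                     # reported fit: y = 0.238 + 4.313x, p = 0.08
```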



Figure 1A: Diel activity pattern of the golden eagle at 13 water sources with camera traps at the Ventana Ranch, San Benito County, California, 2009-2010. N = 402. (x-axis: hour of day; y-axis: frequency of bouts)

Figure 1B: Seasonal activity pattern of the golden eagle at 13 water sources with camera traps at the Ventana Ranch, San Benito County, California, 2009-2010. N = 402. (x-axis: month; y-axis: frequency of bouts)

Figure 2: The number of individual eagles detected in one picture. Each dot represents a bout at one of 13 water sources with camera traps at the Ventana Ranch, San Benito County, California, 2009-2010. N = 402.

Figure 3: A linear regression shows eagle occurrence is more likely in habitat dominated by grassland than by forest or shrubland. (x-axis: proportion grassland cover (GRASS); y-axis: log(eagle occurrence + 1))



Discussion


Diel Pattern
The absence of golden eagle visits to water between dusk and dawn is consistent with the fact that most birds roost throughout the night. This is related to their foraging behavior, which relies on highly advanced eyesight and flying ability (Watson 2010). The frequent drinking and bathing, especially in the hot summer months, indicate that the camera trap sites were used as water sources. This result contradicts the current notion that large birds of prey can acquire the water they need from their food (Watson 2010). Birds of prey, including golden eagles, are vulnerable to high temperatures (Bozinovic et al. 2000), and the frequency of energy-consuming activities such as foraging should decrease as ambient temperature increases (Schleucher 1993). Bathing and drinking have cooling effects. Moreover, high temperatures induce thermoregulatory behaviors in golden eagles, including panting, which causes water loss by evaporation (Prinzinger 1976). Therefore, the demand for water should noticeably increase when the ambient temperature exceeds a threshold. This may explain the high bout frequency in the middle of the day and the low frequency in the early morning and late afternoon.
Seasonal Pattern
In this study one juvenile was observed, so it is likely that at least one of the two years had a successful breeding season. In the golden eagles' breeding season, early spring is usually spent on nest building, egg-laying, and incubation (Watson 2010), with incubation lasting up to 40 days (Snow 1973). During this time, incubation keeps female eagles from hunting, so their activity away from the nest is predicted to be low. This notion is supported by this study, which showed a low frequency of bouts (of only one individual) in February through March. In contrast, after juveniles fledge from the nest (July), parents require more foraging activity to feed their offspring (Tjernberg 1981). This idea is also supported by my results, in which both the frequency and the number of individuals per bout increased after spring, reaching a maximum in July. The bouts with three individuals observed in September indicate that the juvenile was still in its parents' territory, which is common for several years after fledging (Watson 2010).
Habitat Preference
A linear regression (Figure 3) suggested a positive relationship between eagle visit frequency and the amount of grassland around the water source (p < 0.1).

This trend may result from their foraging behavior, which favors open spaces because of the high visibility. However, woody vegetation provides protection against ground predators and provides nesting sites (Watson 2010). Moreover, the increased duration and frequency of bouts in summer suggest that excess heat is a potential threat to golden eagles and that shade-creating vegetation may be a benefit within a territory.
Limitations of the Experiment
Because individuals were difficult to distinguish, the total number of golden eagles captured by the camera traps is unknown. The results are therefore likely to be affected by pseudoreplication, as defined by Hurlbert (1984). If that is the case, the observed trends may be biased by a small number of individuals. This caveat is especially common in studies of rare species (Hurlbert 1984), and it is not feasible to collect behavioral data from a large number of golden eagles at a study site of limited size. I therefore suggest that similar studies be carried out on different groups of golden eagles to test the reliability of the findings generated by this study.
Acknowledgements
I thank Professor Reginald H. Barrett for his guidance and for letting me use some of his camera trap data. I also appreciate the generosity of Mr. Phillip Berry, owner of the Ventana Ranch, in letting me carry out this research on his property.
Literature Cited
Beazley, K., and N. Cardinal. 2004. A systematic approach for selecting focal species for conservation in the forests of Nova Scotia and Maine. Environmental Conservation 31:91-101.
Bozinovic, F., J. A. Lagos, R. A. Vasquez, and G. J. Kenagy. 2000. Time and energy use under thermoregulatory constraints in a diurnal rodent. Journal of Thermal Biology 25:251-256.
Hurlbert, S. H. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54:187-211.
Prinzinger, R. 1976. Temperature regulation and metabolism regulation of the jackdaw Corvus monedula, the carrion crow Corvus corone, and the magpie Pica (Corvidae). Anzeiger der Ornithologischen Gesellschaft in Bayern 15:1-47.
Roemer, G. W., C. J. Donlan, and F. Courchamp. 2002. Golden eagles, feral pigs, and insular carnivores: How exotic species turn native predators into prey. Proceedings of the National Academy of Sciences of the United States of America 99:791-796.




Schleucher, E. 1993. Life in extreme dryness and heat: a telemetric study of the behavior of the diamond dove Geopelia cuneata in its natural habitat. Emu 93:251-258.
Snow, C. 1973. Habitat Management Series for Unique or Endangered Species: Report No. 7, Golden Eagle, Aquila chrysaetos. Bureau of Land Management, U.S. Department of the Interior, Publication No. T-N-239. Denver Service Center, Denver, Colorado, USA.
Steenhof, K., M. N. Kochert, and T. L. McDonald. 1997. Interactive effects of prey and weather on golden eagle reproduction. Journal of Animal Ecology 66:350-362.
Tjernberg, M. 1981. Diet of the golden eagle Aquila chrysaetos during the breeding season in Sweden. Holarctic Ecology 4:12-19.
U.S. Fish and Wildlife Service. 1999. Title 16, Conservation, Chapter 5A, Protection and Conservation of Wildlife: Bald and Golden Eagle Protection Act. <http://www.fws.gov/migratorybirds/mbpermits/regulations/BGEPA.PDF>
Vieira, E. M., L. C. Baumgarten, G. Paise, and R. G. Becker. 2010. Seasonal patterns and influence of temperature on the daily activity of the diurnal neotropical rodent Necromys lasiurus. Canadian Journal of Zoology 88:259-265.
Voelker, W. B. 2010. Ventana Ranch Resource Management Plan. Thesis, University of California, Berkeley, California, USA.
Watson, J. 2010. The Golden Eagle. Yale University Press, New Haven, Connecticut, USA.
Whitfield, D. P., A. H. Fielding, M. J. P. Gregory, A. G. Gordon, D. R. A. McLeod, and P. F. Haworth. 2007. Complex effects of habitat loss on golden eagles Aquila chrysaetos. Ibis 149:26-36.
Zalewski, A. 2000. Factors affecting the duration of activity by pine martens (Martes martes) in the Bialowieza National Park, Poland. Journal of Zoology 251:439-447.



Orientation-Dependent Neuronal Degradation Resulting from Axonal Strain Experienced in Football-Realistic Acceleration
Evan Lyall¹, Spencer Scott¹, Jason Silver¹, and Samantha Smiley¹
¹Bioengineering, University of California, Berkeley, Berkeley, CA


Abstract
Traumatic brain injury (TBI) is a common occurrence that results in neuronal death with hazardous long-term effects. Modeling TBI computationally is necessary in order to gain a better understanding of mechanical effects on neurobiological injury cascades and injury thresholds. A model of a single axon was subjected to accelerations observed in American football to test for the axonal membrane strains necessary to induce an apoptosis pathway. A neuronal membrane strain of 0.20 [1] has been found to cause the Ca2+ influx necessary to initiate a neuronal degradation pathway. The proposed model sought to identify whether accelerations in American football could cause such detrimental strains. To test this, forces were applied in three directions: parallel to the axon, normal to the axon, and rotational about the axon, to account for the multiple orientations in which forces can act to cause neuronal strain.

Results from the different orientations with varying force magnitudes made it clear that stresses applied rotationally are the most detrimental and can cause a strain of 0.200 at an acceleration as low as 45g. Accelerations of 45g or greater occur in approximately 10% of the impacts observed in college football [2]. The resulting data from this model can be extrapolated to a larger scale to inform the design of head protection that also protects against shear forces.

Introduction

Traumatic brain injury (TBI) affects more than 2 million people in the United States each year and is associated with a high rate of morbidity [3]. These injuries can be caused by a variety of factors, including blunt trauma, penetrating injury, and concussive force. Many people place themselves in positions that increase the chance of such injuries, particularly those who play high-contact sports such as American football. Football is known to be a high-impact sport and, as a result, a large contributor to the 300,000 annual concussions reported for young adolescents in the United States [4]. However, primary injuries, such as cerebral contusions and hematomas, are not the only pathology associated with trauma; many patients suffer from a variety of secondary effects known as diffuse injuries that manifest hours, days, months, and years post-trauma. Secondary degradation arises from the inertial forces of rapid rotational and lateral motions of the head, which deform the white matter and lead to diffuse injury. The secondary effects can be as mild as a headache, dizziness, or nausea, or can be much more severe, such as the development of epilepsy or chronic traumatic encephalopathy (CTE). The vast majority of diffuse injuries evolve over time because of a series of deleterious cascades that include the activation of proteases, second messengers, mitochondrial failures, and other apoptosis pathways [5].

This time delay makes secondary injury detection very difficult because initial brain scans are unable to show the full extent of the injury. Diffuse axonal injury (DAI) has been pinpointed as the primary focus for secondary brain injury. Disruption of the axon is an important pathology in mild, moderate, and severe traumatic brain injury [5]. The effects of the high-impact collisions, especially head-to-head collisions, frequently witnessed in football games are mitigated by the use of a helmet as a form of head protection, yet the helmet does not fully protect players from TBI. The secondary degradation observed in athletes in high-contact sports poses a major risk because these athletes can initiate DAI and neuronal degradation pathways without detection. Computational models are indispensable tools for gathering neurobiological data non-invasively in order to better guide neural protection strategies for these athletes. Computational models of TBI have progressed from the macroscale to the microscale, from simulations of the whole brain to the neuronal initiation of DAI. When the brain experiences an impact, the force propagates through each individual cell of the brain, inducing both longitudinal and shear stresses upon each cell.




This in turn induces strain on the neurons, which initiates an apoptosis pathway, resulting in DAI and loss of neuronal functionality. The strain on the membrane creates a calcium influx into the cytoplasm, which leads to the phosphorylation of tau proteins and, ultimately, the aggregation of microtubules within the axon. Geddes and Cargill evaluated the dependence of intracellular calcium concentration on the magnitude of strain applied to neural cells: to obtain micromolar concentration fluctuations, a strain of 0.20 must be applied to the neuronal membrane [1]. These calcium fluctuations are significant enough to stimulate an axonal degradation pathway, as shown in Figure 1. Calcium fluctuations activate μ-calpain, an isoform of a calcium-dependent cysteine protease. Calpain activation stimulates the proteolytic cleavage of p35, a neuron-specific activator of cyclin-dependent kinase 5 (CDK5), turning it into p25 [6]. This conversion of p35 to p25 creates a more stable protein with a longer half-life. Therefore, the generation of p25 causes prolonged activation of CDK5 through the p25-CDK5 complex. This complex actively phosphorylates tau proteins, unlike the p35-CDK5 complex, and disrupts the normal regulation of the tau phosphorylation pathway [7]. Tau proteins are structural microtubule-binding proteins that localize in the axon of a neuron [8]. Hyperphosphorylation of tau proteins leads to an abnormal condensation of microtubules into paired helical bundles. This high association of tau proteins with microtubules has been correlated with DAI and other axonal injury diseases, such as Alzheimer's disease.

Figure 1: Biological pathway demonstrating the effect of a calcium influx on the phosphorylation of tau proteins, resulting in cytoskeletal disruption and neuronal death

It has been proposed that a change in the microtubule network via the association of phosphorylated tau proteins affects the transport of proteins and other intracellular components along the axon, which leads to a pathway that cleaves the axon [9]. Therefore, by applying accelerations that produce a strain of 0.20 on the axonal membrane, a calcium-dependent pathway leading to the hyperphosphorylation of tau proteins can be triggered, leading to loss of neuronal functionality. The current standard for preventing TBI in activities with a high likelihood of head impact, such as football, is the use of a protective helmet, typically composed of an outer plastic shell with cushioning foam on the inside. This strategy functions by cushioning and slowing the brain during impact, decreasing the longitudinal stress on the neurons. However, it does not take into account the shear stresses applied to the neurons, a major cause of TBI and DAI. In order to further develop preventative measures for TBI that address the issue of shear stress, it first has to be known what stresses, particularly shear stresses, induce these apoptosis pathways. This model attempts to find the forces necessary to induce these apoptosis pathways so that the results can be extrapolated to help guide preventative measures against TBI.
Methods
Model Design
We decided to model the neuron in three dimensions, as this is the most realistic way of approaching the design of the neuron. Our initial model included both the axon and the myelin sheaths, modeled as viscoelastic isotropic materials with properties found in the literature and recorded in Table 1. To make the model as true to reality as possible, we included the nodes of Ranvier, which we modeled as gaps between the cylindrical myelin sheaths coating the axon, with the ends of the myelin sheaths tapered into the nodes. After demonstrating that the strains are localized in the node, the model was reduced from the full axon model down to a single segment of axon to alleviate issues of meshing and computational time, as seen in Figure 2. No intra- or extracellular structures or dendrites were included, because we were most focused on the nodes of Ranvier and how the stress is concentrated in that particular region. Our primary goal was to determine whether the nodes of Ranvier are under the most stress and are most affected by a rotational force versus a longitudinal or normal force, so we only needed to model the basics: the axon, the myelin sheath, and the nodes.



After solving several models of this form, we decided to simplify our model to a single node, making solving in COMSOL more efficient.


Figure 2: Schematic demonstrating how the model was simplified after gaining appropriate justification

Force Applications
We modeled three different neuron orientations with respect to load to represent the primary ways in which a neuron may be affected in the head-on collisions often seen in football. Our models depicted a force along the longitudinal axis of the axon, a force normal to the axon, and a rotational force on the axon. Figure 3 shows a schematic of the three different force applications. We chose these three orientations because they most accurately depict the broadest generalizations of force application in head-to-head collisions. The magnitudes of the applied forces were determined by taking three characteristic accelerations observed in football and converting them using Newton's second law; the mass of an axon was estimated using our model's volume and a density taken from the literature. These forces were converted into stresses with units of N/m² by dividing each force by the surface area of the side of interest. Additionally, we applied a sinusoidal acceleration, in a similar fashion to how a force was applied normally to the axon, to test whether oscillations would affect the viscoelastic axon differently than direct forces. An appropriate frequency was found using the equation F = Af²m, where F is a force equivalent to a known acceleration, A is the amplitude of the oscillation, f is the frequency, and m is the mass of the object. We used a force equivalent to 50 N/m², which produced a 0.200 strain in the normal analysis, a given amplitude of 2 mm, and the mass of our proposed model to find an f of 138 Hz. This frequency was used to apply a sinusoidal acceleration, given by A·sin(2πft), normal to the axon's membrane. Of course, there are other ways in which the axon may be affected that are not explored here, including forces exerted on the neuron at an angle, but these three modes of force translation are the general ways in which a neuron may be affected.
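The arithmetic behind these conversions can be sketched in R as follows. The axon mass and face area below are hypothetical placeholders (the model's actual values are not given in the text); only the 2 mm amplitude, the 50 N/m² normal-case stress, and the relation F = Af²m come from the description above, so the printed numbers are illustrative rather than the model's inputs.

```r
# Sketch of the acceleration-to-stress and frequency conversions described above.
# axon_mass and face_area are hypothetical placeholders, not the model's values.
g_const   <- 9.81       # m/s^2, standard gravity
axon_mass <- 1e-12      # kg  (placeholder)
face_area <- 2e-11      # m^2 (placeholder)

# Newton's second law: characteristic football accelerations -> force -> stress
accel_g <- c(40, 80, 120)                  # accelerations in multiples of g
force   <- axon_mass * accel_g * g_const   # F = m * a
stress  <- force / face_area               # stress on the loaded side, N/m^2

# Oscillation frequency from F = A f^2 m, solved for f
# (the authors report f = 138 Hz for their model's mass and the 50 N/m^2 case)
A       <- 0.002                           # amplitude, 2 mm
F_equiv <- 50 * face_area                  # force corresponding to 50 N/m^2
f       <- sqrt(F_equiv / (A * axon_mass))

# Sinusoidal loading applied normal to the membrane, as in the text: A * sin(2*pi*f*t)
t   <- seq(0, 0.02, by = 1e-4)
a_t <- A * sin(2 * pi * f * t)
```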

Results
When first starting with the axon and a single node of Ranvier, applying a rotational force and a normal force to this model showed that the myelin sheaths undergo the greatest displacement, yet there was a disproportionate strain at the node. Figure 4 shows a slice plot confirming that the node undergoes the greatest stress. These results justify refining the model to focus only on the node, since it is the weakest portion of the neuron; reducing the model to a single node also allows for a more reasonable computational time. To identify which forces induced the greatest strain and DAI in the model, the reduced node model was subjected to three forces: a longitudinal force, a normal force, and a rotational force. In the longitudinal model, the force was applied to the cross-sectional face of the node.

Figure 3: Node model under longitudinal (left), normal (middle), and shear (right) stresses




When an acceleration of 120g, the maximum acceleration measured in intercollegiate football impacts, was applied to the model in a longitudinal fashion, it produced a strain of 0.129, as shown in Table 2, which is below the strain indicative of DAI. The strain distribution can be seen in Figure 5. Because this directional force could not produce a large enough calcium influx to cause secondary degradation, no further magnitudes of acceleration were tested. When acceleration was applied to the model in the x-y plane, normal to its side edge, it was found that an acceleration of 101.38g causes a 0.200 strain, the approximate threshold for inducing the DAI pathway. These values can be seen in Table 3, and the displacement and strain plot is shown in Figure 6. An acceleration of 101.3g corresponds to a 50% chance of sustaining a concussion. Therefore, the hundreds of thousands of players who sustain concussions each year are at risk of initiating this axonal degradation pathway and its long-term damage. For another model, acceleration was applied in the x-y plane in a rotational manner about the center axis. Four different values of linear acceleration were applied: 120g, 80g, 45.16g, and 40g. The stress ranged between 20.6 and 61.79 N/m², with maximum stress occurring at the maximum acceleration and minimum stress at the minimum acceleration, as expected. The displacement ranged between 1.512 × 10⁻⁷ m and 4.195 × 10⁻⁷ m, again with the maximum at the maximum acceleration and the minimum at the minimum acceleration. The values of strain ranged between 0.185 and 0.553, with the target value of 0.200 occurring at a linear acceleration of 45.156g. More detailed values are given in Table 4. Figure 7 shows the stress distribution when a rotational linear acceleration is applied to the node of Ranvier. The acceleration found to produce a strain of 0.200 in the normal direction was the acceleration used in the frequency analysis. This linear acceleration of 101.38g was applied to the nodular model as a sinusoidal oscillation, resulting in 50 N/m² of stress applied to the node. The total displacement was 1.056 × 10⁻⁸ m, resulting in a 0.166 strain. Figure 8 depicts the strain distribution on the nodular model resulting from normal stress applied in a sinusoidal manner.


Figure 4: (top) Axon model under rotational stress showing greatest displacement concentrated at myelin sheaths and a disproportionate displacement at the node of Ranvier. (middle) Axon model under normal stress showing maximum displacement at node of Ranvier. (bottom) Axon model under rotational force showing stress through slices along the axon, proving the highest stress is concentrated at the node of Ranvier.





Figure 5: Image of COMSOL node model showing strain distribution resulting from longitudinal stress application

Figure 6: Image of COMSOL node model showing strain distribution resulting from normal stress application





Figure 7: Image of COMSOL node model showing strain distribution resulting from rotational stress application

Figure 8: Image of COMSOL node model showing strain distribution resulting from a normal stress applied in a sinusoidal manner




Discussion
In the original simulation modeling an axon with multiple nodes of Ranvier, the stress concentrates at the nodes rather than dissipating equally along the axon when the stress is applied normal to the axon. Furthermore, when the stress was applied rotationally to the axon, the myelin sheath was observed to displace the most, and the part of the axon covered by the myelin sheath was relatively unharmed. This finding agrees with the fact that one of the primary purposes of the myelin sheath is to act as a protective insulator surrounding the axon. The shear modulus of the myelin sheath is one-third that of the axon, which shows that myelin is a soft, malleable coating that helps dissipate stress wherever it is present. Additionally, the stress distribution was observed at several cross-sections along the axon, and the node of Ranvier clearly experienced the greatest magnitude of stress. This observation allowed us to simplify our model further and focus solely on the weakest part of the axon: the node of Ranvier. The three simulations modeling a rotational force, a normal force, and a longitudinal force provided insight into what stresses are necessary to induce a 0.200 strain and, therefore, axonal degradation. The rotational model required a stress of only 22.27 N/m² to induce a 0.200 strain, whereas the normal model required a 50 N/m² stress, and the longitudinal model required a stress representative of an acceleration greater than those witnessed in collegiate football collisions. From these results, it is evident that rotational and shear forces on the axon cause damage at significantly lower stresses than forces applied linearly. Furthermore, the stresses we found necessary to induce the DAI pathway were translated into accelerations that the neuron would experience in a football-related collision. Translating the applied rotational stress gave a linear acceleration of approximately 45g necessary to initiate hyperphosphorylation of the tau proteins, while if the force is purely normal to the axon, the necessary acceleration is approximately 101g. Rowson et al. observed that 10% of the impacts in collegiate football were greater than 40g in severity [2]. This means that roughly 10% of the hits in football could potentially initiate calcium influxes that result in secondary degradation of neurons, even though the players might not experience concussions. To demonstrate that these results could be tested in vitro, the force necessary to create a 0.200 strain in the linear normal-force model was applied in a sinusoidal manner, which represents the brain's oscillations during the course of an impact. Applying this force sinusoidally resulted in a 0.166 strain, comparable to that generated by the direct force application.

This shows that these computational models could be replicated in vitro using a shaker plate model, with relatively similar results. Shear injuries, such as those commonly observed in football, cause more damage to the axon due to the rotational force component and thus are more likely to cause TBI. Compared to other directional forces, shear injuries have a significantly lower force threshold necessary to cause the same amount of strain. This makes hyperphosphorylation of tau protein induced by shear forces increasingly common. Because these shear injuries are more likely to induce a wide range of long-lasting effects on football players, including TBI, it is all the more important to improve current preventative techniques against shear injuries.
Conclusion
By applying a variety of stresses to a finite element model simulating an axon, important structural and mechanical properties were determined. Most importantly, the model demonstrated the ability of myelin sheaths to act as a protective border for the axon. This observation led to the finding that the nodes of Ranvier are subjected to the greatest stress, as they are the only part of the axon not protected by myelin. The model was then further simplified in accordance with this finding so that only the weakest part of the axon would be studied. This simplified model, representing a single node of Ranvier, established forces applied rotationally as the most detrimental compared to longitudinal and normal forces. This finding suggests that football players can be subjected to diffuse axonal injury at relatively low accelerations, not even indicative of concussions. The forces experienced in 10% of collegiate hits can induce a calcium influx into neurons large enough to induce hyperphosphorylation of tau proteins, which leads to secondary axonal degradation. Because of the danger of secondary degradation that such shear stresses can cause, it is recommended that helmets more effective at mitigating shear forces be developed and implemented for use in football. Additionally, this strain-induced pathway allows for secondary protection through chemical regulation of the tau pathway: by blocking the phosphorylation pathway for a limited amount of time, players would reduce the long-term risks of secondary degradation. By combining chemical and mechanical approaches, axonal degradation could be greatly reduced in high-impact sports, including football.




References
1. Geddes, D.M. and Cargill, R.S., An in vitro model of neural trauma: device characterization and calcium response to mechanical stretch, Journal of Biomechanical Engineering, 123, 247-255, 2001.
2. Rowson, S., Brolinson, G., Goforth, M., Dietter, D., and Duma, S., Linear and angular head acceleration measurements in collegiate football, Journal of Biomechanical Engineering, 131, 061016, 1-7, 2009.
3. Smith, D.H. and Meaney, D.F., Axonal damage in traumatic brain injury, The Neuroscientist, 6, 483-495, 2000.
4. McCaffrey, M.A., Mihalik, J.P., Crowell, D.H., Shields, E.W., and Guskiewicz, K.M., Measurement of head impacts in collegiate football players: clinical measures of concussion after high- and low-magnitude impacts, Neurosurgery, 6, 1236-1243, 2007.
5. Iwata, A., Stys, P.K., Wolf, J.A., Chen, X.H., Taylor, A.G., Meaney, D.F., and Smith, D.H., Traumatic axonal injury induces proteolytic cleavage of the voltage-gated sodium channels modulated by tetrodotoxin and protease inhibitors, 19, 4605-4613, 2004.
6. Lee, M., Kwon, Y.T., Li, M., Peng, J., Friedlander, R.M., and Tsai, L., Neurotoxicity induces cleavage of p35 to p25 by calpain, Nature, 405, 607-626, 2000.
7. Dhavan, R. and Tsai, L.H., A decade of CDK5, Nature Reviews, 2, 749-759, 2001.
8. Zemlan, F.P., Rosenberg, W.S., Luebbe, P.A., Campbell, T.A., Dean, G.E., Weiner, N.E., Cohen, J.A., Rudick, R.A., and Woo, D., Quantification of axonal damage in traumatic brain injury: affinity purification and characterization of cerebrospinal fluid tau proteins, Journal of Neurochemistry, 72, 741-750, 1999.
9. Stokin, G.B. and Goldstein, L.S.B., Axonal transport and Alzheimer's disease, Biochemistry, 75, 607-626, 2006.
10. Yu, M., Lourie, O., Dyer, M.J., Moloni, K., Kelly, T.F., and Ruoff, R.S., Strength and breaking mechanism of multiwalled carbon nanotubes under tensile load, Science, 287, 637-640, 2000.
11. Rapoport, L., Nepomnyashchy, O., Lapsker, I., Verdyan, A., Moshkovich, A., Feldman, Y., and Tenne, R., Behavior of fullerene-like WS2 nanoparticles under severe contact conditions, Wear, 259, 703-707, 2005.
12. Karami, G., Grundman, N., Abolfathi, N., Naik, A., and Ziejewski, M., A micromechanical hyperelastic modeling of brain white matter under large deformation, Mechanical Behavior of Biomedical Materials, 2, 243-254, 2009.
13. Allen, K.B., Sasoglu, F.M., and Layton, B.E., Cytoskeleton-membrane interactions in neuronal growth cones: a finite analysis study, Journal of Biomechanical Engineering, 131, 021006-020016, 2009.
14. Bernick, K.B., Prevost, T.P., Suresh, S., and Socrate, S., Biomechanics of single cortical neurons, Acta Biomaterialia, 7, 1210-1219, 2010.



