Science in Society Review - Spring 2011


Spring 2011 | UChicago

A Production of The Triple Helix

THE SCIENCE IN SOCIETY REVIEW
The International Journal of Science, Society and Law

The Neurological Impact of Illicit Stimulant Consumption
Cash for Kidneys: How Do We Solve the Kidney Shortage Problem?
Tackling the Politics of Football Brain Injuries
Humanizing Healthcare Technology?

When “Is” Meets “Ought”

ASU • Berkeley • Brown • Cambridge • CMU • Cornell • Dartmouth • Georgetown • Harvard • JHU • Northwestern • NUS • Penn • UChicago • UCL • UNC Chapel Hill • University of Melbourne • UCSD • Yale


EXECUTIVE MANAGEMENT TEAM

BOARD OF DIRECTORS

Chief Executive Officer Bharat Kilaru

Chairman Kevin Hwang

Executive Editor-in-Chief Dayan Li

Erwin Wang, Kalil Abdullah, Melissa Matarese, Joel Gabre, Manisha Bhattacharya, Julia Piper

Chief Production Officer Chikaodili Okaneme
Executive Director of E-Publishing Zain Pasha
Executive Director of Science Policy Karen Hong
Chief Operations Officer, North America Jennifer Ong
Chief Operations Officer, Europe Francesca Day
Chief Operations Officer, Asia Felix Chew
Chief Financial Officer Jim Snyder
Chief Marketing Officer Mounica Yanamandala

INTERNATIONAL STAFF

Senior Literary Editors: Dhruba Banerjee, Victoria Phan, Robert Qi, Linda Xia, Angela Yu
Senior Production Editors: Adam Esmail, Indra Ekmanis, Laura Tiedemann, Robert Tinkle, Sia Sin Wei, Jovian Yu
Senior E-Publishing Editors: Anna Collins, Jae Kwan Jang, Rahul Kishore, John Lee, Jacob Parzen

TRIPLE HELIX CHAPTERS

North America: Arizona State University, Brown University, Carnegie Mellon University, Cornell University, Dartmouth College, Georgetown University, Georgia Institute of Technology, Harvard University, Johns Hopkins University, Massachusetts Institute of Technology, Northwestern University, Ohio State University, University of California, Berkeley, University of California, San Diego, University of Chicago, University of North Carolina, Chapel Hill, University of Pennsylvania, Yale University

Europe: Cambridge University, University College London

Asia: National University of Singapore, Peking University, Hong Kong University

Australia: University of Melbourne, University of Sydney, Monash University

THE TRIPLE HELIX
A global forum for science in society

The Triple Helix, Inc. is the world’s largest completely student-run organization dedicated to taking an interdisciplinary approach toward evaluating the true impact of historical and modern advances in science.

Work with tomorrow’s leaders: Our international operations unite talented undergraduates with a drive for excellence at over 25 top universities around the world.

Imagine your readership: Bring fresh perspectives and your own analysis to our academic journal, The Science in Society Review, which publishes International Features across all of our chapters.

Reach our global audience: The E-publishing division showcases the latest in scientific breakthroughs and policy developments through editorials and multimedia presentations.

Catalyze change and shape the future: Our new Science Policy Division will engage students, academic institutions, public leaders, and the community in discussion and debate about the most pressing and complex issues that face our world today.

All of the students involved in The Triple Helix understand that the fast pace of scientific innovation only further underscores the importance of examining the ethical, economic, social, and legal implications of new ideas and technologies — only then can we completely understand how they will change our everyday lives, and perhaps even the norms of our society. Come join us!




TABLE OF CONTENTS

Cover Article
4 When “Is” Meets “Ought” (Marcus Moretti, Yale)

Highlights
11 Stimulants: Their Neurological Impacts
19 Nukes: How Many Is Too Many?
28 Football Hurts: The Effects of Football on the Brain

Local Articles (UChicago)
7 Cash for Kidneys: How Do We Solve the Kidney Shortage Problem? (Doni Bloomfield)
11 The Neurological Impact of Illicit Stimulant Consumption (Venkat Boddapati)
14 Critical Slowing Down and Early Warning Signs in Complex Systems
16 “If You Give an Atom a Photon…”: The Applications of Laser Spectroscopy and Ion Traps (Alexandra Carlson)
19 How Many Nukes Does A Superpower Need? (Evan WooSuk Choi)
21 The Origins of Human Communication
23 Curbing Obesity: What Role Can the Government Play?
26 The Problem with Cookie Cutters: Educating Children with Autism (Tae Yeon Kim)
28 Tackling the Politics of Football Brain Injuries (Sithara Kodali)
31 Humanizing Healthcare Technology?
34 The Weft of Unseen Garments: Epigenetic Considerations of Evolution and Disease in Society

Local article writers: Doni Bloomfield, Venkat Boddapati, Leland Bybee, Sadaf Ferdowsi, Matt Green, Kelly Regan, Navtej Singh

International Features
36 Plastic in Our Society
38 Preventing Bioterrorism: Regulation of Artificial Gene Synthesis Technology (Jyotsna Mullur, Brown)
41 The Science of Sleep in the Changing of the Work Shift (Linda Xia, Harvard)
43 The Path To The Bloodless Butcher: In Vitro Meat and Its Moral Implications
Also contributing: Will Feldman, Yale; Gregory Yanke, ASU

Cover design courtesy of Natalie Koh Ting Li, University of Melbourne



INSIDE TTH

STAFF AT UCHICAGO
Co-Presidents: Bharat Kilaru, Sean Mirski
Co-Vice Presidents: Benjamin Dauber, Tae Yeon Kim
Co-Editors in Chief: Kara Christensen, Jonathan Gutman
Managing Editor: Leland Bybee
E-Publishing Director: Jacob Parzen
Co-Directors of Science Policy: Michelle Schmitz, Jim Snyder
Writers: Navtej Singh, Leland Bybee, Kelly Regan, Evan Choi, Venkat Boddapati, Sadaf Ferdowsi, Matt Green, Doni Bloomfield, Sithara Kodali, Alexandra Carlson, Tae Yeon Kim

Message from the Editors-in-Chief and Vice-President

We are very pleased to present to you the Spring 2011 edition of The Science in Society Review. Each publishing cycle, we strive to find the most interesting and innovative collection of topics, and this group of articles is no exception. From football to lasers, from electronic records to nuclear weapons, the topics in this issue promise both to entertain and to educate a wide audience. The Triple Helix Executive Board would like to thank everyone who has taken part in the writing, editing, and publishing of this journal. As editorial staff, we are very fortunate to have such a talented group of individuals serving as our writers, editors, reviewers, and production team. The hard work and time they have dedicated to this journal are clearly evident in the finished product. Thank you.

Sincerely,
Jonathan Gutman, Kara Christensen, and Benjamin Dauber

Associate Editors: Marina Antillon, Andrew Kam, Neil Shah, Gregor-Fausto Siegmund, Gala Ades-Laurent, Alex Turzillo, Lakshmi Sundaresan, Jacob Alonzo, Luciana Steinert, Maya Lim, Indra Wechsberg, Michelle Schmitz, Sylwia Nowak

Local News

Faculty: Gene Kim, Blaine Griffen, Yoav Gilad, Saul Levmore, Richard Cook, Harriet de Wit, Gary Becker, David McNeill, Joseph Masco, Brian Boyd, Rob Mitchum, Cheng Chin

In the fall quarter, The Triple Helix (TTH) hosted the author of the book Naked Economics, Professor Charles Wheelan of the University of Chicago’s Harris School of Public Policy. Professor Wheelan’s lecture, titled “Naked Economics and The Dichotomy Between the Theory and Practice of Public Policy,” gave over 200 undergraduate students interested in economics, environmental studies, and healthcare a chance to learn about the differences between public policy theory and practice. Following the TTH principle of broadening the accessibility of science to a wide audience, the lecture covered a variety of topics, including rising healthcare costs both domestically and abroad, petroleum addiction, and the entitlement schemes currently being discussed by the U.S. government.

In the winter quarter, TTH hosted a lecture by Dr. Sabina Shaikh, a notable faculty member of the University of Chicago’s Environmental Studies and Public Policy departments. Dr. Shaikh’s lecture, “A Discussion of the Economics of Nature,” examined the logistics and challenges of placing a market price on environmental commodities, focusing on the impact of “greening” urban spaces, carbon credits, and national parks.

Most recently, UChicago’s TTH chapter sent seven delegates to TTH’s leadership summit, held in Washington D.C. in tandem with the American Association for the Advancement of Science (AAAS) annual conference from February 18th to 20th, 2011. The Chicago delegation consisted of a team of four international leaders, two AAAS poster presenters, and the chapter director of Science Policy. The poster presentations, given by Jacob Parzen, Tyler Lutz and Allan Zhang, all drew large crowds. UChicago’s three international staff members, Mounica Yanamandala, James Snyder, and Bharat Kilaru, presented various managerial strategies at the conference to help guide both new and old TTH chapters toward improving their programs.





Message from the CEO

In tandem with the American Association for the Advancement of Science (AAAS) conference in Washington D.C., The Triple Helix hosted its largest Leadership Summit yet, with close to a hundred students in attendance from nearly fifteen chapters across three continents. From professional speakers to an international poster competition, the experience was a paragon of the passion we demonstrate in every division of our young organization: from science policy events and freelance e-publishing work to the scholarly literary work ahead.

Before you look through The Science in Society Review issue awaiting you, I hope to share with you my insight into the level of work behind every word. The articles in the following pages are derived from an outstanding level of editorial and literary commitment. Each piece represents not only the work of the writer, but also the work of one-on-one associate editors, a highly effective editorial board, astute international senior literary editors, an impressive faculty review board, and an imaginative production staff that reinvents the journal every issue. As you read the following pieces, we hope you will come to appreciate the truly professional level of work that goes into every paragraph.

It is with that same dedication to improvement that every division of The Triple Helix makes progress every day. The last year has been a transitional one as we established a fresh new online presence, brought back divisions in finance and marketing, and set a whole new standard for internal communication. We have truly come a long way from the handful of students meeting in coffee shops not even four years ago. But we have so many more dreams for TTH ahead. We invite you as readers and supporters to come forward and develop new visions that will push us to the next level. The opportunity is upon us, and I hope that you will join us in our work towards a global forum for science in society.

Sincerely,
Bharat Kilaru
CEO, The Triple Helix, Inc.

Message from the CPO and EEiC

We are proud to present you with another edition of The Science in Society Review! Over the past year, the Literary and Production departments have taken great care to ensure that our science articles are published at their utmost quality. To achieve this, our staff members have invested a tremendous amount of time in editing and designing all the journals for publication. Realizing that coordination between Literary and Production throughout all steps of journal development is paramount for an efficient process, our departments collaborated with each other more than ever. Ultimately, our joint efforts have resulted in a publication that we feel best displays the talent of our student writers and the professionalism of our organization.

As both of us prepare to graduate this May, we hope that the upcoming EEiC and CPO continue to strengthen our departments’ teamwork and overall productivity. Although handling a large number of articles and journals each semester can be a challenge, we trust that by sharing our expertise and experiences with our staff, future Literary and Production leaders will be more than ready to create the next series of outstanding Science in Society Review publications. It was certainly a pleasure serving The Triple Helix during our undergraduate careers, and we hope that you enjoy reading the Spring 2011 issue. Please continue to participate in this important dialogue on the role of science in society, whether through The Triple Helix or beyond.

Sincerely,
Chikaodili Okaneme and Dayan Li
Chief Production Officer and Executive Editor-in-Chief




YALE

When “Is” Meets “Ought”
Marcus Moretti

The fifth anniversary of the Afghanistan War loomed when Sgt. 1st Class Jared Monti and his troops were fatally exposed to the enemy. The roaring blades of an American helicopter, sent to resupply Sgt. Monti’s unit near the Afghan-Pakistan border, roused lurking Taliban forces nearby. The ensuing storm of AK-47 rounds sent the unit scurrying behind the largest boulders within sight. After Sgt. Monti secured himself, he spotted one of his men downed and vulnerable. On his first two attempts to ditch his cover and rescue the downed comrade, he was thwarted by incoming fire. His third go was stopped by a grenade that exploded just close enough to kill him instantly.

At Sgt. Monti’s Medal of Honor ceremony three years later, President Obama, presenting the award for the first time, asked in his panegyric, “Do we really grasp the meaning of these values? Do we truly understand the nature of these virtues, to serve and to sacrifice?” [1]. For millennia, philosophers have wrestled with questions like these, which seek explanations for why acts like Sgt. Monti’s so imbrue the soul. But the discipline arguably making the most strides toward such explanations at present is not philosophy, but biology.

Evolutionary scientists are looking for evidence in favor of a Darwinian account for the repulsion we feel at perceiving human suffering. Some call this search the “science of morality” because it may ultimately usurp philosophy’s dominion over ethics. As you read this, experiments that will accelerate this transfer of power are nearing completion all over the globe.

The most common participants in these types of experiments are rhesus monkeys and chimpanzees, primates with whom we share up to 98% of our DNA and who are our closest animal relatives. Their behavior defies the old anthropocentric claim that other apes, unlike us, are incapable of ignoring their own interests in order to protect those of another ape. Our language manifests this presumed hierarchy of creatures: when we call people “animalistic,” “reptilian,” or “Neanderthals,” we mean that they are stupid and self-centered. Believers of said anthropocentric claim may be said to be, as the writer Douglas Foster put it, in “anthropodenial.” Their hierarchy has been discredited, and a new spate of experiments hammers what may be the final nails into its coffin.

Reproduced from [7]

In one experiment (Palagi et al., 2009), gelada baboons were shown images of conspecifics—members of the same species—yawning, in hopes of finding a tendency for lower primates to identify with conspecifics [2]. As the experimenters predicted, when these animals observed yawns, they yawned back. This mimicry evinces a tendency to self-identify with conspecifics, which is the precondition for empathetic relations.

Empathetic behavior in apes has indeed been well documented. In one study (Carter, J.D., 2008), a society of chimps was shown to be especially accommodating toward one of its members afflicted with cerebral palsy [3]. Not one of the disabled chimp’s neighbors exploited his deficiencies, which would be expected from truly solipsistic organisms. In fact, the group’s alpha male paid special attention to the needs of the disabled chimp and groomed him more gently than he did the others.

These studies revive the evolutionary theory of altruism known as the “group-selection” theory. Its guiding rule is that animal communities in which members are cooperators will win out over communities of selfish creatures. The theory was discredited in the 1970s by biologist Richard Dawkins, who posed a problem that he termed that of “subversion from within”: some members may enjoy the spoils of others’ sacrifices without sacrificing anything themselves. These free-riders have a fitness advantage over the cooperators because they would enjoy, for instance, the protection afforded by those who stand guard, but never risk injury or death to fend off predators themselves. It appears that the free-riders would then win out over cooperators under group-selection theory, and cooperation would not be selected for.

What may allow group-selection theory to overcome this worry is the observed tendency of primates to reciprocate negatively, that is, to take retributive action against free-riders. Primates may construct revenge systems, as they have been called in recent studies [4]. Premier primatologist Frans de Waal identifies partner selection as the chief method of enforcing the revenge system. If a partnered male ape reveals himself to be selfish and uncaring, his lady will promptly dump him. This also happens at the group level: if a member of the community is discovered to be free-riding, then the others may be unwilling to share their food with him.
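The free-rider worry, and the way retribution can neutralize it, can be made concrete with a toy replicator-dynamics model, sketched below in Python. This is an illustrative cartoon, not a model taken from the studies cited here, and every payoff number in it is an invented assumption.

# Toy replicator dynamics: cooperators vs. free-riders.
# All payoff values are invented for illustration; none come from
# the studies cited in the text.

def step(x, benefit=3.0, cost=1.0, punishment=0.0, dt=0.01):
    """One Euler step of the replicator equation dx/dt = x*(f_coop - f_avg),
    where x is the share of cooperators in the population."""
    f_coop = benefit * x - cost            # cooperators pay the cost of helping
    f_free = benefit * x - punishment * x  # free-riders enjoy the benefit, but are
                                           # punished more often when cooperators abound
    f_avg = x * f_coop + (1 - x) * f_free
    return x + dt * x * (f_coop - f_avg)

for punishment in (0.0, 2.5):
    x = 0.5                                # start with half the population cooperating
    for _ in range(20000):
        x = step(x, punishment=punishment)
    print(f"punishment={punishment}: cooperator share -> {x:.3f}")

With no punishment the cooperator share collapses toward zero, which is exactly the “subversion from within”; with even a modest retributive term the equilibrium flips and cooperators take over, the role the revenge systems described above would play.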


De Waal, in a recent paper that surveys developments in intra-group cooperation among primates, distinguishes between two types of evolutionary causes for action that are essential to group-selection theory: proximate and ultimate causes [5]. Proximate causes proliferate when the actions they entail immediately benefit the actor. An example of a proximate cause would be thirst. Quenching thirst requires the beast to hydrate itself, a biological necessity. Ultimate causes arise when an action helps the survival of the species, not just the organism, and may not provide any immediate benefits to the actor itself. Sex drive is a potent example of an ultimate cause. Humans mostly go at it in pursuit of the sensation of stimulation and (hopefully) orgasm. Sensual pleasure may be a proximate cause, but one’s body could go on without having sex. The broader survival of the species is what is at stake. Our sex drives are ultimate causes because they make sex something we all feel compelled to do, thereby ensuring Homo sapiens’ perpetuation. A lustful population is more likely to thrive and persist than a more subdued one, hence the often gratuitous human sex drive.

Evolutionary biologists have used this distinction between proximate and ultimate causes to explain altruistic acts. The selfless act—the donation of food or the forfeiture of one’s life to save another’s—can be explained by ultimate causes. The tendency for communities with intra-altruism to outcompete less selfless groups, ceteris paribus, allows the altruistic gene to proliferate. The short-term, proximal causes for such sacrifice, if they do exist, are often insufficient to overwhelm the grand loss. The larger force that keeps the population afloat explains the preponderance of behaviors whose personal costs are significantly higher than their personal benefits.

In light of biology’s growing encroachments onto moral territory, a large swath of philosophy may face redundancies. If we are programmed to feel certain ethical obligations and not others, and those obligations have contributed to our species’ success, then what is the use in trying to propose an alternate set of ethical principles? Of course cultures are free to decide on their own positive values (honor, intelligence, etc.), but if there are at base some discoverable, proscriptive maxims, why busy over another set?

Proponents of science’s annexation of moral territory cite the failure on the part of philosophers to reach a consensus on ethics after millennia of deliberation. But this debate is not one-sided. In a 2009 issue of Newsweek, science editor Sharon Begley launched a two-tiered attack against evolutionary psychology. She claimed first that the science was reprehensible in itself because it vindicates the “evolution made me do it” excuse [6]. Case in point: a 2000 book called A Natural History of Rape: Biological Bases of Sexual Coercion argued that rape is a naturally selected sexual strategy because it improves gene survival. Begley also attacked the field’s premise. Humans, she argued, never stayed in one environment long enough for psychological traits to compete in the epochal contest of natural selection, so what arose instead was a versatile mind that could adapt to inconstant circumstances.



The first of these attacks is not well-grounded. As Begley herself seems to admit in her article, an attempt to discredit a science because its findings depress us is simply bad science. A condition of genuine scientific inquiry is the willingness to accept hard facts if valid experiments uncover them. If evolutionary psychology finds that troubling beliefs and desires come inbuilt, so be it. Even so, that evolutionary psychology produces repugnant findings is less of a concern than Begley made it out to be. The book that gave rape an evolutionary defense was later disproven by other studies, which pointed out the overwhelming odds against the survival of genes that bestow proclivities toward sexual coercion. Just as one would not expect a community to look fondly on free-riders, one would not expect community members to help a known rapist find food.

The field’s most recent embarrassment was Harvard’s finding that biologist Marc Hauser had committed eight counts of scientific misconduct in his studies of primate morality. Hauser designed his experiments with the hope of identifying more human characteristics in primates, as several of his valid studies did before. His data-fudging was taken by some religious figures, philosophers, and even other scientists as an invalidation of the entire project. Many scientists, however, have come to the field’s defense. De Waal has pointed out that many areas of scientific study have had their fair shares of frauds, including chemistry and physics, but those anomalies did not shutter entire disciplines. Hauser’s misconduct was one unfortunate instance of malpractice and should not be taken to debunk the connection between evolution and morality. De Waal and other primatologists like Jane Goodall should not go unemployed because of one colleague’s misdoings.

Setting these procedural complaints aside, evolution can still only go so far to answer the biggest questions of human existence. As the Harvard psychologist Steven Pinker argues, if evolution were the only source of life’s purpose, a man would be wholly fulfilled spending his afternoons at the sperm bank. But this is not so. Even if one believes that sex drive is at root the motive behind all action—whether it’s reading philosophy or playing the violin—a human needs more than science to find his interests and purpose. We search for more than just reproductive success in life, so evolution could only illuminate our biological proclivities, not our particular hobbies and tastes.

To quarrel with this constraint on science, which holds it to providing knowledge about what is the case and not about what ought to be the case, would be to commit the notorious naturalistic fallacy. The fallacy holds that any statement about how things are cannot be used in an argument that concludes with a statement on how things should be. It highlights a salient gap between facts and obligations, and has been a trump card in moral philosophy since its 1903 articulation by the English philosopher G.E. Moore.

Yet even this hitherto axiomatic principle is under attack. Neuroscientist Sam Harris seeks to refute the naturalistic fallacy in his new book The Moral Landscape, in which he proposes a moral system founded on scientific understandings of well-being. Morality, he argues, is the study of how to improve well-being, so why not use experimentally verified links between behavior and well-being to write social rules? While Harris acknowledges that science has the bulk of its work in this area ahead of it, he insists that science must be what guides ethical discussions in the 21st century.

The viability of arguments like this and of the greater science of morality remains to be seen, but there are some present indicators of the direction of its course. If the innateness of our basic moral intuitions receives more and more evidentiary support—and if the present trend continues, it will—then the ancient conflict between evolutionary theory and religion may become more acute. Internationally renowned preacher Ravi Zacharias spoke at Yale recently and proselytized on behalf of Christ, citing him as the needle in Western civilization’s moral compass. Harris, a public atheist, takes this argument head-on and asks: why bother to pore through Christ’s teachings when moral standards are inbuilt? It is yet unclear how a complete set of morals could derive from evolutionary theory, but many scientists and philosophers nonetheless pray—or rather, hope—that a universal set of maxims will one day be found.

As the minds of our fellow apes are shown to have more and more in common with our own, the traditionally clear-cut distinction between man and ape will blur. Since Copernicus, science has consistently reduced the importance of human beings’ role in the universe. Progress will likely devalue our race further, but that could do us some good. Man may be the most sophisticated ape around, but he has a long way to go before he figures himself out. Conveying pithily this critical truth, the 19th-century philosopher Friedrich Nietzsche was prescient to let his Zarathustra muse, “Man is more ape than many of the apes.”

References

1. Elliott, Philip. “MoH ceremony honors Monti’s sacrifice.” Army Times, 20 Sep 2009. Accessed 29 Oct 2010. <http://www.armytimes.com/news/2009/09/ap_medal_of_honor_monti_091709/>.
2. Palagi, E. et al. “Contagious yawning in gelada baboons as a possible expression of empathy.” PNAS 106 (November 2009), 19262-7.
3. Carter, J.D. “A Longitudinal Study of Social Tolerance by Chimpanzees Towards a Conspecific with Cerebral Palsy.” International Primatological Society XXII (August 2008), presentation.
4. Jensen, K. “Punishment and spite, the dark side of cooperation.” Phil. Trans. R. Soc. B 365 (2010), 2723-2735.
5. de Waal, Frans B. M. & Malini Suchak. “Prosocial primates: selfish and unselfish motivations.” Phil. Trans. R. Soc. B 365 (2010), 2711-22.
6. Begley, Sharon. “Why Do We Rape, Kill, and Sleep Around?” Newsweek, 20 Jun 2009. Accessed 28 Oct 2010. <http://www.newsweek.com/2009/06/19/why-do-we-rape-kill-and-sleep-around.html>.
7. http://nsf.gov/news/mmg/media/images/chimp_health3_h.jpg


Marcus Moretti is a sophomore studying Humanities and Political Science at Yale University.



UCHICAGO

Cash for Kidneys: How Do We Solve the Kidney Shortage Problem?
Doni Bloomfield

Twelve patients filed into six operating rooms; nine surgical teams set to work, and ten hours later six people emerged with new, working kidneys [1]. Simultaneously transplanting six kidneys is an extraordinarily convoluted operation, and would only take place in the dismal context in which the United States finds itself: a desperate shortage of kidneys. The situation has degraded to such an extent that, as with the case above, patients and potential donors are resorting to bartering with their organs. Five of the six recipients had relatives who were immunologically unable to donate to them and, with the addition of an altruistic donor, were able to mix and match types so they all received kidneys. The operations were performed simultaneously so none of the parties could back out [1]. This extraordinary story is just one sign of our current medical crisis. The dire shortage of kidney donations leads to nearly 4,000 deaths annually in the US [2]. And that’s a low bound; some estimates reach as high as 9,000 deaths per year [3]. Currently over 86,000 people are waiting for a donation, but perhaps the most fearful statistic is the change in death rate for people on the waiting list, which rose 76% between 1998 and 2007 [4,5]. While the demand for kidneys grows daily, many economists and medical professionals are exploring innovative and controversial solutions, like the kidney exchange above, which may reduce some of these disturbing statistics. While many of these solutions appear to be at least somewhat effective, they carry a number of moral concerns that must also be addressed.

The first kidney transplant was performed in Boston in 1954. This new operation would, over the next half century, save thousands of lives. The introduction of immunosuppressants in the 1970s dramatically popularized the procedure [2]. The increasing safety of donation from live donors prompted a widespread fear that impoverished citizens from abroad would be shipped into the U.S. to have their organs snatched. In response, in 1984 Congress passed the National Organ Transplant Act, barring the sale of organs for any reason and permitting only altruistic donations [6].

Reproduced from [29]




The only way to receive an organ under the current system is from a living donor who decides to donate an organ without any compensation, or from a cadaver whose family decides to donate, also without compensation. As we have seen, demand has soared and supply stagnated. Since 1988 the number of dialysis-dependent individuals (those with severe kidney disease who are unable to find a transplant) has tripled [7].

The situation may appear bleak, but to an economist this is a clear problem of misaligned incentives. Currently the only incentive for unrelated donors to donate is pure altruism, which goes a long way in explaining why about 80% of living donors are related to the recipient and the declining numbers of cadaveric donations relative to living donations [2,3]. In the current U.S. system, which requires potential donors to opt in to allow their organs to be harvested after death, large swaths of the public neglect to do so, contributing to a situation in which less than one half of potential cadaveric donors donate [8].

To change the amount supplied we need to shift incentives for donating or remove potential barriers to altruism. The first place to look is within the current system, especially where there are inefficiencies that don’t require systematic change to correct. A suggestion currently being tested is the aforementioned kidney exchange. If two individuals each need a kidney, and each has a willing donor who is unable to donate to them for immunological reasons, the donors can swap kidneys by giving to the recipient with whom they are compatible, thereby prolonging their lives and removing them from the waiting list [9]. In order to prevent reneging on this exchange, all four individuals typically have surgery simultaneously. This kidney barter maintains the basic altruistic incentives while allowing more people to donate. Efforts are ongoing to create regional exchanges to allow for these trades to be set up, and some economists envision taking this to a new level, with long chains of donors organized around anonymous donors who have no preference to whom their kidney goes, allowing for non-simultaneous trades [10]. While this option has the fewest legal barriers, it remains uncertain how effective these exchanges, which thus far have been rare, are in mitigating the overall problem [12].
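Behind such exchanges sits a matching problem: treat each incompatible patient-donor pair as a node, connect two pairs whenever each donor could give to the other’s patient, and a round of two-way swaps is a matching on that graph. The Python sketch below is hypothetical: it checks ABO blood groups only, uses invented sample pairs, and takes a greedy pass, whereas real programs also screen tissue type and use maximum-matching or integer-programming solvers.

# Hypothetical sketch of two-way kidney paired exchange: pair i's donor
# gives to pair j's patient and vice versa. ABO compatibility only, and
# the pairs below are invented examples.

def abo_compatible(donor, patient):
    # Type O donates to anyone; type AB receives from anyone.
    return donor == "O" or patient == "AB" or donor == patient

# Each pair is (donor blood type, patient blood type); each is
# incompatible within itself, which is why it entered the pool.
pairs = [("A", "B"), ("B", "A"), ("A", "O"), ("B", "O")]

matched, swaps = set(), []
for i in range(len(pairs)):
    for j in range(i + 1, len(pairs)):
        if i in matched or j in matched:
            continue
        donor_i, patient_i = pairs[i]
        donor_j, patient_j = pairs[j]
        if abo_compatible(donor_i, patient_j) and abo_compatible(donor_j, patient_i):
            matched.update((i, j))   # both pairs leave the waiting pool
            swaps.append((i, j))

print(swaps)  # [(0, 1)]: the A/B and B/A pairs trade kidneys

Here only the first two pairs can trade; the two pairs whose patients are type O remain unmatched, since type O patients can receive only from type O donors, one reason the longer chains seeded by anonymous altruistic donors matter in practice.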
A number of countries worldwide have taken a larger-scale approach, turning the opt-in system on its head with a system known as presumed consent, whereby possible donors are assumed to permit donation unless they have specified otherwise [13]. In other words, any cadaver coming into the hospital with harvestable organs is presumed to consent to harvesting unless written documentation asserts otherwise. This form of incentivizing is known as liberal paternalism: systems need to be set up with some default assumptions and, the reasoning goes, it’s better to set up systems that lead to better outcomes overall [14]. On the other hand, some find this an overreach that removes personal autonomy and can lead to hostility to the healthcare system – like Brazil encountered when they recently attempted a system of presumed consent [15]. Empirical evidence suggests that, in the aggregate, presumed consent raises donation rates by about 30% [16].

Somewhere between these two options lies the novel Israeli approach, instituted in January 2010, wherein donors receive priority for future receipt of organs. If an individual donates an organ, for instance a kidney, they are automatically placed higher on the waiting list if they should need to receive an organ themselves in the future. No money changes hands, but an added incentive is thrown in to boost supply [17]. An effort to enact similar legislation is underway in the U.S. [3]. While this system of incentives may help alleviate some of the pressure on the waiting list, some experts believe that any deviation from a strict needs-based approach leads to undesirable outcomes, as sick patients “may end up being pushed to the back of the queue” in favor of those who have donated [18].

While all of these remedies may depart from traditional altruistic donations, they do not radically change the basic system: no money changes hands between the donor and the recipient, and the only form of compensation comes through barter of organs. However, because these solutions do not completely bridge the supply-demand gap, some economists suggest a more radical departure: an explicit monetary market for organs. How this market would work is highly contested, with views ranging from almost completely laissez-faire to a single payer who would then distribute the kidneys based on need [19,20]. The essential point of the market, though, is that monetary compensation would be offered as an incentive to donate. In general terms a marketplace would target two groups: those who would donate upon death and those who would, because they have two viable kidneys, donate while alive. The first group would likely be incentivized by payments, paid while the individual is living, to permit harvesting upon death if the organs were to prove viable [6]. Alternatively, heirs would be paid if they agreed to allow donations from their already deceased relations [3].

Because there is greater demand for kidneys than there are harvestable organs from cadavers, we need living donors to make up the slack [2]. Moreover, organs from living donors lead to better long-term health than do cadaverous organs [20]. In a market system, living donors would be paid for their lost wages, the risks associated with the surgery, and potential hampering of their future living standards. Recent leaps in transplantation have reduced the risks from surgery to around a 0.03% mortality rate and minimized long-term declines in the living standard of the donor [18]. Given these advances, economists Becker and Elias estimate the price of a kidney to be between $7,700 and $27,700, with more confidence in the lower value [2]. Because this would only constitute a 12% increase in the total cost of surgery, the increased supply would well exceed the decrease in demand caused by the price bump [2].




Before enacting such a major change, we want to be confident in the efficacy of a market system. Fortunately, there is a large case study: the country of Iran. As the only country in the world that permits the sale of kidneys, Iran is also the only country with a surplus of kidneys: there are no major shortages [7,21]. The market, at least in theory, is a highly regulated and hierarchical system that attempts to decouple potential conflicts of interest. The process begins with the medical centers that identify viable sellers; these are staffed by volunteers and not compensated by the number of sellers they produce. To ensure the health of the seller, a doctor with the right to veto the procedure conducts a second examination. If the seller is approved, a fixed sum is paid to him or her by the government, with the rest being made up by the recipient or, if they are unable to afford it, a number of official charities [7].

This system is a useful proxy for observing the good and ill of legalizing financial incentives for donors. The benefits came over a decade, as Iran gradually cleared all shortages in kidneys, the only country worldwide to achieve this feat [7]. This is the primary aim of any kidney system, but there are major concerns about how the market operates. For one, the Iranian system may not be as efficiently regulated as it appears; those who cannot afford kidneys are sometimes not subsidized by charities and must wait for a cadaverous organ to become available [21]. Moreover, data suggests that most sellers are young men in extreme poverty, many of whom eventually regret having sold their kidneys [21]. This is coupled with some very troubling long-term health problems that sellers face [7]. These problems may be endemic to kidney markets in general, but relative to more advanced economies the negative long-term health effects seem to be a particular Iranian phenomenon.

If a market in organs is highly successful at preventing deaths, as the Iranian system appears to demonstrate, it raises the question of why such a system, or a similar one, hasn’t been instituted elsewhere. There are two main responses. The first, as posited by economist Gary Becker, is that several groups have an interest in maintaining the status quo ban on selling kidneys. He proposes several possible groups, among them insurance companies, which under a market system would have to pay for many new kidney transplants, a cost that would in turn include the additional expense of compensating the donor [22]. Another possible interest group is surgeons who have specialized in rapid-response transplants, a field which would be of much smaller relevance in an Iranian-type system under which transplants could be scheduled in advance.

The second answer, proposed by economist Alvin Roth, is that there is a widely held repugnance towards compensating organ donors, much as people find raising prices in the face of a natural disaster distasteful, and much as many reacted against the introduction of an all-volunteer army [22,23]. Repugnance seems to spring from a mingling of activities normally considered altruistic or familial, such as giving supplies during a disaster or sexual relations, with those considered commercial, like cash transfers. This may in part explain why price gouging and prostitution are both illegal in much of the world [23]. The same idea would seem to lie at the root of distaste for introducing monetary compensation into what might otherwise be considered an altruistic activity, the donation of a kidney. These two explanations, of course, are not mutually exclusive: interest groups may take advantage of moral intuitions to promote their own agenda.

Reproduced from [13]




This intuition of repugnance is not the only source of moral objection to a market in organs. There are two types of moral problems: first, what might be considered the intellectual support for repugnance, categorical moral problems with the very act of selling an organ as dehumanizing; and second, unfair or distasteful outcomes that result from allowing organs to be sold. Often these two are confused, and it is best to separate them at the outset.

Many argue that we find paying for organs repugnant because it is an act that reduces human dignity. This argument runs that by selling the organs that constitute our bodies we are engaging in commodification of our bodies, which is destructive to human dignity [25]. By parceling out our organs, we are implicitly saying that the human being is just another good that can be bought and sold, akin to slavery [25]. Opponents of this view counter that we are currently operating markets in human reproductive material, apparently without major negative consequences, and that people trade parts of themselves when they work for a wage [26,18]. Still, if one considers the human body sacrosanct, a market in organs might be morally untenable.

Others believe that while selling a kidney per se is not immoral, the consequences of a market would be unfair and exploitative. We have already seen that the poor do donate disproportionately in Iran, and quite possibly this would occur elsewhere. A free market in a more developed country, however, might lean towards healthy, less impoverished individuals, as has happened in the case of the Army, which, despite expectations to the contrary, has received higher quality recruits as a volunteer force than it did under the draft, and actually gets a higher proportion of its soldiers from the highest-earning quintiles than from the lowest [2,27,28]. Moreover, other markets offer less pleasant jobs to poorer individuals because they have fewer opportunities; this is on the whole a good thing for the poor, precisely because they lack many alternatives. Likewise, the large cash boost after selling a kidney could allow a poorer individual to buy a new home, invest in a college education, or help a relative pay for necessary surgery, among many praiseworthy reasons to accept the tradeoff [25]. We allow individuals to accept compensation for risky activities, such as logging and fishing, because social welfare is on balance benefited. The wages in these occupations are commensurately higher than in similarly skilled but less dangerous occupations; likewise, here some risks would be accepted in return for payment. It appears that while there are fairness objections, many could be overcome with education, a required cooling-off period before donation, and a single buyer [2,6].

Before us is a very real tragedy, one that confronts us with powerful moral difficulties. To bridge the current kidney shortage we can work within the system, whether by switching to presumed consent, crafting sophisticated swaps, or giving donors priority for organs. While these may all be productive, they seem ultimately insufficient to clear waiting lists, which leaves us with the contentious possibility of a financial market in kidneys. Economic analysis suggests that this would be very effective at eliminating shortages, potentially saving thousands of lives, and possibly redounding to the benefit of the sellers. Ultimately, however, economics cannot inform us as to the morality of this decision. We can arbitrate the facts by economic analysis; the moral truth lies in the vast realm of human subjectivity.

References

1. Rose D. Six-way kidney transplant goes ahead as volunteer donor steps in. The Times 2008 April 10.
2. Becker GS, Elias JJ. Introducing Incentives in the Market for Live and Cadaveric Organ Donations. J of Economic Perspectives 2007; 21(3):3-24.
3. Tabarrok A. Life-Saving Incentives: Consequences, Costs and Solutions to the Organ Shortage. Library of Economics and Liberty [homepage on the Internet]. 2009 [cited 2010 Nov 1]. Available at: http://www.econlib.org/library/Columns/y2009/Tabarroklifesaving.html
4. Organ Procurement and Transplantation Network. Current U.S. Waiting List, Overall By Organ. [homepage on the Internet]. 2010 [cited 2010 Nov 1]. Available from: U.S. Government, Health Resources and Services Administration, Department of Health and Human Services Web site: http://optn.transplant.hrsa.gov/latestData/rptData.asp
5. Organ Procurement and Transplantation Network. Annual Report of the U.S. Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients: Transplant Data 1998-2007. [homepage on the Internet]. 2008 [cited 2010 Nov 1]. Available from: U.S. Government, U.S. Department of Health and Human Services, Health Resources and Services Administration, Healthcare Systems Bureau, Division of Transplantation Web site: http://optn.transplant.hrsa.gov/ar2008/chapter_iii_AR_cd.htm?cp=4
6. Schwindt J, Vining AR. Proposal for a Future Delivery Market for Transplant Organs. J of Health Politics, Policy and Law 1986; 11(3):483-500.
7. Hippen BE. Organ Sales and Moral Travails: Lessons from the Living Kidney Vendor Program in Iran. Cato Institute’s Policy Analysis 2008; (614):1-20.
8. Sheehy E, Conrad SL, Brigham LE, Luskin R, Weber P, Eakin M, et al. Estimating the Number of Potential Organ Donors in the United States. N Engl J Med 2003; 349(7):667-74.
9. Roth AE, Sonmez T, Unver MU. Kidney Exchange. Qrtly J of Econ 2004; 119(2):457-488.
10. Roth AE. What Have We Learned From Market Design? The Economic Journal 2008; 118:285–310.
11. Rees MA, Kopke JE, Pelletier RP, Segev DL, Rutter ME, Fabrega AJ, et al. A Nonsimultaneous, Extended, Altruistic-Donor Chain. N Engl J Med 2009; 360:1096-101.
12. Elias JJ, Roth AE. A Market for Kidneys? Wall Street Journal Online 2007 Nov 7.
13. Abadie A, Gay S. The impact of presumed consent legislation on cadaveric organ donation: A cross-country study. J of Health Econ 2006; 25:599–620.
14. Sunstein CR, Thaler RH. Libertarian Paternalism is Not an Oxymoron. AEI-Brookings Joint Center for Regulatory Studies at The University of Chicago Law School 2003; 1-43.
15. English V, Wright L. Is presumed consent the answer to organ shortages? BMJ 2007; 334:1088-89.
16. Rithalia A, Mcdaid C, Suekarran S, Myers L, Sowden A. Impact of presumed consent for organ donation on donation rates: a systematic review. BMJ 2009; 338:1-8.
17. Lavee J, Ashkenazi T, Gurman G, Steinberg D. A new law for allocation of donor organs in Israel. The Lancet 2009; 375(9772):1131–33.
18. Brimelow A. Israeli organ donors to get transplant priority. BBC 2009 Dec 17.
19. Epstein RA. The Human and Economic Dimensions of Altruism: The Case of Organ Transplantation. John M. Olin Law & Economics Working Paper No. 385, 2008; 1-48.
20. Richards JR, Erin CA, Harris J. Commentary. An ethical market in human organs. J Med Ethics 2008; 139-40.
21. Griffen A. Kidneys on Demand. BMJ 2007; 334:502-505.
22. Becker G. Interviewed by: Bloomfield D. 22 November 2010.
23. Roth AE. Repugnance as a Constraint on Markets. J of Econ Perspectives 2007; 21(3):37-58.
24. Yandle B. Bootleggers and Baptists in Retrospect. Regulation 1999; 22(3).
25. Richards JR. Nepharious Goings On: Kidney Sales and Moral Argument. J of Med and Philosophy 1996; 21:375-416.
26. Resnick DB. Regulating the Market for Human Eggs. Bioethics 2001; 15(1):1-25.
27. Wartner TJ, Asch BJ. The Record and Prospects of the All-Volunteer Military in the United States. J of Econ Perspectives 2001; 15(2):169-192.
28. Watkins SJ, Sherk J. Who Serves in the U.S. Military? Demographic Characteristics of Enlisted Troops and Officers. Center for Data Analysis at the Heritage Foundation 2008; 1-29. Available at: http://www.heritage.org/research/reports/2008/08/whoserves-in-the-us-military-the-demographics-of-enlisted-troops-and-officers
29. http://optn.transplant.hrsa.gov/ar2008/chapter_iii_AR_cd.htm?cp=4
30. http://www.maricopa.gov/safety/tips/images/aug03surgery.JPG


Doni Bloomfield is a freshman studying Economics and History at the University of Chicago.



UCHICAGO

The Neurological Impact of Stimulant Consumption and Abuse
Venkat Boddapati

As far back as 3,000 BC, humans have widely consumed Chinese tea, and more recently Arabic coffee and tobacco products [1]. With the development of modern chemical and biochemical methods, it has been determined that the active ingredients in these products are naturally occurring stimulants such as caffeine and nicotine. More recently, however, research has shown that many individuals are consuming a newer class of stimulant – synthetically produced amphetamines – in order to deal with the pressures of modern life. This trend is especially prominent on college campuses, where students must deal with an ever-increasing amount of stress, work, and competition. Nowadays, the student who maintains a perfect grade point average, actively participates in a range of extracurricular activities, and holds down a job is no longer extraordinary but is merely keeping up with his peers. The prevalence and social acceptance of prescription stimulants has increased tremendously in recent years, leading a larger number of college students to turn to these drugs. However, many students do not realize how much more powerful and dangerous this new generation of stimulants is. As a result, many students frequently – and perhaps unconsciously – cross the fine line between occasional amphetamine misuse and abuse, and in the process an amphetamine addiction may form.

The History of Amphetamine

Since its synthesis in 1887, amphetamine has had a variety of social and medical uses.

Reproduced from [18]


It was not until 1932 that amphetamine was first used medically, in an inhaler to treat asthma. Since there was no regulation of these inhalers, amphetamine use and abuse began to escalate. In 1946, the physician William Bett catalogued thirty-nine medical uses for amphetamines, including treatment for a wide range of conditions such as epilepsy, fatigue, depression, caffeine addiction, migraines, seasickness, and obesity [4]. However, Congress began regulating amphetamines because of their high potential for abuse with the Controlled Substances Act in 1970, making it illegal to possess amphetamines without a doctor’s prescription [5]. Currently, amphetamines are primarily used to treat attention deficit hyperactivity disorder (ADHD) and narcolepsy.

The Growth of American Stimulant Consumption

In spite of stringent legal regulation, American amphetamine consumption has continued to increase in recent decades. This is particularly surprising because amphetamines are now prescribed for only a small fraction of the thirty-nine possible medicinal applications suggested by Bett. In recent years psychiatrists have been increasingly prescribing drugs such as Adderall, Vyvanse, Ritalin, and Concerta. These drugs are composed primarily of varying proportions of mixed amphetamine salts or methylphenidate, a less potent compound that is structurally similar to amphetamine [6]. According to an article in Pediatrics regarding the abuse of ADHD medications, the number of stimulants – both pure amphetamine and methylphenidate based – prescribed to 3 to 19 year olds increased by 80%, from 6,566,546 to 11,812,041, between 1998 and 2005 [7]. Prescriptions thus nearly doubled in seven years while the United States population grew only modestly, which indicates that either psychiatrists have become more effective at detecting and diagnosing ADHD or the number of individuals seeking prescription stimulants has increased. The large increase in prescriptions also indicates that these regulated, potentially dangerous drugs are becoming more socially acceptable and embraced by a substantial portion of American youth.
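As a quick check on that growth figure, the totals reported in the study imply

\[
\frac{11{,}812{,}041 - 6{,}566{,}546}{6{,}566{,}546} \approx 0.80,
\]

an increase of roughly 80%, in line with the parallel “sales increases (80%)” figure the same study reports.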


This increase in the number of stimulant prescriptions has also been correlated with an increase in stimulant abuse. In addition to monitoring the rise in stimulant prescriptions, the Pediatrics article also monitored the number of calls to the American Association of Poison Control Centers from adolescent prescription ADHD abusers from 1998 to 2005. The results found that there was “a rising problem with abuse of [ADHD stimulant] medication,” and also that “case severity increased over time [and] call-volume increases (76%) parallel sales increases (80%).” Since 2005, prescription stimulant sales have increased further and the companies that produce these drugs are recording huge profits. For example, Shire, the producer of Adderall and Vyvanse, reported a 32% increase in sales – nearly 800 million dollars – between July and August 2010, and noted that its plan to keep growing sales at a mid-teens rate between 2009 and 2015 remains an important objective of the company [8]. As sales continue to grow into the near future, the magnitude of prescription stimulant abuse will likely grow proportionally with them.

Illegal Prescription Stimulant Consumption by College Students

Many individuals who are legally prescribed stimulants have found a market in which to illegally sell their drugs on college campuses. A 2005 study of 9,161 undergraduate students found that “over 90% of non-medical users of prescription stimulants who reported a source indicated they obtained prescription stimulants from peers and friends,” and in another study 11% of individuals prescribed stimulants reported selling their drugs [9,10]. With the relatively large and growing percentage of Americans prescribed stimulants, and the fact that even a fraction of these individuals are willing to sell their prescription to others, it is not especially difficult to obtain these drugs illegally. In fact, a 2005 survey of college students from a large, anonymous Midwestern university found that 8.3% and 5.9% of students had illegally used prescription stimulants in their lifetime and in the past year, respectively [11]. As the number of individuals who are willing to sell their prescribed drugs increases, and as new retail sources, such as websites specializing in prescription drug sales, emerge, students will be able to purchase these drugs illegally with greater ease.

The Initial Neurological Impact of Amphetamines

Prescription stimulants are often consumed for their ability to improve one’s mood and productivity by changing the natural regulation of the neurotransmitters dopamine, serotonin, and norepinephrine. Neurotransmitters are chemicals stored in small sacs, known as vesicles, and released from a presynaptic neuron across an extracellular space known as the synaptic cleft. After the neurotransmitters have crossed the synaptic cleft, they bind with receptor molecules on a postsynaptic neuron and in doing so cause this neuron to be activated in some way. Once the neurotransmitter has activated the postsynaptic neuron, it is either withdrawn back into the presynaptic neuron in a process known as reuptake or metabolically broken down.


Dopamine is one of the most widely studied neurotransmitters, and scientists believe that many of the effects of amphetamines are caused by changes in the release and reuptake of this chemical in the brain’s dopamine system. The dopamine system is composed of pathways that connect distant parts of the brain and is often referred to as the reward system because it is associated with motivation, memory, and euphoria [12,13]. It is believed that, because amphetamine and dopamine have very similar chemical structures, amphetamine is able to enter the presynaptic neuron and interact with the vesicular monoamine transporter 2 (VMAT2), the transporter that packages dopamine into vesicles. Once this happens, dopamine is released from the vesicles, and proteins known as dopamine active transporters (DATs) pump it into the synaptic cleft, where it is able to activate the postsynaptic neuron and propagate its chemical signal. In the presence of an amphetamine, the dopamine reuptake process is also delayed, allowing the neurotransmitter to remain in the synaptic cleft for a longer period of time [14]. As a result, amphetamine not only increases the concentration of dopamine released, it also prolongs the dopamine signal by delaying reuptake. This elevation of dopamine in the reward pathway generates temporary euphoria and energy and also enhances neurological functions such as memory, attention, and problem solving.
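The two effects described here, together with the rebound discussed in the final section, can be illustrated with a toy model of synaptic dopamine: release minus reuptake, plus a slow compensatory signal that builds while dopamine is elevated and throttles release (a stand-in for the dynorphin feedback explained below). This Python sketch is a cartoon for intuition only; every rate constant is invented, and it is not a quantitative pharmacological model.

# Toy model of synaptic dopamine D: dD/dt = release/(1 + F) - reuptake*D,
# where F is a slow compensatory signal that accumulates while D sits
# above baseline (a stand-in for the dynorphin feedback described in the
# final section). Amphetamine is modeled, per the mechanism above, as
# more release plus slower reuptake. All constants are invented.

def simulate(phases, dt=0.01):
    """phases: list of (duration, release, reuptake) tuples."""
    d, f, trace = 1.0, 0.0, []
    for duration, release, reuptake in phases:
        for _ in range(int(duration / dt)):
            f += dt * 0.05 * (max(d - 1.0, 0.0) - f)        # slow build-up and decay
            d += dt * (release / (1.0 + f) - reuptake * d)  # feedback throttles release
            trace.append(d)
    return trace

trace = simulate([
    (20.0, 1.0, 1.0),   # baseline: D settles at 1.0
    (20.0, 3.0, 0.4),   # amphetamine: more release, slower reuptake -> spike
    (40.0, 1.0, 1.0),   # drug cleared: leftover F still suppresses release
])
after_drug = trace[4000:]            # samples from the post-drug phase
print(max(trace), min(after_drug))   # several-fold spike, then a dip below 1.0

In this toy model dopamine spikes several-fold while the drug is active; once it clears, the leftover compensatory signal keeps release suppressed, so dopamine undershoots baseline before slowly recovering, and a second dose taken during the undershoot starts from a throttled release rate and yields a smaller peak, the pattern of tolerance described below.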

[Prescription stimulants] may seem like a miracle drug for college students; however, the initial neurological impact does not last forever

Reproduced from [20]



As a result, amphetamine users are able to circumvent basic human necessities, functioning for days without sleeping and studying productively for hours without stopping. Many college students take prescription stimulants to obtain these initial neurological enhancements in order to deal more productively with situations that would normally be particularly stressful or difficult, such as studying for a final exam. This may seem like a miracle drug for college students; however, the initial neurological impact does not last forever.

The Neurological Impact After Amphetamine Has Left the Body

The initial spike in dopamine levels in the brain's reward system causes intense euphoria; however, the brain takes measures to counteract this unnatural state. One way the brain counteracts the increased levels of dopamine is by restricting further dopamine release. This process is initiated by the activation of cAMP response element-binding (CREB) proteins, which cause other proteins, such as dynorphin, to be produced [15]. Dynorphin is stored in vesicles in the presynaptic neuron, similar to dopamine, though the dynorphin vesicles are larger and denser. Dynorphin binds to κ-opioid receptors (KOR) on the presynaptic neuron, which in turn decreases the amount of dopamine leaving the neuron [16]. When amphetamine leaves the body, excessive amounts of dopamine are no longer released, yet proteins such as dynorphin are still limiting dopamine release. Individuals are therefore left with low levels of synaptic dopamine, which can produce feelings of fatigue, depression, and a lack of motivation. To combat these negative feelings, individuals will often take more amphetamine as the original dose wears off. Doing so causes another spike in dopamine levels, but dynorphin continues to bind to KOR and suppress synaptic dopamine, so when the most recent dose wears off, dopamine levels in the reward system will be even lower than before. The body's attempt to compensate for excessive drug-induced dopamine plays a role in the development of drug tolerance, dependence, and addiction.
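This rebound below baseline falls out of the toy model above once a slow, dynorphin-like feedback variable is added. The sketch below is again purely illustrative: y builds up while dopamine is elevated, suppresses release, and decays slowly, so dopamine undershoots its resting level after the drug is gone.

```python
# Toy model of the post-dose "crash". Dopamine D relaxes quickly; the
# dynorphin-like signal y builds and decays slowly. Parameters invented.
import numpy as np

dt, t_end, t_dose = 0.01, 200.0, 100.0
t = np.arange(0, t_end, dt)
D, y = np.zeros_like(t), np.zeros_like(t)
for i in range(1, len(t)):
    amp = 1.0 if t[i] < t_dose else 0.0          # drug present, then gone
    release = (1 + 2 * amp) / (1 + y[i-1])       # y curbs dopamine release
    D[i] = D[i-1] + dt * (release - D[i-1])      # reuptake rate fixed at 1
    y[i] = y[i-1] + dt * (0.05 * D[i-1] - 0.02 * y[i-1])  # slow feedback

print(f"resting dopamine is ~1.0; shortly after the dose ends it is "
      f"{D[int(110.0 / dt)]:.2f}")
```

Because y outlives the drug, the model reproduces both tolerance (the drugged level creeps back down even while the drug is present) and the crash (dopamine well below baseline once it is gone).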


The Long-Term Neurological Impact of Amphetamines After Repeated Usage

Individuals who frequently consume large amounts of amphetamines over an extended period are likely to develop an amphetamine addiction and to permanently alter their brain chemistry. Initially, the brain counteracts drug-induced rises in dopamine by blocking dopamine release or by reducing the number of dopamine receptors on postsynaptic neurons within the reward pathway [17]. As a result, dopamine no longer plays a significant role in the reward pathway, and without any drug to increase dopamine levels, individuals will feel depressed and no longer receive pleasure from previously enjoyable activities.

In addition to the dopamine pathways, other regions of the brain are significantly altered by the repeated consumption of large amounts of amphetamine. The frontal cortex, amygdala, and hippocampus communicate with regions of the reward pathway through the neurotransmitter glutamate. Increased dopamine levels associated with amphetamine use change the reward pathway's responsiveness to glutamate, and this change in responsiveness increases dopamine release [18]. To counteract this, CREB proteins are continuously activated, reducing both dopamine release and the number of dopamine receptors. As drug addiction forms, changes in the reward pathway and in its interactions with other regions of the brain cause the addict to impulsively seek and take drugs, to a point that demonstrates a lack of self-control and of concern for physical and mental health.

Stimulants are being illegally consumed by a sizeable segment of students on many college campuses; however, many of these students fail to understand the potential long-term impact of their choice. Although there are high demands placed upon modern students, illegally consuming stimulants to handle these pressures is not advisable. Perhaps the most effective method to thrive in college is simply the traditional route: hard work and diligence.

Venkat Boddapati is a sophomore studying Biological Sciences at the University of Chicago.
References

1. History of Tea and Coffee. 2008 Jul [cited 2011 Nov 11]; Available from: http://www.teaandcoffeemuseum.co.uk/.
2. Doweiko H. Concepts of Chemical Dependency. 5th ed. Brooks Cole; 2001.
3. Emonson DL, Vanderbeek RD. The use of amphetamines in U.S. Air Force tactical operations during Desert Shield and Storm. Aviat Space Environ Med. 1995 Mar;66(3):260-3.
4. Bett WR. Benzedrine sulphate in clinical medicine; a survey of the literature. Postgrad Med J. 1946 Aug;22:205-18.
5. Brucker TM. The practical guide to the Controlled Drugs and Substances Act. 3rd ed. Toronto, Ont.: Carswell; 2002.
6. Stimulant ADHD Medications: Methylphenidate and Amphetamines. National Institute on Drug Abuse; 2009 Jun [cited 2010 Nov 11]; Available from: http://drugabuse.gov/infofacts/ADHD.html.
7. Setlik J, Bond GR, Ho M. Adolescent prescription ADHD medication abuse is rising along with prescriptions for these medications. Pediatrics. 2009 Sep;124(3):875-80.
8. Continued excellent performance in Q3. Full year earnings expectations increased. New value in pipeline emerging. Shire; 2010 Oct 29 [cited 2010 Nov 11]; Available from: http://www.shire.com/shireplc/en/investors/.../irshirenews?id=421.
9. McCabe SE, Knight JR, Teter CJ, Wechsler H. Non-medical use of prescription stimulants among US college students: prevalence and correlates from a national survey. Addiction. 2005 Jan;100(1):96-106.
10. Mick E, Fried R, Wilens T. ADHD and Co-Occurring Substance Use Disorders. American Society of Addiction Medicine; 2007 Oct 1.
11. Teter CJ, McCabe SE, LaGrange K, Cranford JA, Boyd CJ. Illicit use of specific prescription stimulants among college students: prevalence, motives, and routes of administration. Pharmacotherapy. 2006 Oct;26(10):1501-10.
12. Beyond the Reward Pathway. Genetic Science Learning Center; 2010 Nov [cited 2010 Nov 11]; Available from: http://learn.genetics.utah.edu/content/addiction/reward/pathways.html.
13. Drevets WC, Gautier C, Price JC, Kupfer DJ, Kinahan PE, Grace AA, et al. Amphetamine-induced dopamine release in human ventral striatum correlates with euphoria. Biol Psychiatry. 2001 Jan 15;49(2):81-96.
14. Sulzer D, Sonders MS, Poulsen NW, Galli A. Mechanisms of neurotransmitter release by amphetamines: a review. Prog Neurobiol. 2005 Apr;75(6):406-33.
15. Giannini AJ, Quinones RQ, Martin DM. Role of beta-endorphin and cAMP in addiction and mania. Society for Neuroscience Abstracts. 1998;15(149).
16. Krebs MO, Gauchy C, Desban M, Glowinski J, Kemel ML. Role of dynorphin and GABA in the inhibitory regulation of NMDA-induced dopamine release in striosome- and matrix-enriched areas of the rat striatum. J Neurosci. 1994 Apr;14(4):2435-43.
17. Drugs Alter the Brain's Reward Pathway. Genetic Science Learning Center; 2010 Nov [cited 2010 Nov 11]; Available from: http://learn.genetics.utah.edu/content/addiction/drugs/.
18. Drugs and the Brain. National Institute on Drug Abuse; [cited 2010 Nov 11]; Available from: http://drugabuse.gov/scienceofaddiction/brain.html.
19. http://www.justice.gov/dea/programs/forensicsci/microgram/mg0103/mg0103.html.
20. http://blog.usa.gov/roller/govgab/tags/sleep.




Critical Slowing Down and Early Warning Signs in Complex Systems
Leland Bybee

One doesn't have to look far to find instances of man's negative relationship with his environment. Most recently, the 2010 BP oil spill in the Gulf of Mexico provided a striking example of our ability to detrimentally affect nature. As humanity's technological and scientific knowledge increases, our ability to shape our environment increases in step, paving the way for potentially more destructive interactions with nature. These interactions have often taken the form of species extinctions resulting from damaged natural equilibria, in which a previously stable system is drastically altered by the introduction of a major external force [1]. One example is the South American rainforests, where local logging efforts fragmented the forest into isolated patches in which species from one fragment could not interact with species from another. Over time these isolated fragments experienced a significant decline in species diversity compared with more unified sections of rainforest [2]. Another classic example of a damaged natural equilibrium is the introduction of the European rabbit to Australia: through erosion and the removal of tree bark, rabbits are believed to account for more native plant loss than any other species in Australia [3]. Both of these destructive interactions have had major environmental and economic costs, with European rabbits in Australia accounting for millions of dollars in crop damage every year and deforestation of South American rainforests leading to the potential loss of major pharmaceutical breakthroughs [3,4]. As such, much effort has been put into understanding these complex environmental systems and developing new methods to accurately predict if and when a species will go extinct [1].

A recent article in Nature discusses how evidence in a number of fields suggests that complex ecological systems have critical thresholds, or "tipping points," that lead to abrupt system changes [5]. Often the systems being studied are quite complex, making it very difficult to predict when a critical threshold will be crossed; however, recent work on the concept of critical slowing down (CSD) has presented a possible tool for predicting these thresholds [5]. CSD is the idea that as a complex system approaches a critical threshold, the system's ability to recover from small fluctuations and return to equilibrium decreases [5].

Reproduced from [9]


Assuming limited influence from massive outside forces, complex systems normally fluctuate around an equilibrium point for various qualities. For instance, given a consistent food source and a stable ecological niche, a species' population will generally fluctuate around a certain level. In theory, a system that is approaching a threshold will have a progressively harder time returning to equilibrium [5]. This critical slowing down would be directly observable; for instance, a species approaching extinction would have a noticeably more difficult time returning to a stable population level after each cyclical decline, until eventually the species went extinct. By tracking such observable equilibrium values, CSD could provide a method to predict when a complex system is approaching a critical threshold and, hopefully, allow time for intervention.

One recent empirical test of CSD was performed by John M. Drake and Blaine D. Griffen, whose findings were published in Nature [1]. The researchers divided a population of the freshwater cladoceran Daphnia magna, or water flea, into two groups: one that received a constant food supply and one that received a diminishing food supply [1]. Drake and Griffen were then able to observe how the two populations behaved, and how their sizes fluctuated, in relation to their specific food supplies. Theory would suggest that as a population approaches a tipping point, it should experience CSD.
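That prediction can be made concrete with a minimal simulation. The sketch below is not Drake and Griffen's actual model; it is a generic logistic population whose growth rate r shrinks as food dwindles, with the recovery time from a fixed 20% knockdown measured at each value of r.

```python
# CSD illustration: near the tipping point (r -> 0), recovery from the
# same perturbation takes longer and longer. Parameters are illustrative.

def recovery_time(r, K=100.0, dt=0.01):
    """Time for logistic growth to climb from 0.8*K back to 0.95*K."""
    N, t = 0.8 * K, 0.0
    while N < 0.95 * K:
        N += dt * r * N * (1 - N / K)   # dN/dt = r N (1 - N/K)
        t += dt
    return t

for r in (1.0, 0.5, 0.25, 0.1, 0.05):   # shrinking food -> shrinking r
    print(f"growth rate {r:4.2f}: recovery time {recovery_time(r):6.1f}")
```

Each halving of the growth rate roughly doubles the recovery time, which is exactly the slowing that an observer of a declining-food population would record.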



Therefore, if a population's food supply decreases to the point that its growth rate is consistently less than one, the population will approach a tipping point and experience CSD, having a progressively more difficult time recovering from cyclical declines until it dies off entirely. What the researchers found was that the group receiving the diminishing food supply matched the characteristics of CSD almost perfectly [1]: every time the group's population dropped as a result of a decrease in available food, it took longer for the population to return to equilibrium, until eventually the trajectories of the two groups completely diverged and the declining-food group died off [1].

The beauty of Drake and Griffen's study is that it provides a clear empirical test of CSD in an isolated and controlled environment. Their findings suggest that given some extraneous force, whether a diminishing food supply, an invasive species, or any other factor detrimental to a species' ecological niche, the species should exhibit critical slowing down in the period prior to extinction. Ideally these results could be applied to more complex natural systems, allowing environmental and conservation groups to take action before species go extinct.

The potential applications of critical slowing down, however, are not limited to population dynamics. CSD has also been discussed as a potential predictor of climate change [6]. The historical record suggests that past climate changes have taken place as rapid adjustments similar to the critical thresholds discussed above: the earth's climate would generally fluctuate around certain equilibrium temperatures before experiencing rapid alterations, in a fashion consistent with CSD [6]. The evidence for these sudden shifts is apparent in the geological record of recent ice ages and of the breakup of glacial sheets that followed [6]. This evidence fits current theories of climatology, in which both glacial advance and rapid warming are driven by positive ice-albedo feedback loops. In the case of glacial periods, increased snow and ice raise the earth's albedo, causing more solar energy to be reflected, further cooling the earth and increasing snow cover. In the case of rapid warming, decreased snow and ice coverage lowers the earth's albedo, allowing more solar energy to be absorbed, heating the earth and melting yet more snow and ice [6]. Such feedback loops would explain the existence of critical slowing down in climate change.

In addition to its potential applications for climate change, CSD could also be used to help predict epileptic seizures. Epileptic seizures are the result of sudden changes in clusters of neural cells that cause the cells to begin firing in unison [5]. Some research suggests that the neuronal excitability which causes neural clusters to fire in unison is itself the result of a feedback loop, similar to those found in past climate changes [7].

This research has already had some success in helping limit seizures through drug therapy that attempts to keep these feedback loops from beginning in the first place [7]. As with climate change, CSD could be used to help predict when a neural feedback loop is about to begin, providing potential early warning signs for sufferers of epileptic seizures.

Finally, CSD also has potential applications in financial markets [8]. Research into collective behavior in financial markets has shown that when stocks are divided into distinct subsets based on business sector, the subset prices act similarly to CSD models in other fields [8]. Compared with individual stock prices, these subset-based prices experience difficulty returning to equilibrium just prior to rapid price changes [8].

While the potential applications of critical slowing down are apparent, its limitations should not be understated. As the complexity of a system increases, the potential factors influencing behavior in that system increase as well. Ironically, the systems that would benefit from CSD the most, those with so many variables at work that a single theoretical explanation is impossible, are also the ones where CSD is hardest to put to practical use: because their complexity makes it difficult to isolate the variable responsible for any change in the system, it is difficult to intervene in time to alter the impending critical threshold. Drake and Griffen acknowledge this limitation in their research, as isolated populations like those in the study are virtually non-existent in nature [1]. While CSD provides a good theoretical explanation for critical transitions and is observable in isolated instances, the complexity of the systems in question still makes applications difficult.

Understanding complex systems and their accompanying critical transitions is important to understanding how our world works. Conclusions drawn from CSD may provide tools for resolving problems in fields such as ecology, where species are increasingly at risk of extinction. However, the limitations of CSD, and of testing and monitoring complex systems in general, need to be acknowledged. Hopefully, with a proper understanding of the complex systems that make up our world, we can counteract our increasing capacity for damaging our natural environment and prevent critical thresholds and system failures before they happen.
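In field and market data, CSD is usually hunted for with simple statistics rather than full models; a rising lag-1 autocorrelation in the fluctuations is the standard early-warning indicator (the approach of Dakos et al. [6]). The sketch below demonstrates the statistic on synthetic data only: an AR(1) process whose persistence parameter phi plays the role of slowing recovery.

```python
# As recovery slows (phi -> 1), successive fluctuations become more
# similar, and the lag-1 autocorrelation climbs toward 1. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

for phi in (0.2, 0.5, 0.8, 0.95):       # persistence of fluctuations
    x = np.zeros(5000)
    for i in range(1, len(x)):
        x[i] = phi * x[i-1] + rng.normal()
    print(f"phi = {phi:.2f} -> lag-1 autocorrelation = {lag1_autocorr(x):.2f}")
```

A monitoring scheme would compute this statistic in a sliding window over, say, a population time series and treat a sustained upward trend as a warning sign.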

References

1. Drake, John M., and Blaine D. Griffen. "Early Warning Signals of Extinction in Deteriorating Environments." Nature 467 (2010): 456-59.
2. Turner, I. M. "Species Loss in Fragments of Tropical Rain Forest: A Review of the Evidence." Journal of Applied Ecology 33 (1996): 200-09.
3. Williams, Kent. Managing Vertebrate Pests: Rabbits. Canberra: Australian Govt. Pub. Service, 1995.
4. Newman, Erin B. "Earth's Vanishing Medicine Cabinet: Rain Forest Destruction and Its Impact on the Pharmaceutical Industry." American Journal of Law and Medicine 20.4 (1994): 479-501.
5. Scheffer, Marten, Jordi Bascompte, William A. Brock, Victor Brovkin, Stephen R. Carpenter, Vasilis Dakos, Hermann Held, Egbert H. Van Nes, Max Rietkerk, and George Sugihara. "Early-warning Signals for Critical Transitions." Nature 461.7260 (2009): 53-59.
6. Dakos, V., M. Scheffer, E. H. Van Nes, V. Brovkin, V. Petoukhov, and H. Held. "Slowing down as an Early Warning Signal for Abrupt Climate Change." Proceedings of the National Academy of Sciences 105.38 (2008): 14308-12.
7. Misonou, H., D. Mohapatra, and J. Trimmer. "Kv2.1: A Voltage-Gated K Channel Critical to Dynamic Control of Neuronal Excitability." NeuroToxicology 26.5 (2005): 743-52.
8. Plerou, V., P. Gopikrishnan, B. Rosenow, L. Amaral, and H. Stanley. "Collective Behavior of Stock Price Movements - a Random Matrix Theory Approach." Physica A: Statistical Mechanics and Its Applications 299.1-2 (2001): 175-80.
9. http://www.epa.gov/glnpo/image/vbig/213.jpg.



Leland Bybee is a sophomore studying Economics at the University of Chicago.




"If You Give an Atom a Photon..." The Applications of Laser Spectroscopy and Ion Traps
Alexandra Carlson

Nowadays, when the layperson hears the phrase "fundamental forces of nature," he or she immediately imagines a giant particle accelerator housing collisions of weird subatomic particles at unimaginably high energies. These mysterious forces are entities that physicists study in dark labs, the subject of highly abstract lectures that only the most ambitious of college students dare to attend. They are the glue-like forces that hold atoms and all matter together; they are the reason why we can feel the ground and why the grass is green; they are the reason for the process of life as we know it. They are so profound and operate on such a small scale that it appears there would be no possible way for humans to harness them and apply them to any normal day-to-day activity. Due to these forces' intimidating nature, many people assume that massive research facilities, like the Large Hadron Collider, are necessary to test and understand the forces' behavior at the atomic and subatomic levels. However, this notion is simply untrue; these forces, specifically the electromagnetic and nuclear forces, appear in highly unexpected places in our lives, anywhere from the synthesis of molecules to the construction of history. A particle collider is not a necessity when it comes to applying these forces in experiments, and it is all thanks to an incredibly useful technique known as laser spectroscopy.

Reproduced from [20]


This procedure is an ideal way to measure important atomic properties, and, unlike experiments at giant accelerator facilities, it can be performed at energy levels low enough to conduct the experiment on a table!

Laser spectroscopy is the study of the interaction of laser radiation with matter [1]. When atoms are placed in laser light, or any other type of electromagnetic radiation, they undergo internal transitions that cause them to emit a unique collection of spectral lines, comparable to a fingerprint. Scientists study the emitted spectrum of an atom under such influences to determine its structural, magnetic, and electric properties. Laser spectroscopy is a branch of spectroscopy, the study of the interaction of electromagnetic radiation (visible light, radio waves, x-rays, infrared, and so on) with matter. Unlike other forms of electromagnetic radiation, laser light can be produced in incredibly intense and short pulses, which allows for high precision and resolution in measurements on individual atoms, whereas, for example, conventional optical spectroscopy is restricted to enormous groupings of atoms [2]. With this technique, scientists can not only make detailed measurements of these fundamental forces at the subatomic scale, but also use their findings to make more sense of life at the macroscopic level.

Before jumping into technical applications of this procedure, it is important to understand exactly how laser spectroscopy mines information from an atom. The process is made possible by both the atomic and nuclear structure of the atom. An atom contains a positively charged nucleus, composed of particles called protons and neutrons. This nucleus is orbited by clouds of negatively charged electrons, referred to as orbitals. These clouds form because of the attractive electric force of the nucleus, and since each element of the periodic table has a different number of electrons, each element has a different combination of orbital shapes. Electrons can move between these orbitals by gaining or losing a discrete, quantized amount of energy. This "packet" of energy is referred to as a photon [3].



The distance between two orbitals determines the frequency (energy) of the photon needed for the electron to undergo the transition from one orbital to another. Scientists can force the electron to transition by shooting the atom with a beam of photons, fondly referred to as a laser. If the laser light is set at the correct frequency, the photon excites the electron, allowing it to overcome the electric and centripetal forces holding it in its original orbital and jump to the next higher-energy orbital.
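The arithmetic that links an orbital gap to a laser setting is simply E = hf. As a quick illustration (not taken from the article's sources), the snippet below uses the textbook 10.2 eV gap between hydrogen's 1S and 2P orbitals and recovers the familiar 121.6 nm ultraviolet line.

```python
# Convert a transition energy to the photon frequency and wavelength a
# laser must supply. The 10.2 eV hydrogen 1S -> 2P gap is a textbook value.
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

E = 10.2 * eV                 # transition energy, J
f = E / h                     # required photon frequency, Hz
wavelength = c / f            # corresponding wavelength, m
print(f"frequency = {f:.3e} Hz, wavelength = {wavelength * 1e9:.1f} nm")
```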

With a greater knowledge of the properties of atoms, scientists can use laser spectroscopy to perform specific manipulations on atoms that have applications in a vast number of fields

Likewise, when an electron transitions from a higher energy level to a lower one, it emits a photon, and this emitted photon has the frequency that corresponds to the energy difference of the two orbitals. The frequency of this photon is in the range of visible light, which allows for easy detection of the transition energies of specific orbitals within specific atoms. With this technique, scientists are able to measure the internal energies of the atom, determine the effects of the electric force pulling on the electrons, and infer the relative structure of the atom. This process, in essence, is laser spectroscopy: one shoots a laser at a frequency corresponding to a specific energy-level transition and measures the properties of the atom during and after the excitation [4].

With a greater knowledge of the properties of atoms, scientists can use laser spectroscopy to perform specific manipulations on atoms that have applications in a vast number of fields, ranging from the more obvious, like nuclear physics and analytical chemistry, to atmospheric science, biology, and even history. First, laser spectroscopy allows for the very precise detection of atoms and molecules because of the highly tunable lasers available in modern technology. At Argonne National Laboratory, scientists are perfecting a method known as Atom Trap Trace Analysis that can be used to date polar ice and groundwater [5]. This process, abbreviated ATTA, can also be used to "analyze isotopes in the atmosphere, which can help in verifying compliance with the Nuclear Non-Proliferation Treaty by monitoring nuclear fuel re-processing activities as well as detecting leaks from fuel containers for nuclear safety issues" [6]. The isotopes used in these experiments are krypton-81, a cosmogenic isotope used for dating materials such as polar ice and ancient groundwater, and krypton-85, an isotope produced in the fission of both uranium and plutonium [7]. In the ATTA method, the desired krypton isotope is trapped in magnetic fields by laser spectroscopy: a known transition of the isotope is selected by tuning the laser, and when the isotope undergoes this transition it is caught in the magnetic trap and counted.
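The dating step that follows the counting is ordinary radioactive-decay arithmetic: with decay constant λ = ln 2 / t½, a sample's age is ln(R0/R)/λ, where R/R0 is the surviving fraction of the isotope. The numbers below, the roughly 230,000-year krypton-81 half-life and the measured fraction, are illustrative stand-ins, not values from the Argonne measurements.

```python
# Decay-dating arithmetic for Kr-81. Half-life and sample fraction are
# illustrative values only.
import math

t_half = 2.3e5                    # Kr-81 half-life, years (approximate)
lam = math.log(2) / t_half        # decay constant, 1/years

fraction_left = 0.05              # surviving fraction of atmospheric Kr-81
age = math.log(1 / fraction_left) / lam
print(f"a sample retaining {fraction_left:.0%} of its Kr-81 "
      f"is ~{age:,.0f} years old")
```

A sample retaining one-twentieth of its original krypton-81 comes out close to a million years old, which is why this isotope suits the Nubian Aquifer's timescales.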


This allows scientists to use impure samples and does not require a special operating environment [8]. Such an experiment can be performed in a standard lab without any kind of giant accelerator or collider, because the energy required is extremely low compared to that of the LHC or the Tevatron at Fermilab. In fact, the ATTA apparatus is small enough to fit on a lab table! Using this efficient method, Argonne physicists have determined the ages of groundwater in different regions of the Nubian Aquifer underneath the Eastern Sahara Desert, with regions dating in the range of 200,000 to 1,000,000 years old [9]. These results elucidate the hydrologic behavior of this huge aquifer, which has important implications for reconstructing an accurate climate history as well as for water resource management in the region [10]. Such implications include the development of a new dating method, which allows for detailed mapping of the behavior of the aquifer; this information is vital for agriculture and overall survival in the region, and it helps create more accurate models that will aid in predictions of future climate patterns and land use. The data also give historians information about ancient weather and atmospheric composition in that part of the world, which can throw light upon the living environment of ancient civilizations. So from a simple tabletop experiment comes a wealth of information that allows academics in numerous fields to build past, present, and future worlds.

Another important application of laser spectroscopy can be found in the biomedical sciences. In the last few years, several research groups have developed a method of laser spectroscopy, known as coherent anti-Stokes Raman spectroscopy, that allows for a non-invasive but highly accurate way to image cancer cells in a life-like environment [11]. In previous years, the main method used to study cancer cell cultures was to grow layers of the cancerous cells in a tray. The spatial structure of cell growth in this environment is typically a single layer of cells, unlike the three-dimensional growth patterns that occur within the human body; therefore, measurements taken on these cell trays are not very accurate. Furthermore, only one measurement can be taken per tray, because the cells are destroyed in the process of culture imaging and analysis [12]. With anti-Stokes Raman spectroscopy, these logistical problems are solved. The method, abbreviated CARS, uses laser beams to study the vibrations of bonds in cellular molecules [13]. Photons scattering off the molecule's structure undergo a shift in energy due to atomic absorption, which gives information on the types of atoms, their positions relative to one another, and the bonds that connect them; this change in energy can be used to model the molecule [14]. CARS allows scientists to study the behavior of cells in their natural environment and, in doing so, to accurately describe cancer cell propagation, the first and one of the most important steps in the development of an effective treatment for such a deadly disease.



However, despite its "miracle worker" appearance, laser spectroscopy is not without obstacles [15]. For example, in the case of cancer cell imaging, large sets of trays cannot be processed, due to limits in the available technology. Furthermore, so much data is produced that there is no efficient method to analyze all of it in a reasonable amount of time. In other applications, like the measurement of certain nuclear properties, a highly tunable laser is needed to achieve a usable precision. A laser of this nature is very sensitive, and the technology can be costly, both in money and in the time required to maintain the setup. Specifically in the case of krypton dating, imprecise measurements would carry margins of error so large that the ages of two different sites could not be compared; also, the process of tuning the laser to the exact frequency of the desired transition can be incredibly time-consuming and finicky [16]. However, these issues should be viewed not as problems but as goals for the future of laser spectroscopy; the technique has opened so many doors in the past decade that it would be nonsensical to regress to old, inefficient methods.

The science and technology associated with laser spectroscopy have advanced considerably in the past few decades. Since 1986, scientists have been using laser spectroscopy to make finer and finer measurements of the hydrogen atom's resonance frequencies, both to test the basic laws associated with the fundamental forces of physics and to determine accurate values of fundamental constants [17]. For example, in measurements of the resonance of the photon transition from the 1S ground state to the metastable 2S state in the hydrogen atom, researchers T. W. Hänsch and H. Walther of the Max Planck Institute have achieved a resolution of better than 1:10^12. To put this ratio in perspective, consider comparing the thickness of a human hair to the circumference of the earth's equator. The same group has measured the absolute optical frequency of this resonance to 13 decimal places [18].
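That comparison is easy to sanity-check with rough assumed sizes: an 80-micrometer hair set against the 40,075-kilometer equator does land near one part in 10^12.

```python
# Back-of-envelope check of the 1:10^12 comparison; sizes are rough.
hair_m = 80e-6              # typical human hair thickness, m
equator_m = 40_075_000.0    # circumference of the equator, m
print(f"hair / equator = {hair_m / equator_m:.1e}")   # ~2e-12
```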

Furthermore, there has been an explosion in the availability and variety of tunable sources. Different methods of laser-pulse amplification and compression have created tabletop laser sources of intense, ultrashort pulses that are revolutionizing the study of incredibly small phenomena in condensed-matter physics, molecular physics, and even biological science [19]. New processes such as nonlinear frequency conversion allow for a very wide spectral range when tuning the laser, from millimeter-wavelength radiation to the nanometer wavelengths of x-rays, ultimately making the process of tuning the laser much easier [20]. All of these advances allow laser spectroscopy setups to be built smaller and smaller, sharply reducing their costs in money, experimental complexity, and required lab space. This means that more and more labs have access to the tools needed to study the fundamental forces of nature, and can take laser spectroscopy out of the lab and apply its versatility to the real world. Unlike the large colliders and accelerators, which require years of collaboration, design, and paperwork, not to mention an obscene amount of money and space to construct and maintain, laser spectroscopy can test the same fundamental forces of nature and use them to probe new realms of physics while requiring only the space of a lab bench.

All in all, laser spectroscopy not only lets scientists probe the nuclear structure of the atom, but also allows other fields of study to expand and grow. This technique allows scientists to harness the forces of nature and apply them to every part of our world, ultimately increasing our wealth of knowledge. The key point is that laser spectroscopy can be performed on a tabletop in a lab; it does not require the high energies, money, and space that accelerators or colliders demand, and is therefore much less expensive to operate and maintain. This aspect, coupled with the remarkable development of laser technology, has made the technique available to labs worldwide, and has thus generated many more exciting opportunities to understand our earth and society. So if you give an atom a photon, it can return to you a book of information about the inner workings of our world.

References

1-3. McGraw-Hill Concise Encyclopedia of Physics. The McGraw-Hill Companies, Inc.; 2002.
4. The American Heritage Medical Dictionary. Houghton Mifflin Company; 2007.
5-7. Du X, Bailey K, El Alfy Z, El-Kaliouby B, Lehmann BE, Lorenzo R, Lu Z-T, Mueller P, O'Connor TP, Purtschert R, Sturchio NC, Sultan M, Young L. Atom Trap, Krypton-81, and Saharan Water. Physics Division, Argonne National Laboratory, with collaborators at the Egyptian Geological Survey and Mining Authority, Ain Shams University, the University of Bern, the University of Illinois at Chicago, and the State University of New York at Buffalo; [cited 2011 February 10]. Available from: http://www.phy.anl.gov/mep/atta/research/krypton.html.
8. Lu Z-T, Bailey K, Chen C-Y, Li Y-M, O'Connor TP, Young L. Atom Trap Trace Analysis. Argonne National Laboratory and Northwestern University; [updated 2008 June 5, cited 2011 February 10]. Available from: www-mep.phy.anl.gov/attti.
9. Sturchio NC, Du X, Purtschert R, Lehmann BE, Sultan M, Patterson LJ, Lu Z-T, Mueller P, Bailey K, O'Connor TP, Young L, Lorenzo R, Kennedy BM, van Soest M, El Alfy Z, El Kaliouby B, Dawood Y, Abdallah AMA. One million year old groundwater in the Sahara revealed by krypton-81 and chlorine-36. Geophysical Research Letters. 2004;31.
10. Du X, et al. Atom Trap, Krypton-81, and Saharan Water (same source as refs 5-7).
11-15. New Earth BioMed. Laser Bioassay Project. Available from: http://www.newearthbiomed.org/228/laser-bioassay.
16. Du X, et al. Atom Trap, Krypton-81, and Saharan Water (same source as refs 5-7).
17-19. Hänsch TW, Walther H. Laser spectroscopy and quantum optics. Reviews of Modern Physics. 1999;71(2).
20. http://aemc.jpl.nasa.gov/instruments/vcam.cfm.



Alexandra Carlson is a sophomore studying Physics and Visual Arts at the University of Chicago.




How Many Nukes Does a Superpower Need?
Evan Woo Suk Choi

Hollywood loves its nuclear armageddons and radioactive mutants. Through its sometimes serious, sometimes satirical depictions of nuclear warfare, a basic human fear is played upon: we are terrified of nukes. The modern world, however, lacks any sign of an impending nuclear holocaust, at least for now. Since the end of the Cold War, the international community has shifted its focus from the vast arsenals of the United States and Russia to worries about terrorism and the rise of new nuclear states like North Korea and Iran. The New Strategic Arms Reduction Treaty (New START), ratified by the U.S. Senate last December and the Russian parliament a month later, exemplifies recent efforts to create a nuclear-free world. Barack Obama is the first president to make global nuclear disarmament a centerpiece of American defense policy, but with continued aggression from nations like North Korea, the Obama administration faces renewed challenges both at home and abroad.

Just a month ago, the fate of New START seemed to hang by a thread, and it was no surprise. The treaty met fierce opposition from the Republican Party, which viewed it as an act of appeasement; a world free of nuclear weapons sounds pleasant, but that was the world that delivered World Wars I and II. As American defense analyst Edward Luttwak wrote, "we have lived since 1945 without another world war precisely because rational minds…extracted a durable peace from the very terror of nuclear weapons" [1]. Since Fat Man and Little Boy, nuclear weapons have served as undeniable forces of deterrence.

Reproduced from [13]


In modern international politics, the cost of striking first against an opponent has skyrocketed, because a first strike risks nuclear retaliation. During the Cold War, while both the United States and the Soviet Union participated in proxy wars, they never went head to head; winning big became too dangerous for both parties [2]. Nuclear weapons have brought not only overkill but mutual kill, a condition known as Mutually Assured Destruction (MAD). As former U.S. defense official Paul Nitze aptly put it, the "atomic queens may never be brought into play; they may never actually take one of the opponent's pieces. But the position of the atomic queens may still have a decisive bearing on which side can safely advance a limited-war bishop or even a cold-war pawn" [3]. The delicate and risky nature of the nuclear game of chess naturally induces both parties to move for a stalemate.

Nobody knows what a nuclear war would look like or what course it might follow. What is certain, however, is that the potential destructive force of a nuclear weapon inhibits its first use [4]. The level of thermal radiation released by a nuclear blast, measured in calories per square centimeter (cal/cm²), is deadly, to say the least. The 15-kiloton weapon dropped on Hiroshima, for instance, released approximately 10 cal/cm², initiating a mass fire all around ground zero. This fire covered an area of roughly 4.4 square miles and burned for more than six hours after the initial explosion, instantly killing between 70,000 and 130,000 people [5]. Accompanying any nuclear explosion are also high-speed winds and an immense blast wave. The blast wave is characterized by overpressure, the air pressure above the normal atmospheric level (14.7 pounds per square inch (psi) at sea level). And while average hurricane winds hover around 75 miles per hour (mph), winds from a nuclear blast easily exceed hundreds of miles per hour.

If a strategic nuclear weapon detonated over the John Hancock Tower in downtown Chicago, the level of destruction would be almost beyond comprehension. Such a hypothetical bomb would have a yield of about 300 kilotons, the average yield of most modern nuclear weapons. Massive amounts of energy, approximately 300 trillion calories, would be released upon detonation within about a millionth of a second [6]. The energy would mostly be in the form of light, immediately heating the air surrounding the point of detonation and creating a fireball [7]. At its core, the fireball would produce temperatures of over two hundred million degrees Fahrenheit, about four to five times the temperature at the center of the sun [8]. Everything and everyone in the vicinity would instantly evaporate. The blast wave and high-speed winds caused by the detonation would easily destroy low-rise buildings and skyscrapers. At the University of Chicago Gleacher Center, approximately 0.9 miles from ground zero, intense light from the fireball would melt asphalt, burn paint, and liquefy metal surfaces within seconds.
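The energy figure above is a unit conversion worth making explicit: one kiloton of TNT is defined as 10^12 calories (about 4.184 x 10^12 joules), so the arithmetic below reproduces the 300-trillion-calorie number and compares it against the Hiroshima bomb.

```python
# Yield arithmetic for the hypothetical 300 kt weapon described above.
YIELD_KT = 300                 # assumed yield, kilotons of TNT
CAL_PER_KT = 1e12              # definition: 1 kt TNT = 10^12 calories
J_PER_CAL = 4.184              # joules per (thermochemical) calorie

energy_cal = YIELD_KT * CAL_PER_KT
energy_J = energy_cal * J_PER_CAL
print(f"{energy_cal:.0e} cal = {energy_J:.2e} J "
      f"(~{YIELD_KT / 15:.0f}x the 15 kt Hiroshima bomb)")
```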



The shock wave and high-speed winds accompanying the initial detonation would toss burning vehicles into the air and into buildings. Within tens of minutes, the rising mass of fire-heated air above ground zero would signal the start of a fire of massive scale. In a fraction of an hour it would generate ground winds of hurricane force with average air temperatures well above the boiling point of water [9]. The entire area, approximately 40 to 65 square miles of downtown Chicago, would be engulfed in a mass fire.

What is disturbing is that the United States and the Soviet Union deployed nuclear weapons in the megaton, not kiloton, range during the Cold War and after. In fact, all the explosive power used in World War II could fit in one 3-megaton bomb, and that one bomb could fit in the nose cone of one large intercontinental missile [10]. By the 1980s, the United States and the Soviet Union together had more than 50,000 such bombs. In the final months of Dwight Eisenhower's presidency in 1960, the U.S. military created the first formal nuclear war plan, the Single Integrated Operational Plan (SIOP). The plan was simple: launch all American nuclear warheads in the event of war with the Soviet Union. At the time, the United States was ready to launch 1,459 nuclear bombs on alert, totaling 2,164 megatons, against 654 targets in the Soviet Union, Eastern Europe, and China. The United States' whole arsenal, however, totaled 3,423 warheads, capable of 7,847 megatons worth of destruction, potentially killing 285 million people and injuring 40 million more [11]. Overkill was the strategy. The nuclear arms race had spiraled out of control.

Coming out of the Cold War, the United States and Russia were intent on maintaining nuclear nonproliferation and the general nonuse of nukes. Both nations feared escalation, while the rise of new nuclear states added a chilling dimension to the issue. To build transparency and trust, George H.W. Bush and Boris Yeltsin signed the second Strategic Arms Reduction Treaty (START II) in 1993, which would have cut American and Russian arsenals by almost three-quarters and eliminated land-based intercontinental ballistic missiles carrying multiple warheads. START II never entered into force, however, and the original START agreement expired in late 2009 as work on the New START began. The New START will force both the United States and Russia to limit their strategic warheads to 1,550 within seven years. Because of past reductions, neither side would eliminate a large contingent of its arsenal, but President Obama hopes that the New START will re-establish a comprehensive inspection regime and serve as a stepping stone toward global nuclear disarmament.

Several months earlier, when ratification of the treaty seemed bleak, President Obama placed his reputation and political clout on the line by personally pushing for ratification. To appease Republican opposition, the administration promised to commit $85 billion over 10 years to modernize U.S. nuclear facilities so that a smaller arsenal would continue to be well-maintained, a commitment that is hardly compatible with a nuclear-free world.

Senator John Kerry, the Massachusetts Democrat who led the floor fight for the treaty, said that the American people would be safer with fewer Russian missiles aimed at them. Indeed, fewer Russian missiles would be aimed at American cities, but even those few will be immensely powerful and effective. One of the most urgent remaining tasks, therefore, is to reduce tactical nuclear weapons. These smaller arms, with a 300- to 400-mile range, have little deterrent value relative to their larger counterparts, and they have never been subject to treaties or inspection. The United States holds approximately 500 tactical nuclear weapons, including 180 dispersed around the European continent. While these weapons are considered secure, many claim that Russia's larger arsenal (between 3,000 and 5,000) is vulnerable to covert sale or theft. Whether Russia would give up its superior position in tactical nukes is hard to determine; Moscow has previously stated that it would not negotiate until Washington removed all of its tactical nuclear weapons from Europe.

The strategic importance of nuclear primacy, however, especially for a superpower like the United States, is clear [12]. Withdrawal of a strategically placed atomic queen would only embolden other nations, for the advancement of an opponent's pawn or bishop seems likely, if not inevitable, as has been the case with North Korea's recent unveiling of a state-of-the-art nuclear facility. Iran's insistence on maintaining its right to enrich uranium in the face of global opposition further complicates the issue.

To truly promote a nuclear-free world, many argue that the United States and Russia cannot stop with New START; a ban on nuclear testing must also be approved. The Comprehensive Test Ban Treaty, signed by President Clinton in 1996, has yet to be ratified by India, Pakistan, and, unsurprisingly, North Korea. The hermit kingdom is the only nation in the 21st century to have tested its nuclear technology in weaponized form, prompting many to urge the Obama administration to push the ban for approval in Congress. Nevertheless, a world with zero nukes still lies well beyond Obama's presidency. The New START, like most nonproliferation treaties before it, acts first and foremost as a medium through which trust and diplomatic bridges can be built. For now, President Obama's commitment "to seek the peace and security of a world without nuclear weapons," made in his 2009 Prague speech, feels like an elusive ideal, a Hollywood plotline. It remains to be seen what steps the United States will take to spearhead the disarmament movement and exactly how it will induce others to follow. Domestic approval of New START, at least, signifies a serious step forward.

References

1. Edward N. Luttwak, "Of Bombs and Men," Commentary, August 1983, p. 82.
2. Robert Jervis, The Meaning of the Nuclear Revolution, p. 53.
3. Paul H. Nitze, "Atoms, Strategy and Policy," Foreign Affairs (January 1956).
4. Kenneth Waltz, "More May Be Better," in The Spread of Nuclear Weapons, p. 5.
5-7, 9. Lynn Eden, Whole World on Fire: Organizations, Knowledge, and Nuclear Weapons Devastation. Cornell University Press, 2004.
8, 10. Joseph S. Nye, Jr. and David A. Welch, Understanding Global Conflicts and Cooperation.
11. Fred Kaplan, The Wizards of Armageddon, p. 269.
12. Keir A. Lieber and Daryl G. Press, "The End of MAD? The Nuclear Dimension of U.S. Primacy," p. 33.
13. http://ehp.niehs.nih.gov/docs/2004/112-17/forum.html.



Evan Woo Suk Choi is a junior studying Political Science & Economics at the University of Chicago.




The Origins of Human Communication
Sadaf Ferdowsi

Imagine being 19 years old and suddenly losing all control over your limbs. This is what happened to Ian Waterman, who contracted a never-diagnosed fever that caused deafferentation, a loss of sensory feedback from his body. Fortunately, after many years of determination, Ian was able to use vision and cognition to re-learn motions to the extent that he now performs actions "normally." When his hands are out of his sight, however, Ian is once again unable to control his movements. Interestingly enough, his use of gesture with speech is uninhibited: even without seeing his hands, Ian's gestures still coincide with his speech [1]. This suggests that gestures and speech are closely linked, and this connection can help explain the origins of human communication.

Although research regarding the origin of human communication is relatively new, there has already been much debate as to which theory best explains it. In the 1960s, gestural theory posited that non-verbal communication, such as gesture, evolved first and was then followed by speech, whereas another theory, monogenesis, asserts that a universal speech came first and gesture evolved afterwards. Both theories have a variety of evidence to support their claims, but recently another theory has emerged. In 1995, David McNeill, of the University of Chicago, asserted that speech and gesture co-evolved. His studies, along with others from neurological and developmental standpoints, have found links between verbal and non-verbal processes. These intimate links suggest that the processes evolved together, and thus that human communication originated with both speech and gesture.

Reproduced from [5]


One of the issues with gestural theory and monogenesis is that they do not acknowledge the codependence of gestural and linguistic processes. The theory of co-evolution, especially from a neurological standpoint, does acknowledge this codependence and conveys the intimate connection between the two. For example, in the human brain there are two main areas in charge of communication: Broca's area and the prefrontal cortex.

By examining the close link between gesture and speech, from cognitive, neurological, and developmental standpoints, it is possible to create integrative types of evidence to support the theory of their co-evolution.

The former is involved with spoken language, whereas the latter has to do with action, such as gesture. Broca's area is nestled within the prefrontal cortex, so the two are close in proximity as well as intimately tied in aiding communication. This was demonstrated in 2007, when neuroscientist Roel M. Willems conducted several studies of this link [2]. He hypothesized that Broca's area and the premotor cortex would rely on each other in order to interpret ambiguous nouns. To test this, actors used an ambiguous noun in a sentence accompanied by a gesture; in the critical condition, the gesture did not match up with the noun's meaning. This disconnect between gesture and speech engendered overlapping neural activity in Broca's area and the premotor cortex, which suggests that both areas were necessary for interpreting the ambiguous noun. In Willems' other study, subjects were asked to communicate naturally with gesture and then were discouraged from gesturing, in order to assess neural activity in Broca's area and the prefrontal cortex. Unsurprisingly, both areas were activated. Furthermore, the two areas created a sort of interactive feedback loop in order to convey information [2]. When subjects were allowed to use gestures, the premotor cortex's neural activation was greater than Broca's area's, but when gestures were disallowed, Broca's area's activation increased to "pick up the slack."


These studies conveyed the intimate connection between the action and language areas in the cortex, for both interpretation and communication, which helps support their co-evolution.

A cognitive standpoint also conveys the codependence of speech and gesture, and thus their co-evolution. Cognitive studies by Rachel Mayberry, who was heavily influenced by McNeill, observed stutterers' speech and gestures, which illuminated this close relationship [3]. Since individuals with speech disorders have interrupted speech, Mayberry conjectured that their gesturing would also be halted, showing that speech and gesture are integrated in order to communicate. This link, in turn, suggests that verbal and non-verbal processes co-evolved together. Mayberry organized her study by analyzing speech/gesture patterns in twelve subjects who all had early-onset stuttering. A control group of 12 non-stutterers, matched for age, level of education, and gender, was also present. The subjects were asked to narrate a cartoon, and after the narration the gestures were transcribed alongside the speech. Because these transcriptions conveyed the close link between speech and gesture, Mayberry's hypothesis was supported.

Mayberry's hypothesis was further supported by comparing the control group with the stuttering individuals. The control group, on average, spoke 35% more words and used 152 gestures. If the gestural theory held, it would seem that stutterers would use gestures to compensate for their reduced speech; however, they produced only 82 gestures (compared to the 152). This suggests a fundamental connection between speech and gesture. Another important observation is that when the stutterers did gesture, the gestures followed the same pattern as speech. For example, the hand would fall to rest and only rise when speech resumed, or a hand gesture would begin but be abandoned upon stuttering. This correspondence suggests "multiple feedback links between gesture and speech through extemporaneous expression," which points to a co-evolutionary basis [3].

The theory of co-evolution can be further bolstered from a developmental standpoint, as in Nina Capone's review of gesture development and language acquisition (2004) [4]. The review opens by further expounding the gesture/speech link through observations of hand/speech synchrony in infants. At birth, for example, babies will open their mouths when pressure is applied to their palms, and they exhibit lots of hand-to-mouth behavior. Unsurprisingly, when babies begin to babble incoherent sounds, the babbling often coincides with hand movements, further evidence of the similar cognitive functions involved in speech and gesture.

There is an objection that babies use gestures because they observe them in their social environment, but cross-cultural studies convey that babies use certain gestures that develop independent of culture. For example, deictic gestures, such as pointing, are considered "pre-linguistic" behavior, since babies can produce them without the aid of language. Emblematic gestures, such as the thumbs-up sign, do rely on cultural context, but they develop much later, after cultural immersion, and thus are not reviewed in this article. Capone, reviewing studies conducted by both American and Italian psychologists, observes that babies develop the use of deictic gestures at the same time regardless of culture [4]. Since young babies cannot communicate verbally, their means of obtaining adult attention are limited to crying and gesturing. When whining and fussing become too juvenile, and when moving the entire body up and down starts to decline, psychologists Bates (in America) and Volterra (in Italy) observed deictic gestures emerging at around 10 months of age. Furthermore, both Italian and American babies develop the ability to match a word and a gesture at 20-24 months. The universality of the gesture and its development suggest the co-evolution of gesture and speech: despite the social environments that can influence gesture, babies still develop the same deictic gestures, which can later match with their speech. In Capone's review, then, speech development is examined and the link between speech and gesture is notably strong, which suggests that they must have co-evolved.

Reproduced from [5]

It is also important to note the development of speech and gesture in deaf and blind children. Deaf children, prior to learning formal sign language, can gesture as a means to communicate, which conveys the need to be understood in an increasingly social environment [4]. Blind children, who are unable to observe gestures in their environments, still have coinciding speech and gesture patterns, even with other blind children [1]. This shows that gesture and speech are not just socially connected, but share a deeper developmental link, which, in turn, makes it plausible to think they co-evolved.

By examining the close link between gesture and speech from neurological, cognitive, and developmental standpoints, it is possible to create integrative types of evidence to support the co-evolution of speech and gesture. Since it is impossible to go back in time to observe the actual origins of human communication, a significant lead is the close link between gesture and speech that humans have today. Gestural theory and monogenesis cannot explain this intimate connection, so McNeill's theory currently seems the strongest in the literature. Research surrounding co-evolution is relatively new, so from these first tentative yet bold steps, it will be exciting to see more studies added to the literature on the origins of human communication.

Sadaf Ferdowsi is a sophomore at the University of Chicago.

22 THE TRIPLE HELIX Spring 2011 UChicago.indb 22

stilted speech. Language and Gesture, 2000. 199-214. 4. Capone, Nina and McGregor, Karla K. Gesture Development: a review for clinical and and research practices. Journal of Speech, Language and Hearing Research, 2004. 173-179. 5. http://www.cdc.gov/ncbddd/hearingloss/language.html.

© 2011, The Triple Helix, Inc. All rights reserved. 4/22/2011 6:29:24 PM



Curing Obesity: What Role Can the Government Play? Matthew Green

Obesity is a global epidemic [1]. In America alone, two out of every three adults and one out of every five children are either overweight or obese [2]. Currently, the United States spends $150 billion annually to treat obesity-related conditions such as heart disease, diabetes, and hypertension [3]. A recent study projected that if current obesity trends do not slow, then by 2030 over 86% of American adults will be overweight or obese, at which point national expenditures will exceed one trillion dollars annually [4]. Perhaps the most chilling fact, however, is that obesity-related conditions can shorten life by up to twenty years [5]. On current national trends, American life expectancy is expected not only to level off but possibly to reverse, halting an upward trend that has continued for over two centuries [5].

Many public health experts have urged the government to intervene to slow rising obesity trends, but realistically, is this problem the government’s to solve? Can the government protect its citizenry from this future without infringing upon personal liberty and individual choice? This article examines the strategies governmental entities can use, or have already implemented, to reduce the national incidence of obesity. Four major categories – bans, taxes, subsidies, and informational mandates – will be analyzed and evaluated through case studies or proposed policies. By weighing the advantages and disadvantages of these options, this article attempts to clarify the role government can play in the obesity epidemic and to make a general set of policy recommendations. These recommendations do not suggest that government has the ability or responsibility to single-handedly fix obesity; rather, there are avenues government can take to encourage Americans to spearhead positive change in their own lifestyle habits. Current obesity trends should not be allowed to continue unabated, because the consequences of doing nothing are economically dire and would be catastrophic to the overall health of America.

Many of the popular policy strategies for dealing with obesity attempt to incentivize healthy eating habits. Whether the policy involves bans, taxes/subsidies, or informational mandates, proposed government intervention generally works to create situations in which people will choose to reduce consumption of unhealthy foods and/or increase consumption of healthy foods. Policymakers must be careful about the wording and specific regulations in such bills, because policies that restrict choice (i.e., bans or heavy taxes) are harder to pass politically, both because of accusations of “hard paternalism” and because of backlash from the food industry [6,7].

In 2006, New York City implemented a ban on trans-fatty acids in all food establishments. Studies have linked ingestion of trans fats to increased risk of coronary heart disease [8]. In addition, much trans-fat consumption occurs in processed foods served in restaurants, and with over one-third of all American caloric intake occurring outside the home, the New York City Department of Health and Mental Hygiene (DOHMH) argued that targeting the city’s restaurants had high potential to reduce trans-fat consumption [9]. Anticipating attacks on the policy, DOHMH allowed the public and representatives from the food industry to comment on the proposed legislation; fortunately for the DOHMH, over 96% of public comments supported it. To assist the industry, the city had offered food establishments a chance to voluntarily phase out trans-fat usage in 2005, but despite educational and technical assistance, that plan did not reach the desired usage levels. The restaurant industry uses trans fats because they are stable when frying and extend the shelf life of food, so a ban on their usage would require food establishments to change how they store and prepare food [10]. However, the city allowed a grace period before fines were enforced, giving restaurants time to adapt their business practices. In this case, there was overwhelming public support for a ban on trans fats, and a less coercive policy restricting their usage had already been tried. Because businesses were given time to adjust, a May 2008 survey found 95% of restaurants in the city to be in compliance with the law, indicating that effective substitutes could be found [9].

Reproduced from [18]

New York’s approach is a good model for implementing a ban on a large scale. If lesser policies are ineffective at reducing consumption, bans are theoretically the most efficient way to reach the desired goal (although more research is needed to determine the effectiveness of this ban in improving overall health [9]). Bans are politically dangerous because they restrict choice and often require procedural change by industry, but if public support is high and the government allows businesses enough time to adapt, a ban can be a viable option for reducing consumption of unhealthy foods.


Another option that has been proposed involves imposing a sufficiently large tax on unhealthy items to encourage consumers to choose healthier options. For example, there is an established link between consumption of sugar-sweetened beverages and increased risk for obesity-related disease. The effect on children is even stronger: one study found that for each additional can or glass of sugar-sweetened beverage consumed daily, a child’s risk for obesity increases by 60% [11]. Sales taxes on soda are currently in place in forty states, but some have argued that these taxes are not large enough to reduce consumption [7]. Because sales taxes are based on the retail price of the product, they may even encourage higher consumption of generic brands of soft drinks and incentivize purchase of larger containers, which offer a cheaper per-ounce price [11]. However, evidence suggests that higher soft drink prices can reduce sales. A Yale University study found that for every 10% increase in price, consumption decreased by 7.8%. Another study found that as Coca-Cola prices increased by 12%, sales dropped by 14.6% [11]. Taxes are fairly popular when revenues are used to fund obesity-prevention programs: a 2008 poll of New York State residents found that 52% supported a soda tax, a figure that rose to 72% if the revenues were used to fund such programs [7]. Still, taxes have their own disadvantages. The most common anti-tax argument is that the strategy is regressive, having a disproportionately large effect on low-income individuals. A 2009 study in the United Kingdom analyzed the effects of taxes and subsidies on consumption of unhealthy foods under various targeted taxation scenarios; when a policy’s only approach was to levy a tax on unhealthy foods, it actually decreased consumption of fruits and vegetables for individuals in the lowest income quintile [12]. In addition, the beverage industry has shown fierce opposition to taxation, even threatening to relocate headquarters when states have considered implementing high sales taxes [7]. Finally, a tax placed on soda alone could cause individuals to substitute cheaper, but still unhealthy, alternatives in the long term. Still, taxes can be effective at reducing consumption of certain unhealthy foods while allowing citizens to retain their purchasing choices.
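The price-response figures above are, in effect, elasticity estimates. As a rough illustration (a hypothetical back-of-the-envelope calculation, not a model from the cited studies), an elasticity translates a proposed tax into a projected change in consumption:

```python
def projected_consumption_change(price_increase_pct, elasticity=-0.78):
    """Rough linear extrapolation of % change in consumption for a given
    % price increase. The default elasticity of -0.78 mirrors the Yale
    finding that a 10% price rise accompanied a 7.8% drop in consumption
    [11]; real demand curves are rarely this tidy."""
    return elasticity * price_increase_pct

# Under this assumption, a 20% soda tax passed fully through to shelf
# prices would project to roughly a 15.6% drop in consumption.
print(projected_consumption_change(20))  # -15.6
```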

Reproduced from [19]

As an alternative to making unhealthy choices less affordable, experts have suggested implementing subsidies as a way to make better choices more affordable. Currently, an inverse relationship exists between the energy density of food and its energy cost, meaning that dense grains and fats are lower-cost options than lean meats, fruits, or vegetables [13]. Conversely, a 2003 study found that price reductions of 10%, 25%, and 50% on low-fat vending machine snacks increased sales by 9%, 39%, and 93%, respectively [14]. A January 2009 report by the USDA estimated that a 10% subsidy of fruits and vegetables under the Supplemental Nutrition Assistance Program (SNAP) would increase consumption of fruits and vegetables by about 5%. While this estimate shows a positive relationship between subsidy-driven price reductions and fruit and vegetable consumption, the USDA concluded that demand for fruits and vegetables is fairly inelastic, meaning that overall responsiveness to price changes would be small. In addition, the subsidy would require significant government outlay, estimated at an additional $19 million annually [15]. However, by placing healthier alternatives within reach, subsidies have an advantage over taxes and bans in that they encourage healthier behavior through positive incentives.

The final strategy worth examining involves the use of nutritional information mandates. Since 1990, all packaged foods have been required to visibly feature nutrition information on the container [16]. Additionally, in 2006, New York City implemented a law requiring calorie counts to be posted on the menus of fast-food restaurants. These policies are meant to correct a perceived market failure in which producers have more nutrition information about a product than consumers do. It is argued that because of this information asymmetry, individuals are not able to make rational choices regarding healthy options.


Initial studies of caloric information on menu boards have been promising. A 2008 study in New York found that 84% of respondents were “surprised” by the caloric content of their food as listed on the menu board, and 73% said the information affected their decision-making. On a national scale, however, this approach has yielded mixed results [17]. It is prudent to remember that “providing calorie information will only be helpful if individuals know how to interpret it.” Even when given the caloric value, only 20% of participants in one survey were able to calculate the percentage of daily calories contributed by a single food item, and caloric values are less likely to be effective in areas of low literacy and numeracy [16]. Informational mandates allow for the maximum exercise of free choice, and they require little to no government spending. However, past attempts at nutrition labeling have yielded mixed results, and unless individuals can interpret the caloric information on the menu board or on the box, this strategy alone is unlikely to cause a noticeable reduction in unhealthy food consumption.

More research is still necessary to determine the efficacy of these basic strategies in curbing obesity rates in the U.S., especially considering the variation in regional results. Implemented alone, no single strategy is likely to be significantly effective. However, this writer sees potential in a revenue-neutral tax/subsidy combination, in which revenues garnered from taxing unhealthy foods would fund subsidies of healthier foods. Under this revenue-neutral scenario, the aforementioned U.K. study projected the largest increase in fruit and vegetable consumption, 11%; this strategy was also found to prevent the most deaths from heart disease [12]. One potentially useful informational mandate involves placing a small clock icon next to an item on a menu board reflecting the time it takes to burn off the calories it contains. This icon, while not conveying specific caloric information, would likely help individuals gain a better visual sense of how healthy or unhealthy a particular food is.
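To make the clock-icon proposal concrete, here is a minimal sketch of the conversion it implies; the burn rate of roughly 4 kcal per minute of brisk walking is an illustrative assumption, not a figure from this article:

```python
def minutes_to_burn(calories, burn_rate_kcal_per_min=4.0):
    """Minutes of activity needed to burn off a menu item.

    The default rate (~4 kcal/min, roughly brisk walking for an average
    adult) is an assumed placeholder; a deployed icon would need rates
    that vary by activity and body weight."""
    return calories / burn_rate_kcal_per_min

# A 540-kcal sandwich would carry a clock icon reading about 135 minutes.
print(round(minutes_to_burn(540)))  # 135
```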

Regardless of whether any of these strategies are implemented, it is clear that if obesity is left to continue unabated, the overall health of America will be threatened. There is a good chance that current trends will lead to a time in which our children will not live as long as we will. This article has attempted to highlight a few of the basic strategies government can use, but it is important to remember that government intervention in a free market is not always efficient. However, the government should not ignore the obesity issue either, because the potential costs of doing nothing are far larger than can be quantified in dollars and cents. The advancement of the human race is at stake, as “gains in health and longevity that have taken decades to achieve may be quickly reversed” if obesity is not curtailed [5]. Government should not see itself as a paternalistic state dictating what people can eat and how they can live; rather, it can be a partner in the process, using public policy to work with the American people to trim our collective waistline. Together, we as a nation can take responsibility for improving ourselves and take the first steps toward a healthier future for ourselves and our children.

Matthew Green is a junior studying Public Policy Studies at the University of Chicago.

References

1. World Health Organization. Controlling the Global Obesity Epidemic. WHO Topics: Nutrition [homepage on the internet]. Available from: http://www.who.int/nutrition/topics/obesity/en/.
2. Khan LK, Sobush K, Keener D, Goodman K, Lowry A, Kakietek J, Zaro S. Recommended Community Strategies and Measurements to Prevent Obesity in the United States. CDC Morbidity and Mortality Weekly Report. 2009 Jul 24, 58 (RR-7): 1-27.
3. Office of Management and Budget: Secure and Affordable Health Care for All Americans [homepage on the internet]. Available from: http://www.whitehouse.gov/omb/factsheet_key_health_care/.
4. Wang Y, Beydoun MA, Liang L, Caballero B, Kumanyika SK. Will All Americans Become Overweight or Obese? Estimating the Progression and Cost of the US Obesity Epidemic. Obesity. 2008 Oct, 16 (10): 2323-2330.
5. Olshansky SJ, Passaro DJ, Hershow RC, Layden J, Carnes BA, Brody J, et al. A Potential Decline in Life Expectancy in the United States in the 21st Century. New England Journal of Medicine. 2005 Mar 17, 352 (11): 1138-1145.
6. Holm S. Obesity Interventions and Ethics. Obesity Reviews. 2007, 8 (Suppl. 1): 207-210.
7. Brownell KD, Farley T, Willett WC, Popkin BM, Chaloupka FJ, Thompson JW, et al. The Public Health and Economic Benefits of Taxing Sugar-Sweetened Beverages. New England Journal of Medicine. 2010 Apr 1, 362: 1250.
8. Sun Q, Campos H, Hankinson SE, Manson JE, Stampfer MJ, Rexrode KM, et al. A Prospective Study of Trans Fatty Acids in Erythrocytes and Risk of Coronary Heart Disease. Circulation. 2007, 115: 1858-1865.
9. Tan ASL. A Case Study of the New York City Trans-Fat Story for International Application. Journal of Public Health Policy. 2009, 30 (1): 3-16.
10. Okie S. New York to Trans Fats: You’re Out! New England Journal of Medicine. 2007 May 17, 356 (20): 2017-2021.
11. Brownell KD, Frieden TR. Ounces of Prevention – The Public Policy Case for Taxes on Sugared Beverages. New England Journal of Medicine. 2009 Apr 30, 360 (18): 1805-1808.
12. Nnoaham KE, Sacks G, Rayner M, Mytton O, Gray A. Modelling Income Group Differences in the Health and Economic Impacts of Targeted Food Taxes and Subsidies. International Journal of Epidemiology. 2009 May 29: 1-10.
13. Drewnowski A, Darmon N. Food Choices and Diet Costs: an Economic Analysis. J Nutr. 2005, 135: 900-904.
14. French SA. Pricing Effects on Food Choices. J Nutr. 2003, 133: 841S-843S.
15. Dong D, Lin B-H. Fruit and Vegetable Consumption by Low-Income Americans: Would a Price Reduction Make a Difference? Economic Research Report No. 70, U.S. Department of Agriculture, Economic Research Service, January 2009: 1-17.
16. Blumenthal K, Volpp KG. Enhancing the Effectiveness of Food Labeling in Restaurants. Journal of the American Medical Association. 2010 Feb 10, 303 (6): 553-554.
17. Mello MM. New York City’s War on Fat. New England Journal of Medicine. 2009 May 7, 360 (19): 2015-2020.
18. http://www.cdc.gov/obesity/childhood/defining.html
19. http://www.cdc.gov/Features/VitalSigns/AdultObesity/




The Problem with Cookie Cutters: Educating Children with Autism Tae Yeon Kim

At school, in the library, at the playground – many children navigate smoothly through social interactions, chatting casually with a classmate or listening to family members at the dinner table. Children with autism, however, face challenges in communicating and deciphering social cues. Autism fundamentally affects how a child perceives the world. Thus, a sensitive understanding of the impact of autism on children is crucial for designing an effective system for their education.

The number of children diagnosed with autism has increased over the years, possibly due to improved screening procedures, heightened public awareness of autism, and other factors that are as yet unidentified [1]. In fact, the Centers for Disease Control and Prevention estimates that, on average, 1 in 110 children has autism [2]. In recent years, the federal government’s policies have pushed toward a more inclusive educational system [1]. The Individuals with Disabilities Education Act (IDEA) identifies autism as a disability and requires children with autism to be placed in the regular classroom unless “education in regular classes with the use of supplementary aids and services cannot be achieved satisfactorily” [3,4]. Several pressing questions emerge. What are the benefits of including children with autism in a regular classroom? What conditions facilitate the success of this educational model? When is integrating students with autism in the regular classroom appropriate?

A “markedly abnormal or impaired development in social interaction and communication and a markedly restricted repertoire of activities and interests” characterizes autism [5]. Because the classroom is not only an academic arena but also a social environment, considering the effects autism has on a child’s social relationships is critical for evaluating the effectiveness of inclusion, the integration of students with autism into regular classrooms.

In some cases, the practice of inclusion has succeeded. The schools of Madison, Wisconsin; Charlotte-Mecklenburg, North Carolina; and Clark County, Nevada are several examples of this success [6]. In The New York Times, Michael Winerip writes about the practice of inclusion in Madison, focusing on a teen with autism, Garner Moss [6]. When Moss was younger, he used to “collapse on the floor in despair if he had to change rooms” and usually had an aide in class; as a high school student, he has been involved in swimming and cross country and is expected to attend most classes without aides [6]. Such integration into the social network of peers and increased independence in the academic sphere mark the successful practice of inclusion in schools.

Research has shown that children with autism desire social interaction, and schools present one avenue for the development of social skills. In one study, children with autism demonstrated this desire through their actions toward Keepon, a robot capable of portraying simple emotions and social behavior [7]. For instance, one child demonstrated curiosity and pleasure in interacting with the robot, as well as a possible desire to share this pleasure with others, her mother and the therapist [7]. Another child displayed a shift from aggressive to empathic behavior toward the robot [7]. The researchers assert that the desire to interact with others is not readily “activated” in children with autism because they have difficulty “sifting out meaningful information” in normal human interactions; the simplified social behavior of the robot did not overwhelm these children and thus allowed social interactions to occur [7]. These findings suggest that children with autism would benefit from opportunities to develop social relationships but may need guidance to form them.

A qualitative analysis of the autobiographies of people with autism in another study revealed the challenges they face in social situations due to atypical ways of perceiving the world [8]. Expressing sentiments similar to others quoted in the study, Birger Sellin, a man with autism, wrote “how come I always have so many lonely hours I am lonely everywhere how come” [8]. These words reverberate with Sellin’s loneliness and his yearning to understand interactions with other people. The study recognizes the possibility of various perspectives on social relationships among people with autism but argues that educators have a responsibility to provide an environment that helps children with autism form social relationships [8]. What constitutes this environment is the next question.

Merely placing children with autism in a regular classroom may not produce meaningful social interactions. As they grow older, children with autism have greater difficulty forming social relationships because of the increasing “demands in both cognitive and physical skills” [9]. Studies show that group activities facilitate the formation of social relationships, particularly among older children. Sports, for instance, provide group identity [9]. Schools that practice inclusion must consider the significance of non-academic activities for the social involvement of children with autism.

Furthermore, raising awareness among students through open discussions about autism contributes to the successful inclusion of children with autism in the classroom. A study has shown that in classrooms that encourage such awareness, high-functioning children with autism experience positive social relationships with their classmates [10]. In some cases, “classmates worked collectively to incorporate the HFA child [high-functioning child with autism] into academic or recreational activities” [10]. Conversely, classmates who were not made aware of autism and of the child’s diagnosis tended to ignore or reject the child [10]. Moreover, high-functioning children with autism were aware of their peers’ perceptions and felt hurt by these negative social experiences [10], suggesting that efforts to encourage awareness about autism and to present the child with autism as an individual are key elements of an effective educational plan.


Social relationships require trust and understanding, built over time and with care. Relationships with children with autism are no exception, and greater awareness of autism in the classroom fosters these relationships. However, the practice of inclusion must not morph into an effort to force children with autism into the mold of a ‘normal’ child. When normalization becomes the motive of inclusion, inclusion fails to respect children with autism as human beings.

Children with autism, like all other children, are individuals with diverse abilities and needs. Because of this diversity, some may not benefit from education in a regular classroom. Many people with autism desire a predictable environment and demonstrate sensitivity to sound and touch [8]. As a result, people with autism find the influx of sensory information during social interactions difficult to process [8]. The regular classroom poses greater challenges for children with autism than an isolated social interaction—more people, more sights, and more sounds are concentrated in the same space. Considering the individual child’s ability to adapt to the regular school environment is crucial for placing the child into a suitable educational setting.

The school’s ability to accommodate the individual child’s needs is another consideration. A survey of parents of children with autism reveals concerns about educators’ ability to understand and address the needs of the child [11]. These concerns are valid, for some teachers and aides may not have adequate training or skill to meet the needs of children with autism in a regular classroom [12]. Again, awareness of autism in the school bears significantly on the educational experience of children with autism, suggesting that equipping educators with the knowledge to address the challenges autism brings to the classroom is one element of an effective educational system. However, some children with more severe difficulties with language and behavior may benefit from a specialized education, such as in a day school or in a separate room, that a regular classroom in the public school system cannot provide [12]. It is the educator’s duty to help students realize their potential. The focus of educating children with autism is not specifically on including them in the classroom, but on fulfilling their needs and cultivating their strengths.

The federal government has shown support for molding programs that consider each child’s individual needs and abilities. A provision of the federal government’s IDEA is the individualized education program team (IEP Team), which includes parents, various teachers, and specialists [13]. The IEP Team develops the child’s Individualized Education Program (IEP), a written statement that includes the child’s current level of academic and functional ability, “measurable annual goals,” methods of measuring progress toward these goals, any “supplementary aids and services based on peer-reviewed research” for the child, and any accommodations necessary to evaluate the child on national and district assessments [14]. Parents have voiced dissatisfaction with the education of their child with autism when schools failed to collaborate and communicate with them; conversely, such collaboration and communication were “primary reasons for satisfaction” with their child’s education [11]. Designing an effective IEP depends on the collective effort of parents, educators, and professionals. An effective IEP, however, does not necessitate the integration of children with autism into the regular classroom but focuses on the specific needs and strengths of each child. In fact, in the Hartmann v. Loudoun case, the court ruled that “IDEA encourages mainstreaming [or inclusion], but only to the extent that it does not prevent a child from receiving educational benefit” [15]. Means of measuring the effectiveness of IEPs are under active research. Researchers have tested an IEP evaluation tool that uses a scale of 0 to 2 (with 0 meaning “no/not at all,” 1 meaning “somewhat,” and 2 meaning “yes/clearly evident”) to measure the effectiveness of IEPs in fulfilling social, communication, and learning objectives [16]; a rough sketch of such a tally follows below. Further research in this area would assist the development of appropriate educational programs for children with autism.

Approaching the social and academic needs of children with autism requires respecting them as individuals. Determining the best interest of the child must include a consideration of the impact autism has on the child’s perspective of the world and of the child’s particular abilities and challenges. A single philosophy cannot stamp out a suitable educational system for everyone—children are not cookies from the same cookie cutter. A flexible, innovative mindset is necessary to craft individual programs that suit each child.
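As a sketch of how such a rubric might be tallied (only the 0-2 scale and the three objective domains come from the study [16]; the scoring function and sample ratings are hypothetical):

```python
# Rubric scale from the IEP evaluation study [16]:
# 0 = "no/not at all", 1 = "somewhat", 2 = "yes/clearly evident".
def iep_quality_score(ratings):
    """Average the 0-2 rubric ratings across objectives."""
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings for one IEP across the three measured domains.
sample = {"social": 2, "communication": 1, "learning": 1}
print(f"{iep_quality_score(sample):.2f} out of 2.00")  # 1.33 out of 2.00
```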

References

1. Boyd BA, Shaw E. Autism in the classroom: a group of students changing in population and presentation. Preventing School Failure. 2010;54(4):211-219.
2. Centers for Disease Control and Prevention. Autism Spectrum Disorders (ASDs) [homepage on the Internet]. Atlanta, GA: Centers for Disease Control and Prevention; [updated 2010 June 24; cited 2011 February 10]. Available from: http://www.cdc.gov/ncbddd/autism/index.html
3. Individuals with Disabilities Education Act: Part 300 / A / 300.8 [statute on internet]. 2004 [cited 2011 January 1]. Available from: http://idea.ed.gov/explore/view/p/,root,regs,300,
4. Individuals with Disabilities Education Act: TITLE I / B / 612 / a / 5 [statute on internet]. 2004 [cited 2011 January 1]. Available from: http://idea.ed.gov/explore/view/p/,root,statute,I,B,612,a,5,
5. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, D.C.: American Psychiatric Association; 1994.
6. Winerip M. A school district that takes the isolation out of autism. The New York Times [Internet]. 2010 August 1 [cited 2010 December 28]. Available from: http://www.nytimes.com/2010/08/02/education/02winerip.html
7. Kozima H, Nakagawa C, Yasuda Y. Children-robot interaction: a pilot study in autism therapy. Prog Brain Res. 2009;164:385-400.
8. Causton-Theoharis J, Ashby C, Cosier M. Islands of loneliness: exploring social interaction through the autobiographies of individuals with autism. Intellect Dev Disabil. 2009 Apr;47(2):84-96.
9. Rotheram-Fuller E, Kasari C, Chamberlain B, Locke J. Social involvement of children with autism spectrum disorders in elementary school classrooms. J Child Psychol Psychiatry. 2010 Nov;51(11):1227-34.
10. Ochs E, Kremer-Sadlik K, Solomon O, Sirota KG. Inclusion as a social practice: views of children with autism. Social Development. 2001 Aug;10(3):399-419.
11. Starr EM, Foy JB. In the parents’ voices: the education of children with autism spectrum disorders. Remedial and Special Education [Internet]. [cited 2011 February 10]. Available from: http://rse.sagepub.com/content/early/2010/09/24/0741932510383161
12. Harchik A. Inclusion [Internet]. National Autism Center; [cited 2010 December 28]. Available from: http://www.nationalautismcenter.org/learning/inclusion.php
13. Individuals with Disabilities Education Act: Part 300 / D / 300.321 / a [statute on internet]. 2004 [cited 2011 January 1]. Available from: http://idea.ed.gov/explore/view/p/,root,regs,300,D,300%252E321,a,
14. Topic: Individualized Education Program (IEP) [Internet]. U.S. Department of Education; 2006 Oct. [cited 2011 January 1]. Available from: http://idea.ed.gov/explore/view/p/,root,dynamic,TopicalBrief,10,
15. Hartmann v. Loudoun County Board of Education. (1997) 118 F.3d 996. Available from: http://openjurist.org/
16. Ruble LA, McGrew J, Dalrymple N, Jung LA. Examining the quality of IEPs for young children with autism. J Autism Dev Disord. 2010 Dec;40(12):1459-70.


Tae Yeon Kim is a sophomore studying Biological Sciences at the University of Chicago.




Tackling the Politics of Football Brain Injuries Sithara Kodali

Any football fan watching a Sunday afternoon game will most likely witness at least one slow-motion replay of a running back lowering his head as he faces off against a safety, or a defensive end’s helmet colliding with another player’s during a tackle. Fans of professional football bear witness to repeated head trauma during every game. Later in their lives, the players confront the effects of these hits: health problems such as vertigo and dementia. While some rely on a support network, others struggle. The stories can be tragic, from the suicide of 44-year-old Andre Waters to 50-year-old Mike Webster, a nine-time Pro Bowler and Hall of Famer whose mental problems led to his homelessness and premature death. Recently, however, academic research has started to confirm these anecdotes, uncovering potential links between repetitive head trauma in football and a serious health condition called chronic traumatic encephalopathy (CTE).

Originally associated with boxers, CTE is a type of neurological deterioration caused by repetitive head trauma. For football players, that deterioration often stems from the multiple concussions players suffer, especially in positions where helmet contact is common. The new research has attracted public interest; in the past year alone, news coverage of the issue in popular media has increased: the New York Times frequently publishes articles related to the effects of head injuries in sports, and in November 2010, Sports Illustrated published a cover story on concussions. Previously a minor issue addressed within the National Football League and its Players’ Association, the long-term effects of repetitive head injuries have become a contentious issue for sports fans, commentators, the U.S. government, and the National Football League itself.

For years, the NFL commissioned its own studies that denied the effects of head trauma. After pressure from two Congressional hearings in the past year, the NFL has tacitly changed its stance. During the 2010-2011 season, the league began changing its internal policies toward compensation and the treatment of ex-NFL players suffering from CTE. The NFL’s new policies may also reflect the fact that its base of football fans is slowly coming to terms with the mounting evidence that professional football causes permanent physical harm, and is considering what it means to support a sport with those consequences.

In the meantime, CTE researchers are still collecting the evidence needed to build a conclusive link between the disease and football. As of July 2009, only 51 cases of CTE had been neuropathologically confirmed, a number constrained by the fact that CTE can only be diagnosed post-mortem. Ninety percent of those diagnosed were former athletes, of whom 11% were former football players [1]. From these confirmed cases, researchers at Boston University have demonstrated that CTE is a distinct neurodegenerative disease, although it shares many features of other degenerative disorders such as Alzheimer’s disease and Parkinson’s. Specifically, CTE is a tauopathy, an affliction recognizable by high levels of tau proteins and loss of volume in the brains of those affected. CTE is characterized by a progressive deterioration in mental capabilities, beginning with mild memory loss and lessened concentration and progressing through initial symptoms of Parkinson’s disease to full-blown dementia and speech abnormalities [1].

Reproduced from [12]




Most ex-football players exhibiting similar symptoms in the past have been diagnosed with either Alzheimer’s disease or Parkinson’s, complicating efforts to establish CTE as a distinct disease suffered mainly by athletes in professional contact sports like boxing and football. A study published in September 2010 presented the first pathological evidence that repetitive head trauma in sports is associated with protein markers and other pathological changes that contribute to symptoms like memory loss and decreased brain function [2]. These symptoms, in turn, often lead to a mistaken diagnosis of ALS, or Lou Gehrig’s disease. Studies on boxing demonstrate that risk factors for CTE include the length of a boxing career, sparring exposure, and the number of bouts a boxer participates in: all risk factors that can easily be extrapolated to football careers [3]. While repetitive head trauma is one indicator of later functional problems, one academic paper has suggested a genetic component as well. A later study grouped active professional athletes by age and then categorized subgroups by APOE-ε4 carrier status; the APOE-ε4 genotype has previously been studied for its links to Alzheimer’s disease and atherosclerosis. Older players who possessed the APOE-ε4 genotype exhibited significantly poorer cognitive performance than both older and younger players without it, suggesting that sustained head trauma may have a greater impact on those with genetic vulnerabilities [4].

Research into the link between CTE and football careers is very recent, mainly because prior studies were primarily funded by the NFL rather than by credible independent sources. But what rare independent research exists has uncovered interesting correlations. A study of college football players showed that a prior history of concussion was associated with lower cognitive function [5]. Another survey, of 1,090 retired professional football players, reported that symptoms consistent with CTE correlated with the concussions players reported during their careers [6]. While no study has conclusively determined that CTE is caused by repetitive head trauma and concussions, the correlative evidence has been accumulating over the past few years, slowed by the under-reporting of concussions in sports and the difficulty of diagnosing CTE in living subjects.

The few studies on football players’ health published over the past three years have generated an enormous amount of both political and public interest. Press coverage has spread from general-interest papers like The New York Times to sports outlets like ESPN and even men’s magazines like Esquire. Even magazines with a primary audience of football fans, like Sports Illustrated, have run cover stories questioning “the hits that are changing football” [7]. On the political front, the House of Representatives’ Judiciary Committee has to date held two informational hearings on the legal issues relating to football head injuries. Despite the neutral objectives of these hearings, the NFL’s disability board structure and stance toward players’ health issues came under heavy criticism, both from congressional representatives and from those called to testify. Kyle Turley, a former NFL player, alleged that “the negligence of the NFL medical staff is fairly universal, [and] its effects are perpetuated and magnified by the NFL disability committees, the protection they enjoy under the collective bargaining agreement comprised of the owners and players’ union representatives which continually deny retired players disability claims wrongfully” [8]. In the same hearing, Dr. Ann McKee, co-director of Boston University’s Center for the Study of Traumatic Encephalopathy, stated that the elevated tau levels and other indications of degeneration “are dramatically not normal – there is no way these pathological changes represent a variation in normal that we find under a bell shaped curve” [8].

Reproduced from [13]




She went on to compare the problem with the history of cigarette smoking and its health consequences, urging the panel to take preventive action with or without the support of the NFL.

Since the House hearings, the NFL appears to have addressed some of the contentious issues surrounding football players’ health, although its long-term commitment is still very much in flux. The NFL had already established the 88 Plan in 2007, a program that awards retirees with dementia an $88,000 yearly stipend [8]. After the hearings, NFL Commissioner Roger Goodell reorganized the league’s disability and head trauma research board, dismissing doctors with no specialization in the area, and released new informational literature for players that clearly states the risks they face from head injuries. More recently, in October 2010, the NFL expanded a rule that prohibits launching hits against receivers who have not had sufficient time to defend themselves; the new penalties punish such hits to the head with fines and possible suspensions [9].

But with the medical link between head trauma and CTE becoming more established, the NFL faces an uncertain future. What is its obligation to former players as they retire and face health issues? While the NFL has taken steps to dismantle its previously hostile policies toward dementia and mental health issues, a future framework to address players’ mental health problems remains unclear. A player lockout, in which players’ pay and futures would be indefinitely frozen, is a looming threat, pending the negotiation of a collective-bargaining agreement between team owners and the Players’ Association by the end of the league year in March 2011. The potential lockout throws every aspect of the player-team relationship up in the air, including teams’ responsibilities toward concussed players and ex-players with health problems. Commissioner Goodell has promised that the impending lockout would not affect relations with the Players’ Association on health issues or payments through the 88 Plan [8]. At the same time, the NFL Players’ Association is responsible for representing players on a wide range of issues, and some players allege that the Association has compromised on granting retirees full health care compensation in order to gain ground on other player issues.

Workers’ compensation claims have been the main recourse for players seeking compensation for football-related injuries. California law in particular stipulates that professional football players who have played in California are eligible to file a claim there. Because any professional player who ever played at least one game in California is eligible, most players seeking NFL compensation file claims in that state. In March 2010, Dr. Eleanor Perfetto filed the first dementia-related claim, arguing that her husband Ralph Wenzel’s dementia stems directly from his seven years in the NFL. Workers’ compensation laws require documentation that the injury sustained was derived from playing football, and if the state rules in Dr. Perfetto’s favor, other players will likely seek compensation through similar legal avenues [10].

While the House of Representatives has held two hearings on the issues surrounding head injuries in football, it has so far been unwilling to pass legislation specifically relating to the NFL. Legislation is pending, however, and expected to pass, on regulations for concussion protection in youth sports, which will affect a much larger pool of athletes at risk, and one with a lower profile than professional players. Meanwhile, the NFL, as a private business (in fact, a Congress-sanctioned monopoly), is subject to only two venues of influence on its responsibility for retirees: the law and public opinion. Enforced legislation seems unlikely, but the visibility of football injuries has had an impact within the sport’s official organization. As sports reporters continue to ask questions about the serious, long-term health effects of a sport meant for entertainment, fans become more aware that every Sunday they are perhaps witnessing “brain injury, involving potentially grave consequences, in real time” [11]. Perhaps more significantly, the news about brain injuries hits close to home for many fans with children; Representative John Conyers opened the House hearing on football injuries with a personal note that his 13-year-old son was playing a football game that very afternoon [8]. The decisions made at the professional level extend beyond the 10,000 retirees and 1,600 active players of the NFL to the millions in youth sports across the country.

More than anything else, it could be the growing uneasiness of its fans that pushes the NFL to move from instituting more penalties during the game to creating a comprehensive framework for players’ health once they leave the game. In the meantime, research continues to accumulate, moving closer to a definitive answer, as football players continue to run onto the field every Sunday.

References

1. McKee A, et al. Chronic Traumatic Encephalopathy in Athletes: Progressive Tauopathy After Repetitive Head Injury. Journal of Neuropathology and Experimental Neurology 2009; 68(7): 709-735.
2. McKee A, et al. TDP-43 Proteinopathy and Motor Neuron Disease in Chronic Traumatic Encephalopathy. Journal of Neuropathology and Experimental Neurology 2010; 69(9): 918-929.
3. Rabadi MH, et al. The Cumulative Effect of Repetitive Concussion in Sports. Clinical Journal of Sport Medicine 2001; 11(3): 194-198.
4. Kutner KC, et al. Lower cognitive performance of older football players possessing apolipoprotein E ε4. Neurosurgery 2000; 47(3): 651-658.
5. Collins MW, et al. Relationship between concussion and neuropsychological performance in college football players. Journal of the American Medical Association 1999; 282(10): 964-970.
6. Jordan BD, et al. Association between Recurrent Concussion and Late-Life Cognitive Impairment in Retired Professional Football Players. Neurosurgery 2005; 57(4): 719-726.
7. King, Peter. Concussions: The hits that are changing football. Sports Illustrated, November 1, 2010.
8. House of Representatives Committee on the Judiciary hearing, Legal Issues Relating to Football Head Injuries (Parts I and II). Serial No. 111-82, 425.
9. NFL Total Access. Competition committee speaks. http://www.nfl.com/videos/nflnetwork-total-access/09000d5d81b7cacb/NFL-s-stance-on-helmet-to-helmet-hits
10. Schwarz, Alan. Case Will Test N.F.L. Teams’ Liability in Dementia. New York Times, April 5, 2010: A1.
11. Sokolove, Michael. Should You Watch Football? New York Times, October 23, 2010: WK1.
12. http://www.nsf.gov/news/special_reports/football/index.jsp
13. http://www.manchesternh.gov/website/Portals/2/Departments/schools/west/images/football/Football%202.JPG


Sithara Kodali is a senior studying Economics at the University of Chicago.




Humanizing Healthcare Technology Kelly Regan

Approximately 180,000 people die each year in the United States as a result of medical errors [1]. Recently, electronic health record (EHR) technology has been implemented in hospitals across the U.S. to reduce medical errors through unit dosing methods and computerized physician order entry. The U.S. has invested over $20 billion in EHRs meeting federal “meaningful use” standards in order to reduce both costs and errors, provide clinical support tools, and develop a future in which a national network will connect every citizen’s EHR. Prior to this large investment, however, no evidence suggested that EHR technology reduces medical errors or increases overall safety [2,3]. Himmelstein and Woolhandler note the “disturbing array of unproven assumptions, wishful thinking, and special effects” that health information technology (HIT) vendors have deployed since the 1960s. Today, EHR companies continue to stir up hope and hype, despite wide gaps in the evidence supporting the technology’s effectiveness [2,4,20]. A 2011 study of systematic reviews on EHRs and other eHealth technologies states, “A strong evidence base is characterized by quantity, quality, and consistency. Unfortunately, we found that the eHealth evidence base falls short in all of these respects” [4]. We may learn from the grand failures to set up interoperable EHR systems in Santa Barbara in 2002 and in the UK in 2008, yet the unscientific approach of “meaningful use” accreditation for U.S. hospitals will only hinder our efforts to understand medical error.

Reproduced from [21]


Led by Catherine DesRoches, a research group at Massachusetts General Hospital analyzed an exhaustive compilation of EHR data in the U.S. [5]. Overall, EHR adoption, measured against various metrics of quality and efficiency, showed neither statistically nor clinically significant relationships with error reduction.

To bridge the gap between the implementation of EHRs and positive results, the Health Information Technology for Economic and Clinical Health (HITECH) Act of the American Recovery and Reinvestment Act (ARRA) specified criteria for Stage 1 “meaningful use” of EHR technology. Medicare and Medicaid EHR incentive programs ($17.7 billion and $12.4 billion, respectively, paid in 2011 and 2012) will provide a financial reward for the “meaningful use” of qualified, certified EHRs to achieve goals of health and efficiency. Without federal funding, many hospitals could not afford expensive EHR software, and they must gain Stage 1 “meaningful use” status to qualify for reimbursement. By the end of the decade, all hospitals will be required to implement EHR systems or face major fines. HITECH’s original proposal included 23 objectives for hospitals; because this all-or-nothing test was too demanding, “meaningful use” was reduced to 15 core objectives, including entering patients’ basic demographic data, active medications, and allergies [6]. In August 2010, a survey conducted by the College of Healthcare Information Management Executives found that 90% of the 152 respondents expected to qualify for Stage 1 “meaningful use” by 2012 [7]. Yet an October 2010 study showed that only 2% of U.S. hospitals reported EHRs that would meet Stage 1 “meaningful use” [8]. More disturbing than this disparity, however, is how the core objectives are measured. Presently, to gauge the effectiveness of EHR technology, hospitals and clinics run status reports provided by EHR software and organize their own data to meet the provisions of “meaningful use”.




Reproduced from [22]

This is not an accurate assessment of error. First, strikingly low thresholds fulfill the core objectives (e.g., 50% of patient demographics and vitals information recorded, 40% of prescriptions transmitted electronically, 30% of medication orders entered by computer) [6]. Such a low bar for Stage 1 EHR data may be uninformative for policy decisions in later phases of “meaningful use” legislation. Second, while the reports are objective, healthcare providers may not actually understand which aspects of EHR technology reduce error and which do not. We are still in the experimental phase of evaluating “meaningful use,” but we are in danger of performing our initial, and perhaps most important, trials in the wrong way. In Lessons from the War on Cancer, Cook describes the general pattern of efforts aimed at reducing medical hazards: “The experiments – all the efforts to date have been experiments – conducted with technology and organizations have not provided much insight into patient safety itself. These efforts have been applications rather than explorations. Applications do not provide much insight into basic mechanisms. When they work, we are not sure why; when they fail, we cannot tell how” [9]. In fact, another research group determined that EHR evaluations to date have largely emphasized “simplistic approaches, which have provided little insight into why a particular outcome has occurred,” resulting in an evidence base that offers little support for future implementation efforts [4]. By these standards, the establishment of current “meaningful use” definitions for EHRs is following the same trajectory.
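To illustrate how mechanical the Stage 1 test described above is, consider a minimal sketch; the threshold percentages come from the core objectives cited above [6], while the field names and checking logic are hypothetical, not from any certification body:

```python
# Three of the Stage 1 "meaningful use" core objectives reduce to
# simple percentage cutoffs [6]; the field names here are made up.
STAGE1_THRESHOLDS = {
    "demographics_recorded_pct": 50,    # patient demographics and vitals
    "electronic_prescriptions_pct": 40,
    "cpoe_medication_orders_pct": 30,   # computerized physician order entry
}

def meets_stage1(report):
    """Check a hospital's self-reported percentages against the cutoffs."""
    return all(report.get(key, 0) >= cutoff
               for key, cutoff in STAGE1_THRESHOLDS.items())

print(meets_stage1({"demographics_recorded_pct": 62,
                    "electronic_prescriptions_pct": 45,
                    "cpoe_medication_orders_pct": 31}))  # True
```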


Dr. Mark Nunnally warns against a restrictive vision of pure success in healthcare technology: “It is empowering to predict results, but not a loss when things do not work out as predicted…Although these factors are dominant in the application of a discovery to the clinical setting, the study of the way individual elements couple and integrate is not a large part of the history of medical science. Yet the future of medical science depends on the awareness of such integration” [10]. Blinded by EHR advocates’ promise of error reduction, hospitals are not directed to recognize potential failures. While some improvements may fulfill “meaningful use,” implementing EHR technology alone does not teach the fundamentals of error in healthcare. In fact, Nunnally believes that performance variability should not be regarded as opposed to the status quo but as positively linked to creativity. He also discredits a “checklist mentality”: “Such techniques have their place, but any benefit will result from more than the physical act of checking a box. Checklists, compliance mandates, and quality metrics constrain practice, sometimes for the better, but also potentially for the worse” [10]. Practitioners often succeed in coordinating healthcare systems to guard against failure and in directing recoveries because of their adaptability [11]. Whatever is gained through energy spent on the “checklist mentality” may be lost in the manipulation of system operations to calibrate tolerable performance boundaries. Hospitals should heed the new errors introduced through this potential “distraction effect,” and practitioners should understand this process when trained to comply with the technology.

Assigning blame for medical error becomes difficult amidst EHR technology. EHR companies and the Regional Extension Centers organized by the Department of Health and Human Services (DHHS) offer guidance to support hospitals’ “meaningful use” decisions, but government mandates and the requirements for financial reimbursement conversely leave hospitals carrying the entire burden. Dr. Richard Cook has observed that “operators more often than not foot the blame and criticism during times of failure, while IT developers reap the glory of any successes” [12]. Why are healthcare providers the scapegoats in the implementation of healthcare technology? Cook suggests that “the idea of human error is so attractive in part because the alternative is, from a business perspective, so unattractive. Scientifically grounded study leads to the conclusion that accidents are not the abnormal operation of broken systems but the normal operations of systems under economic, social and political pressure to produce more with less” [11]. Cook further states that “our error rate is actually quite low” [12]. Dr. Lucian Leape, professor at the Harvard School of Public Health and Chair of the National Patient Safety Foundation, cites an intensive care unit study suggesting that the medical community operates at approximately 99% efficiency [1].

Distinguishing error from safety, Cook advocates research on the latter to demystify instances of “human error.” He notes that the Danish engineer and human performance researcher Jens Rasmussen observed long ago that “human error” cannot be properly analyzed: because the judgments needed to assess when an error exists are biased, Rasmussen concludes that studies of medical error are really studies of how judgments are made.



Cook mentions that reducing human error through HIT applications is regarded as “low-hanging fruit”: because we have not conducted any proper research, we have yet to taste its full impact [9]. In several disease contexts, progress in EHR error-reduction has occurred [13-15]. The promise of a dramatic decrease in error, however, is limited. Several forms of malpractice lie outside the scope of EHR technology, including the control of hospital-acquired infections and wrong-site surgeries due to verbal miscommunication [16,17]. Human reasoning may also represent a significant source of error outside the reach of EHRs. Jerome Groopman, author of How Doctors Think, investigated the cognitive processes of doctors. He cites evidence that about 80% of medical mistakes result from predictable mental traps inherent to all human beings, while only 20% are due to technical mishaps, such as mixed-up test results or incomprehensible paper records [18]. Greenhalgh and colleagues, moreover, indicate that primary care work has been made less efficient by EHR systems [20]. Furthermore, Cook and the psychologist David Woods argue that healthcare practitioners are often the victims of hindsight bias in the evaluation of medical error.


While EHR technology boasts transparency and immediacy of data that tackle complex system failure, the resulting error reports identify only the "what" – the mistakes on the part of practitioners – and ignore the "how" – the interconnected complex systems of care. Cook's research on the intersection between the "what" and the "how" emphasizes the factors that provide "capacity resilience" to systems of care: "Instead of viewing them as threats to safety, we recognize practitioners as part of the resilience that makes it possible for so many people to benefit from the complex, hurried and often conflicted conditions that surround health care" [11]. Cook also stresses that superficially innocuous failures do not cause catastrophe on their own; only their combination produces a systemic accident. These failures change because of evolving technology, work organization, and efforts to eradicate failures [20]. He argues that complex systems are prone to "hidden" catastrophes; thus, efforts to identify "root causes," as in EHR unit measures, are illusory.

While academic voices in the patient safety discourse may be skeptical, they are not cynical. Nunnally states, "Serendipity is the upside of things not going as planned and an insight into how complex systems work. It will not be clear where we end up until we get there" [10]. Yet due to a lack of scientific integrity in our national experiment with EHR technology, our current over-reliance on healthcare technology may create a false sense of error resistance. To understand the full impact of EHR data and to answer future questions about error in this digitized era of healthcare, we must attend to the underlying themes: how complex-system errors form, and how psychological influences and practitioner resilience shape healthcare. In addition to promoting better patient data exchange and eliminating overhead costs and paper waste, EHR technology may assist practitioners in reducing some medical errors. Unfortunately, as matters stand, we are positioned to learn very little.

Kelly Regan is a senior studying Biological Sciences, specializing in Immunology, at the University of Chicago.

References

1. Leape L. Error in Medicine. JAMA. 1994; 272(23):1851-1857.
2. Himmelstein D, Woolhandler S. Hope and Hype: Predicting The Impact Of Electronic Medical Records. Health Aff. 2005; 24(5):1121-1123.
3. Sidorov J. It Ain't Necessarily So: The Electronic Health Record And The Unlikely Prospect Of Reducing Health Care Costs. Health Aff. 2006; 25(4):1079-1085.
4. Black AD, Car J, Pagliari C, Anandan C, Cresswell K, et al. The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview. PLoS Med. 2011; 8(1): e1000387. doi:10.1371/journal.pmed.1000387 (accessed 5 February 2011).
5. DesRoches C, Campbell E, Vogeli C, Zheng J, Rao S, Shields A, Donelan K, Rosenbaum S, Bristol S, Jha A. Electronic Health Records' Limited Successes Suggest More Targeted Uses. Health Aff. 2010; 29(4):639-646.
6. Blumenthal D, Tavenner M. The "Meaningful Use" Regulation for Electronic Health Records. N Engl J Med. 2010; 363(6):501-504.
7. College of Healthcare Information Management Executives. CHIME Survey Finds Healthcare CIOs Cautiously Optimistic about Receiving EHR Incentive Funding. 10 Sept 2010. http://www.ciochime.org/chime/pressreleases/pr9_10_2010_11_18_19.asp (accessed 15 January 2011).
8. Jha A, DesRoches C, Kralovec P, Maulik J. A Progress Report on Electronic Health Records in U.S. Hospitals. Health Aff. 2010; 29:1951-1957.
9. Cook R. Lessons from the War on Cancer. J Patient Saf. 2005; 1(1):7-8.
10. Nunnally M. An alternative point of view: Getting by with less: What's wrong with perfection? Crit Care Med. 2010; 38(11):2247-2249.
11. Cook R. To err is not always human. University of Chicago Medicine on the Midway. Winter Issue, 2006. http://www.ctlab.org/publications.cfm (accessed 15 January 2011).
12. Cook R. Interviewed by: Kelly Regan. 10 January 2011.
13. Jha A, DesRoches C, Shields A, Miralles P, Zheng J, Rosenbaum S, Campbell E. Evidence of an Emerging Digital Divide Among Hospitals That Care for the Poor. Health Aff. 2009; 28(6):1160-1170.
14. Tang W, Tong W, Francis G, Harris C, Young J. Evaluation and Long-Term Prognosis of New-Onset, Transient, and Persistent Anemia in Ambulatory Patients With Chronic Heart Failure. J Am Coll Cardiol. 2008; 51:569-576.
15. Hansen M, Gunn P, Kaelber D. Underdiagnosis of Hypertension in Children and Adolescents. JAMA. 2007; 298(8):874-879.
16. Landrigan C, Parry G, Bones C, Hackbarth A, Goldmann D, Sharek P. Temporal Trends in Rates of Patient Harm Resulting from Medical Care. N Engl J Med. 2010; 363:2124-2134.
17. Stahel P, Sabel A, Victoroff M, Varnell J, Lembitz A, Boyle D, Clarke T, Smith W, Mehler P. Analysis of a Prospective Database of Physician Self-reported Occurrences. Arch Surg. 2010; 145(10):978-984.
18. Groopman J. How Doctors Think. New York: Mariner Books; 2008.
19. Greenhalgh T, Potts H, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q. 2009 Dec; 87(4):729-788.
20. Cook R. How Complex Systems Fail. In: Allspaw J, editor. Web Operations. O'Reilly Media; 2010. Ch. 7, p. 107-116.



UCHICAGO

The Weft of Unseen Garments: Epigenetic Considerations of Evolution and Disease
Navtej Singh


Above the code of life lies another equally complex and important mechanism: the epigenome. This system of chemical markers and modifications allows DNA to be read varyingly, ultimately affecting gene expression, protein synthesis, and various biological processes; simply put, epigenetics induces chemical changes affecting the secondary and tertiary structure of DNA which manipulate gene expression without changing the DNA sequence itself [1]. Science has long considered the epigenome to be an interlocutor between the external conditions of the environment and the internal genetic framework of an organism, though the idea of such direct phenotypic response (that is, a response in expressed traits) remains contentious among most purists [2]. Greater knowledge of the pathways of epigenetic response to environmental conditions, however, may reveal the gravity and sway they hold over our wellbeing and evolution.

Epigenetic theory posits the ability of certain chemical mechanisms to manipulate gene expression through the "activation" or "repression" of DNA transcription by methylation, histone modifications such as those that control chromatin structure, and other chemical changes which varyingly direct the information the genome carries. Differentiation of stem cells into specific cell types, physiological responses during pregnancy, and various other cellular and systemic processes respond to epigenetic signals that allow different genes to be regulated differently under appropriate conditions. This paper aims to address the considerable influence of environmental factors on the epigenome and how these forces may ultimately dictate changes among populations. Also, the implications of epigenetically manipulating DNA to express or repress various disease-related genetic pathways must be considered, especially compared to direct genetic engineering, which theoretically remains more drastic and long-lasting in its effect on the genome.

The epigenome plays an important role in human genetics from birth; classic examples cite the epigenetic changes inherited by a child due to the chemical environment of the mother's womb (a recent study showed fetuses receiving poor nutrition in the womb became "genetically primed to be born into an environment lacking proper nutrition" [3]), imprinting via epigenetic mechanisms during gametogenesis, and transgenerational inheritance of environmental stress such as acute changes in nutrition [4]. During adulthood, the epigenome responds to environmental factors over time, although the gravity of these influences and the process by which they affect the epigenome become less lucid. The effects of the slow yet steady accumulation of these "stress signals" and other experiential factors are evidenced by a study of identical twins showing that although siblings initially exhibit almost identical epigenetic traits, aging twins show the effects of exposure to a lifetime of varying environmental stimuli:

"[It was] found that, although twins are epigenetically indistinguishable during the early years of life, older monozygous twins exhibited remarkable differences in their overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation, affecting their gene-expression portrait" [5]. Due to this experiential malleability, epigenetics has elucidated its potential to shape both an organism and its offspring, as they inherit both the genome and epigenome from their parents; because of this lingering chemical "memory", it may be contended that the inherited diversity and amenable nature of the epigenome has some evolutionary effect or basis comparable in some degree to the diversity of the genome itself [6].
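The twin result is, at heart, a claim about stochastic drift: two epigenomes that start out identical accumulate independent, environment-driven marks over a lifetime. A toy Python simulation captures the intuition; the number of sites, the per-year flip probability, and the timescale below are invented for illustration and are not taken from the study:

```python
import random

def twin_epigenetic_drift(n_sites=10_000, years=60, p_flip=0.002, seed=42):
    """Toy model: each CpG-like site in each twin independently flips its
    methylation state with a small yearly probability (a stand-in for
    accumulated environmental exposure). Returns discordance per decade."""
    rng = random.Random(seed)
    twins = [[0] * n_sites, [0] * n_sites]  # identical epigenomes at birth
    drift = {}
    for year in range(1, years + 1):
        for genome in twins:
            for i in range(n_sites):
                if rng.random() < p_flip:
                    genome[i] ^= 1  # an exposure flips the mark at this site
        if year % 10 == 0:
            discordant = sum(a != b for a, b in zip(*twins))
            drift[year] = discordant / n_sites
    return drift

for age, frac in twin_epigenetic_drift().items():
    print(f"age {age:2d}: {frac:.1%} of sites discordant between twins")
```

Discordance that rises monotonically with age is the qualitative pattern the study reports; the absolute numbers here carry no biological meaning.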


A keystone study following cohorts of famine victims in northern Sweden reveals how epigenetics not only mediates DNA expression in the individual but also how epigenetically-mediated gametic imprinting and loss of imprinting can foster transgenerational changes in mortality. The study researched this phenomenon by examining the link between food restriction in grandparents and the developmental characteristics of their grandchildren: "An intergenerational 'feedforward' control loop has been proposed, that links grandparental nutrition with the grandchild's growth. The mechanism has been speculated to be a specific response, e.g. to their nutritional state, directly modifying the setting of the gametic imprint on one or more genes" [7]. In essence, this study reveals that the nutrition of the grandparent transformed his or her epigenome in such a way that effects on the grandchild's height and weight were appreciably detectable even after correcting for the nutrition of the parents and the environment of the child's upbringing. As such studies have shown the nuanced ways the epigenome can guide heredity, we must further clarify our understanding of other environmental factors in genomic control.

Environmental toxicity is another key consideration in epigenetic inheritance, due especially to the cumulative nature of many toxins. The fact that many of these toxins have relatively low mutagenicity reveals how they may epigenetically affect an organism without mutation of the DNA itself. "Transgenerational transmission of chemically induced epigenetic changes has been suggested as a potential mechanism for these effects. Anway et al. showed that gestational exposure of female rats to the endocrine disruptor vinclozolin at the time of gonadal sex determination caused a variety of abnormalities in the offspring that were then transmitted down the male line for at least three generations. The high incidence of the defects (approximately 90% of all males in all generations) and the absence of abnormalities when passed down the female line suggested gametic epigenetic inheritance" [8]. Just as nutrition has shown its ability to affect multiple generations, this study also shows how the effects of environmental toxicity can be passed on to at least immediate posterity, which to some degree perturbs the viability, longevity, and reproduction of the progeny and could thus affect natural selection patterns that may prove consequential in the long scope of evolutionary time. This study goes on to address 14 key toxins that often negatively affect the epigenome, particularly metals such as nickel, cadmium, lead, and arsenic, which all perturb DNA methylation patterns. Cadmium, for example, was highlighted for its tendency to inhibit the methylation of some proto-oncogenes, which leads to oncogene expression and cell proliferation in the developmental pathway of certain cancers.

As these studies reveal, epigenetic inheritance may potentially mediate the most experientially-informed organismal and generational biological changes. Therefore, science and society must duly consider the environmental and socioeconomic conditions associated with the stressors that induce these changes. In another study, scientists at the University of Calcutta relate DNA hypermethylation of the p16 and p53 genes (the attachment of methyl groups to the associated DNA sequences, suppressing their transcription) to arsenic-induced malignancy by examining local populations exposed to arsenic-contaminated water supplies [9]. Testing of the exposed population showed a connection between toxicity and epigenetic changes due to a shared metabolic pathway: "It has been hypothesized that alteration of DNA methylation is involved in arsenic induced carcinogenesis. This mechanism has been proposed because the SAM/methyltransferase pathway for biotransformation of arsenic overlaps with the DNA methylation pathway in which donation of methyl groups from SAM (S-adenosylmethionine) to cytosine produces 5-methylcytosine in DNA" [9]. Various other studies of such environmental hazards to the epigenome further invoke the distinct necessity of eliminating poor sanitary conditions such as contamination of water and food supplies, because they can harm populations in unforeseen ways [10, 11, 12]. Many of these populations also face famine or long-term malnutrition, which have been closely associated with imprinting, though more research must be conducted to fully understand the effects on individual and kindred epigenomes caused by such toxic and nutritive stress [13].

Though the epigenome serves as the key historian of environmental stress and crisis such as famine and exposure to dangerous chemicals, the epigenetic changes caused to the genome may help in adapting the organism to a changing environment. Thus, this system's responsive, amenable nature shows its immediate usefulness to science, which often aims to cure disease and benefit humanity by manipulating genetic expression. Science must therefore weigh the potential of controlling epigenetic systems to alleviate disease against that of direct genetic engineering, since the former remains theoretically less difficult to introduce in vivo and, because of its reversibility, less drastic than the latter, though the effects of changing the epigenome also remain unknown, and manipulations could manifest themselves across generations in unintended or uncontrolled ways. In theory, the suppression of disease-causing genes and the greater or more metered expression of those that produce proteins integral to disease-fighting pathways could greatly alter the etiological approach of nascent therapies. For example, the fact that numerous epigenetic alterations in cancerous cells inactivate tumor-suppressor genes and activate proto-oncogenes provides scientists with another pathway to target cancer: "Unlike tumor suppressor genes inactivated by genetic alterations, genes silenced by epigenetic mechanisms are intact and responsive to reactivation by small molecules. Many diverse genes hypermethylated in cancers can be reactivated with DNA methyltransferase inhibitors…It is clear that epigenetic changes are some of the earliest events observed during cancer development, making them excellent targets for chemoprevention" [14].
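The therapeutic logic in that passage (a promoter buried under methyl groups is silenced but structurally intact, and therefore a candidate for pharmacological reactivation) can be caricatured in a few lines of Python. All gene names, methylation levels, and the 0.7 cutoff below are hypothetical; real assays report per-CpG methylation "beta" values rather than one number per gene:

```python
# Toy screen for epigenetically silenced tumor-suppressor promoters.
# Hypothetical promoter-average methylation values, for illustration only.
promoter_methylation = {
    "p16": 0.88,    # heavily methylated
    "p53": 0.81,    # heavily methylated
    "GSTP1": 0.15,  # largely unmethylated
}
CUTOFF = 0.7  # arbitrary illustrative threshold

for gene, beta in sorted(promoter_methylation.items()):
    if beta >= CUTOFF:
        status = "silenced but intact: candidate for demethylating therapy"
    else:
        status = "promoter largely unmethylated"
    print(f"{gene}: methylation={beta:.2f} -> {status}")
```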

Ultimately, epigenetic inheritance remains a relatively rapid and transient process in evolutionary time, but this may in fact be its function: a short-term snapshot of environmental conditions that drives the more resilient forces of genetic evolution. Although the greatest argument against closer scrutiny of such processes is the short life of most epigenetic perturbations, newfound understanding of their transgenerational implications provides greater urgency to alleviate the poor nutritional and sanitary conditions that cause them. Thus, science must focus on employing this growing knowledge of epigenetics to more effectively combat disease, and society must remain vigilant against those conditions which so often initiate it, as we continue to learn how nurture etches its mark ever more deeply on nature.

Navtej Singh is a sophomore studying Biology at the University of Chicago.

References

1. Rosenfeld CS. Animal models to study environmental epigenetics. Biology of Reproduction. 2010 Mar 1; 82(3): 473-488.
2. Haig D. Weismann Rules! OK? Epigenetics and the Lamarckian Temptation. Biology and Philosophy. 2006; 22(3): 415-428.
3. Fu Q, Yu X, Callaway CW, Lane RH, McKnight RA. Epigenetics: intrauterine growth retardation (IUGR) modified the histone code along the rat hepatic IGF-1 gene. The FASEB Journal. 2009; 23(8): 2438-2449.
4. Whitelaw E. Epigenetics: Sins of the fathers, and their fathers. European Journal of Human Genetics. 2006; 14: 131-132.
5. Fraga MF, Ballestar E, Paz MF, Ropero S, Setien F, Ballestar ML, et al. Epigenetic differences arise during the lifetime of monozygotic twins. Proceedings of the National Academy of Sciences. 2005 Jul 26; 102(30): 10604-10609.
6. Jablonka E, Lamb MJ. Soft inheritance: challenging the modern synthesis. Genetics and Molecular Biology. 2008; 31(2): 389-395.
7. Bygren LO, Kaati G, Edvinsson S. Longevity determined by paternal ancestors' nutrition during their slow growth period. Acta Biotheoretica. 2001; 49(1): 53-59.
8. Baccarelli A, Bollati V. Epigenetics and environmental chemicals. Current Opinion in Pediatrics. 2009 Apr; 21(2): 243-251.
9. Chanda S, Dasgupta UB, GuhaMazumder D, Gupta M, Chaudhuri U, Lahiri S, Das S, et al. DNA hypermethylation of promoter of gene p53 and p16 in arsenic-exposed people with and without malignancy. Toxicological Sciences. 2006 Feb; 89(2): 431-437.
10. Ren X, McHale CM, Skibola CF, Smith AH, Smith MT, Zhang L. An emerging role for epigenetic dysregulation in arsenic toxicity and carcinogenesis. Environmental Health Perspectives. 2011 Jan; 119(1): 11-19.
11. Pilsner RJ, Hu H, Ettinger A, Sanchez BN, Wright RO, Cantonwine D, et al. Influence of prenatal lead exposure on genomic methylation of cord blood DNA. Environmental Health Perspectives. 2009 Sep; 117(9).
12. Singh KP, DuMond JW. Genetic and epigenetic changes induced by chronic low dose exposure to arsenic of mouse testicular Leydig cells. International Journal of Oncology. 2007 Jan; 30(1): 253-260.
13. Heijmans BT, Tobi EW, Stein AD, Putter H, Blauw GJ, Susser ES, et al. Persistent epigenetic differences associated with prenatal exposure to famine in humans. Proceedings of the National Academy of Sciences. 2008 Nov 4; 105(44): 17046-17049.
14. Kopelovich L, Crowell JA, Fay JR. The epigenome as a target for cancer chemoprevention. Journal of the National Cancer Institute. 2003; 95(23): 1747-1757.





YALE

Plastic in Our Society
Will Feldman


In terms of life-altering inventions of the 20th century, the introduction of plastics into our society ranks up there with the automobile, the computer, and the Internet. Across its many forms, never before had such a cheap, durable, and multipurpose class of compounds existed. Since Goodyear first vulcanized natural rubber in 1839, the search for a perfect synthetic material has never stopped. From celluloid to Bakelite, neoprene to polyurethane, the evolution of plastics has led to countless discoveries and improvements in our lives. Today the plastics industry is the third largest industry in the country, and the materials it produces are used in an enormous variety of consumer and industrial products. Everything from grocery bags to NASA's space shuttles relies on advanced forms of plastic, and it would be impossible for today's modern society to continue on without them.

But the same traits that have made plastic such an important part of our lives have also made it an enormous health and safety hazard. From its production to its decomposition, plastic damages the environment in a number of ways. Burning the common plastic PVC, which is used for piping in almost every building in America, releases the toxic gas dioxin, a known carcinogen responsible for birth defects and immune failure [1]. CFCs, synthetic fluorine-based organic molecules that were used in refrigeration aerosols, are another common plastic pollutant. The expulsion of CFCs into the atmosphere throughout the '80s and '90s led to the depletion of the ozone layer.

It is very often overlooked that plastics have few natural methods of decay. When trees first developed cellulose as a building material, they were nearly indestructible. Large conifer forests that would grow and fall for millennia with no decay led to today's coal fields across the globe. When these trees fell, their cellulose-based trunks would not decompose. After being buried, pressure and heat would convert this carbon-rich material into one of our modern world's most popular fuels. It took bacteria millions of years to develop a metabolism capable of breaking down the complex and durable cellulose molecule. Similarly, nothing yet exists that is able to metabolize the even more complex and durable plastic polymers. This means that discarded plastic persists in the environment for extremely long periods of time, typically breaking down only under UV radiation from the sun. It is estimated that in the last sixty years over one billion tons of plastic have been discarded, much of which will likely remain in the environment for thousands of years. In fact, it is impossible to say exactly how long common plastic pollutants will last. Since currently no organism is capable of destroying the molecules, the best that can happen is that the products break down into smaller and smaller pieces [2].

The environmental ramifications of this plastic buildup are colossal. These plastics kill perfectly healthy organisms and do enormous damage to ecosystems.


From sea turtles mistaking plastic bags for jellyfish and choking, to birds starving to death as their necks are trapped in the rings of six-packs, "nylon fishing gear, plastic bags and other forms of nonbiodegradable plastic waste in the oceans are killing up to a million seabirds, 100,000 sea mammals and countless fish each year" [3]. Plastic pollution has been a major cause of the falling worldwide populations of albatross and sea turtles; a US Fish and Wildlife survey found that 90% of all albatross have plastic in their digestive tract and that a significant portion of baby sea turtle deaths are caused by plastic ingestion [4].

Many plastics leach toxins and imitation hormones into the environment during their painfully slow degradation, a result of the fact that most plastics are chemical derivatives of toxic petroleum. The common plastic PVC, for example, has been shown to release hydrochloric acid, dioxin, and heavy-metal waste salts when it decomposes, leading researchers to question the best method for its disposal and whether it is a safe product at all. The recent media stir about the bisphenol A (BPA) that can be released over time by water bottles and other food-storage plastics has brought this particular issue to the public eye. BPA can affect the body's natural endocrine processes by mimicking hormones it is chemically similar to, leading to improper sexual maturation in young people and poor sexual health in adults. Wildlife are susceptible to these same hormonal problems [5]. Very similar hormonal effects have been shown for the equally common plastic ingredient DEHP (di(2-ethylhexyl) phthalate), which is added to PVC and other common plastic products. Both chemicals can cross placental and blood-brain barriers, and in a study run by Dr. John Wargo, nearly 100% of those tested had positive results for the presence of these chemicals in both the blood and the urine [6].

Many harmful chemicals are also released in the decomposition of plastics over time. Decomposing plastics have been shown to release not only BPA and DEHP but also the endocrine disruptor PS oligomer and the suspected carcinogens styrene monomer, styrene dimer, and styrene trimer, byproducts of Styrofoam breaking down in the ocean. These chemicals are also released into groundwater as plastic decomposes at an accelerated rate in the heat and pressure of modern landfills, which is where much of the world's spent plastic ends up [7]. The chemical threat posed by plastics seems to grow every day as researchers continually discover new and dangerous byproducts of plastic decomposition.

These are not the only environmental and human health dangers posed by plastics. In recent years an unanticipated threat has jumped onto the radar of environmental watch groups: "nurdles" - small plastic beads that are mass-produced and shipped to manufacturers to be melted down and molded into other goods.



Billions and billions are produced every year, and they are essential in the creation of almost every plastic product on the planet. They are even frequently used as the exfoliating agent in the face and body scrubs so popular on the market today. These beads work their way into our waterways and oceans through leakage during production and transport and by washing down drains throughout the developed world. The micro-chemical nature of nurdles makes them exceptionally dangerous. They not only carry the typical toxins associated with plastics but also attract pollutants from the surrounding seawater. A recent study found that the concentrations of DDEs and PCBs, two dangerous oceanic pollutants, were up to a million times higher on the nurdles than in the surrounding water [8]. These toxic particles have been found to constitute up to 98% of plastic litter on the beaches of Orange County, and their prevalence is growing. They have been found in the stomachs of such endangered species as bottlenose dolphins, loggerhead sea turtles, and albatross.

The way these nurdles work their way through the food chain is another reason they are so problematic. Zooplankton and other small creatures first consume them and are themselves eaten by progressively larger animals. In this way nurdles build up through the food chain, accumulating ever more toxins in increasingly large and rare organisms like tuna and other predatory fish. Furthermore, plastics do not exclusively float or sink, but instead exist at many different layers of ocean water on account of changes in salinity and water temperature. This eases their spread around the world and exposes nearly every marine environment to their potential for harm. Delicate ecosystems like the Great Barrier Reef and the California kelp forests are all threatened. Nurdles are also virtually impossible to remove, because their small size makes them difficult to collect [9]. Much like mercury poisoning, the toxins accumulated from the nurdles in these fish will eventually lead to human harm as well as environmental damage.
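The food-chain concern is ultimately multiplication. A rough Python sketch keeps the article's up-to-a-million-fold surface enrichment and assumes, purely for illustration, a starting seawater concentration and a tenfold gain in toxin concentration at each feeding step (neither of which comes from the cited study):

```python
# Toy biomagnification estimate for a persistent pollutant such as PCB.
water_ppb = 1e-4            # assumed toxin level in seawater (illustrative)
nurdle_enrichment = 1e6     # "up to a million times higher" on nurdle surfaces
trophic_gain = 10           # assumed concentration factor per feeding step

conc = water_ppb * nurdle_enrichment
print(f"seawater: {water_ppb:g} ppb | nurdle surface: {conc:g} ppb")
for consumer in ["zooplankton", "small fish", "predatory fish (e.g. tuna)"]:
    conc *= trophic_gain
    print(f"{consumer}: ~{conc:,.0f} ppb")
```

Under these toy assumptions, three feeding steps add another factor of a thousand on top of the surface enrichment, which is the mercury-style accumulation the paragraph above describes.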

Successful solutions to these problems are few and far between. Cleanup efforts will likely yield mixed results at best, owing to the spread of toxins and trash across the oceans and around the world. Furthermore, even if efforts were focused on pollution hotspots, such as the Texas-sized Pacific Gyre garbage patch (an enormous pollution patch created by oceanic currents), collecting anything but the large and obvious pieces of trash on the surface would be prohibitively difficult and expensive, not to mention all but impossible for smaller particulates such as nurdles.

Politicians and environmentalists generally agree that the only workable long-term solution is stringent regulation of the production and disposal of plastic products. Among the proposed regulations are stronger controls on endocrine-altering chemicals, plastics engineered to be safely biodegradable, an increased emphasis on recycling, and more stringent controls on the 60-billion-plus pounds of nurdles manufactured in the US alone [10]. Furthermore, current plastic recycling is time-consuming, labor-intensive, and consequently economically inefficient. Companies need to be encouraged to use more uniform plastics, making recycling more efficient and profitable. Current national plastic production and disposal regulations focus almost entirely on the release of toxic chemicals into the environment [11]. While important, these regulations overlook the less industrial ways in which plastic gets into the environment. Regulation at the civilian waste-management level is historically effective; much as cardboard and newspaper are separated from trash, plastics need to be treated specially in the disposal process. Furthermore, the government needs to create incentives for the production of bioplastics. These exist, but their uses are limited, and current bioplastics tend to do great environmental harm when they biodegrade, although they do so relatively quickly. These important changes in the plastics industry need to be spearheaded by the EPA and the government to ensure industry uniformity.

Plastics are critical to our society. Their importance cannot be overestimated, but neither can their potential to harm our environment and human health. From large plastic debris that can harm wild organisms, to emitted toxins, to dangerous nurdles in our oceans, the prevalence of plastics as waste around our natural world is unprecedented. The 20th century brought the brilliance of plastics, with their positive and negative effects, and they are here to stay. It will be the job of lawful regulation and science to keep striving for the perfect, environmentally sustainable material with which we can develop the goods of the next century.

Will Feldman is a freshman studying Economics and Environmental Studies at Yale University.

References

1. Bock KW, Köhle C. Ah receptor: dioxin-mediated toxic responses as hints to deregulated physiologic functions. Biochem. Pharmacol. 2006; 72(4): 393-404.
2. Weisman, Alan. The World Without Us. New York: St. Martin's Press; 2007.
3. Bowker, Michael. "Caught in a Plastic Trap." Environmental, Earth, and Ocean Sciences, 1986. Web. 24 Oct. 2010. <http://alpha.eeos.sci.umb.edu/~frankic/files/Oceanography%20Spring%2006/PowerWeb%20Unit%204/Article38.pdf>
4. Alvarez, Heherson T. "Plastic Trash Washes into Oceans, Endangering Marine Life." Manila Times, 29 Mar. 2005. Mindfully.org. Web. 26 Oct. 2010. <http://www.mindfully.org/Plastic/Ocean/Manila-PlasticOceans29mar05.htm>
5. University of Cincinnati. "Plastic Bottles Release Potentially Harmful Chemicals (Bisphenol A) After Contact With Hot Liquids." ScienceDaily, 4 February 2008. Web. 26 October 2010. <http://www.sciencedaily.com/releases/2008/01/080130092108.htm>
6. Wargo, John, Mark R. Cullen, and Hugh S. Taylor. Plastics That May Be Harmful to Children and Reproductive Health. Ed. Linda Wargo and Nancy Alderman. Environment & Human Health, Inc., 2008. Web. 27 Oct. 2010. <http://www.ehhi.org/reports/plastics/ehhi_plastics_report_2008.pdf>
7. Bernstein, Michael. "Plastics in Oceans Decompose, Release Hazardous Chemicals, Surprising New Study Says." American Chemical Society Content Portal. 19 Aug. 2009. Web. 27 Oct. 2010. <http://portal.acs.org/portal/acs/corg/content?_nfpb=true&_pageLabel=PP_ARTICLEMAIN&node_id=222&content_id=CNBP_022763&use_sec=true&sec_url_var=region1&__uuid=c633a612-76fe-435c-b77d-fd01b052106b>
8. "Heal the Bay | The Pacific Protection Initiative | AB 258: Nurdles." Heal the Bay. The Pacific Protection Initiative, 2007. Web. 30 Oct. 2010. <http://secure.healthebay.org/currentissues/ppi/bills_AB258.asp>
9. "Keep Fish Safe from Nurdles. Recycle Plastic Bags | Simple Steps." NRDC, 2 Sept. 2009. Web. 30 Oct. 2010. <http://www.simplesteps.org/mmm/keep-fish-safe-nurdles-recycle-plastic-bags>
10. Wargo, John, Mark R. Cullen, and Hugh S. Taylor. Plastics That May Be Harmful to Children and Reproductive Health. Ed. Linda Wargo and Nancy Alderman. Environment & Human Health, Inc., 2008. Web. 27 Oct. 2010. <http://www.ehhi.org/reports/plastics/ehhi_plastics_report_2008.pdf>
11. "Rubber, Plastic, and Man-made Fiber Sectors | Compliance Assistance | US EPA." US Environmental Protection Agency. 1 May 2009. Web. 31 Oct. 2010. <http://www.epa.gov/compliance/assistance/sectors/rubberplastic_man.html>





BROWN

Preventing Bioterrorism: Regulation of Artificial Gene Synthesis Technology
Jyotsna Mullur


In 1984, 10 restaurant salad bars in the town of The Dalles, Oregon were contaminated with Salmonella enterica. While there were no fatalities, 751 people developed salmonellosis, 45 of whom needed hospitalization. After one year of investigation, it was revealed that followers of the radical spiritual leader Rajneesh were responsible for tainting the restaurants' supplies. Their aim was to poison the voting population of The Dalles on election day in order to secure a victory for a cult-supported candidate. The 1984 Rajneeshee plot was the first and largest bioterrorist attack on the United States, and it awakened concerns about the ease with which biological weapons could be wielded and the threat of their use in the future [1].

What makes bioterrorism such a serious threat today is the rapidly increasing knowledge and availability of biological products. With a basic technical understanding of biology, one could obtain and culture bacteria such as E. coli or Salmonella for malicious purposes—just like the Rajneeshee followers. Moreover, a terrorist could release a bioweapon through a means as simple as an envelope in the mail, as the anthrax attacks of 2001 were carried out [2]. If the illness is allowed to spread naturally, widespread infection could occur with minimal orchestration due to the ease of travel in modern society, such as via airplanes, trains, and other mass transit systems.

Predicting the human casualties and suffering that may be incurred as a result of a biological attack presents a grim picture. According to the Duke Clinical Institute, 50 kg of anthrax released in a city of 500,000 people would cause the deaths of nearly 250,000 people, 100,000 of whom would never have the opportunity to receive medical treatment. In addition, models predict that for every 100,000 people infected, response costs would total nearly 26 billion dollars [3]. Additionally, considering that biological systems can evolve and accumulate mutations, the virus or bacteria used as a weapon could develop resistance to human attempts to contain it. These consequences present a compelling motive to consider the regulation of biotechnology companies to prevent the expansion of resources for such an attack.

[Image reproduced from [16]]

Artificial Gene Synthesis: Prospects and Problems

One burgeoning technology, artificial gene synthesis, presents a point of potential regulation in order to reduce the risk of bioterrorism. Gene synthesis is a flourishing field of biotechnology that involves the molecular construction of genes, usually through the polymerization of single-stranded, short DNA fragments called "oligos" [4]. Synthesizing genes was once a difficult and tedious task; it took a dozen scientists years to synthesize the first gene de novo in 1972 [5]. Today, DNA synthesis is a rapid, inexpensive process that can accurately produce long (over 50,000 base pairs) sequences of DNA. Over 25 companies in the United States can synthesize a gene for as little as 39 cents per base pair, and this price is still dropping [6].
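At that price, the cost of synthesizing entire genomes is easy to estimate. A quick sketch in Python; the 39-cent and 50,000-base-pair figures come from the text above, while the roughly 7,500-base-pair length of the poliovirus genome is an approximate outside figure added only to give the arithmetic a concrete target:

```python
# Rough cost of commercial gene synthesis at the quoted 2011 price.
price_per_bp = 0.39  # dollars per base pair ("as little as 39 cents")

targets = {
    "typical gene (1,000 bp)": 1_000,
    "poliovirus genome (~7,500 bp, approximate outside figure)": 7_500,
    "longest advertised synthesis (50,000 bp)": 50_000,
}
for name, bp in targets.items():
    print(f"{name}: ~${bp * price_per_bp:,.0f}")
```

A few thousand dollars for a viral genome is the scale that motivates the regulatory discussion that follows.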




DNA sequences for viruses and other pathogens, including polio, variola, and Ebola, are easily available online [7-9]. Cello et al. synthesized the polio virus genome in 2002 and published their groundbreaking results in Science for the world to see and potentially replicate [10]. The 1918 Spanish influenza virus was synthesized by another team in 2005, and their genomic data were published and made available as well [11]. Both of these examples illustrate the capabilities of this new technology and the challenging dichotomy it presents: this technology could be utilized to resurrect deadly pathogens but also holds immense promise for battling diseases.

Regulations for Companies

The logical place to implement safety regulations is at the company level. A company that wishes to enter the artificial gene synthesis market should be licensed or accredited. As a part of this accreditation process, company leaders should be educated in safe, ethical practices and methods to enhance biosecurity. Such regulation would create an enforceable industry standard for companies to adhere to, reducing the likelihood of exploitable security loopholes.

A body dedicated to corporate accountability already exists: the International Gene Synthesis Consortium (IGSC), comprising five charter companies holding 80 percent of the gene synthesis market. In order to join, companies must adhere to practices including gene sequence screening and customer background screening [12]. While this self-regulation is a crucial first step, it is essential to also have a third party for safety inspection (such as through the Centers for Disease Control). This reduces the risk that a safety mishap would provide a window of opportunity for a terrorist to manipulate the technology to create a harmful product.

Identifying Consumers

It is also important to monitor people attempting to access gene synthesis technology at the consumer level to ensure it is not used malevolently. First, it would be beneficial to screen new customers against a database of suspected terrorists. This could be achieved by partnering with national and federal bureaus, such as the Department of State or the Department of Homeland Security, as IGSC members already do [12]. This preliminary screening would act as a deterrent against using artificial gene synthesis technology as a bioterror weapon.

Secondly, a licensing process should be required for research universities and biotechnology companies to reduce the hassles that might be associated with repeated review of academics and professionals with known motives. Individuals outside of this realm would have to file a letter of intent and references to determine their motives. Since non-academic, first-time buyers make up only a small percentage of all clients, the screening process would be efficient and streamlined save for these exceptional cases [13]. Currently, individuals seeking to obtain gene sequences for pathogens labeled "select agents" by the Department of Health and Human Services are subject to FBI background checks.

[Image reproduced from [17]]




However, this system does not address the ability to build a whole select agent gene by ordering individual portions of the gene from different vendors, or by procuring the complementary DNA strands (cDNA) [14]. Considering the advances in gene synthesis and amplification and the potential risks such DNA fragments pose, the select agent regulations should be amended to address these strategic concerns.

Reevaluating Artificial DNA Products

At the biological level, a final screening is necessary to ensure the DNA fragments being produced are not associated with virulent or pathogenic organisms. Current United States law requires companies to screen for DNA sequences that "are inherently capable of producing a select agent virus" [15]. However, such laws should be extended to cover specific gene sequences that enhance virulence in, or confer drug resistance to, bacteria or viruses. As the field of gene synthesis progresses, the law must adapt with new biotechnology. In addition, suspicious habits of customers, such as the rapid purchase of several discrete gene sequences over a short period of time, should be monitored and screened to determine whether any gene sequences could be combined to form something more virulent. Currently, companies screen gene orders against various private gene-sequence databases; the process differs by organization [13]. To ensure thorough and rigorous screening that is standard across all companies, a national database should be formed that would serve all gene synthesis companies and include fragments, cDNA, and near-match sequences for pathogens, select agents, and other virulent strains.
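The screening step described above amounts to searching each order against a database of flagged sequences, including fragments and near matches. A minimal Python sketch of the idea using exact k-mer matching; the sequences and the 12-base window are invented, and real screening pipelines use alignment tools and curated select-agent databases rather than anything this crude:

```python
# Toy order-screening: flag an order if it shares any length-k substring
# with a flagged sequence (a crude stand-in for real sequence alignment).
def kmers(seq: str, k: int) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order: str, flagged_db: dict[str, str], k: int = 12):
    order_kmers = kmers(order.upper(), k)
    return [name for name, seq in flagged_db.items()
            if order_kmers & kmers(seq.upper(), k)]

# Hypothetical database entries -- NOT real pathogen sequences.
FLAGGED = {
    "toy-select-agent-1": "ATGGCGTACGTTAGCCGATAGGCTTACGA",
    "toy-select-agent-2": "TTGACCGGTAAACGTTGCAGGTCCATGGA",
}

order = "CCCATGGCGTACGTTAGCCGATCCC"  # embeds a fragment of toy-select-agent-1
hits = screen_order(order, FLAGGED)
print("hits:", hits or "none -- order clears this (toy) screen")
```

Note that fragment-based ordering, the loophole discussed above, is exactly why a shared database must hold short subsequences and near matches, not just whole genomes.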
Artificial Gene Synthesis and the Future of Biology

The best measure against bioterrorism is wide proliferation and promotion of genome synthesis technology and education in the biological sciences. Educating students and industry professionals about the risks and benefits of synthetic biology allows for a deeper understanding of the field and a heightened awareness of the risks. Increased funding for synthetic genomic research, particularly in academia and federal biosafety bureaus, will ensure that the leading edge of this field is pursued by those scientists with the intention of advancing human knowledge and safety. This strategy provides a statistical advantage; by promoting research in synthetic biology, it is far more likely that the thousands of scientists endeavoring to use this technology in the service of human knowledge will be able to out-innovate and out-engineer those few individuals aiming to use it as a weapon.

While today it may be difficult to predict the direction artificial gene synthesis will take as the technology matures, it is important to strive to prevent misuse or corruption. In an era in which nations are constantly on alert for terror threats, it is crucial to acknowledge the serious risks posed by biological weapons and strive to attenuate them. It is critical to not undervalue this technology's potential—for harm, but also for hope. In providing humans with the power to control and command the most intricate of nature's chemical reactions, gene synthesis presents the extraordinary opportunity to engineer the future of biology. If channeled towards education and research, the promise of artificial gene synthesis can promote a more nuanced knowledge of the molecular world and serve to enrich the health of humankind.

Jyotsna Mullur '11.5 is a Neuroscience concentrator from Millbury, MA. She may be reached at Jyotsna@Brown.edu.

References

1. Wheelis M, Lajos R, Malcolm D. Deadly Cultures: Biological Weapons Since 1945. Cambridge: Harvard University Press; 2006.
2. US Department of Justice. Federal Bureau of Investigation - Amerithrax. [Cited 2010 Oct 4]. Available from: http://www.fbi.gov/anthrax/amerithraxlinks.htm.
3. McKeel J. New Study Evaluates Cost-Effectiveness of Defending against Anthrax Bioterrorism. Duke Clinical Research Institute. [Cited 2010 Oct 1]. Available from: https://www.dcri.org/news-publications/news/2005-news-archives/new-study-evaluates-cost-effectiveness-of-defending-against-anthrax-bioterrorism.
4. Garfinkel M, Endy D, Epstein G, Friedman R. Synthetic Genomics: Options for Governance. J. Craig Venter Institute [online]. [Updated 2007 Oct; Cited 2010 Oct 1]. Available from: http://www.jcvi.org/cms/fileadmin/site/research/projects/synthetic-genomics-report/synthetic-genomics-report.pdf
5. Khorana HG, Agarwal KL, Büchi H, et al. Studies on polynucleotides. 103. Total synthesis of the structural gene for an alanine transfer ribonucleic acid from yeast. J. Mol. Biol. 1972 Dec; 72(2): 209-217.
6. Carlson R. The Changing Economics of DNA Synthesis. Nat. Biotechnol. 2009; 27:1091-1094. [Cited 2010 Oct 1]. Available from: http://www.nature.com/nbt/journal/v27/n12/full/nbt1209-1091.html.
7. Nomoto A, Omata T, Toyoda H, Kuge S, Horie H, Kataoka Y, et al. Complete Nucleotide Sequence of the Attenuated Poliovirus Sabin 1 Strain Genome. Proc Natl Acad Sci U S A. 1982 Oct; 79(19):5793-7.
8. Massung R, Liu LI, Qi J, Knight JC, Yuran T, Kerlavage AR, et al. Analysis of the Complete Genome of Smallpox Variola Major Virus Strain Bangladesh-1975. Virology. 1994 Jun; 201(2):215-40.
9. Sanchez A, Kiley M, Holloway B, Auperin D. Sequence Analysis of the Ebola Virus Genome: Organization, Genetic Elements, and Comparison with the Genome of Marburg Virus. Virus Res. 1993 Sept; 29(3): 215-240.


10. Cello J, Paul A, Wimmer E. Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template. Science. 2002 Aug; 297(5583): 1016-1018.
11. Tumpey T, Basler C, Aguilar P, Zeng H, Solorzano A, Swayne D, et al. Characterization of the Reconstructed 1918 Spanish Influenza Pandemic Virus. Science. 2005 Oct 7; 310(5745): 77-80.
12. International Gene Synthesis Consortium. IGSC Harmonized Screening Protocol. [Cited 2010 Oct 1]. Available from: http://www.genesynthesisconsortium.org/Gene_Synthesis_Consortium/Harmonized_Screening_Protocol.html.
13. Maurer S, Fischer M, Schwer H, Stahler C, Stahler P, Bernauer H. Making Commercial Biology Safer: What the Gene Synthesis Industry Has Learned About Screening Customers and Orders. Goldman School of Public Policy and Boalt Law School, University of California. 2009 Sept 17.
14. Centers for Disease Control. Applicability of the Select Agent Regulations to Issues of Synthetic Genomics. [Cited 2010 Oct 1]. Available from: http://www.selectagents.gov/resources/Applicability%20of%20the%20Select%20Agents%20Regulations%20to%20Issues%20of%20Synthetic%20Genomics.pdf.
15. Department of Health and Human Services. Possession, Use, and Transfer of Select Agents and Toxins; Final Rule. [Cited 2010 Oct 2]. Available from: http://www.selectagents.gov/resources/42_cfr_73_final_rule.pdf.
16. James J. Caras. DNA Illustration [image on the internet]. Arlington, VA: National Science Foundation. [Cited 2011 Feb 23]. Available from: http://www.nsf.gov/news/mmg/mmg_disp.cfm?med_id=59456&from=search_list
17. Cynthia Goldsmith. 10816 [image on the internet]. Atlanta, GA: Public Health Image Library, Centers for Disease Control. [Cited 2011 Feb 23]. Available from: http://phil.cdc.gov/phil/home.asp



HARVARD

The Science of Sleep in the Changing of the Work Shift
Linda Xia


We spend over a third of our lifetime in slumber. It is well established that such seemingly unproductive use of time is very much crucial to the functioning of the human body. Let's phrase it this way: if you don't sleep, you die. Sleep deprivation leads to impaired judgment, delayed response, decreased cardiovascular health, and a decreased ability to fend off infections [1]. However, the societal implications of such impairments are often not emphasized. Specifically, the issue of extreme sleep deprivation among medical school students, residents, and physicians is triggering a cascade of medical errors that could have otherwise been preventable. In this light, sleep deprivation ceases to be an individual problem and becomes one that has a direct impact on the quality of health care. Therefore, the issue of resident sleep deprivation needs to be taken into consideration when designing the policies of our hospital system and the physician work shift.

According to the National Institutes of Health (NIH), the average healthy adult needs between 7.5-9 hours of sleep for optimal function [2]. Medical residents - interns who have completed their medical school training and are beginning their full-time work in the hospital - are getting, on average, 5-6 hours of sleep per night. From data collected in the early 2000s, most residents often work "extended duration work shifts," which are approximately 30 consecutive hours every other shift [3]. Most are also on-call during the night. Such a schedule equates to roughly 80 hours of work per week. With the demand and intensity of constant work, fatigue and sleep debt inevitably accrue. Contrary to the common belief that a good sleep on the weekend will cause one to "bounce right back", this debt accumulates [3].

What happens then? Well, preventable medical errors happen. In a survey of residents by Wu et al., 41 percent of residents reported fatigue as a cause of their most serious medical mistake, and 31 percent of mistakes reportedly resulted in patient fatalities [4]. In a 2004 study comparing medical errors made in the traditional resident work schedule to those made in an intervention schedule that eliminated extended work shifts, interns in the traditional schedule made 36 percent more serious medical errors and 5.6 times more diagnostic errors. These data clearly indicate that policies are needed to eliminate extended work shifts of 24 consecutive hours or more [5]. So far, many hospitals have taken action to reduce the 30-consecutive-hour work shifts. In fact, the Institute of Medicine (IOM) has provided guidelines that include a maximum shift of 12 hours, at least 24 hours off every seven days, and no more than three consecutive night shifts [6].

However, a 2010 study carried out by the Cincinnati Children's Hospital demonstrated that although the residents have longer consecutive sleep times, their actual work-life balance has been disrupted due to an increase in work intensity during the reduced work period. The study focused on first-year residents and showed that in the pre-IOM group, 67 percent of the interns reported a good to excellent work-life balance, while only 20 percent did so in the IOM group. In correlation, 40% in the pre-IOM group indicated a very low to neutral workload intensity, while 0% did so in the IOM group [7].
In addition to the increase in work intensity, the quality of patient care is not necessarily improved. A reduction in physician weekly work hours results in an increase in the number of handoffs in care between medical staff, especially for patients who need continual care. As the new medical personnel are less familiar with patients' cases, error rates can increase [5].

[Image reproduced from [14]]




Therefore, what we're seeing here is a "rock-and-a-hard-place" situation. Medical residents' high number of work hours is causing preventable medical errors, yet a simple reduction in the interns' work hours will not necessarily improve patient safety or increase interns' quality of life. Thus, the key to solving the issue of medical resident sleep deprivation lies not in the reduction of work hours, but in a schedule design that exploits the features of the human wake-sleep cycle to increase the interns' quality of sleep even when the total number of hours worked remains the same. So then comes the question: what is the optimal work shift?

The question needs to be addressed by taking into consideration the science of sleep. Sleep operates in cycles of four stages (1, 2, 3, and REM - rapid eye movement - sleep). The stages cycle from stage 1 through REM, and then begin with stage 1 again. A cycle takes, on average, 90 to 110 minutes. Stage 1 is the light sleep from which one can be easily awakened. Stage 2 sleep is characterized by slower brain waves, with occasional bursts of rapid brain waves known as K-complexes and sleep spindles. During stage 2, eye movement stops, heart rate slows, and body temperature decreases. Stage 3, also known as "deep sleep", is characterized by delta waves in the brain. When awakened in stage 3, one feels disoriented. In this stage, blood redirects from the brain to the muscles, restoring body energy. Lastly, we have REM sleep - characterized by rapid eye movement, paralysis of body muscles, and an increase in heart rate [8].

The sleep cycle is very much involved in determining the daily quality of sleep. Thus, a schedule design that takes into consideration the biological causes of sleepiness and alertness will aid patient care tremendously. For example, a system that allots residents sleep time in blocks of 90 minutes - the duration of a complete sleep cycle - will wake residents when they have surfaced back to stage 1 of sleep, when body and brain are as close to wakefulness as possible [9]. This reduces the adverse effect of sleep inertia, the tendency toward low bodily and mental function after being woken from stage 3 sleep [10].
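A toy scheduler makes the 90-minute rule concrete: given lights-out and a protected rest window, wake-up candidates fall at whole-cycle boundaries rather than partway through deep sleep. The 90-minute figure comes from the article; the shift times and the fixed cycle length are simplifying assumptions (real cycles run 90-110 minutes and vary by person):

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # simplified full sleep cycle (article: 90-110 min)

def wake_times(lights_out: datetime, window: timedelta):
    """Yield wake-up candidates at whole-cycle boundaries within a rest
    window, when the sleeper has cycled back near stage 1."""
    n = 1
    while n * CYCLE <= window:
        yield lights_out + n * CYCLE
        n += 1

# Hypothetical protected nap: lights out at 02:00 with a 5-hour window.
lights_out = datetime(2011, 4, 22, 2, 0)
for t in wake_times(lights_out, timedelta(hours=5)):
    print(t.strftime("%H:%M"))  # 03:30, 05:00, 06:30 -- not 07:00, mid-cycle
```

The design point is simply that the last candidate inside the window, not the end of the window itself, is the least disorienting moment to be paged.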
The features of the human circadian rhythm - a cycle of sleepiness and wakefulness regulated by the neural system - can also be taken into consideration when designing the work shift. The biological rhythm is such that sleepiness is greatest during darkness, mid-afternoon, and the early morning hours (12am-7am) [11]. Thus, implementing shorter but more frequent working periods will provide more sleep opportunities (naps), which are proven to drastically reduce sleepiness. (Naps shorter than 15 minutes, however, have not been shown to reduce sleepiness.) Specifically, research has shown that a sleep opportunity in the afternoon before working overnight diminishes the circadian misalignment that would otherwise impair performance [12]. Properly timed exposure to bright light and darkness in alignment with the circadian rhythm will also help prevent rhythm misalignment, which results in poor-quality sleep even when the quantity of sleep is increased [13].

In all, the effects of sleep deprivation are pervasive in society. As studies have shown, sleep deprivation has generated adverse implications in the medical field. In particular, fatigue due to lack of sleep has generated a vast increase in the number of preventable medical errors. The consequences have prompted many calls for hospitals to reform and reduce residents' work schedules. However, such reductions in work hours do not by themselves lead to positive results, as error rates remain high due to medical hand-offs and heightened workload intensity for residents. In light of this dilemma, perhaps one of the few remaining solutions lies in using the scientific principles of sleep and circadian rhythm to improve residents' quality of sleep while maintaining the same number of work hours.

Linda Xia is a sophomore at Harvard University.

42 THE TRIPLE HELIX Spring 2011 UChicago.indb 42

CM, Stone, PH, Lockley, SW, Bates, DW, M.D., and Czeisler, CA. Effect of Reducing Interns’ Work Hours on Serious Medical Errors in Intensive Care Units. N Engl J Med 2004; 351:1838-1848 8. National Center on Sleep Disorder and Research. “Patient and Public Information.” 2010. Web. 8 Oct. 2010. < http://www.nhlbi.nih.gov/about/ncsdr/ patpub/patpub-a.htm> 9. National Sleep Foundation. “Information on Sleep Health and Safety.” 2010. Web. 8 Oct. 2010. <www.sleepfoundation.org/> 10. Roth, T., Roehrs, T., Carskadon, M.A., Dement, W.C. Daytime sleepiness and alertness. In: M. Kryger, T. Roth, W.C. Dement (eds.), Principles and Practice of Sleep Medicine, 2nd ed., pp. 40-49. Philadelphia: W.B. Saunders, 1994. 11. Sleepdex. “Stages of Sleep.” 2010. Web. 8 Oct. 2010. < http://www.sleepdex.org/ stages.htm> 12. Sleepdex. “Sleep Inertia.” 2010. Web. 8 Oct. 2010. < http://www.sleepdex.org/ inertia.htm> 13. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? JAMA. 1991; 265:2089-2094 14. N.C. Industrial Commission Safety Bulletin [image on the internet]. Raleigh, NC: NC Industrial Commission. [Cited 2011 Mar 2]. Available from: http://www.ic.nc. gov/ncic/pages/1008safe.htm

© 2011, The Triple Helix, Inc. All rights reserved. 4/22/2011 6:29:27 PM


ASU

The Path to the Bloodless Butcher: In Vitro Meat and Its Moral Implications
Gregory Yanke

Within a decade, breakfast sausages could originate from a laboratory rather than a farm. In response to demands for safer meat sources following the mad cow disease scare of the 1990s, scientists developed the technology to produce in vitro meat: animal muscle tissue grown in a controlled environment. In addition to providing consumers with an alternative food source, in vitro meat will generate debate regarding society's perception of ethical food choices. At the crossroads of its moral acceptance are two conflicting perspectives: the utilitarian view that people should welcome a food source that will reduce animal suffering and environmental harm, and the deontological view that animals have the right to live free from human possession or interference. Despite these conflicting positions, society may ultimately accept in vitro meat as one of the most moral food choices, given that animal and human well-being should improve with its adoption.

Producing laboratory meat is an extension of the established technology of muscle tissue culturing [1]. While there are many potential techniques for producing in vitro meat, the scaffold-based method is the most promising, given its experimental success in growing muscle cells [2]. The method produces meat that is suitable for ground products, such as hamburger, sausages, and chicken nuggets [3]. The technology required to replicate more structured cuts, such as steaks or pork chops, is still in its infancy; while research in this area is promising, the lack of blood circulation to deliver nutrients to the tissue currently limits meat growth [2].

The scaffold-based process begins with the extraction of livestock myoblasts, precursors to muscle cells, from a live animal by way of biopsy [3]. The cell-donating animal is not harmed in the process. Once the extracted cells have divided over several days, they are placed in a vessel known as a bioreactor, along with a culture medium of fetal bovine serum and a scaffold system [2]. The myoblasts grow on the moving scaffold, mature into muscle cells, and form a thin layer of meat over several weeks [4].
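As a reading aid, the steps of the scaffold method can be laid out as data. The sketch below is a simplification under stated assumptions: the step durations are rough figures implied by the text ("several days" of division, "several weeks" of growth), and the structure and names are ours, not part of any published protocol.

    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        approx_days: int  # rough duration; "several days"/"several weeks" in the text
        note: str

    # The scaffold-based process of [2-4], flattened into data. The durations
    # and step names are our own approximations, not a published protocol.
    SCAFFOLD_PROCESS = [
        Step("biopsy", 1, "extract myoblasts from a live animal, which is unharmed"),
        Step("proliferation", 5, "extracted cells divide over several days"),
        Step("bioreactor culture", 28, "cells grow on a moving scaffold in a culture "
                                       "medium, maturing into muscle"),
        Step("harvest", 1, "a thin layer of meat suitable for ground products"),
    ]

    for step in SCAFFOLD_PROCESS:
        print(f"{step.name:>18} (~{step.approx_days:2d} days): {step.note}")
    print(f"{'total':>18}: ~{sum(s.approx_days for s in SCAFFOLD_PROCESS)} days")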

The primary obstacle to the commercial viability of in vitro meat production is the $30-per-ounce cost of the fetal bovine serum in which the meat grows. This translates into over $1,000 for a pound of meat, compared to current ground beef prices in the $3-per-pound range [4]. To overcome this cost impediment, researchers are experimenting with less expensive media, for example, extracts derived from amino acid-rich mushrooms [3]. In initial experimentation, meat growth has actually been greater with the mushroom extract medium than with fetal bovine serum [2]. If the mushroom culture medium proves viable, then competitively priced in vitro meat products could become a reality.
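To make the cost gap concrete, the sketch below works through the cited figures. The assumption that at least a pound of serum goes into each pound of meat is ours, introduced only to show that the serum alone puts a floor of several hundred dollars under every pound produced; the $1,000 figure is the one cited in [4].

    # Figures cited above [4]: fetal bovine serum at $30 per ounce, finished
    # in vitro meat at over $1,000 per pound, ground beef near $3 per pound.
    SERUM_PER_OZ = 30.0
    OZ_PER_LB = 16
    CITED_IN_VITRO_PER_LB = 1000.0
    GROUND_BEEF_PER_LB = 3.0

    # Assumption (ours): at least a pound of serum is consumed per pound of
    # meat, so the serum alone sets a floor on the production cost.
    serum_floor = SERUM_PER_OZ * OZ_PER_LB  # = $480 per pound

    print(f"Serum-only cost floor: ${serum_floor:.0f}/lb")
    print(f"Cited in vitro cost:   ${CITED_IN_VITRO_PER_LB:.0f}/lb, "
          f"~{CITED_IN_VITRO_PER_LB / GROUND_BEEF_PER_LB:.0f}x ground beef")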

Although the development and sale of cultured meat is still years away, there are already divergent views concerning its moral standing, even within the animal rights movement. Dr. Peter Singer, a Princeton University bioethics professor and key influence in the animal rights movement, advocates the production of in vitro meat because it will reduce animal suffering related to factory farming [5]. However, not all animal rights activists share his view. In 2008, People for the Ethical Treatment of Animals (PETA) offered a million-dollar reward to the first person to design a commercially viable method of producing in vitro meat by 2012 [6]. The decision to offer the reward sparked debate among organization members regarding whether PETA's aim should be to save animal lives or to uphold the general principle that animals do not exist for the purpose of human possession. In essence, this is a dispute between the utilitarian view, centered on the pragmatic benefits that society could accrue through in vitro meat, and the deontological view that treating animals as human possessions, even if livestock are subject to mere biopsies instead of slaughter, is morally wrong.

[Figure: obtaining a tissue sample. Reproduced from [16]]

From a utilitarian perspective, in vitro meat production would address the principal ethical objections to meat consumption. These objections relate to the view that society should minimize the animal suffering and environmental damage that result from meat production in order to maximize the well-being of all sentient creatures. Opponents of meat consumption contend that large-scale industrial farming practices fail to do this. In 2008, the Pew Commission on Industrial Farm Animal Production identified numerous problems with factory farming that cause harm to animals and the environment: intensive animal confinement, the use of chemical inputs, energy and water demands, animal waste, and antibiotic resistance due to the use of antimicrobial drugs [7]. The Commission concluded that current factory farm processes "fall short of current ethical and societal standards" for animal treatment due to overcrowding and confinement [7]. These are ethical concerns that in vitro meat production would alleviate.



The precise impact that in vitro meat processes would have on the environment is uncertain. One study has estimated that laboratory meat production would require substantially less land and produce far less greenhouse gas emissions per pound of meat than traditional farming [8]. It would also use less energy than existing beef, lamb, and pork production facilities, and slightly more energy than conventionally produced poultry [8]. However, these findings are based on many assumptions regarding the bioreactor and culture medium used, sterilization and cultivation methods, muscle cell density within the bioreactor, and the scale of production. Therefore, because no one has produced in vitro meat on a large scale, these generalized assumptions may not be accurate. It is clear, however, that in vitro meat processes would produce much less animal waste than all forms of traditional farming. In addition, animal suffering would be greatly reduced, particularly with the use of the scaffold-based system, which would only require livestock biopsies. In vitro meat would likely require antibiotics to ensure sterility in the process; however, because the meat would be produced in a closed system rather than an open farm environment, the leakage of antibiotics from the bioreactor, as well as any bioreactor contamination that could cause environmental harm, would more likely be contained within a laboratory [9]. In totality, this evidence suggests that in vitro meat offers a morally superior choice to conventional meat consumption, one that would reduce animal suffering and environmental impact.

If society uses animal suffering and environmental harm as the basis for judging the morality of food consumption patterns, then eating in vitro meat may be on par with, or even ethically superior to, a vegetarian or vegan diet. With in vitro meat production, animal suffering would be virtually eliminated, as the process would only require a minimal number of animal biopsies to provide the necessary cells for meat growth [10]. In comparison, crop harvesting for vegan diets consisting of vegetables, fruits, and grains results in the deaths of about two animals for every million calories of food produced [11], though compared to conventional meat, these food sources cause virtually no harm to animals.

While no studies exist that directly compare the environmental impact of in vitro meat with vegan diets, statistics from the comparison of these consumption alternatives to conventional meat production suggest that the ecological footprint from producing in vitro meat may be smaller. Beef production is estimated to result in approximately 2.7 times the energy use, 48 times the greenhouse gas emissions, and 250 times the land use of in vitro production for the same quantity of meat [8]. When compared to growing food for vegan consumption, beef production results in about 2.5 times the energy use, 7 times the greenhouse gas emissions, and 3 times the land use [12-14]. Crop production also relies on the use of large quantities of fossil fuels, pesticides, fertilizers, and water, while in vitro meat production does not.
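Because the passage above stacks several multipliers, a brief sketch may help keep them straight. The beef-versus-in-vitro and beef-versus-vegan ratios are those cited in [8] and [12-14]; dividing one by the other to estimate a vegan-versus-in-vitro ratio is our own back-of-the-envelope step, not a result from the cited studies.

    # Relative footprints per unit of food, as cited in the text:
    # beef relative to in vitro meat [8], and beef relative to vegan crops [12-14].
    beef_vs_in_vitro = {"energy": 2.7, "greenhouse gas": 48.0, "land": 250.0}
    beef_vs_vegan = {"energy": 2.5, "greenhouse gas": 7.0, "land": 3.0}

    # Implied vegan footprint relative to in vitro meat: (beef/in vitro)
    # divided by (beef/vegan). This derivation is ours, not from the studies.
    for k in beef_vs_in_vitro:
        ratio = beef_vs_in_vitro[k] / beef_vs_vegan[k]
        print(f"{k}: vegan production is ~{ratio:.1f}x in vitro meat")

On these numbers, in vitro meat's footprint would be comparable to a vegan diet on energy but markedly smaller on greenhouse gases and land, which is consistent with the suggestion above.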
From a deontological perspective, even if in vitro meat reduces animal suffering and environmental harm, the ends do not justify society's possession of livestock when alternative sources of food are available. In the words of Dr. Tom Regan, an animal rights philosopher and a proponent of deontology, animals are "subjects of a life" with inherent value that should not be subject to unwarranted interference from humans [15]. The problem with this view is that it ignores the consequences of existing dietary alternatives. If animal well-being would improve with the introduction of in vitro meat, due to a reduction in suffering and less harm to the animals' environment, it is difficult to reject this food source simply because animal biopsies are potentially invasive. If a moral approach is based upon concern for animals, ignoring their overall welfare is counterintuitive.

If the well-being of animals and of our planet are our primary moral concerns in choosing what society eats, then the commercial introduction of in vitro meat will impact our view of the ethical diet. Although society often views morality as a static code that governs our lives, scientific advances, such as in vitro meat technology, undoubtedly change the utilitarian moral landscape by altering the consequences of our decisions. In the future, perhaps further progress in food technology or agricultural processes will again alter our concept of ethical food choices and lead us further down the path to the bloodless butcher.

Gregory Yanke is a student at Arizona State University. His article will appear in the Cambridge, Cornell, University of Melbourne, and University of Chicago editions of the journal.

References
1. Pincock, S. Meat, in Vitro? The Scientist [Online]. 2007 Sep. 1 [cited 2010 Sep. 6]; 21(9). Available from: http://www.the-scientist.com/article/display/53515/.
2. Edelman, P., D. McFarland, V. Mironov, and J. Matheny. In Vitro-Cultured Meat Production. Tissue Engineering. 2005, 11(5/6); 659-62.
3. McClinton, L. Test-Tube Meat. Beef. 2007 Feb., 43(6); 48.
4. Jozefowicz, C. Mystery Meat. Current Science. 2007 Apr. 6, 92(14); 6.
5. Singer, P. Interview by G. Yanke. 2010 Nov. 6.
6. Schwartz, J. PETA's Latest Tactic: $1 Million for Fake Meat. New York Times [Online]. 2008 Apr. 21 [cited 2010 Oct. 17]. Available from: http://www.nytimes.com/2008/04/21/us/21meat.html.
7. Report of the Pew Commission on Industrial Farm Animal Production. Putting Meat on the Table: Industrial Farm Animal Production in America. [Online]. Available from: http://www.ncifap.org/bin/e/j/PCIFAPFin.pdf.
8. Tuomisto, H., and M. Joost Teixeira de Mattos. Life Cycle Assessment of Cultured Meat Production. [Online] 2010. Available from: http://www.new-harvest.org/img/files/tuomisto_teixiera_de_mattos_2010_cultured_meat_lca.pdf.
9. Edelman, P. In Vitro Meat Production. [Online]. Available from: http://www.new-harvest.org/img/files/Edelman.pdf.
10. Sandhana, L. Test Tube Meat Nears Dinner Table. Wired Magazine [Online]. 2006 Jun. 21 [cited 2011 Jan. 21]. Available from: http://www.wired.com/print/science/discoveries/news/2006/06/71201.
11. Animal Visuals. Number of Animals Killed to Produce One Million Calories in Eight Food Categories. [Online]. Available from: http://www.animalvisuals.org/data/1mc.
12. Kraftson, S., J. Pohorelsky and A. Myong. Vegetarianism and the Environment: The Need for Sustainable Diets. Michigan Undergraduate Research Journal. [Online]. Available from: http://umurj.org/Feature%20Articles/feature-article/62-vegetarianism-and-the-environment.
13. People for the Ethical Treatment of Animals. Fight Global Warming by Going Vegetarian. [Online]. Available from: http://www.peta.org/issues/animals-used-for-food/global-warming.aspx.
14. Eshel, G., and P. Martin. Geophysics and Nutritional Science: Toward a Novel, Unified Paradigm. American Journal of Clinical Nutrition. 2009, 89(5); 1710-16.
15. Regan, T. The Case for Animal Rights. 2nd ed. Berkeley: University of California Press, 2004.
16. iStockphoto. Obtaining a Sample [image on the Internet]. Available from: http://www.istockphoto.com/stock-photo-1528588-obtaining-a-sample.php.



ACKNOWLEDGMENTS

The Triple Helix at the University of Chicago would sincerely like to thank the following groups and individuals for their generous and continued support:

University of Chicago Annual Allocations
Student Government Finance Committee
Bill Michel, Assistant Vice President for Student Life and Associate Dean of the College
Arthur Lundberg, Student Activities Resource Coordinator
The Biological Sciences Division
The Physical Sciences Department
The Social Sciences Department
All of our amazing Faculty Review Board Members

If you are interested in contributing your support to The Triple Helix’s mission, whether financially or otherwise, please feel free to visit our website at http://thetriplehelix.org. © 2011 The Triple Helix, Inc. All rights reserved. The Triple Helix at the University of Chicago is an independent chapter of The Triple Helix, Inc., an educational 501(c)3 non-profit corporation. The Triple Helix at the University of Chicago is published once per semester and is available free of charge. Its sponsors, advisors, and the University of Chicago are not responsible for its contents. The views expressed in this journal are solely those of the respective authors.




Business and Marketing: Interface with corporate and academic sponsors, negotiate advertising and cross-promotion deals, and help The Triple Helix expand its horizons across the world!

Leadership: Organize, motivate, and work with staff on four continents to build interchapter alliances and hold international conferences, events, and symposia. More than a club.

Innovation: Have a great idea? Something different and groundbreaking? Tell us. With a talented team and a work ethic that values corporate efficiency, anyone can bring big ideas to life within the TTH meritocracy.

Literary and Production: Lend your voice and offer your analysis of today's most pressing issues in science, society, and law. Work with an international community of writers and be published internationally.

A Global Network: Interact with high-achieving students at top universities across the world; engage diverse points of view in intellectual discussion and debate.

Science Policy: Bring your creativity to our newest division, reaching out to students and the community at large with events, workshops, and initiatives that confront today's hardest and most fascinating questions about science in society.

For more information and to apply, visit www.thetriplehelix.org. Come join us.



