Fall 2010 | UChicago
A Production of The Triple Helix
THE SCIENCE IN SOCIETY REVIEW The International Journal of Science, Society and Law
Reconsidering Health in the 21st Century
Medicine: Lost in Translation?
Waist-to-Hip Ratio as a Universal Standard of Beauty
Caffeinated Youth: Regulation of Energy Drinks in Question
Cover photo: ©iStockphoto.com/Lise Gagne
ASU • Berkeley • Brown • Cambridge • CMU • Cornell • Dartmouth • Georgetown • Harvard • JHU • Northwestern • NUS • Penn • UChicago • UCL • UNC Chapel Hill • University of Melbourne • UCSD • Yale
EXECUTIVE MANAGEMENT TEAM
Chief Executive Officer: Bharat Kilaru
Executive Editor-in-Chief: Dayan Li
Chief Production Officer: Chikaodili Okaneme
Executive Director of E-Publishing: Zain Pasha
Executive Director of Science Policy: Karen Hong
Chief Operations Officer, North America: Jennifer Ong
Chief Operations Officer, Europe: Francesca Day
Chief Operations Officer, Asia: Felix Chew
Chief Financial Officer: Jim Snyder
Chief Marketing Officer: Mounica Yanamandala

BOARD OF DIRECTORS
Chairman: Kevin Hwang
Erwin Wang, Kalil Abdullah, Melissa Matarese, Joel Gabre, Manisha Bhattacharya, Julia Piper

GLOBAL LITERARY AND PRODUCTION
Senior Literary Editors: Dhruba Banerjee, Victoria Phan, Robert Qi, Linda Xia, Angela Yu
Senior Production Editors: Darwin Chan, Annie Chen, Frankey Chung, Jenny Crowhurst, Indra Ekmanis, Mabel Seah, Robert Tinkle, Jovian Yu
TRIPLE HELIX CHAPTERS
North America: Arizona State University; Brown University; Carnegie Mellon University; Cornell University; Dartmouth College; Georgetown University; Georgia Institute of Technology; Harvard University; Johns Hopkins University; Massachusetts Institute of Technology; Northwestern University; Ohio State University; University of California, Berkeley; University of California, San Diego; University of Chicago; University of North Carolina, Chapel Hill; University of Pennsylvania; Yale University
Europe: Cambridge University; University College London
Asia: National University of Singapore; Peking University; Hong Kong University
Australia: University of Melbourne; University of Sydney; Monash University
THE TRIPLE HELIX
A global forum for science in society

The Triple Helix, Inc. is the world's largest completely student-run organization dedicated to taking an interdisciplinary approach toward evaluating the true impact of historical and modern advances in science.

Work with tomorrow's leaders. Our international operations unite talented undergraduates with a drive for excellence at over 25 top universities around the world.

Imagine your readership. Bring fresh perspectives and your own analysis to our academic journal, The Science in Society Review, which publishes International Features across all of our chapters.

Reach our global audience. The E-publishing division showcases the latest in scientific breakthroughs and policy developments through editorials and multimedia presentations.

Catalyze change and shape the future. Our new Science Policy Division will engage students, academic institutions, public leaders, and the community in discussion and debate about the most pressing and complex issues that face our world today.
All of the students involved in The Triple Helix understand that the fast pace of scientific innovation only further underscores the importance of examining the ethical, economic, social, and legal implications of new ideas and technologies — only then can we completely understand how they will change our everyday lives, and perhaps even the norms of our society. Come join us!
TABLE OF CONTENTS
Health in the 21st Century

In this issue:
Caffeinated Youth: Regulation of Energy Drinks in Question (p. 4)
Depression: Hope through Cognitive Behavioral Therapy (p. 6)
Obesity epidemic in the US (p. 11)
Bio Art: A controversial new art form (p. 27)

UCHICAGO

Cover Article
4  Caffeinated Youth: Regulation of Energy Drinks in Question (Margaret Kim, CMU)

Local Articles
6  Cognitive Behavioral Therapy: Finding the Right Treatment for Depression (Joseph Bartolacci)
9  Synesthesia and You (Alexandra Carlson)
11  Reconsidering Health in the 21st Century (Kara Christensen)
14  Medicine: Lost in Translation? (Benjamin Dauber)
18  Island Biogeography and Continental Habitats: Evaluating Species-Area Relationships in Terrestrial Conservation Efforts (Liddy Morris)
21  Waist-to-Hip Ratio as a Universal Standard of Beauty (Jacob Parzen)
23  Bioluminescence and Fluorescence: Understanding Life through Light (Bonnie Sheu)
27  Bio Art: Biotechnology's Slippery Slope (Anna Zelivianskaia)

International Features
30  Music and the Mind: Can Music Benefit Those with Autism? (Elizabeth Aguila, CMU)
33  An Ever-Evolving Epidemic: Antibiotic Resistance and its Challenges (Kevin Berlin, UC Berkeley)
36  Do We Have Conscious Control Over Which Products We Purchase? (Emily Raymond, Melbourne)
39  Zero: The Riddle of Riddles (Ritika Sood, Cambridge)
41  The Great Disjoint of Language and Intelligence (Koh Wanzi, NUS)
43  Making Sense of Our Taste Buds (Angela Yu, UCSD)

Cover design courtesy of Chikaodili Okaneme, Cornell University
INSIDE TTH
Message from the Presidents and Editor in Chief

Dear Readers,

The replication of the double helix of DNA and its inevitable mutations have been central to the evolution of all the forms of life that we know. Through this process of evolution, life has been able to adapt to new surroundings and, over time, take on the truly spectacular and varied forms that it does today. While we at the Triple Helix cannot lay claim to such an elegant process occupying geological time scales, we would like to think that we, too, have been able to adapt to a changing environment and have been expanding our reach into new areas. Not only that, but it didn't even take us millions upon millions of years to do it!
STAFF AT UCHICAGO
Co-Presidents: Sean Mirski, Bharat Kilaru
Editor in Chief: Dan Plechaty
Managing Editors: Jonathan Gutman, Kara Christensen, Jacob Parzen
Co-Directors of Science Policy: Jim Snyder, Michelle Schmitz
Director of Marketing: Lauren Blake
Writers: Kara Christensen, Elizabeth Morris, Jacob Parzen, Anna Zelivianskaia, Bonnie Sheu, Joseph Bartolacci, Benjamin Dauber, Alexa Carlson
Associate Editors: Jacob Alonso, Leland Bybee, Lakshmi Sundaresan, Sylwia Nowak, Michelle Schmitz, Laurel Mylonas-Orwig, Gregor-Fausto Siegmund, Andrew Kam
Faculty Review Board: Timothy Sentongo, Charles Kevin Boyce, Dario Maestripieri, Stephen Pruett-Jones, Trevor Price, Michael LaBarbera, Benjamin Glick, Shona Vas, David Glick, Howard Nusbaum
We are referring, of course, to our revamped website, which hosts online-only articles, blog posts, archives of previous issues, and notifications about Chapter events. Indeed, you may not even be holding this journal in your hand, but are perhaps reading the PDF file online. Either way, despite our ongoing ventures into the various forms of new media, the print journal still represents the essence of the Triple Helix mindset: engaging with the science that affects our society in an interesting, scientifically rigorous, and multidisciplinary fashion. We hope that you enjoy this issue, however you may choose to access it, and that you take the time to get involved in the ongoing discussion, both on our website and through the many speakers and events that we sponsor throughout the year.

Thank you,
Sean Mirski and Bharat Kilaru, Co-Presidents
Dan Plechaty, Editor-in-Chief
Message from the Managing Editors

These days, news is pushed onto iPhones, published in blogs, and linked to all other social media, and yet many scientific discoveries remain unknown to the general public. For every publicized breakthrough, many others remain lost, overwhelmed by the sheer mass of research being produced today. The Triple Helix seeks to repair that lack of communication through comprehensive review and concise analysis of a variety of scientific topics, focusing not just on innovation and discovery, but also on how science influences life outside of the lab. And so, with each publication, we hope to promote the accessibility of scientific thought. With this issue of The Triple Helix introducing our first literary cycle as Managing Editors, we are excited to present the passionate and dedicated work of our writers and editors. We hope that you enjoy reading this issue as much as we enjoyed creating it. Sincerely,
Jonathan Gutman and Kara Christensen Managing Editors
© 2010, The Triple Helix, Inc. All rights reserved.
Message from the CEO

Dear Reader,

The Triple Helix is a completely unique organization, created and run entirely by undergraduates devoted to creating a global forum for science in society. What at first appears a focused interest is actually an eclectic vision that ventures to present ideas from students studying medicine, law, math, politics, and much more. With more than 20 chapters across the world and more than 1,000 students from a wide range of disciplines, The Triple Helix offers a truly unique presentation of academic passion.

Before you look through The Science in Society Review issue awaiting you, I hope to share with you my insight into the level of work behind every word. The articles in the following pages are derived from an outstanding level of editorial and literary commitment. Each piece represents not only the work of the writer, but also the work of one-on-one associate editors, a highly effective editorial board, astute international senior literary editors, an impressive faculty review board, and an imaginative production staff that reinvents the journal every issue. As you read the following pieces, we hope you will come to appreciate the truly professional level of work that goes into every paragraph.

It is with that same dedication to improvement that every division of The Triple Helix makes progress every day. As we enter the next cycle, I hope to witness the next surge of interest and passion from every member as we strive to achieve the dreams we have always had for the organization. We invite you as readers and supporters to come forward and develop new visions that will push us to the next level.

Sincerely,
Bharat Kilaru
CEO, The Triple Helix, Inc.
Message from the EEiC and CPO

Scientific discoveries and technological innovations emerge every day from humanity's impulse to decipher nature and improve the human condition. Amidst this rush of advancement, it is easy to marvel at new gadgets, swoon over groundbreaking ideas, and overlook underlying complexities. However, science is not limited to findings or inventions. As it seeps through the social framework, it is continually influenced by the ethics, economics, politics, laws, and culture of our societies. Therefore, to appropriately understand science's true potential, we must analyze the assumptions, realities, motives, and implications of our scientific knowledge and endeavors.

With this spirit of critical exposure and examination, The Science in Society Review strives to spark and contribute to an ongoing discussion about the most important social issues in science today. This journal does not provide answers. Instead, it raises questions, questions that our student writers attempt to address in their investigations of both locally and globally relevant topics. Given the diverse backgrounds of our international body of writers, the greatest value of this publication is the variety of perspectives that they put to paper and share with you.

While you read the pieces on which our writers and editors have spent so much time and thought, we ask that you be excited, startled, incensed, and actively engaged. We view this journal not as a repository of passive information, but as a platform for an active conversation about the complicated relationship between science and society. In hopes of encouraging social consciousness and positive social change, we at The Triple Helix collectively aim to keep this conversation going, and your participation is paramount. After all, even the greatest changes in society have started out as simple conversations.
Sincerely,
Dayan Li and Chikaodili Okaneme
Executive Editor-in-Chief and Chief Production Officer
CMU
Caffeinated Youth: Regulation of Energy Drinks in Question Margaret Kim
Energy drinks are the star of the beverage industry; no such products existed 15 years ago, but today they have conquered the drink market. Since the debut of Red Bull in 1997, more than 500 new energy drinks have been rushed into the beverage market, establishing a $5.7 billion industry by 2006, with annual growth of 55% in the United States alone [1,2]. Despite the recession, sales of a new form of concentrated energy beverage, the so-called "energy shots," have been undefeatable, with sales expected to double each year to about $700 million [3]. With the rise of the caffeinated beverage industry, it has become common to see supermarkets and convenience stores dedicating an entire aisle of the beverage section to energy drinks.

What makes energy drinks so popular? As the name implies, energy drinks are intended to give people quick "energy." Energy drinks can be described as part soft drink and part nutritional supplement. Like many soft drinks, the main source of this energy is caffeine and sugar. In addition, energy drink companies claim that other components are added to "enhance" the nutritional value and boost the body's energy. These components include ephedrine, taurine, ginseng, B-vitamins, guarana seed, carnitine, inositol, and ginkgo biloba: ingredients known either to stimulate the nervous system or, as amino acids, to help boost metabolism [4].

However, the rise of energy drinks has come with concerns. Energy drinks are especially popular among young adults, the prime advertising targets of drink companies. Research suggests that youth are more susceptible to misuse and that commercial messages omit the potential negative health effects of energy drinks. As a result, scientists argue that these energy drinks need regulation.
The Growing Presence on College Campuses
The popularity of energy drinks has especially penetrated college campuses, where students are in particular need of quick energy. Through energy drinks, students seek to compensate for lack of sleep and to fuel physical activity, late-night partying, and studying for exams and projects. It is common to see a line of students holding some type of energy drink at the convenience store during final exam season. But to many students, drinking an energy drink is not a seasonal practice but a regular habit. An engineering student from Carnegie Mellon University stated, "I habitually grab an energy drink at the
convenience store to be alert in my morning class." A survey of 496 college students conducted by the Department of Nutrition and Dietetics of East Carolina University reported that 51% of students drink more than one energy drink each month. Among energy drink users, the most common reasons for use were to compensate for insufficient sleep (67%), to increase energy (64%), to drink with alcohol while partying (54%), to study for finals or projects (50%), to drive for a long period of time (45%), and to treat a hangover (17%) [1].

Marketing the Energy Drink
Energy drink companies understand the busy lifestyle of the college student; they know the competitive nature of classes and the common motto of college students "to study hard and play hard." As a result, companies spend millions of dollars to cater to young adults and college students. These products are advertised as a "natural performance enhancer" for studying and other activities enjoyed by many young adults. Red Bull claims to be a "functional beverage" that "improves performance, increases concentration and reaction speed, increases endurance, stimulates metabolism," appropriate for sports, driving, and leisure activities [1,5]. (Image reproduced from [8].)

Many products appeal to young adult culture. For example, Rockstar derived its name from the popular song "Party Like a Rock Star," while Monster's slogan, "unleash the beast," targets the youth's desire for glorification and wildness. Energy drink companies sponsor athletes, sporting events, nightclubs, and music bands. In addition, many advertisements rely on sex appeal. In extreme cases, the products are promoted with the imagery of drug use; the Cocaine energy drink has been marketed as a "legal alternative" to the class A drug, while Blow is a white powdered energy drink mix whose packaging evokes the use of cocaine [1].
These aggressive marketing techniques establish a distinct image of energy drinking among students. One study shows that energy drink consumption has been associated with a "toxic jock" identity and masculinity among college undergraduates [6]. Such an identity is in turn associated with risk-taking behavior such as "drinking, sexual risk-taking, delinquency, and interpersonal violence" [6]. Indeed, a survey of 795 undergraduate students indicates that measures of masculinity and risk-taking behavior have a positive relationship with the frequency of energy drink consumption [7].

The Potential Health Effects and the Call for Regulation
This image of energy drinks among youth is even more problematic
because of the potential health effects that consumption may cause. Although energy drink companies rigorously boast about the amount of energy their products provide, scientists claim that they ignore the potential side effects. Dr. Roland Griffiths, a professor of psychiatry and neuroscience at Johns Hopkins University School of Medicine, has voiced concern about energy drinks' inadequate labeling and their advertisement as natural performance enhancers. Such messages are targeted toward young adults, who are less tolerant of caffeine and may consequently suffer negative effects from consumption. Griffiths stated that "many of these drinks do not label the caffeine content," while some energy drinks contain as much caffeine as 14 cans of soda [8]. Commonly reported consequences of consumption were caffeine intoxication and overdose, with symptoms including nervousness, anxiety, restlessness, insomnia, gastrointestinal upset, tremor, tachycardia, and psychomotor agitation [9]. In the long term, overconsumption may result in caffeine dependence and withdrawal [9]. Also concerning is the fact that mixing caffeine with other stimulants has not been proven to be completely safe [9].

Shockingly, in a few cases, consumption of energy drinks has been linked to death. In these cases, the individual was either performing rigorous physical activity or consuming alcohol while drinking the energy drink. According to The Higher Education Center for Alcohol and Other Drug Abuse and Violence Prevention, three people died in Sweden after drinking Red Bull: two had mixed it with alcohol, and the third drank it after an exercise session [10]. In 2008, a sixteen-year-old student in Florida died after consuming alcohol and energy drinks at a party [10]. Despite such a dangerous link between alcohol and energy drinks, an increasing number of college students are consuming the two together.
In the East Carolina survey of 496 college students, 27% of students reported mixing energy drinks and alcohol, and 49% consumed more than three energy drinks per occasion [1]. Griffiths claims that when energy drinks and alcohol are consumed together, the symptoms of alcohol intoxication are less evident, increasing the potential for alcohol-related accidents and abuse. A different survey of college students indicated that, compared to those who consumed alcohol alone, students who consumed alcohol mixed with energy drinks had a significantly higher frequency of alcohol-related consequences, including becoming the offender or victim of sexual assault, driving drunk, or being injured [9]. Such high concern over the potential health and safety issues of mixing energy drinks and alcohol led the Food and Drug Administration to question the safety and legality of nearly 30 manufacturers of caffeinated alcoholic beverages in 2009 [11].
References
1. Malinauskas BM, et al. A survey of energy drink consumption patterns among college students. Nutrition Journal. Oct 2007;6.
2. Boyle M, Castillo VD. Monster on the loose. Fortune. 2006;154:116-122.
3. Neuman W. "Energy Shots Stimulate Power Sales." New York Times. 10 July 2009.
4. Watson S. "How Do Energy Drinks Work?" TLC Cooking. Accessed 10 April 2010. http://recipes.howstuffworks.com/energy-drink.htm
5. Benefits of Red Bull. Red Bull USA. Accessed 10 April 2010. http://www.redbullusa.com/cs/Satellite/en_US/Red-Bull-Home/Products/011242746208542
6. Miller KE. Energy drinks, race, and problem behaviors among college students. J Adolesc Health. Nov 2008;43(5):490-497.
7. Miller KE. Wired: energy drinks, jock identity, masculine norms, and risk taking. J Am Coll Health. 2008;56(5):481-489.
8. http://teens.drugabuse.gov/blog/wp-content/uploads/2010/03/energy-molecule.gif
9. Doheny K. "Energy Drinks: Hazardous to Your Health?" WebMD Health News. 24 Sept 2008. http://www.webmd.com/food-recipes/news/20080924/energydrinks-hazardous-to-your-health
10. Griffiths RR. Caffeinated energy drinks—a growing problem. Drug Alcohol Depend. 1 Jan 2009;99(1-3):1-10.
11. Kapner DA. Ephedra and Energy Drinks on College Campuses. Infofacts Resources. The Higher Education Center for Alcohol and Other Drug Abuse and Violence Prevention. July 2003. http://www.higheredcenter.org
12. Herndon M. FDA to Look Into Safety of Caffeinated Alcoholic Beverages; Agency Sends Letters to Nearly 30 Manufacturers. FDA News Release. 13 Nov 2009. http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm190427.htm
The Challenge of Regulation
Because of these ongoing problems with energy drink consumption, several countries have begun regulating the sale of energy drinks. Energy drinks in the European Union require a "high caffeine content" label, while those in Canada must indicate the danger of consumption with alcohol [9]. Norway and France have restricted the sale of energy drinks, while Denmark has prohibited it [9].

The regulation of energy drinks in the United States has been difficult for several reasons. First, energy drinks are generally marketed as dietary supplements rather than food [8]. Although the FDA limits the caffeine content of soft drinks, which are considered food, to 71 mg per 12 fluid ounces, there is no limit prescribed for energy drinks, because they are considered dietary supplements [8]. Second, caffeine is a natural compound that has been widely consumed for years in coffee and tea [9]. Spokespersons for the American Beverage Association have pointed out that most energy drinks contain the same amount of caffeine as a cup of brewed coffee, or less [8]. They argue that if labels are required on energy drinks, brewed coffee should be subject to labeling as well [8], stating, "Energy drinks can be part of a balanced lifestyle when consumed sensibly."

Regulating consumer products has always been a complicated subject. Not surprisingly, the questions posed about energy drinks resemble those posed about cigarettes and alcohol. Abuse of cigarettes and alcohol has been linked to negative health consequences, and as a result both have been the subject of never-ending debate over regulation in areas such as advertising, labeling, and sales to minors. The questions surrounding energy drinks are comparable: is it acceptable for energy drink companies to capitalize on youth culture in ways that may have dangerous consequences? Or is it a matter of consumer choice?
As in those cases, it is difficult to answer in black and white, and the dangers and regulation of energy drinks remain in heated controversy. However, the popularity and misuse of energy drinks indicate that minors and young adults are particularly vulnerable to dangerous consumption. Our society needs education about daily consumption and encouragement of responsible drinking behavior.

Margaret is a senior studying Chemistry/Biological Science and Business at Carnegie Mellon University.
UCHICAGO
Cognitive Behavioral Therapy: Finding the Right Treatment for Depression Joseph Bartolacci
In late 2009, President Barack Obama told Congress, "There's no reason we shouldn't be catching diseases like breast cancer and colon cancer before they get worse…It saves money, and it saves lives" [1]. This statement contradicts the general tendency in many fields of modern medicine, which is to provide treatment but not prevention, even when prevention is comparatively easier; everyone, for example, would find a mammogram easier than a mastectomy. Other relatively simple measures to encourage healthy eating, physical activity, and decreased tobacco consumption could vastly improve public health, too. But what about mental health? Today, myriad pills exist to alter almost every emotion, often hiding the root causes of these problems. While it may currently be impossible to prevent mental illness, what must be considered, then, is whether or not we are treating it effectively.

In 2005, the Centers for Disease Control found that 118 million prescriptions for antidepressants were issued, making them the most prescribed medication in the country [2]. Even though studies conducted in the United Kingdom and the United States showed decreased suicide rates in patients taking antidepressants, studies have also shown that these medications serve more to alleviate symptoms than to cure them, and relapse is common [3]. A long-time alternative or supplement to pharmacotherapy has been psychotherapy. In particular, over the last fifty years there has been a shift from psychoanalytic methods toward evidence-based treatments, which include the subtype called Cognitive Behavioral Therapy (CBT). As this therapy becomes more widely used, an important question emerges: what exactly is Cognitive Behavioral Therapy, and how does it differ from other forms of treatment?

CBT first arose after much friction between Behavior therapists and Cognitive therapists as to which of their theories of depression was the most accurate.
Two independent theories were developed, the behavioral theory and the cognitive theory of depression, with the cognitive theory posited by Aaron Beck ultimately becoming more widely accepted [4]. Under this cognitive theory, there are three core schemas to depression: the feeling that the self is inadequate or defective, that all past experiences had negative outcomes, and that the future holds bleak promise, if any. Dr. Beck realized that patients suffering from depression undergo depressogenic thinking that maintains these views, and in order to combat these thought processes he promoted the use of both behavior-based and cognition-based treatments, forming what is now known as CBT [4].

Cognitive Therapy has to date been explored more than any other psychosocial therapy [5]. Modern cognitive therapy began with such notable psychologists as George Kelly in the 1960s, but it was Dr. Aaron Beck who brought the field most into the public eye and expanded its applications [6,7]. Cognitive therapy differs from Behavioral therapy in that it examines the cognitive or thought processes underlying motivation for action, as opposed to learned actions. People's thoughts and reactions to events are taken as genuine, one-sided pieces of evidence that Cognitive Therapy relies on to interpret and process responses. The therapist adds no input on this introspection but takes a patient's observations about feelings, emotions, and wishes as data [8]. The importance of using thoughts as raw data is that the procedure becomes more empirical. The therapist subsequently uses this information to challenge maladaptive thinking.

(Image reproduced from [24].)
Dr. Beck highlights the role that meaning plays in cognitive therapy by posing the following model: "A boy is teased by his friends: The objective meaning of the event is simply that they are goading him. The personal meaning for the boy who is teased is more complex, for example, 'They don't like me' or 'I am a weakling.'" The repercussions on a child can obviously be significant when such an event occurs. Often, however, these thoughts are kept private, even when the reaction is excessive or inappropriately suited to the situation. When reactions to events consistently occur in such a manner, they are called "cognitive distortions" and are what Cognitive Therapy defines as the basis for emotional disorders [8]. Cognitive therapy may help those diagnosed with depression by encouraging the development of a more positive sense of self-worth. When complemented with techniques from behavior therapy, it may become more effective still.

Behavior Therapy began around 1912, when psychologists began to deviate from earlier programs like psychoanalysis because they delved too often into "intangibles and unapproachables" [9]. Behavioral studies are thus by no means a new development, but it was not until the early 1970s that C.B. Ferster and Peter Lewinsohn devised the behavioral theory of depression. This theory states that "individuals become depressed when there is an imbalance of punishment to positive reinforcement in their lives" [10]. To restore this equilibrium, therapists use Behavioral Activation, in which depressive behavior is addressed and attempts are made to avoid such actions and to create more rewarding and positively reinforcing behaviors [11]. One such example is a treatment called the Coping With Depression (CWD) course
designed specifically for adolescents and conducted in a group setting. Targeting the most common poor behaviors that can lead to depression, it teaches patients better social skills and relaxation techniques and encourages participation in "happy" or pleasant activities [12]. The developers of CWD found that many depressed individuals rarely seek to obtain pleasure from normal activities, and they use a Behavior Therapy background to change this behavior. Such simple changes have proven consistently as effective for patients suffering from depression as antidepressants, if not more so [5]. In addition, relapse was much less common when Behavior Therapy was used rather than medication [5].

To behavior therapists, depression is seen as the result of an interruption in behavior that encourages healthy living, rather than as the product of poor behavioral tendencies [12]. Following this theory, a stressor disrupts the normal flow of behavior or slows the 'reward' that promotes good behavior; the subject alters and 'learns' new habits, resulting in depressive symptomology.

If one takes the example of the boy mentioned previously, one can see a clear distinction between the treatment pathways that Cognitive Therapy and Behavior Therapy would offer. If Cognitive Therapy were undertaken, the therapist would challenge the boy's maladaptive thinking: his belief that people dislike him or think he is a weakling. In Behavior Therapy, however, feelings rarely enter into the therapist's treatment. When the boy tells the therapist that he feels as if people dislike him, the therapist could ask, "What are the consequences of this belief?" [4] If the boy's response were, for example, to dissociate himself from those who teased him, the boy might unconsciously place himself in a cycle in which he avoids problems rather than confronting them. Behavioral Activation "activates" the boy into undertaking behaviors
Reproduced from [25].
© 2010, The Triple Helix, Inc. All rights reserved.
THE TRIPLE HELIX Fall 2010
that reduce such poor behavior and build coping skills that increase positive return.
Cognitive Behavioral Therapy heavily utilizes techniques from both Cognitive and Behavior Therapy in order to present a more effective therapy model. From the behavioral tradition, CBT takes root in human learning theory, which poses innate and learned behavior as a result of reward or punishment. From the cognitive tradition, CBT pulls from the idea that a patient’s own cognitive processes (the thinking that occurs in response to a stimulus) can be observed and used as a route to understanding symptoms. This combination of therapeutic techniques has proven effective, and in 2006 the National Institute for Health and Clinical Excellence in the UK recommended CBT as the treatment of choice for obsessive-compulsive disorder, bulimia nervosa, phobias, anxiety, and depression. Amongst the techniques used in CBT are group therapies, interpersonal skills training, and, in some schools, hypnosis, which was approved by the AMA in 1958 and the APA in 1960 [16].
In 2005, the Centers for Disease Control found that 118 million prescriptions for antidepressants were issued, making them the most prescribed medication in the country.
Direct comparisons between CBT and medication are available in studies involving phobic disorders, digestive disorders, anxiety disorders, smoking cessation, insomnia, and depression [17-21]. In each of these studies, CBT was found more effective than medication, although for some, pharmacopsychotherapeutic methods were also recommended [18]. CBT was also found to lessen treatment duration as compared to pharmacotherapy [22]. Stemming from CBT’s versatility is the rise of computerized CBT, which American agencies are currently appraising in the hope of reducing cost and increasing availability to broader populations. In addition to being more widely accessible, computerized CBT would provide an option for patients unwilling to share their deepest concerns with a human therapist [23]. Future research and product development may soon make CBT a viable option for a larger range of people in need.
CBT does not claim to be a cure-all, nor does it aim to completely supplant pharmacotherapy, but research data show that it is an effective treatment option for a vast range of illnesses. However, as easy as the principles of CBT may seem on paper, the real-life application of these cognitive and behavioral changes is not always a simple or quick process. The human mind is intricate and still very much shrouded in mystery. Cost, human will and reluctance, and the still-experimental status of some CBT treatments leave many patients unsure and out of reach of the help they need. For these reasons it seems unlikely that CBT will completely replace psychotropic medications, but CBT has reached a point where it may prove a viable alternative for those who wish to reduce their use of pharmaceutical products while pursuing an effective therapy.

Joseph Bartolacci graduated in 2010. At the University of Chicago, he majored in Biology and minored in Romance Languages and Literatures.

References
1. Obama, B. Congressional Address. Sept 10, 2009.
2. Health, United States 2005. Center for Disease Control National Health Statistics. 2005.
3. Gibbons RD, Brown CH, Hur K, Marcus S, Bhaumik D, Mann JJ. Relationship Between Antidepressants and Suicide Attempts: An Analysis of the Veterans Health Administration Data Sets. The American Journal of Psychiatry. July 2007.
4. Milton S. Behavior Therapy vs. Cognitive Therapy For Depression: Here We Go Again. New Jersey Association of Cognitive Behavioral Therapists; 2004.
5. Dimidjian S, Hollon SD, Dobson KS, Schmaling KB, Kohlenberg RJ, Addis ME, Gallop R, McGlinchey JB, Markley DK, Gollan JK, Atkins DC, Dunner DL, Jacobson NS.
Randomized trial of behavioral activation, cognitive therapy, and antidepressant medication in the acute treatment of adults with major depression. Journal of Consulting and Clinical Psychology. 2006; 74(4): 658-670.
6. Beck Institute for Cognitive Therapy and Research. http://www.beckinstitute.org/InfoID/302/RedirectPath/Add1/FolderID/196/SessionID/{6E58D7CC-4D9D4A72-906A-21F0F19FEEF2}/InfoGroup/Main/InfoType/Article/PageVars/Library/InfoManage/Zoom.htm
7. Kiebles J. Interview, May 2010. Northwestern University.
8. Beck AT. Cognitive Therapy and the Emotional Disorders. Penguin Group; October 1979. Pg. 1-3.
9. Watson JB. Behaviorism. Transaction Publishers; 1998. Pg. 6.
10. Martell CR. Cognitive Behavior Therapy: Applying Empirically Supported Techniques in Your Practice. Chapter 6. John Wiley and Sons; 2008.
11. Jacobson NS, Dobson K, Truax PA, Addis ME, Koerner K, Gollan JK, Gortner E, Prince SE. A component analysis of cognitive-behavioral treatment for depression. J Consult Clin Psychol. 1996; 295-304.
12. Antonuccio DO. The coping with depression course: A behavioral treatment for depression. The Clinical Psychologist. 1998; 51: 3-5.
13. Martell CR, Addis ME, Jacobson NS. Depression in Context: Strategies for Guided Action. W.W. Norton; 2001.
14. Weissman MM, Markowitz JC, Klerman GL. Comprehensive Guide to Interpersonal Psychotherapy. New York: Basic Books; 2000.
15. Weissman MM, Markowitz JC, Klerman GL. Clinician’s quick guide to interpersonal psychotherapy. New York: Oxford University Press; 2007.
16. Kiebles JL, Keefer L. Hypnotically-assisted Relaxation: Behavioral Arm of the Gastroesophageal Reflux Disease Comparative Effectiveness Trial. May 20, 2010.
17. Roy-Byrne PP, Craske MG, Stein MB, Sullivan G, Bystritsky A, Katon W, Golinelli D, Sherbourne CD. A randomized effectiveness trial of cognitive-behavioral therapy and medication for primary care panic disorder. Arch Gen Psychiatry. 2005 Mar.
18. Roy-Byrne P, Craske MG, Sullivan G, Rose RD, Edlund MJ, Lang AJ, Bystritsky A, Welch SS, Chavira DA, Golinelli D, Campbell-Sills L, Sherbourne CD, Stein MB. Delivery of evidence-based treatment for multiple anxiety disorders in primary care: a randomized controlled trial. JAMA. 2010 May 19.
19. Hernández-López M, Luciano MC, Bricker JB, Roales-Nieto JG, Montesinos F. Acceptance and commitment therapy for smoking cessation: a preliminary study of its effectiveness in comparison with cognitive behavioral therapy. Psychol Addict Behav. 2009 Dec; 23.
20. Sivertsen B, Omvik S, Pallesen S, Bjorvatn B, Havik OE, Kvale G, Nielsen GH, Nordhus IH. Cognitive behavioral therapy vs zopiclone for treatment of chronic primary insomnia in older adults: a randomized controlled trial. JAMA. 2006 Jun 28.
21. Spett M. CBT vs. Drugs for Depression. New Jersey Association of Cognitive Behavioral Therapists. 2002 Mar 7.
22. Roy-Byrne P, Craske MG, Sullivan G, Rose RD, Edlund MJ, Lang AJ, Bystritsky A, Welch SS, Chavira DA, Golinelli D, Campbell-Sills L, Sherbourne CD, Stein MB. Delivery of evidence-based treatment for multiple anxiety disorders in primary care: a randomized controlled trial. JAMA. 2010 May 19.
23. Computerised cognitive behaviour therapy for depression and anxiety.
London (UK): National Institute for Health and Clinical Excellence (NICE). 2006 Feb. 38 p. (Technology appraisal; no. 97).
24. http://www.cdc.gov/features/depression/Depression_b200px.jpg
25. http://drugabuse.gov/scienceofaddiction/images/026.gif
Synesthesia and You Alexandra Carlson
Does a particular letter in this sentence appear in an unusual hue? When you hear a shout, do you see a corresponding blast of color? When you walk barefoot, does a singular flavor cross your taste buds? Though these questions may appear odd, even nonsensical, to some, each characterizes one of many legitimate sensory perceptions that numerous humans experience daily. This coupling of sensory responses is known as synesthesia, a condition of neurological basis in which the stimulation of one sensory system (also known as a sensory modality) causes another sensory system to be stimulated. An involuntary phenomenon, synesthesia is regarded by its possessors as real, and it is felt mostly “outside the body” as opposed to “seen in the mind’s eye” [1]. The perception, for example, of a sound with a specific color is consistent and regular, similar to how, for a non-synesthete, hearing is regular and consistent. This is the difference between synesthesia and a hallucination, and it is what allows the condition to be diagnosed. But what is the catalyst of such a unique neurological evolution? What could possibly cause this nonintuitive unification of sensory information to yield something that is understood by another? How is it that we can so easily describe a noise as being “sharp” when there is no possible way to hear a tactile feeling?
To understand how this “mixing” of perceptions can occur, it is necessary to understand the mode by which the brain receives sensory signals (i.e., how a human perceives his or her surroundings). Humans have five sensory systems: taste, touch, sight, hearing, and smell. Each receives stimuli, which are processed in different cortical regions of the brain. For example, a sound will enter the ear, be transformed into an electrical signal by sensory receptors, and then travel to the cortical areas of the brain where sound stimuli are interpreted. In the synesthetic brain, the cortical regions responsible for processing information from the different sensory systems are connected in such a way that the stimulation of one area by sensory signals causes the immediate stimulation of another
[2]. In short, a synesthete will experience two sensory perceptions when one stimulus is received. The underlying cause of such stimulation is as curious as the phenomenon itself. Researchers have developed several theories to define the possible neural substrate of the condition. The first is that a synesthete’s brain contains a larger number of neuron-to-neuron connections between different
Reproduced from [10]
sensory modalities. It is thought that during the embryonic development of the synesthete’s brain, a genetic mutation prevented certain neural connections from being “pruned”; consequently, these extra pathways between sensory modalities are allowed to carry electrical excitations from one part of the brain to another, ultimately causing simultaneous sensory experiences [3]. This theory is analogous to (and actually supported by) the proposed cause of “phantom limbs”: cortical reorganization. Through the creation of new neural bridges between areas, the stimulation of nerves in other parts of the body, for example facial nerves, may also cause the feeling of still possessing the missing limb [2]. Another hypothesis deals with inhibition and disinhibition within the brain: it states that certain feedback pathways between different cortical regions, which are normally inhibited during the presence of sensory signals, are actually disinhibited in the brains of synesthetes. Researchers believe the cause of such disinhibition is rooted in the process of transmitting an electrical signal between two neurons. Communication between neural cells is driven by molecules known as neurotransmitters, which are released by an electrically excited neuron and then absorbed by a second neuron, causing excitation of the second cell. Disinhibition occurs when too much neurotransmitter is released, and the presence of the molecule affects later electric signals moving along that feedback pathway, preventing the inhibition of certain sensory information [3]. The cause of such physiology is also probably genetic, but for both theories the exact cause remains the subject of much debate. It is important to recognize, though, that both of these “umbrella theories” place the occurrence of synesthesia in the cortex of the
brain, which deals with higher-level thinking and processing.
But what does all of this mean? Scientists feel that synesthesia is responsible for creativity, and that it could play an important role in abstract thinking. To understand the connection between synesthesia and creative thinking, one needs to understand Hebb’s rule of neurology: an increase in the excitation of neurons directly causes an increase in synaptic strength; i.e., neurons that fire together wire together [5]. Some of the most recent studies have shown that there is an increase in the firing of neurons in synesthetes, which implies a greater number of neural connections, whether by direct structural connection or by chemical interactions. And, of course, as seen in the comparison between Einstein’s brain tissue and that of an Alzheimer’s patient, the complexity of thought is directly related to the number of interneural connections [5]. So it makes sense that one might conclude that synesthetes are likely to be more creative, or are able to remember and recall things on time scales that the average human could not [6].
A few neuroscientists have even suggested that this condition could have a pivotal role not only in thought processes but also in the evolution of language. Researchers argue that at some level, everyone has a form of synesthesia. The most obvious evidence for this is seen (and heard and read) every day: it is the metaphor. How would it be possible for a person to create a metaphor like “a sharp sound” or “a piercing taste” if their modalities were not cross-wired at some level? There is no way that a person could actually experience a sharp sound in real life; we simply don’t have the sensory receptors to do so. Therefore, such a relation could only be the fruit of some kind of neurological communication that occurs between the sensory modalities [6]. The diversity in how we can connect words and ideas has given us a pathway that allows us to more easily express our emotions and thoughts, to essentially communicate more efficiently than we have ever been able to before. We can compare synesthesia to an organizing system; it allows us to link more areas of our brain, making it easier to quickly and efficiently make connections that aid us in our day-to-day survival and development.
However, what this hypothesis doesn’t take into account is the idiosyncratic nature of synesthesia. It is true that certain studies (specifically those performed by Ramachandran and his colleagues) have found that many artists are synesthetes, and that the condition’s neurological basis may lie in a heightened number of neural connections, but metaphors themselves have a universality that contradicts the singular expressions of synesthesia. Metaphors, specifically the sensory metaphors that are so prominent in language, would not work if they were restricted to the nature of synesthesia; those metaphors work because there is cultural agreement on their meanings. To understand this, consider grapheme-color synesthesia, in which the synesthete sees a specific letter or number in a specific color. As previously discussed, the number or letter and its corresponding color vary greatly between synesthetes, making the condition idiosyncratic. Metaphors, on the other hand, carry the same meaning and perception amongst large groups of people, and therefore have an expression that contradicts the nature of synesthesia itself. That is not to say, though, that synesthesia may not play a role in complex thought. Sensations and feelings are singular in nature, but they can only be shared and communicated in a way that is understood by others, which suggests that perhaps metaphors are the societal equivalent of synesthesia; in the case of the individual, however, the connection between synesthesia and language remains strained and speculative.
Although the effects of synesthesia on thought processes and communication are mysterious, there is no denying that synesthesia reshapes the way one can define the human experience. This condition suggests not only that each human interprets and perceives reality differently, but also that differences in biological structures can cause differences in the “mind”. Unraveling the mysteries of synesthesia in neurology and psychology will further enlighten the human understanding of reality and the self.

Alexandra Carlson is a sophomore studying physics at the University of Chicago.

References
1. “Synesthesia and the Synesthetic Experience”, MIT, 7 October 1997, http://web.mit.edu/synesthesia/www/
2. Hubbard EM, Ramachandran VS. Neurocognitive Mechanisms of Synesthesia. Center for Brain and Cognition, University of California, San Diego. Neuron, Vol. 48, 509–520, November 3, 2005.
3. Baron-Cohen, S., and Harrison, J.E., eds. (1997). Synaesthesia: Classic and Contemporary Readings (Malden, MA: Blackwell Publishers,
Inc.).
4. Trends in Neurosciences, Volume 31, Issue 11, 549-550, 26 September 2008. doi:10.1016/j.tins.2008.08.004
5. Ramachandran VS, Hubbard EM. The Phenomenology of Synaesthesia. Journal of Consciousness Studies, 10, No. 8, 2003, pp. 49–57.
6. Ramachandran VS, Hubbard EM. “Hearing Colors, Tasting Shapes”. Scientific American, May 2003, pp. 51-59.
7. www.doe.sd.gov/
Reconsidering Health in the 21st Century Kara Christensen
The United States is struggling with a weight problem. According to a study published in the Journal of the American Medical Association, in 2008, 68% of the U.S. population was overweight or obese according to the Body Mass Index, yet in the same year Americans spent over $40 billion on dieting products to achieve leaner figures [1,2]. The message to diet is also evident in popular media, with weight-loss shows on television such as “The Biggest Loser” contributing to the idea of thinness as important by rewarding contestants who drop the most pounds. For many, health and weight have become synonymous; however, some researchers and activists are rejecting this weight-centric view of health in favor of a more body-positive approach [3-7]. In addition, these researchers have raised questions about the nature of the obesity epidemic that the U.S. faces today. In response to research about the implications of dieting on physical and mental well-being, many have begun to embrace the Health at Every Size movement, which questions current public perceptions of health and how it can be achieved.
Obesity has become a focus of public concern, with First Lady Michelle Obama making the problem of childhood obesity one of her focuses for 2010. Weight concern became political in 2009 when some objected to the appointment of Dr. Regina Benjamin to the position of U.S. Surgeon General, saying that her higher weight sent mixed messages about the current health initiatives against obesity. Benjamin’s predecessor, U.S. Surgeon General Richard Carmona, described obesity as “the terror within” and further stated that obesity was a greater threat to the U.S. than terrorism [8]. On a political level, obesity has been categorically targeted as a threat to the American people; however, there is still some debate as to the extent of the obesity epidemic and the way in which public health officials have advised handling the issue.
Reproduced from [20]
The current definition and extent of the obesity epidemic have faced some critique, the first of which lies in how we define obesity. The most commonly used measure is the Body Mass Index, or BMI, which provides an estimate of body fat from a calculation based on weight and height: weight in kilograms divided by the square of height in meters. A BMI of over 25 is considered overweight, while one of over 30 is considered obese; however, BMI does not distinguish between fat and muscle mass, which may mean that body fat is overestimated in muscular people and underestimated in those with very low muscle mass [9]. In light of this limitation, individuals identified as overweight by the BMI should receive additional testing by their medical providers to determine whether the additional weight is from excess fat. Beyond its limitations in distinguishing fat from muscle, BMI cut-offs for overweight and obese classifications may also vary between cultures and do not necessarily reflect differences in body proportions [10]. In countries such as Singapore, BMI scales have been revised to make cut-off limits for overweight and obese lower than those in America, in response to research suggesting that adverse lipid
levels may be observed in non-white ethnic groups at lower weights than in white ethnic groups [10].
Additionally, the presence of set cut-offs for the different weight classifications could itself be considered problematic in determining the extent of the obesity epidemic. According to Jeffrey Friedman, in an article published in the journal Nature, “Giv[ing] a fixed threshold to a continuous trait - for instance, that a BMI of 25 or above indicates overweight - means that a small shift in a trait’s mean value leads to a disproportionate increase in the number of people who exceed the threshold” [6]. Friedman illustrates this idea with the example that a 33% increase in the incidence of obesity in the 1980s corresponded to only a 3-5 kg increase in average weight [6]. Thus, a small shift in average weight resulted in many more people being classified as obese, providing a somewhat misleading representation of the population as a whole. The use of set cut-offs for overweight and obese could therefore present a problem for defining an obesity epidemic, since it defines these measures categorically. Examining the number of individuals in the categories of normal, overweight, and obese does not give the same picture of weight trends in the U.S. over time as examining the continuum of weights; however, only the categorical definitions are used for defining obesity.
While some question the use of BMI and its cut-offs as diagnostic tools, other experts are examining the popularly presumed link that higher weights cause poor health, and are instead looking at weight and health as both being correlated with other underlying conditions. For example, University of Chicago professor J. Eric Oliver explains the health/weight association by stating that “our growing weight is merely a symptom of some fundamental changes in our diet that may (or may not) affect our health” [5].
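Friedman's threshold effect can be made concrete with a small numerical sketch. If BMI in a population is (for illustration) roughly normally distributed, shifting the mean by a single point swells the tail beyond a fixed cut-off far more than proportionally. The distribution parameters below are illustrative assumptions, not figures from the article or from Friedman's data:

```python
from statistics import NormalDist

def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def frac_above(dist, cutoff):
    """Fraction of a population lying above a fixed cutoff of a continuous trait."""
    return 1.0 - dist.cdf(cutoff)

# Assumed population BMI distributions: same spread, mean shifted up
# by just 1 BMI point (roughly 3 kg for an adult of average height).
before = NormalDist(mu=26.0, sigma=4.5)
after = NormalDist(mu=27.0, sigma=4.5)

obese_before = frac_above(before, 30.0)  # share over the obesity cutoff before the shift
obese_after = frac_above(after, 30.0)    # share after a 1-point shift in the mean

print(f"Example BMI for 80 kg at 1.80 m: {bmi(80, 1.80):.1f}")
print(f"Obese share: {obese_before:.1%} -> {obese_after:.1%}")
```

Under these assumed parameters, nudging the mean up by about 4% raises the share classified as obese by roughly a third, echoing the scale of Friedman's 33% figure; the exact numbers depend entirely on the assumed distribution.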
In Oliver’s model, the same underlying causes, such as poor diet, low levels of exercise, and genetics, that affect conditions such as diabetes and heart disease also cause increases in weight. The current health paradigm, according to Oliver, mistakes obesity for a cause of health problems when the relationship is instead dependent on other factors [5]. In addition, some recent research suggests that being overweight does not carry the significant health risks previously attributed to it. A statistical analysis of the National Health and Nutrition Examination Survey found that the adult group with the lowest early death rate was the one with a BMI between 25.0 and less than 30, the group termed overweight. There were 86,000 fewer deaths in this category than in the “normal” BMI range of 18.5 to less than 25, suggesting that being “overweight” is not a significant predictor of mortality [12]. Additionally, according to an article by Campos et al., the risk for premature death only begins to increase significantly when BMI is in the high 30s [13]. Mortality rates thus appear to follow a U-shaped pattern, with higher rates at both the lower and higher ends of the BMI scale and a large range of BMIs in the middle associated with no increase [14]. These findings suggest that carrying some excess weight may not confer significantly increased health risks. However, other recent studies provide contradictory
evidence, suggesting that higher weight is in fact linked to mortality and other health conditions. A 2009 meta-analysis of 57 studies comprising a total of 895,576 people found that, after controlling for smoking and age, those in the “normal” range of BMI values had the lowest levels of mortality. Additionally, every 5-point increase in BMI (an increase of 5 kg/m²) was associated with a 30% increase in mortality [15]. Other research has found increased weight to be associated with higher rates of Type II diabetes, cancer, and cardiovascular disease [16,17]. However, some factors that researchers in both studies were unable to control for include diet, exercise, and socioeconomic status, which may therefore have contributed to skewed results. In the 2009 meta-analysis, cholesterol levels, blood pressure, and diabetes were similarly not adjusted for, meaning that obesity cannot be definitively cited as the cause of the differing mortality rates. Thus, it
appears that while high weight and mortality rates may be positively associated, there is still some debate about whether this relationship is causative or merely correlative.
Arising from these critiques and differing perspectives on the current health-weight paradigm, the Health at Every Size (HAES) movement has emerged to redefine how we think about our bodies and our weights. Instead of focusing on losing weight in order to achieve better health, HAES promotes the idea that there is diversity in body shape and body size [4,7]. According to psychologist Linda Bacon, current obesity programs are not effective and in fact promote unhealthy habits by increasing weight-related anxiety, disordered eating, and stigma against obesity [3]. The prevalence of dieting and its limited efficacy are demonstrated in statistics from the National Eating Disorders Association showing that 25% of American men and 45% of American women are currently dieting, but that 95% of dieters will regain lost weight within 1-5 years [18]. These statistics suggest that dieting as the primary means of controlling the obesity “epidemic” may not be effective for many people, and that there exists a need for alternative treatment strategies. By comparison, HAES promotes body acceptance in light of the knowledge that weight loss is not necessarily maintainable. According to HAES principles, BMI or numbers on a scale do not measure healthy weight. Instead, according to Dr. Jon Robison, a healthy weight may be considered the weight at which a person settles as he or she begins to pursue a more active lifestyle [7]. It does not mean that all people
are currently at their healthiest weight, but rather that by focusing on a healthier lifestyle, people will move towards this desired weight. This movement towards health can occur without necessarily requiring weight loss and may produce more lasting results than dieting.
The principles of HAES reflect this shift away from a focus on weight loss and towards body acceptance and an active lifestyle. The three pillars of HAES (self-acceptance, physical activity, and normalized eating) build upon these ideas [3]. Self-acceptance in HAES is the embrace of the concept that beauty and self-worth are independent of shape, size, and weight. Physical activity is the pursuit of increased activity to improve pleasure and quality of life, not to lose weight, while normalized eating involves disregarding the rules of dieting and instead eating healthily and abiding by feelings of fullness and hunger.
Initial studies have shown promise for the HAES approach [4,19]. In 2005, a study was undertaken that included 78 obese women randomly enrolled in either a HAES group or a traditional weight-loss diet group [4]. The women in the HAES group received information about increasing self-esteem and engaging in pleasurable exercise activities, with no goal to reduce weight, while those in the dieting group learned about topics such as portion control and weight-loss techniques. Participants were evaluated after one year and after two years on measurements of weight loss, other health improvements such as cardiovascular health, satisfaction, and self-esteem. After one year, those in the traditional diet group had lost an average of five kg; however, they regained most of this weight by the two-year follow-up.
Additionally, improvements in systolic blood pressure measured at the one-year follow-up were not maintained at the two-year point in the dieting group, whereas the women in the HAES group showed improved levels of total cholesterol, LDL cholesterol, blood pressure, depression, and self-esteem after two years. Those in the HAES group were also more likely than those in the dieting group to experience a decrease in disordered eating and an increase in physical activity. Finally, the dieting group had a drop-out rate of 42% while the HAES group’s was only 8%, suggesting that dieting is a more
difficult lifestyle to maintain over longer periods of time than the HAES approach. Taken together, these results suggest that dieting, which emphasizes weight loss and not lifestyle, is not as effective as HAES for creating long-term changes in health and self-image.
Health at Every Size thus attempts to address several of the problems presented by the traditional weight-loss model. First, by removing the emphasis from weight loss, HAES helps to avoid the decrease in self-esteem and the disordered eating associated with cyclical dieting. Furthermore, HAES’s acceptance of a range of body shapes and weights promotes a healthier self-image, which may serve to reduce depression and de-stigmatize obesity by no longer emphasizing the desirability of attaining a figure that is difficult for many to achieve. Although initial studies have been promising for HAES as an alternative to dieting for improving health, more research is needed to evaluate its long-term sustainability and viability as a means of creating more permanent health improvements. Additionally, it is important to keep in mind that different approaches work for different individuals; thus HAES may not be effective for all. As the aforementioned studies have shown, there are differences in research findings on the relationships between weight and health or mortality. People may be considered healthy at a wide range of weights, and what is an ideal body weight for one person may not be for another. In the same vein, no one strategy for improving health will work for all people. Instead, individuals must find the method that works best in light of their health goals. Current strategies for reducing obesity, such as dieting, may be effective for some members of the population and, if supplemented in the future by body-positive approaches such as HAES, could become even more effective.
Future research will be necessary to better understand different types of obesity interventions and to further develop treatment models that will improve health without contributing to obesity-related stigma and disordered eating.
© 2010, The Triple Helix, Inc. All rights reserved.
Kara Christensen is a senior at the University of Chicago, with a major in Psychology and a minor in Romance Languages and Literatures.
THE TRIPLE HELIX Fall 2010
UCHICAGO
Medicine: Lost in Translation? Benjamin Dauber
A migrant worker from Mexico was sent to an Oregon state psychiatric hospital, where he was diagnosed with paranoid schizophrenia. When psychiatrists spoke to him in both Spanish and English, the man became agitated, waving his arms fiercely. The psychiatrists concluded that the man must be delirious. After he had been detained for a couple of years, it was discovered that the man spoke Trique, a language indigenous to Mexico. With the help of a Trique interpreter, psychiatrists finally diagnosed him as mentally stable and discharged him. Aside from the pain and emotional suffering experienced by the man, $100,000 in unnecessary treatment was wasted. Although this is an extreme example, this migrant worker is not alone in suffering due to a language barrier.

An important change shaping the United States is the growth of the foreign-born population, accompanied by an increase in the non-English speaking demographic. Nearly 38 million people in the United States are born in a foreign country (12.5% of the total U.S. population), and more than 55 million (19.6% of the total U.S. population) speak a language other than English at home. More than 24 million of these people speak English less than "very well", and they are therefore considered to possess limited English proficiency (LEP). Among the adult population, 21% of Californians, 15% of Texans, 13% of New Yorkers, and 12% of Floridians have LEP [1]. To further complicate the situation, an estimated 311 spoken languages exist in the United States [2]. This variety in languages and English proficiencies has had a large impact on how healthcare is accessed by non-native English speakers.

Verbal communication is one of the most effective means by which a doctor can access a patient's history to begin to diagnose, treat, and relate prognoses. This open communication between doctors and patients helps ensure that patients receive the appropriate medical attention and treatment, and that they progress appropriately. Shared language is necessary for effective communication. However, because of language barriers, millions of people living in the U.S. cannot have this connection with their physicians.

For patients whose primary language is not English, navigating the American health care system—whether it be in a hospital, clinic, doctor's office, nursing home, or public health agency—can be complicated and rife with openings for miscommunication from the outset. Researchers have documented the deleterious effects of language barriers on health care and have found that LEP patients face longer hospital stays, an increased risk of misdiagnoses and medical errors, and a misuse of medical services, even after factors such as literacy, health status, health insurance, regular source of care, economic indicators, and ethnicity are accounted for [3]. Additionally, follow-up compliance, adherence to medication regimens, and patient satisfaction are significantly lower for LEP patients than for English speaking patients [4]. LEP populations are also less likely to receive preventative health services such as mammograms. Such disparities between English speaking patients and LEP patients may occur because when a patient and physician do not speak the same language, the doctor's ability to develop rapport, obtain a comprehensive patient history, learn clinically relevant information, and foster interpersonal engagement in treatment—all important aspects of the physician-patient relationship—is diminished.

The detrimental effects of a language barrier are not limited to reduced quality of care and emotional and therapeutic engagement. Language barriers can also create additional costs. Without proper communication, doctors may fail to order necessary diagnostic tests or may reach mistaken diagnoses based on what they believe their patients' symptoms to be. To avert serious or fatal consequences, some doctors resort to using expensive and often unnecessary tests to fill the gaps left by the language barrier [5]. Clear communication of symptoms during a physical exam can lead to a more accurate diagnosis, thereby decreasing the need for many laboratory and screening tests.

The legal consequences of ineffective communication between patient and doctor, although secondary to patient welfare, can be devastating for all involved. A doctor who cannot communicate with a patient due to a language barrier may deliver improper care, potentially leading to a costly malpractice lawsuit. In one noteworthy case, an 18 year old was taken to the Emergency Department (ED), accompanied by his mother. The boy was unconscious, and the only clue to his condition was the use of the Spanish word "intoxicado" by his mother, which translates as "nauseated". As no one in the ED spoke Spanish, hospital staff interpreted the word to mean that the boy was suffering from a drug overdose, and not nausea. Several days later, the doctors ordered a neurological test, which revealed a ruptured artery; he was
not suffering from drug overdose. The boy became a quadriplegic because he did not receive the appropriate treatment in a timely manner; his family sued the hospital, the paramedics, the ED, and the attending physicians for medical malpractice, and settled for $71 million [6]. Although patients who have trouble navigating the healthcare system because of language barriers may have similar problems navigating the legal system, legal consequences remain a very real risk for hospitals and their employees.

In addition to the concern about lawsuits for improper care, language barriers also raise important ethical issues. Informed consent from a patient is a necessary prerequisite for providing care and is fundamental to the physician-patient relationship. In the case of Quintero v. Encarnacion, the U.S. Court of Appeals ruled that informed consent must be obtained in the language understood by the patient [7]. Similarly, because misunderstood patients are unable to participate in their health care decisions, language barriers undermine shared decision-making. Since the ability of patients to make decisions about their own health care is a basic principle in medical practice, adequate communication is necessary for ethical medical practice [8].

The U.S. courts and government have taken steps in the last fifty years to help facilitate adequate levels of communication. In 1964, Congress passed Title VI of the
Civil Rights Act to ensure that federal money not be used to support discrimination on the basis of race or national origin in government activities, including healthcare. Courts have consistently found a close connection between national origin and language. As recently as 2003, the federal government reiterated guidelines requiring that providers receiving federal funds, such as from Medicare and Medicaid, offer language assistance to LEP patients when needed [4]. However, although the federal government requires language assistance programs to be provided, it does not outline a specific model of services
Reproduced from [16]
to adopt and does not provide any funding for these programs. Much is left to the institution and the individual practitioner. Thus, language assistance has taken many forms, which vary in cost, availability, and, possibly, accuracy.

The use of family members and friends as translators seems like a relatively practical method of interpretation, as family members often accompany patients to their doctors, clinics, and hospitals. However, this method of language assistance assumes that the family members speak both English and the patient's native language well enough to translate accurately. The practice has come under increasing criticism for compromising patient confidentiality and because a translator's lack of experience with or understanding of medical terminology can lead to errors. Family members may also not be privy to sensitive, medically relevant information (e.g., pregnancy or abortion history, drug or alcohol use). Using children related to the patient, who by virtue of their schooling in the U.S. may have better English language skills than their LEP parents even at a very young age, causes its own set of problems, as patients may not want to divulge information about certain matters in front of their children. Furthermore, acting as a translator may be stressful and emotionally damaging to children. Hospitals could also find themselves at risk by relying on family members to interpret, since there is no way to determine the competency of the family member, nor is there a way to ensure that no conflict of interest exists between the family member and the patient.

As an alternative to using family members to translate, some institutions rely on bilingual ad hoc staff from other departments in hospitals and private practices. In a national survey, over 50 percent of the providers said that when they
need interpretative services, they often enlist staff from clerical and maintenance services [9]. If the ad hoc interpreters are trained in health care, they may be more familiar with medical terminology than family members; however, interpreters drawn from maintenance and clerical services usually do not have health care terminology training. Moreover, while ad hoc interpreters may be better than no interpreter at all, they can make serious semantic distortions, which can negatively affect the care provided. One study evaluated ad hoc interpreter-assisted encounters and found that between one-fourth and one-half of words and phrases were incorrectly translated [10]. Additionally, the costs of pulling these staff members from their primary duties may be substantial. The use of ad hoc interpreters is most efficient when medical institutions maintain updated lists of eligible employees, assess employee language and interpretation skills, provide interpreter training, and include interpretation as a listed job duty [11]. Nevertheless, using ad hoc staff alone is unlikely to meet the needs of all LEP patients and is frequently supplemented with other types of assistance.

Professional interpreters, on the other hand, are formally trained and possess a high degree of proficiency in mediating communication between languages, reliably minimizing the effects of language differences. They allow patients and physicians to understand and exchange vital information about the experience of illness, characteristics of disease, and personal beliefs and values. Professional healthcare interpreters are trained in health care interpreting, adhere to professional ethics, and can accurately render communication from one language to another. However, while common in international business and diplomacy, professional interpreters are rarely available in healthcare. The American Medical Association, the largest association of physicians and medical students in the United States, has protested that the cost of professional interpretation would place a heavy burden on physicians, especially with budget cuts and funding shortfalls already prevalent in the health care industry. They also point out that finding an interpreter may be difficult and may entail long delays [12]. Furthermore, many insurers will not pay interpretation fees, leaving individual practitioners and institutions to pay for these services. Taking all this into account, the Office of Management and Budget estimates the additional cost at an average of only $4.04 per visit by an LEP patient, or 0.5 percent of the total cost of a visit, but acknowledges that costs could vary widely [12]. Proponents of professional interpreters point out that the amount needed to pay for them is far less than the large disparities in medical spending between English speaking and foreign speaking patients. They also highlight that legal battles may be extremely expensive relative to the cost of a professional interpreter—as seen in the case of the $71 million lawsuit.

Reproduced from [17]

Good interpretation services are occasionally available despite limited financial resources. Since immigrant communities consist of people with a spectrum of language abilities, the availability of these local language resources can be an important asset. Some communities run programs that recruit, train, and certify local medical interpreters. For example, the Department of Linguistics at the University of Minnesota runs a community interpreter program that recruits bilingual individuals interested in medical interpreting and provides over 150 hours of training [13]. However, although the use of community members can help bridge communication hurdles between doctors and patients at a relatively low cost, it may also cause potential problems with patient confidentiality.
In the spirit of keeping up with technology, there are also telephone services that provide on-demand interpreters. These resources are usually employed when an on-site interpreter is unavailable or when interpretation is needed for a rarely encountered language [11]. The AT&T Language Line, for example, is a 24-hour telephone interpreter service available in numerous languages; the language requested and the length of time the service is used determine the cost of interpretation. However, one shortcoming of this service is that, because the conversation takes place over the telephone, non-verbal cues, which can be especially important in emergency situations, are lost.

A hospital's operating room (OR) poses its own set of problems for the LEP population, as the above methods may not be suitable for patients undergoing a surgical procedure. Most potential translators for the LEP population are not allowed in the OR, and the use of a telephone in the OR may not be very practical. During emergence from anesthesia, when patients are typically still drugged and drowsy, a language barrier between patient and anesthetist may be magnified and may compromise the safety of the patient, who must respond to verbal commands from the anesthesiologist. Based on patients' responses to commands, anesthesiologists evaluate extubation criteria, determine neurologic functioning, and ascertain a patient's level of pain. A research study carried out at The University of Chicago Medical Center found that patients with self-assessed weak English skills, and those who started learning English after age 12, responded to verbal commands only in their native language during emergence from anesthesia [14]. The implications of this are far reaching, because even foreign language patients with proficient English ability may respond only to their native tongue during this semiconscious state.
One solution to this potential problem is to record commands on a computer and play them back in the patient's native language during emergence. The OR is only one example among many in which an encounter with an LEP patient poses its own set of problems, and each must be dealt with on a case-by-case basis.

No single model of language assistance is used by all hospitals. The specifics of each hospital's language assistance services will vary given the diversity of the institutions and their
surrounding communities. For example, hospitals in Seattle are banding together to contract with on-call interpreter pools. Clinics across the country are working with community organizations to identify bilingual residents who can be trained as volunteer translators. Higher education institutions, such as New York's Hunter College, are teaching students to serve as professional interpreters for college credit. In Oakland, California, Asian Health Services has trained community residents in interpretation skills and offers their services to local hospitals and community clinics. In North Carolina, the Duke Endowment has funded the state's Office of Minority Health and the state's Area Health Education Centers Program to establish the Spanish Language and Cultural Training Institute, which is sponsoring statewide training for interpreters working in health and human service settings [15].

Factors that may influence a site-specific model include the size of the hospital, the size of the LEP population it serves, the total resources available to the hospital, and the frequency with which particular languages are encountered. Indeed, most hospitals find that to best serve their patients, they need to use some combination of the models described above. Hospitals, clinics, doctors' offices, nursing homes, and public health agencies must each choose which services to use based on several factors, including their patients' needs.

Language barriers in the health care system create a plethora of problems for both patients and physicians. Issues such as safety, cost, ethics, and patient discrimination arise when translation services are not provided for an LEP patient. Without effective communication, the physician-patient relationship can be severely compromised. Responding sympathetically to a patient requires an acknowledgment of that individual's uniqueness and a respect for his or her life.
Without a linguistic connection, physicians lose their most important tool for establishing a meaningful relationship with the patient. Better communication across the language barrier will improve the patient-physician relationship and will lead to an overall improvement in the healthcare of those with LEP.
References
1. Census News [Online]. 2010 [cited 2010 May 3]. Available from: http://www.census.gov/012634.html.
2. World Languages [Online]. Anchorage School District. 2008 [cited 2010 May 5]. Available from: http://www.asdk12.org/depts/world_lang/advocacy.
3. Marcus A. Emerging from anesthesia, mother tongue takes over. Anesthesia News. 2010 May; 36(5).
4. Branch C, Fraser I, Paez K. Crossing the language chasm. Health Affairs. 2005; 24(2): 424-434.
5. Flores G, Leighton K. Pay now or pay later: providing interpreter services in health care. Health Affairs. 2005; 24(2): 435-444.
6. Kempen AV. Legal risks of ineffective communication. American Medical Association Journal of Ethics. 2007 Aug; 9(8): 555-558.
7. U.S. 10th Circuit Court of Appeals [Online]. 2010 [cited 2010 May 15]. Available from: http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=10th&navby=case&no=983129.
8. President's Commission for the Study of Ethical Problems in Medicine and Biomedical Research. The law of informed consent. In: Making Health Care Decisions. Washington DC: US Government Printing Office; 1982; 3: appendix L.
9. Dover C. Health Care Interpreters in California. UCSF Center for the Health Professions. 2003.
10. Flores G, Barton Laws M, Mayo SJ, et al. Errors in medical interpretation and their potential consequences in pediatric encounters. Pediatrics. 2003; 111(1): 6-14.
11. Best Practice Recommendations for Hospital-Based Interpreter Services [Online]. Commonwealth of Massachusetts Executive Office of Health and Human Services [cited 2010 Jun 30]. Available from: http://www.hablamosjuntos.org/pdf_files/Best_Practice_Recommendations_Feb2004.pdf.
12. Office of Management and Budget. Report to Congress: Assessment of the Total Benefits and Costs of Implementing Executive Order No. 13166: Improving Access to Services for Persons with Limited English Proficiency. Washington DC; 2002 Mar 14.
13. Minnesota Refugee Health Provider Guide 2007 - Medical Interpreters [cited 2010 May 13]. Available from: http://www.health.state.mn.us/divs/idepc/refugee/guide/11interpreters.pdf.
14. Dauber B, Dauber M, Glick D. English: what a foreign concept?! Reversion to native language speeds emergence in immigrants. Anesth Analg. 2010; S-178.
15. Perkins J. Overcoming Language Barriers to Healthcare [Online]. 1999 [cited 2010 Jul 15]. Available from: http://www.sog.unc.edu/pubs/electronicversions/pg/f99-3844.pdf.
16. http://www.nlm.nih.gov/medlineplus/images/womanhospital.jpg
17. http://blog.usa.gov/roller/govgab/resource/images/mg_patient.jpg
Benjamin Dauber is a sophomore studying Biology at the University of Chicago.
Island Biogeography and Continental Habitats: Evaluating Species-Area Relationships in Terrestrial Conservation Efforts Liddy Morris
Creating a successful terrestrial reserve is an incredibly daunting task, and it is often fraught with conflict and controversy. In attempts to simplify the complex requirements necessary to ensure the survival and propagation of endangered species, researchers have extrapolated MacArthur and Wilson's Theory of Island Biogeography ("IBT") to the creation of terrestrial conservation areas. IBT offers a compelling framework for understanding why biodiversity, also referred to as species richness, varies across different habitats. Because of this, IBT is a guiding framework for planning conservation areas throughout the world. However, oversimplification of IBT has led to mass-produced, one-size-fits-all conservation efforts that ultimately produce costly, ineffective reserve projects.

Reproduced from [15]

Since the 1920s, ecologists have recognized that the size of a habitat area correlates with the number of species in the area. Olof Arrhenius, a pioneer in the field of biogeography, repeatedly counted the number of species in plots of varying sizes, and from this formulated the principle that the number of species increases at a decreasing rate as the area increases in size [1]. This concept, referred to as the species-area relationship, quickly became paradigmatic in studies of species composition in terrestrial habitats. However, after careful examination of the theoretical and empirical evidence regarding the species-area relationship, it is difficult to find evidence that IBT and the species-area relationship serve mainland conservation efforts well as a guiding theory. While the area of a potential reserve is an important element in conservation endeavors, it is not the only factor, and it fails to clearly predict a relationship between biodiversity and area in mainland fragments. Adequate area is a necessary, but by no means sufficient, condition. The species-area relationship can prove helpful in understanding conservation dynamics but should be considered in the specific context of the habitat type, the nature of biotic and abiotic interactions, and the targeted taxa within a potential reserve.

MacArthur and Wilson's Island Biogeography Theory has long been used as a tool for understanding fragmentation, the process by which discontinuities emerge in an organism's environment [2]. The original theory asserts that species richness on an island is determined by immigration, emigration, and extinction. In their formulation, immigration and emigration levels tend to be lower among habitats that are farther apart and are further affected by distance from the mainland. Mirroring Arrhenius' species-area relationship, island size is also a main determinant of species richness: the theory proposes that island size determines the species extinction rate, as larger habitats provide more resources and room for expansion. Applying IBT to this framework thus predicts that larger habitat sizes reduce the probability of extinction, so increased island area should be positively correlated with increased species richness.

Shortly after publication, the IBT concept of the species-area relationship was extrapolated to continental habitat fragments to explain the function and maintenance of biodiversity on the mainland. The simplicity and seeming generalizability of IBT's species-area concept led to its extensive use in the planning and creation of continental reserves by many scientists and policy-makers. Initially, most studies that addressed mainland conservation efforts and the species-area relationship were purely theoretical, as only limited empirical tests of IBT in practice on mainlands had been undertaken. In the early 1980s, large data sets began to emerge that could potentially challenge the preeminence of IBT in reserve design. In 1981, A.J. Higgs
wrote a seminal work analyzing three new data sets on species richness [3]. After cautioning against an unqualified use of the species-area relationship in reserve design, Higgs nonetheless concluded that IBT could be applied to land fragments, thereby supporting the application of the species-area relationship to reserve design. Using the same methods as Higgs, Simberloff and Abele came to different conclusions [4]. They compiled the results from previous studies and theoretical papers and instead found no relationship between reserve size and species richness. Simberloff and Abele claimed each potential reserve site was beset by "numerous idiosyncratic biological considerations," and thus that there was no clear prescription for reserve size and species richness without more specific knowledge of the area proposed for conservation.

The debate over applying IBT reached a boiling point when more data were released and researchers gained more access to observations of species richness in fragments of different sizes. Within the scientific community, this debate became known as the SLOSS question: whether a Single Large reserve or Several Small ones were better suited for conserving biodiversity in a fragmented area. The paucity of empirical data supporting the theory's species-richness predictions was a frequent point of contention for scholars examining IBT. Many researchers who supported the theory emphasized that while it may be theoretically sound, not enough data existed to support its wholesale application to terrestrial reserves. For example, Marybeth Buechner used computer simulations based on the dimensions of U.S. parks to analyze IBT in practice in the creation of fragmented park systems [5]. She concluded that the area of a reserve should be considered in conjunction with other factors, especially relationships between species at the center of a fragment and those at the perimeter.
She also suggested that mainland habitat fragments tend either to encourage emigration or to serve as immigration destinations; as such, reserve design should take this added complexity into account. On a fundamental level, these criticisms of IBT rest on the observation that terrestrial fragments differ from islands: an island does not have edges that rub against other fragments. Scholars observed that biodiversity on terrestrial fragments tends to differ because of edge effects, the interactions between species that occur when fragments lie close together. Islands may not experience ecosystem changes precipitated by neighboring fragments because there are simply no surrounding fragments to precipitate such interaction. Even though scholars have made strides in demonstrating the complexity inherent in preserving biodiversity in fragmented habitats, many policy-makers at conservation institutions have already put the wholesale application of IBT into motion. This institutional momentum may prove difficult to counter, despite the changing perspective toward IBT within the academic community. Many scholars, however, have not given up; there is an increased focus on the practical use of IBT today. Moving beyond theoretical applications, researchers are now examining various tenets of IBT to determine their utility and efficacy in designing conservation areas. Zimmerman and Bierregaard are particularly noteworthy for their long-term study of a single species to test the IBT species-area predictions [6]. Using IBT, they predicted the minimum area required to protect 90 percent of Amazonian forest frog species and compared the prediction with actual data on Amazonian forest frogs. They found that although IBT predicted a reserve size large enough to accommodate significant populations of 90 percent of Amazonian frog species, it failed to incorporate the need for appropriate breeding territories. This led them to emphasize the importance of considering ecological factors and salient life history characteristics of species slated for conservation, rather than merely looking at species-area relationships. This long-term empirical approach has been reproduced in a number of studies, including an analysis of a threatened Tasmanian beetle over nearly a decade. In a taxon-specific empirical test of applied IBT, Driscoll and Weir examined beetle response to habitat fragmentation in Tasmania [7]. The researchers found that most beetle species threatened with local extinction lived in highly fragmented areas and relied on the largest fragments, while two threatened species persisted in small, narrow remnants. Driscoll and Weir also found the greatest species richness in narrow, linear fragments; IBT, however, would instead predict the greatest species richness in the fragments of largest area. Their findings suggest that reserves designed to protect areas of high species richness might include non-threatened taxa while excluding threatened taxa persisting in nearby, less species-rich areas. Reserves based on IBT might thus fail to maximize protection of threatened taxa. In another taxon-specific study, Miller and Harris turned to large mammals to see if IBT had more predictive power among species with larger habitats.
Miller and Harris reviewed the persistence of large mammals in thirteen East African savanna reserves [8]. While they found no evidence of a relationship between reserve area and the number of extant large mammal species, the researchers uncovered a correlation between large mammal extinction and isolation from a contiguous gene pool. Thus, Miller and Harris concluded that mammals became endangered when isolated from a larger group of relatives, noting that “isolated reserves may experience species depletion due to isolation from surrounding natural habitat.” Yet they still conceded that “certain tenets of biogeographic theory still apply” to understanding reserve dynamics. This indicates that the fundamental framing of IBT as a relationship among immigration, emigration, and extinction may still hold some truth. More recent studies support Miller and Harris’s conclusion. The Utah Museum of Natural History sponsored a study of the effects of habitat fragmentation and found a similar relationship between reserve isolation and large mammal extinction in Tanzania [9]. IBT may thus be useful under particular circumstances, yet it should not be extrapolated into a unified theory of ecology. Other studies look more broadly at the implications of IBT for multiple species in a reserve area. In further tests of the species-area relationship, an international group of tropical
ecologists examined gall-forming insects in natural habitat fragments in the Pantanal floodplain of Brazil [10]. In the study, Juliao et al. found that species richness for gall-forming insects did not vary according to species-area predictions, and that fragment size had at best a weak relationship with insect diversity. The researchers instead discovered a significant difference between gall-forming insect composition in the interior and at the edge of the reserve. This finding portrays IBT as a somewhat blunt instrument, useful over large areas but failing to capture the intricacies of habitats with many fragments. The controversy and debate regarding applications of IBT to mainland conservation efforts are perhaps best embodied by the work of William F. Laurance, a researcher at the Smithsonian Tropical Research Institute. Initially a proponent of IBT and of species-area relationships in planning and designing reserves, Laurance developed a model for reserve design that incorporated species-area relationships, relative isolation, and, to a lesser degree, edge effects [11]. In his early work, Laurance built on IBT in asserting that biodiversity was greater in larger reserves, but he later incorporated empirical findings to conclude that the distance of fragments from one another, as well as the unique species interactions that take place at fragment edges, also plays a role in biodiversity. This was seen as a success in accounting for edge effects, the phenomenon of altered species interactions and diversity at fragment junctions. However, based on his own research in the Amazon, Laurance markedly changed his perspective. In a review of twenty-two years of Amazon forest conservation efforts, Laurance found edge effects to predominantly affect species richness [13].
In a culmination of his changing perspective, “Theory meets reality: How habitat fragmentation research has transcended island biogeographic theory,” Laurance challenges the broad application of IBT to an understanding of the dynamics of fragmented landscapes [14]. While he concedes that a reduction in fragment size often accompanies a reduction in species richness, he rejects the “relevance” of island biogeography, positing that it fails to predict both how fragments change over time and which species will be most vulnerable to extinction. Laurance now considers edge effects, along with anthropogenic change and its synergistic interactions with habitat fragmentation, to be more valuable tools for explaining and predicting fragment dynamics [14].

Initial support for and against IBT was based mainly on theoretical arguments and computer simulations. Yet when empirical tests of the species-area relationship in mainland reserves became possible with long-term studies of conservation areas, scholars began to challenge the assumptions of IBT. In contrast to IBT in theory, IBT in practice on the mainland showed little predictability and thus little utility as a guiding theory for conservation efforts. A significant number of scholars now assert that edge effects, rather than the species-area relationship, are the predominant factor influencing the preservation of species richness in reserves [6,10,13-14]. Zimmerman and Bierregaard, Juliao et al., and more recently Laurance all conclude that biodiversity may not depend on fragment size alone. They call for a better understanding of the dynamics between the edges of reserves and the adjacent, different habitat types, and of how edges affect species richness and composition within the reserve. More generally, any application of IBT to mainland conservation should be tempered by the recognition that mainland fragments may not behave like islands, and IBT should not be the only theory employed in reserve design. It must be considered in conjunction with edge effects, as well as the effects of humans on natural habitats. Ultimately, the lesson to be learned from research on IBT is that the complexity of ecological systems cannot be generalized away. It is easy to fall into the trap of a one-size-fits-all reserve policy because it can be clearly explained and applied without much additional research. Instead, we must invest in the infrastructure to determine the optimal habitat location and size for each threatened species. Conservation does not take place in a vacuum, and planners should recognize that IBT proposes an ideal situation rather than an often complex reality.

The recent climate change debate has put into focus the fact that humans affect the natural environment, but the long-term changes in biodiversity among endangered species precipitated by climate change are not well understood. Nevertheless, every factor relating to ecological systems must be incorporated into future designs of conservation areas, or the risk of their failure will be high.

Liddy Morris recently graduated from the University of Chicago. She majored in Political Science with minors in Biology and Latin American Studies.

References
1. Arrhenius, O. 1921. Species and area. Journal of Ecology 9: 95-99.
2. MacArthur, R.H. and E.O. Wilson. 1967. The Theory of Island Biogeography. Princeton University Press, Princeton, New Jersey.
3. Higgs, A.J. 1981. Island biogeography theory and nature reserve design. Journal of Biogeography 8: 117-124.
4. Simberloff, D. and L.G. Abele. 1982. Refuge design and island biogeographic theory: effects of fragmentation. The American Naturalist 120: 41-50.
5. Buechner, M. 1987. Conservation in insular parks: simulation models of factors affecting the movement of animals across park boundaries. Biological Conservation 41: 57-76.
6. Zimmerman, B.L. and R.O. Bierregaard. 1986. Relevance of the equilibrium theory of island biogeography and species-area relations to conservation with a case from Amazonia. Journal of Biogeography 13: 133-143.
7. Driscoll, D. and T. Weir. 2005. Beetle responses to habitat fragmentation depend on ecological traits, habitat condition, and remnant size. Conservation Biology 19: 182-194.
8. Miller, R.I. and L.D. Harris. 1977. Isolation and extirpations in wildlife reserves. Biological Conservation 12: 311-315.
9. Newmark, W.D. 1996. Insularization of Tanzanian parks and the local extinction of large mammals. Conservation Biology 10: 1540-1556.
10. Juliao et al. 2004. Edge effect and species-area relationships in the gall-forming insect fauna of natural forest patches in the Brazilian Pantanal. Biodiversity and Conservation 13: 2055-2066.
11. Laurance, W.F. 1991. Edge effects in tropical forest fragments: application of a model for the design of nature reserves. Biological Conservation 57: 205-219.
12. Laurance, W.F. and T.J. Curran. 2008. Impacts of wind disturbance on fragmented tropical forests: a review and synthesis. Austral Ecology 33: 399-408.
13. Laurance, W.F., et al. 2002. Ecosystem decay of Amazonian forest fragments: a 22-year investigation. Conservation Biology 16: 605-618.
14. Laurance, W.F. 2008. Theory meets reality: how habitat fragmentation research has transcended island biogeographic theory. Biological Conservation 141: 1731-1744.
15. http://www.nature.nps.gov/nnl/photocontest/2009/photos/HM_Point-LobosState-Reserve,-CA_G_Emmons.jpg
© 2010, The Triple Helix, Inc. All rights reserved.
Waist-to-Hip Ratio as a Universal Standard of Beauty Jacob Parzen
Reproduced from [6]

What happens when comely mannequins meet congenitally blind men? Science! At least according to researchers at the Behavioral Science Institute at Radboud University in the Netherlands. Their hypothesis? That heterosexual men never exposed to visual media, a primary cultural factor in shaping what is deemed attractive, would prefer low female waist-to-hip ratios (WHRs), a preference demonstrated in sighted men [1]. The verdict didn’t disappoint: the blind-from-birth do indeed have an intrinsic preference for low WHRs, all other factors being equal [1]. This was a significant development in the two-decade debate over WHRs. On one side, researchers have contended that preferences for low WHRs are molded primarily by cultural factors. Their opposition does not deny that cultural factors are ubiquitous and play a role in the development of WHR preferences, yet argues that there is also an evolutionary basis for these preferences. The results from the Behavioral Science Institute, along with recent studies outlining the health benefits of low WHRs, suggest that the latter group currently has the upper hand. Even before WHR was considered a criterion of beauty, the debate between cultural and evolutionary factors as determinants of attraction raged on. Charles Darwin suggested that each man is attracted to what he is accustomed to, and he dismissed the notion that there is a well-defined, universal standard of beauty [2]. However, many of Darwin’s claims,
along with those of his immediate successors, were backed by anecdotal evidence rather than experimental data. Even so, it is generally accepted in the scientific community that a man’s environment, with respect to both time and location, has noticeable effects on the physical attributes he deems desirable. The notion that “thin is beautiful” in contemporary Western society is a prime example, perpetuated by visual media, notably advertisements. The main question is whether a universal standard of beauty exists. The challenge researchers face is not only to identify a criterion that men prefer both transhistorically and transgeographically, but also to link the attribute in question to some sort of natural advantage. Devendra Singh, a psychologist at the University of Texas, fueled the WHR dispute by providing evidence that men have a universal preference for low WHRs (though, as discussed below, there is an important caveat to his theory) [3]. The methodology was simple. To each subject, Singh distributed 12 figures of women spanning three body mass indexes (BMIs), a measure of body mass based on height and weight, with four different WHRs (0.7, 0.8, 0.9, and 1.0) for each of the three BMIs [3]. (A WHR is measured by dividing the circumference of the waist at its narrowest point by the circumference of the hips at their widest point.) The subjects were then asked to evaluate the attractiveness of each figure based on sexiness, youthfulness, ability to have children, and a number of other attributes. The results indicated that WHR, not BMI, tended to determine whether a figure was judged attractive with respect to at least one of the criteria [3]. To rationalize his results, Singh reasoned that low WHRs must somehow indicate good health and reproductive potential, a postulation that appears to have held its own [3]. It is generally accepted that WHR is regulated primarily by the sex hormones.
While testosterone signals fat storage in the abdominal region and fat utilization in the hips and buttocks, estrogen signals just the opposite, resulting in excess fat in the hips and buttocks relative to the abdomen and producing “curves.” Consequently, healthy premenopausal women tend to have lower WHRs (a typical range of 0.67 to 0.80) than their male counterparts (0.85 to 0.95) [4].
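The WHR arithmetic described above is simple enough to state as code. The measurements below are hypothetical sample values, not data from any of the cited studies:

```python
# WHR = waist circumference at its narrowest point divided by hip
# circumference at its widest point. The sample measurements below are
# hypothetical, chosen to land inside the ranges reported in [4].
def waist_to_hip_ratio(waist_cm, hip_cm):
    return waist_cm / hip_cm

# Typical healthy ranges reported in [4]:
FEMALE_RANGE = (0.67, 0.80)   # premenopausal women
MALE_RANGE = (0.85, 0.95)     # men

whr = waist_to_hip_ratio(70.0, 100.0)   # 0.70, the low ratio preferred in [1,3]
assert FEMALE_RANGE[0] <= whr <= FEMALE_RANGE[1]
```

Because the ratio is unitless, the same calculation works whether circumferences are taken in centimeters or inches, which is part of why WHR is so easy to collect outside clinical settings.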
Recent studies have continued to uphold the notion that a low WHR signals physiological wellbeing. Notably, Raphael See of the University of Texas Southwestern Medical Center and his collaborators found, in an exhaustive analysis of data from the Dallas Heart Study, that WHR predicts risk for cardiovascular disease better than BMI does [5]. BMI has long been a commonly used measure of wellbeing because of its simplicity, but the emergence of WHR as a better indicator of health, coupled with its ease of measurement, may change how health is assessed when equipment is limited and time is a constraint. Indeed, WHR is on its way to becoming a parameter commonly used to assess health both inside and outside of clinical settings. If low WHRs are taken to correlate with good health, it follows from the theory
of natural selection that men would prefer women with low WHRs to those with high WHRs. Singh’s research suggested that man’s attraction to low WHRs is universal, but because his test subjects were only White and Hispanic, his research could not establish that WHR is a culturally invariant measure of female allure [3]. That is to say, it remained to be shown that his data reflected an intrinsic desire for low WHRs in the female figure. It is quite possible, for example, that his subjects were all exposed to the same visual media and other cultural factors that influenced what they sought in a woman’s physique. His contributions to the study of WHR were important, but he did not have all the evidence needed to settle the issue of female appeal. After years of debate, Johan Karremans of Radboud University and his co-workers provided a striking solution to this longstanding problem by taking away what is generally accepted as the primary means of media exposure: visual perception. The congenitally blind subjects had never seen the female body, so they would have little conception of what their society’s ideal female looks like.

Reproduced from [7]

The researchers duly noted that the subjects could have been told what sighted individuals generally consider attractive, but it is doubtful that secondhand information would carry much weight [1]. The mannequins used had adjustable hips and waists, and the blind subjects were instructed to feel the hips and waists of the dolls when the WHR was 0.70 and when it was 0.84 (with BMI held constant) [1]. As a control group, individuals with normal eyesight were instructed to look at the mannequins and rate their appearance. The results were clear: the congenitally blind preferred the mannequins with the WHR of 0.70 [1]. Without any prior exposure to visual media, the subjects were more attracted to the mannequins with lower WHRs, supporting the notion that there is some evolutionary basis to what males desire in the female physique. That being said, Karremans and co-workers also found that the preference for the lower WHR was decidedly stronger in the sighted subjects, implying that cultural factors are also at work [1]. To say that the question of male preferences with respect to the female body is now settled would be rash. While studies have indicated that men have an intrinsic preference for women with low WHRs, and while this preference is rationalized by findings that WHR is an excellent indicator of physiological wellbeing, the fact that both evolution and culture play a role in determining man’s tastes only adds questions to the mix. Specifically, how do these factors act in synergy, and in opposition? Would it be possible to effectively cancel out a man’s preference for low WHRs if visual media deified a physique with a high WHR? These questions matter because the recent studies discussed above have made it difficult to pinpoint, in any given situation, what exactly makes a man physically attracted to a woman. What is clear, however, is that WHR is a widely shared standard of attraction, at least in women.

Jacob Parzen is a junior at the University of Chicago working on a B.S. in Biological Chemistry and a B.A. in Chemistry.

References
1. Karremans JC, Frankenhuis WE, Arons S. Blind men prefer a low waist-to-hip ratio. Evolution and Human Behavior 2010; 31: 182-186.
2. Sheppard N, editor. Darwinism Stated by Darwin Himself. New York: D. Appleton and Company; 1884.
3. Singh D. Adaptive significance of female physical attractiveness: role of waist-to-hip ratio. Journal of Personality and Social Psychology 1993; 65: 293-307.
4. Marti B, Tuomilehto J, Salomaa V, Kartovaara L, Korhonen HJ, Pietinen P. Body fat distribution in the Finnish population: environmental determinants and predictive power for cardiovascular risk factor levels. Journal of Epidemiology and Community Health 1991; 45: 131-137.
5. See R, Abdullah SM, McGuire DK, Khera A, Patel MJ, Lindsey JB, Grundy SM, de Lemos JA. The association of differing measures of overweight and obesity with prevalent atherosclerosis: the Dallas Heart Study. Journal of the American College of Cardiology 2007; 50: 752-759.
6. http://www.calrecycle.ca.gov/calmax/images/2003/mannequins.jpg
7. http://www.ncbi.nlm.nih.gov/bookshelf/picrender.fcgi?book=curriculum&part=A134&blobname=alcoholf7.jpg
Bioluminescence and Fluorescence: Understanding Life through Light Bonnie Sheu
Those who have spent their childhood playing by a lake or gone on classroom field trips may have collected samples of pond water containing dinoflagellates, one of many diverse microscopic life forms that live in water. Dinoflagellates are an abundant class of both marine and freshwater phytoplankton: autotrophic protists that harvest energy by converting light energy to chemical energy via photosynthesis. Each cell has two flagella serving as a pair of propellers that enable dinoflagellates (from the Greek dinos, whirling) to spin as they dance gracefully through water. Like some bacteria, some dinoflagellates are colonial, and some are spectacularly bioluminescent. When these bioluminescent species undergo occasional explosive population growth, or “blooms,” they accumulate into dense, visible patches near the water surface, causing magnificent luminescent open-ocean phenomena such as the “milky sea effect.” This phenomenon refers to the eerie glow of water at night when waves, boats, or swimmers agitate seawater harboring dense populations of dinoflagellates. The glow is produced by an ATP-driven chemical reaction, which may serve as a survival mechanism: when the water is disturbed by organisms feeding on dinoflagellates, the light attracts fish that eat those predators [2]. Mariners have long documented the milky sea effect; one fictional example is the “milky sea” sighted by the Nautilus in Jules Verne’s 20,000 Leagues Under the Sea. Though the name suggests a white glow, the light produced by these bioluminescent dinoflagellates is actually blue. The name most likely stuck and gradually gained acceptance because the glow appears white in photos taken by monochromatic sensors, the instruments used to capture the first sightings, and because human rods, which serve night vision, cannot discriminate color [3]. As one may have guessed, the milky sea effect is a bioluminescent phenomenon.
Bioluminescence is the emission of light by a living organism through an enzymatic chemical reaction. Unlike fluorescence, a phenomenon discussed shortly, it does not require an excitation light source. In addition to dinoflagellates, bioluminescence is observed in certain fungi (commonly known as “foxfire”), coral, jellyfish, and anglerfish. Bioluminescence mainly functions as a means of camouflage, communication, reproduction, or identification in the deep sea and other light-deficient environments. For example, cells in wood-decaying fungi convert energy stored in organic molecules to light, creating a bioluminescent glow that attracts insects; the insects help the fungus disperse its spores, ensuring an adequate species distribution [4]. Additionally, flashlight fish (Photoblepharon palpebratus) have a glowing oval organ below each eye containing bioluminescent bacteria. In this mutualistic form of symbiosis, the fish uses light created by the bacteria to attract prey and send mating signals,
while in return the bacteria feed on nutrients from the fish [5]. The crystal jellyfish (Aequorea victoria) is another bioluminescent species that, like dinoflagellates, emits blue light, though the underlying mechanism involves the interaction of calcium cations with a photoprotein called aequorin. These animals gained huge attention with the 2008 Nobel Prize in Chemistry, awarded to Osamu Shimomura, Martin Chalfie, and Roger Tsien for the discovery and development of green fluorescent protein (GFP), the protein first isolated from A. victoria by Shimomura [6]. Though GFP is a fluorescent protein, it produces green fluorescence by absorbing the blue light emitted by jellyfish aequorin in a coupled bioluminescent-fluorescent system. GFP has revolutionized biomedical research because its gene can be genetically encoded in many organisms, from bacteria, yeast, and fungi to plants, fish, and mammalian cells. Not only does this protein make non-invasive time-lapse imaging possible, but, thanks to widespread use and the needs of researchers, an impressive array of natural and synthetically mutated GFP derivatives is now available. Among the most notable color variants are blue fluorescent protein (BFP), cyan fluorescent protein (CFP), and yellow fluorescent protein (YFP) [7]. Today fluorescence is an extremely useful research tool, as exemplified by the popularity of GFP and other fluorescent proteins. Fluorescence is the process by which certain substances emit visible light when excited by other radiation. Ultraviolet radiation, which has a shorter wavelength, and hence a higher frequency and higher energy, than visible light, is most commonly used to excite the substance. When a molecule absorbs ultraviolet light, the photon raises an electron from its ground state to an excited state in which it has more potential energy.
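The energy ordering just described (shorter wavelength means higher photon energy) follows from E = hc/λ and can be checked numerically. The GFP wavelengths below are the commonly reported excitation and emission maxima, used here only as illustrative inputs:

```python
# Photon energy E = h*c / wavelength. A fluorophore's emitted photon has a
# longer wavelength, and therefore less energy, than the photon it absorbed;
# the difference is dissipated as heat (the Stokes shift).
PLANCK = 6.626e-34     # Planck constant, J*s
LIGHT_SPEED = 2.998e8  # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)

absorbed = photon_energy_joules(395.0)  # near-UV excitation peak often cited for wild-type GFP
emitted = photon_energy_joules(509.0)   # GFP's commonly cited green emission peak

assert emitted < absorbed   # fluorescence always shifts toward lower energy
```

The same calculation explains why ultraviolet light, with wavelengths below those of the visible spectrum, is an effective excitation source for visible-light fluorophores.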
Like all high-energy states, the excited state is unstable, so the molecule dissipates the excess vibrational energy through collisions with surrounding molecules, returning to the ground state within about a billionth of a second and giving off the excess energy as heat. Such conversion of light energy to heat is what makes the top of a car very hot on a sunny day. Some pigments, including chlorophyll, emit light as well as heat after absorbing photons: when the remaining excess energy is emitted as visible photons, fluorescence is observed. A chlorophyll solution excited with ultraviolet light, for instance, will fluoresce with a red-orange glow [8]. Researchers have exploited the similar fluorescent abilities of bioluminescent agents, such as aequorin, to study biological pathways that previously could not be traced. Over the past few decades, both fluorescence and bioluminescence have become essential tools in biological and chemical research. Given the physiological limitations of the
human eye, it is no surprise that many scientific breakthroughs rely heavily on developing both novel imaging approaches and better imaging instruments. While scientists can transform collected raw data into visible images using post-experimental imaging and signal-processing techniques, it is still helpful to incorporate visualization techniques within experimental protocols so researchers can identify and track their objects of interest. As a result, molecules with not only distinct emission spectra but also different chemical properties, such as GFP and its color variants, are progressively being engineered and used as specific molecular labeling probes, enzyme substrates, environmental indicators, and cellular stains across a wide range of biological applications. Since the discovery of fluorescence in quinine, the earliest observed fluorophore (fluorescent molecule), by Sir John Herschel in 1845, and the subsequent chemical synthesis of the molecule by R.B. Woodward and W.E. Doering in 1944, scientists have constructed an elaborate, though by no means exhaustive, repertoire of fluorescent “dyes” [9]. It is now well known that the attachment of various functional groups (specific configurations of atoms that give a molecule its characteristic chemical properties), chelating ligands (chemical compounds that can bond to a metal ion), and other chemical modules to base molecules allows a fluorophore to exhibit a certain color and bind to specific target compounds. This modification is extremely useful because it enables precise assaying of a wide, complex range of biological systems based on each molecule’s distinct structure and chemical and spectral properties. For instance, constructing fluorophores with specific optimal pKa ranges refines the conditions under which the fluorophore remains either reactive or stable in vivo. As an example, Alexa Fluor 488, from the Alexa Fluor family of fluorescent dyes produced by the company Invitrogen, is a cyan-green dye with an excitation maximum of approximately 488 nm. Alexa Fluor 488 is commonly used for cell and tissue labeling, as it works well across a wide range of cell types thanks to its higher stability and brightness and lower pH sensitivity compared with similar dyes. Additionally, scientists are able to engineer molecules tailored to track distinct biological pathways by modifying a fluorophore’s chemical and spectral properties (i.e., maximal absorption and emission wavelengths). For example, DiI, a hydrophobic and lipophilic cyanine dye, is commonly used for neuronal tracing because it is easily retained in the lipid bilayers of neuronal cells [10].

Reproduced from [24]

From these bioengineering examples, one can imagine the countless research opportunities that remain to be explored. With the challenge of understanding more obscure biochemical and biological pathways, scientists will need not only more sophisticated instruments but also more sophisticated molecular tools and better fluorescent probes.
On the other hand, photophores, the bioluminescent complements of fluorophores, are better suited to studying and tracking long-period processes such as tumor progression, because photophores glow longer than fluorophores, as discussed shortly. Fluorophores, however, still outshine photophores in terms of their emission spectra. Bioluminescence mainly emits light in the blue and green portions of the spectrum, with exceptions such as certain loose-jawed fish that emit red light and Tomopteris, a genus of marine plankton, that emits yellow light; by contrast, scientists have synthetically engineered fluorescent molecules across the entire visible spectrum. In addition to GFP, its derivatives, and the Alexa Fluor family, a few other dyes among this wide repertoire are worth knowing [11].
Despite the current favoritism for the latter [fluorescence], both bioluminescence and fluorescence have their benefits and shortcomings; hence they should be thought of as complementary tools that together allow researchers to visualize a wider range of biological phenomena.

Indole and imidazole fluorophores belong to an effective class of dyes for DNA. DAPI, or 4′,6-diamidino-2-phenylindole, is a specific fluorescent stain from this class that absorbs purple light and emits blue light. Since it binds strongly to the minor groove of DNA, DAPI is the most commonly used molecular stain both for labeling cell nuclei and for detecting viral DNA in cell cultures [12]. Another fluorescent probe is fluorescein, a xanthene (yellow- to red-colored organic compound) dye that serves as a scaffold, or support structure, for indicator probes that can visibly show the presence or absence of a target compound of interest [11]. It is commonly used in forensic microscopy to detect bloodstains and in hospitals as a topical diagnostic tool for retinal and corneal diseases, and it was the first substance used to dye the Chicago River green on St. Patrick’s Day in 1962 [13]. Due to its solubility in water, fluorescein is also added to water in environmental tests to help locate leaks. A final fluorophore important in imaging applications is boron difluoride dipyrromethene (BODIPY). BODIPY dyes are useful because they can be tuned to different wavelengths through appropriate substitution reactions, in which one functional group in the compound is replaced by another, making it possible for them to serve the same functions as other dyes. For instance, BODIPY-FL, BODIPY-TMR, and BODIPY-TR act as surrogates for fluorescein, tetramethylrhodamine, and Texas Red, respectively, though with a higher intensity (brightness).
Furthermore, because BODIPY dyes are nonpolar and electrically neutral, they are often the preferred choice for labeling many electrically charged biochemical molecules such as proteins and nucleotides, because they do not perturb the target compound's electrical properties [14]. With fluorescent molecules, differences in environmental sensitivity and fluorophore modifications can provide scientists with a sophisticated means of elucidating biological and biochemical phenomena. However, there are drawbacks to using fluorescent probes in the laboratory that make them less ideal tools than bioluminescence. Fluorescent probes have limited use in long-term investigative tracking or in vivo imaging studies because they can undergo photobleaching, the rapid
photochemical destruction, or “quenching,” of a fluorophore. Photobleaching is a complication in time-lapse microscopy because fluorescent molecules are destroyed by the very light exposure that stimulates them to fluoresce [15]. While photobleaching can be exploited to study the diffusion of molecules through optical laboratory techniques such as FRAP (fluorescence recovery after photobleaching) and FLIP (fluorescence loss in photobleaching), it is nevertheless an obstacle that scientists have attempted to overcome with newer techniques such as two-photon microscopy, which reduces photobleaching by reducing the amount of illumination on live tissue samples, enabling imaging at depths of up to one millimeter [15]. Additionally, whereas fluorescence fades over time because of photobleaching, bioluminescence endures as long as the organism is alive. This advantage can help scientists study diseases over longer periods of time. Though bioluminescence is still not nearly as popular as fluorescence, judging by the relative number of scientific publications that use each tool, studies, mostly in the field of oncology, have successfully incorporated bioluminescent molecules into experiments. Photophores have been conjugated, or attached, to epidermal growth factor (EGF) to detect cancer and tumor progression [16]. Furthermore, Gaussia luciferase (GLuc), a small bioluminescent protein related to firefly luciferase, has been used as a biomarker reporter gene for monitoring tumor progression as well as for detecting levels of cortisol, a mammalian stress hormone [17]. To compensate for the short-lived illumination of fluorescence, researchers are currently trying to further the development of fluorescent probes with special chemical and photophysical properties that allow them to be “turned on and off” by light, enzymatic activity or other environmental changes.
This sort of ideal “chemical masking” is illustrated in channelrhodopsin, a light-activated ion channel originally found in unicellular green algae. Channelrhodopsin undergoes a conformational change, either opening or closing its pore to regulate cation flow, after being stimulated by and absorbing blue light. It is a fairly new tool used in neuroscience research to optically activate specific neurons [18]. Similarly, scientists hope to engineer an optical switch in fluorescent molecules because such a switch would suppress unwanted fluorescence signals and reduce background noise, thereby functioning as a bioimaging filter just as noise-canceling headphones cancel out unwanted noise through destructive interference, leaving an extremely clear audio recording. In fluorescence imaging techniques like FRET, fluorescence resonance energy transfer, a major technical limitation is the external illumination required to initiate fluorescence transfer. External illumination leads to background noise, either through direct excitation of the fluorophore or through photobleaching. However, in the parallel technique BRET, bioluminescence resonance energy transfer, using a bioluminescent luciferase, for example, rather than a fluorophore such as CFP (cyan fluorescent protein) bypasses the complication of photobleaching and still produces an initial photon emission compatible with YFP (yellow fluorescent protein) because luciferase does not depend
on another fluorophore to emit light [19]. Hence, it cannot photobleach, an advantage over fluorescence that researchers have been exploiting to study pathological protein-protein interactions [20]. From dinoflagellates to fireflies, GLuc to GFP, both bioluminescence and fluorescence have been around for thousands of years and continue to play an important role in scientific research. Despite the current favoritism for the latter, both bioluminescence and fluorescence have their benefits and shortcomings; hence, they should be thought of as complementary tools that together allow researchers to visualize a wider range of biological phenomena. Whereas biotechnology corporations have developed and continue to develop fluorescence applications, the widespread growth and availability of bioluminescent bacteria and microorganisms suggests that the prospect of extending the manufacture of microbial biotracers into the consumer market – be it for detecting contamination of meat or canned food, environmental pollutants such as oil spills and herbicides or pesticides, or military biodegradable landmarks that helicopters can spot as wind from their rotors kicks up dirt – seems extremely promising [21-23]. Given the biocompatibility of bioluminescent molecules as illustrated in previous tumor progression studies, it is not out of reach for scientists to bring bioluminescence to the same level of popularity as fluorescence. However, just
as there are photobleaching limitations to fluorescence, there exist shortcomings to bioluminescence technology. Because bioluminescent probes are genetically engineered, for instance, they could potentially carry a risk of random lethal mutations. Also, manufacturing these probes on a large industrial scale to meet consumer-market demand inevitably incurs huge costs, because of both patents and legal issues regarding species extinction, as firefly luciferase is currently still the most commonly used bioluminescent agent. However, given the natural abundance of bioluminescent dinoflagellates, it should not be too difficult to acquire and categorize bioluminescent extracts from these organisms. Perhaps in the near future, just as biotechnology companies such as Invitrogen and Sigma-Aldrich have developed an impressive catalog of fluorescent probes, industrial designers and current leading corporations may consider building similar catalogs for bioluminescent markers. The multitude of new, unforeseen biological phenomena and diseases calls for relatively novel tools; bioluminescence techniques such as BRET may help shed more light on the unresolved.
Reproduced from [25]
References
1. Cabeza de Vaca, Álvar Núñez. (1542) La Relación. Martin A. Dunsworth, José B. Fernández. Arte Público Press, Houston, Texas (1993)
2. Campbell, Neil A., and Jane B. Reece. “Chapter 28 Protists.” Biology. San Francisco: Pearson, 2005. 555. Print.
3. Miller, S.D.; Haddock, S.H.D.; Elvidge, C.D.; Lee, T.F. (2005) Detection of a bioluminescent milky sea from space. Proc. Nat. Acad. Sci. 102: 14181-14184.
4. Campbell, Neil A., and Jane B. Reece. “Chapter 10 An Introduction to Metabolism.” Biology. San Francisco: Pearson, 2005. 141. Print.
5. Campbell, Neil A., and Jane B. Reece. “Chapter 27 Prokaryotes.” Biology. San Francisco: Pearson, 2005. 545. Print.
6. Shimomura, O. (1995) A short story of aequorin. Biol. Bull. 189 (1): 1-5.
7. Zacharias, D.A.; Tsien, R.Y. (2006) Molecular biology and mutation of green fluorescent protein. Methods Biochem. Anal. 47: 83-120.
8. Maxwell, K.; Johnson, G.N. (2000) Chlorophyll fluorescence – a practical guide. J. Exp. Bot. 51 (345): 659-668.
9. Woodward, R.; Doering, W. (1944) The Total Synthesis of Quinine. J. Am. Chem. Soc. 66 (849).
10. Molecular Probes: The Handbook. Invitrogen. <http://www.invitrogen.com/site/us/en/home/References/Molecular-Probes-The-Handbook.html>.
11. Lavis, L.D.; Raines, R.T. (2008) Bright ideas for chemical biology. ACS Chem. Biol. 3: 142-155.
12. Kapuscinski, J. (1995) DAPI: a DNA-specific fluorescent probe. Biotech. Histochem. 70 (5): 220-233.
13. Colors of Chemistry – March 2010. CAS, a division of the American Chemical Society. <http://www.cas.org/aboutcas/colors/2010march.html>.
14. Loudet, A.; Burgess, K. (2007) BODIPY Dyes and Their Derivatives: Syntheses and Spectroscopic Properties. Chem. Rev. 107 (11): 4891-4932.
15. Alberts, B.; Johnson, A.; Lewis, J.; Raff, M.; Roberts, K.; Walter, P. “Chapter 9 Visualizing Cells.” Molecular Biology of the Cell, 5E. New York: Garland Science, 2008.
16. Liang, Q. et al. (2004) Noninvasive adenovirus tumor retargeting in living subjects by a soluble adenovirus receptor-epidermal growth factor (sCAR-EGF) fusion protein. Molecular Imaging & Biology. 6 (6): 385-394.
17. Rowe, L.; Dikici, E.; Daunert, S. (2009) Engineering Bioluminescent Proteins: Expanding their Analytical Potential. Anal. Chem. 81 (21): 8662-8668.
18. Kramer, R.H.; Fortin, D.L.; Trauner, D. (2009) New photochemical tools for controlling neuronal activity. Curr. Opin. Neurobiol. 19 (5): 544-552.
19. Pfleger, K.D.; Eidne, K.A. (2003) New technologies: bioluminescence resonance energy transfer (BRET) for the detection of real time interactions involving G-protein coupled receptors. Pituitary. 6 (3): 141-151.
20. Pfleger, K.D.; Eidne, K.A. (2006) Illuminating insights into protein-protein interactions using bioluminescence resonance energy transfer (BRET). Nat. Methods. 3 (3): 165-174.
21. Pellinen, T. et al. (2002) Detection of Traces of Tetracyclines from Fish with a Bioluminescent Sensor Strain Incorporating Bacterial Luciferase Reporter Genes. J. Agric. Food Chem. 50 (17): 4812-4815.
22. Girotti, S.; Bolelli, L.; Roda, A.; Gentilomi, G.; Musiani, M. (2002) Improved detection of toxic chemicals using bioluminescent bacteria. Analytica Chimica Acta. 471 (1): 113-120.
23. Reitz, Stephanie. “Military increases interest in bioluminescence.” NavyTimes. 11 Sept. 2010. <http://www.navytimes.com/news/2010/09/ap-military-researchingbioluminescence-091110/>.
24. http://photolibrary.usap.gov/AntarcticaLibrary/JELLYFISH.JPG
25. http://oceanservice.noaa.gov/facts/biolum1.jpg
Bonnie Sheu is a junior at the University of Chicago. She is a Biological Sciences major, with a specialization in neuroscience/immunology.
Bio Art: Biotechnology’s Slippery Slope Anna Zelivianskaia
A circle of tall, thin flames licks the spindly legs of a helpless cockroach fastened to a wooden stick. This black-and-white photograph is one of the first images the viewer is confronted with in the segment titled “Executions” from the art exhibit “American Cockroach” by Catherine Chalmers. Chalmers is a popular “bio-artist” with degrees in engineering and painting, and this work can be placed under the umbrella of “Bio Art,” a form of art that has inspired much controversy in the 21st century. Bio Art engages science by using biological materials or living organisms as both the substance and the subject of each piece—it also often incorporates scientific techniques, thereby linking life and biotechnology [1]. Using this definition, we can examine two pieces that are representative of the Bio Art movement—or at least as representative as two pieces can be in this broad and loosely defined artistic form. The works “Executions” and GFP Bunny, created by Catherine Chalmers and Eduardo Kac, respectively, employ very different approaches to convey distinct messages; however, both can be defined as Bio Art and have inspired great controversy amongst the public. The “Executions” piece consists of several images of cockroaches being executed in various ways—hanging, burning, electrocution, etc.—and three videos documenting various features and reactions of the cockroach; however, no live cockroaches were harmed in this production. The first video is titled “Burning at the Stake,” the second is called “Squish,” and the third is “Roach Resurrection” [2]. The sequence is a startling display of images that challenges the audience's viewpoints about everyday bugs and the divide between nature and culture by depicting cockroaches close up and in human-like situations.
One of the topics Chalmers wants to explore in this art exhibition is an interesting dichotomy in human psychology: people kill cockroaches every day when they find them in their homes, but they are appalled when faced with cockroach “executions.” In fact, Chalmers received a great deal of hate mail following the exhibit's premiere and the appearance of her execution photographs in the New York Times, even though, as the letters failed to acknowledge, all the executions were staged with dead insects [2]. Only the illusion was important to Chalmers to effectively communicate her purpose, as her work revolves around the re-examination of
the relationship between human beings and nature as it plays out in the art world [2]. The cockroaches' deaths in the images mirror human execution methods, which suggests a deeper link between humans and insects than is outwardly apparent. A viewer who empathizes with the cockroach after viewing the artistic piece may question the validity of the treatment that cockroaches receive even when they are not in an art exhibit. By showing cockroaches dying as humans do, Chalmers effectively draws humans into the bug world and challenges the way people relate to insects [3].
Reproduced from [7]
Chalmers not only re-examines the relationship between nature and culture, but also challenges how humans view nature. Because the cockroaches are ‘executed’ in the same ways that humans are, the distinction between humans and the bugs they squish becomes blurred. The three videos following the images further explore the subject of the cockroach from its ghastly death to its capability for complex dance movements and finally to its eerie resurrection. Since the execution images and aforementioned videos are viewed by the public in
this specific order, Chalmers re-defines how we view nature on its own—we start with the death of the cockroach, move on to its capabilities as a live organism, and then follow it to its resurrection, as opposed to concluding with the natural death. Perhaps by the time the mass of cockroaches wakes up, we are supposed to come to an altered emotional state
and begin to relate more to this insect? By placing the videos in such a “backwards” order—with the bugs initially dying and then waking up—and by executing cockroaches in the same ways that humans, themselves, are executed, Chalmers suggests that humans should not have such an adverse reaction to this organism. Fortunately, Chalmers conveys this message without harming the bugs. However, this could be done by a different artist in an unethical way, and the hate mail Chalmers received should serve as a warning that Bio Art has the potential to be employed in a dangerous way.
Another bio artist, Eduardo Kac, coined the term “Bio Art” in 1997 to refer to works that involve “biological agency” [4]. He created a new life form, Alba, for a work entitled GFP Bunny by inserting the green fluorescent protein (GFP) into the genome of a fertilized rabbit egg cell. GFP is easily inserted into the interior of cells and allows them to glow green in the presence of UV light. A few months after the procedure, the transgenic embryo gave rise to a healthy rabbit in the south of France that changed color under certain light [4]. Creating a new, transgenic life form obviously raised many ethical concerns, and Kac realized that “this must be done with great care, with acknowledgement of the complex issues…with a commitment to respect, nurture, and love the life thus created” [4]. However, could the situation where a transgenic organism is created spiral out of control if the artist did not understand the complexity or ethical issues to the same degree? Could the new transgenic organism easily have been mistreated because it is different from the rest of its species or because the artist feels that he has complete control since he “created” the organism? Furthermore, new ethical issues with Alba were raised when the director of the French institute where Alba was born refused to allow the bunny to leave, even though the original plan was for Alba to come home to his “family” in Chicago [4]. Since Kac no longer had possession of the rabbit, there was more potential for its mistreatment, and the director's decision was quite controversial. Throughout the next two years, the debate about whether Alba should be released raged on in newspapers, internet forums, Kac's “Alba Guestbook,” and a public campaign in Paris organized by Kac [4]. In 2002, Kac even staged a new exhibit called Free Alba! at the Julia Friedman Gallery in Chicago, which included bunny T-shirts, posters, drawings highlighting humans' closeness to the “animal other,” and Alba flags [4]. His desire to make the GFP bunny and the successive Free Alba! campaign revolved around wanting to show that humans and other species are evolving in new ways, and his new works emphasize life and evolution [4]. He also wanted to further our understanding of nature, believing that transgenic works are natural—after all, genes move from one species to another in the wild all the time, which allows for the natural creation of new species
Reproduced from [6]
or furthers evolution of current species [4]. Kac uses Alba as an agent to challenge our conception of what nature is, as he emphasizes our close relationship to animals in his campaign. Fortunately, Kac realized that all transgenic species created must then be treated with respect and that one must be aware that they are powerful works that can mobilize people around a certain idea or life form. However, there is a lot of potential for ethical issues in this field. One can understand the potential damage that could be inflicted on life forms if a less sensitive artist became involved. For example, another artist could create a transgenic form that was missing a gene crucial for a good quality of life or could make a new organism and then treat it inhumanely. Kac has created a transgenic organism to convey certain messages about nature and he has vowed to treat it humanely, but his work could potentially lead to the mistreatment of other life forms. The two works by Chalmers and Kac provide examples of the statements that Bio Art can convey and bring into sharp view the ethical issues and lack of regulations for this artistic form. These two well-known artists have approached Bio Art in an ethical manner, determined not to harm living organisms just to create art. Even though their works may inspire shock and awe upon first viewing, careful consideration of the approach and the artists' goals demonstrates that the works were created in a completely ethical manner. However, the controversy over the works—particularly Chalmers's “Executions”—illustrates the slippery slope embedded in this art form. If Chalmers wants to use live cockroaches, there is no one there to stop her, and is that unethical? Perhaps killing cockroaches is not unethical since people do that on a daily basis, but what if she had wanted to use mammalian organisms?
Reproduced from [8]
In the same vein, Kac is committed to treating Alba in a humane manner, with respect and love—but what if another, less conscientious artist had created Alba instead? Many people are even dissatisfied with Alba's creation on the basis of animal rights; they claim that the creation of transgenic organisms should not be practiced [5]. On this basis, should Bio Art be constrained by some ethical boundaries, or even prohibited? Despite the criticism, Bio Art should not be prohibited, because it is possible to engage in it in an ethical manner, but all bio artists should be keenly aware of its implications. Where to draw the line in terms of ethics is a fuzzy question with very serious consequences, and artists should take care
not to harm living organisms that they use or create. Perhaps Bio Art should be loosely regulated and abide by guidelines that prohibit the harming of any living organism and insist on the ethical treatment of all Bio Art creations. Ideally, not even cockroaches would be killed for the sake of entertainment. Even though the exploration into Bio Art should continue, artists should proceed with caution and not overstep the aforementioned boundaries. Operating under the assumption that Bio Art remains in the realm of ethical practice, it has been suggested that Bio Art “bridges the gap between art and science,” and MSNBC even ran an article under this very title in 2007 [5]. The author of the article, Pasko, claims that many bio-artists desire to make biotechnology more accessible through their artistic exhibits [5]. Since Bio Art frequently uses cutting-edge technological techniques, such as transgenic creation and nanotechnology, this becomes the foundation for the statement that bio-artists are uniting two fields that were formerly thought to be practically opposite of each other. However, this claim is an inaccurate reflection of the methods and goals of Bio Art, which is not subject to the same strict guidelines that scientific experiments must abide by. The overall goal of science is to further society's understanding of the natural world in some way and to hopefully apply that knowledge to human benefit. The goal of Bio Art, on the other hand, is political and social criticism that ideally makes the viewer reconsider the way he is using the knowledge or power he already has [5]. The different purposes of Bio Art and science imply that the gap between them can never genuinely be overcome. Art will always strive to make a statement while science will aim to elucidate the laws of nature and benefit from them.
Furthermore, it is telling that bio-artists receive complaints from animal rights activists while scientists need to get advance permission from a research safety department before they even come near an animal. Bio Art should continue to grow as an art form because it conveys messages relevant to society via innovative methods, but it should fit itself carefully within society's accepted ethical boundaries, and it should not be portrayed as a magical link between two different disciplines.
Anna Zelivianskaia is a senior studying Biology and Anthropology at the University of Chicago.
References
1. Zurr I, Catts O. The Ethical Claims of Bioart: Killing the Other or Self-Cannibalism. Aust. and New Zealand Journal of Art: Art & Ethics. 2004; 5(1): 167-188.
2. Thorson, A. Artist Explores Human/Nature Relationship Through Bug ‘Executions.’ The Kansas City Star. 2003 Sep. 28; 1.
3. Boxer, S. “Cockroaches as Shadow in Metaphor; An Artist Began Chilling and Decorating Bugs, But Moved on Depicting Their Executions.” The New York Times. 2003 May 8; 1.
4. Kac, E. Life Transformation--Art Mutation. In: Kac, E. editor. Signs of Life: Bio Art
and Beyond. Cambridge, MA: The MIT Press; 163-180.
5. Pasko, J. Bio-artists bridge gaps between arts, sciences. [document on the internet] MSNBC.com: Technology and Science; 2007 Mar 4 [2010 May 20]. Available from: http://www.msnbc.msn.com/id/17387568/ns/technology_and_science-science/
6. http://ehp.niehs.nih.gov/docs/2008/116-12/roach.jpg/
7. http://www.nal.usda.gov/awic/pubs/Rabbits/rabbits.jpg/
8. http://www.bnl.gov/bnlweb/pubaf/pr/photos/2008/05/Figure_GFP-300.jpg
CMU
Music and the Mind: Can Music Benefit Those with Autism? Elizabeth Aguila
Ashley is a child who was diagnosed with pervasive developmental disorder (PDD), which is part of the autism spectrum, when she was 21 months old. When her mother and grandmother tried to get her attention away from the television by calling her name, she would not look up. When they banged around pots and pans, she still did not respond. At 21 months, Ashley still had not learned how to speak and only grunted. One of Ashley's psychologists suggested that she take part in music therapy, in which she would listen to Mozart's music for several hours per day for several weeks. One day, when her parents were driving home from a therapy session, Ashley spoke her first words: “I want cookie.” Ever since then, Ashley has been making even more progress, and today, like many other 10-year-old girls, she loves Hannah Montana and High School Musical and can now use language to interact with others [1]. Ashley's story is one example of the significant relationship people have always had with music. Its presence in every culture is an indication of its universality [2]. Using music as a method of healing began after World War I and World War II, when community musicians went to hospitals to play music for veterans suffering from physical and emotional trauma. When doctors noticed that patients responded positively - physically, cognitively and emotionally - to the music, they asked the hospitals to hire musicians to play for the patients. Soon it was clear that these musicians required more training before entering hospitals, such as in how to interact with patients and how to perform music for their benefit. Due to patients' positive responses to the music, the field of music therapy was born in 1940. To train musicians for therapy, Michigan State University founded the first music therapy degree program in 1944. The World Health Organization (WHO) first recognized music as a form of therapy in 1996.
As an increasing number of people studied and became music therapists, the American Music Therapy Association (AMTA) was founded fifty-four years later, in 1998 [3]. Today, there
are more than 70 colleges and universities that have degree programs in music therapy. Thanks to modern technology and interdisciplinary researchers, the field of music therapy has been growing to incorporate many fields such as neuroscience, cognitive science, brain imaging, and psychology [4]. Over the years, different types of music therapy have been developed. In Music Therapy: An Introduction, Jacqueline Peters describes music therapy as “a planned, goal-directed process of interaction and intervention, based on assessment and evaluation of individual clients' specific needs, strengths, weaknesses...to influence positive changes in an individual's condition, skills, thoughts, feelings, and behaviors” [5]. In other words, music therapists use the ways the mind and body are stimulated when patients listen to and perform music. Music therapy promotes one-to-one interaction, creating a relationship between the music therapist and the patient. There are five main types of music therapy. First is receptive music therapy, in which the client listens to live or recorded music. Second is compositional music therapy, in which the client creates music. Improvisational music therapy is when the therapist guides the client to spontaneously create music. Recreative music therapy is when the client learns to play an instrument, and activity music therapy is when the therapist sets up musical games [6]. Autism is a lifelong developmental disability. It is often referred to as ASD, or autism spectrum disorder. People with autism have three main types of impairment: difficulty with social communication, social interaction, and social imagination. Social communication impairment involves limited speech and difficulty in understanding facial expressions, tone of voice, and sarcasm.
Autistic individuals may also have difficulty with social interaction, and find it hard to recognize and understand people's emotions and implicit social cues, thus impairing their ability to form relationships with others.
Reproduced from [18]
Finally, a defect in social imagination makes it difficult for people to understand and predict others' behavior, understand
abstract ideas, predict what can happen next, and prepare for change and plan for the future. Unfortunately, people with autism often have difficulty with these tasks [7].
The theory behind music therapy is that since people have an innate affinity for music, they should continue to respond to it even after the onset of physical, cognitive, or emotional disabilities. One such disability that is positively affected by music therapy, as seen with Ashley, is autism. The National Autistic Society claims that case studies have shown “music can stimulate and develop more meaningful and playful communication in people with autism.” They also claim that since people with autism often have idiosyncratic and avoidant styles of communication, music therapy can encourage more self-awareness and other-awareness, leading to more social interactions [8].
Most of the research investigating the effect of different types of music therapy on autistic individuals has been in the form of case studies. Case studies can be categorized based on what aspect of autism therapists are trying to improve. There can be music therapy treatment based on the social, behavioral, and communicative abnormalities of autism. One case study used musical interaction therapy to improve the socializations, reciprocal interactions, and eye contact between an autistic three-year-old child and his mother. The results showed that after music therapy, the child had increased eye contact and initiations of involvement with the mother [9]. Researchers Starr and Zenker also studied how keyboard sharing during music therapy increased the socializing skills of a five-year-old boy with autism. The therapy increased the boy's eye contact during sessions. However, the researchers did not statistically analyze their data [10].
In the study of how music therapy improves behavioral abnormalities of autism, Griggs-Drane and Wheeler, a music therapist and educational consultant, respectively, in the Richmond Hospital Education Program, performed a study in 1997 with a blind, female adolescent with autism. The client was asked to listen to music, sing with music, and play instruments to decrease her self-destructive behavior. The study did show a decrease in her destructive behavior [11].
Finally, for the treatment of the communicative abnormalities of autism, researchers Miller and Toca did melodic intonation therapy with a three-year-old, nonverbal male with autism. The music therapist sang to the child while tapping the rhythm of the words on the boy's body. The goal of the therapy was to increase the patient's understanding of a spoken language. This 1979 study claimed that the child began speaking words during and outside the therapy sessions [12]. However, the researchers did not include a qualitative analysis of changes in the child's communication.
Although many case studies have shown social, behavioral, or communicative improvement in people with autism after music therapy, many of these case studies lack sufficient statistical analysis or generalizability. Accordino, Comer, and Heller, researchers at Princeton University who wrote an article examining the current research on music therapy with individuals with autism, have already criticized the use of case studies to show that music therapy is successful in treating people with autism. In their article, they state that although case studies provide significant details about particular patients and their responses to music therapy, these studies cannot be generalized [13].
Music therapists argue, however, that case studies are only appropriate to show the effectiveness of music therapy for autistic individuals because treatments are individual and specific to each client. But Accordino, Comer, and Heller respond by stating that researchers can account for the differences between individuals in therapy through solid empirical designs, which, before 2006, had not occurred in Reproduced from [17] © 2010, The Triple Helix, Inc. All rights reserved. UChicago.indb 3 1
THE TRIPLE HELIX Fall 2010 | 31
Reproduced from [19]
this field. They also claim that it is important for researchers to monitor changes occurring both during and outside of therapy: many of the case studies described claimed that music therapy improved certain behaviors in individuals with autism, but failed to analyze possible external factors, such as the environment outside the therapy sessions or the patients' aging and development, as alternative explanations for the improvements [14].

A few studies investigating the effect of music therapy on people with autism have tried to use more subjects. For instance, in 2004 researchers Gold, Voracek, and Wigram, from Sogn og Fjordane University College in Norway, performed a meta-analysis of 11 empirical investigations of music therapy and determined that it had a significant effect on outcomes [15]. Also, in 2007, Boso et al., a group of researchers from the University of Pavia in Italy, studied the effect of long-term interactive music therapy on young adults with severe autism. They acknowledged that there is insufficient data about the potential effects of music therapy in autism, and therefore investigated whether interactive music therapy could enhance the behavior of eight young adults with severe autism. Their results stated that after 52 weeks of music therapy, all subjects showed improvement. Unlike the case studies described before, they used statistical analyses and also studied potential external reasons for the subjects' improvements, thereby providing more thorough support for the beneficial effects of music therapy [16].

Although there have been many case studies supporting the belief that music therapy positively affects people with autism, there is still a large need for empirical investigations of music therapy's impact on individuals with autism. Perhaps this can be accomplished by merging several different fields, including music cognition, psychology, neuroscience, and music therapy. There may also be a stronger understanding of music therapy's effects if researchers studied why certain aspects of music can lead to behavioral changes at the basic level. If we had a greater understanding of music's effects on normal subjects, we might be able to build on this knowledge to determine not only whether music therapy has an effect on people with autism, but also why. Understanding the mechanism through which music influences us may help us improve current therapies and widen the scope of music therapy to other neurological disorders. If we can answer these questions, perhaps we can also expand our knowledge of how music therapy can enrich a patient's quality of life.

Elizabeth Aguila is a senior studying Biology and Psychology at Carnegie Mellon University.

References
1. Eiserike, J. (2008). Music benefits children with autism. TherapyTimes.com. Retrieved August 16, 2010. Online: http://www.therapytimes.com/content=0402J84C48968486404040441
2. Serafine, M.L. (1988). Music as Cognition: The Development of Thought in Sound. New York: Columbia University Press.
3. American Music Therapy Association. (1999). Frequently asked questions about music therapy: What is the history of music therapy as a health care profession? Retrieved April 22, 2010. Online: http://www.musictherapy.org/faqs.html
4. Music therapy. Retrieved June 3, 2010. Online: http://www.cancer.org/docroot/eto/content/eto_5_3x_music_therapy.asp
5. Peters, J.S. (2000). Music Therapy: An Introduction (2nd ed.). Springfield, IL: Charles C Thomas Publishers Ltd., p. 2.
6. Accordino, R., Comer, R., and Heller, W.B. (2006). Searching for music's potential: A critical examination of research on music therapy with individuals with autism. Research in Autism Spectrum Disorders, 1(1), 101-115. doi: 10.1016/j.rasd.2006.08.002
7. Autism: What is it? Retrieved April 20, 2010. Online: http://www.nas.org.uk/nas/jsp/polopoly.jsp?d=211
8. Bell, E. (2009). Music therapy. The National Autistic Society. Retrieved April 20, 2010. Online: http://www.nas.org.uk/nas/jsp/polopoly.jsp?d=528&a=3348
9. Wimpory, D., Chadwick, P., and Nash, S. (1995). Brief report: Musical interaction therapy for children with autism: An evaluative case study with two-year follow-up. Journal of Autism and Developmental Disorders, 25, 541-552.
10. Starr, E., and Zenker, E. (1998). Understanding autism in the context of music therapy: Bridging theory and practice. Canadian Journal of Music Therapy, 6, 1-19.
11. Griggs-Drane, E.R., and Wheeler, J.J. (1997). The use of functional assessment procedures and individualized schedules in the treatment of autism: Recommendations for music therapists. Music Therapy Perspectives, 15, 87-93.
12. Miller, S.B., and Toca, J.M. (1979). Adapted melodic intonation therapy: A case study of an experimental language program for an autistic child. Journal of Clinical Psychiatry, 40, 201-203.
13. Accordino et al., 2006.
14. Accordino et al., 2006.
15. Gold, C., Voracek, M., and Wigram, T. (2004). Effects of music therapy for children and adolescents with psychopathology: A meta-analysis. Journal of Child Psychology and Psychiatry, 45, 1054-1063.
16. Boso, M., Emanuele, E., Minazzi, V., Abbamonte, M., and Politi, P. (2007). Effect of long-term interactive music therapy on behavior profile and musical skills in young adults with severe autism. Journal of Alternative and Complementary Medicine, 13(7), 709-712.
17. http://www.franklincountyohio.gov/probate/images/music_notes.jpg
18. http://www.niehs.nih.gov/news/newsletter/2008/april/images/autism.jpg
19. http://www.cdc.gov/ncbddd/autism/images/autism-topics-photo1.jpg
© 2010, The Triple Helix, Inc. All rights reserved.
UC BERKELEY
An Ever-Evolving Epidemic: Antibiotic Resistance and its Challenges Kevin Berlin
Since their discovery in 1928, antibiotic medications have irreversibly altered the way medicine is practiced worldwide. Infections that carried dire or deadly consequences for afflicted patients prior to the advent of these "wonder drugs" suddenly became trivial to treat [1]. But in recent years, antibiotic medications that used to wipe out many bacterial infections have become less and less effective in clinical practice. This noticeable drop in antibiotic efficacy, referred to colloquially as "drug resistance," has gone from a textbook hypothesis to a well-publicized, pressing public health issue in a very short amount of time [2]. Because there is heavy controversy, and because antibiotic-resistant infections carry important implications, the basic science behind this phenomenon is often ignored in the public debate. Given the significant risk posed by antibiotic-resistant infections, it is important to understand how and why infectious bacteria become resistant to antibiotics, and to become aware of the steps that the medical community can take to reduce antibiotic-resistant infections in the future.

To understand antibiotic-resistant infections, it is helpful to reflect upon the discovery and usage history of antibiotic medications. In 1928, British scientist Alexander Fleming accidentally discovered penicillin while looking at a bacterial culture that had died when exposed to mold [1]. When penicillin's antibiotic properties came to be formally understood just a few years later, the race to mass-produce penicillin began. Scientific improvements, coupled with the medical demands of World War II, brought commercial-grade medicines based on penicillin to the front lines of the war on bacterial infections. Following World War II, antibiotics were integrated into civilian medical treatment plans and used for a wider variety of infections than those encountered in the war. Many menacing diseases of the era, from syphilis to tuberculosis, were cured using antibiotics. To the general public, these medications seemed like panaceas, able to cure just about any ailment. In this era, antibiotics were handed out by doctors on an increasingly frequent basis in order to satisfy patients' desires for a more aggressive treatment regimen for common maladies. The demand for a quick cure to a pesky ailment outweighed the medical reality that a significant number of minor infections are quickly mitigated by the body's immune response without antibiotics. In addition, antibiotics were frequently prescribed for patients who suffered from viral infections, despite the fact that antibiotics are ineffective against these pathogens [3]. When the viruses or minor infections were cleared from their systems by their own body's immune response, patients mistakenly associated the relief with the antibiotics and requested them again when they felt ill. Cases such as these, in which patients take antibiotics unnecessarily, are largely to blame for antibiotic resistance in bacterial pathogens [4].

The root of this problem lies in the way that antibiotics and bacterial pathogens interact. When a patient is put on antibiotic drugs, the medication is able to kill or halt the growth of most of the offending bacteria. A small number of bacterial pathogens, however, will be able to resist the chemical effects of the medication. The ability to survive when exposed to the antibacterial agent is not a response to the medication, but instead a pre-existing genetic characteristic of the bacteria [3]. Due to these characteristics, the bacterium is able to resist the antibiotic and propagate in the host system at levels that will make the host of the infection sick again. In this way, the bacteria that prove themselves resistant to that specific antibiotic drug can be transmitted to another host, where the cycle is repeated again with a new antibiotic. At any point during this cycle, bacteria are able to transmit the genetic material that allowed them to resist antibiotic treatment to other, non-resistant bacteria [3], through mechanisms including bacterial conjugation and reproduction. In essence, this method of developing and spreading antibiotic resistance is a form of natural selection [3].

Using an antibiotic drug to treat a minor infection manages to create drug-resistant variations of the bacterial pathogen, which significantly increase the risk to the next host that bears the pathogen. If antibiotics were used only when necessary, resistant lines would still arise by virtue of their use; but when antibiotics are used properly, the risk of creating an antibiotic-resistant bacterial strain is outweighed by the risk of not treating a serious infection. Antibiotics are misused when a doctor simply prescribes them to satiate a patient's demand for a medicine. In such cases, the severity of the patient's infection, however it may seem to the patient, does not outweigh the risk of creating an antibiotic-resistant strain, and bacterial strains are unnecessarily made resistant to a specific antibiotic. Because of this, variations on what
Reproduced from [14].
was first a minor pathogen can become resistant to a litany of antibiotic drugs, posing serious consequences for anyone who is infected with these bacteria late in the cycle of drug resistance [4]. Over the past 40 years, clinical and research observations have confirmed this model of antibiotic resistance generation [3]. One of the most famous examples of antibiotic-resistant bacterial infections is the rise of methicillin-resistant Staphylococcus aureus (MRSA). In 1961, hospitals in the UK began to report a strain of staphylococcus bacteria that showed resistance to a new antibiotic medication called methicillin [5]. At the time, doctors referred to methicillin as the "antibiotic of last resort," a term used in medical practice to describe a drug that eradicates a bacterial infection quickly and completely. Resistance to methicillin developed quickly, as MRSA's first reported cases arose just two years after the antibiotic was introduced into clinical practice [5]. MRSA came to the US shortly afterwards, when a Boston-area hospital reported the country's first MRSA infections. Twenty years later, the drug resistance cycle has repeated itself: a large number of staph infections contracted outside of hospitals are MRSA derivatives that
have gained resistance to today’s once new and promising antibiotic of last resort [5]. Though species such as cattle and poultry can play host to drug resistant bacterial pathogens, human carriers are largely responsible for mediating the spread of drugresistant MRSA out of the clinical environment of hospitals. Salmonella and E. coli are two other prominent bacterial pathogens that have shown themselves capable of developing resistance to antibiotics while residing in poultry or cattle hosts [8]. Currently, there exist very few drug-resistant strains of these bacteria, but it is certain that more drug-resistant strains of the bacteria will arise over time [8]. This potential for drug-resistance is driven in large part by the use of antibiotics in the rearing of cattle and livestock in several countries throughout the world [8]. Antibiotics in this context are used as a preventative measure to halt potential bacterial infections in the livestock. Each time cattle and poultry are treated with antibiotics, a small amount of bacteria in their system survives that is resistant to the drug. As more and more livestock are treated with drugs, there is potential for a drug-resistant strain of these bacteria to be produced. There is evidence that several antibiotic resistant strains of E. coli and Salmonella exist in poultry being sold in various countries throughout the world, though it does not appear that these strains infect humans on a regular basis [8]. If a strain of drug-resistant E. coli gains the ability to infect humans, it could cause problems just as serious as MRSA [6]. The basic idea behind antibacterial resistance makes it highly improbable that a drug will be developed that bacterial pathogens never gain resistance towards. At the same time, development of new antibiotic medications will continue in order to treat those that come down with serious infections. 
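The natural-selection dynamic described earlier, where a drug kills susceptible cells while a pre-existing resistant minority survives and regrows, can be sketched with a toy simulation. Every rate and population size below is an illustrative assumption, not a figure from this article:

```python
# Toy model of antibiotic selection. A pre-existing resistant minority
# survives each treatment round and comes to dominate the population.
# All parameters are illustrative assumptions, not measured values.

def treat_and_regrow(susceptible, resistant, kill_rate=0.999,
                     carrying_capacity=1_000_000):
    """One treatment round: the drug kills most susceptible cells
    (resistant cells are unaffected), then the survivors regrow to the
    carrying capacity in proportion to their post-treatment numbers."""
    survivors = susceptible * (1 - kill_rate)
    total = survivors + resistant
    if total == 0:
        return 0.0, 0.0
    scale = carrying_capacity / total  # regrowth preserves the ratio
    return survivors * scale, resistant * scale

# Start with one resistant cell per million.
s, r = 999_999.0, 1.0
for round_number in range(1, 4):
    s, r = treat_and_regrow(s, r)
    print(f"after round {round_number}: {r / (s + r):.1%} resistant")
```

Under these assumptions the resistant fraction climbs from about 0.1% after one round to over 99% after three, mirroring the repeated-treatment cycle the article describes.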
Given the highly adaptable nature of the bacterial response, it is clear that future therapeutic response to non-life-threatening bacterial infection must focus on both pharmaceutical and
non-pharmaceutical methods of reducing drug-resistant infection rates. Some industrialized countries, such as Norway, have taken a radically different approach to treating bacterial infection. Since the 1980s, when MRSA began to be observed in Norway, the country has implemented a unique strategy for combating the pathogen. Instead of focusing treatment upon the most recent antibiotic, Norway went back to basics and examined the root causes of bacterial infection [7]. With these causes in mind, the country undertook a program that increased hospital sanitation standards, outlawed the use of some common antibiotic medications, and encouraged doctors to prescribe antibiotics only when absolutely necessary [7]. Norway now has one of the lowest MRSA rates in the world, currently hovering around 1% of all reported staph infections, suggesting that Norway's model might be of considerable value to countries dealing with drug-resistant infection problems.

By contrast, the United Kingdom has taken a different approach to lowering the number of MRSA cases contracted in the country. Since the initial rise of MRSA strains in the UK, the country has traditionally had one of the highest rates of infection. A program initiated by the UK's National Health Service in 2003 has shown impressive results; the number of new MRSA infections reported was cut in half between 2003 and 2008 [9]. During this time, the NHS publicly committed itself to lowering the MRSA infection rate through a "zero-tolerance" policy on any unsanitary behavior (i.e., not washing hands with an antibacterial soap to reduce transmission of bacteria from patient to patient via the hands, not changing bed sheets frequently, etc.) in its healthcare facilities [10]. While it may seem contradictory that the NHS mandated hand washing with antibiotic soap to combat antibiotic resistance, this sort of soap has not been shown to contribute to antibiotic resistance [12].
The NHS has committed itself to finding additional ways to combat MRSA, as it acknowledges that these sanitation measures need to be augmented to further reduce the number of cases [10]. The NHS has funded research into sanitation innovation, such as the use of collapsible hard screens with integrated sinks to promote hand washing and separation of infected patients, but these technologies are several years away from implementation in NHS-run trust hospitals. Innovations such as these will be the
keystone of NHS’ MRSA abatement strategy, although their adoption timeline is clouded somewhat by the cost of these high-tech devices [11]. With Norway and the UK serving as examples, other industrialized countries can learn that simply relying upon the latest antibiotic regimen to cure bacterial pathogens is not sufficient to prevent the spread of infection. If it were, antibacterial-resistant pathogens would have been extinguished years before they became a problem of significant magnitude. Norway was able to reduce its infection rate by controlling the amount of antibiotics prescribed to its citizens and by monitoring the sanitation of its health facilities. Whether or not Norway’s plan will work for other countries with lesscentralized healthcare management systems remains to be seen. In most countries, it would be exceedingly difficult to ban the prescription of certain antibiotics as Norway did. The UK’s plan to cut MRSA infections through a coordinated overhaul of hospital sanitation may work better for countries such as the United States, that lack the ability to put Norway’s stringent laws in place. Either way, the US’ current drug-resistant infection abatement strategy clearly needs to be changed. MRSA is now the greatest health concern associated with hospitalization in the United States – roughly 318 in every 100,000 people admitted to a hospital each year contract some form of the infection, and 63 of these people subsequently die from this infection [13]. Despite the numerous government sponsored websites and community outreach initiatives, the US needs to mount a more aggressive policy against antibiotic overuse and drug resistant infections in order to combat the health risks associated with these infections. Perhaps the United States can implement some of the policies employed by Norway and the UK to help reduce infection rates. 
With any luck, these examples might serve as a resource for new ideas in a fight against what the World Health Organization calls a serious threat “to the major gains in life expectancy experienced during the latter part of the last century” [2].
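The figures quoted above imply a strikingly high death rate among those infected; a rough arithmetic sketch, using only the two numbers from the article, makes this explicit:

```python
# Back-of-the-envelope arithmetic on the quoted US MRSA figures:
# roughly 318 infections per 100,000 hospital admissions per year,
# with 63 of those infected subsequently dying.
infections_per_100k = 318
deaths_per_100k = 63

infection_rate = infections_per_100k / 100_000  # fraction of admissions
case_fatality = deaths_per_100k / infections_per_100k

print(f"infection rate: {infection_rate:.2%} of admissions")
print(f"case fatality:  {case_fatality:.0%} of those infected")
```

On these numbers, roughly one in five hospital patients who contract MRSA dies of it, which underlines the article's call for a more aggressive abatement policy.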
References
1. The Nobel Foundation. Sir Alexander Fleming: Biography. [Online]. Available from: http://nobelprize.org/nobel_prizes/medicine/laureates/1945/fleming-bio.html/ [Accessed 12 March 2010].
2. World Health Organization. WHO | Antimicrobial resistance. [Online]. Available from: http://www.who.int/mediacentre/factsheets/fs194/en/ [Accessed 12 March 2010].
3. Centers for Disease Control. Antibiotic Resistance Questions and Answers. [Online]. Available from: http://www.cdc.gov/getsmart/antibiotic-use/anitbioticresistance-faqs.html#b [Accessed 10 March 2010].
4. Neu, H. The crisis in antibiotic resistance. Science [Online] 1992; 257(5073), pp. 1064-1073. Available from: http://www.sciencemag.org/cgi/content/abstract/sci;257/5073/1064 [Accessed 9 March 2010].
5. National Institutes of Health. MRSA: MedLine Plus. [Online]. Available from: http://www.nlm.nih.gov/medlineplus/mrsa.html [Accessed 4 March 2010].
6. Cohen, M.L. and Tauxe, R.V. Drug-resistant Salmonella in the United States: an epidemiologic perspective. Science [Online] 1986; 234(4779), pp. 964-969. Available from: http://www.sciencemag.org/cgi/content/abstract/234/4779/964 [Accessed 7 March 2010].
7. Mendoza, M. and Mason, M. Norway's MRSA solution. The Spokesman-Review. [Online] 3 January 2010. Available from: http://www.spokesman.com/stories/2010/jan/03/norways-mrsa-solution/ [Accessed 13 March 2010].
8. Marsik, F.J., Parisi, J.T., et al. Transmissible drug resistance of Escherichia coli and Salmonella from humans, animals, and their rural environments. Journal of Infectious Disease [Online] 1975; 132(3), pp. 296-302. Available from: http://www.ncbi.nlm.nih.gov/pubmed/1099148 [Accessed 2 April 2010].
9. Target to halve MRSA cases is met. BBC News. [Online] 18 September 2008. Available from: http://news.bbc.co.uk/2/hi/7622594.stm [Accessed 25 March 2010].
10. Templeton, Sarah-Kate. Dutch doctor: why you are failing on MRSA. The Sunday Times (UK). [Online] 4 February 2007. Available from: http://www.timesonline.co.uk/tol/news/uk/health/article1323962.ece [Accessed 25 March 2010].
11. Novel new products unveiled to help in the fight against hospital infections. Healthcare Equipment and Supplies Magazine. [Online]. 13 April 2010. Available from: http://www.hesmagazine.com/show.php?page=story&id=1805&story=1805 [Accessed 13 April 2010].
12. Aiello, A.E., Marshall, B., et al. Antibacterial cleaning products and drug resistance. Emerging Infectious Disease [Online] 2005; 11(10), pp. 1565-1570. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16318697 [Accessed 23 April 2010].
13. Klevens, R.M., Morrison, M.A., et al. Invasive MRSA infections in the United States. Journal of the American Medical Association. [Online]. 2007; 298(15), pp. 1763-1771. Available from: http://www.cdc.gov/ncidod/dhqp/pdf/ar/InvasiveMRSA_JAMA2007.pdf [Accessed 23 April 2010].
14. http://oceanexplorer.noaa.gov/explorations/04etta/background/antimicrobial/media/antimicrobial_600.jpg
Kevin Berlin is a 3rd-year Molecular and Cell Biology major, concentrating in the study of immunology. After college, he hopes to go on to graduate or professional school in a health-sciences related field. Email: kevinberlin@berkeley.edu
MELBOURNE
Do We Have Conscious Control Over Which Products We Purchase? Emily Raymond
We like to believe that we exert full, conscious control over our behaviour; that every choice we make has been decided upon by evaluating the alternatives and selecting the appropriate option. Conscious processes that guide behaviour, such as decision-making, are those of which we are aware, which require and receive effort, and which are intended and can be controlled. Much research into consumer behaviour, focused on how we choose products and brands, has shown that our behaviour does not always follow this definition. Instead, it seems that other, unconscious processes are at work. Marketers try to exploit this phenomenon, which can greatly affect our behaviour, in order to sell their products, and at times they succeed. However, we can rest assured that when our selections are of high importance to us, we will exert the conscious control necessary to make the right, individual choice.

Unconscious processes are those of which we are not conscious; they do not involve all four characteristics of awareness, exerted effort, intention and control. They are also known as automatic processes, as they occur involuntarily. They can include the perception and storage of information, leading to the formation of implicit attitudes [1]. This may be the perception of environmental cues to which we are completely oblivious, yet which influence our actions and choices [2]. However, this does not mean that we are never aware of our behaviour when these automatic processes are at work; it means that we are unaware of the process involved in determining that behaviour. The discovery of this underlying influence in our minds caught the attention of marketers, who have since attempted to manipulate these processes. This,
Reproduced from [11]
they hoped, would lead to increased brand preference and product purchases. Marketers use different tools to achieve this outcome: where the product is sold, the price of the product, the features of the product and, of course, promotion and advertising of the product. They are trying to persuade
consumers to purchase their brand instead of a competing brand, and over the long term they want the consumer to always purchase their brand over competitors'. Often, consumers create a consideration set for an item, storing a few possible alternative brands in memory so that their search for the right product to satisfy their needs is minimised [3]. A marketer's goal is to place their brand in each consumer's consideration set so that an implicit association can be formed between the specific need that it satisfies and their product. This is done in the hope that, eventually, the choosing of their product for purchase will become automatic. However, there are other ways that marketers can target the unconscious in order to drive brand purchasing.

Another way that our unconscious processes and product marketing can interact was studied by Tom, Nelson, Srzentic and King (2007) [4]. They conducted a study on the mere exposure effect, which predicts that consumers will choose the product that matches their primed motivations or needs. Their results showed that participants who were subliminally shown a video frame of a specific toy gave it the highest preference rating when offered the toy after the video. This rating was higher than in both comparison groups: the normal exposure group, whose participants viewed the toy in the video long enough to perceive it consciously, and the no exposure group, whose participants were not shown the toy in the video at all. Tom, Nelson, Srzentic and King suggested this was because the toy could match the unconscious motivations or needs primed in the video while still maintaining its novelty to the
conscious processes [4]. This suggests that marketers could use subliminal tools to prime needs within the consumer and then present their product as a means to fully satisfy those needs, thus heavily influencing our product choice. Subliminal advertising raises many ethical issues, and has led to the banning of such marketing in Australia. However, marketers can still influence unconscious processes without the use of subliminal means. A study by Shapiro (1999) revealed that incidental exposure to supraliminal advertisements can lead to a similar outcome [5]. Participants in this study were given a magazine, which included articles and advertisements or articles and puzzles. The items were all placed on the same page, with the advertisements having been made by the researchers to avoid any familiarity. The participants were asked to read the highlighted words within the articles within approximately fifteen seconds, so that direct attention to the advertisements was minimised, if not completely avoided. After this task, the participants were shown a catalogue of products and asked which they would choose for themselves. They were also instructed not to include any products that they remembered, or even slightly remembered, from the magazine; a list of these 'remembered items' was also recorded. The results showed a bias in product choice towards the products that had been placed in the advertisements in the magazines, even though the participants could not recall having seen them before. This behaviour suggests that unconscious processes had perceived the products and stored them in memory while the participant was completely unaware of this occurring. This study implies that advertisements need not be subliminal, nor eye-catching; instead, the frequency and location of advertisements is the key to influencing consumer behaviour. Our unconscious seems to perceive stimuli that are present but of which we are not aware.
These unconscious perceptions are then processed and may be used to determine our behaviour.
It is quite apparent that our unconscious processes play a prominent role in influencing our behaviour, or more specifically, our product choices. But to what extent does this underlying facet of our minds control what we purchase? Some studies that have focused on this question have proposed that choices are made automatically when we are under cognitive load [6]. Moreover, Bargh (2002) gives a succinct overview of how the real world comes into play when a consumer makes product choices; he explains that "people have things they need to get done and pressing concerns on their minds" [7]. Bargh goes on to explain that there is a myriad of distractions when a product is purchased, so consumers do not always pay full attention and consider all information when making their choice [7]. It is during these circumstances that unconscious processes could have the greatest influence over our purchasing behaviour. This is because, when there are distractions as described above, our cognitive resources, or conscious processing, are near full capacity. Thus, unconscious processes come into play in order to extend our behavioural limits. This appears to be an evolutionary advantage, as it works to increase our capabilities, but that discussion is outside the scope of this essay.

A study by Shiv and Nowlis (2004) clearly illustrates the way in which automatic processes govern simpler tasks under cognitive load [8]. A group of participants were asked to memorise an eight-digit number while they taste-tested the Lindt brand of chocolate. They were then asked to recall the eight-digit number, after which they could choose one chocolate bar, either Lindt or Godiva. Godiva had been chosen as the other brand of chocolate because it was rated, along with Lindt, as the tastiest chocolate by a pre-test group. A significant number of the participants who had taste-tested Lindt while having to remember the complex number chose the Lindt chocolate after the recall task.
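Whether a split like the one Shiv and Nowlis report is statistically reliable can be checked with a standard two-proportion z-test. The counts below are hypothetical stand-ins, since the raw numbers are not reported here:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test: is the choice rate in group A reliably
    different from the choice rate in group B?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts (NOT from the study): under high cognitive load,
# 40 of 60 participants chose the taste-tested brand; under low load,
# only 25 of 60 did.
z = two_proportion_z(40, 60, 25, 60)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

With these made-up counts the statistic comes out near 2.7, past the conventional 1.96 cutoff; the published study of course reports its own analysis.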
Reproduced from [11]

This implies that while
consumers are under cognitive load, they rely on affective input, rather than informational input, to make choices. Affective input is associated with automatic processes. Thus, while our conscious capacities are being used elsewhere, unconscious processes influence the completion of less significant tasks. It has been shown that not only are unconscious processes constantly underlying our behavioural choices, but sometimes these processes are the main influence. While this may be a cause for concern for some, it can also be seen that these unconscious processes occur in order to extend our capabilities when under cognitive load, so that less important tasks may still be completed. Therefore, when an important decision arises, we are able to devote our cognitive capacity to reaching a solution while still performing the less significant, day-to-day tasks.

While our unconscious processes may determine the outcome of certain insignificant tasks, our consciousness determines which tasks are insignificant. Ratchford (1982) explains the difference between significant and insignificant tasks as a weighing up of costs and benefits [9]. That is, if the item to be purchased is valued highly, or has the potential for great losses, search time, and thus conscious processing of information and product evaluation, increases. So, when an item is more important to the consumer, they will devote more cognitive capacity to determining the right product choice. The importance or relevance of a product to the consumer is known as involvement; a higher involvement product warrants greater conscious processing. This was seen in a study by Petty, Cacioppo and Schumann (1983), who found that for a high involvement product, high brand preference was generated by including strong arguments as to why the product was superior [10]. This was in contrast to low involvement products, where a greater brand preference was generated by celebrity endorsers.
Celebrity endorsers and other affective cues are thought to elicit automatic, positive or negative attitudes. Therefore, assuming the affective cues elicit a positive attitude, the consumer is rapidly persuaded to purchase the low involvement product without needing to evaluate other options. The relationship between the type of advertisement persuasion and involvement is known as the Elaboration Likelihood Model [10]. This model shows that the greater the importance of the product, the deeper the consumer's processing of information, costs and benefits. Conversely, it is for low involvement products that affective cues, such as celebrity endorsers, aim to spark our implicit attitudes, and thus our unconscious processes, in order to result in product purchasing.

Our unconscious processes underpin our behaviour and at times have been shown to exert more influence than our conscious processes.

Unconscious processes are automatic, involuntary and do not require intention, effort or awareness. This feature of our minds has prompted much research, particularly in the field of marketing. Our unconscious can be manipulated with subliminal advertising; however, this carries many ethical implications. It has therefore been significant to find that the same effect can result from supraliminal advertising, as long as the consumer does not direct attention to the advertisement. This suggests that frequent and inconspicuous advertising, rather than eye-catching advertisements, may elicit the greatest brand preference. It has been shown that the unconscious processing of stimuli mostly occurs when consumers are under cognitive load, a condition that closely mimics a busy shopping centre. This creates an even greater incentive for marketers to exploit the phenomenon. However, consumers can be confident that they ultimately control what they purchase when those purchases are of higher involvement and pose a certain level of costs and benefits to the consumer. It is important that marketers take these variables into account, namely personal involvement levels, type of advertising and environmental distracters, when targeting advertising campaigns at different classes of consumers. Overall, we exert conscious control over what we purchase when it matters to us, whereas we let our unconscious processes guide our decisions when they are of less importance, allowing us to focus on the more important tasks at hand and, fortunately, making our busy lives more manageable.

Emily Raymond began her Commerce/Science combined degree in 2005 and has majored in psychology and finance. While she will be moving into banking next year with the National Australia Bank, she hopes to continue her psychology studies part time.

References
1. Yoo, C. Y. (2008). Unconscious processing of web advertising: Effects on implicit memory, attitude toward the brand and consideration set. Journal of Interactive Marketing, 22(2), 2-18.
2. Berger, J. & Fitzsimons, G. (2008). Dogs on the street, Pumas on your feet: How cues in the environment influence product evaluation and choice. Journal of Marketing Research, 45, 1-14.
3. Pride, W., Ferrell, O., Elliott, G., Rundle-Thiele, S., Waller, D. & Paladino, A. (2008). Marketing Core Concepts & Applications (2nd ed.). Milton, Qld, Australia: John Wiley & Sons Australia Ltd.
4. Tom, G., Nelson, C., Srzentic, C. & King, R. (2007). Mere exposure and the endowment effect on consumer decision making. The Journal of Psychology, 141(2), 117-125.
5. Shapiro, S. (1999). When an ad's influence is beyond our conscious control: Perceptual and conceptual fluency effects caused by incidental ad exposure. Journal of Consumer Research, 26, 16-36.
6. Friese, M., Wänke, M. & Plessner, H. (2006). Implicit consumer preferences and their influence on product choice. Psychology and Marketing, 23(9), 727-740.
7. Bargh, J. A. (2002). Losing consciousness: Automatic influences on consumer judgement, behaviour and motivation. Journal of Consumer Research, 29, 280-285.
8. Shiv, B. & Nowlis, S. M. (2004). The effect of distractions while tasting a food sample: The interplay of informational and affective components in subsequent choice. Journal of Consumer Research, 31, 599-608.
9. Ratchford, B. T. (1982). Cost-benefit models for explaining consumer choice and information seeking behaviour. Management Science, 28(2), 197-212.
10. Petty, R. E., Cacioppo, J. T. & Schumann, D. (1983). Central and peripheral routes to advertising effectiveness: The moderating role of involvement. Journal of Consumer Research, 10, 135-146.
11. http://www.everystockphoto.com/photo.php?imageId=954817
12. http://www.sxc.hu/photo/167544
© 2010, The Triple Helix, Inc. All rights reserved.
CAMBRIDGE
Zero: The Riddle of Riddles Ritika Sood
The concept and implications of the number zero have dominated the world of mathematics for centuries, causing many of the world's greatest mathematicians to suffer from insomnia. The world experienced a paradigm shift when this concept of nothingness was 'discovered' and given a definition. The number zero is inherently linked to our everyday lives. In economics, zero represents a depleted bank account. In the recent economic crisis, the fear and,
The fear of [zero] led many corporations to readdress their policies.

in some cases, occurrence of a depleted bank account led many a corporation to readdress its policies. A near-zero rate is also what the Federal Reserve announced it would start charging, or rather, barely charging, commercial banks for short-term loans in December 2008, in an attempt to defibrillate the economy [1]. The mystery of zero is also present in the scientific world, where absolute zero defines the theoretical temperature characterised by the complete absence of heat. Zero is also the proposed atomic number of the theoretical element tetraneutron, a hypothesized stable cluster of four neutrons whose existence is not supported by the laws of nuclear forces [2]. In the past, zero was analyzed as a nothing that is an actual something, as the riddle of riddles. To fully appreciate the significance of the number zero, "that O without a figure", as Shakespeare called it, requires an understanding of its discovery, the progression of its presence and the resistance it encountered throughout history [3].

The Ancient Greeks were philosophically unsure about the concept of nothingness. Near the end of the eighth century BC, the notion of zero was worked into the story of Odysseus and Polyphemos, the Cyclops [4]. However, there is no trace of zero as a number in the history of Homeric or Classical Greece. It is fascinating that the Greeks, to whom many scientific, mathematical and artistic discoveries can be attributed, were unable to conceptualize zero. The reason is that the majority of Greek mathematical achievements were based on geometry. Greek mathematicians did not need to name their numbers, as they worked with numbers as lengths of lines. Furthermore, the lack of positional notation in Greek mathematics meant that the number zero and its mathematical properties remained undiscovered [5].
Despite its potential to extend the empire of numbers, zero was not treated as a number itself until the 5th century AD
in India. Prior to this, it was no more a number than a comma is a letter. This raises the question: what did it take for this immigrant to gain citizenship in the Republic of Numbers? Unlike ideas, trends and fashions, which have undergone radical changes throughout the ages, the Republic of Numbers is far more conservative: reluctant to accept new members and adamant about never letting them go once sworn in. Take irrational numbers as an example: 2,500 years after the proof of their existence, allegedly by Hippasus, we cannot do without them, although the sense in which they exist as numbers is still debated.

Going back in history, the use of zero can be found in Babylonian and Mayan mathematics, but the discovery of the use of zero as a number is attributed to Indian mathematicians. For over 1,000 years, the Babylonians had a place-value number system that did not include zero as an empty place indicator. This is somewhat surprising, as one would consider this an important feature. Babylonian mathematicians would not distinguish between 5107 and 517; rather, the context would show which number was intended. It was not until around 400 BC that the Babylonians began using two wedge symbols where we would now put a zero, to indicate which was meant [5]. The Mayans developed a place-value number system with a zero, which they denoted by a shell symbol. Interestingly, the use of zero in Mayan mathematics can be traced back to before the introduction of their place-value number system, a remarkable achievement [6]. However, their concepts did not find their way into other societies.
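The value of an empty-place symbol in positional notation can be sketched in a few lines of code. This is a hypothetical illustration, not part of the original article; the helper `value` and the base-10 framing are assumptions made for clarity:

```python
def value(digits, base=10):
    """Interpret a sequence of digit values in positional notation."""
    n = 0
    for d in digits:
        n = n * base + d  # each step shifts the earlier digits one place left
    return n

# With a placeholder for the empty tens place, 5107 and 517 are
# written as distinct digit sequences:
print(value([5, 1, 0, 7]))  # 5107
print(value([5, 1, 7]))     # 517

# Without a zero symbol, both numbers reduce to the same three marks,
# 5, 1 and 7, and only context can tell the reader which was meant;
# this is precisely the ambiguity of the early Babylonian system.
```

The same helper applies to the Babylonians' base-60 system with `base=60`, where the missing placeholder was even more consequential, since each empty place changes the value by a factor of sixty.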
If one were able to divide by zero, then all numbers would be the same.

The birth of the concept of zero as a number, and not merely a symbol for separation, can be attributed to Indian mathematicians. The very word 'zero' finds its etymological root in the Sanskrit word śūnya, meaning 'void' or 'empty' [7]. The first recorded use of zero as a number dates back to 876 AD in India. A stone tablet in Gwalior, a town 400 km south of the capital, carries an inscription that historians have accepted as the first record of the number. The inscription presents the dimensions of a garden, 187 by 270 hastas (a traditional Indian unit of length, measured from the elbow to the tip of the middle finger, approximately 18 inches), which was being grown to produce enough flowers to provide 50 garlands per day for the local temple. This information was detailed on the tablet, and both of the numbers 270 and 50 were inscribed very similarly to how they appear on this page, the only difference being that the 0 was slightly smaller and slightly raised [5].

For zero to be held in the same regard as other numbers, knowledge of how to add, subtract, multiply and divide with it was required, though these operations are now taken for granted in simple arithmetic. The Indian mathematicians Mahavira, Bhaskara and Brahmagupta set out to undertake this very task. They agreed that a number multiplied by zero is zero, and that a number remains unchanged when it is diminished or augmented by zero. The issue that caused disagreement among them, however, was the division of a number by zero [5]. Experience and common sense tell us that two different numbers do not hold the same value; 5 is not the same as 15, for example. Yet if one were able to divide by zero, all numbers would be the same. Let us apply the method of proof by contradiction. Any number multiplied by zero is zero; for example, 5 × 0 = 0 and 15 × 0 = 0. Hence 5 × 0 = 15 × 0. If division by zero were possible, this would yield 5 × 0/0 = 15 × 0/0, and the zeroes would cancel, leaving us with the result that 5 equals 15! Hence, in elementary arithmetic, dividing by zero gives an undefined value.

The rules governing the use of zero as a number in its own right (with the exception of division by zero) appeared for the first time in Brahmagupta's book The Opening of the Universe, written in c. 628 [8]. The brilliant work of the Indian mathematicians was transmitted west to Islamic and Arabic mathematicians, as well as east to China. The Italian mathematician Fibonacci was one of the main figures who brought the Indian numerals to Europe [5]. In his book Liber Abaci, published in 1202, he described the nine Indian symbols along with the sign 0. Significantly, Fibonacci did not treat 0 in the same way as the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9: in his work he speaks of the 'sign' zero, while he refers to the other symbols as numbers. Despite the incredible achievements of Indian mathematicians and subsequent work by Arabic and Islamic mathematicians, Fibonacci was unable to reach the same level of sophistication in his treatment of zero. While his book had a profound effect on European thought, zero was not widely used in Europe for a long time. One example of the resistance zero encountered in European mathematics is that in the 1500s the Italian mathematician Cardan solved cubic and quartic equations without using zero, although his work would have been far easier had he done so. In fact, it was only in the 1600s that zero came into widespread use.

In spite of being a well-established concept, zero is still a source of difficulty. On 1 January 2000, when people around the world celebrated the new millennium, they were in fact only celebrating the completion of 1999 years: when the calendar was established, no year zero was accounted for. This is why the third millennium and the 21st century began on 1 January 2001, something that many have had difficulty understanding. After exploring the complex history of the number zero, the fact that it is still causing confusion is hardly surprising. Let us return to the present and the Federal Reserve's strategy of charging a near-zero rate for short-term loans, which triggers the question: can 'nothing' save us?

After the recent economic turmoil, zero seems to be the option to settle for in financial terms. When you consider the alternative… it sure beats going negative. Reproduced from [9]

Ritika Sood is a second-year student studying Chemical Engineering at King's College, Cambridge University.

References
1. Isidore C. Fed: Economy better, rates to stay low. CNNMoney.com [Online]. 2010 Apr 28 [cited 2010 Aug 13]. Available from: http://money.cnn.com/2010/04/28/news/economy/fed_decision/index.htm
2. Samuel E. Ghost in the atom. New Scientist. 2002 Oct 26;2366:30-3.
3. Shakespeare W. King Lear, Act 1, Scene 4.
4. Homer. The Odyssey, Book IX: The Tale of Odysseus: Lotus-Eaters, Cyclops. Lines 360-409. Translated by Murray AT.
5. O'Connor JJ, Robertson EF. A history of zero [Online]. University of St Andrews; 2000 Nov [cited 2010 Aug 11]. Available from: http://www-history.mcs.st-and.ac.uk/HistTopics/Zero.html#s31
6. O'Connor JJ, Robertson EF. Mayan mathematics [Online]. University of St Andrews; 2000 Nov [cited 2010 Aug 13]. Available from: http://www-history.mcs.st-and.ac.uk/HistTopics/Mayan_mathematics.html
7. Ciment J. Zero [Online]. 2007 [cited 2010 Aug 13]. Available from: www.encyclopediawebsite.com/disc/entries/zero.doc
8. O'Connor JJ, Robertson EF. Brahmagupta [Online]. University of St Andrews; 2000 Nov [cited 2010 Aug 13]. Available from: http://www-history.mcs.st-and.ac.uk/Biographies/Brahmagupta.html
9. PD, Wikipedia. http://en.wikipedia.org/wiki/File:Newton-WilliamBlake.jpg
NUS
The Great Disjoint of Language and Intelligence Koh Wanzi
A stunning variety of languages has evolved in the world since the birth of civilization, with some forming a unique cornerstone of many cultures. To date, there has been no accurate census of the exact number of languages in the world. Street surveys throw up numbers in the several hundreds, while the Ethnologue organization, generally accepted to have the most extensive list thus far, catalogues an astounding 6,809 distinct languages [1]. The study of the acquisition and mastery of any language offers fascinating insights into our neural circuitry, specific regions of the brain and our cognitive processes. This has unfortunately led to the equating of linguistic ability with intelligence; this article will try to dispel that notion, setting the two up instead as independent domains. It also sets aside the diversity of languages, viewing them in a single unifying light for their role in cognition. Furthermore, it proposes that education policy be updated in line with new theories of cognition, in the interests of students who might be unfairly penalized for simply lacking linguistic flair.

Linguistic Theories: Mould or Cloak?

Of great interest is the exact nature of the relationship between language and thought. In the field of linguistic theory, most theories can be classified between two general categories at opposite ends of a spectrum, commonly referred to as "mould theories" and "cloak theories". Mould theories hypothesize that language is "a mould in terms of which thought categories are cast", while cloak theories hold that "language is a cloak conforming to the customary categories of thought of its speakers" [2]. The Sapir-Whorf hypothesis, proposed by American linguists Edward Sapir and Benjamin Lee Whorf, is a mould theory. It consists of two closely associated concepts: linguistic determinism and linguistic relativity.
Linguistic determinism holds that our thoughts are determined and constrained entirely by language, while linguistic relativity proposes that different languages cause people to think about and perceive the world differently. Experiments conducted with bilingual Japanese women living in America have provided interesting evidence. These women had American husbands and spoke Japanese only when they met each other. Each met twice with a bilingual Japanese interviewer; the first session was conducted in Japanese, the second in English. Though the questions asked were exactly the same both times, the answers varied and seemed to depend on the language used, rather than being the same answers rendered in different languages, as might be expected. In a particularly striking example, one woman said in Japanese that when her wishes conflicted with those of her family, it was "a time of great unhappiness". However, her response to the same question in English was, "[I] do what I want" [3].
Proponents of the Sapir-Whorf hypothesis argue that this disparity can be accounted for by linguistic determinism and relativity, whereby the women's thoughts and perception of the world depended on the language spoken. However, these results have severe limitations, as the experiment cannot rule out countless confounding factors in the period between interviews that could have shaped the women's views and caused the differing responses.

In sharp juxtaposition with the Sapir-Whorf advocates are the cloak theorists; their argument of "universalism" is the polar opposite of the Whorfian conjecture. It is best illustrated by the Neo-Classical idea that language is the "dress of thought". This theory has at its core the assumption that the same thought can be expressed exactly in a variety of ways. It should therefore be theoretically possible to express an idea in one language and then precisely translate it into any other, putting paid to the phrase "lost in translation". In contrast, the Whorfian hypothesis emphasizes the difficulty of translation between languages, since some languages have words that have no exact single-word translation in another language. For example, the Portuguese "geram" means "unbearably cute", while the German word "schadenfreude" means "pleasure at the misfortune of others". Whorf argued that this difference in translations was evidence that speakers of different languages viewed the world through different prisms carved by their native language. For instance, in a translation from English to Apache, the sentence "he invites people to a feast" translates roughly as "he, or somebody, goes for the eaters of cooked food" [3]. While the Sapir-Whorf hypothesis might seem to invalidate the cloak theory of universalism through these counter-examples, it is guilty of several flaws.
The problem of translating directly from one language to another might seem to strengthen the argument, but upon closer inspection it actually undermines the theory of linguistic determinism. When non-speakers of German come across the word "weltschmerz", which denotes the world-weariness felt on recognizing the disparity between reality and an idealized world, they readily identify with the feeling. Their inability to speak German does not impede them from recognizing the feeling the word conveys, as it should if the Sapir-Whorf hypothesis held true. This is testament to the existence of a system of rich mental expression that transcends the boundaries of language. The idea that language is only a subset of our vast mental vocabulary forms the cornerstone of the book The Deeper Meaning of Liff by Douglas Adams. The book contains examples of unconventional words, for example "elecelleration": the "mistaken notion that the more often, or harder, you press an elevator button, the faster it will arrive" [4]. There exist other such actions or emotions that are as yet nameless, but the fact that they are not in our
vocabulary does not preclude our noticing and feeling them. A more nuanced view than the extreme versions of the mould and cloak theories is thus required.

"Mentalese" as a Common Mental Language

A more moderate view of the Sapir-Whorf hypothesis is the first step towards a new understanding of the relationship between language and thought. Instead of rigidly assuming that thinking is restricted to the straitjacket of language, it is important to recognize the potential for language to influence, rather than determine, thinking. In his book The Language Instinct, Steven Pinker extends the idea of a rich mental world that language can never entirely encompass. Pinker proposes a form of mental language that he terms "mentalese", a kind of internal language we all possess, which we convey to others using spoken language as a vessel. Pinker references cases of "languageless" adults: deaf people who, by force of circumstance or otherwise, have been isolated from the verbal world. This is where the extreme form of the Sapir-Whorf hypothesis is refuted. If thought were confined by language, it would make sense to conclude that the reverse is also true: that without language there can be no thought. However, these deaf adults display the ability to process and learn things, and are not impeded from "thinking" in the cognitive sense of the word [5]. It could thus be said that language serves as a vessel for "mentalese", albeit a dynamic vessel whose potential to influence thought cannot be entirely discounted.

Implications for Education Policy

A particular medical condition provides a striking illustration of how separate language and intelligence are in the brain. In children with Williams syndrome, which is accompanied by varying degrees of mental retardation, early medical observers noted the "friendly and loquacious" nature of their subjects and their "unusual command of language [in speech]".
Despite this, a vast majority of adults with Williams syndrome possess only "rudimentary skills in reading, writing, and arithmetic". In a study conducted at The Salk Institute for Biological Studies and the UCSD School of Medicine, researchers carried out differential assessment of specific domains of function to isolate language from cognitive performance. Subjects with Williams syndrome were contrasted with subjects with Down syndrome matched for age, sex, and mental function on IQ measures. The study noted the equivalent cognitive impairment of subjects with both conditions, stating that they are "markedly impaired on a range of purely cognitive tasks such as conservation, concept formation, and problem solving". However, against this background of general cognitive impairment, Williams syndrome children differed from their Down syndrome counterparts in their ability to express language. The study cites the "spontaneous and fluent speech" of an 18-year-old Williams syndrome adolescent with an IQ of 49. She was said to show "great facility with language, being able even to weave vivid stories of imaginary events and compose lyrics to a love song". Yet in another stark example of the "unusual dissociation of language from other cognitive functions", she has the academic skills of a first-grader and requires a baby-sitter for supervision [6].

The evidenced lack of a correlation between language ability and intelligence also has serious practical implications for education. Singapore's Minister Mentor Lee Kuan Yew said recently that he had come to see the error in the Republic's implementation of its bilingual education policy, which requires students to have almost equal proficiency in both their first language and mother tongue. Because, he said, it is not possible to master two languages at the same level, Singapore's method of teaching Mandarin to English-speaking children using Mandarin itself turned generations away from the language [7]. It is important for education policy makers to realize the lack of a link between language mastery and intelligence. If this gap in association is not recognized, children who possess mathematical or scientific aptitude but lack linguistic flair might be unfairly marginalized in the education system. For example, in the GCE 'O' level examination at the end of secondary education in Singapore, the final score is computed from the grade of one language subject and five others from distinct subject groups. In this system, even if a student scores a top grade of A1 in the other five subjects, a poor grade in the languages could still pull the overall score down enough to deny the student entry into top schools. While this observation in no way seeks to undermine the importance of an all-rounded education, language research could someday persuade legislators to tweak the system so that it is more accommodating to students of varying abilities.

This disproportionate language ability despite obvious mental impediments ties in with Pinker's theory of "mentalese" as an entirely separate domain of thought and cognition. When this rich mental world is hidden by the cloak of expression that we call language, it can be underappreciated. Simply put, language can act as a vessel, albeit an unsatisfactory one, that holds and transports as best it can a vivid mental landscape containing an infinite number of emotions and concepts, countless of which have yet to be named. This mental world is larger than language itself. Intelligence and cognitive abilities remain similarly in a separate domain, perhaps tied intricately to "mentalese", the native language of the brain. As we grow more knowledgeable about these findings, it will be important for education policy to evolve alongside them. Furthermore, this could be the first step in developing more accurate measures of cognitive ability.

Koh Wanzi (u0903551@nus.edu.sg) is a second-year student studying Life Sciences and English Literature at the National University of Singapore.

References
1. Linguistic Society of America [homepage on the Internet]. Washington, DC: The Society [cited 2010 Mar 2]. Available from: http://www.lsadc.org/info/ling-faqshowmany.cfm
2. Chandler D. The act of writing: a media theory approach. University of Wales; 1995.
3. Alchin N. Theory of knowledge. John Murray; 2002.
4. Adams D, Lloyd J. The deeper meaning of liff. 2nd ed. Pan Books; 1992.
5. Pinker S. The language instinct. London: Penguin Books; 1994.
6. Bellugi U, Wang PP, Jernigan TJ. Williams syndrome: an unusual neuropsychological profile. In: Atypical cognitive deficits in developmental disorders: implications for brain function. Hillsdale, NJ: Lawrence Erlbaum Associates; 1994.
7. Au Yong J. Bilingual policy difficult. The Straits Times. 2009 Nov 3 [cited 2010 Feb 16]. Available from: http://www.straitstimes.com/Breaking%2BNews/Singapore/Story/STIStory_449691.html
UCSD
Making Sense of Our Taste Buds Angela Yu
Bite into a chocolatey, caramel Twix bar and what do you taste? A chewy, sweet and creamy filling, blended together to give the irresistible satisfaction of chocolate. Flavor is a complex mixture of sensory input composed of taste, smell and the tactile sensation of food as it is crushed in our mouths. Scientists have traditionally described the perception of flavor in terms of four qualities: saltiness, sourness, sweetness and bitterness. Taste has been crucial to our evolutionary survival, and with the recent discovery of two additional qualities, umami and fat, our sense of taste is more developed than previously imagined. The umami taste is due to the detection of glutamate, a natural amino acid commonly found in protein-rich foods such as meat, cheese, broth and mushrooms. This savory umami taste has been widely exploited in the food industry in the form of flavor enhancers, the best known being monosodium glutamate (MSG). Ongoing studies are still investigating whether the ability of our receptor cells to taste umami and fats serves as a primal survival instinct, one that attracted our ancestors to the protein-abundant, high-calorie food essential for survival.

While the tongue guides food between the teeth to be cut into digestible pieces, it also acts as the peripheral sense organ best known for its role in the sensation of taste. The tongue not only detects taste but also senses tactile, thermal and even painful stimuli, like spices, that give food its flavor. The bumpy structures that cover the tongue are papillae, goblet-shaped bumps that help create friction between the tongue and food. They are often mistaken for taste buds, which are smaller structures tucked away in the folds between the papillae. Each taste bud is made up of basal and supporting cells that help maintain its 50-100 gustatory receptor cells. These specialized receptor cells are stimulated by the chemical makeup of the foods we eat.
When a stimulus, such as a food containing carbohydrates, activates a receptor on a gustatory cell, neurons are activated to send an electrical impulse to the gustatory region of the cerebral cortex, which the brain then interprets as taste. On average, the human tongue carries 2,000-8,000 taste buds hard at work aiding the tasting process. Aside from detecting gustatory stimuli, the tongue also perceives temperature and complex tactile sensations such as a food's texture, oiliness, chewiness, viscosity and density. If you like spicy food, you should be familiar with the sensation of heat and spiciness at the same time. Spicy foods, like peppers, feel "hot" because their active ingredient activates particular thermal nociceptors, called TRP receptors, that signal elevations in temperature [1]. Responses like those of the TRP receptors reflect the evolutionary adaptation of taste in humans.

Reproduced from [5]

Of course, our cave-dwelling ancestors did not crave Milky Way bars, and most would probably have winced at a sip of dark brewed coffee. Taste has evolved in humans over time as a result of technological and cultural events. Our cravings and food preferences started as a means of survival and stem from physiological traits that have evolved with our species. Pleasant tastes drive our appetites, and we crave certain foods for their flavor because our bodies need the molecules they contain. Sweets are rich in energy-rich carbohydrates, while salts balance our body fluids and carry nutrition throughout the body. Unpleasant tastes can be informative too: bitterness and acidity warn of toxins and spoiled foods to avoid. However, our palates have evolved to include flavors previously avoided. The ability to preserve foods through curing and brining (e.g., kimchi) has changed our unwillingness to eat sour tastes. Humans adapted to the generally unfavorable
Taste has evolved in humans over time as a result of technological and cultural events. tastes of alcohol, energy drinks and coffee (fermented, bitter and astringent beverages), because the effects were pleasurable or necessary. And people raised with a diet of high-spiced foods have a much higher tolerance for it than those who are unaccustomed. Over time, many of the body’s natural defense mechanisms toward taste have been manipulated in the name of flavor. Until recently, scientists have accepted four basic tastes
– sweet, salty, sour, and bitter – as the building blocks of other tastes. Each primary taste triggers a particular gustatory receptor. These basic tastes went unchallenged for years, perhaps because of their familiarity. In the early 1900s, however, the Japanese scientist Kikunae Ikeda detected another taste common in the savory seaweed used in Japanese cooking. Ikeda isolated the responsible compound and discovered it to be glutamic acid. This amino acid was found to have its own gustatory receptor on the tongue, and Ikeda and his team named this fifth taste umami, a Japanese term for "delicious, savory taste." Researchers continued to study umami throughout the 20th century. An important breakthrough came in 1985, when scientists trying to mimic the controversial flavor-enhancing substance MSG failed to replicate its taste with any combination of the basic four - sweet, salty, sour, and bitter [2]. Because Ikeda's study on taste was not translated into English until 2002, and because glutamic acid is less common in Western food, umami has only recently become known as the newest addition to the taste family.

[Image reproduced from [6]]

The discovery of umami has opened doors for other potential new tastes. French researchers have recently identified a potential gustatory receptor for fat, CD36. They reported that "mice have a receptor in their tongues that can sense fat, and the presence of that receptor seems to drive the mice to crave fat in their diets" [3]. The work was based on research from Nada A. Abumrad, PhD, at the Washington University School of Medicine in St. Louis. "Fat sensing has been very controversial," Abumrad says. "It once was thought that we could sense five different tastes: sweet, salty, sour, bitter and umami.
There was some indirect evidence that the tongue might be able to identify fat too, but many scientists thought that involved sensation of texture more than the actual taste of fat." The CD36 receptor protein sits on the surface of cells and is distributed in many tissues, including "fat cells, the digestive tract, heart tissue, skeletal muscle tissue and, not surprisingly, the tongue" [3]. Several scientists have proposed that people not only sense the texture of fat but also have fatty acid receptors that lead them to prefer foods containing high levels of fat.

In mouse experiments conducted by researchers at the University of Bourgogne in Dijon, rodents were fed two solutions: one laced with fat and the other containing a gummy, fat-free substance that mimicked the feel of fat in the mouth. Normal mice preferred the fatty solution, but mice genetically engineered to lack the CD36 receptor protein showed no such preference. In further experiments in laboratory rats, the scientists found that removing the CD36 gene kept the animals' intestines from secreting the digestive juices necessary to digest fat [4].

This discovery suggests that our bodies' natural taste preferences can be manipulated, and targeting the CD36 receptor protein may offer a new approach to an obesity problem that has reached epidemic proportions globally. Scientists are eager to study taste receptors as a possible factor in obesity; one hypothesis is that the amount of CD36 receptor in our systems helps regulate our cravings for fat. The goal now is to translate these findings from rodents into humans, in whom variations in the CD36 gene are common. Simply shutting off the receptor, as researchers did with the genetically engineered mice, is not ideal, because the protein has a number of vital functions in the body. With further advances, however, scientists might be able to design artificial fats tailored to fit the taste receptor and satiate cravings.

Just as flavor is more than taste, taste is more than a genetic impulse. People's food preferences and eating habits are largely shaped by where they grew up and even by what their mothers ate while pregnant. Some people have an easy time refusing fatty foods, while others find it much harder. Rather than relying on future genetic therapies, it remains important to maintain a healthy, balanced diet and regular exercise. Regardless of our food desires, the good news is that we can still override our taste receptors.
Next time you reach for that Twix bar, just say no and thank your good sense for keeping those taste buds at bay.
References
1. Bear M, Connors B, Paradiso M. Neuroscience: Exploring the Brain. 3rd ed. Baltimore (MD): Lippincott Williams and Wilkins; 2001.
2. Iwasaki K, Kasahara T, Sato M. Gustatory effectiveness of amino acids in mice: behavioral and neurophysiological studies. Physiol Behav 1985;34:531-542.
3. Aitman T, Glazier A, Wallace C, Cooper L, Norsworthy P, Wahid F, et al. Identification of Cd36 as an insulin-resistance gene causing defective fatty acid and glucose metabolism in hypertensive rats. Nature Genetics 1999 [cited Jun 5];21:76-83. Available from: http://www.nature.com/ng/journal/v21/n1/full/ng0199_76.html
4. Laugerette F, Passilly-Degrace P, Patris B, Niot I, Febbraio M, Montmayeur JP, Besnard P. CD36 involvement in orosensory detection of dietary lipids, spontaneous fat preference, and digestive secretions. Journal of Clinical Investigation 2005 [cited Aug 6];115:3177-3184. Available from: http://www.jci.org/articles/view/25299
5. http://www.nlm.nih.gov/medlineplus/news/fullstory_102120.html
6. http://www.nps.ars.usda.gov/images/docs/769_857/pizza1.jpg
Angela Yu is a fifth-year at the University of California, San Diego, majoring in Biochemistry and Cell Biology. She is interested in craniofacial development and oral health and has written several articles on craniofacial disorders and conditions. Angela plans to pursue a career in dentistry.
© 2010, The Triple Helix, Inc. All rights reserved.
ACKNOWLEDGMENTS
The Triple Helix at the University of Chicago would sincerely like to thank the following groups and individuals for their generous and continued support:
University of Chicago Annual Allocations
Student Government Finance Committee
Bill Michel, Assistant Vice President for Student Life and Associate Dean of the College
Arthur Lundberg, Student Activities Resource Coordinator
The Biological Sciences Division
The Physical Sciences Department
The Social Sciences Department
All our amazing Faculty Review Board Members: Timothy Sentongo, Charles Kevin Boyce, Dario Maestripieri, Stephen Pruett-Jones, Trevor Price, Michael LaBarbera, Benjamin Glick, Shona Vas, David Glick, Howard Nusbaum
If you are interested in contributing your support to The Triple Helix's mission, whether financially or otherwise, please feel free to visit our website at http://uchicago.thetriplehelix.org. © 2010 The Triple Helix, Inc. All rights reserved. The Triple Helix at the University of Chicago is an independent chapter of The Triple Helix, Inc., an educational 501(c)(3) non-profit corporation. The Triple Helix at the University of Chicago is published once per semester and is available free of charge. Its sponsors, advisors, and the University of Chicago are not responsible for its contents. The views expressed in this journal are solely those of the respective authors.
Business and Marketing: Interface with corporate and academic sponsors, negotiate advertising and cross-promotion deals, and help The Triple Helix expand its horizons across the world!
Leadership: Organize, motivate, and work with staff on four continents to build interchapter alliances and hold international conferences, events and symposia. More than a club.
Innovation: Have a great idea? Something different and groundbreaking? Tell us. With a talented team and a work ethic that values corporate efficiency, anyone can bring big ideas to life within the TTH meritocracy.
Literary and Production: Lend your voice and offer your analysis of today's most pressing issues in science, society, and law. Work with an international community of writers and be published internationally.
A Global Network: Interact with high-achieving students at top universities across the world— engage diverse points of view in intellectual discussion and debate.
Science Policy: Bring your creativity to our newest division, reaching out to students and the community at large with events, workshops, and initiatives that confront today's hardest and most fascinating questions about science in society.
For more information and to apply, visit www.thetriplehelix.org. Come join us.