HURJ Volume 13 - Spring 2011


hurj spring 2011: issue 13

HURJ Hopkins Undergraduate Research Journal

Spring 2011 | Issue 13


hurj’s editorial board, 2010-2011

Editors-in-Chief, Content: Johnson Ukken ‘11, Leela Chakravarti ‘12
Editor-in-Chief, Layout: Paige Robson
Content Editors: Mary Han, Isaac Jilbert, Andi Shahu, Michaela Vaporis
Layout Editors: Kelly Chuang, Sanjit Datta, Edward Kim, Lay Kodama, Sydney Resnik
Chief Copy Editor: Haley Deutsch
Copy Editors: Paul Grossinger, Kiran Parasher, Jessica Stuyvenberg
PR/Advertising: Javaneh Jabbari

hurj’s writing staff: Mark Brennan, Inga Brown, Leela Chakravarti, Elizabeth DeMeo, Robert Dilley, Jean Fan, Isaac Jilbert,

about hurj:

The Hopkins Undergraduate Research Journal provides undergraduates with a valuable resource for learning about research being done by their peers and about interesting current issues. The journal comprises five sections: a main focus topic, spotlights, and current research in engineering, humanities, and science. Students are highly encouraged to submit their original work.

disclaimer: The views expressed in this publication are those of the authors and do not constitute the opinion of the Hopkins Undergraduate Research Journal.


Calvin Price, Anisha Singh, Izzy Taylor, Janelle Wen Hui Teng, Andy Ang Tu, Marshall Weinstein, Noah Young

contact us:
Hopkins Undergraduate Research Journal
Mattin Center, Suite 210
3400 N Charles St
Baltimore, MD 21218
hurj@jhu.edu
http://www.jhu.edu/hurj



a letter from the editors


We live in a world of data. Companies like Google embody the spirit of the time, one increasingly aware of and dependent on the collection and distribution of all kinds of information. Few fields have benefited from this machinery of data collection as greatly as genetics, most visibly in the completion of the Human Genome Project. Yet that project is simply one, if perhaps the most prominent, in a series of successful endeavors to understand our biology at its most fundamental levels. The true test of all this genetic data will come when it is translated into meaningful medical treatments. At the same time, the information has reached previously unknown levels of intimacy, as the chemical basis of any individual can now be determined fairly quickly. The impact of this kind of knowledge goes far beyond the laboratory, raising crucial ethical questions even as it promises remarkable medical cures.

This issue of the Hopkins Undergraduate Research Journal tackles these questions head on with five focus articles that explore new research in the field and the questions it raises. One piece discusses the latest research in epigenetics, the kinds of therapy it offers, and the obstacles facing gene therapy. Another explores the personal genomic sequencing options available today, while a third addresses the ethical questions a couple might face in considering conceiving a genetically selected child to save the ailing health of its elder sibling. The fifth article analyzes genetics at the intersection of science and policy and calls for new regulation to ensure that personal freedoms do not become collateral damage of unchecked discovery. Humanities and science research is also presented, highlighting some of the most interesting work undertaken by Hopkins undergraduates.
We would like to thank all the students who have contributed their research to this thirteenth issue of HURJ. It is their dedication to their respective subjects that creates the thoughtful work we are privileged to publish. Thanks also to our excellent staff, whose care and guidance brought each of these articles from proposal to print.

Regards,

Leela Chakravarti Editor-in-Chief, Content

Paige Robson Editor-in-Chief, Layout



table of contents

spring 2011 focus: the art and science of education
15  Learning with Video Games (Marshall Weinstein)
21  The Neurobiology of Learning: Cellular Replay and Implications for Education (Leela Chakravarti)
27  Medical Education Reform for the 21st Century (Robert Dilley)

spotlights on research
7   Studying Acute Myelogenous Leukemia: An Interview with Dr. Rachel Rau (Izzy Taylor)
8   The Long-Term Status of the Euro: Germany’s Role in Maintaining the Eurozone (Calvin Price)
9   Crossing Disciplines to Solve Complex Problems (Andy Ang Tu)
10  Why Egyptology? An Interview with Dr. Betsy Bryan (Isaac Jilbert)

humanities
31  The Importance of British Abolitionism (Anisha Singh)
34  Environmental Insecurity in Asia: A Geopolitical Interpretation (Janelle Wen Hui Teng)
38  Censorship Beyond the State: Examining the Role of Independent Press in Kenya’s Kakuma Refugee Camp (Elizabeth DeMeo)

science & engineering reports
41  Effects of Methamphetamine Addiction on Ultrasonic Vocalizations and the Rat Brain (Inga Brown)
44  Delineating the Role of BRF2 in Breast Cancer Pathogenesis (Jean Fan)
47  Genetically Engineered Transcriptional Response to Voltage Signals in Saccharomyces cerevisiae (Noah Young)
51  Understanding Greenhouse Gas Metrics in Climate Change Discussions: Developing a Greenhouse Gas Intercomparison Tool (Mark Brennan)



can you see yourself in hurj?
share your research!
now accepting submissions for our fall 2011 issue
focus - humanities - science - spotlight - engineering
contact hurj@jhu.edu




spotlight

Studying Acute Myelogenous Leukemia: An Interview With Dr. Rachel Rau

Izzy Taylor, Class of 2013
Molecular and Cellular Biology

It is clear that Dr. Rachel Rau is a seasoned physician. She sits across the table from me, ready to answer any question I might have for her. I already know her basic background: undergraduate at Case Western, medical school at Ohio State, current fourth-year fellow of the pediatric oncology unit at the Johns Hopkins Children’s Center. I know I’m going to ask her about her current research on acute myelogenous leukemia (AML). What she’s about to tell me dives deeper into what makes this brilliant doctor also a loving mother, wife, physician, and research advocate.

“I never thought I would get married. I thought I’d be one of those doctors solely committed to my job. But then I met my husband in med school…” Dr. Rau learned sooner rather than later that she could in fact “have it all.” She smiles as she describes her two beautiful children and talks about the thrill of juggling both a life of medicine and a life with a loving family. Her husband is a rheumatologist, and between her schedule and his, they’ve both learned to balance successful careers with family.

Dr. Rau is currently conducting research on AML, a cancer with a survival rate of less than 40%. Her main goal is to show that a major mutation in the gene nucleophosmin interacts critically with the gene Flt3; when both mutations are present, they lead to AML. When Dr. Rau started this venture, she had already identified that, on their own, both genetic alterations contributed to the disease. She also suspected that there could be some larger correlation between the two. Working with hundreds of mice, Dr. Rau starts by determining what DNA mutations the pups have and letting them live accordingly.
When the mice begin to show signs of the leukemia, she initiates multiple tests to look for activation of certain oncological pathways and sensitivity to certain drugs, as well as experiments with therapeutic remedies. She’s very close to publishing this breakthrough research, in the hope that the information can be applied to children and clinics everywhere.

Acute Myelogenous Leukemia: a cancer that begins inside the bone marrow. A highly fatal disease, AML typically requires chemotherapy or bone marrow transplants to treat.

And it doesn’t stop there. Dr. Rau is a proud recipient of a St. Baldrick’s grant, which helps fund this research. She is also one of the chief editors of the acclaimed pediatrician’s manual, The Harriet Lane Handbook. Now in its 18th edition, this go-to book for pediatricians collects valuable information for diagnosing and advising young patients. When I ask her about the process of publishing it, she merely shrugs and responds, “Eh, it was hard work but well worth it.” That hard work will again pay off when she joins the clinic staff as a junior faculty member this summer. One of the last questions I ask Dr. Rau is whether she has any advice for undergraduates aspiring to careers in medicine. Her advice: “Don’t completely write off research.” After talking with Dr. Rau, I don’t think any undergraduate would even dream of it.



spotlight


The Long-Term Status of the Euro: Germany’s Role in Maintaining the Eurozone

Calvin Price, Class of 2012
International Studies

The Greek debt crisis of 2010 turned world attention to the economics of the Eurozone and, in particular, raised awareness of the surprisingly high levels of public debt in several European countries. While only Greece has been bailed out thus far, the financial situations in Portugal and Ireland, above all, continue to raise fears that the Eurozone is not as stable as once hoped. If either of those two countries, or a country like Spain, Italy, or Slovenia, were to have the same level of financial crisis as Greece in the near future, the long-term implications for the Euro as a shared currency could be devastating.

The bailout of Greece was determined to be necessary primarily due to the potential consequences of Greece defaulting on its debt. If this occurred, investors would lose faith in Eurozone countries with already high debts, potentially causing a chain of events in which both Ireland and Portugal would also be forced to default. Such sovereign defaults would cause these nations to remove much of the collateral they have in the European Central Bank, possibly leading to a necessary removal of said countries from the Eurozone entirely.1 The fact that these countries continue to face a high level of sovereign debt today implies that the Eurozone as it exists today is still in danger. If the economies of these countries were to resemble the Greek economy of 2010, they would likely also require bailouts to keep them on the Euro. Is keeping these countries within the Eurozone worth the cost of more bailouts to its more economically stable countries? If Germany and France were to decide that they would no longer pay to keep the weaker states in the monetary union afloat, could the prevalence of the Euro decline as a result?

Germany took a very hard-line position in the economic bailout of Greece, and the German government did not approve the loan package until a set of severe austerity measures was imposed on Greece as a condition of the loan. The initial sentiment of the German people on the Greek crisis leaned towards not helping Greece, and the signing of yet another bailout could breed massive discontent among German citizens. Combined with an already-existing dislike for the Euro among many Germans (the Euro has the nickname “teuro” in German, a portmanteau of the German word for expensive, “teuer,” and “Euro”), as goods are generally more expensive under the Euro than they were under the Mark2, Germany may end up apathetic to the economic plight of other Euro nations. While France seems committed to the continued expansion of the Eurozone, Germany is a wild card on the issue. Since a bailout cannot happen without Germany, the country would be the key to the continued survival of the Eurozone if another Greek-style crisis were to occur.

Fortunately for Ireland and Portugal, it may still be advantageous for Germany to continue bailing out indebted countries, despite popular opinion against such moves. Germany makes more money off exports than any country in the world outside of China, and most of those exports go to other European Union states.3 While currency is not necessarily a barrier to exporting, there are competitors within the Eurozone, particularly Italy and Portugal, who might be able to take some of Germany’s business away if German goods become more expensive.

Germany’s other main tie to the Eurozone is the country’s past. Nationalism and patriotism are held as hallmarks of people throughout the world. In the United States or France, it is almost impossible to walk through a neighborhood without seeing the country’s flag flying from somebody’s house. In Germany, however, that would be a complete faux pas. The German people know the history of their country (above all, the memory of World War II lingers), and there seems to be a general consensus that outward nationalistic tendencies are a danger.
In this vein, the German people and their government want to do everything possible to curb isolationism, and refusing to aid a Eurozone member in need, possibly leading to the decline of the Eurozone in general, would be seen as an isolationist move.

Although Germany is likely the key to defending the Eurozone as other countries face deficit troubles, the best defense for the Euro lies with those countries themselves. If the states with sovereign debt issues can prevent these issues from getting out of hand, the Euro should be safe and possibly even ready for more expansion. The current global financial crisis has affected European business as well, so the lack of economic growth in Ireland and Portugal is not surprising, but it continues to look more and more like a disaster waiting to happen. If the world economic outlook were to improve, it is likely that the sovereign debt crisis could resolve itself without the need for outside assistance. If this does not happen, however, we will find out Germany’s true commitment to her ever-increasing leading role in the politics of the European Union.

References:
1. Hankel, Wilhelm. A Euro Exit is the Only Way out for Greece. Financial Times. 25 March 2010. http://www.ft.com/cms/s/0/6a618b7a-3847-11df-8420-00144feabdc0.html
2. Bordo, Michael D. and Harold James. A Long Term Perspective on the Euro. National Bureau of Economic Research. Cambridge. 2008. p. 22
3. EU für Export am Wichtigsten. N-TV Wirtschaft. 7 May 2010. http://www.n-tv.de/wirtschaft/EU-fuer-Export-am-wichtigsten-article861064.html




spotlight

Crossing Disciplines to Solve Complex Problems

Andy Ang Tu, Class of 2013
Biomedical Engineering

Studying biomedical engineering (BME) at Hopkins comes with many expectations: the late nights at the library, the routine credit overload, and, most importantly, the kinds of research to be involved with. Many people have asked what a BME student like me, who presumably came to Hopkins to study medical devices and replacement tissues, is doing in an immunology lab. To be honest, I didn’t know the answer at first; I didn’t know how an engineering student like me could apply what I have learned in class to immunology research. It wasn’t until much later that I realized that immunology, among many other fields of study, is becoming increasingly interdisciplinary, and thus is in need of engineers. In the following paragraphs, I focus on sub-fields of immunology that are growing and gaining interest at research institutions across America. At the same time, I seek to encourage Hopkins students to look to apply their expertise in unconventional ways.

The first thing one should know about immunology is that the body has evolved a high degree of complexity to deal with the impressively diverse classes of pathogens that humans are subjected to daily. This complexity means that there is much that engineers and synthetic biologists alike can learn from the immune system. One example is the activation of Ras proteins in lymphocytes (a class of white blood cells), which is important in the activation of the adaptive immune system [1]. The decision for a lymphocyte to activate and upregulate its cytotoxic functions is a digital one – the cell either responds to the signal or it doesn’t. The signal that it receives, on the other hand, is an analog one, depending on the frequency of signals and co-signals, among many other factors [1]. To deal with this discrepancy, lymphocytes use two molecules to catalyze Ras activation: RasGRP and SOS.
The SOS pathway includes a positive feedback loop that results in digital responses, while RasGRP responds in a graded fashion that does not exhibit hysteresis [2]. These two pathways work together not only to efficiently translate graded chemical signals into a digital response, but also to give the cells “memory” of previous stimulations, enabling them to respond accordingly to subsequent signals [2]. Ras proteins are one of many examples of unique pathways developed in the immune system. Over the past few years, more and more expertise in statistical mechanics has been combined with knowledge of cell biology to shed light on key molecular components of the immune system. Further understanding of these pathways, as a result of crossovers of disciplines, can not only help us explain other phenomena in the biological world, but also provide us with frameworks for new pathway designs that could lead to advances in synthetic biology.

The complexity of the immune system is directly correlated to its overall effectiveness. One of the biggest challenges in cancer treatment is that cancer cells are heterogeneous. This heterogeneity applies not only to surface markers, but also to inter- and intracellular pathways. Cancer can thus really be considered a “malicious network,” and a treatment that’s targeted against one specific aspect of this network is unlikely to succeed [3].
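The digital-versus-graded Ras behavior described above can be illustrated with a toy simulation. The model below is purely illustrative (the equations, parameter values, and function names are assumptions for the sketch, not the published Ras model): a self-reinforcing SOS-like feedback term produces a bistable switch that stays on after the stimulus is removed, while a feedback-free RasGRP-like term relaxes back reversibly.

```python
# Toy model of bistable vs. graded signaling (illustrative only, not the
# published Ras model):  da/dt = basal + gain*signal
#                                + feedback * a^2/(K^2 + a^2) - decay*a

def steady_state(signal, a, feedback, basal=0.02, gain=0.3, K=0.5,
                 decay=1.0, dt=0.01, steps=5000):
    """Relax activity `a` to its steady state under a fixed signal strength."""
    for _ in range(steps):
        production = basal + gain * signal + feedback * a**2 / (K**2 + a**2)
        a += dt * (production - decay * a)
    return a

def sweep(signals, feedback, a0=0.0):
    """Sweep the signal, carrying each steady state into the next step (memory)."""
    a, trace = a0, []
    for s in signals:
        a = steady_state(s, a, feedback)
        trace.append(a)
    return trace

up = [i * 0.05 for i in range(21)]   # ramp the signal 0 -> 1 ...
down = list(reversed(up))            # ... and back down to 0

# SOS-like positive feedback: the switch flips ON during the up-ramp and is
# still ON when the signal has returned to zero -- hysteresis, i.e. "memory".
sos_up = sweep(up, feedback=1.5)
sos_down = sweep(down, feedback=1.5, a0=sos_up[-1])

# RasGRP-like graded pathway (no feedback): fully reversible, no memory.
grp_up = sweep(up, feedback=0.0)
grp_down = sweep(down, feedback=0.0, a0=grp_up[-1])

print(sos_up[0], sos_down[-1])   # low at the start, still high after the round trip
print(grp_up[0], grp_down[-1])   # graded pathway returns to its starting level
```

Sweeping the stimulus up and back down traces two different branches when the feedback term is present, which is the hysteresis described above; with the feedback removed, the up and down traces coincide.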

Fortunately, the immune system’s diverse and clever methodology can be used to fight the cancer network. In many cases, in fact, it is the breakdown of this system that allows cancer to establish itself and grow [4]. How to repair and take advantage of the immune system to treat cancer, in effect fighting a “network” with a “network,” has become a popular field that requires input from various academic disciplines, including engineering.

As a result of the need for engineers in immunology, the field of “immunobioengineering” was created. The term describes the joint efforts of immunologists and engineers to understand and manipulate the immune system. These efforts include processes that many associate with biomedical engineering, such as the design of drug delivery vehicles and biomaterials. Already, many immunobioengineering efforts are beginning to flower. Research in adoptive immunotherapy, which consists of stimulating patients’ own white blood cells ex vivo and infusing them back into the patient to treat cancer, infections, and even autoimmune diseases, is already underway at Johns Hopkins [4]. There are also labs on campus that are working on drug delivery systems that can deliver cytokines and other stimulating factors to white blood cells. The Irvine lab at MIT has also made a great deal of progress in this field, including engineering designer materials for T-cell stimulation and drug delivery [5].

Studying BME at Hopkins comes with many expectations, and these expectations inadvertently limit the fields that students are willing to explore. Many students would be well served working in other, less “typical” fields for BME. After all, the very essence of BME is applying classical engineering principles in a less classical context; namely, medicine. In the same spirit, we shouldn’t limit ourselves, and instead should look for innovations and breakthroughs anywhere our skills may apply.

References:

[1] Chakraborty, Arup K., and Andrej Kosmrlj. “Statistical Mechanical Concepts in Immunology.” Annual Review of Physical Chemistry 61 (2010): 283-303. Print.
[2] Das, Jayajit, Mary Ho, Julie Zikherman, Christopher Govern, Ming Yang, Arthur Weiss, Arup K. Chakraborty, and Jeroen P. Roose. “Digital Signaling and Hysteresis Characterize Ras Activation in Lymphoid Cells.” Cell 136.2 (2009): 337-51. Print.
[3] Kreeger, Pamela K., and Douglas A. Lauffenburger. “Cancer Systems Biology: A Network Modeling Perspective.” Carcinogenesis 31.1 (2010): 2-8. Print.
[4] Oelke, Mathias, and Jonathan Schneck. “Immunotherapy with Enhanced Self Immune Cells.” Discovery Medicine 4.22 (2004): 203-07. Print.
[5] Stachowiak, Agnieszka N., and Darrel J. Irvine. “Inverse Opal Hydrogel-collagen Composite Scaffolds as a Supportive Microenvironment for Immune Cell Migration.” Journal of Biomedical Materials Research Part A 85A.3 (2008): 815-28. Print.



spotlight


Why Egyptology? An Interview With Dr. Betsy Bryan

Isaac Jilbert, Class of 2012
International Studies

I don’t care about Egyptology. Actually, I didn’t care about Egyptology. Until I spoke with Johns Hopkins Professor Betsy Bryan, I didn’t understand why anyone would possibly want to devote their life to studying graveyards and trinkets under the scorching heat of the Egyptian sun. As I talked with Professor Bryan, though, her words gradually transformed images of graveyards into grand tombs, and the trinkets of my imagination into lost treasures. I learned from Professor Bryan that it isn’t the material value of these remains that captures one’s imagination; it is instead history: the history of humanity and the history of ourselves that we see inside the tombs of ancient Egypt.

For Professor Bryan, it was never a question of “why Egyptology.” Growing up in Richmond, Virginia, she became interested, just like so many other kids, in mummies, and from then on, Professor Bryan proclaims, “I never really wanted to study anything else.” Her college admissions essay on Egyptology was inspired by her childhood, when in museums, “I used to stare at these inscriptions and wonder what they said…I just knew that’s what I was going to do.”

The challenges of entering the field of Egyptology are formidable, as there are only 13-15 Egyptology professors at any one time at just a few select schools. Hopkins is one of these schools, with two full-time professors of Egyptology as well as undergraduate- and graduate-level courses in Egyptology. In regards to Egyptology’s place at Hopkins, Professor Bryan remarked, “I have seen in the years that I have been here, the school being more and more receptive…I have really seen that the school does embrace…the idea of having a traditional Egyptology program and other Near Eastern programs within it despite the small size of the university.
I think that they see that we do it, and that we get recognition in the international community, and so I think that they have embraced it.” Egyptology at Hopkins has also been given a boost by the opening of Gilman Hall, as new museum space has become available and resources to students and professors alike have improved.

What is Egyptology, though? To Professor Bryan it “is one of the few generalist fields left.” Egyptology is part science, part art, and perhaps part luck, as a wide range of skills is needed to piece together the lives of those who lived more than 3000 years ago. A myriad of languages are required: not just French and German for research purposes, but also Arabic for field work and an understanding of hieroglyphs to interpret ancient records. Furthermore, knowledge of art history is required, as well as of ancient Egyptian political, economic, and social history. These skills must then be accompanied by a knowledge of archaeological techniques and an understanding of scientific methods and technical skills, as Professor Bryan explained: “in the field…we can use a lot more scientific, precise methods…we are lucky to be bringing more modern techniques into the field.”

Professor Bryan has spent most of her time researching and exploring the lives of the pharaohs Thutmose IV and his son, Amenhotep III. Her dissertation on Amenhotep III explored the life of a ruler who from his father “inherited a state that was the most powerful, probably the richest state in the ancient world at that moment.” As gold poured in from Nubia, and Egypt was at peace with all her neighbors, Amenhotep III utilized the vast resources of his kingdom to build grand temples and statues of himself. As Professor Bryan explained, “temple building was one of the things kings did to build their legitimacy.” Amenhotep III, in addition to building temples, particularly “wanted to make statements that would be visible to everybody on earth, so he also had huge statues of himself made that were 70ft high and made out of a single piece of stone.” Thousands of smaller statues were also made in Amenhotep III’s likeness, as at any one time 100-150 sculptors were working on creating his statues.
It is not just studying tombs and statues for Professor Bryan, but rather the “observation of a man who is making himself a god by means of art production.”

Professor Bryan’s current research, however, is on the goddess Mut and the drinking parties that accompanied the celebration of her favor. Mut is often depicted as a lion-headed figure with a woman’s body, although the exact depiction and form of the lion’s head can change, depending on how the Egyptians wished to describe Mut. The mythology surrounding Mut is that the sun god Ra wished at one point to punish humanity and sent Mut out to destroy Egypt. Yet upon changing his mind, Ra could not convince Mut to stop, as she had developed a lust for blood. Ra asked the Egyptians to flood their fields with beer tinted with red dye in order to trick Mut into believing it was blood. Upon discovering the red beer, Mut drank until passing out, and thus her rampage was halted. After Mut left, however, her people wanted her back, and coaxed her back by making red-tinted beer.

The drinking parties that Professor Bryan currently studies are based on this story: once or sometimes twice every year, the Egyptian people would spend a night at a temple and drink until they passed out. In the morning, a statue of the goddess would be brought in, and when drummers walked in banging drums, the participants would wake up in a stupor and at that moment see the statue, at which point they believed they had an “epiphany with the goddess” and felt that when they saw “the goddess [in this state], they experience[d] her.” Professor Bryan explained that “in 2004, when we were excavating on the porch of the temple itself…a foundation that was made of the columns that had been from the hall of drunkenness, and it was the hall of drunkenness that was put up for this Festival of Drunkenness…so that led me to do a great deal of research on what the Festival of Drunkenness was like.”

Professor Bryan’s research is particularly important to Egyptology because it connects many of the ceremonies and rituals once thought peculiar to later Egypt to an earlier time. Professor Bryan stated that “my interest has been in trying to understand what it [the drinking ceremonies] looked like in this Early New Kingdom period, and what I’ve been able to demonstrate is that it in fact has almost identical aspects to what we know of how it functioned and what its purposes were in a much later time.” As to why this was not known before, Professor Bryan proposes that in fact in the records “you find traces of it all the way down. I think it’s there, it’s just not very visible.” In addition, the celebration of the goddess Mut is unique in that it is not strictly hierarchical, but rather, as Professor Bryan points out, has “a lot of the kinds of participatory elements that you see in things like Southwest Indian uses of drugs, because it’s a communal event.”

Unique to Professor Bryan’s experiences is the current revolution in Egypt. While excavating in Luxor at the beginning of this year, she and her team of graduate students were evacuated as the protests got out of hand. Professor Bryan explained that “we didn’t personally have any difficulties while we were there in terms of interrupting our work. I had to make the decision to close down because…particularly the graduate students were in an uncomfortable environment, and the protests were beginning to get a little out of control. How it will affect us in the future, it’s really too soon to tell.” Professor Bryan’s love of Egyptian history also very clearly translates into a love of the Egyptian people, as she added, “I only hope that the Egyptian population itself is going to have the opportunity to have elections and put a government in.”

Ancient Egypt isn’t just a matter of academic curiosity for Professor Bryan, but rather something that appeals to everyone on a far deeper level. On why ancient Egypt matters to people, Professor Bryan explained: “I honestly think that it is the result of the enormous amount and visual impact of the monuments that have come down from ancient Egypt…and when something [appears] in the news about Egypt they immediately relate it to these remarkable memorials that this ancient culture left. And I think it does have to do with this ‘you can take it with you’ attitude. People enjoy the fact that we know the names of these people who lived thousands of years ago because they built tombs and or they built pyramids and mummified themselves, so maybe it makes us all feel like in some sense we can have some immortality.”

Egyptology matters because, with an understanding of the history Egyptology uncovers, pyramids are no longer graveyards, and statues are long-lost treasures, not because they are intrinsically valuable, but because they are a reflection of our own fears and dreams. And, as Professor Bryan added, “it doesn’t hurt that there’s a lot of gold.”



Focus:


Hopkins undergraduates present their research on the neural processing of learning, highlighting innovative methods to improve the education system




focus

Learning With Video Games

Marshall Weinstein, Class of 2012
Neuroscience

In recent years, we have witnessed the beginnings of a revolution in education. Technology has already fundamentally altered the way we do many things in daily life, but it is just starting to make headway in changing the way we teach. Just as television shows like Sesame Street enhanced children’s learning by teaching in a fun format, electronic games promise to greatly enhance the learning of kids and adults by actively engaging them in the process.




The Entertainment Software Association estimates that sixty-seven percent of American households play video or computer games.1 They are especially popular among young males, with a recent study of teenagers by researchers at Yale reporting that 76.3% of male (and 29.2% of female) teens play video games.2 These numbers do not take into account the larger audience that has become hooked on other types of electronic games like the popular “Farmville,” which has more than 55 million monthly players, and “Angry Birds,” which has been downloaded more than 50 million times. People are devoting larger and larger amounts of their time to these electronic worlds. Collectively, we now spend three billion hours a week gaming; the number of hours that gamers world-wide have spent playing the game “World of Warcraft” alone adds up to 5.93 million years. It makes sense then that electronic games are big business, with spending on games and equipment totaling $18.6 billion in 2010.3

This is an especially exciting time for the industry, as rapid advances in technology and design are allowing for a new generation of games. These changes range from advancements in the content of the games, such as the development of smarter AI in order to produce computer-controlled non-player characters (NPCs) that are more human in their behavior, to the way the games are played, such as the new group of games that have freed players from traditional manual controllers by allowing them to play using their own movements. At the same time, game designers are working hard to expand the market for games from the traditional young male audience to much broader segments of the population. “I think we will find that the traditional demographics will completely change in five years,” says Harley Baldwin White-Wiedow, director of design at Nihilistic Software. “Seven-year-old kids and 77-year-old women? We’ll absolutely be thinking of them when we make games.”4

In contrast with the enthusiasm of game players and developers are a number of increasingly vocal critics who are concerned about the negative effects of playing electronic games. Many see video games as an escapist retreat from reality; at best, they are a waste of time, and at worst, a corrupting influence. President Obama has repeatedly sounded the alarm against electronic games, such as in a 2009 speech to Congress in which he urged parents to “put away the video games.” Opponents cite recent studies pointing out a number of ways in which video games have been shown to negatively impact those who play them, with time spent playing video games correlating with decreased health and sleep quality and interference with real-life socializing and academic work.5

Both video games’ critics and defenders have recently been pointing to a growing body of evidence that the games you play affect you in ways that last long after the game is over. Recent research shows that video game play produces both positive and negative cognitive effects. Studies have found a number of benefits resulting from video game playing, including improvements in visual attention6, speed of processing7, and probabilistic inference.8 On the other hand, many parents are troubled by reports that violent video games increase aggression in those who play them, although the research on this is still inconclusive at the current time.9 These findings fit well with neuroscientists’ increasing understanding of just how malleable the human brain is and its ability to change as a result of one’s experience, a phenomenon known as neuroplasticity. While the previous belief amongst scientists was that the brain does not change much after childhood, decades of research have found that the brain as a whole remains plastic throughout life. The human brain consists of close to 100 billion interconnected neurons, and learning can happen through a change in the strength of the connections or by adding or removing connections. Learning and practicing a challenging task, such as playing games, can actually change the brain.

Now that we know that the brain can be modified by activities as simple as playing a video game, and since it has been obvious for a while that people find these games engaging and fun, the obvious next step is to start actively using games as a teaching medium. “There’s still a tendency to think of video games as a big wad of time-wasting content,” said Cheryl Olson, co-director of the Center for Mental Health and Media at Massachusetts General Hospital. “You would never hear a parent say we don’t allow books in our home, but you’ll still hear parents say we don’t allow video games in our home. Games are a medium. They’re not inherently good or bad.”10 If games are going to be affecting kids anyway, it makes sense to start actively designing games with teaching as the goal, rather than an unintended result.

Video games provide a great teaching tool for a variety of reasons. They are hard, and people enjoy being challenged. Crucially, since players move up to more demanding levels of play as they score more points, these games are adaptively hard, challenging people right at the edge of their abilities, which is a powerful component of learning. Along with this sense of challenge necessarily comes a sense of optimism and confidence. Research shows that gamers spend, on average, 80% of their time failing, but instead of giving up, they stick with the difficult challenge and use the feedback of the game to get better. In a good game, a player has clear goals and feelings of productivity, and this sense of confidence and accomplishment can transfer over into the real world. One recent study found, for example, that players of “Guitar Hero” are more likely to pick up a real guitar and learn how to play it. At the same time, today’s games, with compelling stories, high-quality graphics, and multiplayer environments, are ever improving in their ability to engage the player.

Among the most important issues with the use of video games for learning is the extent to which the specific cognitive effects linked to games generalize to non-game tasks, known as transfer effects. For example, practicing a racing game could improve your driving ability, or it could just make you better at that specific game. This is of special concern for games that are marketed as ways to improve general cognitive functions, such as memory and attention. A number of companies already exist that produce software designed to keep the brain in good health as it ages.
There is some evidence to support these claims, such as a promising 2008 study in which senior citizens who played “Rise of Nations,” a strategic video game devoted to acquiring territory and nation building, showed improvements in a wide range of cognitive abilities, including memory, reasoning, and multitasking.11 Teams of researchers are hard at work learning how to optimize a gaming experience to maximize learning. For example, Anne McLaughlin, a psychologist who co-directs the Gains Through Gaming lab at North Carolina State University, is assessing whether games that are novel, include social interaction, and require intense focus are better than other games at boosting cognitive skills. McLaughlin and her colleagues will use the findings to design games geared toward improving mental function among the elderly. At M.I.T., Eric Klopfer is researching the development and

use of computer games and simulations for building understanding of science and complex systems. This area of research also involves crosstalk with researchers in traditional areas of neuroscience and psychology, because as we learn more about the specific neural processes and areas involved in various tasks, we can better design games that home in on the specific skills and mechanisms in need of improvement. Even as scientists in the lab study the process by which video games produce these cognitive effects, others are busy putting this knowledge into practice and creating games. More than 19,000 players of “EVOKE,” an online game created for the World Bank Institute, undertook real-world missions to improve food security, increase access to clean energy, and end poverty in more than 130 countries. The game focused on building up players’ abilities to design and launch their own social enterprises. After 10 weeks, players had founded more than 50 real companies. Games offer more than just the ability to teach: they can also be of therapeutic value. A virtual environment can potentially be useful for treating people with addictions. For example, a 2008 study found that a virtual reality environment can provide the climate necessary to spark an alcohol craving so that patients can practice how to say “no” in a realistic and safe setting.12 New research by Eryn Grant shows that the virtual reality game “Second Life” can be useful in boosting people’s ability to interact socially.13 Michael Merzenich, a leading researcher in the field of neuroplasticity, developed a series of “plasticity-based computer programs” known as “Fast ForWord.” The program offers seven brain exercises to help with the language and learning deficits of dyslexia. These advances are especially welcome in the classroom, coming at a time when improvements in the American educational system are badly needed.
On a recent nationwide test, known as the National Assessment of Educational Progress, which included over 300,000 students, only about a third of fourth graders and a fifth of high school seniors scored at or above the proficiency level.14 On an international test, PISA (Program for International Student Assessment), given to 15-year-old students around the world by the OECD, the U.S. ranked 14th in reading, 17th in science, and 25th in math out of 34 countries. In a recent speech, President Obama recalled how the Soviet Union’s 1957 launching of Sputnik provoked the United States to increase its investment in math and science education, helping America win the space race. “Fifty years later, our generation’s Sputnik moment is back […] As it stands right now, America is in danger of falling behind.”

Video games fit into a larger effort to incorporate new technology into the classroom, a process known as technology integration. Examples include electronic student response systems, virtual field trips, and interactive whiteboards, which provide a way for students to interact with material on the computer and can accommodate different learning styles. Some schools are already integrating games into their curriculum; in the just-opened Quest to Learn school in Manhattan, students learn almost entirely through videogame-inspired activities, an educational strategy geared to keep kids engaged and prepare them for high-tech careers.

There is, of course, opposition to heavy use of video games in the classroom and in general. Some critics protest that relying too much on technology detracts from other important skills. Others argue that, while it is good that games can teach children in a fun and engaging way, it is also important that children retain the ability to learn in a traditional setting and to be productive in a context that is not necessarily designed to entertain them. At the same time, video games offer an entertaining medium to which children already devote huge amounts of time, and that time could be put to more productive use. While these objections are important, they will likely serve to moderate, rather than eliminate, the use of games for teaching purposes, because the potential benefits are so abundant.

References
[1] http://www.theesa.com/facts/index.asp
[2] Desai, R.A., Krishnan-Sarin, S., Cavallo, D., Potenza, M.N. “Video-Gaming Among High School Students: Health Correlates, Gender Differences, and Problematic Gaming.” Pediatrics 126.6 (2010): 1414
[3] http://www.reuters.com/article/2011/01/14/us-microsoft-xbox-idUSTRE70D00120110114
[4] http://www.newsweek.com/2010/12/16/motion-controlled-videogames.html
[5] Padilla-Walker, L.M., Nelson, L.J., Carroll, J.S., Jensen, A.C. “More Than Just a Game: Video Game and Internet Use During Emerging Adulthood.” Journal of Youth and Adolescence 39.2 (2010): 103-113, and Smyth, J.M. “Beyond Self-Selection in Video Game Play: An Experimental Examination of the Consequences of Massively Multiplayer Online Role-Playing Game Play.” CyberPsychology & Behavior 10.5 (2007): 717-721
[6] Green, C.S., Bavelier, D. “Action Video Game Modifies Visual Selective Attention.” Nature 423 (2003): 534-537
[7] Dye, M.W.G., Green, C.S., Bavelier, D. “Increasing Speed of Processing With Action Video Games.” Current Directions in Psychological Science 18.6 (2009): 321
[8] Green, C.S., Pouget, A., Bavelier, D. “Improved Probabilistic Inference as a General Learning Mechanism with Action Video Games.” Current Biology 20.17 (2010): 1573-1579
[9] See http://www.techaddiction.ca/effects_of_violent_video_games.html for a collection of recent papers on the topic
[10] http://www.boston.com/news/health/articles/2009/10/12/how_video_games_are_good_for_the_brain
[11] Basak, C., Boot, W., Voss, M., Kramer, A.F. “Can training in a real-time strategy videogame attenuate cognitive decline in older adults?” Psychology and Aging 23 (2008): 765-777
[12] Bordnick, P., Traylor, A., Copp, H.L., Graap, K.M., Carter, B., Ferrer, M., Walton, A.P. “Assessing reactivity to virtual reality alcohol based cues.” Addictive Behaviors 33.6 (2008): 743-756
[13] http://www.sciencedaily.com/releases/2008/07/080717210838.htm
[14] http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2011451

The Neurobiology of Learning: Cellular Replay and Implications for Education
Leela Chakravarti, Class of 2012, Neuroscience

In developing effective educational strategies, it is important to consider the underlying biological processes involved in learning. Amidst the social, political, and economic aspects of education lies its fundamental aim: developing and cultivating the mind from the brain to form contributing members of society. The past few decades have seen enormous advances in understanding the neural substrates of learning and memory that allow people to become “educated.” The human brain achieves remarkable feats of intelligence through a high degree of plasticity in the process of learning. The brain can adapt to new information on both short-term and long-term bases, employing both structural and functional mechanisms to achieve different types of memory. Recent advances have demonstrated the importance of repetition in neuronal firing patterns for the consolidation of memory. Work over the past few decades has produced a strong understanding of the cellular and molecular changes that take place with such repetition, and of why these types of cellular rehearsal are so crucial to the persistence of learning. An understanding of the neural basis of memory consolidation may be effectively applied to the development of teaching and learning methods that are more compatible with the brain’s systems for learning.

Though many brain structures are involved in learning and memory, the hippocampus and amygdala seem to have instrumental roles in various types of memory formation. The hippocampus, a structure in the temporal lobe of the brain, plays an elemental role in memory encoding, particularly for declarative memories such as word definitions or specific events.1 The amygdala, located near the hippocampus in the medial temporal lobe, is a nodal point for the encoding and modulation of memories with high emotional content, as well as for control of emotional behavior.2 Certain cells in the CA1 region of the hippocampus, called place cells, encode an animal’s location in space through the use of visual cues and are differentially activated at different locations in the animal’s immediate environs. Sequential activation of hippocampal place cells can be interpreted as the neural representation of an animal’s path.3 Many studies have used place cells and their response to spatial learning tasks to understand how hippocampal replay contributes to learning and memory consolidation (Figure 1). It has long been acknowledged that repetition of information and training over time results in increased retention. Hermann Ebbinghaus, a German psychologist and pioneer in the field of memory research, wrote in 1885, “if the relearning [of a series of syllables] is performed a second, a third, or a greater number of times, the series are more deeply engraved and fade out less easily and finally…they become possessions of the soul as constantly available as other image-series which may be meaningful and useful.”4 Over a century later, this statement remains valid, and many of the biological mechanisms that

Figure 1: Locations of the amygdala and hippocampus in the medial temporal lobe. Reprinted from: Bear M, Connors B, Paradiso M. Neuroscience: exploring the living brain. Philadelphia: Lippincott Williams & Wilkins; 2007. 669 p.


form the basis of this phenomenon have been discovered. However, what is less apparent and perhaps even more intriguing is the developing idea that conscious repetition of a learning event is not the only pathway through which memories are consolidated. Current research has found that neurons undergo forms of “rehearsal” both before and after the learning event. Firing patterns characteristic of the newly acquired information or skill are repeated during times of relative rest, such as during sleep or lower levels of conscious mental activity. Synchronized and patterned firing produces characteristic electrical signatures called “sharp wave ripples”; these particular electrical potentials are observed as neurons repeatedly fire in set sequences.5 During slow wave sleep (the deepest stage of non-REM sleep) after animals have traversed a maze, there is an observable increase in sharp wave ripple density in the CA1 hippocampal region. Additionally, a further increase in ripple density correlates with increased spatial discrimination and associative learning. This suggests not only that neurons rehearse previous firing patterns during slow wave sleep, but also that they can form different associations and possibly build upon original information to become more adept at navigational tasks.6 One of the original studies demonstrating hippocampal replay in the awake state showed that after an animal finishes movement along a linear track and rests while obtaining a reward, the particular sequence of place cells that was originally activated gets activated in reverse order.7 Later work showed that forward replay tends to occur prior to movement in a cellular planning process.8 An important aspect of replay while the animal is awake is enhancement of sharp wave ripples in the hippocampus when learning is followed by receipt of a reward.9 Interestingly, the phenomenon of hippocampal preplay has also been observed.
Sequential firing of place cells in the order that they are activated during a novel spatial task takes place in the resting or sleeping period prior to the task, suggesting a form of neural priming.10 Finally, it is known that novel experiences produce upregulation of pro-growth and plasticity proteins (discussed later).11 A recent study shows that introduction of novel navigation experiences in rats either before or after an unrelated learning event increased the persistence of long-term learning of the original task. This, along with the previously mentioned studies, suggests that neural activity prior to and following the actual encoding period is crucial to proper memory consolidation, even if the activity does not take place during repetition of the learning task.12 Although many of these studies consider effects on hippocampal place cells, the effects of replay and cellular rehearsal have also been observed in many forms of memory consolidation during offline processing; consolidation of declarative and procedural memory through neuronal replay has also been observed in other brain regions, such as the prefrontal cortex, during sleep.13,14 While the effects of replay are more easily observable in the model of spatial learning, the general phenomenon extends to other types of learning and is of great interest for developing theories of education. The theory that repeated firing in specific patterns during both behavioral tasks and resting states leads to consolidation of memory is thus becoming well established. What biochemical changes are actually taking place during these cellular rehearsals?

On a cellular level, learning equates to enhanced and specific communication between neurons as well as memory of particular firing patterns and circuits. Neurons communicate with each other at synapses through the use of electrochemical signaling. Synapses are junctions between cells into which a neurotransmitter is released; protein scaffolds on both sides of a synapse maintain its structural integrity. Electrical current is registered by one cell and, if adequate, leads to the release of a chemical neurotransmitter into the synaptic cleft. The neurotransmitter then binds to a receptor on the postsynaptic neuron. The binding event may be interpreted in different ways, depending on the function of the receptor. Many receptors are ion channels that open to selectively admit Na+, K+, Ca2+, or Cl-, thus changing the intracellular electrochemical environment. Specific electrical changes can lead to the production of action potentials, electrical signals that travel down the axon (a neuronal extension that communicates with other cells).
At axon terminals, the electrical signal is received and triggers the release of neurotransmitter onto other neurons. The neural basis of learning is grounded in changes both in synaptic strength and in specific synaptic contacts between neurons. Repeated stimulation of a synapse leads to synaptic strengthening, providing a basis for the need for replay. Neurons can release various types of neurotransmitters; among these is the excitatory neurotransmitter glutamate. Release of glutamate from the presynaptic neuron produces several intracellular effects. AMPA receptors, cation channels that generally do not admit calcium, are among the first receptors to open in response to glutamate. A certain type of glutamate receptor, the NMDA receptor, is able to admit both Na+ and Ca2+ into the cell. However, this type of receptor can only do so if the postsynaptic neuron is already depolarized; its admittance of Ca2+ is thus dependent upon continual activity and previous entrance of cations through AMPA receptors. Entry of Ca2+ through NMDA receptors triggers various signaling cascades, some of which activate protein kinases, enzymes that can change the activity of other proteins (Calmodulin-dependent Kinase II, Protein Kinase A, and Protein Kinase C). In particular, Calmodulin-dependent kinase II (CaMKII) gets activated and phosphorylates AMPA receptors, causing the current flowing through them to increase.15 CaMKII also moves to the postsynaptic membrane and serves as a scaffold for insertion of additional AMPA receptors, increasing current flow into the postsynaptic neuron when neurotransmitter is released.16 Ca2+ also stimulates production of the gaseous neurotransmitter nitric oxide (NO) in the postsynaptic neuron. NO diffuses back to the presynaptic terminal and eventually activates Protein Kinase G, which phosphorylates presynaptic ion channels and leads to an increase in release of neurotransmitter.17 With an increase in neurotransmitter release and an increase in the number of receptors and in the effect of neurotransmitter binding, the synapse grows stronger, or potentiates, as activity in the presynaptic neuron strongly drives activity in the postsynaptic neuron (Figures 2 and 3).

In order to remain powerful in the long term, the synapse must grow and form new connections between the presynaptic and postsynaptic neurons. Growth and formation of new contacts require protein synthesis, and thus take more time than the phosphorylation events that produce short-term strengthening. Several different signaling pathways contribute to the synthesis of proteins needed for synaptic growth. One of the main pathways involves activation of Protein Kinase A (PKA) through reception of glutamate or 5-HT (serotonin). PKA activates Mitogen-activated Protein Kinase (MAPK); the two kinases turn on a DNA-binding protein called CREB-1 (cAMP response element-binding 1).18 CREB-1 is a transcription factor in the nucleus that turns on transcription of pro-growth genes, allowing the cell to synthesize proteins that are essential for the formation of new synaptic contacts.19 CREB-1 also turns on transcription of proteins that allow it to become more effective by relieving the inhibition of CREB-1 by CREB-2.20 These varying levels of pro-growth protein synthesis provide a compelling explanation for why repeated stimulation and firing are necessary to evoke full responses.
Recent work has also shown that the cAMP-PKA-CREB pathway may be implicated in memory consolidation through repetitive firing during sleep.21
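For readers who think computationally, the coincidence requirement described above can be caricatured in a few lines of Python. This is purely an illustrative toy model of our own devising, not drawn from any of the cited studies; the `simulate` function and its `gain` parameter are invented for the sketch. A synaptic weight is strengthened only when presynaptic and postsynaptic activity coincide, echoing the NMDA receptor's demand for simultaneous glutamate release and postsynaptic depolarization:

```python
# Toy model of activity-dependent synaptic strengthening (illustrative only).
# The synapse potentiates only when pre- and postsynaptic activity coincide.

def simulate(pairings, weight=1.0, gain=0.2):
    """Return the synaptic weight after a sequence of (pre, post) activity pairs."""
    for pre_active, post_active in pairings:
        if pre_active and post_active:
            weight *= (1 + gain)  # coincident activity -> potentiation
    return weight

# Repeated "rehearsal" (paired firing) strengthens the synapse;
# presynaptic activity without postsynaptic response leaves it unchanged.
rehearsed = simulate([(True, True)] * 10)
unpaired = simulate([(True, False)] * 10)
print(rehearsed > unpaired)  # → True
```

The point of the caricature is simply that repetition compounds: each coincident pairing multiplies the weight, so ten rehearsals yield a far stronger synapse than one, while activity on only one side of the synapse accomplishes nothing.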

Figure 2: Elements of a glutamate synapse. González MI, Robinson MB. Protein kinase C-dependent remodeling of glutamate transporter function. Mol Interv. 2004;4(1):48-58.

Synthesis of new proteins allows the neuron to form more dendritic spines at which it can form additional synapses with other neurons. It also allows presynaptic neurons to form additional terminals to match the new receptive surfaces. Formation and retention of spines and terminals form the cellular basis of learning and memory.22 These long-lasting structural changes would not take place without a high degree of repeated stimulation of the synapse between two neurons. An important molecular pathway involved in synapse stabilization and persistence is the secretion of neurotrophins. In particular, brain-derived neurotrophic factor, or BDNF, signals from postsynaptic neurons back to their presynaptic partners. This secretion is activity-dependent, and presynaptic reception of BDNF also depends on simultaneous activity (action potentials). Reception of BDNF activates a signaling cascade that allows the synapse and neuron to survive; maintenance of newly formed synapses is therefore dependent on the activity of these connections, which explains the need for repeated sequential firing.23

While formation of new structural connections between neurons is necessary for the creation of lasting memories and learning, purely structural changes are not sufficient for memory consolidation. The synapse must have functionality in the form of viable receptors and supporting proteins. Various “plasticity-related proteins,” or PRPs, have been identified and shown to be important in synapse maintenance; many are thought to be synthesized by local translation of mRNAs at specific synapses.24 The original “synaptic tagging and capture hypothesis” postulates that during encoding, molecular tags are laid down at active synapses and that later protein synthesis contributes to maintenance of synapse functionality.25 More recent work has suggested a dissociation between functional changes and structural changes. Although structural changes may take place in the formation of new contacts between neurons, insertion of AMPA receptors does not always occur simultaneously.26 However, insertion and functionality of AMPA receptors are necessary for maintenance of the structural contact, as the synapse must be able to effectively receive excitatory neurotransmitters in order to survive.27 Additionally, maintenance of receptor density is necessary for long-term stabilization of the new synapse.28 Revisions of the original tagging and capture hypothesis state that tagging is a property of the entire synapse, rather than just one molecular signature. The synaptic tag is present as a temporary structural change in the synapse through the activity of various kinases. One such kinase is CaMKII, which relocates (as discussed above) to effect AMPA receptor insertion.29 Other structural changes are made to facilitate receptor insertion postsynaptically and increased neurotransmitter release presynaptically.30 These changes remain fairly unstable until plasticity-related proteins are captured by the synapse and allow for final stabilization. These molecular mechanisms allow us to understand the logic behind the brain’s need for both behavioral and resting repetition of learned sequences. Cellular activity over an extended time course including past, present (time of encoding), and future leads to memory consolidation. Activity related to the learning task and unrelated activity in the same neuronal population may both contribute to stabilization of synapses. How may these findings contribute to theories of education?
Over the past few years, there has been increasing speculation as to the application of neuroscience to theories of education. An article in the British Journal of Education Psychology states that consideration of the neural circuits involved in different types of learning may provide important information for educators. In relation to neural replay, the article mentions that during sleep, specific neural circuits and brain regions are reactivated.31 A report by the Teaching and Learning Research Programme of the Economic and Social Research Council in the U.K. also encourages the incorporation of valid scientific research in formation of educational strategies. This report also discusses the role of sleep as an active time of cellular rehearsal.32 As previously discussed, it has also been found that reactivation of relevant circuitry during wakeful resting periods contributes to the consolidation


hurj spring 2011: issue 13 of memory and even the development of novel associations. Educators may be able to use this information to engineer learning schedules that intelligently combine periods of memory encoding with periods of rest or even sleep. Another possible strategy to consider is the role of reward pathways in learning, as seen in the increased slow wave ripple density observed when learning is followed by a reward; educational strategies that incorporate reasonable rewards following periods of learning may also prove useful in the classroom. The two above-mentioned reports, along with numerous others, discuss many possible applications of neurobiology to education, but few existing commentaries detail the role of neural firing repetition outside of immediate behavioral input and its possible implications for educational methods. The neuroscience of education as a field is very much in its nascent stages. As we learn more about the neurobiology of learning and memory, we will continue to develop better ideas about how to harness the power of the brain’s natural functions in targeted educational strategies. Researchers and educators have already begun to develop integrated methods of education, and the coming years will see exciting new educational initiatives that truly allow us to cultivate the mind from the brain. References: [1] Heit G, Smith ME, Halgren E. Neural encoding of individual words and faces by the human hippocampus and amygdala. Nature. 1988;333(6175):773-5. [2] LeDoux JE. Emotional memory systems in the brain. Behav Brain Res. 1993 Dec 20;58(1-2):69-79. [3] O’Keefe J. Place units in the hippocampus of the freely moving rat. Exp Neurol. 1976 Apr;51(1):78-109. [4] Ebbinghaus, Hermann. Memory: a contribution to experimental psychology. Ruger, H, translator. New York: Teacher’s College, Columbia University; 1913. 81 p. Translation of: Über das Gedächtnis. [5] Kudrimoti HS, Barnes CA, McNaughton BL. 
Reactivation of hippocampal cell assemblies: effects of behavioral state, experience, and EEG dynamics. J Neurosci. 1999 May 15;19(10):4090-101. [6] Ramadan W, Eschenko O, Sara SJ. Hippocampal sharp wave/ripples during sleep for consolidation of associative memory. PLoS One. 2009;4(8):e6697. [7] Foster, D.J. & Wilson, M.A. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature. 2006;440(7084):680–683. [8] Diba K, Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nat Neurosci. 2007;10(10):1241-2. [9] Singer AC, Frank LM. Rewarded outcomes enhance reactivation of experience in the hippocampus. Neuron. 2009;64(6):910-21. [10] Dragoi G, Tonegawa S. Preplay of future place cell sequences by hippocampal cellular assemblies. Nature. 2011;469(7330):397-401. [11] Moncada, D. & Viola, H. Induction of long-term memory by exposure to novelty requires protein synthesis: evidence for a behavioral tagging. J. Neurosci. (2007);27:7476– 7481. [12] Ballarini, F., Moncada, D., Martinez, M. C., Alen, N. & Viola, H. Behavioral tagging is a general mechanism of

focus

Figure 3: Synaptic strengthening mechanisms at a glutamate synapse. Available from: OpenWetWare, Wikipedia BIO154/254: Molecular and Cellular Neurobiology. http:// openwetware.org/wiki/BIO254:CaMKII.

long-term memory formation. Proc. Natl Acad. Sci. U.S.A. 2009;106(34):14599–14604. [13] Born J. Slow-wave sleep and the consolidation of longterm memory. World J Biol Psychiatry. 2010;11 Suppl 1:16-21. [14] Euston DR, Tatsuno M, McNaughton BL. Fast-forward playback of recent memory sequences in prefrontal cortex during sleep. Science. 2007;318(5853):1147-50. [15] Derkach V, Barria A, Soderling TR. Ca2+/calmodulinkinase II enhances channel conductance of alpha-amino-3hydroxy-5-methyl-4-isoxazolepropionate type glutamate receptors. Proc Natl Acad Sci U S A. 1999 Mar 16;96(6):3269-74. [16] Boehm J, Malinow R. AMPA receptor phosphorylation during synaptic plasticity. Biochem Soc Trans. 2005 Dec;33(6):1354-6. [17] Taqatqeh F, Mergia E, Neitz A, Eysel UT, Koesling D, Mittmann T. More than a retrograde messenger: nitric oxide needs two cGMP pathways to induce hippocampal long-term potentiation. J Neurosci. 2009 Jul 22;29(29):9344-50. [18] Waltereit R, Weller M. Signaling from cAMP/ PKA to MAPK and synaptic plasticity. Mol Neurobiol. 2003 Feb;27(1):99-106. [19] Mayr B, Montminy M. Transcriptional regulation by the phosphorylation-dependent factor CREB. Nat. Rev. Mol. Cell Biol. 2001. 2(8): 599–609. [20] Bartsch D, Ghirardi M, Skehel PA, Karl KA, Herder SP, Chen M, Bailey CH, Kandel ER. Aplysia CREB2 represses long-term facilitation: relief of repression converts transient facilitation into long-term functional and structural change. Cell. 1995 Dec 15;83(6):979-92. [21] Hernandez PJ, Abel T. A molecular basis for interactions between sleep and memory. Sleep Med Clin. 2011;6(1):7184. [22] Rusakov DA, Stewart MG, Korogod SM. Branching of active dendritic spines as a mechanism for controlling synaptic









focus

Medical Education Reform for the 21st Century
Robert Dilley, Class of 2012, Molecular and Cellular Biology

In 1910, the Flexner report provided the first comprehensive look at medical schools in North America. Shockingly, Flexner found that many schools were profit-driven and produced ill-equipped physicians, a consequence of the lack of a uniform basis for medical education. His report fueled changes in medical education that integrated science and medicine and brought about a consistently high standard of knowledge for graduating physicians. With the turn of the 21st century, and an oncoming revolution in medicine led by genomic and technological advances, another medical education reform is in order: one that integrates humanism with science in order to produce an environment and workforce capable of providing patient-centered and population-centered medicine. A model of this kind of educational reform is the Genes to Society curriculum at the Johns Hopkins University School of Medicine. Reforms like this are needed if healthcare providers are to keep pace with healthcare demands and lessen the inequality of national and global health.

Introduction

Medical education encompasses undergraduate pre-health studies, medical school, internship, residency, fellowship, and continuing medical education for established physicians. In addition, it comprises schools of nursing and public health, both of which are essential for the proper delivery and integration of healthcare. These healthcare providers, their patients, and all those in between make up the modern medical system, arguably the most important new institution in the world. Lifespan doubled during the 20th century, but not without costs: combined world health expenditures for 2007 were an astonishing US$5.3 trillion.1 However, expenditures for health professional education total only about US$100 billion per year, less than 2% of the total. This figure is strikingly low, considering that medicine is a labor-intensive and talent-driven industry. The lack of proper attention given to medical education becomes obvious when looking at spending allocations, as well as at the tremendous gaps in health and healthcare availability within and between countries.2 It can therefore be surmised that medical graduates, though exquisitely well trained, are not as well equipped as they could be to provide proper healthcare services to the population. No one can deny that medical education in the 20th century integrated science, technology, and medicine, resulting in tremendous advances in the understanding and treatment of disease. This evolution created a culture of highly trained, highly educated physicians whose knowledge greatly surpassed that of the layperson, fostering an environment that was provider-focused rather than patient-focused. Continued reform is therefore needed for the 21st-century physician-scientist. Physicians must be taught humanistic reasoning so that they can integrate all aspects of medicine (genes, the environment, society, and technology) to provide patient-centered and population-centered care in a more holistic fashion. Allotting more money to health education is undoubtedly needed, but it is not the only reform to be called for. The health education system itself must be reformed to give medical graduates the ability to integrate scientific and humanistic reasoning, mobilize knowledge globally through teamwork, and engage in critical analysis and ethical conduct to alleviate the global and national gaps in healthcare. Some models of this reform have already been put into action, such as the Genes to Society curriculum at the Johns Hopkins University School of Medicine, but more widespread change is needed to sustain healthcare progress.





Where We Were

Just over 100 years ago, in 1910, Abraham Flexner published his groundbreaking report, “Medical Education in the United States and Canada.”3 This 364-page manuscript provided the first in-depth and comprehensive view of the status of medical education in North America (the US and Canada) at the time. Several important findings came out of this report, the first being that America had the widest gap between its “best” and “worst” physicians. For at least 25 years, there had been an overproduction of uneducated and poorly trained medical practitioners. American Medical Association President Eliot famously said that “[anybody could] walk into a medical school off the street […] [and many] could barely read and write.”4 Many low-level medical schools profited from commercial advertising and the recruiting of industry workers, despite producing inadequate physicians. On the other hand, some high-level schools, such as Johns Hopkins University, held their physicians to appropriately high standards. Accordingly, another important finding from the Flexner report, one that better explains the first, is that there was no uniform basis of medical education.

Abraham Flexner13

Flexner observed that the 1910 American medical system was inefficient, subpar, and headed in the wrong direction compared with older systems such as Germany’s. In his report, he provided an enduring philosophical foundation to set the medical system on the right track, starting with educational reforms. He vehemently defined medicine as a public service rather than a tribal cult with no duty to society. He rightly held that physicians needed to train under a uniform basis of education that helped them master the modern resources available and bring those resources to those in need. Flexner argued that modern preliminary and professional education must include the recent advances in the sciences upon which medicine was built, and must extend education into the laboratory.
Ultimately, success depended on the complete control of a hospital and laboratories by a medical school: a teaching hospital.3


Where We Are

The reforms sparked by the Flexner report integrated modern science into a new, uniform, and improved curriculum of university-based medical schools, equipping health professionals with knowledge that propelled America to become one of the great countries of academic medicine during the 20th century; teaching hospitals in America are still unparalleled. Now, with the turn of the 21st century and the ushering in of healthcare reform legislation that pushes for individualized medicine and healthcare for all, it is time to look back at Flexner’s original report and see how American health education has both succeeded and fallen short over the past century. Most critics and media focus on the successes, which are innumerable. Medicine is more technologically and scientifically advanced and integrated than Flexner could ever have imagined. Translational medicine, from bench to bedside, has taken off and will almost certainly yield immense victories in the high-powered fields of oncology, neurology, cardiology, and others. However, the medical system is lacking in a subtler sense, one present in every corner of this country and across the globe: a lack of patient-centered care.

Dr. William Osler7

The system is largely fragmented by rigid tribalism among specialties and professional fields, creating an environment that is provider-centered and not conducive to teamwork.5 Upon deeper examination of his 1910 report, it is evident that Flexner not only called on physicians to be clinical scientists but also believed they should be humanists: those who understand and advocate the importance of “human nature.” It is this latter attribute that new paradigms of education must seek to instill in their graduates. Flexner wrote, “[the physician must have] insight and sympathy on a varied and enlarging cultural experience […].”3 In other words, the physician must be culturally experienced and humane.
As reanalyses of Flexner’s report have pointed out, in order to facilitate humanistic practices in medicine, medical schools must strive to integrate scientific reasoning with the humanities.6




Figure 3: Contextual framework of GTS curriculum.8

The patient phenotype is in the center, comprising a continuum that is influenced by genes, society, the environment, and technology. *Indicates newly established fields in medical curricula.

A likely influence on Flexner’s call for humanistic skills in medicine was Sir William Osler, a prominent physician and educator in the late 19th and early 20th centuries. Osler, a cofounder of the Johns Hopkins University School of Medicine, believed that medical practice was disadvantaged by “the lack of historical insight, the rift between science and humanity, and the alienation of technology advancement and humanitarianism.” There has clearly been a long relationship between humanism and medicine, dating back to the time of Aristotle and Socrates. Somewhere along the path of history, medicine began to shed its humanistic qualities, and in the 20th century it became completely overshadowed by science, with its mechanistic materialism and existentialism.7 However, certain physicians, like Osler, were aware of the problem and continued to advocate for change, keeping a spark of humanism alive in medicine. With better national and global surveillance of healthcare, it is now well documented that enormous gaps exist in health and healthcare availability within and between countries. The lack of humanistic and holistic approaches to medical education is at least partly to blame: graduates are unable to mobilize their tremendous knowledge into the necessary humanitarian context. In recent years, there has been a strong push within academic medical centers to reform medical curricula to integrate humanism and science, producing a synergistic effect on graduates that will facilitate teamwork, ethical argument and analysis, observational skills in the fine arts, and historical insight, and prepare them for a larger, integrated system of “individualized” medicine.6 Johns Hopkins University has always been a leader in healthcare innovations ranging from procedures to

biological understanding to medical practice reform. It is only fitting that this institution, once hailed as an example of proper medical education by Flexner and home to the great William Osler, is leading the way in medical curriculum reforms with its “Genes to Society” program.

Genes to Society and Where to Go From Here

Nearly a century after the Flexner report, and amid recent pushes toward another change in medical education brought about by the genomic revolution and technological advances, the Johns Hopkins University School of Medicine Curriculum Reform Committee was formed, charged by Dr. David Nichols. This committee strategically planned a new medical education curriculum based on the principles of biologic and environmental individuality (figure 3). In August 2009, the Johns Hopkins University School of Medicine implemented its “Genes to Society” (GTS) curriculum. The goal of the curriculum is to provide a framework that can “encourage medical students to explore the biologic properties of an individual’s health in the light of a larger integrated system that also includes social, cultural, psychological, and environmental variables.”8-10 In other words, the graduates of this program will have the tools to be clinical scientists and humanists, both prerequisites for providing patient-centered and population-centered care. The idea of chemical individuality, proposed by Sir Archibald Garrod in 1902, underlies the heterogeneity of patients seen by physicians.11 With the advent of the human genome project, Garrod’s ideas have been corroborated, and they now motivate the GTS curriculum. The teaching of this “P4 medicine” (predictive, preventive, personalized, and participatory)12 allows graduates to become physicians and scientists who take into account the ongoing revolutions of scientific discovery, public accountability, and societal health when providing care to the patient population.

Figure 4: Schematic of healthcare reform and outcomes. This figure depicts the interdependency of each aspect of health reform, starting with educational reform. Ultimately, educational reform will lead to well-equipped healthcare providers who can provide patient-centered and population-centered medicine to alleviate health disparities.

The GTS curriculum is a model for the foundation of scientific and clinical career development for future physicians. Eventually, all medical schools will need to adopt a similar curriculum in order to be successful in the 21st century and beyond. The playing field of medicine is changing drastically, and physicians are often behind the curve. Medical school curriculum reforms like GTS are needed to give the health system the ability to mobilize knowledge and engage in critical reasoning and ethical conduct to alleviate the global and national gaps in healthcare. Additional funding, administrative support, and greater awareness are needed in order to fundamentally change the way medicine is taught in this country and globally, thereby changing clinical practice. A working model of this medical education reform for the 21st century is portrayed in figure 4. It is important to remember that medicine is ultimately an art, a way to show human decency, and a responsibility in a civilized society.7 Medical education reform can provide the world with healthcare providers who incorporate both scientific and humanistic principles into treating patients and, in doing so, are truly able to better society.


References:
[1] World Health Organization: National Health Accounts. [Internet]. [Updated 2011; cited March 8, 2011]. Available from: http://www.who.int/nha/en/index.html.
[2] Frenk J, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. The Lancet. 2010 Dec 4;376.
[3] Flexner A. Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching. The Carnegie Foundation for the Advancement of Teaching, Bulletin Number Four, 1910.
[4] Eliot. The American Medical Association Bulletin. Vol. 3, No. 5, p. 262.
[5] Horton R. A new epoch for health professionals’ education. The Lancet. 2010 Dec 4;376.
[6] Doukas DJ, et al. Re-visioning Flexner: educating physicians to be clinical scientists and humanists. The American Journal of Medicine. 2010 Dec;123(12).
[7] Lang J. Chinese medical heritage: the philosophy and humanism of medicine. Chinese Medical Journal. 2011;124(2):318-320.
[8] Wiener CM, et al. “Genes to Society”—the logic and process of the new curriculum for the Johns Hopkins University School of Medicine. Academic Medicine. 2010 Mar;85(3).
[9] Kleinman A, et al. Culture, illness, and care: clinical lessons from anthropologic and cross-cultural research. Annals of Internal Medicine. 1978;88:251-258.
[10] Choudhry NK, et al. Systematic review: the relationship between clinical experience and quality of health care. Annals of Internal Medicine. 2005;142:260-273.
[11] Scriver C. The PAH gene, phenylketonuria, and the paradigm shift. Human Mutat. 2007;9:831-845.
[12] Hood L, et al. Systems biology and new technologies enable predictive and preventative medicine. Science. 2004;306:640-643.
[13] Institute for Advanced Studies: Abraham Flexner. [Internet]. [Updated 2011; cited March 8, 2011]. Available from: http://www.ias.edu/people/flexner.


humanities

The Importance of British Abolitionism

Anisha Singh, Class of 2012, International Studies

As the September 2010 Millennium Development Goals Review Summit comes to a close, academics and politicians are questioning whether there has been enough progress since the goals were proposed at a United Nations meeting in 2000. World leaders gathered in New York to focus attention on eight aspects of international concern in order to achieve a common goal: eradicating extreme global poverty. The initial areas of focus were to reduce hunger; increase access to education, healthcare, maternal health, and HIV treatment; increase environmental sustainability; and create a global partnership. The hope was that if these goals were accomplished, an entire generation would be raised out of economic duress, thereby increasing their health, education, and earning potential. International leaders have commended the global collaboration and commitment to helping humanity, but critics have remained skeptical of any tangible outcomes. To achieve the United Nations Millennium Project, the Summit asked “rich nations” to commit 0.7% of their Gross Domestic Product to Official Development Assistance. According to these figures, in 2009 the United States should have spent 10 billion dollars on the Millennium Development Goals; Japan should have donated 2.9 billion; and the United Kingdom, 1.5 billion. In today’s day and age, is it possible for the Millennium Development Goals to be met? Is it possible for nations to contribute such significant amounts of money, with no reward, in order to help humanity as a whole? Seemingly philanthropic humanitarian acts are often a side effect of, or a cover for, an underlying goal of advancing the interests of individual nations. If there is no material, strategic,

or political gain in a situation, countries rarely take action in multinational matters. History shows that nations work to promote their own wellbeing, so what allows leaders to believe that the Millennium Development Goals will ever be achieved? Modern international altruism is possible. Although instances are rare, the world does have a historical example of this sort of altruism: the abolition of the British slave trade. Between 1806 and 1867, the British government outlawed slavery throughout its empire and enforced the abolition of the Atlantic slave trade. While this process was extremely costly for the British Empire, both economically and politically, it saved millions of African lives and prevented the enslavement of future generations. In addition, the abolition of the slave trade prompted the cultural and economic growth of African societies. The British abolition of the slave trade can be viewed as a model of altruism; this historical example should be considered when deciding whether or not to enact policies that benefit humanity but do not necessarily advance individual nations’ interests.

History of the British Involvement with the Atlantic Slave Trade

British involvement with the Atlantic slave trade began during the mid-1600s, when the British, along with the French and the Dutch, settled a number of islands in the Caribbean. The slave economy of the Caribbean quickly boomed following the introduction of sugar as a cash crop. Slave importations grew each decade, and throughout the eighteenth century slavery played an integral role in the British economic system. The slave trade employed



hundreds of British sailors, captains, and merchants, and provided cheap labor to the West Indies, where British citizens ran extremely profitable sugar plantations. The value of British West Indian sugar production grew to make up approximately four percent of Britain’s national income between 1805 and 1806. Plantation owners in the West Indies were able to make substantial profits from the importation of cheap slave labor. The benefit of cheap labor, however, came to a halt in the early nineteenth century. In 1807, the British Parliament passed the Act for the Abolition of the Slave Trade, which prohibited British citizens from participating in the Atlantic slave trade. Two decades later, Parliament passed the Slavery Abolition Act of 1833, which abolished slavery itself throughout the British Empire. The British, however, did not stop at eliminating the slave trade and slavery within their own lands. For various political, economic, and social reasons, the British government extended its policies to the other slave-trading nations of Europe. Following the end of the slave trade in Britain, Parliament worked to suppress the remaining slave trades around the world. In the half-century that followed, the British government spent a large amount of money and resources in efforts to suppress the Atlantic slave trade. Given the financial losses Britain incurred in the process of abolishing the slave trade and domestic slavery, in addition to the lack of material or political gain, Britain’s actions can be considered a form of international altruism.

British Losses

The British West Indies experienced the greatest hardship following Britain’s abolition of slavery. Without slaves, the plantation owners could no longer rely on free labor, a main input into the Caribbean goods market. Subsequently, the profits of British West Indies plantation owners decreased, increasing the price of goods back home and decreasing consumption and real income levels in Britain. In the years following the abolition of the slave trade, sugar production in the West Indies fell by about twenty-five percent, and Britain’s share of world sugar production fell from sixty percent in the early 1800s to fifteen percent by the end of the decade. Without slaves to tend these labor-intensive crops, the cost of production increased dramatically for the plantation owners, and they were unable to produce the levels of sugar previously attainable with slave labor. In addition, Britain’s continued efforts to suppress the international slave trades were costly and dangerous. The overall suppression of the slave trade also put many people out of work. Journeys between Europe, Africa, and the Caribbean decreased, putting slave captains, sailors, and crews out of work. In addition, investors were no longer able to invest in British slaving trips, and insurance companies were prohibited from insuring slave ships. The British Empire also lost potential future profits from underdeveloped slave colonies such as Jamaica, whose new, unconquered land contained more farmable land than the rest of the British colonies combined. When the British abolished the slave trade, other countries were able to increase their market share of Caribbean goods. Britain’s retreat from sugar and coffee production left a vacuum in the supply of these profitable goods. Since demand remained strong, other nations were able to enter the market. In the thirty-five years following abolition, production in slave economies similar to the British West Indies rose by two hundred and ten percent. Other colonies, such as Brazil and Cuba, became competitors in these previously British-dominated markets. Britain’s relative profits and market share decreased dramatically, further harming the British economy. In several key ways, the abolition of the British slave trade and slavery affected many who were not directly involved with the trade. The decrease in sugar production led to higher sugar prices for British citizens. Sugar had slowly become a staple of the British diet; between 1785 and 1805, sugar consumption rose by eighty percent. In addition, with the 1833 abolition of slavery, Parliament compensated slave owners for their property losses. This compensation resulted in a British national debt of over twenty million pounds (approximately two billion dollars in 2008 terms), which was quickly passed to citizens in the form of increased taxes. In order to abolish slavery abroad, the British government signed a number of treaties abolishing the slave trade in individual countries, which granted Britain the “right to search” those countries’ ships on the high seas. To enforce these treaties, the British established the Royal Navy’s West Africa Squadron in 1819.
Between 1819 and 1869, the Squadron patrolled the seas in search of ships carrying illegal slaves. The British also expanded their Caribbean and South American squadrons to conduct missions relating to slavery. Lastly, they set up a number of courts and commissions within Africa to try those who traded in slaves illegally. Death rates on all these missions were high, and approximately five thousand British citizens died in the effort to suppress the slave trade. The British also bribed many nations to suppress their slave trades. In July of 1817, under coercive pressure, Portugal signed a “right to search” agreement with Britain. Later that year, the British signed a treaty with Spain that immediately abolished the slave trade in the northern Atlantic and promised total abolition of the trade by 1820. In return for abolition, the British paid the Spanish a cash bribe. In the 1820s, the British government worked with Cuba and Brazil to stop the slave trade, concluding agreements in 1820 and 1830, respectively. Finally, the British pressured other significant powers, such as the United States and France, to eliminate their own


slave trades and help enforce international abolition. These transactions cost Britain political capital as well as financial resources. Economists have tried to estimate how large an effect the suppression effort had on the British economy, although the losses varied depending on the group of British citizens considered. Overall, the suppression effort itself cost British metropolitan society nearly two percent of its gross domestic product (GDP) per year between 1808 and 1867. To put this in perspective, in 2006 the United States federal government spent the same percentage of its GDP on domestic welfare programs.

The Importance of the British Example

The British Navy’s West Africa Squadron and other naval patrols were able to seize just under two thousand ships and free nearly two hundred thousand slaves. The British West Indies no longer received their supply of free labor each year, and other countries were forced to decrease their slave imports as well. Although it is unclear how much longer the slave trade would have continued without British intervention, the British government sped up the process and forced other nations to adhere to British policies regarding the slave trade. Thousands of lives were saved, as were those of thousands of potential future slaves: the sons and daughters of those spared enslavement because of the British model of altruism. The most important use of the British abolition of slavery is as a model for contemporary policy-makers and ordinary people. Academics and human rights activists turn to this example to highlight the importance of their own causes. In a speech to the British Parliament in 2007, former United Nations Secretary-General Kofi Annan compared abolition to the modern human rights movement: “The abolition movement was the first campaign to bring together a coalition against gross violations of human dignity... The slave trade was eventually abolished because thousands of people examined their own consciences, and took personal responsibility for what was happening around them. We must approach today’s abuses in the same spirit...” Annan then discussed modern-day slavery and how countries must continue to fight against current injustices. In a world with so few historical examples for humanitarians to look back on, the abolition of the British slave trade and slavery is an extremely important tool for directing the future of the human trajectory.
Can an adequate comparison be made between the abolition of the British slave trade and the humanitarian problems of today, such as widespread poverty and lack of access to health and education? There are, of course, major differences. The slave trade was a specific practice conducted by individuals and sanctioned by government policies. Humanitarian problems of today are much more complex, and they cannot be attributed to a specific group of individuals or countries. Although many differences can be found, many more parallels can be drawn between this historic example and modern-day humanitarian concerns. The fundamental problems remain across time: as in the case of the Atlantic slave trade and slavery, people today are being denied their fundamental human rights. They are constrained by their lack of education, healthcare, and finances. They do not have the freedom to direct their own futures, and are weighed down by the chains of economic oppression. In order to stop these injustices, countries must devote money, time, and manpower to the

cause. The countries that are capable of helping will not receive monetary benefits from solving these problems. Most importantly, if plans like the Millennium Development Goals are carried out, the lives of millions of individuals could be positively changed. Their children, and their children’s children, will in turn benefit from their education, health, and capital as well, and the cycle of poverty can be broken. In the same way that our generation only reads about the Atlantic slave trade, future generations may only imagine a world in which children are denied basic nutrition and education. The tragedies of today, and the process of eliminating them, can be the history of tomorrow. History can repeat itself in a positive way, and a modern-day injustice can be stamped out. The British elimination of their slave trade and slavery, and their suppression of international slavery, provide an excellent model for assessing the importance and feasibility of achieving humanitarian goals. It reduces the excuse for inaction, and provides hope and motivation for those willing to help others. Through their example, it is clear that governments are able to mobilize for humanitarian purposes and create positive change for the trajectory of mankind.

References

1 Anstey, Robert T. “Capitalism and Slavery: A Critique.” The Economic History Review 21.2 (1968). Print 2 “British Abolitionism, Moral Progress, and Big Questions in History.” Boston University. Web. 13 Nov. 2010. <http://www.bu.edu/historic/london/duffy.html>. 3 “CIA - The World Factbook.” Welcome to the CIA Web Site — Central Intelligence Agency. Web. 01 Nov. 2010. <https://www.cia.gov/library/publications/the-world-factbook/>. 4 Clemens, Michael A., Charles J. Kenny, and Todd J. Moss. “The Trouble with the MDGs: Confronting Expectations of Aid and Development Success.” Center for Global Development 35.5 (2004): 1-40, A1 - A14. Print. 5 Dresher, Seymour. “Eric Williams: British Capitalism and British Slavery.” History and Theory 26.2 (1987). 193. Print. 6 Eltis, David. Economic Growth and the Ending of the Transatlantic Slave Trade. New York: Oxford UP, 1987. Print. 7 “Federal Spending, State and Local Public Spending for 2006 - Charts.” Federal State Local Public Spending United States 2010 - Charts Tables History. Web. 01 Dec. 2010. <http://www. usgovernmentspending.com/year2006_0.html>. 8 Fogel, Robert William., and Stanley L. Engerman. Without Consent or Contract: the Rise and Fall of American Slavery. New York: Norton, 1989. Print. 9 Gerson, Michael. “Unchained by Idealism.” Washington Post - Politics, National, World & D.C. Area News and Headlines - Washingtonpost.com. 20 June 2007. Web. 21 Nov. 2010. <http:// www.washingtonpost.com/wp-dyn/content/article/2007/06/19/AR2007061901737.html>. 10 “Global Partnership | End Poverty 2015.” End Poverty 2015 | We Are the Generation That Can End Poverty. Web. 01 Nov. 2010. <http://www.endpoverty2015.org/en/goals/global-partnership>. 11 Kaufmann, Chaim D., and Robert A. Pape. “Explaining Costly International Moral Action: Britain’s Sixty-year Campaign Against the Atlantic Slave Trade.” The International Organization Foundation and Massachusetts Institute of Technology: 631 - 668. Print. 12 Klein, Herbert S. 
The Atlantic Slave Trade. 10th ed. Cambridge: Cambridge UP, 2007. Print. 13 Oldfield, John. “BBC - History - British History in Depth: British Anti-slavery.” British Broadcasting Center. 15 Oct. 2010. Web. 01 Nov. 2010. <http://www.bbc.co.uk/history/british/ empire_seapower/antislavery_01.shtml#two>. 14 Paterson, Graham “Alan Greenspan Claims Iraq War Was Really for Oil - Times Online.” The Times | UK News, World News and Opinion. 16 Sept. 2007. Web. 01 Nov. 2010. <http://www. timesonline.co.uk/tol/news/world/article2461214.ece>. 15 “Speech by Kofi Annan on Human Trafficking at the UK Houses of Parliament.” United Nations Office on Drugs and Crime. Web. 13 Nov. 2010. <http://www.unodc.org/unodc/en/aboutunodc/speeches/speech_2007_05_08.html>. 16 “The 0.7% Target: An In-Depth Look.” UN Millennium Project. Web. 01 Nov. 2010. <http:// www.unmillenniumproject.org/press/07.htm>. 17 United Kingdom. Parliament of 1807. An Act for the Abolition of the Slave Trade. Print. 18 Williams, Eric. Capitalism and Slavery, The University North Carolina Press, Chapel Hill, NC, 1994. Print

33


humanities


Environmental Insecurity in Asia:

A Geopolitical Interpretation

Janelle Wen Hui Teng, Class of 2014 Krieger School of Arts and Sciences




Introduction

Geopolitical concerns over environmental threats and resources are among the most pressing and urgent issues confronting humanity today. An understanding of political geography provides a useful framework for examining the relationships between three critical concepts: sovereignty, security, and environment. The first two concepts relate directly to the world political map, while the last deals with human perception of natural processes, ecological spaces, and global environmental issues. The concept of "environmental security" was first introduced in 1987 during the 42nd Session of the United Nations General Assembly.1 Environmental security is defined by the United Nations Millennium Project as the environmental viability for life support, consisting of three sub-elements:2

1. Preventing or repairing military damage to the environment
2. Preventing or responding to environmentally caused conflicts
3. Protecting the environment due to its inherent moral value

Using a geopolitical analytical frame, this paper aims to investigate how environmental insecurity, from the reference point of both the individual and the nation-state, can cause social, political, and violent conflict. This paper also investigates the corresponding relationship: how conflict can itself be a fundamental cause of environmental insecurity. Particular focus is given to environmental security issues in Asia since the 1950s, after the continent was rudely awakened to the concept of 'environmental security' through the atomic bombings of Hiroshima and Nagasaki.

1. Re-evaluating the Impacts of Conventional Wars

1.1 Environmental & Biological Warfare

Environmental warfare is defined as the manipulation of the environment for hostile and military purposes.3 Biological warfare refers to the use of microorganisms or toxins to induce death or disease in plants, humans, or animals.4 The 'ecocide,' or large-scale destruction of flora and fauna and their natural habitat, and the pollution in certain Asian countries caused by violent conflict serve as lasting legacies of how conventional wars are a fundamental cause of environmental insecurity.5 The Vietnam War (1955-1975) is historically unique, as it was the first war in modern history in which intentional and indiscriminate environmental disruption formed a significant portion of the warfare strategy.6 It was also the first war in which large quantities of anti-plant chemical agents were used; more than 72 million liters of such agents, containing approximately 55 million kilograms of active herbicidal ingredients, were deployed by the United States.7 The American military strategy employed in South Vietnam to counter the Viet Cong insurgency included the destruction of forests (to reduce cover and freedom of movement), destruction of local crops and other resources (to reduce the supply of resources), relocation of indigenous civilians (to reduce recruitment and manpower support), and disruption of supply lines from surrounding countries.8 The US attempted to carry out this strategy through various means, such as the large-scale expenditure of munitions (bombs and shells), release of herbicides, use of heavy land-clearing tractors, and even cloud seeding.9 This strategy, employed for over a decade, not only caused permanent and irreparable damage to the natural environment, but also had long-lasting negative implications for the human ecology of South Vietnam. By the end of the war, in the areas of South Vietnam not physically under US control, approximately 300 million kilograms of agricultural crops and food had been destroyed, about 20.2 million cubic meters of timber had been lost, and 30 percent of rubber plantations had been destroyed.10 The destruction of vegetation and the natural landscape also resulted in the loss of natural habitats (especially the mangrove estuaries), disruption of fragile ecological equilibria, flash floods, contamination of soil and water sources, death of wildlife (such as the tarpon Megalops cyprinoides of the Mekong Delta region and the highly endangered kouprey), and the onset of disease among domesticated animals and livestock.11 The Vietnam War directly resulted in 1.8 million human deaths, 3.8 million serious injuries, and 17 million refugees throughout Indochina.12 The US herbicidal warfare program "Operation Trail Dust," carried out from 1962 to 1971, caused widespread environmental insecurity that lasted well after the war ended. It is estimated that over 4.8 million Vietnamese were exposed to Agent Orange, one of the major anti-plant chemical agents used by the US, resulting in 400,000 deaths or disabilities and 500,000 children born with birth defects.13 Today, many still suffer from health problems resulting from exposure to Agent Orange, including mental disabilities, hernias, extra limbs, and high levels of dioxin in blood and breast milk.14 Residents living in "hotspots," such as Binh Hoa, suffered the worst of these effects.15

1.2 Military Pollution

Pollution stemming from military operations accounts for approximately 6 percent of the world's environmental pollution.16 The key functions of the military (such as ensuring national security) give it a unique relationship with the state, which oftentimes allows the military special privileges to avoid regulation and accountability with regard to environmental issues.17 Military pollution can thus become a source of environmental insecurity. Within the United States alone, there are approximately 15,000 contaminated sites on about 1,600 US military bases.18 Activities of the US military have also contributed to pollution on bases located beyond American borders. It was reported that US military operations at the Subic Bay Navy Base in the Philippines caused large-scale pollution of the area through the release of untreated sewage and wastewater directly into the bay, the discharge of toxins such as benzene and heavy metals into the soil, and the direct release of untreated pollutants into the air from power plants at the navy facility.19 From 2000 to 2003, approximately 38 deaths resulted from pollution of the Subic Bay area.20 Toxic waste was also disposed of in the ground at nearby Clark Air Base, contaminating land and waterways with fuel, asbestos, and metals.21 Filipinos who stayed in the areas adjacent to the base after it closed suffered from health problems such as skin disease, stillbirth, cancers, and heart ailments linked to exposure to the toxic pollutants.22 Approximately 76 deaths and 68 cases of illness occurred due to the military pollution of Clark Air Base.23 The base-closing agreements signed between the US and Filipino governments cleared the US of any responsibility for environmental degradation caused by military operations, and, to date, no large-scale action has been taken to clean up the affected areas.24




2. Reinterpreting Resource Wars

Since the 1990s, the scramble for control of and access to resources has been seen as one of the most distinctive features of the global security environment.25

2.1 Resource Scarcity as a Result of Climate Change

The environmental insecurity created by climate change, such as changing weather patterns and reduced availability of resources, can increase the possibility of conflict in Asia.26 Actual changes in climate and weather patterns may directly trigger social conflicts. Barnett and Adger see East Timor as a potential area for such conflicts, where changes in climate, such as less rainfall in the dry season, may result in a food crisis which, in the social and economic context of the nation, can lead to violent conflict.27 Furthermore, as climate change continues and the competition to secure resources intensifies, environmental insecurity may be created through large informal sectors such as the smuggling of resources. In 2007, the Indonesian government banned the export of sand and soil. A black market for these resources has nevertheless emerged, in part due to the increased sea reclamation projects of Indonesia's neighboring countries, such as Thailand and Singapore (some of these projects were started to counter the effects of climate change).28 Water pollution, destruction of habitats, and the disappearance of whole islands have resulted from illegal soil extraction.29 Such environmental insecurity not only undermines the safety and livelihood of Indonesians living near the exploited areas, but also undermines national security by encouraging the corruption of local leadership and the proliferation of illegal activities.30

2.2 Control for Resources Fuels On-Going International Feuds

Conflicts over resources serve to fuel on-going disputes or military feuds between nations, such as the riparian disputes in Asia. For instance, the construction of the Farakka Barrage on the Ganges has served as yet another bone of contention in the already tumultuous relationship between India and Bangladesh. Bangladesh views the dam as a strategic denial of access on India's part, as the dam cuts off Bangladesh's water supply and has resulted in numerous environmental effects, including raised salinity levels in water and contaminated fisheries.31 In another example, territorial claims over the oil-rich South China and East China Seas involve many different Asian nations. These disputes threaten to destroy the fragile political fabric of Asia and have worsened on-going political conflicts between China and its neighbors, especially Taiwan and Japan.32

2.3 Intra-national Social Conflict Over Resource Distribution

Contention over control and ownership of resources can serve to trigger regional conflicts and inter-state rivalry within a nation.


For instance, the construction of the Three Gorges Dam in China has resulted in a regional dispute between the municipal government of Chongqing and the provincial government of Hubei over control and revenue distribution of the water resources and hydroelectric power generated by the dam.33 The antagonism is further exacerbated by the fact that most of the areas flooded during the construction of the dam were in upstream Chongqing, which resulted in large-scale displacement of its residents and disruption to its ecological systems.34 Ecological marginalization within a nation also provides potential flashpoints for conflict by driving ordinary people to resort to violence as a means of securing resources. For instance, in the Philippines, a significant inequality in the distribution of resources (especially arable land) exists between rich landowners and poor peasants. As a result, many of the landless poor were forced to migrate to upland regions.35 The increasing population density in the upland regions corresponded with increased cases of erosion, landslides, and flash floods, as slash-and-burn agriculture is often employed in the area.36 These events fuelled the cycle of poverty and drove many peasants to join the communist New People's Army insurgency as a means to obtain resources and express dissent over their plight.37

3. Redefining Sovereignty

3.1 Renewed Claims to Space

Environmental insecurity in today's paradigm has triggered conflicts over sovereignty through renewed claims to space. Due to global warming, much of the ice that previously clogged the Northwest Passage, a sea route through the Arctic Ocean, has melted, making the passage more navigable. The newly accessible Arctic shelf and its untapped oil and gas deposits have driven the five Arctic states (Canada, Norway, Denmark, Russia, and the United States) to make claims on extended continental shelves so that they can declare sovereignty (through the Law of the Sea Treaty) and harvest these resources.38 Even Asian countries with no legal claim to the Arctic are thinking of ways to stake one, forging strategic alliances to secure a share of these precious resources.39

3.2 Territorial Sovereignty

Environmental insecurity makes man-made national boundaries more vulnerable and calls into question the definition and relevance of territorial sovereignty. The 1972 Declaration of the United Nations Conference on the Human Environment (Stockholm Conference) sought to link national sovereignty with environmental protection. Under Principle 21 of the Declaration: "States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction."40



In today’s interconnected world, where the flap of a butterfly’s wings in one hemisphere can engender a storm in the other, it may not be possible for nations to fulfill the obligations of Principle 21. The Southeast Asian Haze of 1997, caused primarily by “slash and burn” cultivation in Indonesia, is one such example of how the Declaration’s concept of sovereignty seems to be oversimplified. Though Indonesia was acting within its jurisdiction and sovereign right to exploit its own nation’s resources, the widespread air pollution clearly caused damage to the environments of Indonesia’s neighboring countries, including Singapore, Malaysia and Brunei.41 Defining sovereignty with regard to current environmental issues is a multifaceted problem as a result of the possible infringement on the environmental security of other nations. Furthermore, as climate change continues, environmental insecurity threatens to directly change the world political map. In China alone, 46 cities are sinking, including Shanghai.42 Rising sea levels also threaten to engulf the sinking city of Bangkok such that Thailand’s capital may have to be abandoned by the middle of the 21st century.43

Conclusion

There are two kinds of conflicts linked to environmental threats: the first is conflict directly caused by environmental insecurity, and the second is conflict in which causing environmental insecurity is the ultimate objective. The most worrying observation is that environmental insecurity can work in tandem with existing social determinants, such as marginalization and poverty, to significantly undermine human security. Environmental security issues thus warrant the attention of governments and should be categorized as 'traditional security' matters, as they threaten the interests of the nation and the individual in the same way other 'traditional security' threats do.

References

1. Kakonen J. (1990, 3-7 July). The Concept of Security: From Limited to Comprehensive. Paper presented at the 25th Annual International Peace Research Conference, Groningen, p. 15.
2. O'Connor T. (2010, 16 August). Environmental Security. MegaLinks in Criminal Justice. Retrieved from http://www.drtomoconnor.com/2010/2010lect05.htm. Accessed 24 January 2011.
3. Westing A. (1984). Environmental Warfare: An Overview. In Westing A (Ed.), Environmental Warfare: A Technical, Legal and Policy Appraisal. London: Taylor and Francis.
4. Poupard JA and Miller LA. (2004). Biological Warfare. In Schaechter M and Lederberg J (Eds.), The Desk Encyclopedia of Microbiology. Elsevier.
5. Westing A. (1979). Ecological Consequences of the Second Indochina War. Sweden: Almqvist & Wiksell International.
6. Westing A. (1980). Warfare in a Fragile World: Military Impact on the Human Environment. London: Taylor & Francis Ltd.
7. Buckingham WA Jr. (1982). Operation Ranch Hand: The Air Force and Herbicides in Southeast Asia 1961-1971. Washington: US Air Force Office of Air Force History; Cecil PF. (1986). Herbicidal Warfare: The Ranch Hand Project in Vietnam. New York: Praeger; Westing A. (1989). Herbicides in Warfare: The Case of Indochina. In Bourdeau P, Haines JA, Klein W and Murti CRK (Eds.), Ecotoxicology and Climate. New York: John Wiley & Sons Ltd.
8. Westing A. (1979). Ecological Consequences of the Second Indochina War. Sweden: Almqvist & Wiksell International.
9. Westing A. (1980). Warfare in a Fragile World: Military Impact on the Human Environment. London: Taylor & Francis Ltd.
10. Ibid.
11. Westing A. (1980). Warfare in a Fragile World: Military Impact on the Human Environment. London: Taylor & Francis Ltd.
12. Ibid.
13. Westing A. (1989). Herbicides in Warfare: The Case of Indochina. In Bourdeau P, Haines JA, Klein W and Murti CRK (Eds.), Ecotoxicology and Climate. New York: John Wiley & Sons Ltd; York G. and Mick H. (2008, 12 July). 'Last Ghost' of the Vietnam War. The Globe and Mail.



14. Denselow R. (1998, 3 December). Agent Orange Blights Vietnam. BBC News; Le KS. (2004). Agent Orange in the Vietnam War: Consequences and Measures for Overcoming It. In Furukawa H, Nishibuchi M, Kono Y and Kaida Y (Eds.), Ecological Destruction, Health, and Development: Advancing Asian Paradigms. Kyoto University Press and Trans Pacific Press.
15. Sewell H. (2001, 30 December). Agent Orange Hotspots Located. BBC News.
16. Westing A. (1984). Environmental Warfare: An Overview. In Westing A (Ed.), Environmental Warfare: A Technical, Legal and Policy Appraisal. London: Taylor and Francis.
17. Finger M. (1998). The Military, the Nation State and the Environment. In Ó Tuathail G, Dalby S and Routledge P (Eds.), The Geopolitics Reader. London: Routledge.
18. Renner M. (1991). Assessing the Military's War on the Environment. In Brown L (Ed.), State of the World 1991. New York: Norton.
19. Kemmiya M. (1997). Toxic Waste and a U.S. Base at Subic, Philippines. Retrieved from http://www1.american.edu/ted/ice/subic.htm.
20. Tritten TJ. (2010, 2 February). Decades Later, US Military Pollution in Philippines Linked to Deaths. Stars and Stripes.
21. Soriano ZT. (2001). America's Toxic Waste Legacy in the Philippines. Bulatlat.com, 8.
22. Tritten TJ. (2010, 2 February). Decades Later, US Military Pollution in Philippines Linked to Deaths. Stars and Stripes.
23. Ibid.
24. Ibid.
25. Le Billon P. (2007, March). Geographies of War: Perspectives on 'Resource Wars'. Geography Compass, 1(2), 163-82.
26. Nordås R and Gleditsch NP. (2007, August). Climate Change and Conflict. Political Geography, 26(6), 627-38.
27. Barnett J and Adger WN. (2007). Climate Change, Human Security and Violent Conflict. Political Geography, 26(6), 639-55.
28. Parry RL. (2010, 23 March). The Black Marketeers Stealing Indonesia's Islands by the Boat-load. The Times.
29. Ibid.
30. Ibid.
31. Homer-Dixon TF. (1998). Environmental Scarcity and Mass Violence. In Ó Tuathail G, Dalby S and Routledge P (Eds.), The Geopolitics Reader. London: Routledge.
32. Nordhaug K. Taiwan and the South China Sea Conflict: The China Connection Revisited. Retrieved from http://www.southchinasea.org/docs/Nordhaug.pdf; Hsiung JC. Sea Power, Law of the Sea, and China-Japan East China Sea "Resource War". Retrieved from http://www.nyu.edu/gsas/dept/politics/faculty/hsiung/sea_power.pdf.
33. Chen M. (2009, 17 March). Revenue Disputes over Three Gorges. China.org.cn. Retrieved from http://www.china.org.cn/china/features/content_17457681.htm.
34. Ibid.
35. Homer-Dixon TF. (1998). Environmental Scarcity and Mass Violence. In Ó Tuathail G, Dalby S and Routledge P (Eds.), The Geopolitics Reader. London: Routledge.
36. Ibid.
37. Ibid.
38. Maddox B. (2009, 6 February). Russia Leads Arctic Race to Claim Northwest Passage. The Times.
39. Chang GC. (2010, 9 March). China's Arctic Play. The Diplomat.
40. Westing A. (1984). Environmental Warfare: An Overview. In Westing A (Ed.), Environmental Warfare: A Technical, Legal and Policy Appraisal. London: Taylor and Francis.
41. Kasmo MA. (2003). The Southeast Asian Haze Crisis: Lesson to Be Learned. In Uso JL, Patten BC and Brebbia CA (Eds.), Ecosystems and Sustainable Development IV, Vol. 2. WIT Press.
42. McDonald J. (2000, 28 July). Shanghai Is Sinking. ABC News.
43. Integrated Regional Information Networks. (2010, 13 January). Act Now to Stop Bangkok Sinking, Urge Scientists. IRINnews.org.



humanities


Censorship Beyond the State: Examining the Role of Independent Press in Kenya's Kakuma Refugee Camp

Elizabeth DeMeo, Class of 2012 International Studies

New Media in Stateless Situations

In the citizen-state relationship, the value of free speech is widely acknowledged. In the United States and other democratic nations, countless publications, social movements, and protests have been devoted to protecting this value and preserving it for decades to come. In recent years, new media technologies like Twitter, Facebook, and YouTube have expanded the role of free speech, as they are used to challenge authoritarian regimes across the globe. It is clear, then, that free speech plays a valuable role in a wide range of states. What is less clear, however, is how free speech functions in situations of statelessness: what happens when individuals living outside of conventional states (i.e., refugees) are denied the opportunity to freely express themselves? Is it harder or easier to overcome oppression when the sources of that oppression are not readily visible? How does such censorship function in reality, and how might it be overcome? Since instances of statelessness are relatively limited, such questions are best examined on a case-by-case basis. It is with these questions in mind that this paper examines a recent publication: KANERE: Kakuma News Reflector, a news source created by inhabitants of Kenya's Kakuma Refugee Camp. In tracing the struggles faced by these refugees in creating and running an independent newspaper, this analysis will illustrate how challenges to free speech actually function in situations of statelessness, and offer insight as to how such challenges might be overcome. Although this analysis is introductory and heavily anecdotal, it offers valuable insight into the value of free speech in refugee camps, and functions as a springboard for further questions and research.

Kakuma Refugee Camp: An Introduction

Before examining the development of KANERE, it is necessary to provide a brief overview of conditions at Kakuma Refugee Camp. Located in the Turkana region of northwest Kenya, the camp was originally established in 1992 to house refugees from Sudan. Since then, the camp has expanded, and now houses close to 50,000 refugees from Somalia, Ethiopia, the Democratic Republic of Congo, Eritrea, Uganda, and Rwanda. Technically, the camp falls under the jurisdiction of the Kenyan government, though it is run by the United Nations High Commissioner for Refugees (UNHCR) with the assistance of a variety of international aid organizations. Although tolerable levels of education, shelter, and food are typically available, the camp faces several uniquely challenging conditions in addition to the issues associated with displacement and relocation. For example, the natural environment poses a major challenge, as refugees contend with high temperatures, poisonous creatures, and several endemic diseases. Additionally, refugee life is limited to Kakuma, as travel beyond the camp requires a movement pass from UNHCR and local authorities. For many of Kakuma's inhabitants, it is these


movement restrictions that pose the largest problem: constrained by their inability to return home, but similarly trapped inside the walls of the camp, refugees often feel like hostages, confined to lives in limbo.

History of Free Press in Kakuma

Since the camp's inception, the majority of global information on Kakuma has been disseminated through Western news outlets, and the UNHCR has enjoyed a virtual monopoly on information relayed to donors and aid organizations. Perhaps most notably, global media became interested in the camp following the popularization of the story of Sudan's "Lost Boys," and refugees became accustomed to Western reporters coming to report on their situation. Although such coverage brought international attention to Kakuma, many refugees became frustrated with foreign media that spoke on their behalf, and began to seek an opportunity to speak freely for themselves. In 1993, such an opportunity did, in fact, exist in the form of Kakuma's first printed newspaper. Supported by UNHCR funding and overseen by the Windle Trust of Kenya, the Kakuma News Bulletin (KANEBU) functioned as a primary source of internal communication within the camp. Although marginally successful, KANEBU weakened over the years in somewhat predictable ways; by 2005, many of its reporters had been resettled or repatriated, and printing resources had dwindled. Following the collapse of KANEBU, the journalists remaining in Kakuma persisted on a highly informal and ad hoc basis. Most notably, many journalism clubs operated in schools in Kakuma, functioning as local networks of information sharing as well as informal sources of training.

Establishment of KANERE

In light of this history, the establishment of KANERE can be seen as the result of two critical factors: the continued desire of the journalists to speak for themselves (as opposed to speaking through the UNHCR and Western media), and the lingering remnants of journalism within the camp following the collapse of KANEBU. Although both factors were individually powerful, it took an exogenous shock to the system to propel them towards more concrete action. In the fall of 2008, a U.S. Fulbright Scholar, Bethany Ojalehto, arrived in Kakuma Refugee Camp to provide this momentum. Shortly after her arrival, Ojalehto was approached by refugees seeking to start a new type of newspaper. She quickly came on board, and together, Ojalehto and the refugees laid out a game plan for the new publication. Entitled "KANERE," or "Kakuma News Reflector," the newspaper would provide an outlet for refugees to freely express themselves, with the central goal being "to counter the monopoly on information enjoyed by humanitarian organizations that largely control access to and from refugee camps." To meet this objective, KANERE would differ from previous publications in two important ways. First, it would be designed to share information with the outside world as well as with the refugees themselves; this meant publishing both in print and (for the first time) online, through the creation of a news blog. Second, KANERE would be run


exclusively by the refugees, allowing them complete autonomy over the writing and editing of articles. With these objectives in mind, KANERE was officially born. On December 22, 2008, just a few short months after its conception, the newspaper made its online debut.

Initial Reception

Shortly after publication, KANERE's first issue was declared a success by a variety of organizations in the human rights community. The Humanitarian Futures Programme exemplified the community's response, calling KANERE "an absolutely fantastic example of citizen journalism, empowered by the web, completely changing the game of humanitarian business." In early 2009, KANERE was also highlighted at the International Council of Voluntary Agencies, a global network of non-governmental organizations, placing it on the map for the global aid community to see. With such overwhelming praise and broad recognition, it appeared that KANERE was well poised to meet its goal of advancing refugee free speech through an independent, autonomous voice.

Early Challenges

Despite such success, KANERE did not prosper in the months to come. Instead, the paper languished, faced with a variety of setbacks that culminated in threats to its very existence. Given the initially positive response, what was the force holding KANERE back? A primary force restraining KANERE was the UNHCR itself. Beginning in early 2009, the UNHCR expressed concerns about refugee confidentiality in the publication. In response, KANERE's creators deleted sensitive articles, essentially creating an entirely anonymous publication. Despite these changes, UNHCR criticism persisted, now expressing concern about the unprecedented platform for independent refugee communication offered by KANERE. To UNHCR officials, this speech presented a substantial threat, as it ended the organization's monopoly on information provided to donors and aid organizations. Without the UNHCR filter, refugees were free to speak their minds on all the happenings of the camp, including politics, violence, and (for better or for worse) the actions of UNHCR in Kakuma. In opposing KANERE, the UNHCR indicated that such a risk was a price it was unwilling to pay for the benefit of refugee free speech. Following this shift in attitude, the UNHCR made it abundantly clear through a variety of further actions that its support for KANERE had indeed been withdrawn. UNHCR officials in the camp


began questioning KANERE journalists about their involvement with the paper, and repeatedly ignored editors’ requests for meetings. Although these roadblocks posed significant challenges, the most difficult hurdle arose when KANERE attempted to register as a formal association within Kenya’s Turkana District. While attempting to register, journalists were informed by Kenyan officials that their registration would be contingent on a letter of UNHCR support. At this point, the message was clear: the UNHCR would make every effort to ensure that KANERE did not continue to operate independently.

Advocacy & Calls for Help: A Turning Point?
A turning point for KANERE occurred with the arrival of Dr. Ekuru Aukot, a human rights lawyer and director of a major Kenyan legal aid group. In addition to acting as a galvanizing force for KANERE, Dr. Aukot served as an advocate in meetings with the UNHCR, emphasizing the right of the refugees to express themselves freely in writing. KANERE’s editors also pushed for outside backing from the global community. On January 12, 2009, for example, an article coauthored by Ojalehto was published in Pambazuka Press, calling for support for this “candlelight of hope in the darkness of the refugee camp.” Despite these efforts, tensions remained high between the UNHCR and KANERE’s editors, as the UNHCR remained reluctant to support an independent press in any form. As negotiations continued throughout the year, it became clear that the UNHCR and the Kenyan NGO community would not provide assistance unless KANERE’s journalists and editors submitted to their intervention and supervision. On November 10, 2009, the situation turned violent when the editor of KANERE was assaulted and his house was destroyed. Although the reasons for the attack are technically unknown, journalists at KANERE interpreted it as yet another attempt to suppress the paper.

Current Status
Despite these difficulties, KANERE has persisted and even thrived as a publication. Although it is difficult for journalists to fund, create, and disseminate printed papers within and around the refugee camp, they have successfully released a number of editions online. Presently, the publication can be accessed at http://kakuma.wordpress.com/. It would be premature, however, to declare KANERE a definitive success, as funding difficulties and challenges to internet access in Kakuma continue.




Broader Implications
Beyond a compelling anecdote, the story of refugee journalists in Kakuma provides remarkable insight into the role of free speech in the lives of stateless citizens and journalists. One clear lesson is the critical value of the internet and other new media tools in helping overcome the suppression of such speech. Had the refugees in Kakuma not had access to blogs through internet cafés, it is highly unlikely that KANERE would have persisted with any success. Technological advances alone, however, cannot carry the day: without advocates like Bethany Ojalehto and Dr. Aukot, even the most technologically sophisticated toolkits are unlikely to produce change on their own. More broadly, the story of KANERE offers a fresh perspective on the value of feedback loops in promoting positive change in refugee camps and in breaking up the donor information monopoly enjoyed by NGOs. In a recent article, Dr. Barbara Harrell-Bond, a renowned scholar of refugee studies and longtime advocate of human rights, noted: “In a camp situation real power lies with the ‘lead’ NGO. It usually does not share information about the budget or other resources to be allocated and has absolute power over almost everything within its confines....In the meantime, one way to begin to address the evils of camps is to create feedback mechanisms, which, while beginning to address problems that can be put to rights, will also alert donors to the evils of camps.” Although such analyses are valuable, KANERE’s most important function is perhaps to serve as a jumping-off point for more pointed research. Hopefully, this article provides a springboard for future questions in this area and leads to new insights surrounding the role of free speech for refugees and other stateless individuals.


References:
1. “About Kakuma Refugee Camp.” Kakuma News Reflector – A Refugee Free Press. Web. 24 Feb. 2011. <http://kakuma.wordpress.com/about-kakuma-refugee-camp/>.
2. Aukot, Ekuru. “Who Believes in the Rights of Immigrants? Do Refugees in Kenya Have the Right to a Free Press?” Kakuma News Reflector – A Refugee Free Press. 28 Feb. 2009. Web. 24 Feb. 2011. <http://kakuma.wordpress.com/2009/02/28/who-believes-in-the-rights-of-immigrants-do-refugees-in-kenya-have-the-right-to-a-free-press/>.
3. Harrell-Bond, Barbara. “Speaking for Refugees or Refugees Speaking for Themselves.” Kakuma News Reflector – A Refugee Free Press. 31 Jan. 2009. Web. 24 Feb. 2011. <http://kakuma.wordpress.com/2009/01/31/speaking-for-refugees-or-refugees-speaking-for-themselves/>.
4. Ojalehto, Bethany, and Qaabata Boru. “Support KANERE for an Independent Refugee Press.” Pambazuka News. 1 Jan. 2009. Web. 24 Feb. 2011. <http://www.pambazuka.org/en/category/comment/53437>.
5. Ojalehto, Bethany. “Refugee Free Press Struggling to Remain Afloat and Independent.” Society for International Development. 1 Oct. 2009. Web. 24 Feb. 2011. <http://www.sidint.net/refugee-free-press-struggling-to-remain-afloat-and-independent/>.
6. Ojalehto, Bethany. “Refugee News Reporting Project Under Threat.” Pambazuka News. 18 Mar. 2010. Web. 24 Feb. 2011. <http://www.pambazuka.org/en/category/features/63143>.
7. “Report on KANERE’s Progress and Challenges.” Pambazuka News. 2 Feb. 2009. Web. 24 Feb. 2011. <http://www.pambazuka.org/en/category/comment/53789/print>.


science and engineering reports

Effects of Methamphetamine Addiction on Ultrasonic Vocalizations and the Rat Brain Dr. Matthew Feltenstein*, Inga Brown†

*Medical University of South Carolina; †Johns Hopkins University

Abstract
Rats emit two types of ultrasonic vocalizations (USVs), vocalizations outside the frequency range of human audition. The first category of USVs occurs at 22 kHz and signifies distress or unhappiness [1]. The second category occurs at 50 kHz and has been shown to indicate pleasure or happiness [1]. The purpose of this experiment was to monitor the USVs emitted by methamphetamine (meth) addicted rats and examine the effects of meth on the brain. The rats underwent a meth self-administration phase followed by extinction and reinstatement. USVs were recorded during all phases of the addiction cycle and analyzed to determine the type of call emitted most frequently during each phase, in order to gauge the pleasure or displeasure of the rats during each phase of the meth addiction cycle. The dominant USV for each phase of the addiction cycle was the 50-kHz vocalization. Furthermore, the majority of USVs both before and after (within 10 seconds of) an infusion were 22-kHz vocalizations during self-administration, while during reinstatement testing, 50-kHz vocalizations were more prevalent before and after an active lever press. These findings and future studies will allow scientists to understand how rats, and eventually humans, react to drug addiction and its effects on the brain through their USVs.

Introduction
The purpose of this study was to identify the ultrasonic vocalizations associated with meth self-administration, extinction, and reinstatement, and to examine the effects of meth on the brain. Methamphetamine is classified as a psychostimulant due to its ability to cause euphoria, elevated heart rate and blood pressure, hallucinations, enhanced motor activity, and other similar effects [2, 6]. Meth operates on a molecular level by increasing the levels of several neurotransmitters (serotonin, dopamine, and norepinephrine) in the brain [2, 6]. Dopamine, specifically, is active in the brain’s reward system and is therefore present in high quantities during rewarding situations [6]. The reward system, or mesocorticolimbic pathway, consists of several brain areas including the ventral tegmental area, nucleus accumbens, amygdala, hippocampus, ventral pallidum, prefrontal cortex, and the orbitofrontal cortex [6]. Dopamine can be found in abundance throughout this circuit during any situation that can be considered rewarding, from eating to using drugs [6]. It is because of this association between meth and the mesocorticolimbic pathway that meth is considered such a highly addictive drug [6]. Ultrasonic vocalizations are exhibited in rats under many different circumstances [1]. 22-kHz vocalizations denote distress or angst, while 50-kHz vocalizations have been shown to signify pleasure and are often present when rats experience something they consider favorable [1]. Fourteen different sub-types of the 50-kHz vocalization have been identified [1]. In previous studies, meth use has been shown to elicit a high level of these 50-kHz USVs in rats [1]. Recent studies have also shown that the brain areas involved in the production of 50-kHz USVs include the prefrontal cortex, nucleus accumbens, ventral pallidum, ventral tegmental area, lateral preoptic area, lateral hypothalamus, and the raphe [3].
The majority of these neural areas are the same areas involved in the brain’s reward system, supporting the notion that 50-kHz USVs signify pleasure [3]. With knowledge of the neural areas involved in drug addiction and USV production, and of the different meanings of rat USVs, one can hypothesize that meth addiction affects the types of USVs produced in rats. We hope to better understand how the neural changes in a drug-addicted rat’s brain cause it to emit specific vocalizations and, by proxy, how the rats ‘feel’ about their drug-seeking behaviors. This hypothesis was tested by examining the USVs emitted by rats during the three phases of the drug addiction cycle (self-administration, extinction, and reinstatement). It was anticipated that a high number of 50-kHz USVs would be produced during self-administration and reinstatement and that a large number of 22-kHz USVs would be present during extinction.

Methods
Subjects: Eight male Long-Evans rats were used for this study. The rats were handled for 5-10 minutes daily for 12 days. Following the rats’ acclimation to human handling, they were handled more roughly and tickled in order to increase the number of USVs they emitted. A bat detector was used to confirm that the rats were emitting USVs.
Surgery: Each rat underwent surgery to insert a catheter into one of its jugular veins. Each rat was anesthetized with a series of three injections. First, an intraperitoneal (IP) injection of ketamine (0.66 mL/kg) was used to sedate the rats. IP injections of xylazine (0.66 mL/kg) and equithesin (1 mL/kg) were then used to further anesthetize the rats. The techniques used for this surgery have previously been published [4]. This surgical technique created a link, via the catheter, between the rats’ bloodstream and the outside of their bodies in order to make drug infusions easier and more direct.
Methamphetamine Self-Administration: Rats underwent daily 2-hour sessions of meth self-administration in specialized boxes. The external portion of the rats’ catheters was hooked up to tubing that connected the catheter to a syringe of meth. Two levers reside within these boxes. When pressed, the right

41


(active) lever caused the rat to be infused with a liquid dose (0.02 mg/kg meth per 0.50-μL infusion) of meth. Active lever presses also resulted in the illumination of a light above the active lever and the activation of a tone (4.5 kHz) within the box. Conversely, no infusion, light, or tone was generated when the left (inactive) lever was pressed. Rats were prevented from overdosing on meth by means of a 20-second timeout period after each infusion. Self-administration was considered complete after the rats met the criterion of a minimum of 16 consecutive days of self-administration with at least 10 infusions per session.
Extinction from Methamphetamine Addiction: During the extinction phase of this study, rats underwent daily 2-hour sessions. The rats were able to press both levers; however, neither lever had a programmed effect. By removing the meth-infusing ability of the active lever, the rats’ drug responding was extinguished. This was done in order to analyze the USVs emitted by the rats when they were no longer able to receive infusions of meth. Rats were considered to have completed this portion of the addiction cycle when they had completed at least 7 days of extinction with 25 or fewer presses on the active lever (per session) for 2 consecutive days.
Reinstatement of Methamphetamine Addiction and Seeking: Reinstatement began after the rats met the extinction criterion of very low drug-seeking behavior. This low level of drug-seeking behavior is optimal for the reinstatement tests these rats underwent. Four such reinstatement tests were used in this study to determine how quickly the rats relapsed into addictive drug-seeking behavior and also to examine the types of USVs emitted during reinstatement. For the first reinstatement test, rats were primed with an injection of meth (1 mg/kg, IP) and then placed into the session boxes. Lever presses in this test had no programmed effects.
For the second reinstatement test, rats were primed with an injection of yohimbine (2.5 mg/kg, IP) and then again placed into the session boxes. Lever presses for this test also had no programmed effects. The next test, the cue test, involved the light and tone cues presented to the rats during self-administration. In this test, any active lever press resulted in activation of the light and tone but did not produce an infusion of meth. The final test involved rats being primed with an injection of meth (1 mg/kg, IP). Active lever presses resulted in activation of the light and tone but again did not produce an infusion of meth. All reinstatement test sessions were 2 hours long, with at least 2 extinction days between tests to re-establish low levels of drug-seeking behavior.
Data Analysis: The number of lever presses and number of meth infusions for each session was recorded via a specialized computer program (MedPC, Med-Associates, St. Albans, VT, USA) that directly receives this information from the session boxes.
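The phase-completion criteria described in the Methods (16 consecutive self-administration days with at least 10 infusions; at least 7 extinction days ending with 2 consecutive days of 25 or fewer active presses) can be expressed as simple checks. The sketch below is illustrative only: the function names and the input format (one count per daily session) are assumptions, not part of the study's MedPC software.

```python
# Sketch of the phase-completion criteria described above.
# Function and variable names are illustrative, not from the study's software.

def self_admin_complete(daily_infusions):
    """Complete after >= 16 consecutive sessions with >= 10 infusions each."""
    streak = 0
    for n in daily_infusions:
        streak = streak + 1 if n >= 10 else 0
        if streak >= 16:
            return True
    return False

def extinction_complete(daily_active_presses):
    """Complete after >= 7 extinction sessions, with <= 25 active-lever
    presses per session on the 2 most recent consecutive days."""
    if len(daily_active_presses) < 7:
        return False
    return all(p <= 25 for p in daily_active_presses[-2:])
```

For example, a rat with sixteen straight sessions of 12 infusions meets the self-administration criterion, while a lapse to 5 infusions on any day resets the consecutive-day count.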

USVs were categorized according to the 15 call categories previously described by Clarke et al. [1].

Results and Conclusions

Self-Administration: The majority of USVs during the self-administration portion of this study were 50-kHz vocalizations. Of these 50-kHz USVs, the dominant call types were flats and shorts. As described by Clarke et al., a flat USV is identified by its near-constant frequency greater than 30 kHz with a mean slope between -0.2 and 0.2 kHz/ms, whereas a short USV has a duration of less than 12 ms [1]. A fair number of 22-kHz vocalizations were present during this phase of the study; however, the 50-kHz vocalizations did outnumber the 22-kHz USVs: a total of 906 50-kHz USVs were labeled, while 683 22-kHz USVs were noted. Files were also scored based upon the type of USVs emitted either directly before or after (within 10 seconds of) an infusion of meth. During the self-administration portion, more 22-kHz than 50-kHz USVs were emitted before and after an infusion. As the rats progressed from one session to the next, they became conditioned to the fact that left (inactive) lever presses did not result in meth infusions. As expected, the number of left lever presses dramatically decreased as the self-administration portion of this study progressed. Conversely, the number of right (active) lever presses increased or remained at a high level throughout the self-administration sessions.
Extinction: During the extinction portion of this study, the majority of USVs emitted were, surprisingly, 50-kHz USVs. More specifically, of these 50-kHz USVs, the flat call was most common. Unlike in self-administration, extinction 50-kHz USVs far outnumbered 22-kHz USVs, with 783 of the former to only 257 of the latter. As was to be expected, the rats quickly adapted to the unavailability of meth during all lever presses in this phase. On the first day of extinction, rats pressed the active lever a very large number of times compared to later days of extinction, by which point, presumably, the rats understood the lack of drug reinforcement.
Reinstatement: The majority of the USVs emitted during the reinstatement phase of this study were 50-kHz USVs. These vocalizations far outnumbered the 22-kHz USVs emitted, by a ratio of approximately 8:1. The types of 50-kHz vocalizations that were most prevalent were trills, shorts, and flats. Descriptions of the criteria a USV must meet in order to be classified as a flat or short can be found in the

Analysis of Ultrasonic Vocalizations: Ultrasonic vocalizations were recorded during each self-administration, extinction, and reinstatement session via an ultrasonic microphone mounted to the ceiling of each box (approximately 20-25 inches from the rats during sessions). USVs were recorded and analyzed via the computer software program Avisoft SASLab Pro. Spectrograms of the first and last 30 minutes of each session were generated and analyzed. USVs were identified manually and categorized with section labels.
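The manual call-labeling rules quoted from Clarke et al. [1] (the 22-kHz band versus the 50-kHz band, with flat, short, and trill subtypes) can be summarized as a decision rule. The sketch below is a simplified illustration, not the full 15-category scheme: the function name, argument names, and the boolean flag standing in for trill detection are assumptions for this example.

```python
# Illustrative sketch of the call-classification criteria quoted from
# Clarke et al. [1]. Simplified: real scoring was done manually on
# spectrograms; the oscillation flag here stands in for trill detection.

def classify_usv(mean_freq_khz, duration_ms, mean_slope_khz_per_ms,
                 has_rapid_oscillations=False):
    # 22-kHz band: distress calls (assumed 30-kHz boundary between bands)
    if mean_freq_khz < 30:
        return "22-kHz (distress)"
    # 50-kHz band subtypes, per the criteria described in the text:
    if duration_ms < 12:                       # short: duration < 12 ms
        return "50-kHz short"
    if has_rapid_oscillations:                 # trill: rapid oscillations,
        return "50-kHz trill"                  # period ~15 ms
    if -0.2 <= mean_slope_khz_per_ms <= 0.2:   # flat: near-constant freq
        return "50-kHz flat"
    return "50-kHz other"
```

A 55-kHz call lasting 40 ms with a mean slope of 0.05 kHz/ms, for instance, would be labeled a flat under these rules.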


Figure 1: USV Call Types for Each Phase of Addiction Cycle


self-administration portion of this section. As described by Clarke et al., trills are characterized by rapid frequency oscillations with a period of approximately 15 milliseconds [1]. The dominant USV type emitted before and after (within 10 seconds of) an active lever press was the 50-kHz USV. For the cue reinstatement test, the dominant vocalization type was the 50-kHz flat USV. During this test, the number of active lever presses was high, indicating reinstatement of meth-seeking behavior. This trend may be attributed to the presence of conditioned cues and will be further examined in the next section. Inactive lever presses for this test were low in number. The reinstatement test involving yohimbine had similar results. The dominant type of vocalization was also the 50-kHz flat USV. The number of active lever presses for this test was higher than in the cue test and remained high during the 2 extinction days following the yohimbine test. Inactive lever presses during this test were very low and remained low on the 2 following extinction days. This trend can be attributed to the fact that yohimbine increases rats’ desire for drugs whose use has been recently extinguished [5]. Yohimbine may accomplish this increase in reinstatement responding by increasing levels of dopamine in the prefrontal cortex [8]. The primary USV type for the meth-primed reinstatement test was the 50-kHz trill vocalization. As expected, the number of active lever presses for this test was very high. Inactive lever presses followed the same trend. This can be attributed to the rats becoming excited by their priming injection of meth and then less so once they realized that lever presses would not result in further meth infusions.
The final reinstatement test, consisting of a meth priming injection plus activation of the light and tone cues with each active lever press, had a dominant USV type of the 50-kHz trill. The number of active lever responses was considerably high, as expected since the rats had been primed with an injection of meth. The inactive lever presses for this test were very low in number.

Discussion

The results obtained in this study allow for some speculation regarding the effects of drug addiction on the brain and, more specifically, on USV production in rats. First, the most prevalent type of USV emitted within 10 seconds before and after an infusion during the self-administration portion of this study was the 22-kHz call. However, the most common USV emitted before and after an active lever press during the reinstatement portion was the 50-kHz call. These findings lead us to speculate that the rats are not necessarily happy when first becoming addicted to meth, but once they have had a chance to become accustomed to the experiences involved with meth addiction, they become happy when displaying drug-seeking behavior. Another interesting result was the presence of 50-kHz USVs during the extinction phase of the addiction cycle. One would expect the rats to become agitated at their inability to receive meth reinforcement during this stage of the cycle and thus to emit 22-kHz vocalizations. The results, however, show that a large majority of the USVs during extinction were of the 50-kHz category. This result is quite interesting in that it suggests the rats are actually pleased even when they are unable to receive a meth infusion despite accurate lever presses. This provides groundwork for future studies that could investigate why this occurs.


The trends seen in lever presses for the two reinstatement tests that involved cues are also quite interesting. In both cases, the number of active lever presses was very high on the first day of the reinstatement test but decreased significantly once the rats realized that active lever presses would not result in meth reinforcement. These results speak to the ability of cues to influence reinstatement of drug-seeking behavior. It has previously been shown that the presence of cues during self-administration of a drug plays a critical role in the addictive nature of that drug [7]. When exposed to the same cues after extinction, rats easily revert to old habits of drug-seeking behavior [7]. Previous studies have also shown that the neural regions most involved in relapse in the presence of cues previously associated with drug administration are the amygdala, anterior cingulate, nucleus accumbens, and orbitofrontal cortex [7]. Note that some of these regions are also associated with the neural reward system. This overlap of neural areas involved in drug-seeking behaviors and USV production suggests a very tight neural connection between the two processes. The effects of meth on the brain are apparent in that meth self-administration, extinction, and reinstatement all elicit the production of USVs, in particular 50-kHz USVs. The fact that the majority of USVs produced during all three stages of the addiction cycle were 50-kHz vocalizations speaks to the extreme addictiveness of methamphetamine and also to the fact that the neural areas involved in drug addiction are highly involved in the brain’s reward system. In order to determine exactly how meth elicits USV production, further testing needs to be done.
However, since drug addiction and USV production draw upon the same neural circuits, it is apparent that the various USVs produced likely result from specific neural changes incurred via drug addiction. From this study, we have gained an idea of how the rats “felt” during each stage of the addiction cycle. Additional testing can now be done to further examine how drug addiction affects USV production on a cellular level. This information can eventually be put to use in understanding what emotions humans experience during drug addiction. We have also examined how the brain changes during drug addiction and how these changes cause different USVs to be produced. With the information obtained from this study regarding the neural mechanisms involved in addiction and relapse, further testing can be done to determine how to reduce the likelihood of relapse to drug-seeking behaviors. This could be done through conditioned-cue manipulations or through drug interventions that could potentially counteract the neurotransmitter and chemical actions that are so prevalent in drug addiction.

References
1. Clarke PBS, Gourdon JC, Wright JM. Identification of multiple call categories within the rich repertoire of adult rat 50-kHz ultrasonic vocalizations: effects of amphetamine and social context. Psychopharmacology 2010;211:1–13.
2. Cruickshank CC, Dyer KR. A review of the clinical pharmacology of methamphetamine. Addiction 2009;104:1085–99.
3. Burgdorf J, Wood PL, Kroes RA, Moskal JR, Panksepp J. Neurobiology of 50-kHz ultrasonic vocalizations in rats: electrode mapping, lesion, and pharmacology studies. Behavioural Brain Research 2007;182:274–83.
4. Feltenstein MW, Altar CA, See RE. Aripiprazole blocks reinstatement of cocaine seeking in an animal model of relapse. Biological Psychiatry 2007;61:582–90.
5. Feltenstein MW, See RE. Potentiation of cue-induced reinstatement of cocaine-seeking in rats by the anxiogenic drug yohimbine. Behavioural Brain Research 2006;174:1–8.
6. Feltenstein MW, See RE. The neurocircuitry of addiction: an overview. British Journal of Pharmacology 2008;154:261–74.
7. See RE. Neural substrates of conditioned-cued relapse to drug-seeking behavior. Pharmacology Biochemistry and Behavior 2002;71:517–29.
8. Tanda G, Bassareo V, Di Chiara G. Mianserin markedly and selectively increases extracellular dopamine in the prefrontal cortex as compared to the nucleus accumbens of the rat. Psychopharmacology (Berl) 1996;123:127–30.




science & engineering reports

Delineating the Role of BRF2 in Breast Cancer Pathogenesis
Jean Fan (1), Yunkai Yu (2), Paul Meltzer, M.D. (2), Liang Cao, Ph.D. (2)

1. Montgomery Blair High School, Silver Spring, MD
2. Genetics Branch, National Cancer Institute, Bethesda, MD

Abstract
Amplification of 8p11-12 is present in 10-15% of breast cancers and is associated with poor prognosis. However, while putative driving oncogenes have been proposed, genes within this amplicon have yet to be definitively implicated in cancer growth, survival, or pathogenesis. BRF2, a gene located within this amplicon, encodes a general transcription factor involved in RNA polymerase III-mediated transcription, and a recent report suggests its homologue, BRF1, may promote cellular transformation. Here, the role of BRF2 in breast cancer growth and development is evaluated. A strong correlation between BRF2 gene amplification and BRF2 protein overexpression, a trait consistent with an oncogenic role, was established through immunoblot analysis together with previous CGH and expression array studies. Lentiviral-mediated gene transfer delivered BRF2 shRNA into breast cancer cells with 8p11-12 amplification, and long-term stable cell lines carrying shRNA constructs targeting BRF2 were subsequently established, leading to marked reduction of BRF2. BRF2 inhibition resulted in impeded growth and proliferation rates as well as increased cell death. These findings suggest that BRF2 is a relevant oncogene in the 8p11-12 amplicon and may play a role in breast cancer growth and pathogenesis.

Introduction
In breast cancers, 8p11-12 is the second most common region of genetic amplification, exhibited in 10% to 15% of breast cancers (Adélaïde et al., 1997; Courjal and Theillet, 1997; Ray et al., 2004). In recent studies, 8p11-12 amplification has been associated with poor prognosis (Gelsi-Boyer et al., 2005). Patients with the 8p11-12 amplicon exhibited a significantly lower proportion of metastasis-free survival compared to patients without the amplicon (Fig. 1). This association suggests a malignant long-term consequence of oncogene amplification within 8p11-12 on breast cancer survival and a potential link between genetic amplification and pejorative disease evolution (Gelsi-Boyer et al., 2005). Despite the frequency and importance of the 8p11-12 amplicon, the relevant oncogene or oncogenes have yet to be identified. And while putative oncogenes have been proposed, further analysis is required in order to provide direct evidence of their oncogenic roles (Theillet et al., 1993). Garcia et al. (2005) examined 80 breast and ovarian tumors and cell lines using high-resolution array comparative genomic hybridization (aCGH) and gene expression analyses in order to identify a segment of minimal common amplification, approximately 1 Mb in size, that is likely to contain key oncogenes driving the 8p11-12 amplicon. BRF2 was one of four putative oncogenes cited as particularly interesting candidates for further evaluation. Gelsi-Boyer et al. (2005) also identified four distinct amplicons, A1, A2, A3, and A4, within 8p11-12 using aCGH. BRF2, common to the boundaries of amplicons A1 and A2 (Fig. 2), displayed expression levels significantly correlated with the expression of neighboring genes, suggestive of an oncogenic role. A2 amplification has also been associated with poor prognosis (Fig. 1). It is therefore important to generate biological data identifying which candidate 8p11-12 genes are important for tumor growth and survival.
BRF2, located at 8p11.23, has previously been suggested as a candidate gene for the 8p11-12 amplicon (Garcia et al., 2005; Gelsi-Boyer et al., 2005). Yet no biological data exist to elucidate the roles of BRF2 in breast cancer. By definitively determining the relevant oncogene or oncogenes for the 8p11-12 amplicon, we hope to improve long-term breast cancer prognoses through the identification of novel drug targets. We hypothesized that BRF2 may play an oncogenic role in the development


and progression of breast cancers exhibiting 8p11-12 amplification. By analyzing a panel of breast cancer cell lines with the 8p11-12 amplicon, we showed a strong correlation between BRF2 genetic amplification and BRF2 protein overexpression, a trait consistent with an oncogenic role. In addition, we employed shRNA-mediated long-term gene knockdown to reduce or silence BRF2 gene expression. Knockdown of BRF2 in HCC1500 breast cancer cells resulted in a substantial decrease in BRF2 synthesis. Furthermore, cell growth and proliferation rates were markedly hindered by the BRF2 knockdown, while cell death was promoted. Our results suggest that BRF2 is a strong candidate driver oncogene for the 8p11-12 amplicon and may contribute significantly to breast cancer pathogenesis.

Methods & Materials

Cell Lines and Growth Conditions: Breast cancer cell lines BT483, CAMA1, HCC38, HCC1500, HCC1900, MCF7, MDA231, MDA435, UACC2087, ZR75, SUM44, and SUM52 were obtained from the NCI Genetics Branch Cell Repository (Bethesda, MD). Cells were maintained in the recommended 1x RPMI-1640 or DMEM supplemented with 10% v/v FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin. All cell lines were incubated at 37°C under 5% CO2 and passaged at confluence until use.

Immunoblot Analysis: Cells were homogenized in 1 ml ice-cold PBS (Biosource) and suspended in 10 mL RIPA+ buffer (Thermo Scientific Pierce) supplemented with 1% v/v 1x protease inhibitor cocktail (Roche Diagnostics) and AEBSF, and 2% v/v NaF, Na3VO4, Na2P2O7, phosphatase inhibitor cocktail I, and phosphatase inhibitor cocktail II (Sigma). Protein concentrations of cleared lysates were quantified using the Pierce BCA Protein Assay kit (Thermo Scientific Pierce) following the manufacturer's protocols. Protein concentrations were normalized in RIPA+ buffer, mixed with 1x NuPAGE LDS sample buffer (Invitrogen), and boiled. Lysate proteins were fractionated by SDS-PAGE on a 15-well Bis-Tris-HCl 10% polyacrylamide gel (Invitrogen) under standard conditions, blotted onto an iBlot nitrocellulose membrane (Invitrogen) according to the manufacturer's protocols, and blocked in 5% NFDM/TBST (Bio-Rad Laboratories). Reactive bands were labeled with primary antibodies against BRF2 (ProteinTech Group Inc.) at 1:1000 dilution and primary antibodies against GAPDH (Cell Signaling) at 1:1000 dilution overnight at 4°C. After a 2 h incubation with a 1:2000 dilution of an HRP-conjugated anti-rabbit secondary antibody (Cell Signaling), reactive bands were developed using the SuperSignal West Pico Chemiluminescent Kit (Pierce) following the manufacturer's protocols and visualized on chemiluminescent film (Kodak).
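The protein-normalization step above is simple arithmetic. As an illustration only (the standard-curve absorbances, sample reading, and 20 µg target below are hypothetical values, not data from this study), a BCA-style calculation might look like:

```python
# Sketch: normalizing lysate protein loads from a BCA standard curve.
# All numeric values are hypothetical illustrations, not data from this study.

def fit_standard_curve(standards):
    """Least-squares line through (concentration mg/mL, absorbance) points."""
    n = len(standards)
    mean_c = sum(c for c, _ in standards) / n
    mean_a = sum(a for _, a in standards) / n
    slope = (sum((c - mean_c) * (a - mean_a) for c, a in standards)
             / sum((c - mean_c) ** 2 for c, _ in standards))
    return slope, mean_a - slope * mean_c  # slope, intercept

def concentration(absorbance, slope, intercept):
    """Invert the standard curve to get a sample concentration (mg/mL)."""
    return (absorbance - intercept) / slope

def load_volume_ul(conc_mg_per_ml, target_ug):
    """Volume of lysate carrying target_ug of protein (mg/mL == ug/uL)."""
    return target_ug / conc_mg_per_ml

standards = [(0.25, 0.30), (0.5, 0.55), (1.0, 1.05), (2.0, 2.05)]
slope, intercept = fit_standard_curve(standards)
sample = concentration(1.30, slope, intercept)  # 1.25 mg/mL
print(round(sample, 2), round(load_volume_ul(sample, 20), 1))  # 1.25 16.0
```

The same fit-then-invert pattern applies to any linear assay standard curve.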

shRNA Plasmid Preparation: Plasmids containing lentiviral shRNA vectors targeting BRF2 were obtained from Open Biosystems. Bacterial colonies were generated on LB agar plates with ampicillin and incubated at 37°C overnight. Single colonies were inoculated into LB broth supplemented with ampicillin and incubated at 37°C overnight with shaking at 250 RPM. Plasmid DNA was purified using the QIAGEN Plasmid Purification Midi Kit following the manufacturer's protocols.

Generation of Lentiviruses: Bacterial plasmid DNA was purified using the QIAGEN EndoFree Plasmid Purification Maxi Kit following the manufacturer's protocols. Lentiviruses were generated from the purified plasmid DNA following Invitrogen protocols.

Infection: A total of 2.5x10^4 target cells were seeded into 6-well plates. After 24 h, cells were infected with lentiviruses carrying scramble or BRF2 shRNAs for 72 h. Infected cells were selected with 1 μg/mL puromycin for about 2 weeks.

Growth Assay: Approximately 2.5x10^3 cells were seeded into 96-well plates. At specified time points, cell growth was determined via ATP assay following the manufacturer's protocols.

Results and Conclusions

Detection of elevated BRF2 protein in cell lines with the 8p11-12 amplicon (CGH) and increased mRNA (expression array): Since BRF2 is an essential component of the RNA polymerase III transcription factor complex TFIIIB and thus plays a vital role in the synthesis of small RNAs involved in mRNA processing and translation, elevated expression of BRF2 following BRF2 amplification may influence translation and proliferation rates and lead to cancer. To detect elevated expression of BRF2, we analyzed a panel of breast cancer cell lines including BT483, CAMA1, HCC38, HCC1500, HCC1900, MCF7, MDA231, MDA435, UACC2087, ZR75, SUM44, and SUM52 by immunoblot analysis. Some of these cell lines had previously been shown to exhibit amplification at 8p11-12, as determined by comparative genomic hybridization (CGH), and elevated 8p11-12 mRNA levels, as determined by mRNA expression arrays. Immunoblot results detected the highest levels of BRF2 in cell lines HCC1500, BT483, and UACC2087, respectively (Fig. 3). Our results provide the first piece of evidence that amplification of 8p11-12 leads to increased BRF2 protein expression in this panel of breast cancer cell lines.

Long-term Lentiviral-Mediated Gene Silencing of BRF2 in HCC1500: Lentiviral-mediated shRNA gene transfer achieved stable BRF2 knockdown in HCC1500 breast cancer cells for both BRF2-shRNA constructs. Stable cell lines were generated following a 2-week selection with puromycin to eliminate non-infected cells. Immunoblot analysis showed decreased BRF2 levels for both BRF2-shRNA-infected HCC1500 cell lines compared to the scramble control (Fig. 4), confirming the effectiveness of BRF2 shRNA-1 and BRF2 shRNA-3 in silencing BRF2 gene expression.

Effects of BRF2 Knockdown on Growth in HCC1500 Breast Cancer Cells: To evaluate the effect of BRF2 knockdown on the growth and proliferation of breast cancer cells, growth rates of BRF2 knockdown HCC1500 cells were compared to those of control HCC1500 cells over a period of 7 days. Control scramble HCC1500 cells grew and proliferated normally. However, BRF2 knockdown HCC1500 cells failed to grow and proliferate in culture and subsequently decreased in cell count. The decrease in cell count may also indicate an increase in the rate of cell death upon BRF2 knockdown (Fig. 5). These data suggest that elevated BRF2 expression is important for the growth and proliferation of breast cancer cells with a high degree of amplification, and thus may play a role in breast cancer development and pathogenesis.
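As an aside on how such growth-curve comparisons can be quantified, the sketch below fits a log-linear growth model to hypothetical ATP-luminescence readings (illustrative numbers only, not the data behind Fig. 5) to estimate growth rate and doubling time:

```python
import math

# Sketch: estimate exponential growth rate from ATP-luminescence time points.
# The readings below are hypothetical illustrations, not data from this study.

def growth_rate(days, signal):
    """Log-linear least-squares slope, i.e. fit signal ~ exp(rate * t)."""
    logs = [math.log(s) for s in signal]
    n = len(days)
    mx = sum(days) / n
    my = sum(logs) / n
    return (sum((t - mx) * (y - my) for t, y in zip(days, logs))
            / sum((t - mx) ** 2 for t in days))

days = [0, 1, 2, 3, 4, 5, 6, 7]
control = [1.0, 1.9, 4.1, 8.0, 16.5, 31.0, 65.0, 130.0]  # scramble control
knockdown = [1.0, 0.9, 0.85, 0.7, 0.6, 0.5, 0.45, 0.4]   # BRF2 shRNA

r_ctrl = growth_rate(days, control)
r_kd = growth_rate(days, knockdown)
doubling_ctrl = math.log(2) / r_ctrl
print(f"control: rate {r_ctrl:.2f}/day, doubling ~{doubling_ctrl:.1f} d")
print(f"knockdown: rate {r_kd:.2f}/day")  # negative rate => net cell loss
```

A positive fitted rate indicates net proliferation; a negative rate, as in the knockdown series here, indicates net cell loss.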


HURJ Fall 2010: Issue 12 science & engineering reports

Discussion

Amplification of 8p11-12 is commonly exhibited in breast cancers and is associated with poor prognosis. However, the relevant oncogenes have yet to be definitively identified. We hypothesized that BRF2, a gene within the 8p11-12 amplicon, may play an oncogenic role in the development and progression of breast cancers exhibiting 8p11-12 amplification. Our data confirmed elevated BRF2 expression in a panel of breast cancer cell lines exhibiting 8p11-12 amplification (Fig. 3). The magnitude of BRF2 amplification was also found to be strongly correlated with BRF2 overexpression (Fig. 3). To fully evaluate the role of BRF2 in breast cancer growth, we performed BRF2 gene silencing in HCC1500 breast cancer cells, which exhibit high levels of BRF2 amplification and BRF2 overexpression. Long-term stable lentiviral-mediated gene silencing was achieved in HCC1500 breast cancer cells with two independent BRF2-shRNA vectors (Fig. 4). We estimate that both BRF2 knockdown cell lines expressed about 10% of the original BRF2 level in HCC1500, thus reducing BRF2 to a level comparable to that of breast cancer lines without BRF2 amplification. Preliminary analysis of the growth of HCC1500 cells with BRF2 knockdown showed a large reduction in cell proliferation as well as increased cell death, as indicated by an ATP-based cell proliferation assay and a clonogenic assay (Fig. 5). Given the interesting phenotypes associated with BRF2 knockdown in breast cancer cells with BRF2 amplification and overexpression, further investigations are warranted. In summary, the results from our experiments suggest that BRF2 may play an oncogenic role and be a putative driver oncogene for the 8p11-12 amplicon. Therefore, targeting BRF2 and its associated downstream machinery in patients exhibiting 8p11-12 amplification may have therapeutic potential. Our discovery may facilitate the understanding of breast cancer pathogenesis in the ultimate search for a cure.



References
1. Adélaïde, J., Chaffanet, M., et al. (1997). "Chromosome region 8p11-p21: refined mapping and molecular alterations in breast cancer." Genes Chromosomes Cancer 22: 186-199.
2. Cabarcas, S., Jacob, J., et al. (2008). "Differential expression of the TFIIIB subunits Brf1 and Brf2 in cancer cells." BMC Mol Biol 9: 74.
3. Cabart, P. and Murphy, S. (2002). "Assembly of Human Small Nuclear RNA Gene-specific Transcription Factor IIIB Complex de Novo On and Off Promoter." J Biol Chem 277(30): 26831-26838.
4. Chin, K., DeVries, S., et al. (2006). "Genomic and transcriptional aberrations linked to breast cancer pathophysiologies." Cancer Cell 10(6): 529-541.
5. Courjal, F. and Theillet, C. (1997). "Comparative Genomic Hybridization Analysis of Breast Tumors with Predetermined Profiles of DNA Amplification." Cancer Research 57: 4368-4377.
6. Garcia, M.J., Pole, J.C.M., et al. (2005). "A 1 Mb minimal amplicon at 8p11-12 in breast cancer identifies new candidate oncogenes." Oncogene 24: 5235-5245.
7. Gelsi-Boyer, V., Orsetti, B., et al. (2005). "Comprehensive Profiling of 8p11-12 Amplification in Breast Cancer." Molecular Cancer Research 12: 655-667.
8. Knuutila, S., Bjorkqvist, A.M., et al. (1998). "DNA copy number amplifications in human neoplasms: review of comparative genomic hybridization studies." American Journal of Pathology 152: 1107-1123.
9. Lazaris-Karatzas, A., Montine, K.S., et al. (1990). "Malignant transformation by a eukaryotic initiation factor subunit that binds to mRNA 5' cap." Nature 345(6275): 544-547.
10. Marshall, L., Kenneth, N.S., et al. (2008). "Elevated tRNAiMet Synthesis Can Drive Cell Proliferation and Oncogenic Transformation." Cell 133: 78-89.
11. Pegram, M., Hsu, S., et al. (1999). "Inhibitory effects of combinations of HER-2/neu antibody and chemotherapeutic agents used for treatment of human breast cancers." Oncogene 13(18): 2241-2251.
12. Ray, M.E., Yang, Z.Q., et al. (2004). "Genomic and Expression Analysis of the 8p11-12 Amplicon in Human Breast Cancer Cell Lines." Cancer Research 64: 40-47.
13. Schramm, L., Pendergrast, P.S., et al. (2000). "Different human TFIIIB activities direct RNA polymerase III transcription from TATA-containing and TATA-less promoters." Genes and Development 14(20): 2650-2663.
14. Schramm, L. and Hernandez, N. (2002). "Recruitment of RNA polymerase III to its target promoters." Genes and Development 16(20): 2593-2620.
15. Theillet, C., Adelaide, J., et al. (1993). "FGFRI and PLAT genes and DNA amplification at 8p12 in breast and ovarian cancers." Genes Chromosomes Cancer 7(4): 219-226.
16. White, R.J. (2004). "RNA polymerase III transcription and cancer." Oncogene 23(18): 3208-3216.


hurj spring 2011: issue 13




Genetically Engineered Transcriptional Response to Voltage Signals in Saccharomyces cerevisiae
Noah Young1, Justin Porter2, Jonathan LeMoel1
1. Department of Biomedical Engineering, 2. Department of Biophysics, Johns Hopkins University

Abstract

When a stressor is applied to Saccharomyces cerevisiae (baker's yeast), calcineurin (a Ca2+/calmodulin-regulated protein phosphatase) dephosphorylates the transcription factor Crz1p, which then translocates into the nucleus. Upon an influx of calcium into the cell, the Crz1 pathway is thus activated, promoting the expression of genes downstream of Crz1 binding sites, or CDREs. Here, we controlled the pathway by manipulating voltage-gated calcium channels in yeast. We further improved gene expression by characterizing a library of CDREs of various sensitivities. We integrated both naturally occurring and synthetic CDREs, coupled to reporter genes, into yeast cells. Vesicular calcium pumps were knocked out in a subset of the yeast cells to investigate their effects in our in vitro system. The cells were then exposed to different voltages for different lengths of time. For the FKS2 CDRE (a naturally occurring variant), we found the optimal voltage to be 8 volts and the minimal exposure time for notable expression to be at least 40 seconds. We concluded that vesicular calcium pumps are necessary to ensure cell viability. In the future, a microfluidic device should be used to allow finer control of voltage exposure.

Introduction

In the field of computer-to-cell communication, new breakthroughs are emerging in the area of electrically actuated transcription. A transcriptional sensor for voltage has been created in Saccharomyces cerevisiae, a microorganism that has been critical to the study of human genetics through its homologous genes, and shown to respond to external electrical stimuli. Previously, a Valencia group had used an applied voltage to open the voltage-gated calcium channels (VGCCs) of a small patch of S. cerevisiae cells on a 2-D electrode array. When these channels open, calcium floods into the cells through the VGCCs. In addition, the Valencia group "engineered in," or artificially expressed, aequorin, a protein that oxidizes a substrate to produce light in the presence of high calcium concentrations. This system achieves the necessary voltage sensitivity, but unfortunately, the voltage only results in a single output: light emission. We felt that to achieve greater generality and interchangeability, this system needed to interact with the transcriptional machinery of the cell. From there, a voltage input could be transduced into an output of protein transcription.

Wild-type yeast possesses a mechanism to respond to calcium influx, which serves as a proxy for voltage through voltage-gated calcium channels. High calcium levels activate the enzyme calcineurin, which dephosphorylates the transcription factor Crz1 (pronounced "Crazy-1," so named for the observation that activation of this factor causes the yeast to go "crazy" in a general stress response). A mathematical model of this system can be seen in Figure 1. Once dephosphorylated, Crz1 is pumped into the nucleus, where it binds DNA and initiates transcription. Hundreds of genes have been shown to exhibit sensitivity to calcium stress via calcineurin and Crz1.

In 1999, Stathopoulos and colleagues set out to determine the binding site that Crz1 targets. They focused on the gene FKS2 in particular, and found that when all but a specific 23-base pair sequence was removed from the FKS2 promoter, calcineurin sensitivity remained comparable to that of the entire promoter. Reasoning that this small region contained the Crz1 binding site, Stathopoulos' team named it the calcineurin-dependent response element (CDRE). In 2002, Yoshimoto et al. took this work a step further with a bioinformatic approach, looking for homologous regions among all yeast promoters known to induce a transcriptional response in the presence of active calcineurin. This analysis yielded a small library of seven- and eight-base pair sequences. Crz1 is believed to bind these regions with differing affinities. Given the length of this sequence, as well as the size of the yeast genome, Crz1 has enough binding sites to activate every gene several times over. However, for a Crz1 binding event to result in transcription, the site must be properly situated within the gene's promoter. Specifically, it must be a viable upstream activating sequence. Only genes with a Crz1 binding site arranged in this particular way are affected by calcineurin.

Figure 1: A mathematical model of voltage-gated, calcium-controlled transcription and expression in S. cerevisiae. Note: degradation constants not shown.
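The cascade just described (voltage opens channels, calcium activates calcineurin, dephosphorylated Crz1 enters the nucleus and drives transcription) can be caricatured with a few coupled rate equations. The sketch below is a toy Euler integration; all rate constants are arbitrary illustrative values, not the fitted parameters of the model in Figure 1:

```python
# Toy Euler-integration sketch of the voltage -> Ca2+ -> calcineurin -> Crz1
# cascade. Rate constants are arbitrary illustrations, not fitted parameters.

def simulate(voltage_on, t_end=100.0, dt=0.01):
    """Integrate the cascade; voltage_on(t) says whether channels are open."""
    ca, crz1_nuc, rfp = 0.0, 0.0, 0.0
    k_in, k_pump = 1.0, 0.2     # Ca2+ influx (channels open) / efflux
    k_deph, k_exp = 0.5, 0.1    # Ca-dependent Crz1 nuclear import / export
    k_tx = 0.05                 # transcription driven by nuclear Crz1
    for i in range(int(t_end / dt)):
        influx = k_in if voltage_on(i * dt) else 0.0
        d_ca = influx - k_pump * ca
        d_crz = k_deph * ca * (1.0 - crz1_nuc) - k_exp * crz1_nuc
        ca += d_ca * dt
        crz1_nuc = min(1.0, max(0.0, crz1_nuc + d_crz * dt))
        rfp += k_tx * crz1_nuc * dt
    return rfp

rfp_short = simulate(lambda t: t < 10.0)  # brief voltage stimulus
rfp_long = simulate(lambda t: t < 60.0)   # prolonged voltage stimulus
print(rfp_short, rfp_long)                # longer stimulus -> more reporter
```

Even this crude model reproduces the qualitative behavior reported below: longer stimulation yields more reporter expression, because nuclear Crz1 persists while calcium remains elevated.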





Methods

A number of plasmid constructs—loops of foreign DNA that act somewhat like genetic programs—were tested in an effort to capitalize on yeast's natural voltage-mediated behavior. The first construct contained four consecutive copies of the CDRE from FKS2 described by Stathopoulos. Under the regulation of this promoter was a gene for red fluorescent protein. A second construct contained the PMC1 promoter (which contains a binding region described by Yoshimoto) regulating yellow fluorescent protein. A final modification to the S. cerevisiae cells dealt with their calcium vesicles. Since yeast can naturally sequester cytosolic calcium in vesicles, we used a knockout strain lacking the calcium pumps that fill these vesicles, reasoning that this would make the cells ultra-sensitive to calcium influx.

A precise measurement and voltage-application device was also desired to characterize the voltage response in the engineered yeast. For this application, we turned to microfluidics, a branch of engineering that deals with micron-scale channels conducting tiny volumes of water—potentially containing cells—with great precision. Rather than having an electrical lead exert a large, non-uniform voltage across a large population of mobile cells, a microfluidic device can direct the flow of cells through channels merely a single cell in width. Although microfluidic devices containing electrodes are common, most of them apply a specific voltage along a channel, typically for the purpose of propelling water through the tiny channels. In contrast, our application required a voltage across a channel. Because of this unique need, a microfluidic device was custom-fabricated at the Johns Hopkins Microfabrication Laboratory: 100-micron-wide channels were etched into a silicon wafer, and titanium and gold electrodes were patterned onto an oxide layer on top of the channels.
Of the 24 devices attempted (12 on a single wafer), only three functioned, all with poor performance. In the interest of time, we decided to use a 96-well-plate-compatible electroporator (coaxial electrode). Using the electroporator and our engineered cells, we sampled outputs across a range of voltage amplitudes and stimulus durations. Using confocal fluorescence microscopy, we found that with a six-volt threshold applied across half the well diameter, cells began to transcribe the fluorescent proteins. Due to the gating kinetics of calcium channels, the channels of a given cell at a given voltage tend to be all open or all closed, and applied membrane voltages of only tens of millivolts are usually needed to open them. However, we found that on a macro scale, transcriptional responses increased gradually with increasing voltage, and that much larger applied voltages are needed to alter the membrane potentials of the thousands of cells in a well. Furthermore, the voltage can only be applied for about 30 seconds, because a prolonged stimulus causes the cells to succumb to stress due to an inability to sequester calcium.
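The voltage sweep described above amounts to a small threshold-finding exercise. In the sketch below, the fluorescence/OD numbers are invented for illustration (chosen to echo the six-volt threshold reported above), not measured values:

```python
# Sketch: find the lowest applied voltage giving a clear transcriptional
# response. The fluorescence/OD values are hypothetical, not measured data.

def response_threshold(sweep, baseline, fold=2.0):
    """Return the lowest voltage whose signal exceeds fold x baseline."""
    for volts, signal in sorted(sweep.items()):
        if signal >= fold * baseline:
            return volts
    return None  # no voltage in the sweep produced a clear response

sweep = {0: 1.0, 2: 1.1, 4: 1.3, 6: 2.4, 8: 3.9, 10: 4.1}  # volts -> F/OD
print(response_threshold(sweep, baseline=1.0))  # -> 6
```

The same scan could be repeated over stimulus durations to map the full voltage-by-time response surface.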

Results

The effect of knocking out the PMC1 and VCX1 vesicular calcium pumps was evaluated. The data for both pump-positive and pump-negative cells are shown in Figures 2-5.


Figure 2: This plot shows the fluorescence intensity normalized by OD 8 hours after induction by electrostimulation in cells positive for vesicular calcium pumps and using the FKS2 UAS in the reporter’s promoter

Figure 3: This plot shows the fluorescence intensity normalized by OD 8 hours after induction by electrostimulation in cells negative for vesicular calcium pumps and using the FKS2 UAS in the reporter’s promoter

Figure 4: This plot shows the fluorescence intensity normalized by OD 8 hours after induction by electrostimulation in cells positive for vesicular calcium pumps and using the PMC1 UAS in the reporter’s promoter

Figure 5: This plot shows the fluorescence intensity normalized by OD 8 hours after induction by electrostimulation in cells negative for vesicular calcium pumps and using the PMC1 UAS in the reporter’s promoter




Figure 6: 0 seconds of electrostimulus on yeast with vesicular pumps

Figure 7: 60 seconds of electrostimulus on yeast with vesicular pumps

Figure 8: 110 seconds of electrostimulus on yeast with vesicular pumps

Figure 9: 60 seconds of electrostimulus on yeast without vesicular pumps

Figure 10: 90 seconds of electrostimulus on yeast without vesicular pumps
From this experiment, it can be concluded that the CDRE–RFP plasmid was successfully inserted into the PMC1/VCX1 knockout yeast strain lacking calcium vacuoles. It can also be established that the activation region of the CDRE–RFP system lies between 90 and 130 seconds of shocking, with a definite increase in expression level as the shock time increases (Figures 6-8). There is also no constitutive expression of CDRE–RFP, as no RFP is seen in the 0-second control. The overall expression of the system is weak, and only after 110 seconds do more than half of the cells in the frame begin to express. It was also observed that the cells lacking calcium pumps are smaller and less healthy-looking than the cells with the pumps, perhaps because they are less able to cope with the calcium influx induced by shocking and so become more strained and unhealthy. From the second set of images (Figures 9-11), we can see significantly higher expression levels, along with constitutive expression. The rate of increase in expression with time is also much higher, and the region where the CDRE is activated comes much earlier. We theorize that the increased expression seen in yeast containing vesicular pumps arises from a combination of two effects. First, because PMC1 and VCX1 are not knocked out, these cells can pump calcium out of the cytosol and into vacuoles, making them more resistant to calcium shock and hence healthier and better able to express RFP. Second, PMC1 and/or VCX1 may participate in feedback loops; since PMC1 is constitutively expressed, the pump-containing cells may have their RFP expression rates amplified by this effect. A combination of these two effects most likely causes the increased expression rates.


Figure 11: 130 seconds of electrostimulus on yeast without vesicular pumps

Conclusions

Our findings indicate that voltage applied across yeast carrying the construct we tested can induce gene expression. "BioBricks" refers to genetic material that conforms to the standardized format used by iGEM; such material is easily integrated with components developed by other iGEM teams, which in turn allows genetic engineers to control gene expression using voltage. Voltage control could serve as a powerful platform for future innovation because it provides an interface between electronic and biological systems. A computer could directly interface with the cells, orchestrating their behaviors from division to nutrient use to the production of chemicals or pharmaceuticals. This rudimentary system could be refined and extended to provide far more sophisticated levels of control than those demonstrated in the lab. By leveraging the variety of calcium channels and the varying sensitivities of the many Crz1 binding sites, a system could be created in which the cells' transduction is sensitive enough to discriminate between four volts and five volts. Perhaps at four volts cells would enter a senescent state, while at five they would begin to bud and divide rapidly. Or perhaps the yeast could be given an internal clock that lets them discriminate between a four-volt signal in the afternoon and a four-volt signal in the morning. There appear to be innumerable ways to increase the complexity and sophistication of the system. This work is a step forward in developing the micro-devices, plasmid constructs, and engineering practices that will continue to change the way we think about our ability to truly engineer biological systems.



References
1. Alekseev, A.E., N.I. Markevich, et al. (1996). "Comparative analysis of the kinetic characteristics of L-type calcium channels in cardiac cells of hibernators." Biophys J 70(2): 786-797.
2. Cachelin, A.B., J.E. De Peyer, et al. (1983). "Sodium channels in cultured cardiac cells." J Physiol 340: 389-401.
3. Cai, L., C.K. Dalal, et al. (2008). "Frequency-modulated nuclear localization bursts coordinate gene regulation." Nature 455(7212): 485-490.
4. Cooper, P.J., M.L. Ward, et al. (2001). "Metabolic consequences of a species difference in Gibbs free energy of Na+/Ca2+ exchange: rat versus guinea pig." Am J Physiol Regul Integr Comp Physiol 280(4): R1221-1229.
5. Cui, J. and J.A. Kaandorp (2006). "Mathematical modeling of calcium homeostasis in yeast cells." Cell Calcium 39(4): 337-348.
6. Cyert, M.S. (2003). "Calcineurin signaling in Saccharomyces cerevisiae: how yeast go crazy in response to stress." Biochem Biophys Res Commun 311(4): 1143-1150.
7. Hong, M.P., K. Vu, et al. "Cch1 restores intracellular Ca2+ in fungal cells during endoplasmic reticulum stress." J Biol Chem 285(14): 10951-10958.
8. Kugel, J.F. and J.A. Goodrich (1998). "Promoter escape limits the rate of RNA polymerase II transcription and is enhanced by TFIIE, TFIIH, and ATP on negatively supercoiled DNA." Proc Natl Acad Sci U S A 95(16): 9232-9237.
9. Letovsky, J. and W.S. Dynan (1989). "Measurement of the binding of transcription factor Sp1 to a single GC box recognition sequence." Nucleic Acids Res 17(7): 2639-2653.
10. MacDonald, S.H., P. Ruth, et al. (2006). "Increased large conductance calcium-activated potassium (BK) channel expression accompanied by STREX variant downregulation in the developing mouse CNS." BMC Dev Biol 6: 37.
11. Nakamura, T., Y. Liu, et al. (1993). "Protein phosphatase type 2B (calcineurin)-mediated, FK506-sensitive regulation of intracellular ions in yeast is an important determinant for adaptation to high salt stress conditions." EMBO J 12(11): 4063-4071.
12. Rivetta, A., C. Slayman, et al. (2005). "Quantitative modeling of chloride conductance in yeast TRK potassium transporters." Biophys J 89(4): 2412-2426.
13. Stathopoulos-Gerontides, A., J.J. Guo, et al. (1999). "Yeast calcineurin regulates nuclear localization of the Crz1p transcription factor through dephosphorylation." Genes Dev 13(7): 798-803.
14. Tang, W., A. Ruknudin, et al. (1995). "Functional expression of a vertebrate inwardly rectifying K+ channel in yeast." Mol Biol Cell 6(9): 1231-1240.






Understanding Greenhouse Gas Metrics in Climate Change Discussions: Developing a Greenhouse Gas Intercomparison Tool
Mark Brennan1,2, Benjamin Zaitchik3
1. International Studies Program 2. Department of Applied Math and Statistics 3. Department of Earth and Planetary Sciences

Abstract

The need for countries to standardize greenhouse gas contributions in order to discuss mitigation and adaptation responsibilities is increasing, with the Kyoto Protocol set to expire in 2012. Parties to the United Nations Framework Convention on Climate Change (UNFCCC) and the Protocol have begun negotiations for a second multilateral climate change regime. The Intergovernmental Panel on Climate Change (IPCC) and the UNFCCC have relied on the Global Warming Potential (GWP) as the predominant climate change metric, through Assessment Report 4 (AR4) and the 16th Conference of the Parties (COP-16), respectively. Despite decades of scientific development and refinement, metrics in their application are ultimately policy tools. We developed a simple tool to measure bulk emissions, radiative forcing, and global temperature change; it can be expanded to analyze atmospheric concentrations and sea level estimates. Our tool is entirely metric based, incorporating not just the predominant GWP but also a range of environmental change equivalency factors. The study concludes that greenhouse gas metrics can be applied to future and historical emissions trajectories in the form of a simple intercomparison tool.

Introduction

As greenhouse gas (GHG) metrics have evolved, they have developed from simple measurements of climate change, such as radiative forcing (RF) equivalence, into multi-gas substitution damage metrics. It is useful to view metrics in this logical 'emissions to damage' progression. For emissions inventories and mitigation, it has become traditional to equate other greenhouse gases with CO2 in terms of relative impact on an end-point concept such as RF or damage. "A metric allows emissions to be put on a common scale. In the context of climate change, the common scale is often called 'CO2-equivalent' emissions" (Fuglestvedt et al., 2009). This concept of equivalence is essential for widely applicable metrics that aim to quantify regional, state, and global emissions in the most appropriate ways. Climate change metrics are vehicles for viewing global environmental change, while greenhouse gas metrics are designed to equate different gases, as depicted in Table 1.

Table 1: Climate Change and GHG Metrics

Methods

Our tool interprets global anthropogenic change in terms of emissions, radiative forcing, and temperature change. Below are brief outlines of each climate change metric, along with the corresponding greenhouse gas metric used in this intercomparison tool.

Bulk Emissions: Meinshausen et al. (2009) discuss the advantages of bulk emissions; bulk emissions are the simplest emissions metric, in that they are simply an integrated historical emissions dataset. However, bulk emissions are distinct within the climate change metric discussion in that they rely on greenhouse gas metrics to make different gases 'comparable' in emissions datasets. There is a general linearity between bulk emissions and more complicated greenhouse gas metrics such as temperature change and RF. In policy making, however, bulk emissions—despite their simplicity—are somewhat abstract, and thus less useful. Our tool gives two bulk emissions estimates, using both a radiative forcing greenhouse gas metric and a temperature change greenhouse gas metric, for illustrative purposes. We use the conventional time horizon of 100 years for our metrics, but note that the choice of time horizon is an opportunity for qualitative judgment.

Radiative Forcing: Radiative forcing (RF) "is an index of the importance of the factor as a potential climate change mechanism… radiative forcing values are for changes relative to preindustrial conditions defined at 1750 and are expressed in watts per square metre (W/m2)" (IPCC AR4). Our tool finds radiative forcing through an equilibrium-warming constant and by treating the GWP as an endpoint metric. It is important to note that one can assume approximate linearity between global temperature and global radiative forcing (Joshi et al., 2003), through an equilibrium-warming constant (typically a value between 0.5 and 0.8), as shown in Equation 1.

ΔT = λ × RF (1)

(IPCC TAR). In validation studies, our tool concurs with the findings of the IPCC TAR, in that radiative forcing measurements backed out of temperature estimates (and vice versa) were appropriate. We used a parameter of λ = 0.8, as suggested in Shine et al. (2007). Additionally, we used a reconfigured GWP as an endpoint metric, applying it to datasets to give estimated end-point changes in RF. The GWP is defined as the integrated radiative forcing of a pulse emission of a unit of gas over a time horizon H (Shine et al., 2007). The Kyoto Protocol typically uses H = 100 years, in an attempt to appropriately weight both long- and short-lived gases. However, as we explore below, the time horizon is a qualitative policy judgment, and there is criticism about the overemphasis of short-lived gases.
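The GWP definition just given (integrated radiative forcing of a unit pulse over horizon H, relative to CO2) can be sketched numerically. The single-exponential decay model and the coefficients below are simplified placeholders chosen for illustration, not the IPCC's actual response functions or radiative efficiencies:

```python
import math

# Sketch of the GWP definition: AGWP_x(H) = integral from 0 to H of
# a_x * exp(-t / tau_x) dt, and GWP_x = AGWP_x / AGWP_CO2.
# Single-exponential decay and the coefficients below are illustrative
# simplifications, not the IPCC's actual values.

def agwp(a, tau, horizon):
    """Closed-form integral of a * exp(-t/tau) from 0 to horizon."""
    return a * tau * (1.0 - math.exp(-horizon / tau))

H = 100.0                                      # Kyoto-style horizon, years
agwp_co2 = agwp(a=1.0, tau=150.0, horizon=H)   # CO2 stand-in (CO2 has no
agwp_ch4 = agwp(a=150.0, tau=12.0, horizon=H)  # single real lifetime)
gwp_ch4 = agwp_ch4 / agwp_co2                  # dimensionless multiplier
print(round(gwp_ch4, 1))
```

Note how the choice of H drives the result: shortening the horizon raises the weight of a short-lived gas like methane relative to CO2, which is exactly the qualitative policy judgment discussed above.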

Environmental Change: Environmental change metrics vary, ranging from sea level rise estimates to temperature change estimates. We examine only temperature change in our tool, though simple sea level change assessments would be viable to incorporate. The concept of an average global air temperature metric was originally introduced in the Brazilian Proposal (UNFCCC, 1997; Den Elzen, 2005). Shine et al. introduced the current greenhouse gas temperature change metric, known as the Global Temperature Change Potential (GTP). It is distinctive in that it is designed as an endpoint metric, giving just the temperature change at an endpoint (Shine et al., 2007; Boucher et al., 2008). We use the GTP as the environmental change greenhouse gas metric in our tool, giving us estimates of temperature increase at predetermined endpoints. Additionally, as noted above, we can use a global equilibrium-warming constant to provide a second estimate of global temperature change: using the end-point GWP to estimate radiative forcing, our tool provides temperature change predictions.

Damage Metrics: With the goal of limiting emissions to meet a long-term climate target (radiative forcing, temperature, etc.), a cost-effectiveness metric becomes relevant (Fuglestvedt et al., 2009). Utilizing CO2-equivalents, such a metric would establish abatement costs for particular gases. Again, depending on the greenhouse gas equivalency, the time horizon (H), the time remaining to H, and the species abated, the cost-effectiveness metric will provide different quantifications for the same gases. One step further is the cost-benefit metric, which can estimate the "marginal damages caused by a metric ton of carbon dioxide emissions" (Tol et al., 2004). Anthoff terms this the "social cost of carbon," defined as "the social cost of an incremental emission of a greenhouse gas today" (2008).
We do not include greenhouse gas damage metrics in our tool because of their inaccuracy, but we note their importance as the most 'tangible' of the metrics. As damage metrics improve, simplified-metric-based intercomparison tools will be necessary to incorporate them.

Results & Discussion

Policy makers are constantly subject to qualitative and seemingly arbitrary choices of climate change and greenhouse gas metrics. Because bulk emissions are too abstract for policymaking, and uncertainty in damage metrics is too high, radiative forcing and environmental change (EC) metrics appear to be the most applicable. However, our tool provides policy makers with all possible climate change metrics to apply to datasets. Below are several validation studies applied to three datasets. Our tool and the datasets used allow us to analyze climate change attribution on a regional and sectoral level. For historical datasets, we used MIT's Joint Program on the Science and Policy of Global Change (JPSPGC) Technical Note No. 8 (2006) dataset, as well as the World Resources Institute's CAIT database. We interpolated within the MIT database to provide annual emissions estimates, but, because the MIT dataset did not extend to 2000, we used the CAIT dataset rather than extrapolating. For future emissions trajectories, we used the Representative Concentration Pathways (RCPs) used by the IPCC AR5. We defined our regions as the RCPs do: Asia, Latin America, Organization for Economic Cooperation and Development 1990 (OECD-90), Middle East & Africa, and Reforming Economies (RCP Database, 2009). All emissions profiles were segregated by gas.

Bulk Emissions Calculations: We looked at bulk emissions using GWP and GTP for CO2 equivalence. As seen in Figure 1, bulk emissions with different GHG metrics show significantly different emissions histories. The difference between the OECD-90 and ASIA emissions profiles is marked when the different metrics are contrasted. As noted above, much of the conflict over international climate change regimes stems from qualitative decisions in the policy-making process. By plainly illustrating the quantitative differences that result from qualitative judgments, our tool helps take some of the uncertainty out of climate change negotiations.
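The metric dependence of bulk emissions comes down to a weighted sum: the same per-gas inventory yields different CO2-equivalent totals under different metrics. In the sketch below the inventory is hypothetical, the GWP-100 weights follow AR4-style values, and the GTP-100 weights are rough illustrative figures, not published numbers.

```python
# Hypothetical annual inventory, Mt of each gas (illustrative only).
emissions = {"CO2": 30000.0, "CH4": 350.0, "N2O": 10.0}

# Metric weights (CO2 = 1 by construction). GWP-100 weights follow AR4;
# the GTP-100 weights are rough illustrations of how a 100-year endpoint
# metric discounts a short-lived gas like CH4.
gwp_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}
gtp_100 = {"CO2": 1.0, "CH4": 4.0, "N2O": 270.0}

def co2_equivalent(inventory, metric):
    """Bulk CO2-equivalent emissions under a chosen greenhouse gas metric."""
    return sum(mass * metric[gas] for gas, mass in inventory.items())

print(co2_equivalent(emissions, gwp_100))  # prints 41730.0 -- CH4 weighted 25x
print(co2_equivalent(emissions, gtp_100))  # prints 34100.0 -- CH4 nearly drops out
```

A methane-heavy region thus looks far more responsible under GWP-100 than under GTP-100, which is the qualitative judgment Figure 1 makes quantitative.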

Historical Responsibility and Verification: Estimates from case studies using our tool agree with much of the climate modeling literature, suggesting that our rudimentary model is robust. Below are several case studies in which we examined the concept of historical responsibility, in terms of contribution to radiative forcing and temperature increase in 2000 from pre-industrial levels. Our findings aligned with those of the published RCP and MATCH models. Table 2 is an excellent illustration of how our tool can approach climate change metrics through a variety of greenhouse gas metrics and measures, giving policy makers well-balanced analyses. It is important to note that we used the databases listed above, while the RCP team and MATCH used their own databases.

Specific Historical Responsibility: As noted above, the datasets we have used in our tool for illustrative purposes are broken down into five 'emissions regions' that, aggregated, represent global emissions. However, emissions profiles are available for most large countries for almost two centuries. For example, when examining American and Chinese contributions to global climate change, our tool suggests the United States contributed about 19.6% of warming in 2000, while China contributed 9.41%. The MATCH study similarly found that the US contributed 19% and China about 10.5% of warming, though with large margins of error.
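Attribution percentages of this kind are shares of a modeled total: each emitter's attributed temperature change divided by the sum over all emitters. A minimal sketch, with hypothetical ΔT values chosen only to land in the same ballpark as the figures quoted above:

```python
def warming_shares(delta_t):
    """Percent contribution of each emitter to total modeled warming.
    Input maps emitter -> temperature change (K) attributed to it."""
    total = sum(delta_t.values())
    return {name: 100.0 * dt / total for name, dt in delta_t.items()}

# Hypothetical attributed warming in 2000 (K); not our tool's actual output.
shares = warming_shares({"USA": 0.14, "China": 0.067, "Rest of world": 0.50})
```

The shares always sum to 100%, so the real work (and the real uncertainty) lies in the per-emitter ΔT attributions produced by the underlying model.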

Conclusion

Though the GWP-100 remains the most commonly used greenhouse gas metric, different international regimes will begin to require understanding of alternative greenhouse gas metrics. For example, a growing body of literature asserts the need for a micro-reorientation of climate change regimes and metrics (Barrett et al., 2008; Unger et al., 2009). This implies that different sectors and regions may use the metrics most applicable to themselves, and the standardized GWP-100 may only persist within the United Nations. AOSIS may employ a sea-level metric, developed countries may advocate for the short-lived-forcer-emphasizing GWP, and carbon-heavy cement makers may use a bulk emissions index. Thus, as countries, sectors, and regions begin to approach global climate change regimes, technical diplomacy will become increasingly metric-oriented as we attempt to understand and relate different emissions profiles. Our tool will aid in understanding the quantitative impacts of qualitative decisions, such as the choice of climate change metric or time horizon. Furthermore, it will provide policy makers with a simple model that can be applied to any historical or future dataset, giving quick and applicable estimates of global environmental change.

References
1. Anthoff, David. "Equity Weighting and the Marginal Damage Costs of Climate Change." Ecological Economics 68 (2009): 836-49. Web.
2. Boucher, O. "Climate Trade-off between Black Carbon and Carbon Dioxide Emissions." Energy Policy 36 (2008): 193-200. Web.
3. Climate Analysis Indicators Tool (CAIT). Web. 23 Mar. 2011. <http://cait.wri.org/>.
4. Den Elzen, Michel. "Analysing Countries' Contribution to Climate Change: Scientific and Policy-related Choices." Environmental Science & Policy 8 (2005): 614-36. Web.
5. Fuglestvedt, Jan. "Transport Impacts on Atmosphere and Climate: Metrics." Atmospheric Environment (2009): 1-30. Web.
6. Höhne, Niklas. "Contributions of Individual Countries' Emissions to Climate Change and Their Uncertainty." Climatic Change (2009). Web.
7. IPCC. Fourth Assessment Report: Working Group I. Publication. 2007. Print.
8. Joshi, M. "A Comparison of Climate Response to Different Radiative Forcings in Three General Circulation Models: Towards an Improved Metric of Climate Change." Climate Dynamics 20 (2003): 843-54. Web.
9. Meinshausen, Malte. "Greenhouse-gas Emission Targets for Limiting Global Warming to 2 °C." Nature 458.7242 (2009): 1158-162. Print.
10. "MIT Global Change Program | Technical Note 8." The MIT Joint Program on the Science and Policy of Global Change. Web. 23 Mar. 2011. <http://globalchange.mit.edu/pubs/abstract.php?publication_id=526>.
11. "RCP Database." IIASA - Laxenburg, Austria. (2009). Web. 06 Mar. 2011. <http://www.iiasa.ac.at/web apps/tnt/RcpDb/dsd?Action=htmlpage&page=about#regiondefs>.
12. Shine, Keith. "Comparing the Climate Effect of Emissions of Short- and Long-lived Climate Agents." Phil. Trans. R. Soc. 365 (2007): 1903-914. Web.
13. Tol, Richard. "How Much Damage Will Climate Change Do? Recent Estimates." Working Paper SCG-2, Research Unit Sustainability and Global Change, Hamburg University. Web.
14. Tol, Richard. "The Marginal Damage Costs of Carbon Dioxide Emissions: An Assessment of the Uncertainties." Energy Policy 33 (2005): 2064-074. Web.
15. "United Nations Parties & Observers." United Nations Framework Convention on Climate Change. Web. 11 May 2010. <http://unfccc.int/parties_and_observers/items/2704.php>.
16. Van Vuuren, Detlef. "Multi-gas Scenarios to Stabilize Radiative Forcing." Energy Economics 28 (2006): 102-20. Web.

Table 2: RF and Temperature Change in 2000 from Pre-Industrial Levels


