NIAID needs you because the world needs us! The National Institute of Allergy and Infectious Diseases (NIAID), one of the largest institutes of the National Institutes of Health (NIH), conducts and supports a global program of biomedical research to better understand, treat, and ultimately prevent infectious, immunologic, and allergic diseases. NIAID is a world leader in areas such as HIV/AIDS, pandemic and seasonal influenza, malaria, and more. Advance your career while making a difference in the lives of millions! NIAID has opportunities for an array of career stages and types of research, including clinical and research training programs that provide college students and recent graduates an opportunity to work side-by-side with world-renowned scientists who are committed to improving global health in the 21st century. Your individual talents may help us complete our mission. This is your chance to get involved.
National Institute of Allergy and Infectious Diseases Take the first step in advancing your future and apply for a training program today.
Help Us Help Millions
U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health
Check out all of NIAID’s opportunities at www.niaid.nih.gov/careers/jhu.
National Institute of Allergy and Infectious Diseases Proud to be Equal Opportunity Employers
a letter from the editors
A little over a year ago and barely a generation after the Civil Rights Act, the United States elected its first African-American president. Barack Obama based his campaign for the White House on the idea of forging change that was more than skin deep. This lasting change would be essential, since the problems awaiting the new Commander-in-Chief were mounting: two internationally unpopular wars fueling a growing deficit, the need to take action to address environmental concerns, the recent bailouts of large financial institutions feeding fears of a deepening recession, the cries to rejuvenate a dying healthcare system, and the lingering feeling that America was slowly receding from its international dominance. How would this new era play out under a new brand of president and his promise of a different kind of politics?
This eleventh volume of the Hopkins Undergraduate Research Journal features a snapshot of these issues, which, even a year into Obama's presidency, continue to affect the lives of citizens both domestically and around the world. Drawing from a range of interests pursued by students at the Johns Hopkins University, this volume explores some of the effects of policy on health and technology at home and the relationship of the US with its neighbors abroad. Alongside these pieces are the continued investigations in the humanities concerning the rise of evangelicalism and the representation of Jews in Argentina, as well as the science and engineering pieces that embody the quality and diversity of research that Hopkins is renowned for.
HURJ aims to offer a unique opportunity to undergraduates by providing a venue to present their research to a wider audience. The undergraduates contributing to this volume come from a variety of disciplines, and we would like to thank them for their dedicated work on these pieces. We would also like to extend our thanks, as always, to the Student Activities Commission for their continued support. Also, many thanks to our hardworking staff members, old and new, who have made this issue possible. We hope that you come curious and leave the pages of this journal stimulated.
Sending our best,
Johnson Ukken Editor-in-Chief, Content
Paige Robson Editor-in-Chief, Layout
Karun Arora Editor-in-Chief, Operations
table of contents spring 2010
focus: america's new era
pg. 14 America's Changing International Role Paul Grossinger
pg. 19 Biofuels and Land Use Changes: Flawed Carbon Accounting Leading to Greater Greenhouse Gas Emissions and Lost Carbon Sequestration Opportunity Julia Blocher
pg. 23 Beyond Tokyo: Emissions Trading in America Toshiro Baum
pg. 31 Stem Cell Act Spurs New Age for Medicine Nezar Alsaeedi
spotlights on research
7 Economic Ramifications of Drug Prohibition Calvin Price
9 Role of Adenosine and Related Ectopeptidases in Tumor Evasion of Immune Response Carolyn Rosinsky
11 What Time Is It Now? 2PM and Global Social Movements Isaac Jilbert
science
33 Micro-RNA: A New Molecular Dogma Robert Dilley
35 Double Chooz: A Study in Muon Reconstruction Leela Chakravarti
humanities
42 Resigned to the Fringes: An Analysis of Self-Representations of Argentine Jews in Short Stories and Films Helen Goldberg
46 Innovation & Stagnation in Modern Evangelical Christianity Nicole Overley
engineering
52 Fractal Image Compression on the Graphics Card August Sodara
56 Robotic Prosthetic Development: The Advent of the DEKA Arm Kyle Baker
hurj 2009-2010
hurj's editorial board
Editor-in-Chief, Content: Johnson Ukken
Editor-in-Chief, Layout: Paige Robson
Editor-in-Chief, Operations: Karun Arora
Content Editors: Budri Abubaker-Sharif, Ayesha Afzal, Leela Chakravarti, Isaac Jilbert, Mike Lou
Layout Editors: Kelly Chuang, Sanjit Datta, Edward Kim, Lay Kodama, Michaela Vaporis
Copy Editors: Haley Deutsch, Mary Han, Andi Shahu
PR/Advertising: Javaneh Jabbari
Webmaster: Ehsan Dowlati
hurj’s writing staff Nezar Alsaeedi Kyle Baker Toshiro Baum Julia Blocher Leela Chakravarti Robert Dilley Helen Goldberg
Paul Grossinger Isaac Jilbert Nicole Overley Calvin Price Carolyn Rosinsky August Sodara Cover & Back Cover by Karam Han
Photographer/Graphic Design Sarah Frank
about hurj:
The Hopkins Undergraduate Research Journal provides undergraduates with a valuable resource for accessing research done by their peers and for exploring interesting current issues. The journal comprises five sections: a main focus topic, spotlights, and current research in engineering, humanities, and science. Students are highly encouraged to submit their original work.
disclaimer: The views expressed in this publication are those of the authors and do not constitute the opinion of the Hopkins Undergraduate Research Journal.
contact us: Hopkins Undergraduate Research Journal Mattin Center, Suite 210 3400 N Charles St Baltimore, MD 21218 hurj@jhu.edu http://www.jhu.edu/hurj
can you see yourself in hurj? share your research! now accepting submissions for our fall 2010 issue focus -- humanities -- science -- spotlight -- engineering
spotlight
The Economic Ramifications of Drug Prohibition Calvin Price / Staff Writer When California Assemblyman Tom Ammiano introduced the Marijuana Control, Regulation, and Education Act in early 2009, he was challenging decades-old US policy and popular thought that considered drug legalization harmful to society. With the recent recession intensifying California’s budget crisis, the bill was designed to supply much-needed capital to Sacramento, without raising taxes or stifling the already weakened economy. The reasoning behind the bill was to choose the lesser of two evils, suggesting that the negative impact of the “societal ill” of increased marijuana use was less than the gains that could be made by injecting well over one billion dollars into a state government that, at times, cannot even pay its own employees. Marijuana is at the forefront of the drug debate because it is relatively safe, both for individuals and communities when compared not only to other illegal drugs, but even to alcohol or tobacco. Supporters of marijuana legalization argue that marijuana, unlike these already legal drugs, is not physically addictive, and its use is less likely to cause death to others than alcohol (driving under the influence of marijuana, while more dangerous than driving sober, is less dangerous than driving drunk) , and also seems to cause diseases such as lung cancer or emphysema less than tobacco. For these reasons, marijuana is the only currently illegal drug for which there is serious political debate as to its legal-
ization. In this time of serious economic issues, though, should marijuana be the sole drug considered for legalization? Harvard economist Jeffrey Miron estimates that the legalization of all drugs would net the federal government over $70 billion per year, more than half of that from decreased law enforcement spending (funding the DEA, and providing aid to the governments of Mexico and Colombia for their help in fighting the drug war). In a day of soaring deficits when funding cannot even be found for serious health reform, that kind of capital is sorely needed in Washington and it may just be possible to obtain it without raising taxes. Unfortunately, most of the cash crop drugs (such as cocaine, heroin, and methamphetamine) are far more dangerous than marijuana: they cause severe addiction problems, carry a risk of overdose, and pose dangers to others. Any legalization would have to come with government programs such as those seen in Portugal and the Czech Republic, with serious investment in rehabilitation and recovery for addicts.
$70 BILLION: "The legalization of all drugs would net the federal government over $70 billion per year." — Jeffrey Miron, Harvard
These programs have been a resounding success in the countries where they have been instituted. Though Portugal has decriminalized all drug use, its serious investment in rehabilitation and education has decreased the number of "problem drug users" (people whose contribution to society is lessened or negative due to their drug use) to a level below that seen in other developed countries. Deaths relating to drug use in Portugal have been more than halved recently, while HIV rates due to drug use have dropped and the number of drug users seeking treatment for addiction has doubled. Perhaps most surprising, however, is that drug use as a whole has declined, suggesting that treatment and education could be more effective tools than law enforcement. Though this can be attributed to Portugal's investment in rehabilitation, much of the credit has to be given to their drug policies. When people do not fear imprisonment for letting the government know they are drug users, they are much more likely to seek treatment. Portugal's drug policies are not only a social good, they also help Portugal's general economy by preventing non-problem users from becoming addicts, allowing them to remain in the workforce and increase general productivity. There are certainly a number of arguments against legalization of drugs. If Portugal had legalized drug use instead of simply decriminalizing it, there likely would have been an increase in the overall prevalence of drugs, because it would be legal to sell them as well. On the other hand, drug use is still nontaxable when not legalized, meaning that Portugal is not collecting a large amount of money that could go to the public good. With the correct preventative measures, it may indeed be possible for the United States to legalize drug use and reap the positive benefits from it (more money for the government, decreased problem users, decreased mortality rate), while limiting the obvious negative side effects.
Unfortunately, for the past 40 years, discussions by politicians on the issue of drug legalization have been limited to legislation like Ammiano's, the work of a state politician from one of the country's perceived wackiest cities (Ammiano, a Democrat, represents California's 13th district, which includes San Francisco). It is doubtful that he has the political capital to even bring the discussion forward. Though drug use has enough downsides to warrant its prohibition, the fact that there is nearly no political discussion of drug legalization, which has serious advantages, represents a failure of the political system. There has been practically no discussion on the topic for 40 years, and the current administration wants to continue that trend, with President Obama's Drug Czar Gil Kerlikowske saying, "legalization vocabulary doesn't exist for me, and it was made clear that it doesn't exist in President Obama's vocabulary." One of the greatest mistakes we can make, as a society, is to not consider the possible advantages of change. It is a folly that, for certain issues, we have always allowed and, apparently, always will.
References:
1. Ammiano, Tom. Assembly Bill No. 390. 23 February 2009. http://www.leginfo.ca.gov/pub/09-10/bill/asm/ab_0351-0400/ab_390_bill_20090223_introduced.pdf
2. G. Chesher and M. Longo. Cannabis and alcohol in motor vehicle accidents. In: F. Grotenhermen and E. Russo (Eds.) Cannabis and Cannabinoids: Pharmacology, Toxicology, and Therapeutic Potential. 2002. New York: Haworth Press. Pages 313-323.
3. Miron, Jeffrey A. Drug War Crimes: The Consequences of Drug Prohibition. 2004. Michigan: University of Michigan Press.
4. Greenwald, Glenn. Drug Decriminalization in Portugal: Lessons for Creating Fair and Successful Drug Policies. 2009. Cato Institute. Page 3.
5. Greenwald, Glenn, et al. "Lessons for Creating Fair and Successful Drug Policies". Drug Decriminalization in Portugal. 3 April 2009. Cato Institute. http://www.cato.org/pubs/wtpapers/greenwald_whitepaper.pdf.
6. Peele, Stanton. The Five Stages of Grief over Obama's Drug Policies. 8 August 2009. http://www.huffingtonpost.com/stanton-peele/thefive-stages-of-grief_b_254134.html
spotlight
Role of Adenosine and Related Ectopeptidases in Tumor Evasion of Immune Response Carolyn Rosinsky / Staff Writer Most cancer treatments used today work mainly by targeting cancer cells and inhibiting their natural metabolisms and abilities to divide and proliferate. The problem with many of these treatments is that they do not target cancer cells specifically enough, damaging non-cancer cells as well. Tumor immunology is a field that strives to elucidate better methods for cancer treatment by enhancing and using the mechanisms of our natural immune systems to fight cancer, rather than using outside agents to target cancer cells. One focus of tumor immunology is the signaling pathway between the tumor and the immune system. Tumor signals can affect the immune system either defensively, protecting the tumor from immune responses, or offensively, effectively shutting down the immune response before it begins acting on the tumor. I research this signaling system under Christian Meyer, Ph.D, in the lab of Jonathan Powell, M.D. Ph.D, a professor of oncology at the Johns Hopkins School of Medicine. We study adenosine, one of the cell's basic nucleosides. Adenosine helps make up DNA, RNA, and ATP (adenosine triphosphate), the energy molecule of cells. Adenosine is also an integral regulator in immune response. It modulates the immune system's T-cell activity by suppressing the immune response of T-cells when adenosine levels are high. When tissue is damaged, the initial release of adenosine from the damaged cells allows for an immune response, but as the adenosine level rises, the response begins to fall off, protecting the tissue from excessive immune activity. In the course of normal inflammation and immune activity, adenosine generation helps limit the
extent of immune activation to prevent damage (1). However, in the microenvironment of a tumor, adenosine is also released in large quantities by damaged malignant cells signaling an immune response, which, in turn, can create a negative feedback loop, allowing the cancer to grow (2). This provides a mechanism by which tumors can both avoid and dampen the immune response. Whereas the adenosine receptor-suppression of T-cells protects tissue from excessive inflammation, it may cause the immune response to stop before T-cells have effectively protected tissue from pathogens. In the case of cancer cells, the oxygen-deprived tumor and damaged vasculature of the tumor mass cause adenosine to be released, thereby halting the T-cell response that could fight the tumor. Drugs that block the adenosine receptor stop premature T-cell inhibition, allowing anti-tumor T-cells to reject the tumor, causing a decrease in tumor growth, metastases, and vasculature (3). Two cell surface ectopeptidases (enzymes that break down amino acid
chains) involved in adenosine signaling are CD73, which is involved in converting AMP (adenosine monophosphate) to adenosine, and CD39, which is involved in converting ATP to AMP. We have been investigating the role that these surface markers play in tumor immunosuppression. Specifically, we have been examining the expression of these markers in BRCA-1-associated breast cancers in a stem-cell-enriched and a non-stem-cell enriched mouse cancer cell line, as well as in human lung cancer cell lines. We surveyed cell lines in vitro and in vivo to determine expression patterns of CD39 and CD73. Expression of CD73 is higher in the stem-cell enriched breast cancer line, and seems confined mainly to the stem cell population. Additionally, the check-point inhibitor PD-1 ligand prevents immune activation. We have shown that breast cancer cells highly express CD73 and PD1-L, thereby facilitating the conversion of ATP to adenosine, which, in the large quantities thus produced, halts the immune response prematurely. In vivo, we have surveyed the expression of these markers in tumor growth in immune-compromised mice. Tumor expression of the ectopeptidases increased in these mice. These models show that the up-regulation of surface expression of CD73, CD39, and PD1-L does not depend on the T- and B-cells of the immune system being present. This suggests that the up-regulation of these markers is not a defensive response against the immune system, but rather an innate mechanism for survival. We also injected mouse Lewis lung carcinoma into mice without the adenosine receptor gene and into mice treated with a drug that blocks the receptor. In these models, survival and tumor-free rates went up. This suggests that inhibition of the adenosine receptor genetically or pharmacologically potentiates immune responses in vivo, and solidifies the crucial role
of adenosine as a modulator in tumor survival mechanisms used to halt the immune response. We are currently investigating the in vivo effects of enhancing vaccines against tumors by blocking the adenosine receptor. Additionally, we have begun investigating lung cancer for these markers by screening human lung cancer lines and primary lung cancer tissue samples. Our results have shown an up-regulation of both CD73 and PD1-L, suggesting that these cancers use similar immune-suppression mechanisms to those outlined above. Our findings offer some elucidation of how cancer cells stop the immune system from fighting cancer in the same way that it fights and defeats other diseases. Drugs that block the breakdown and signaling of adenosine are an exciting, though still developing, new front to cancer therapy.
References
1. Sitkovsky M, Lukashev D, Deaglio S, Dwyer K, Robson SC, Ohta A. Adenosine A2A receptor antagonists: blockade of adenosinergic effects and T regulatory cells. Br J Pharmacol. 2008 Mar;153 Suppl 1:S457-64. Pubmed.
2. Lukashev D, Ohta A, Sitkovsky M. Hypoxia-dependent anti-inflammatory pathways in protection of cancerous tissues. Cancer Metastasis Rev. 2007 Jun;26(2):273-9. Pubmed.
3. Ohta A, Gorelik E, Prasad SJ, Ronchese F, Lukashev D, Wong MK, Huang X, Caldwell S, Liu K, Smith P, Chen JF, Jackson EK, Apasov S, Abrams S, Sitkovsky M. A2A adenosine receptor protects tumors from antitumor T cells. Proc Natl Acad Sci U S A. 2006 Aug 29;103(35):13132-7. Epub 2006 Aug 17. Pubmed.
spotlight
What Time Is It Now? 2PM and Global Social Movements Isaac Jilbert / Spotlight Editor
Photo courtesy of AllKPop.com
On September 7th, 2009, Korea was rocked to its very core as a clash of civilizations gripped the country. Western values ran into those of a more conservative Korean culture as Park Jae Beom left the male idol group 2PM. Although few noticed in the United States, his departure precipitated an international uproar and, most interestingly, a social movement dedicated solely towards encouraging him to return to the group. Although admittedly an unusual topic of study, the very oddity of the circumstances surrounding Park's withdrawal from 2PM, and of the social movement that arose out of it, is the reason such a movement deserves attention. Park Jae Beom is actually a Korean-American from Seattle who, during his teenage years, was recruited by the entertainment company JYP Entertainment to move to South Korea and train to become a Korean pop music singer. With six other members, Park became the leader of the group 2PM. 2PM's first songs were released in late 2008 and met huge success, not only in the Korean market, but also all throughout the Hallyu, or "Korean Wave," market, which includes Japan and much of Southeast Asia, as well as pockets of the rest of the world. Park did not originally speak Korean or know anyone in Korea, however, and thus felt alienated for many years during his training. He naturally stayed in communication with his friends back in the States, but at one point in 2005, wrote to his friend via a MySpace wall, "Korea is gay. I hate Koreans. I want to come back like no other." After a fan sifted through his past posts four years later on his still public MySpace site, this past quote came up and was translated into Korean, where a highly nationalistic public met the translated
comments with great disapproval. Merely four days after the "news" had broken, a petition circulated the internet, garnering some 3,000 signatures, which called for Park Jae Beom to commit suicide. Not even an apology that was meant to explain away the wall post as a difference in cultural values (where Americans often use the words "gay" and "hate" simply as exaggerators and descriptors) and as a mistake of a disgruntled youth could quiet the fervor, and Park left 2PM to return to Seattle only four days after the original news hit the internet.
Of course, the initial question is how this constitutes a social movement. Judged by the guidelines provided by Sidney Tarrow, who defines social movements as "collective challenges, based on common purposes and social solidarities, in sustained interaction with elites, opponents, and authorities," the withdrawal of Park Jae Beom from 2PM did, in fact, cause a social movement to quickly form and shake the Korean music industry. What is remarkable about the situation is the sudden reversal of fans' attitudes. As soon as Park left for Seattle, fans regretted their actions and launched a sustained effort to try and bring him back. The movement to bring Park back remarkably exists to this day, accounting for the sustainability aspect of Tarrow's definition. But what is truly remarkable is how fans organized their protests. The rise of the internet as a networking and social tool is crucial to understanding how fans could decorate the headquarters of JYP Entertainment with 15,000 roses in protest. In addition, 2,000 fans were able to organize outside of the headquarters for a silent protest which prompted a police presence. Perhaps most remarkable,
however, are the fans’ “flash mob” protests, which have been held in major cities across the world, including Vancouver, New York, Boston, and Toronto. This form of protest often involves a group of people, organized through a fan website, to go to a particular location and dance for a bit, then disperse as if nothing happened. However, it is not just casual dancing, but rather choreographed, coordinated, high quality dancing to one of 2PM’s songs. Thus, one witnesses through the power of the internet just how the modern social movement occurs. The entire movement and the original cause seem perplexing to say the least, but this is why such a movement is worth study. When one looks through internet sites and finds how the protests are organized, one witnesses just how a modern social movement is organized. Although the Jae Beom Movement does not fight against great social injustices on the magnitude of the Civil Rights Movement or the Feminist Movement, one sees that perhaps social movements can be frivolous and petty, or maybe that thousands can come together for the justice and love of one man. Regardless, through
this movement, one can examine how the internet can change the landscape, whether it is in Iran, China, or Korea.
References
1. Korean. "2PM, Jaebeom, and Korea's Internet Culture"; available from http://askakorean.blogspot.com/2009/12/2pm-jaebeom-andkoreas-internet-culture.html; Internet; accessed 22 March 2009.
2. RHYELEE. "Antis Create a Suicide Petition for 2PMs Jaebeom"; available from http://www.allkpop.com/2009/09/antis_create_a_suicide_petition_for_2pms_jaebeom; Internet; accessed 7 September 2009.
3. JOHNNYDORAMA. "2PM Jaebeom Apologizes"; available from http://www.allkpop.com/2009/09/2pm_jaebeom_apologizes; Internet; accessed 14 September 2009.
4. BEASTMODE. "15000 Origami Roses for Jaebeom's Return"; available from http://www.allkpop.com/2009/10/15000_origami_roses_for_jaebeoms_return; Internet; accessed 26 October 2009.
5. BEASTMODE. "2000 Mobilize at JYPE Building in a Silent Protest"; available from http://www.allkpop.com/2009/09/2000_mobilize_at_jype_building_in_a_silent_protest; Internet; accessed 14 September 2000.
6. RAMENINMYBOWL. "Hottests Around the World are 'Flashing' for Jaebeom"; available from http://www.allkpop.com/2009/10/hottests_around_the_world_are_flashing_for_jaebeom; Internet; accessed 20 October 2009.
A display of 15,000 origami roses on the JYP Entertainment building created by fans in support of Park Jae Beom. - Courtesy of AllKPop
focus: america’s new era america, like many countries, has seen rapid change in recent years economically, scientifically, environmentally, & politically.
hopkins’ undergrads have taken note and have compiled this issue’s focus on
america’s new era.
focus
AMERICA’S
Photo by Sarah Frank
CHANGING INTERNATIONAL ROLE Paul Grossinger / Staff Writer In the last three decades, the world has seen several seismic shifts in the division of power among various nationstates and international institutions. From the end of the Second World War until the late 1980’s, the Soviet Union and its Warsaw Pact of Communist states were seen as the chief rivals of the United States. This “bipolar” system of global affairs was radically redefined after the USSR’s collapse in 1991, at which point America became the primary, unchallenged power in a “unipolar” system.i However, this paradigm has begun to change in the last few years, due to the rise of potential rivals to the United States. It is a well-recognized fact that the United States’ stranglehold on economic, political, and military power has weakened over the last few years. This is not to say that America itself has weakened; indeed, through the year 2009, the United States has sustained
over a decade of consistent economic growth and has retained its key alliances.ii However, its relative power has declined, because of the dramatic rise of competitor states and institutions, most notably the European Union, China, India, Brazil, and Russia. For example, while America's annual GDP in 1999 was ten times that of China's, the Chinese economy grew by 800% in the last ten years.1 Other emerging states, such as India and Russia, have shown similarly exponential growth, thus decreasing the economic gap between these countries and the United States.2 Therefore, although the growth of the American economy was, overall, robust, several other states neatly bridged a large part of the economic gap between themselves and the superpower. It is likely that if current trends continue, China may surpass the United States in total national GDP in the near future.
i. To clarify, uni-polarity is a world system where one state dominates all others whereas a bipolar system is one dominated by a rivalry between two superpowers.
ii. Despite the recession between December 2007 and August 2009, the United States has still maintained a consistent economic growth curve since 1999.
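The "if current trends continue" claim above is, at bottom, a compounding-growth calculation. The short sketch below is only illustrative: it takes the article's two figures (a roughly 10:1 US-to-China GDP ratio in 1999 and roughly 800% Chinese growth over the following decade, equivalent to about 25% per year) and assumes, purely for the sake of the example, a 5% annual US growth rate; the projected crossover date is not the author's forecast.

# Illustrative compounding-growth sketch for the "China may surpass the US" claim.
# The 10:1 starting ratio and ~800% decade growth come from the article; the 5%
# US growth rate and the decision to extrapolate both rates are assumptions.

us_gdp, china_gdp = 10.0, 1.0        # 1999 GDPs in arbitrary units (10:1 ratio)
china_rate = 9.0 ** (1 / 10) - 1     # ~24.6%/yr, equivalent to +800% over 10 years
us_rate = 0.05                       # assumed US nominal growth rate

year = 1999
while us_gdp > china_gdp:
    us_gdp *= 1 + us_rate
    china_gdp *= 1 + china_rate
    year += 1

print(f"Under these assumed rates, China's GDP passes the US's around {year}.")
# With these inputs the crossover lands around 2013; slower extrapolated Chinese
# growth pushes it later, which is why the article hedges with "may".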
What does this mean for American interests? How does the world's richest and most politically influential superpower use its remaining time alone atop the global totem pole to shape its future role in world affairs? Unlike earlier pseudo-hegemonic states such as Great Britain, which faced their up-and-coming rivals while already in decline, the United States has the opportunity to shape a future place amongst (or above) its rivals while still at the peak of its powers. This article suggests that the way the United States should maintain its hegemony is to forge individualized partnerships with each of these potential rivals, thus binding them within a US-controlled world system that I will term "uni-multipolarity."
THE THEORY
It is hard to define a theory using a term as contradictory as 'uni-multipolarity'. After all, the two terms that comprise it, unipolar and multipolar, represent two opposing world systems.
Unipolarity is a system dominated by an unrivaled superpower, and multipolarity is one controlled by powers centered at various regional poles. From the American strategic perspective, the rise of these new powers means that the unipolarity that defined the 1990s is no longer possible because, while the United States remains far stronger on a global scale than any other state, the EU, China, India, Russia, and even Brazil are growing briskly and can now challenge America's dictates at their individual regional levels. However, with that said, the pursuit of full multipolarity would not make sense as a geopolitical objective for the United States, because it would either surrender the tremendous power advantage America currently possesses, or ruin its ability to persuade other states to pursue American interests at an international level. Instead, the foreign policy strategy that makes the most sense for American interests is uni-multipolarity. So, what is uni-multipolarity, aside from a hyphenated contradiction? This term attempts to describe a power system in which the United States uses its capacity as a global superpower to act as a hegemonic balancer between regionally dominant emerging powers. It differs from a pure unipolar system in that these states have the power to challenge the United States' hegemony at a regional level. At the same time, this system is far more stable than a chaotic multipolar system because of the balancing effect of the global superpower. How can the United States use its unique combination of hard and soft power to achieve uni-multipolarity? Broadly speaking, America should work to maintain its economic, technological, and military edge over emerging states, while simultaneously using its soft power to create unique partnerships with each of these states. In this way, the US can make itself the indispensable "glue state" within the international system, moving from the position of a stand-alone hegemony to a superpower mediator whose cooperation is needed for any and all major international projects. If it can achieve this, then America will have managed to retain its power and influence while accommodating and benefiting from the inevitable "rise of the rest" of these new game changers on the international stage. Importantly, however, this approach is not a carte blanche collective doctrine, and instead relies on individual treatment of each of these major states, so let us deal in brief with how the US should approach creating long-term bilateral partnerships with each of these emerging powers.
THE EUROPEAN UNION Because of the intertwined histories of the United States and Europe, and the vast array of treaties between them, the task of changing the dynamics of their bilateral relationship promises to be one of the trickiest America will face in the next decade. This is mainly because the EU, despite the efforts of the most ardent Euro-integrationists, remains a voluntary institutional grouping of states, each having an independent relationship with the United States.3 However, this very difficulty is also what makes the EU one of the best potential allies of the United States in the future; since most of its member-states are already close American allies. Within this group are traditional American allies, including Britain, Germany, and Greece, as well as several former Soviet satellites such as Poland, Hungary, and the Czech Republic, who have entered NATO. Over the next decade, America should seek to maintain the strength of its traditional alliances, particularly those with Britain and Germany, through continued cooperation on relevant economic and foreign policy concerns. In addition to keeping up these essential partnerships, and working to prevent major rifts with occasionally estranged allies such as France, the United States should seek to actively build its relationships with Eastern European states by integrating them further into joint security pacts and expanding trade with them. In its relations with the European Commission, the executive branch of the EU, America should seek deep cooperation and accommodation in areas of mutual interest, while still pursuing its own goals if interests differ. The United States should also seek to break down as many trade barriers as possible, in order to increase the mutual profitability of bilateral trade. In addition, the United States must cooperate effectively on international ventures of joint interest, which range from the continued protection of Kosovo (which is currently being turned over from NATO to EU forces) to joint investment ventures in Africa. When relations reach potential snags, such as the role of NATO versus the common EU Defense Policy and America’s independent alliances with the EU’s eastern member states, the US should continue to guard these interests and pursue increased cooperation while still maintaining a sustained and active interest in the continued growth and integration of Europe. America should sustain this effective working relationship with the European Commission because, when combined with close alliances to individual member-states, this relationship will make American cooperation essential for any major EU actions outside of the purely domestic realm.
CHINA
If the potential for building a US-EU relationship is tricky because of complicated preexisting relationships, the potential for relationship building with China is difficult for opposite reasons: a combination of weak pre-existing ties and striking cultural differences. However, of all the United States' future foreign policy priorities, developing a positive relationship with China, and integrating it into the network of global institutions, should be right at the top of the list. Despite the economic strength of the EU, the growth of Brazil and India, and the resurgence of Russia as a relevant regional power, only China is poised to rival the US as a geopolitical force in the near future. Because of their locations, economic and political situations, and pre-existing relationships, it makes sense for America and China to work together. From the US perspective, a post-Cold War lesson is that, however powerful it may be, America lacks the strength to run the world by itself. It needs powerful regional partners to represent its interests in each region. China, because of its power and rapid growth, is a perfect partner for America in this regard. This is especially true because, while China is growing at a record pace, the prospect of singular Chinese domination evokes hatred and fear in its neighbors, particularly economic powerhouses Japan and South Korea.4 Because of these issues, China needs to be at least loosely connected to the strong US alliances with its rival Asian powers. Therefore, because the US needs China's influence to balance the region and keep North Korea and Russia in check, a direct, bilateral foreign policy agreement or 'understanding' to cooperate is the best policy for both states. Economically, bilateral cooperation between America and China is even more essential. As it currently stands, the two nations' economies already dovetail, with American consumption sustaining Chinese production and Chinese products and money sustaining the American economy and debt markets. While America's economic dependence on China, particularly as it relates to American debt, often gets the most press, the reality is that both sides would suffer if this system of economic cooperation fell apart.5 If the American consumer economy and the dollar collapsed overnight, China would see its manufacturing sector collapse, while losing the cash reserves to provide stimulus to deal with the problem.6 Because of this mutual co-dependency on both the economic and foreign policy fronts, the US should seek both to sign several bilateral agreements aimed at cooperation and to integrate China more fully into international institutions. In so doing, it will stimulate economic growth, while also making China's economy more dependent on its global partners–particularly the United States.
RUSSIA
Russia represents a unique foreign policy paradox: how does one deal with a state whose power and influence are both highly under and over-estimated at the same time? On one hand, it is easy to dismiss Russia: she has a one-horse energy economy with a weak conventional military, a declining population, and no key international alliances outside the old Soviet sphere to speak of. However, she is also territorially enormous, possesses a large nuclear arsenal, and is important to US influence in Eastern Europe, Central Asia, and the Middle East. Furthermore, along with India and Japan, she could serve as an essential partner for the United States in balancing the expansion of Chinese regional influence. While many veterans of the Cold War tend to consider Russian policy over the last decade as trending towards the brinkmanship politics of yesteryear, the evidence actually suggests that ‘Putinist’ politics, named for Russia’s current Prime Minister Vladimir Putin, have a uniquely czarist streak. Whereas the Soviet Union sought to spread Communism internationally and engaged in a superpower struggle to dominate the globe, Russia today seems more focused on rebuilding a sphere of influence and becoming an independent great power. As such, the United States should attempt to engage Russia on issues within what is considered to be its sphere: Eastern Europe and Central Asia. This does not mean that the US should abandon its newer allies in Eastern Europe. What it means is that America should abandon policies that unnecessarily antagonize Russia, such as missile defense plans in Eastern Europe, and instead positively engage with Russia on issues of mutual necessity. One good example of how this could work in practice concerns Iran’s nuclear program, which Russia has recently protected at the UN solely as a counterweight to American power in the region. However, despite this recent support, a nuclear Iran is hardly in Russia’s interest, so if America can engage it on this issue and remove other antagonizing factors, it could see tangible results. Even more importantly, if the US can engage Russia bilaterally, it can and should use her as a counterweight to Chinese influence in Central Asia. Renewed Russian and Chinese engagement could challenge US power globally, but if the US can work to prevent that and engage both bilaterally, it should preserve its status as a unique global actor.
BRAZIL Historically speaking, the United States has never had a rival for influence within the Western Hemisphere. While Brazil is unlikely to ever truly rival its northern neighbor, its large size and growing economy make it an ideal co-hemispheric partner for future US interests. Brazil is important at both a regional and a global level: should the US maintain the support of the other two powers, Brazil and Canada, in its own hemisphere, it can work on global issues without worrying about its own backyard. To do this, America will need to tailor hemispheric agreements to Brazil’s needs in order to foster its own growth, while giving Brazil a reason to support both regional and global US interests. At first glance, Brazil seems to be an odd final member of this quartet: the European Union, China, and Russia are all larger and more powerful states, who would seem to have more to offer the United States on a geopolitical level. However, Brazil is, in fact, central, because it has the potential to tip the balance in Latin America for or against American interests in future debates. The United States must work to appease Brazilian interests by investing in its economic growth and keeping them from turning to other rising powers, especially China, for their most important economic and security partnership agreements. In cooperating with them fully on matters pertaining to economic growth, the US can use Brazil to retain its geopolitical sphere. In many ways, Brazil represents the key to this uni-multipolar strategy for the US, since its cooperation would allow America to retain control of the Western Hemisphere and utilize its influence in other regions. Combined with continued transatlantic cooperation, a secure partnership with Brazil would leave Asia as the only potentially muddled regional theater in the uni-polar system.
INDIA Each one of this essay’s subsections details necessary changes to American relationships with future partners and regional rivals. However, of all the foreign policy changes the US needs to make, none is more important than completely redefining our relationship with India. Broadly speaking, this essay has, to this point, suggested that the United States secure its own backyard (Brazil) while weaning itself away from dependence on its European allies in NATO and instead working to jointly cooperate with and contain a future superpower in China and a resurgent one in Russia. While these moves are all necessary, the linchpin that would make this new policy system work is a new and dynamic alliance between the United States and India. Historically, America and India have not been terribly close bedfellows. Throughout much of the Cold
War, India was a largely socialist state that viewed the US warily because of the superpower’s consistent support for its Pakistani neighbors. However, this has changed in the last decade for a number of different reasons. First, India has opened its economy to foreign enterprise and no longer embraces a socialist economic or political model which has increased America’s interest in a strategic partnership, as Washington increasingly views India as a future economic superpower which already boasts a functioning democracy. This development has occurred alongside Pakistan’s increasingly debilitating struggle with corrupt, dictatorial governance and homegrown terror cells. As the US increasingly watches India flourish while Pakistan lurches toward becoming a failed state, the incentive to drastically alter our relationship with the rising sub continental state is growing steadily. To put it bluntly, India is the key to the successful construction of this new US uni-multipolar world view. It provides the ideal ally with which the United States can successfully counterbalance Chinese growth and persuade the Peoples’ Republic to endorse a Sino-American partnership, instead of a confrontation. For this reason, the US should dramatically accelerate its current efforts to establish an alliance with India. While its efforts up to this point, which include a nuclear deal, the Doha trade rounds, and limited military cooperation, have been somewhat effective, more is needed. Therefore, the US should couple its current disengagement from its Pakistan alliance (due to the homegrown terror problem) with India’s fear of Chinese strength and use these to build on the current initiatives and work rapidly towards forming an individual alliance with India. The reasoning for this, in power terms, is simple. Today, the US has no true rival and is economically, politically, and militarily capable of maintaining its hegemony without a superpower partner in the short term. However, as American power begins to plateau over the next several decades, while its rivals’ grows exponentially, that will change and China could look to challenge US influence in Asia. But, if the US forms a strong, close, individual alliance with India based on mutual security, information-sharing, and bilateral economic stimulation, this development, coupled with America’s already strong alliances with Japan, Australia, and South Korea, would be more than enough to maintain US hegemony in Asia, while successfully accommodating ever increasing levels of Chinese growth and cooperation. The US would effectively change from (in 2009) a stand-alone superpower with a shoestring of alliances to a superpower maintaining its status as regional arbiter through a combination of superior regional partnerships and individual strength. This change, combined with maintaining strong trans-Atlantic ties, would be an excellent platform to maintain US uni-multipolar hegemony.
CONCLUSION In the ensuing decades, as it continues to grow, yet sees its relative power compared to the world’s other major actors wane, the United States will need to redefine its place in the world. Over the course of its history, America has been, at different times; a weak isolationist state, a multi-polar power, a bipolar superpower, and a uni-polar hegemony, so precedents for redefining the US role in the world are certainly present. In redefining its role, the United States should seek uni-multipolarity. This new system will allow America to not only absorb, but also benefit, from the growth of these new power states. Furthermore, it would allow the US to be an integral part of all global initiatives without having to lead (and bankroll) every single one, as has been the case for almost two decades. Uni-multipolarity will be difficult to achieve but certainly not impossible. America, unlike its hegemonic precursors, has a chance to redefine its role while it still has the power to do so. Though many who currently see America in recession have declared the US era of dominance to be over, the fact is that, despite its present economic weakness and military quagmires in Iraq and Afghanistan, America in 2010 still has no true rival and will not for at least another decade. US policymakers should recognize that US unipolar hegemony may be coming to an end because of the inexorable “Rise of the Rest” but this does not mean that US power itself is waning or the US is on the verge of losing its status as the world’s main actor. It means, instead, that the US remains alone atop the totem pole, but is now confronted by a number of rising states that will begin to rival it in power. However, while many writers have continued to equate this with an accompanying decline in American power and influence, there is no reason that this should prove to be the case. In fact, these developments actually represent an opportunity for the US to maintain its old trans-Atlantic ties while cultivating new alliances with two key states, India and Brazil. These alliances, the keys to the uni-multipolar system, should help maintain Americas’ role as international arbiter and prop up its relative strength in efforts to both partner with and contain a resurgent Russia and an emerging China. Ultimately, should these policies be pursued and these goals attained, there is no reason that America couldn’t flourish in a different, yet perhaps even more prosperous, way well into the future.
References
1. "US Historic GDP Chart." Google Public Data. Web. <http://www.google.com/publicdata?ds=wb-wdi&met=ny_gdp_mktp_cd&idim=country:USA&dl=en&hl=en&q=us+gdp+chart>.
2. "China GDP Growth Chart." Google Public Data. Web. <http://www.google.com/publicdata?ds=wb-wdi&met=ny_gdp_mktp_cd&idim=country:CHN&dl=en&hl=en&q=china+gdp+chart>.
3. "EU Institutions and other bodies." Europa. Web. <http://europa.eu/institutions/index_en.htm>.
4. Ellwell, Labonte, and Morrison, 2007. "Is China a Threat to the US Economy?" CRS Report for Congress. Web. <http://www.fas.org/sgp/crs/row/RL33604.pdf>
5. Ibid.
6. "China's development benefits US Economy." China Daily. Web. 28 August 2005. <http://www.chinadaily.com.cn/english/doc/2005-08/28/content_472783.htm>
focus
Biofuels and Land Use Changes: Flawed Carbon Accounting Leading to Greater Greenhouse Gas Emissions and Lost Carbon Sequestration Opportunity
Julia Blocher / Staff Writer
As the United States searches for solutions to climate change and engages in the most important energy negotiations to date, it is crucial that decision makers amend a significant carbon accounting error, attached to bioenergy, present in current energy legislation, which could lead to widespread detrimental land use changes, as well as greater greenhouse gas emissions. The international conventional carbon accounting and the Waxman-Markey comprehensive energy bill (ACES) fail to account for both the lifecycle emissions of biofuels and the potential future emissions from land conversion that will result from an increased demand of biofuels crops. For most scenarios, converting land to produce first generation biofuels creates a carbon debt of 17 to 420 times more CO2-equivalent than the annual greenhouse gas reductions that these biofuels would provide by displacing fossil fuels.1 Furthermore, the carbon opportunity cost of the land converted for biofuel production is not considered; the sequestration potential of lands that could support forest or other carbon-intensive ecosystems is often much higher than the greenhouse gas emissions saved by using the same land to replace fossil fuels with bioenergy. Biofuel credits create market incentives for farmers to convert fertile cropland and already biologically productive but unused land to meet demand for biofuels and displace demand for crops.2 In order to understand the elements involved in evaluating the use of biofuels as an energy source, several dimensions of the argument will be treated. First, a typology of biofuels and their uses will be presented, focusing on the emissions savings presented by first-generation versus second-generation biofuels. Second, the way in which carbon accounting rules have failed to consider all of the aspects of biofuels production, combustion, and the carbon opportunity cost of the land required will be discussed. Finally, the current state of biofuels policies will be addressed.
Typology of biofuels
Bioenergy is typically divided into two overall categories, first and second generation biofuels. Fuel from biomass is produced biologically by using enzymes derived from bacteria
or fungi to break down and ferment the plant-derived sugars, producing ethanol. At present, because the energy output of biofuels remains lower than that of conventional fossil fuels, biofuels are most commonly used by mixing up to 85 percent ethanol with petrol for transportation uses.3 The preferred first generation biofuel crops, because of their effectiveness as a substitute for petroleum, are corn, sugarcane, soybeans, wheat, sugar beet, and palms.4 Second generation biofuels are considered to have the ability to solve some of the problems of first generation biofuels, and can supply greater amounts of biofuels without displacing as much food production. The materials used for second generation biofuels are nonfood crop cellulosic biomasses, which can be leafy materials, stalks, wood chips and other agricultural waste. Because the sugars have to first be freed from the cellulose by breaking down the hemicelluloses and lignin in the source material (via hydrolysation or enzymes), second generation biofuel is more expensive to produce and generally has a smaller yield than does first generation bioethanol.5 The U.S. Department of Energy estimated in 2006 that it costs about $2.20 per gallon to produce cellulosic ethanol, twice as much as the cost for ethanol from corn.6
The problem with current carbon accounting
The UNFCCC, E.U. cap-and-trade law, and ACES inappropriately exempt biofuels as being 'carbon neutral', assigning lifecycle emissions from bioenergy solely to land-use accounts, while counting the amount of carbon released from combustion of the biofuels as equal to the amount of carbon uptake from the production of biomass feedstocks.7 This rewards bioenergy, in comparison to fossil fuels, which are taken out of underground storage. This accounting skews the carbon reporting in favor of the destination nations of the biofuels, where the tailpipe and smokestack energy emissions are not debited.8 The areas of the world with the fastest growing biofuels production, notably Southeast Asia and the Americas, have to report net carbon release from harvesting biomass while the importing countries exclude the emissions from their energy accounts. The Kyoto Protocol caps energy emissions of developed countries but does not apply limits to land use in developing countries.9 The emissions from land
conversion required to produce biofuels feedstock are also not counted. Biofuels appear to reduce emissions, when in fact, varying with the source of biomass, the carbon debt is much higher than is repaid by the carbon captured in the production of biomass for feedstocks. In isolation, replacing fossil fuels with biofuels does not decrease emissions. The amount of CO2 released by combusting biofuels is approximately the same per unit of energy output as traditional fossil fuels, plus, the amount of energy required for its production is typically more than that required to process petroleum.10 Biofuels production can only reduce greenhouse gas emissions if growing the feedstocks sequesters more CO2 than the land would sequester naturally and if the annual net emissions from their production and combustion are less than the life-cycle emissions of the fossil fuels they displace. This can only be achieved by land management changes that increase carbon uptake by terrestrial biomass or utilize plant waste.11 Soils and biomass are the most important terrestrial storages of carbon, storing about 2.7 times more than the atmosphere.12 Clearing vegetation for cropland, coupled with the burning or microbial decomposition of organic carbon in soils and biomass, causes the release of significant amounts of carbon. 'Carbon debt' refers to the amount of CO2 released during the first 50 years of this change in the use of the land. Until the carbon debt is repaid, biofuels from converted lands have greater net greenhouse gas emissions than fossil fuels.13 In most scenarios, the time scale required to repay the carbon debt from land conversion with the annual greenhouse gas emissions savings from replacing fossil fuels with biofuels amounts to bioenergy representing a net increase of emissions. One study showed that all but two biofuels, sugarcane ethanol and soybean biodiesel, will increase greenhouse gas emissions for at least 50 years and up to several centuries.14 The amount of carbon debt varies by the productivity of the natural ecosystem. For example, converting Amazonian rainforest for soybean biodiesel would create net carbon emissions of 280 metric tons per hectare, a debt that would take about 320 years to repay.15 The use of second generation biofuels improves the tradeoff of carbon debt of land conversion to the emissions saved from displacing fossil fuels, but not considerably. Rapidly growing grasses show promise for producing cellulosic biofuels; calculations show that ethanol from switchgrass at high yields and conversion efficiency attains a carbon savings of 8.6 tons of carbon dioxide per hectare annually relative to fossil fuels.16 However, this doesn't take into account the opportunity cost of allowing the land to regenerate with trees.
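The carbon-debt arithmetic above reduces to a single ratio: years to repay equals the up-front CO2 released by land conversion divided by the annual CO2 saved by displacing fossil fuel. The sketch below is only a worked illustration of that ratio using the article's Amazonian soybean-biodiesel figures; the annual-savings value is back-calculated from the stated 280 t/ha debt and roughly 320-year payback rather than quoted from the cited study.

# Worked illustration of the carbon-debt payback calculation described above.
# carbon_debt: CO2 released up front by converting the land (t CO2 per hectare)
# annual_savings: net CO2 avoided each year by displacing fossil fuel (t CO2/ha/yr)

def payback_years(carbon_debt: float, annual_savings: float) -> float:
    """Years of biofuel production needed before the conversion debt is repaid."""
    return carbon_debt / annual_savings

# Article's example: Amazonian rainforest converted for soybean biodiesel.
# The ~0.9 t/ha/yr savings figure is back-derived from the article's numbers
# (280 t/ha debt, ~320-year payback), not quoted from the underlying study.
print(payback_years(carbon_debt=280, annual_savings=0.9))   # ~311 years
# Until that many years have passed, the converted land is a net source of CO2
# relative to simply burning the fossil fuel the biodiesel replaces.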
Land conversion emissions and carbon opportunity cost
Carbon credits, the lowering of carbon caps, and agricultural subsidies for biofuels crops in the U.S. and E.U. create global economic incentives for large-scale conversion of land for bioenergy. Land-use changes already contribute 17 percent of the world's total greenhouse gas emissions; converting biologically productive land for biofuels only increases the need for the carbon sequestration thought to slow climate change.17 Biofuels production on existing agricultural land also causes greenhouse gas emissions indirectly, by driving land conversion elsewhere to fill the need for cropland.18 One study estimates that, under current accounting methods, pursuing a global carbon dioxide target of 450 parts per million could displace virtually all natural forests and savannahs by 2065 and release up to 37 gigatons of carbon dioxide per year, an amount close to total human carbon dioxide emissions today.19 Converting a mature forest for corn ethanol, for example, releases 355 to 900 tons of carbon dioxide within a few years and sacrifices the forest's ongoing carbon sequestration potential of at least seven tons per hectare per year.20
Carbon accounting is one-sided: it attempts to account for what is gained but ignores what is given up. Current climate legislation treats the land base used for bioenergy as though it came carbon-free, which is not the case. The land required for first-generation biofuels is typically productive enough to support tree growth if left alone, and the regrowing forest would sequester significantly more carbon than substituting bioenergy for fossil fuels saves.21 For example, a hectare of land in the U.S. that could be used to grow corn for ethanol could instead be left to regenerate into forest; the resulting biomass would capture between 7.5 and 12 tons of carbon dioxide.22 In the tropics, the carbon opportunity cost is greater. Biofuels made from sugarcane were calculated to offer carbon savings of roughly nine tons per hectare annually, and palm oil about 7.5 tons per hectare, while reforestation in the tropics sequesters carbon dioxide at a probable rate of 14 to 28 tons per year.23 Some researchers therefore advocate economic incentives that preserve forests and other carbon-dense ecosystems rather than promote bioenergy.
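The carbon opportunity cost described here amounts to subtracting a biofuel's annual savings from what the same hectare would sequester if left to regrow. A minimal illustrative sketch using the tropical per-hectare figures quoted above (with the regrowth range collapsed to its midpoint, and per-hectare annual rates assumed throughout):

```python
def annual_opportunity_cost(regrowth_uptake, biofuel_saving):
    """Annual CO2 uptake (tons per hectare) forgone by growing biofuel
    feedstock instead of letting the same hectare regrow into forest."""
    return regrowth_uptake - biofuel_saving

# Tropical figures quoted above, in tons of CO2 per hectare per year;
# the 14-28 regrowth range is collapsed to its midpoint for illustration.
tropical_regrowth = (14 + 28) / 2                        # 21.0
print(annual_opportunity_cost(tropical_regrowth, 9.0))   # sugarcane ethanol -> 12.0
print(annual_opportunity_cost(tropical_regrowth, 7.5))   # palm oil          -> 13.5
```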
If developed countries were to transfer their total investment in biofuels (estimated at $15 billion) into preserving forests, promoting reforestation, and preventing the destruction of peatlands, the total cost of halting and reversing climate change could be halved.24 Preventing deforestation and wetlands destruction requires no new technological development and entails little capital investment; the cost could be as low as $0.10 per ton of carbon dioxide.25
Politics of biofuels
Conventional lifecycle analyses that award biofuels false carbon credit continue to plague energy legislation.26 Under the Energy Independence and Security Act (EISA), signed into law in 2007, the total amount of biofuels added to gasoline in the U.S. is required to increase to 36 billion gallons by 2022, four times the current level.27 Furthermore, since Barack Obama's election, requirements that biofuels replace ten percent of transportation fuel by 2020 have become common national policy around the world. The U.S. and Europe both sponsor the conversion of 'food' crops into first-generation biofuels through massive subsidies, even as global food demand is expected to increase by 50 percent by 2030.28 Current crop yields are not high enough to avoid the need to replace farmland used for biofuels production with new cropland elsewhere. Meeting U.S. and E.U. objectives would require 60 million hectares of land by 2020, a target that would require biofuels production to consume 70 percent of the land expansion for wheat, maize, oilseeds, palm oil, and sugarcane.29 To avoid land-use change, world cereal yield growth rates would have to triple the Department of Agriculture's current projections.30
Proponents of biofuels are beginning to recognize the effects of land-use changes in policy negotiations. Notably, the EISA requires that cellulosic biofuels, after accounting for direct tailpipe emissions and indirect land-conversion emissions, offer at least a 60 percent lifecycle greenhouse-gas reduction relative to conventional gasoline.31 However, the direct and indirect causes and effects of land-use changes remain difficult to quantify. In the E.U. and the U.S., biofuels producers can circumvent land-conversion carbon accounting by segregating materials destined for biofuels from those destined for food. For example, one tank of oil from already-cleared land can qualify for biofuels subsidies, while another tank of oil from newly cleared land goes to food production.32 Additionally, European policies and the EISA make provisions for imports of biofuels; the E.U. directive provides for 43 percent of the biofuels target
to come from imports.33 This has strengthened incentives for developing countries to produce biofuels, particularly in Latin America and Southeast Asia. Much of the expansion of biofuels production in these regions has come from oil palms grown on former peatlands and wetlands. These lands carry the highest carbon cost because they store large quantities of carbon, which is released when the land is drained and cleared. In these biologically productive areas, between 43 and 170 tons of carbon per hectare of converted land are released each year over a 30-year period.34
The future of biofuels
By excluding emissions from land-use change, carbon accounting is flawed: it counts the carbon benefits of biofuels but not the carbon costs of their source. Treating biofuels as carbon neutral, by counting changes in biomass stocks while ignoring tailpipe emissions, has in recent years been recognized as inaccurate, but current legislation continues to ignore the carbon debt created when biologically productive land is converted to provide the land base needed to produce first-generation feedstocks. As a result, biofuels appear to reduce carbon emissions when, in actuality, their total emissions can be much higher than the life-cycle emissions of traditional fossil fuels. When calculations count costs as well as benefits, biofuels actually increase the greenhouse gas emissions that contribute to climate change. Furthermore, the carbon opportunity cost of biologically productive land is ignored; carbon uptake would be much greater if the land were allowed to regenerate into forests or similarly carbon-dense ecosystems. Current biofuel policies in developed countries risk exacerbating climate change by creating incentives that lead to worldwide deforestation and threaten food security.
Certain second-generation biofuels, by contrast, can be genuinely beneficial, both in easing the transition away from traditional fossil fuels and in reducing greenhouse gas emissions. Biofuels derived from some second-generation feedstocks have lower life-cycle greenhouse gas emissions than fossil fuels and can be produced in substantial quantities. These feedstocks include perennial plants grown on degraded agricultural lands, crop residues, slash (branches and thinnings) left from sustainably harvested forests, double crops and mixed cropping systems, and animal, municipal, or industrial waste.35 Additional factors may eventually improve the emissions reductions offered by biofuels, such as technological improvements that shorten their carbon payback period by
increasing the carbon uptake potential of biomass.36 Because crop yields have grown steadily over the last century, proponents of bioenergy argue that yields can improve enough to eliminate the need to convert more land for its production.37 As the global population and demand for energy increase, the need for policy-makers and technology to find solutions to climate change will only become more urgent and more expensive. The discourse on biofuels illustrates that, as legislators engage in the most important climate treaty negotiations in U.S. history, it is vital that the technologies proposed as solutions to climate change be properly evaluated. Bioenergy can and should be an important part of energy legislation and make up a substantial portion of our future transportation energy demand, but policy must account for real energy gains and lifecycle greenhouse gas emissions, the preservation of natural carbon sinks, and the sustainability of the global food supply. Clear tasks for the coming years include fixing the bioenergy carbon accounting error, reevaluating carbon credit systems and biofuels subsidies, and developing second-generation biofuels technology.

References:
1. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
2. Tim Searchinger et al., "Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change," Science 319 (2008): 1238.
3. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
4. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
5. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
6. J. Weeks, "Are We There Yet? Not quite, but cellulosic ethanol may be coming sooner than you think," Grist Magazine (2006), http://www.grist.org/news/maindish/2006/12/11/weeks/index.html.
7. IPCC, "2006 IPCC Guidelines for National Greenhouse Gas Inventories, prepared by the National Greenhouse Gas Inventories Programme," Institute for Global Environmental Strategies (IGES): Tokyo, 2007.
8. Ibid.
9. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
10. IPCC, "2006 IPCC Guidelines for National Greenhouse Gas Inventories, prepared by the National Greenhouse Gas Inventories Programme," Institute for Global Environmental Strategies (IGES): Tokyo, 2007.
11. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
12. Joseph Fargione et al., "Land Clearing and the Biofuel Carbon Debt," Science 319 (2008): 1235.
13. Ibid.
14. Ibid.
15. Ibid.
16. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
17. Ibid.
18. Tim Searchinger et al., "Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-Use Change," Science 319 (2008): 1238.
19. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
20. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009): 8-13.
21. Ibid.
22. Ibid.
23. Ibid.
24. Dominick Spracken et al., "The Root of the Matter: Carbon Sequestration in Forests and Peatlands," Policy Exchange (2008), http://www.policyexchange.org.uk/images/libimages/419.pdf.
25. Ibid.
26. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
27. Energy Independence and Security Act of 2007, Public Law 110-140, H.R. 6, 2007.
28. Oliver R. Inderwildi and David A. King, "Quo vadis biofuels?", Energy & Environmental Science (2009): 344, http://www.rsc.org/publishing/journals/EE/article.asp?doi=b822951c.
29. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
30. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
31. Energy Independence and Security Act of 2007, Public Law 110-140, H.R. 6, 2007.
32. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
33. Tim Searchinger, "Summaries of Analyses in 2008 of Biofuels Policies by International and European Technical Agencies," Economic Policy Program, the German Marshall Fund of the U.S., Washington, D.C. (2009).
34. Tim Searchinger et al., "Fixing a Critical Climate Accounting Error," Science 326 (2009): 527.
35. David Tilman et al., "Beneficial Biofuels – the food, energy, and environmental trilemma," Science 325 (2009): 270.
36. J. Weeks, "Are We There Yet? Not quite, but cellulosic ethanol may be coming sooner than you think," Grist Magazine (2006), http://www.grist.org/news/maindish/2006/12/11/weeks/index.html.
37. Tim Searchinger, "Evaluating Biofuels: The Consequences of Using Land to Make Fuel," Brussels Forum Paper Series, the German Marshall Fund of the U.S., Washington, D.C. (2009).
BEYOND KYOTO: EMISSIONS TRADING IN AMERICA
Toshiro Baum / Staff Writer
In December 1997, nations from around the world adopted the Kyoto Protocol, an international agreement that compelled signatory nations to reduce their greenhouse gas emissions. The Kyoto Protocol was seen as a necessary response to the trend of global warming, which scientists contended was a result of the "Greenhouse Effect," whereby human combustion of fossil fuels and other chemicals changes the chemistry of the Earth's atmosphere, causing it to trap more of the Sun's energy. If this were to continue, scientists warned, further global warming would result, bringing a number of negative ecological effects with equally harmful economic, social, and political repercussions. Meanwhile, in the United States, the Senate passed a related resolution 95-0, stating that the US should not become a signatory to any international agreement that "would result in serious harm to the economy of the United States."1 Although the Kyoto Protocol had been signed by then-Vice President Al Gore, the Clinton Administration chose not to undertake the politically challenging task of persuading the Senate to ratify it. With the election of George W. Bush, this inaction would
continue, with the Bush Administration citing concerns over the leeway granted to large developing nations such as China despite their high greenhouse gas emissions.2 Throughout its tenure, the Bush Administration, along with many like-minded conservatives, continued to resist government emissions-reduction programs, citing concerns over their economic effects.3 With the election of a new president, Barack Obama, in 2008, environmental advocates regained hope that global warming would be addressed at the national level. Many in the scientific community were becoming increasingly disturbed by trends that warned of significant ecological upset if global warming continued, and problems such as the well-publicized plight of polar bears in the Arctic Circle were becoming better known to Americans. Reflecting this mounting concern, the Obama Administration's acknowledgment that comprehensive and mandatory policies were needed to address global warming seemed to signal a willingness to confront a growing problem.4 In this new atmosphere, characterized by the oft-repeated idea of "change," it came as little surprise that
the newly empowered Democratic majority in Congress would move toward legislation addressing climate change. Most notably, the House of Representatives voted to pass the American Clean Energy and Security (ACES) Act, a highly complex bill which would, among other measures, enact a system of carbon emissions limits built on tradable carbon credits, a system known to many as "cap and trade."5 Although the final fate of the Waxman-Markey ACES Act is yet to be determined (it must still pass through the more conservative Senate before reaching the President to be signed into law), it is an indication that the era of political inaction in addressing climate change may be over. Sweeping policies, such as the ones called for by the ACES Act, will change the face of the American economy and, with this, the faces of American society and the global community.6 Perhaps even more significant is the fact that these changes have support from diverse sectors of American society. According to a House of Representatives press release, "the [ACES] legislation was backed by a coalition that included electric utilities, oil companies, car companies, chemical
companies, major manufacturers, environmental organizations, and labor organizations, among many others."7 Such wide support indicates that sweeping policy changes have a real chance of moving out of the arena of politics and into the daily lives of Americans. However, effectively addressing climate change presents a new and complicated set of problems, and it is often difficult to fully grasp the intricacies and implications of many proposed policy solutions. The following paper will examine a selection of policy options and proposals concerning the main avenue that legislation has followed up to this point: reducing greenhouse gas emissions. This examination will focus mainly on emissions limitation and emissions trading systems, as these are the most prevalent policy proposals. Each proposal will be examined and the efficacy of certain policy mechanisms explored.
Policy Approaches to Reducing Emissions
When examining potential policies designed to reduce global warming by limiting the amount of greenhouse gases in the atmosphere, there are two
important considerations to keep in mind. First, there is the ability of the policy to effectively reduce emissions, without which any policy becomes futile. Second, there must be an examination of how such a policy will affect the economy and, by extension, the lifestyle and society of a nation. The ideal policy accomplishes two principal tasks: effectively reducing greenhouse gas emissions while maintaining or improving the quality of life of the nation's citizens, principally by stimulating or preserving economic growth and prosperity. Relevant policy options will be evaluated according to these two criteria. Policies designed to reduce greenhouse gas emissions can be divided into two main categories: non-market-based and market-based. The first category includes policies that do not rely on market mechanisms (i.e., economic incentives) to change behavior. Non-market-based policies include the heavy limitation or outright banning of greenhouse gas emissions, similar to what has already been done with a number of ozone-depleting substances.8 Although these are the most effective methods for reducing total greenhouse gas emissions, such restrictive policies can be considered impractical due to the
prevalence of greenhouse gas-reliant practices in modern life, which would have to be curtailed should such prohibitive measures be placed on greenhouse gas emissions. Because policies of heavy limitation or total prohibition of greenhouse gases would be extremely disruptive to the economic life and society of a nation, they fail to meet the second criterion for evaluating greenhouse gas policies and, as such, will not be discussed further in this paper. The second category, market-based policies, focuses on the use of market mechanisms, such as raising the price of emitting greenhouse gases, to provide an economic incentive to reduce emissions. Market-based policies are generally considered more flexible and are therefore often more politically palatable.9 They can be further divided into two categories: emissions taxes, which are designed to make greenhouse gas emissions reflect their "true" cost by factoring in environmental costs, and emissions trading systems, which seek to place a limit on emissions while preserving economic flexibility through trading.10 Emissions tax policies are interesting from an economic standpoint (much debate has gone into how one would determine the "true" cost
of emissions, as well as how this cost would be implemented into a functioning modern economy), but they are not often seen as ideal policy options. Although a number of congressional bills have proposed emissions taxes (or measures similar to emissions taxes), critics of such policies often note that emissions taxes do not guarantee a certain level of emissions reductions and thus often fail to achieve the main point of climate change legislation: namely, to combat global warming by reducing the amount of greenhouse gases in the atmosphere.11 Since emissions taxes fail to guarantee significant emissions reductions, they also fail to satisfy the first criterion for evaluating greenhouse gas policies and will not be discussed further in this paper.
The second group of market-based policies, emissions limitation and emissions trading policies, is characterized by an emissions limit (the "cap" in cap and trade) which the economy in question cannot exceed. Emissions trading policies include a provision that allows economic players to trade specified units of emissions amongst themselves, as long as the total number of emissions units does not exceed the set limit (the "trade" in cap and trade). These policies, known as Emissions Trading Systems (ETS), are generally preferred by policy makers, as they allow the economy to retain a certain amount of flexibility and to adapt to the new conditions imposed upon it. Emissions trading systems, due to their political viability and potential ability to satisfy the two goals set out above, will be the main focus of this paper.
Emissions Trading Systems
Emissions Trading Systems are highly complex. In general, a trading system is composed of a limit, or cap, which clearly indicates the total amount of greenhouse gas emissions permitted in the economy. This emissions total is then broken down into units that can be traded, allowing for flexibility in the economy and, ideally, for markets to self-correct and reduce inefficiency. In designing an ETS, a policy must incorporate a number of features, each of which will have its own effects.
Like an emissions tax, an ETS must choose a level at which it will principally regulate. What this "point of regulation" boils down to is who will be subject to monitoring and reporting emissions to the administrating agency. Since many emissions trading systems use a system of allowances to determine allowed emissions, the point of regulation can be thought of as who must hold the allowances required for emissions. For example, with regard to emissions from transportation, regulation could come at a number of different points, focusing on the primary producers or importers of greenhouse gas-producing fuels, the distributors of the fuels, or the end-users who actually combust the fuels and produce the greenhouse gases. This, in turn, raises numerous policy concerns about how to monitor and quantify the actual amount of greenhouse gases emitted.
One of the main policy questions is how to determine emissions allowances, which can also be referred to as "credits" or "shares." (For clarity, this paper will use "allowance" to refer to the tradable unit of emissions in an emissions trading system.) Allowances are the tradable units in an emissions trading scheme, and they present a number of possible benefits and problems for policy makers. For example, the distribution of allowances can take a number of different forms: government allocation, auctioning, or a combination of the two. Each of these has its own implications. Allocation is often seen as a handout, since producers or importers do not have to pay for the emissions they are allowed to produce but are instead granted them.12 Auctioning is seen as an effective way to allow markets to determine the price of emissions, but it can also be subject to collusion and speculation, which can artificially drive up prices and hurt smaller firms. Addressing this issue is crucial to making an emissions trading scheme effective and politically acceptable.
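To make the cap-and-trade mechanics above concrete, here is a minimal, hypothetical sketch; it is not drawn from any of the bills discussed below, and the firm names, abatement costs, and cap are invented. A fixed cap is divided into allowances, and a firm with cheap abatement sells surplus allowances to a firm with expensive abatement, so the same cap is met at lower total cost.

```python
from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    emissions: float        # tons the firm would emit with no abatement
    abatement_cost: float   # dollars per ton to cut its own emissions
    allowances: float = 0.0 # tradable permits held (1 allowance = 1 ton)

def allocate_cap(firms, cap):
    """Split a fixed cap into allowances, here in proportion to historical
    emissions (only one of several possible distribution rules)."""
    total = sum(f.emissions for f in firms)
    for f in firms:
        f.allowances = cap * f.emissions / total

def trade(seller, buyer, tons):
    """Transfer allowances between firms; the cap itself never changes."""
    seller.allowances -= tons
    buyer.allowances += tons

# Two hypothetical firms under a 150-ton cap (all numbers invented).
cheap = Firm("low-cost abater", emissions=100, abatement_cost=10)
costly = Firm("high-cost abater", emissions=100, abatement_cost=40)
allocate_cap([cheap, costly], cap=150)   # 75 allowances each

# Without trading, each firm abates its own 25-ton shortfall.
no_trade_cost = sum((f.emissions - f.allowances) * f.abatement_cost
                    for f in (cheap, costly))    # 25*10 + 25*40 = 1250

# With trading, the low-cost firm abates the entire 50-ton shortfall and
# sells its 25 surplus allowances to the high-cost firm; the payment is a
# transfer between firms, so the economy-wide cost is just the cheap abatement.
trade(cheap, costly, tons=25)
with_trade_cost = 50 * cheap.abatement_cost      # 500

print(no_trade_cost, with_trade_cost)  # 1250.0 500 -- same cap, lower total cost
```

The point the toy model illustrates is that trading changes who abates, not how much is emitted in total; that is why the integrity of the cap and of the allowances themselves matters so much in the designs compared below.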
In an effort to address some of these potential market manipulation problems, numerous emissions limitation and emissions trading schemes include provisions for emissions offsets. Offsets are actions or projects which negate (offset) the emission of greenhouse gases into the atmosphere. Offset projects, if quantified and monitored, can be turned into "offset credits," which essentially act as additional emissions allowances.13 Regulated entities could use offset credits to help meet their emissions limits or could sell them to other entities, just like emissions allowances. Offsets present opportunities for policy makers but also pose a number of problems. For example, policy makers seeking to encourage reforestation could include provisions in their trading scheme that allow covered entities to fund reforestation efforts and receive offset credits in return. In this way, offsets can be used to encourage a number of environmentally responsible practices, such as the preservation of forests or the capture of carbon dioxide emissions from factory smokestacks.14 However, any emissions trading policy will need to address the credibility of an offset project: most notably, whether it can actually offset emissions at the level it claims, and whether it demonstrates the principle of "additionality." Additionality asks whether the offset project or action would have taken place even without the issuance of a carbon credit.15 A prime example of a project that fails this test is a tree farm reforesting part of its land with the intention of harvesting it later, since the planting would arguably have occurred anyway. Offsets also have other implications from a policy standpoint. Most importantly, offsets can help control the price of carbon allowances by acting as an alternative to them; speculation or collusion in the allowance market becomes more difficult and less profitable when greenhouse gas emitters have other means of meeting their limits.16 While offsets can quickly become complicated and confusing, it is
important not to lose sight of the larger trading structure of an ETS, that is, the method and framework in which actual trading takes place. Trading can take many different forms, with correspondingly different outcomes. Like the distribution of allowances, trading can be open to the public, restricted to a select few stakeholders, or a mix of the two. Open trading has the potential to become a powerful and vibrant part of the economy, but it could also lead to speculation and price instability, whereas more closed forms of trading could lead to market manipulation by colluding groups or dominant stakeholders. Any emissions trading scheme that auctions off allowances also has enormous potential to generate revenue for the government, and the use of these revenues will be a useful tool for policy makers in crafting a politically acceptable proposal. As a final note, this paper focuses exclusively on emissions trading systems that affect the United States as a whole. However, climate change and global warming are concerns that stretch across international boundaries, with varied consequences for communities across the world. Likewise, any proposed policy must also take into consideration the effects it will have on the US relative to other nations, especially in terms of economic competitiveness.
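The offset rules described above reduce to two gates before a project's claimed reductions become tradable credits: the reductions must be verified, and they must be additional. A minimal, hypothetical screen (project names and tonnages invented):

```python
def award_offset_credits(projects):
    """Turn verified, additional emission reductions into offset credits
    (1 credit = 1 ton of CO2); non-additional projects earn nothing."""
    return {name: tons if additional else 0.0
            for name, tons, additional in projects}

# Hypothetical projects: reforestation that would not happen without credit
# revenue, and a tree farm replanting land it intended to harvest anyway
# (the classic non-additional case mentioned above).
projects = [
    ("reforestation of degraded pasture", 1000.0, True),
    ("tree-farm replanting ahead of harvest", 1000.0, False),
]
print(award_offset_credits(projects))
# {'reforestation of degraded pasture': 1000.0,
#  'tree-farm replanting ahead of harvest': 0.0}
```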
Present Legislation
THE AMERICAN CLEAN ENERGY AND SECURITY ACT
The Waxman-Markey Bill, also known as the American Clean Energy and Security (ACES) Act, or H.R. 2454, was an extensive piece of legislation introduced and passed by the
US House of Representatives in late June 2009. Below is a brief examination of the 1,400-plus-page bill, with a focus on the benefits and consequences of certain policy provisions. The ACES Act establishes an emissions trading (cap and trade) system with a clear emissions limit for the economy, which promises definite reductions. This allows for the creation of a set allowance unit, as well as an idea of how many allowances will be available each year, allowing the economy to predict and adapt accordingly. The act also establishes a price floor and a price ceiling for allowances. A price floor preserves the incentive to reduce emissions (if allowances were worth nothing, or next to nothing, there would be little or no incentive to reduce emissions, since cheap allowances would be abundant), and a price ceiling curbs speculative buying of allowances and mitigates harsh economy-wide effects such as inflation.17 The ACES Act also directs some of the revenue from the sale of allowances toward reducing the economic consequences of the bill for workers and for certain industries, as well as toward growing "green" sectors of the economy by investing in clean energy research, efficiency technologies, and climate change adaptation, all of which are crucial to mitigating the harsher downsides of the bill. ACES also provides for the creation of an emissions offset credit program to be overseen by a federal agency. While it is not clear how strict this monitoring agency's guidelines will be, it will be an important factor in the growth of the carbon offset industry, as well as in preventing market manipulation by offering an alternative to emissions allowances.
Additionally, the offsets provisions in ACES add an international dimension, allowing American firms to purchase foreign offset credits, provided they are subject to the same accreditation standards. One of the most beneficial aspects of the ACES Act is that it would have a minimal net impact on the daily financial lives of most Americans. According to a report by the Congressional Budget Office (CBO), the net cost of the ACES Act to the American economy would be about $22 billion.18 Although this seems high, it amounts to an average net cost of only $175 per American household.19 While this would vary by income bracket and region, and with the major caveat that economic modeling often falls short of accurately predicting reality, it can reasonably be said that the ACES Act would meet the second goal of preserving the economic livelihood of American society.
The ACES Act does fall short in a number of areas, potentially establishing policies that would be harmful to its goal of reducing greenhouse gas emissions while preserving economic prosperity. Primary among these is the system of allocations to industries. Although this is intended to mitigate many of the negative economic effects the cap and trade system will have on industries, it is often seen as a corporate handout that will only increase the profits of regulated companies while still raising prices for consumers. In testimony before Congress last March, the Obama Administration's Budget Director, Peter Orszag, described a policy of allocation rather than auction as the "largest corporate welfare program that [would] ever [have] been enacted in the history of the United States."20 This system of allocations will not only grant corporations emissions allowances worth millions, it will also give them no incentive to change their business practices or reduce emissions. Additionally, these allocations mean that the federal government will receive less money that could be channeled toward research and toward mitigating the economic effects of the bill. ACES also does not establish a clear point of regulation; since unallocated allowances are auctioned, presumably to the general public, it becomes difficult, even impossible, to track and monitor how such allowances are used. The ACES Act also allows for the banking of allowances, a practice which could lead to market manipulation through speculative trading or cartel-like and monopolistic practices. According to a report by the Washington State Department of Ecology, holding auctions open to the public and permitting the banking of allowances greatly increases the chances of a speculative allowances market, market manipulation, or abuses such as the reselling of previously used emissions credits.21 In short, the open system of the ACES Act is detrimental to stability and accountability in an area where such qualities are essential to the success of the policy.
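The two CBO figures quoted above are linked by simple division; as a back-of-the-envelope check (not part of the CBO analysis itself), the per-household number implies roughly 126 million households, which is in the right range for projected U.S. household counts.

```python
# CBO estimates cited above for H.R. 2454 (ACES).
net_cost_to_economy = 22e9       # dollars
net_cost_per_household = 175     # dollars

implied_households = net_cost_to_economy / net_cost_per_household
print(f"{implied_households / 1e6:.0f} million households")   # ~126 million
```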
THE VAN HOLLEN "CAP AND DIVIDEND" ACT (H.R. 1862)
The Van Hollen Cap and Dividend Act is another emissions trading system, sponsored by Representative Van Hollen of Maryland. Unlike the more comprehensive ACES Act, the Cap and Dividend Act focuses only on carbon dioxide-producing
fossil fuels. Although it is one of the less harmful greenhouse gases per ton, carbon dioxide (CO2) is distinguished by its prevalence and volume; it is the greenhouse gas emitted in the greatest quantity into the atmosphere. The Cap and Dividend Act establishes an emissions trading system with a clear limit, allowing markets to predict the number of allowances and adjust accordingly. The Van Hollen Act is also well constructed in that it has a clear point of regulation: the entity that makes the first sale of a fossil fuel. These "first sellers" form a group with clear boundaries, which makes them easy to track and regulate. Additionally, the Van Hollen Act auctions all of the available allowances rather than allocating them, in an auction open only to the regulated group, generating more revenue for the federal government and allowing the regulated firms to determine a fair market price for allowances. This closed auction also makes tracking and monitoring of allowance use easier and, when combined with quantity limitations imposed on buyers, helps curb the ability of players to manipulate the market or engage in harmful speculative behavior.22 The Cap and Dividend Act is also unique in that it creates a dividend program, whereby part of the proceeds from allowance auctions is channeled back into the economy through a refund, or dividend, given to everyone "with a valid social security number."23 This program would go a long way toward mitigating the rise in prices that would otherwise hurt American consumers.24 Like the ACES Act, the Van Hollen Act contains a number of provisions directing federal funding toward growing clean or "green" sectors of the economy. The Cap and Dividend Act is also beneficial in that it has specific international provisions designed to protect American industry: it levies a tariff on imports of fossil fuel-intensive goods equal to the costs incurred by domestic firms under the emissions trading system, and it provides subsidies to American exporters of the same goods, helping them remain competitive internationally. These provisions become ineffective toward the firms of another nation as soon as
that nation creates its own comparable emissions trading system, an added incentive for other nations to move forward on climate change legislation. Although the Van Hollen Act allows for an emissions offset credit program, it does not establish clear oversight or guidelines for verifying and quantifying offsets, a weakness that could undermine the policy's effectiveness in ensuring emissions reductions. The Van Hollen Act also allows for the banking of allowances, a practice that could lead to market manipulation and hoarding, which could destabilize the allowance market and leave it vulnerable to price fluctuations.25
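At its core, the dividend mechanism is auction revenue divided equally among eligible individuals. A minimal sketch with invented numbers; the bill's actual revenue shares and set-asides are not reproduced here.

```python
def per_person_dividend(auction_revenue, dividend_share, eligible_people):
    """Equal payment per eligible individual after reserving part of the
    auction revenue for administration and other programs."""
    return auction_revenue * dividend_share / eligible_people

# Invented, illustrative figures only -- not the bill's actual parameters.
revenue = 100e9     # dollars raised at auction in a year
share = 0.75        # fraction of revenue returned as dividends
people = 300e6      # individuals with valid Social Security numbers

print(f"${per_person_dividend(revenue, share, people):.0f} per person")   # $250
```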
THE EPA ACID RAIN PROGRAM
The final policy examined is one that has actually already been implemented, which adds to its ability to demonstrate certain policy mechanisms. The Environmental Protection Agency's Acid Rain Program, started in 1995, regulates emissions of the chemicals associated with acid rain, including sulfur dioxide (SO2) and a number of nitrogen oxide compounds (generally given the symbol NOx). Limits for the emissions of these substances are determined, and allowances are distributed through auctions and allocations to participating firms. One of the greatest strengths of the EPA Acid Rain Program is that it has been successful in reducing emissions.26 This success was helped by the fact that, since SO2 and NOx are byproducts of coal combustion, the EPA was able to establish an easy point of regulation: firms burning coal, primarily for the production of electricity. The EPA Acid Rain Program is interesting in that it allows non-regulated entities (entities other than coal-burning power plants) to purchase allowances, either to hold as assets or to "retire" (to hold an allowance until it becomes invalid, thereby further reducing the total number of usable allowances in circulation).27 This has not presented serious problems for the program, as the allowances being bought and sold are specific to certain industrial processes and are therefore not easily used by private citizens and organizations. It is
important to note that this may not be replicable with other types of greenhouse gas emissions allowances, which are more easily used by non-industry entities. The final feature of note in the Acid Rain program is that it rewards implementation of certifiable emissions-reducing technology by allocating more emissions allowances, making the industry cleaner overall and serving as an example of effective and accountable regulation for other emissions reductions policies.28
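The retirement mechanism is worth stating precisely: every allowance bought and held until it expires unused shrinks the effective cap by one ton. A minimal sketch with invented figures:

```python
def effective_cap(statutory_cap, retired_allowances):
    """Tons that can actually be emitted once third parties buy allowances
    and hold them until they expire unused ("retiring" them)."""
    return statutory_cap - retired_allowances

# Invented figures: a 9-million-ton SO2 cap with 50,000 allowances retired
# by environmental groups and other non-regulated buyers.
print(effective_cap(9_000_000, 50_000))   # 8950000 tons may still be emitted
```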
The Way Forward
Designing a national emissions trading system is no easy task. However, the growing evidence that global warming is a real threat, that it is linked to human emissions of greenhouse gases, and that it holds disastrous consequences for both the national and international communities adds urgency to implementing an effective emissions trading system. As the brief comparison of working and proposed emissions trading systems shows, such a system can incorporate a number of different policy mechanisms, each of which, both individually and in concert with other policy features, has numerous outcomes. Nevertheless, there are a number of features that must be included in any emissions trading system in order for it to be effective and accomplish its two main goals: reduction of greenhouse gas emissions and preservation of economic prosperity. Building on the policies described above, the following section outlines a number of components necessary to the success of any emissions reductions policy.
POINT OF REGULATION
An effective national or wide-scale emissions trading system must have a high-level "upstream" or "first-seller" point of regulation. That is to say, an effective trading system must regulate the initial sale of greenhouse gas-producing substances, with the allowance unit based on the emissions estimated from the use of the substance and with the regulated entities being the initial importers and producers of those substances. This is essential in order to accurately track and monitor the number of
allowances being sold and their actual use. A specific upstream point of regulation will help prevent the resale of allowances that have already been used, as well as relieve the regulatory burden of determining whether allowances have been used by what could be millions of private citizens, companies, and groups. The ACES Act, which does not establish a clear point of regulation, faces just such a morass. It should be noted that the EPA Acid Rain Program, despite its loose point of regulation, avoids such problems because of the specific nature of its emissions allowances, which apply only to gases emitted in specific and limited industrial applications, a feature not shared by some greenhouse gases such as carbon dioxide.
AUCTIONS
Allowances should not be allocated. Allocation of allowances represents an enormous benefit to corporations (which would receive assets worth millions) and would thus remove any incentive for them to change their business practices and reduce emissions. Auctions of allowances allow the market to determine their price, permitting greater economic flexibility and preventing inefficiency. That said, auctions must include price floors, to prevent a complete devaluation of allowances and the resulting ineffectiveness of the policy, and price ceilings, to prevent dire economic contraction. Auctions should also be limited to the entities subject to regulation, once again to avoid the usage-monitoring problems described above.
ALLOWANCES
Allowances should be valid only for a short period of time and should come with a clear expiration date. This will prevent the banking or hoarding of allowances and prevent market manipulation.
OFFSETS
Any emissions trading program should include a provision allowing for the issuance of offset credits, which would act as additional allowances. This will help grow a new sector of the green economy and help prevent manipulation or large fluctuations in the allowance market by providing an alternative to emissions allowances. Any offset program must include rigorous accreditation procedures that can verify emission reductions, permanence, and additionality, and that can take into account other environmental consequences of the project.
REVENUES
The population that will likely feel the largest negative impact of any emissions trading system is average consumers, who will face a rise in prices across the economy and a decrease in buying power. The revenues produced by an emissions trading system must address this situation in order to make any proposal politically feasible. The most politically palatable approach is the one reflected in the Van Hollen "Cap and Dividend" proposal, whereby every individual with a social security number would receive an equal dividend payment. Similar to existing systems like the Alaska Permanent Fund, this would help relieve the burden on the American consumer. Additional revenue should go toward covering the administrative costs of the emissions trading system, as well as toward specific economic mitigation and funding for new clean energy and efficiency technologies, which will help speed the transition from a greenhouse gas-emitting economy to a cleaner one, priorities reflected well in the ACES Act.
INTERNATIONAL PROVISIONS
Climate change and global warming go beyond national borders and involve every country in the world. Additionally, it is clear that the passage of extensive economic regulation like an emissions trading system would place the American economy and its related firms at a disadvantage. Efforts should be made to mitigate these effects, such as those reflected in the Van Hollen bill, which calls for tariffs on imports of greenhouse gas-intensive goods and subsidies for exporters of the same goods. While these provisions contradict free-trade agreements such as those set forth by the World Trade Organization, they can become useful tools in persuading other countries to implement their own emissions trading systems. By agreeing to end the tariffs and subsidies once other nations implement comparable climate change legislation, the United States would be able to advocate for worldwide changes to address a worldwide problem.
Since the passage of the Kyoto Protocol, America has radically changed its attitude toward climate change and emissions reductions legislation. Although some say this shift has been too long in coming, while others contend it has been too fast or even unnecessary, the fact remains that we stand at a crossroads. We must decide either to move forward with complex and controversial legislation, which could represent an enlightened policy with great benefits for our nation and the global community, or to continue with inaction, possibly with disastrous results. For the time being, America remains the unchallenged superpower of the world. Although they have not yet been perfect, attempts at crucial legislation such as the ACES Act represent the willingness of Americans to tackle these issues and to act as a world leader in the effort to combat global warming.
References:
1. "Byrd-Hagel Resolution (S.RES.98)," 105th Congress, 1st Session, July 25, 1997. <www.congress.gov>
2. Kirby, Alex. "US Blow to Kyoto Hopes." BBC News Online, March 28, 2001. <http://news.bbc.co.uk/2/hi/science/nature/1247518.stm>
3. "Humans Cause Global Warming, US Admits." BBC News Online, June 3, 2002. <http://news.bbc.co.uk/2/hi/americas/2023835.stm>
4. Hargreaves, Steve. "Obama Act on Fuel Efficiency, Global Warming." CNN.com, January 26, 2009. <http://www.cnn.com/2009/BUSINESS/01/26/obama.green/>
5. Office of Congressman Rick Larsen (D-WA). "Larsen: Energy Bill Builds Clean Energy Economy, Creates Jobs in Northwest Washington." June 26, 2009. <house.gov/apps/list/press/Wa02_larsen/PR_cleanEnergyJobs_062609.shtml>
6. Office of Congressman Jay Inslee (D-WA). "Historic Climate Legislation Passes U.S. House of Representatives." June 26, 2009. <www.house.gov/Inslee>
7. US House of Representatives. "American Energy and Security Act (H.R. 2454)." Press Release, June 2, 2009, pg. 1.
8. "The Phaseout of Ozone Depleting Substances." Environmental Protection Agency, April 14, 2009. <http://www.epa.gov/ozone/title6/phaseout/index.html>
9. "Opportunities and Quantification Requirements for Local Government Participation in Greenhouse Gas Emissions Trading Markets." World Resources Institute, July 8, 2008.
10. Ramseur, Jonathan L., Larry Parker, and Brent D. Yaccobucci. Congressional Research Service. "Market-Based Greenhouse Gas Control: Selected Proposals in the 111th Congress." May 27, 2009. <www.crs.gov> pg. 2.
11. Ibid., pg. 1.
12. Samuelsohn, Aaron. "House Panels Seek to Limit Effect of Cap and Trade on Nation's Pocketbook." E&E Publishing LLC, March 9, 2009. <http://www.eenews.net/public/EEDaily/2009/03/09/1>
13. "Opportunities and Quantification Requirements for Local Government Participation in Greenhouse Gas Emissions Trading Markets." World Resources Institute, July 8, 2008.
14. Ibid.
15. Ibid.
16. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
17. Ibid.
18. Congressional Budget Office. "Cost Estimate: H.R. 2454, American Clean Energy and Security Act of 2009." June 5, 2009. <www.cbo.gov>
19. Ibid.
20. Bailey, Ronald. "Cap and Trade Handouts." The Reason Foundation, April 7, 2009. <http://reason.com/archives/2009/04/07/cap-and-trade-handouts>
21. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
22. Ibid.
23. H.R. 1862, "The Cap and Dividend Act of 2009" (111th Congress, introduced April 2009), Rep. Chris Van Hollen (MD-8), Sec. 9912(a). <www.congress.gov>
24. Congressional Budget Office. "The Estimated Costs to Households from a Cap-and-Trade Program for CO2 Emissions." Statement of Douglas W. Elmendorf, Director, in testimony before the Senate Finance Committee, May 7, 2009. <www.cbo.gov>
25. Washington State Department of Ecology. "Economic Analysis of a Cap and Trade Program; Task 4: Analysis of Options for Limiting Market Manipulation." November 11, 2008.
26. United States Environmental Protection Agency. "Cap and Trade: Acid Rain Program Results." April 14, 2009. <www.epa.gov/airmarkt/cap-trade/docs/ctresults.pdf>
27. United States Environmental Protection Agency. "Acid Rain Program." April 14, 2009. <http://www.epa.gov/airmarkt/progsregs/arp/basic.html>
28. Ibid.
Stem Cell Act Spurs New Age for Medicine
Nezar Alsaeedi / Staff Writer
On March 9, 2009, an atmosphere of anticipation and excitement surrounded newly elected President Barack Obama as he signed the Stem Cell Executive Order, which allowed federal funding for research on embryonic stem cells. The order promised new hope for regenerative medicine, an area of science stalled by the actions of a hesitant Bush administration. Obtaining embryonic stem cells is an area of much debate, with moral and religious fervor fighting back waves of scientific curiosity and inquiry. Embryonic stem cells are obtained from embryos discarded at in-vitro fertility clinics and can differentiate into any type of tissue in the body. However, they require the destruction of their host embryos, which many religious protesters deem a destruction of potential human life. As a result of this moral debate, the Bush administration restricted the flow of federal money for embryonic stem cell research to 22 existing stem cell lines that it had approved.1 Research on embryonic stem cells was consequently stymied. Researchers looked toward private backers to fund their studies. Postdoctoral students on federal training grants could not conduct pertinent research on embryonic stem cells without violating the stipulations of their grants. Even international scientific collaborations were hindered by a lack of funds, equipment, and, most importantly, information. As Amy Comstock Rick, president of the Coalition for the Advancement of Medical Research, puts it, "If you
talk to some scientists, you hear absurd stories. One guy has green dots on the things in his lab that are federally funded and red dots on the privately funded equipment. That shows you how crazy it is."2 Aside from the obvious obstacle of funding, the 22 existing stem cell lines approved by the Bush administration–unique families of constantly dividing cells originating from one parent stem cell–were not genetically or ethnically diverse enough to experiment on. Furthermore, they were aging stem cells with accumulating chromosomal abnormalities, which made them more difficult to maintain. Yet, despite these obstacles, some found alternative avenues that bypassed federal funding and assistance. In response to the federal restrictions on embryonic stem cell research, citizens of California approved Proposition 71, known as the California Stem Cell Research and Cures Initiative.3 This initiative dedicated three billion dollars of California tax revenue to the establishment of the California Institute for Regenerative Medicine (CIRM), an institution that offers grants for the study of stem cells and associated cures, with high priority given to embryo-derived cells.4 CIRM represented the first major cry against government restrictions on pursuing the regenerative and therapeutic benefits of embryonic stem cells. Moreover, it illustrated the importance of keeping scientific progress independent of government initiatives. With embryonic research temporarily hindered, the focus of scientific research turned to potential medical treatments that used adult stem cells. Although these promise therapeutic
benefits, adult stem cells differ from embryonic stem cells in many ways. Adult stem cells have the potential to differentiate into specialized cells, such as blood or skin cells, but they are very hard to locate and often difficult to induce into specialized cell types. Embryonic cells can become any tissue in the human body and proliferate readily, which often leads to uncontrollable tumors. However, adult stem cells have one advantage not shared by their embryonic counterparts: the ability to be accepted by the host. Because adult stem cells come from within the individual's own tissues, they can be isolated from the patient, induced to form a functional tissue, and re-introduced as a native tissue, without the need for anti-rejection medication and its side effects. In fact, many disorders have been alleviated by isolating stem cells from the bone marrow and injecting them back into areas of damage. For example, Thomas Clegg, a 58-year-old congestive heart failure patient, had adult stem cells isolated from his bone marrow injected into his heart to induce regeneration of damaged heart cells. As Steven Stice, director of the Regenerative Bioscience Center at the University of Georgia, notes, "In the short term–say, the next five years–most of the therapeutic applications from stem cells will be from adult stem cells."5 Recently, researchers have shown that adult stem cells could hold greater promise than embryonic cells through their ability to be reprogrammed into an early embryonic form. These cells are called "induced pluripotent" stem cells because they are able to differentiate into many types of tissues following the introduction of four major genes. Although rudimentary in its development,
induced pluripotent stem cell technology would render the ethical controversy surrounding the collection of embryonic stem cells moot, because these cells come from the host without the destruction of any embryos. Furthermore, there would be no immune rejection of these cells, because they are considered native to the individual.6 However, before such an objective can be realized, a deeper understanding of the pluripotency of embryonic stem cells must be attained. With the newly signed Stem Cell Executive Order, medical discoveries through embryonic stem cells can gain their rightful recognition and can finally be realized. The federal funding of embryonic stem cell research can provide sought-after cures for many of the diseases plaguing patients today. In the short term, embryonic stem cells can be introduced into any organ to repair defects. This is especially significant for spinal cord injuries and Parkinson's disease, because neural adult stem cells are hard to find. These cells can also be used to repair damage brought about by stroke or heart attack and can replace defective insulin-producing beta cells in diabetic individuals.7 Nevertheless, the major significance of these stem cells lies in their long-term potential benefits. The executive order signed by President Barack Obama will not only enable scientists to discover the short-term regenerative solutions many patients need, but also offers long-term answers to bigger healthcare questions that plague America. Among the many dilemmas inherited by the Obama administration, the exponentially rising cost of healthcare has come to the forefront
as a major priority in the American socio-economic sphere. Rising costs have been attributed to many factors, including the research that goes into drug discovery and refinement. A more comprehensive understanding of the genetic makeup of embryonic stem cells could spur a revolutionary wave of medical discovery, yielding more efficiency at a lower cost. With federal money from the National Institutes of Health (NIH), scientists can study diverse ethnic stem cell lines with a multitude of unique defects and diseases. This would add to the information bank initiated by the study of the first 22 stem cell lines approved by the Bush administration.8 A variety of diseases affecting diverse ethnicities can be studied, and commonalities can be deduced through the study of these stem cell lines. Another long-term advantage is the use of embryonic stem cells in therapeutic experiments. Testing a drug on these cells will lead to more efficient results than testing the drug on conventional animal cells. These tests would offer more accurate results, because stem cells can be induced to differentiate into the target human organ under study. Moreover, these tests would be cost-effective, because they provide accurate and reliable data from a few meaningful tests, as opposed to multiple tests on animal models with less accurate findings. Aside from the financial benefits, scientists can gain a more complete understanding of how diseases progress by tracing the effect of diseases on human
cells from the earliest stage until the aging stage of their lifespan. This would save much-needed time and money on research and, consequently, would save many lives. Unfortunately, the realization of any medical dream involving embryonic stem cells will take years to materialize. Little is known about stem cells, and much must be learned in the years to come. However, every medical breakthrough begins with persistent scientific inquiry coupled with supportive government action. President Obama has clearly made this his initiative by supporting "sound science," as long as it conforms to humane and ethical standards. With the stroke of a pen, President Obama was able to usher in a wave of support for what could be an unparalleled medical revolution. It is only a matter of time before embryonic stem cell research gains its proper standing in the scientific community and begins to heal our disease-ridden society.

References
1. CNN. "Obama Overturns Bush Policy on Stem Cells." CNN Politics, 2009. Accessed 20 August 2009. http://www.cnn.com/2009/POLITICS/03/09/obama.stem.cells/index.html.
2. Kalb, Claudia. "A New Stem Cell Era: Scientists cheer as President Obama ends restrictions on research. What the move means for your future." Newsweek, 2009. Accessed 18 August 2009. http://www.newsweek.com/id/188454.
3. Mathews, Joe. "What Obama's Support for Stem Cell Research Means for California." Scientific American, 2009. Accessed 17 August 2009. http://www.scientificamerican.com/article.cfm?id=stem-cell-research-in-california.
4. Conger, Krista. "Stem Cell Policy May Aid State Research Efforts." Stanford School of Medicine: Medical Center Report, Stanford University, 2009. Accessed 18 August 2009. http://med.stanford.edu/mcr/2009/stem-cell-0311.html.
5. Hobson, Katherine. "Embryonic Stem Cells—and Other Stem Cells—Promise to Advance Treatments." US News and World Report: Health, 2009. Accessed 17 August 2009. http://health.usnews.com/health-news/family-health/heart/articles/2009/07/02/embryonic-stem-cells--and-other-stem-cells--promise-to-advance-treatments.html.
6. Lin, Judy. "Obama Stem Cell Policy Opens the Field to New Discoveries, Disease Treatment." UCLA Today, UCLA, 2009. Accessed 18 August 2009. http://www.today.ucla.edu/portal/ut/obama-stem-cell-policy-opens-the-85172.aspx.
7. Ibid.
8. Zenilman, Avi. "Reselling Stem Cells." The New Yorker News Desk, 2009. Accessed 20 August 2009. http://www.newyorker.com/online/blogs/newsdesk/2009/03/reselling-stem-cells.html.
microRNAs: A New Molecular Dogma
Robert Dilley / Staff Writer In 1958, Francis Crick identified the central tenet of molecular biology as the unidirectional flow of genetic information from DNA to RNA to proteins. DNA is transcribed by specific polymerases into messenger RNA (mRNA), which is subsequently translated into proteins on ribosomes. When looking retrospectively at a protein’s life, this dogma gives insight into its mechanisms of synthesis. However, it was recently discovered that approximately 95% of the human genome is non-coding DNA, which does not code for proteins. Major questions in all fields of biology have arisen about the functions and mechanisms of these vast stretches of DNA. Crick’s hypothesis greatly aided biological advances during the 20th century in the fields of molecular biology, genetics, and biochemistry. However, the recent discoveries of non-coding microRNAs and their properties and functions have challenged the original dogma. MicroRNAs (miRNAs) are an evolutionarily conserved class of non-coding RNAs that regulate gene expression in many eukaryotes. The first miRNA was discovered in the nematode Caenorhabditis elegans in 1993 by Victor Ambros’ laboratory [1]. At the same time, the first miRNA target gene was discovered by Gary Ruvkun’s laboratory [2]. These simultaneous discoveries identified a novel mechanism of posttranscriptional regulation. The importance of miRNAs was not widely recognized for another seven years, when interest was precipitated by the rise of another class of short RNA, the small interfering RNA (siRNA), which is involved in the phenomenon of RNA interference (RNAi), whereby mRNAs are degraded. Before looking at the biological impact of miRNAs, it is important to consider their biogenesis, mechanisms of action,
and the strategies for studying these intriguing molecules. Although miRNAs and siRNAs are both of the short non-coding RNA variety, they differ in their functions and biogenesis. SiRNAs have proven to be useful for in vitro laboratory studies to degrade a specific mRNA through RNAi, whereas miRNAs comprise an extremely important regulatory mechanism in vivo that operates in two closely related ways. Differing from double-stranded siRNA, miRNA is a form of single-stranded RNA about 18-25 nucleotides long, derived from a long primary precursor miRNA (pri-miRNA), transcribed from DNA by RNA polymerase II [3, 4, 5]. Pri-miRNAs can be exonic or intronic, depending on their surrounding DNA sequences, but the pri-miRNA has to be non-coding by definition. The long pri-miRNA is then excised by Drosha-like RNase III endonucleases or spliceosomal components to form a ~60-70 nucleotide precursor miRNA (pre-miRNA). The pre-miRNA is exported out of the
nucleus by Ran-GTP and a receptor, Exportin-5 [6, 7]. Once in the cytoplasm, Dicer-like endonucleases cleave the pre-miRNA, forming mature 18-25 nucleotide miRNA. Lastly, the miRNA is incorporated into a ribonucleoprotein particle to form the RNA-induced gene-silencing complex (RISC), which enables the miRNA to execute its function [8, 9]. The mature miRNA can inhibit mRNA translation in two ways.
Partial complementarity between the miRNA and the 3’-untranslated region (UTR) of the target mRNA inhibits translation by an unknown mechanism. If the complementarity between the miRNA and the 3’-UTR of the mRNA is perfect, then mRNA degradation occurs by a mechanism similar to the RNAi performed by siRNA. As of now, most miRNAs discovered regulate gene expression posttranscriptionally. However, given the large number of miRNA genes (hundreds to thousands or more per species), it is likely that some are involved in other regulatory mechanisms, such as transcriptional regulation, mRNA translocation, RNA processing, or genome accessibility [10]. In order to understand miRNAs, it is imperative to be able to visualize them, both spatially and temporally. Now that whole genome sequences are available for numerous organisms, the systematic analysis of mRNA expression levels has recently been expanded to the study of miRNA expression levels. Important techniques include microarrays, in situ hybridizations, reporter fusions, and northern blot analyses. Certain techniques give better spatial resolution, whereas others give better temporal resolution. Consequently, a combination of techniques most often pieces together the puzzle of miRNA localization and expression. Expression patterns will help to further the understanding of cis-regulatory factors, such as promoters and enhancers, that affect miRNA expression. Integrating the data of upstream regulators and downstream targets facilitates development of a miRNA pathway and circuitry map within the larger context of the cell [10]. MiRNAs exhibit precise developmental and tissue-specific expression patterns. They are implicated in the cellular processes of differentiation, proliferation, and apoptosis, and
some miRNAs may also have important functions in organ and immune system maturation. Recent studies have shown that dysregulation of miRNA expression is a common feature of human malignancies. Similar to protein-coding oncogenes and tumor-suppressor genes, miRNAs can also act as cancer-promoting or cancer-suppressing entities. The first identification of a miRNA abnormality in cancer came from studies of human chromosome 13q14 in chronic lymphocytic leukemia (CLL). Two miRNAs in this region, miR-15a and miR-16-1, were deleted or down-regulated in 68% of CLL cases [11]. Subsequent studies showed that the miRNAs induce apoptosis by suppressing the anti-apoptotic gene BCL2 [12]. Hence, these miRNAs act as tumor suppressors. MiR-15a and miR-16-1, along with other miRNAs, constitute a unique expression profile that correlates with the prognosis of CLL [13]. The first example of a miRNA demonstrated to function as an oncogene is miR-155, which is processed from the non-coding B-cell integration cluster (BIC) RNA. BIC was shown to cooperate with c-Myc in lymphomagenesis, and several years later, miR-155 was identified as originating from the last exon of the BIC mRNA [14]. Recent studies have shown that miR-155 expression is elevated in Hodgkin’s lymphoma samples, in diffuse large B-cell lymphoma, and in childhood Burkitt’s lymphoma, implicating its function as an oncogenic agent [15, 16, 17]. Although miRNAs compose only about 1% of the human genome, over 50% of them are located in cancer-associated genomic regions, such as fragile sites, frequently amplified or deleted regions, and break points for translocations [18]. Clearly, the
functions of miRNAs are important in normal cellular processes, and their dysregulated expression participates in disease progression. The discovery of miRNAs and their regulatory functions has opened the eyes of the scientific community to a new level of gene expression. MicroRNomics, a sub-discipline of genomics that describes the biogenesis and mechanisms of these tiny RNA regulators, has become an intense area of study, and novel findings are constantly elucidated by researchers all over the world. From basic cellular functions to disease biology, miRNAs are proving to be an invaluable source of information to piece together the regulatory pathways in all eukaryotes [10]. It is hoped that better understanding of the functions of miRNAs will provide a platform for their use in translational medicine. As stated by Gary Ruvkun, one of the pioneers of miRNA discovery, “It is now clear an extensive miRNA world was flying almost unseen by genetic radar” [19]. We have certainly entered a new era in the world of genomics. MiRNAs are revealing a much more complicated molecular dogma than previously conceived. The challenges to the central dogma of molecular biology may have raised more questions than answers, but have also ushered in many triumphs and exciting possibilities. References: 1. Lee, R. C., Feinbaum, R. L. and Ambros, V. (1993). The C. elegans heterochronic gene lin-4 encodes small RNAs with antisense complementarity to lin-14. Cell, 75, 843-854. 2. Wightman, B., Ha, I. and Ruvkun, G. (1993). Posttranscriptional regulation of the heterochronic gene lin-14 by lin-4 mediates temporal pattern formation in C. elegans. Cell, 75, 855-862. 3. Lin, S. L., Chang, D., Wu, D. Y. and Ying, S.Y. (2003). A novel RNA splicing-mediated gene silencing mechanism potential for genome
evolution. Biochemical and Biophysical Research Communications, 310, 754-760. 4. Lee, Y., Kim, M., Han, J. et al. (2004a). MicroRNA genes are transcribed by RNA polymerase II. European Molecular Biology Organization Journal, 23, 4051-4060. 5. Lee, Y. S., Nakahara, K., Pham, J. W. et al. (2004b). Distinct roles for Drosophila Dicer-1 and Dicer-2 in the siRNA/miRNA silencing pathways. Cell, 117, 69-81. 6. Lund, E., Guttinger, S., Calado, A., Dahlberg, J. E. and Kutay, U. (2003). Nuclear export of microRNA precursors. Science, 303, 95-98. 7. Yi, R., Qin, Y., Macara, I. G. and Cullen, B. R. (2003). Exportin-5 mediates the nuclear export of pre-miRNAs and short hairpin RNAs. Genes & Development, 17, 3011-3016. 8. Khvorova, A., Reynolds, A. and Jayasena, S. D. (2003). Functional siRNAs and miRNAs exhibit strand bias. Cell, 115, 209-216. 9. Schwarz, D. S., Hutvagner, G., Du, T. et al. (2003). Asymmetry in the assembly of the RNAi enzyme complex. Cell, 115, 199-208. 10. MicroRNAs: From Basic Science to Disease Biology, ed. Krishnarao Appasani. Published by Cambridge University Press. © Cambridge University Press 2008. 11. Calin, G. A., Dumitru, C. D., Shimizu, M. et al. (2002). Frequent deletions and down-regulation of micro-RNA genes miR15 and miR16 at 13q14 in chronic lymphocytic leukemia. Proceedings of the National Academy of Sciences USA, 99, 15524-15529. 12. Cimmino, A., Calin, G. A., Fabbri, M. et al. (2005). miR-15 and miR-16 induce apoptosis by targeting BCL2. Proceedings of the National Academy of Sciences USA, 102, 13944-13949. 13. Calin, G. A., Ferracin, M., Cimmino, A. et al. (2005). A microRNA signature associated with prognosis and progression in chronic lymphocytic leukemia. The New England Journal of Medicine, 353, 1793-1801. 14. Metzler, M., Wilda, M., Busch, K., Viehmann, S. and Borkhardt, A. (2004). High expression of precursor microRNA-155/BIC RNA in children with Burkitt lymphoma. Genes, Chromosomes, and Cancer, 39, 167-169. 15. Eis, P. S., Tam, W., Sun, L. et al. (2005). Accumulation of miR-155 and BIC RNA in human B cell lymphomas. Proceedings of the National Academy of Sciences USA, 102, 3627-3632. 16. Kluiver, J., Poppema, S., de Jong, D. et al. (2005). BIC and miR-155 are highly expressed in Hodgkin, primary mediastinal and diffuse large B cell lymphomas. Journal of Pathology, 207, 243-249. 17. van den Berg, A., Kroesen, B. J., Kooistra, K. et al. (2003). High expression of B-cell receptor inducible gene BIC in all subtypes of Hodgkin lymphoma. Genes, Chromosomes, and Cancer, 37, 20-28. 18. Calin, G. A., Sevignani, C., Dumitru, C. D., et al. (2004b). Human microRNA genes are frequently located at fragile sites and genomic regions involved in cancers. Proceedings of the National Academy of Sciences USA, 101, 2999-3004. 19. Gary Ruvkun, Professor, Harvard Medical School; Cell, S116, S95, 2004.
Double Chooz: Muon Reconstruction Leela Chakravarti / Focus Editor
Abstract ------------------------------------------------------------------This paper describes work done on the Double Chooz neutrino detection project at Columbia University’s Nevis Labs during the summer of 2009. Studies on the elimination of background events in the experiment are presented. Cables for the outer veto system that reduces this background were assembled and tested for systematic errors. The report also examines how the reconstruction accuracy of muons changes with different starting energies and positions in the detector, along with possible explanations for the observed trends.
1. Introduction ------------------------------------------------------------------1.1 The Standard Model The Standard Model, which describes elementary particles and their interactions, is, at present, the most widely accepted theory in particle physics, resulting from decades of experimentation and modification. However, it still does not provide a complete explanation of various phenomena. One main issue is that the theory only accounts for the electromagnetic, strong nuclear and weak nuclear forces, excluding the fourth fundamental force of gravity.
All fermions have corresponding antiparticles with equal mass and opposite charge. Quarks have fractional charge and interact via the strong force; they combine to form hadrons, like neutrons and protons. Up and down quarks form neutrons and protons, while quarks in the other two generations are generally unstable and decay to particles of lesser mass. Of the leptons, three are charged and three are electrically neutral, and all have spin 1/2. The electron, muon, and tau all have a charge of -1, though the muon and tau are much more massive than the electron, and thus have short lifespans before they decay. Each charged lepton corresponds to a neutral, much lighter neutrino particle.
1.2 Neutrinos and Oscillations The existence of the neutrino was proposed by Wolfgang Pauli as an explanation for the experimental result of beta decay of a neutron into a proton, which showed that the electrons emitted in the decay have a range of energies, rather than a unique energy. [7] The electrons also do not carry away all of the energy available to them, suggesting that another particle is emitted that makes up for these discrepancies. The neutrino, thought to be massless, left-handed (counterclockwise spin), uncharged, and weakly interacting, was thus introduced. However, experiments have shown that this description is not entirely correct. Under the conservation of Lepton Family Number in the Standard Model, neutrinos cannot change flavor; an electron neutrino cannot become a muon neutrino or a tau neutrino. [5] Through the weak force, an electron and electron neutrino can transmute into each other, but particles cannot directly change families. A tau cannot directly decay into a muon without production of a tau neutrino. Despite this prediction, neutrinos do appear to oscillate and change flavors. For example, as an electron neutrino moves through space, there is a chance that it will become a muon or tau neutrino. This implies that mass states and flavor states are not the same, as previously thought, and that neutrinos actually do have small masses. The waves of two different mass states interfere with each other, forming different flavor states, creating an oscillation probability for one neutrino to change flavors. In the case of electron and muon neutrinos, this probability is:
(1)
Figure 1: Standard Model in Particle Physics [2]. The model consists of force carrier particles known as bosons, along with two main groups of fermions, quarks and leptons. Fermions are thought to be the building blocks of matter, while bosons mediate interactions between them.
where νµ and νe are the two flavors, ∆m² is the difference of the squared masses of the two mass states, E is the neutrino energy, θ is the mixing angle, and L is the distance between the production and detection points of the neutrino.
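For reference, a standard textbook way to write this two-flavor vacuum oscillation probability is shown below in LaTeX notation and natural units; this is an assumed standard form, not a reproduction of the article's equation (1).

% Standard two-flavor vacuum oscillation probability (natural units); textbook form assumed.
P(\nu_\mu \rightarrow \nu_e) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right)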
The different neutrino flavor states are different combinations of the mass states (ν1, ν2, and ν3), and the transition from one basis to the other is described by a mixing matrix. In the three-neutrino case, this transition is described by a unitary rotation matrix that relates flavor eigenstates to mass eigenstates. [6]
(2)
This matrix can be split into three matrices, each of which deals with a different mixing angle.1 Two of the angles, θ12 and θ23, have been determined by experiments with solar and atmospheric neutrinos, but θ13 is still undetermined, with only an upper limit of 13°. Various efforts, such as the Double Chooz project, are underway to try to determine this last angle and better understand the way that neutrinos oscillate.
1.3 Double Chooz Double Chooz is a neutrino detection experiment located in the town of Chooz in northern France. Instead of studying solar or atmospheric neutrinos, this project focuses on neutrinos produced at two nuclear reactors. Through fission reactions of the isotopes U-235, U-238, Pu-239 and Pu-241, electron antineutrinos are produced and move in the direction of two detectors. The original Chooz experiment only had one detector, but Double Chooz plans to achieve higher sensitivity and accuracy by using both near and far detectors and looking for changes in antineutrino flux from the near to the far. The use of two detectors corrects for uncertainties about the absolute flux and the location of the experiment, because the two identical detectors are compared to each other and differ only in how far away each one is from the reactors.
Figure 3: Double Chooz detector vessel
Assuming that oscillations will change some electron antineutrinos into other flavors, fewer electron antineutrinos should be observed at the far detector than at the near detector. Should this effect be observed, the probability can be calculated and, using equation 1, the value of sin²(2θ13) can also be determined. The near detector is 410 m away from the reactors, while the far detector is 1.05 km away. Both detectors are identical, with main tanks filled with scintillator material doped with gadolinium [4]. When an electron antineutrino reaches either detector, it reacts according to inverse beta decay, in which the antineutrino is captured on a proton to produce a positron and a neutron:
(3)
In each tank, there are about 6.79 x 10^29 protons for the electron antineutrinos to react with. The actual detection of the particle is a result of the products of the inverse beta decay reaction. First, the positron produced annihilates with an electron, emitting two photons of about 0.5 MeV each. The neutron is then captured on a gadolinium nucleus after about 100 µs, emitting several photons with a total energy of around 8 MeV.
Figure 2: Inverse Beta Decay Reaction [1]
Both signals produce light, which is then detected by several photomultiplier tubes (PMTs) around the inner surface of the tank. This double signal with the appropriate time lapse indicates the presence of an electron antineutrino.
Each detector has many layers and components. The central region is a tank filled with 10.3 m³ of scintillator. Moving outward, the gamma catcher region provides extra support for detecting the neutron capture signal. Surrounding the gamma catcher is the buffer region, where the 534 8-inch PMTs are located. Finally, the inner and outer veto systems are in place to help decrease background signal by other particles, such as muons or neutrons.
2 Muon Background and Reconstruction 2.1 Muon Background One of the main sources of background events and causes of error in the Double Chooz experiment is the effect of cosmic ray muons, along with gamma, beta and neutron signals in the detector and rock. Near-miss muons in the rock
around the detector react and form fast neutrons, which go through the detector and create false signals. Muons produce neutrons in a detector through spallation (collision of a high energy particle with a nucleus) and muon capture. Recoil protons from interacting neutrons are mistaken for positrons, and successive neutron capture confirms the false antineutrino signal. Muons can also make it into the detector and cause such background signals. In order to properly reject these signals, it is important to know which specific signals to ignore.
2.2 Outer Veto and Cabling One of the ways to reduce error due to muon background is to use an outer veto system, which identifies muons that can produce backgrounds in the detector. Once these specific muons are tagged, the signals they produce can be eliminated from the data set. The outer veto detector differentiates between muons that go through the target and those that pass near the target. It also detects muons that may miss the inner veto completely or may just clip the edges of the inner veto. The outer veto is composed of staggered layers of scintillator strips above the detector. Strips in the X and Y directions can measure coincidence signals and identify muon tracks. Signals from light created in the scintillator are sent to PMTs, which process the signals in a similar fashion to the main detector. The arrival of event signals should be properly timed to minimize dead time for the detector and delay time of the signal, and to preserve the pulse signal. Cables that carry the signals must therefore be made uniformly and within these specifications, while also taking into account the physical distance that must be traversed. Different types of cables offer different capabilities for data transfer. The outer veto uses RG58 and RG174 cables for data transfer. Each type of cable has a different characteristic delay time per foot, which must be accounted for to understand the total delay time for the signal. Moreover, 50-foot and 61-foot RG174 cables were cut, and will be combined with 110-foot and 97.5-foot RG58 cables for data transfer in the upper and lower sections of the outer veto. RG58 cable must be used in addition to RG174 because the use of only RG174 would result in a degradation of the signal along the cable, as RG174 has a lower bandwidth and less capacity for data. The overall delay should be around 270 ns, and the cables must be tested for their individual delay times to ensure that this value remains constant for all cables to avoid systematic errors. RG174 sections of the outer veto cables have three cables bound together, one for the Clock, one for the Trigger, and one for the Gate. It is especially important that the Gate cable have the correct delay time, because it mediates data collection at certain intervals. Cables for the outer veto were tested for proper delay times using an oscilloscope. Each end of the cable connects to an input channel in the oscilloscope, and the difference in timing of pulse appearance is the delay time. It is apparent from the waveforms shown that there is a greater degradation of the signal when using only RG174.
Figure 4: Waveforms: Signal is better preserved along RG58 cable
For the RG174 cable, cable lengths of 50 feet should have a delay time between 76 and 81 ns, while 61-foot cables should have delay times between 93 and 99 ns. Plots of delay times indicate that all of the cables made for the outer veto have delay times within the expected ranges. These cables will be put into place in the final construction of the outer veto detector.
Figure 5: RG174 cable delay times
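As an illustration of the acceptance check described above (this is not code from the actual cabling work; the function and dictionary names are hypothetical), a measured delay could be compared against the quoted windows like this:

# Hypothetical sketch: flag outer-veto RG174 cables whose measured delay falls
# outside the expected windows quoted in the text (all values in nanoseconds).
EXPECTED_DELAY_NS = {50: (76.0, 81.0), 61: (93.0, 99.0)}  # cable length in feet -> (min, max)

def cable_within_spec(length_ft, measured_ns):
    """Return True if an oscilloscope-measured delay is inside the expected window."""
    low, high = EXPECTED_DELAY_NS[length_ft]
    return low <= measured_ns <= high

# Example: a 61-foot cable measured at 95.2 ns passes; one at 101.0 ns would be rejected.
print(cable_within_spec(61, 95.2), cable_within_spec(61, 101.0))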
2.3 Simulation: DOGS Overview Muons that get past the outer veto and reach the detector must be accounted for and identified. The muon-caused signals can be related to the muon’s location in the detector, as the signals are predicted to occur within some distance of that location, depending on the energy and position of the muon. Simulation software is used to imitate muons passing through the detector and the reconstruction of the muons’ starting positions and energies. Various algorithms and
simulation processes are run through in order to reconstruct particles in the detector. The Double Chooz collaboration uses a software package called Double Chooz Offline Group Software, or DOGS. Within DOGS, there are basic simulation scripts that generate different types of particles, study them through the detector, and reconstruct their properties, such as type, starting position, and starting energy or momentum. The DOGS simulation keeps track of how much energy is deposited, where the energy is deposited, other particles produced due to the original particles, signals detected at different photomultiplier tubes, time between signals, and track directions, among other things. All of this information is stored and can later be accessed for analysis.
2.4 Data Flow in Muon Simulation In order to work with DOGS and produce specific simulations, it is often necessary to modify the skeleton scripts that are originally provided to meet different needs. DOGS uses software called GEANT4 to generate particles and simulate their activity in a liquid scintillator detector. GEANT4 simulations take into account the geometry of the detector, the specific materials used, particle location with respect to the detector, optical photons, and properties of photomultiplier tubes. For this simulation of muons in the detector, one of the particle generator scripts was modified to include a generator gun, which allows for specification of particle type, the number of particles, production rate, starting position, initial momentum and energy. If energy is given, momentum is treated as directional only, to avoid over-specification of the problem. The generator script also contains information about whether or not photons and the Cerenkov light effect are included.2 In order to produce proper scintillation light for PMT detection, photons and Cerenkov light are activated. The script is run in DCGLG4sim, which is the Double Chooz version of the GEANT4 simulation. All information about the particles is used in the following simulation scripts to show a response to the particle; this information is then used to reconstruct the particle’s original properties. After particles are generated using DCGLG4sim, the output of the generation and particle tracking information is sent to the Double Chooz Readout Simulation Software, or DCRoSS. DCRoSS models the detector’s response to the particles, from signal detection and amplification at the photocathode on the PMTs to data acquisition based on varying trigger levels depending on the expected signal strength. Within DCRoSS scripts, PMT and data acquisition settings are changed to accommodate specific simulations. Finally, the output from DCRoSS is channeled into a Double Chooz Reconstruction, or DCReco, script. DCReco runs through reconstruction algorithms (discussed in the following section) to determine properties of the particles, such as spatial information and initial energy. It uses the information from DCRoSS about the location and magnitude of deposited energy in the detector to reconstruct particle information. Output from each step of the simulation is stored in different Info Trees, and can be accessed in order to compare differences between actual and reconstructed information or to look at where energy was deposited in the detector. Variables in Info Trees are accessed through the use of ROOT, a data analysis framework created to handle large amounts of data. A study of reconstruction efficiency as a function of starting energies and positions was thus conducted.
2 Charged particles traveling through a medium in which their speed is greater than the speed of light in that medium disrupt the electromagnetic field and displace electrons in atoms of the material. When the atoms return to the ground state, they emit photons; this is known as the Cerenkov effect.
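Schematically, the data flow described above runs generation, detector response, and reconstruction in sequence. The sketch below is purely illustrative Python with made-up function names; it does not use the real DCGLG4sim, DCRoSS, or DCReco interfaces.

# Purely illustrative sketch of the three-stage simulation chain described in the text.
# None of these function names correspond to the real DOGS/GEANT4 software.
def generate_muons(n, energy_mev, position_mm):
    """Stand-in for the generation step: produce n muons with chosen starting energy and position."""
    return [{"energy": energy_mev, "position": position_mm} for _ in range(n)]

def detector_response(particles):
    """Stand-in for the readout step: turn particle tracks into PMT charges and times (dummy values here)."""
    return [{"charges": [], "times": [], "truth": p} for p in particles]

def reconstruct(events):
    """Stand-in for the reconstruction step: estimate each event's starting position and energy."""
    return [{"reco_position": e["truth"]["position"], "reco_energy": e["truth"]["energy"]} for e in events]

events = detector_response(generate_muons(100, energy_mev=1000.0, position_mm=500.0))
print(len(reconstruct(events)))  # 100 reconstructed candidates to compare against truth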
3 Reconstruction Accuracy 3.1 Reconstruction Algorithms Muon spatial and energy reconstruction in the detector is based on a maximum likelihood algorithm that combines available information about an event. The characterization of an event is a function of seven parameters, namely the four-dimensional vertex vector (x, y, z, t), the directional vector (φ, θ), and the energy E [4]: (4)
The likelihood of an event is the product over the individual charge and time likelihoods at each of the PMTs:
(5)
Given a set of charges q_i and corresponding times t_i, L_event is the probability that the event has the characteristics given by the seven-dimensional vector α. Reconstruction looks for a maximization of L_event to determine what specific combination of vertex, direction, and energy corresponds to the event. DCReco uses the above method to reconstruct muon information. Because the process uses a likelihood algorithm, reconstruction is based on a probability and, therefore, will not always yield the same results, even if the original particles had the same starting information. The accuracy of this algorithm may also change depending on starting positions and starting energies.
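A generic way to write such an event likelihood, consistent with the description above but not copied from DCReco (the charge and time probability densities f_q and f_t are assumed placeholders), is:

% Generic form assumed for illustration; alpha collects the seven event parameters.
\alpha = (x,\, y,\, z,\, t,\, \phi,\, \theta,\, E), \qquad
\mathcal{L}_{\text{event}}(\alpha) \;=\; \prod_{i \,\in\, \text{PMTs}} f_q(q_i;\, \alpha)\, f_t(t_i;\, \alpha)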
3.2 Different Starting Energies To assess reconstruction efficiency and overall accuracy, it is necessary to test different original particle information. In this study, starting energies and positions were changed to look for efficiency trends. Different starting energies were determined through consideration of the energy spectrum of
muons. All of the muons tested at different energies had the same starting position in the detector, at a radius of 500 mm from the center at the top of the target region.
The tested energies were in a range from 1 GeV to about 25 GeV. This range corresponds to the section of the energy spectrum before the change in muon flux with respect to changing energy starts to decrease. Having the energies range over different orders of magnitude helped show trends on a grander scale. About seven different energies were tested for trends in reconstruction efficiency. Because the reconstruction algorithm takes into account energy detected at each PMT (in terms of charge), there could be a correlation between energy amounts and efficiency.
Figure 6: Energy spectrum of muons from the Double Chooz proposal [4]
Plots of the difference between reconstructed radial position and actual radial position show that both the mean and RMS values decrease with increasing energy. Histogram width and deviation get smaller as energy increases, and the reconstruction algorithms seem to get closer to predicting the actual starting position. The RMS value at 25000 MeV is about 1/5 the value at 1000 MeV.
Figure 7: RMS and mean values: difference of reconstructed and truth R positions.
Figure 8: RMS Values at changing energies
Including all tested energies indicates that there is a noticeable drop in the RMS value of the difference between reconstructed and truth positions, and thus an increase in accuracy, as energy increases. The effect of multiple Coulomb scattering as a particle goes through material could be responsible for this trend. Muons with lower initial energies, and thus less momentum, will not pass as easily through the detector, and the path may deflect because of multiple scattering off nuclei. This would result in a less well-defined path displayed by hits at the PMTs and a less accurate reconstruction of position. This idea is tested by calculating the deflection angle θ0 at the different energies, using the formula:
(6)
where βc is the velocity of the muon, p is the momentum, and x/X0 is the thickness of the scattering medium [3]. The value of x/X0 is calculated from the ratio of the track length to the radiation length in that material.3 The momentum of a relativistic particle is:
(7)
and at high energies is essentially equal in magnitude to the energy. The coefficient β for c (the speed of light in a vacuum) is calculated using the equation:
(8)
which considers starting energy and momentum. The theoretical deflection angle θ0 was calculated for each starting energy. Using tan(θ0) and scaling for the height of the tank results in a value in units of length comparable to the RMS values previously shown.
Figure 9: Multiple scattering prediction
3 Radiation length is defined as the mean path length required to reduce the energy of relativistic charged particles by the factor 1/e, or 0.368, as they pass through matter.
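For concreteness, the sketch below estimates the predicted deflection angle and its projection onto a length, following the standard multiple-scattering formula from the Particle Data Group review cited as [3]; the values used for x/X0 and for the tank height are stand-in assumptions, not numbers taken from the paper.

import math

MUON_MASS_MEV = 105.66          # muon rest mass, MeV/c^2
TANK_HEIGHT_MM = 2500.0         # assumed placeholder for the scintillator height; not from the paper

def deflection_angle(energy_mev, x_over_X0):
    """RMS multiple-scattering angle (radians) from the standard PDG formula, ref. [3]."""
    p = math.sqrt(energy_mev**2 - MUON_MASS_MEV**2)   # relativistic momentum in MeV/c, as in eq. (7)
    beta = p / energy_mev                             # beta = pc/E, as in eq. (8)
    return (13.6 / (beta * p)) * math.sqrt(x_over_X0) * (1 + 0.038 * math.log(x_over_X0))

def lateral_displacement_mm(energy_mev, x_over_X0=5.0):
    """Scale tan(theta0) by the tank height to compare with the RMS position differences."""
    return math.tan(deflection_angle(energy_mev, x_over_X0)) * TANK_HEIGHT_MM

for e in (1000.0, 5000.0, 25000.0):   # starting energies in MeV
    print(e, round(lateral_displacement_mm(e), 1))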
The original plot now includes the prediction of multiple scattering. Although the trend does not appear exactly identical, the result that the observed data and theoretical prediction are on the same order of magnitude, and in the same approximate range, shows that they could correspond. Plotting this range at the varying starting energies indicates that it is likely that multiple scattering produces the results seen, and the effect of this on reconstruction accuracy should be accounted for when considering different energy muons.
3.3 Different Starting Positions Muons starting at different distances from the center of the detector were also considered. In each run, muons were generated at the top of the detector, so that they were through-going. Runs with varying radii from the center changed the x-position based on different sections of the detector.
Figure 10: Various starting positions in detector [4]
According to the diagram, x-position was changed to cover each different section, as well as tracks in the middle of sections and close to the walls, to see if PMT response changes when the particles are very close to the walls. Simulations took place in the middle of the target region (1), close to the wall on the target side (2), close to the wall on the gamma catcher side (3), in the middle of the gamma catcher region (4), close to the wall on the gamma catcher side before the buffer area (5), and through the buffer area (6).
As a preliminary check, a plot of the energy deposited in the detector’s central region shows that the particles are being generated according to the position specifications. There is a slight peak in the gamma catcher region where the muons go through the most scintillating volume. The buffer region, which is non-scintillating, should not detect any energy, which is also observed. The range of values for energy deposited also matches up with the expected energy loss rate of about 2.3 MeV/cm in the tank.
Figure 11: Energy deposited in central region of detector
Plots of differences in X and Y position reconstruction show that muons going through the target region are reconstructed slightly better than those going through the gamma catcher region. Comparing differences between reconstruction and truth starting positions shows that accuracy decreases when muons go through the gamma catcher region or near vessel walls. This effect could be due to changes in PMT response. Additional testing at more positions would give a more precise indication of whether or not PMT response is affected by muons that pass very close to the walls.
Figure 12: X and Y positions: difference between reconstructed and truth
A general decrease in accuracy appears in the gamma catcher region when all positions are considered. Other areas of the detector show fairly consistent reconstruction accuracy. Muons starting either in the target or outer regions seem to be reconstructed to within approximately 50 mm of the truth vertex.
Figure 13: RMS values at changing positions
4 Conclusions Identifying and rejecting background events are important parts of data collection in Double Chooz. When background rates are accounted for, data collection becomes much more efficient. The use of hardware devices like the outer veto makes this possible in the actual data collection. Outer veto cabling tests indicate that the delay times and signal degradation are within acceptable ranges. In analysis, reconstruction algorithms are important for understanding the locations of background signals and tagging specific false events. Studies of muon reconstruction in the DOGS package give information about the software’s accuracy, as well as how it can be used to assist in the analysis of real data. Reconstruction accuracy appears to decrease in the gamma catcher region and close to the vessel walls, possibly due to changes in PMT response. Reconstruction accuracy appears to increase with starting energy, likely due to the effect of multiple scattering at lower energies. Although these trends are observed in this study, higher-statistics runs at additional energies and positions would provide a more definitive analysis to extend this initial study.
Acknowledgments I would like to extend a sincere thanks to project advisors Mike Shaevitz, Leslie Camilleri, Camillo Mariani, and Arthur Franke for their help and guidance on my work, as well as to everyone else who worked with me on Double Chooz this summer for support and input. I would also like to thank the National Science Foundation for providing me with the wonderful opportunity to work at Nevis this summer. References 1. Inverse beta decay, http://theta13.phy.cuhk.edu.hk/pictures/inversebetadecay.jpg. 2. Standard model, http://www.fnal.gov/pub/inquiring/timeline/images/standardmodel.gif. 3. C. Amsler et al. (Particle Data Group). Passage of particles through matter. Physics Letters B667(1), 2008. 4. F. Ardellier et al. (Double Chooz Collaboration). Proposal. arXiv:hep-ex/0606025, 2006. 5. J. Kut. Neutrinos: An insight into the discovery of the neutrino and the ongoing attempts to learn more. 1998. 6. M. Shaevitz. Reactor neutrino experiment and the hunt for the little mixing angle, 2007. 7. R. Slansky et al. The oscillating neutrino: an introduction to neutrino masses and mixing. Los Alamos Science, 25:28–72, 1997.
Resigned to the Fringes:
An Analysis of Self-Representations of Argentine Jews in Short Stories and Films
Helen Goldberg / Staff Writer
Abstract ------------------------------------------------------------------Immigration and assimilation have been hot-button issues in American public discourse since the formation of independent states in the New World, but often left to the wayside is a discussion of how immigrant groups see and represent themselves in the face of pressure to assimilate. In Argentina, Jewish immigrants around the time of the centennial (1910) began to internalize and reproduce images of successful integration into the larger Argentine society in products related to their own cultural heritage, most notably in the short stories of famed Jewish-Argentine writer Alberto Gerchunoff. In sharp contrast, the films of Argentine-Jewish director Daniel Burman, modern-day counterparts to Gerchunoff’s stories, reflect a growing sense of pessimism and resignation that currently pervades a Jewish community relegated to the fringes of society, due to their ultimate failure at real integration. Seen as counterparts, Gerchunoff’s short stories and Burman’s films reflect very different attitudes toward assimilation within the Jewish communities of Argentina. During periods when Jews have felt that there was real potential for them to be just as Argentine as anyone else, such optimism is clear in their self-representations designed for public consumption, whereas when they have felt disappointed and relegated to the fringes of society, such pessimism and resignation is again reflected just as clearly in these representations. Why have Jews been unable to fully assimilate? Furthermore, how influential are these self-representations? They certainly reflect contemporary attitudes about Jewish integration in Argentina, but what role will they play in molding future attitudes and enlarging or further limiting the social space granted Jews in the country?
Introduction -------------------------------------------------------------------
The word “assimilation” has two meanings: it refers to both the outward accommodation of one social group into a larger dominant group, in terms of speech, dress, and customs, and the internalization of a dominant belief. Assimilation, in the first sense of the word, held particular sway in early-twentieth-century Argentina, following a wave of immigration. Official rhetoric promoted the idea that all immigrants could be molded into true Argentines. Jewish immigrants in particular began to “assimilate” this rhetoric into their own literature soon thereafter. The optimistic tone of the official rhetoric was translated into an equally optimistic tone in Jewish short stories.
History, however, has shown that Argentine Jews have never actually been able to fully assimilate into Argentine society. Focusing on their own definition of assimilation via economic integration, Jews have failed to understand the Argentine nationalist and ideological definition of assimilation. Their frustration with this failure to be accepted within mainstream Argentine society is today reflected in the overwhelmingly pessimistic and resigned tone of current films representing Argentine Jewish life. It is thus that self-representations by members of the Jewish community in Argentina are reflective of the extent to which they have been able to mold themselves to fit the Argentine assimilationist ideal. Homogenization and Jewish Immigration The official attitude of Argentine government officials toward immigrants arriving around the turn of the twentieth century was, interestingly, optimistically inclusive. While the “state assimilationist policy sought to subordinate minority cultural identity to a national ideology,” the emphasis on ideology, rather than on race or ethnicity, seemingly provided a space for the new immigrants to become fully “Argentine.” Argentine nationalism was focused at this point on “the creation of the national citizen, individuals publicly certified and approved of by the state,” not on the limiting of opportunities for immigrants. This sense of inclusiveness seems at first to contradict the strong sense of nationalism prevalent amongst the Argentine elite, but even the elite found themselves invested in promoting immigration and integration in order to build a stronger Argentine state. The governing class in Buenos Aires during this period was particularly influenced by positivism, which emphasized “the necessity of an integrated reform of immigrant society based on education, which would bring numerous benefits, such as greater political cohesiveness, economic growth, and the general modernization of society. It was also understood, as well, as an elimination of religious values and indigenous culture.” State leaders were the direct heirs of the Liberal project of the early 19th century, which promoted the formation of strong, independent nation-states, “whose integrating capacity stemmed from the development of cohesive, homogenizing master narratives of national identity diffused by the educational system.”
hurj spring 2010: issue 11 Effects of Assimilation Rhetoric on the Jewish Population Argentine Jews began to internalize and reproduce images of successful assimilation into the larger Argentine society beginning around 1910. This process of internalization of official rhetoric had particular effect on the Jewish community, specifically because of their education and the ambiguity of their ethnic identities. Assimilationist rhetoric was largely disseminated via the public school system. Because Argentine officials understood education to have a “civilizing” effect on immigrants, school curriculums promoted the replacement of any foreign language or ideology with that of the criollo population, that is, of those born in Argentina of Spanish colonial descent. Jews tended to be relatively well educated; they were more likely to send their children to school and to be literate enough to read newspapers and journals than members of other ethnic groups. According to Ricardo Feierstein, an Argentine Jewish intellectual, this emphasis on education was particularly important with regards to assimilation because it was the “intellectual who would begin to accept the images that others had of him” first. Moreover, Argentine Judaism was, and in many ways still is, a heterogeneous category. Jews in Argentina came from Europe, the Middle East, and other countries in the Americas. They spoke a variety of languages, held different beliefs, were educated to varying extents, and did not always consider one another to be of the same background. Jewish thinker Gershom Sholem once commented that, in Argentina, “One cannot define that which is called Judaism…Judaism is a living phenomenon in the process of constant renovation.” Together, these factors made the Argentine Jews particularly susceptible to assimilation rhetoric. Jewish elites soon began to employ their specific conception of Argentine assimilation in public discourse. Baron Maurice de Hirsch, the philanthropist who founded the earliest Jewish settlements in Argentina, “wanted the Jews to assimilate and so solve the ‘Jewish question.’” He promoted the idea that the best way to take ownership of Argentine identity was by contributing to the national project of building a great Argentina. He said in a speech to a Jewish congregation in Buenos Aires that the process of “making good agriculturalists and patriotic Argentines, though conserving their religious faith,” would lead to the “secularization and Hispanicisation of Argentine Jewish culture.” de Hirsch’s influence could be seen throughout the Jewish community, as
new immigrants sought to learn trades, agricultural methods, and to focus on economic goals of assimilation, rather than ideological or cultural ones. Self-Representations of Jewish Integration during the Centennial Jewish immigrants’ attempts to assimilate via economic integration were reflected in short stories written about them by Alberto Gerchunoff, a Jewish journalist and writer in Argentina during the turn of the twentieth century. Gerchunoff is perhaps best known for his series of short stories entitled The Jewish Gauchos of the Pampas, which appeared in Spanish in weekly installments in a popular Buenos Aires newspaper, and introduced much of the literate community to Gerchunoff’s own idealized understanding of the Jewish-Argentine relationship. The stories reflect a strong feeling of optimism with regards to breaking out of the limited social space to which many Jews were accustomed. In his first story, he writes, “To Argentina we’ll go-all of us! And we’ll go back to working the land and growing livestock that the Most High will bless.” This optimism reflects Gerchunoff’s belief that the Jews could insert themselves into the nationalist definition of a true Argentine citizen simply by working and contributing. Where Gerchunoff’s optimism became a problem for the Jews was in the fact that he, although seen as a trusted, legitimate voice within the community, perpetuated an assimilation imperative emanating originally from the State. This perpetuation of the assimilation ideal based on economic integration, printed and distributed for all to read, spread the idea throughout the community. Jewish Integration into Argentine Society Argentine Jews are by no means rejected by the larger society; they have, in many cases, reached levels of education and income impressive for the country. Yet, they remain on the fringes of Argentine life. They remain both “Argentina’s only ethnic group” and the only group unable to be considered as fully Argentine, without any sort of qualification. What continues to be a “crucial factor in the identity of the Jews…is the fact that they are considered as such by their non-Jewish neighbors and society.” They continue to be seen as the “other,” even today, as the fourth and fifth generations of native-born Argentine Jews consider themselves to be, without qualification, Argentine. A Christian character in a film by Jewish Argentine filmmaker Daniel Burman compares being homosexual to being Jewish; they are both alternate identities that are at first invisible, but, once known, relegate the person to the position of permanent outsider. The Jews of Argentina are stuck. They have reached the middle class, but cannot move above it into the elite strata. To compound the glass ceiling they experience economically, they possess little political or social voice to improve their situation. They are left with a comfortable
humanities income but little legitimacy in the public setting. Jews failed to fully assimilate into mainstream Argentine society largely due to their misunderstanding of the Argentine definition of assimilation. For an Argentine national identity to exist, assimilation had to be along ideological lines. It had to be about “questioning the legitimacy or the authority of …marginal cultures.” Baron de Hirsch’s plan of integrating economically, while maintaining Jewish belief systems, led many Argentines to question Jewish loyalty and patriotism. The community’s emphasis on conforming to the economic, as opposed to the ideological or cultural, relegated the Jews to their place on the very edges of accepted society. The Jewish community in Argentina reacted to their failure to fully integrate by retrenching. They sought to conserve the gains they had made by limiting risk-taking, which could potentially lead to losses of economic and educational gains. The more elite members of the Jewish community became particularly concerned with respectability. Jews who had already reached some measure of social standing put up strict barriers, making attempts to limit contact between themselves and more recent immigrants. The Jewish community realized that it was unlikely to make any more inroads into full acceptance within Argentina, so it resigned itself to the space allowed, and sought to ensure that said space would not be further reduced in any way. The Jews were allowed a small universe, several neighborhoods in the spatial sense, and a certain level of respectability in a social sense, and avoided the risks that would be necessary to expand that little bubble. Contemporary Self-Representations of Jewish Integration The modern-day counterparts of Gerchunoff’s stories, the films of Argentine-Jewish filmmaker Daniel Burman, reflect the sense of pessimism and resignation that the Jewish community feels at being left on the fringes of society. His three best known films can be viewed both as a series and as separate entities. Seen as a series, all three movies reflect Burman’s resignation; they show his being stuck. He uses the same actor in all three to play a very similar protagonist, a young, neurotic Jewish man trying to find a way to individuate himself from his family. Other actors and characters are also repeated in the films, most notably, the perpetually servile indigenous character Ramón, played by Juan Jose Flores Quispe. Burman also repeats his characters’ names. The love interest is named Estela in two of the three movies, and Burman’s protagonists are all named Ariel, a noticeably Ashkenazi Jewish name. All three movies take place in Buenos Aires, almost entirely in Ariel’s home or workplace. Scenes in public are short, or specifically business-related, as though Jews have no other real contact with the larger society, outside of business dealings. Burman demonstrates both a very circumscribed range of options and a sense of resignation when it comes to selecting only from that range. To take each movie individually, Waiting for the Messiah,
humanities features a young Jewish man who constantly seeks to find employment outside the family restaurant. Ariel succeeds in obtaining three contracts with a television company, but all last only six months, and all are “trial only” contracts. He secures some space for himself outside the Jewish neighborhood where he lives and works for his family, but it is only temporary. He talks a great deal about la burbuja, the bubble, from which he cannot escape. The bubble symbolizes for Ariel the limited options he has in life, which he describes as fitting into an already predetermined plan to which he learns to resign himself. One of Ariel’s last and most profound lines, “Uno se adapte,” one adapts, shows that despite his early attempts to take risks and try to widen the social space granted to him as a Jewish Argentine, he eventually learns to stop reaching and accept what he has. Similar themes appear in Lost Embrace. Like the first Ariel, this Ariel lives with his family and works in the family business, in this case a lingerie shop. The lingerie shop is located in a small shopping mall, between a Korean store and a modern-looking Internet shop. The owners of the Korean store are recent immigrants who are viewed with contempt, while the owners of the Internet shop are ideal Argentines: fair-skinned, Spanish-speaking, and economically successful; they are far more respected than the Korean couple. Ariel’s family’s shop is located in between the two other shops, and his family receives a level of respect that is also somewhere between the two. Jews have reached a level of assimilation higher than the more visible immigrants groups, such as recent Asian immigrants, but still have not reached the level of Christian Argentines. Again, they are caught in the middle. Family Law, the third of the series, features an Ariel who has made a bit more progress. This Ariel feels enormous pressure to become a lawyer and join his father’s practice. His father even names the practice “Perelman and Son,” long before Ariel decides to study law. Ariel succumbs and graduates from law school, but he individuates himself a bit by working as a professor and a defense lawyer for the state, as opposed to in his father’s firm. These positions put this Ariel fully in the public view, working for large, integrated public institutions, rather than ones owned and frequented primarily by Jews. But, by the end of the movie, Ariel leaves these jobs and takes over his father’s firm. He resigns himself to the role that was always predestined to be his: that of the Jewish lawyer working in his father’s firm. Another interesting aspect of this third Ariel is his marriage to a Christian woman. There is no discussion about the difficulties of intermarriage, but when his wife sends their son to a Swiss Catholic preschool, he finds himself strangely upset at the idea that his son could become wholly assimilated. This scene raises the question as to what extent the bubble is selfmaintained, or even self-created. Are the Jews of Argentina resigning themselves to being a fringe community because it was all the space granted to them, or because they themselves created a sort of parallel society that only sometimes
intersects with the larger Argentine community, in order to preserve tradition?
Conclusion
--------------------------------------------------------------------------------Ultimately, Jewish self-representations in Argentine stories and films seem to be interwoven with the feelings of belonging or not belonging that predominate during the time period. The extent to which they have integrated and their modes of representation cannot be separated. Seen as counterparts, Gerchunoff’s short stories and Burman’s films reflect very different attitudes toward integration within the Jewish communities of Argentina. During times when Jews have felt that there was potential for them to be just as Argentine as anyone else, such optimism is clear in how they represent themselves, and when Jews have felt disappointed and relegated to the fringes of society, such pessimism and resignation is just as clearly reflected in these representations. The question that arises for future prospects of integration of Argentina’s Jews is to what extent such representations of pessimism and resignation will prevent Jews from trying to burst the bubble limiting their role in the larger society. References 1. Armony, Paul. Jewish Settlements and Genealogical Research in Argentina. http://www.ifla.org/IV/ifla70/papers/091e-Armony.pdf, Aug. 2004. 2. Elkin, Judith Laikin and Gilbert W. Merkx, eds. The Jewish Presence in Latin America. (Boston: Allen & Unum, Inc.), 1987. 3. Family Law. Dir. Daniel Burman. Perf. Daniel Hendler, Julieta Diaz, Arturo Goetz. DVD. TLA Releasing, 2006. 4. Feierstein, Ricardo. Contraexilio y Mestizaje: Ser Judio en la Argentina. (Buenos Aires, MILA Ensayo), 1996. 5. Gerchunoff, Alberto. In the Beginning. In The Jewish Gauchos of the Pampas. (New York: Abelard-Schuman), 1955. 6. Humphrey, Michael. Ethnic History, Nationalism, and Transnationalism in Argentine, Arab and Jewish Cultures. as quoted in Ignacio Kilch and Jeffrey Lesser, eds. Arab and Jewish Immigrants in Latin America: Images and Realities. (London: Frank Cass), 1998. 7. Lost Embrace. Dir. Daniel Burman. Perf. Daniel Hendler, Adriana Aizenberg, Jorge D’Elia. DVD. TLA Releasing, 2003 8. Mirelman, Victor A. Jewish Buenos Aires, 1890-1930: in Search of an Identity. (Detroit: Wayne State University Press), 1990. 9. Newland, Carlos. The Estado Docente and its Expansion: Spanish American Elementary Education, 1900-1950. in The Journal of Latin American Studies Vol. 25 No. 2 (Cambridge: Cambridge University Press), 1994. 10. Senkman, Leonardo. Jews and the Argentine Center: A Middleman Minority. In Judith Laikin Elkin and Gilbert W. Merkx, eds. The Jewish Presence in Latin America. (Boston: Allen & Unum, Inc.), 1987. 11. Vidal, Hernan. The Notion of Otherness within the Framework of National Cultures. in Juan Villegas and Diana Taylor, eds. Gestos: Representations of Otherness in Latin American and Chicano Theater and Film. (University of California Press), 1991. 12. Waiting for the Messiah. Dir. Daniel Burman. Perf. Daniel Hendler, Enrique Pineyro, Chiara Caselli. DVD. TLA Releasing, 2000.
humanities
Innovation & Stagnation in Modern Evangelical Christianity
Nicole Overley / Staff Writer
The Economic Framework
Modern Christian evangelicalism in the United States has been supported immeasurably by the growth of megachurches. Many of these enormously successful churches, having already planted “daughter churches” across this country, have chosen to take their evangelical mission overseas—primarily to famously secular Western Europe, Great Britain in particular. I spent almost a month in Britain and continental Europe this summer, exploring the possibility that, with these churches’ almost limitless financial resources and manpower, they could contribute to a reversal of European secularism, which has only grown more profound in recent decades—but how would Europeans respond to an influx of stereotypically American spirituality? To answer this question, I utilize a “market-oriented lens,” an approach from economics that can
“illuminate what might otherwise seem a very disorderly landscape.”1 Firstly, I assert that the mostly state-supported churches of Europe gave rise to a monopolistic religious “economy”: the constant and reliable influx of state funds ensures the survival of the state church, regardless of its clergy’s response (or lack thereof) to the needs of its congregation. With little to no serious competition or other outside threats, this supply-side stagnation facilitates the increasing decline of religion in Europe, which is further enabled by each country’s incremental steps away from dependence upon the church. Therefore—and this is key—I combine the supply-side secularization hypothesis of sociologists like Roger Finke, Rodney Stark, and Steven Pfaff with an equal emphasis on declining demand, blaming a lack of both for the declining role of religion in European society. My further proposition relates this declining role to the
megachurch movement in America, which is best analyzed in the context of the competitive and diverse American religious economy, in that it is a direct result of American churches’ acknowledgement of and response to the modernization of society. If the American megachurch movement, tweaked to fit the demands and desires of its customers, were exported to Europe—expanding, as any traditional capitalist firm inevitably would, in search of greater profits—it could introduce competition into a currently stagnant religious market, naturally increasing supply as well as demand, thus reversing the continent’s trend towards secularization.
An Economic History of American Religion: Historical Precedents for a Modern Phenomenon
Having immigrated to the colonies primarily to escape from religious persecution, colonists quickly established religious freedom in the soon-to-be United States as a defining tenet of colonial life. The famous separation of church and state that so proudly differentiated America
from Europe created an unregulated religious market and rampant pluralism of the Christian faith—which is manifested today in the myriad of denominations here. Some scholars argue that “pluralism [should instead weaken] faith—that where multiple religious groups compete, each discredits the other, encouraging the view that religion per se is open to question, dispute, and doubt.”2 But there is no doubt that religious participation in America does, indeed, show an indisputable long-run growth trend, despite the expected cyclical upheaval. Even though, with few exceptions, consistent decline is exhibited in Europe and much of the rest of the world, American participation in organized religion has increased markedly over its two centuries of history.
The Crystal Cathedral in Anaheim, CA (photo by Nicole Overley)
What can explain this? Finke and Stark’s prominent analysis does not point to differences in demand for religion between Europe and America—in fact, they claim that all humans, across the globe, have the same innate desire for spiritual life and fulfillment. Instead, they credit the glut of supply in the United States compared to other nations and, furthermore, the actions—most notably innovation—that this “competition” encourages. Their supply-side theory of competition conjectures that this crowded religious market in America allows for the “rational choice” of the nation’s religious consumers.3 Mimicking the way in which Americans choose what products to consume in their everyday trips to the grocery store, this model asserts that churchgoers rationally, if subconsciously, assess the marginal benefit afforded to them by each church, based on their personal preferences, and choose the one with the highest benefit and lowest cost. In short, it allows each person to find the spiritual experience which suits them best. It thus logically follows that, in nations lacking such choices, there is less involvement in religion, because the only available option will suit fewer people.4 Americans—as well as, we will see, Britons—are fundamentally consumers; they “shop around for their spiritual needs.”5
This idea of “rational choice” inevitably inspires consideration of demand-side theories, which counter that religious phenomena in America and elsewhere can in fact be explained by fluctuations in demand—in other words, shortages and surpluses. Some argue that different cultures beget different levels of spiritual dependence, taking Stark’s constant “innate demand” and adjusting it from nation to nation, and continent to continent—a theory that is not without its merits, and will be discussed later in relation to the evolution of contemporary British society.6 But a second, more active way to address demand focuses on demand-side interventions: in other words, churches in America, through the years, have intervened purposefully to keep demand high. Surprisingly, this can fit with Stark’s supply-side hypothesis like two pieces of a jigsaw puzzle. Ultimately, it is only with competition and the desire to stay ahead of the curve that these churches actively try to augment religious demand; in a monopolistic environment, this would be unnecessary. Assuming each American church desires to be the religious choice of the maximum number of Americans possible, the consistent shift in the practice of American religion—a
movement that correlates with sociocultural change and America’s religious growth trend—signals exactly such a relationship between ample supply and changing demand. First, let’s explore the reality that, unfortunately for some American churches, not every denomination or segment of religion has been fortunate enough to share in this long-run general growth trend. Herein lies the most crucial principle to grasp, the one that illuminates why there has been a trend of both growth and dramatic change in American religion, and one which is rooted in the straightforward realities of a laissez-faire free market. From the perspective of an individual church, if it wants to grow and become increasingly popular, its goal is pure and simple: to continuously reinvent itself to ensure that, as people change, it still identifies with a majority of them—and not every church is able to do this. As Finke and Stark assert, the “churching of America was accomplished by aggressive churches committed to vivid otherworldliness.”7 In other words, the most successful churches learned how to attract, commit, and retain followers in a changing world, by responding and adjusting to the changes in society that their “customers” had already grown accustomed to—before those changes could drive a wedge between these customers and Christianity. As many churches find, it’s a skill which is vitally important if they want to make a “profit” of believers. From an economic perspective, just like any business, the effectiveness of a church depends upon not only its organization and its product but also “their sales representatives and marketing techniques”—methods of increasing demand that have played a huge role in the success of American religion.8 When translated into language reflecting the history of American religion, spiritual marketing brings to mind names like the famous “fire and brimstone” preacher George Whitefield, an example of evangelical Christianity uniquely tweaked to fit America in the early 18th century. The “Great Awakening” that Whitefield pioneered was essentially a
“well-planned, well-publicized, and well-financed revival campaign,” which fed off of the fervor of the times and, in doing so, managed to capitalize on the spirit of the era.9 It helps to think of Whitefield and his fiery camp meetings in the context of the colonial American landscape during his lifetime: besides being “quite simply one of the most powerful and moving preachers,” he drew crowds that gathered outside to hear him speak wherever he traveled.10 His revivals were almost shocking and revolutionary to attendees, simply because of the rather subdued nature of church preaching until then. The Great Awakening had several important ramifications for the future of American religion and helped to shape the development of the megachurches today. How? Primarily, it “demonstrated the immense market opportunity for more robust religion,” setting a precedent for later preachers and whetting American churchgoers’ appetite for it as well.11 Also, it’s interesting to note that disillusioned members of the American mainline denominations, more than any other group, were the ones who flocked to Whitefield’s camp meetings, just as nondenominational megachurches draw lapsed members from established denominations today. It could be argued, then, that the rise of Whitefield predicted the “decline of the old mainline denominations,” caused by “their inability to cope with the consequence of religious freedom and a free market religious economy,” especially when a viable competitor arose—a real-life example of the failure to “keep up” that we just addressed.12 Around this time the idea of “revivalism” developed, centered around the outbreaks of “public piety” occurring throughout America in the late eighteenth and nineteenth centuries. Reminiscent of the Great Awakening, these revivals were planned and conducted periodically “to energize commitment within their congregations and also to draw in the unchurched.”13 The idea of the
camp meeting, developed in the early nineteenth century, became hugely popular in rural America in part due to the fact that camp meetings occurred in venues that were familiar and agreeable to attendants: the open outdoors. This comfortable setting mirrors today’s casual, come-as-you-are megachurches that shy away from traditionally ornate or grandiose environs.14 A contemporary observer noted, “Take away the worship [at the camp meeting] and there would remain sufficient gratifications to allure most young people”: in other words, they made Christianity seem comparatively fun to the “contemporary” generation at that time.15 Fast-forward to the 1960s, and we see how the Great Awakening was only one of a series of innovative changes during the history of American religion that occurred in response to cataclysmic social shifts. The cultural crisis and subsequent questioning of the “hippie” sixties generation, coupled with the increasing unease and outright protest brought about by Communism abroad and the Vietnam War, encouraged disillusionment in what was perceived as religion that was out of touch with the gritty real world. Suddenly, traditional or “mainstream” churches, with their white steeples, Sunday schools, and potluck dinners, began to experience a period of decline that continues today and has affected almost every one of the nation’s numerous established denominations. The mainline denominations “suffer in times of cultural crisis or disintegration [like during the 1960s and 1970s], when they receive blame for what goes wrong in society but are bypassed when people look for new ways to achieve social identity and location.”16 The only segment that benefited from this trend, or at the very least was unhurt, was the burgeoning nondenominational Protestant segment, most famously manifesting itself in recent decades with the establishment of new, unorthodox churches that reached people within the context of their radically changing lives. The 1960s and the response (or lack thereof) of traditional American churches prove a testament to the importance of innovation in a competitive market. In fact, almost every one of the nation’s most phenomenally expanding churches today prides itself on its uniquely developed methods of outreach, often tailored to the specific needs and culture of the population it begins around. Whether this includes a Starbucks or McDonald’s within the church or designated parking spots just for churchgoers with motorcycles, these megachurches recognize and respond to a need to “change with the times” and, by making concessions in environment and style of worship for the comfort of attendees, they hope to simultaneously sustain interest in and affinity for religion, prevent the alienation of the general public, and perhaps attract even more members by gradually breaking down the barriers that keep formerly “unreligious” people from stepping into an intimidating or formal church atmosphere.
A Comparison of Religion in Britain with its American Counterpart: The Secularization Thesis
In contrast to the free market American religion we’ve just analyzed, “there is ample evidence that in societies with putative monopoly faiths, religious indifference—not piety—is rife.”17 While within the largest example of this—Europe—there are exceptions to the rule, like Italy and Ireland, both of which exhibit levels of religious involvement and participation almost as high as those in the United States, most European countries very strongly support this hypothesis—I will center on Britain.18 Admittedly, after
consideration, the British religious environment seems counterintuitive—as did the coupling of growth and pluralism in the United States. Why, then, does this paradox occur at all? There are a few primary problems faced by any religious monopoly that Adam Smith himself first unearthed and Whitefield later confirmed: firstly, a “single faith cannot shape its appeal to suit precisely the needs of one market segment without sacrificing its appeal to another.” Therefore, it lacks the ability to mobilize massive commitment because of its intrinsically smaller “customer” base, a structural explanation for the decreased religious demand that we later see.19 In contrast to the vast variety of churches in the US, the relative singularity of Christianity in Britain forces congregants to either conform to the only available option or choose not to attend altogether. Furthermore, simply and just as compellingly—“monopolies tend to be lazy.”20 But why, then, doesn’t the Church of England fear obsolescence? Smith notes in his famous Wealth of Nations, as he addressed state-sustained European churches, that, “in general, any religious sect, when it has once enjoyed for a century or two the security of a legal establishment, finds itself incapable of making any vigorous defense against any new sect which chooses to attack its doctrine or discipline.”21 And that monopolistic state support of a single Christian denomination has long afforded to that church a natural advantage in established resources that has thwarted competition. For centuries, the Anglican Church has remained exactly the same—stagnant in church hierarchy, theology, worship style, and even the dress of church leaders. Although, over the
years, it might have faltered in the face of serious competition, it simply hasn’t faced any—none has arisen because of the asymmetric base of power which supports Britain’s “official religion.” Without the multiple-denomination challenges omnipresent in American religious society, religion in Britain has settled into a pattern where there’s no need to continually reinvent or innovate, change with the times, or “exert themselves for the spiritual welfare of their respective congregations” as their American counterparts must.22 This stagnant, monopolistic supply is one rather quantitative depiction of the British and American religious “markets” that leads logically to both the conceptualization and the justification of recent scholars’ “secularization thesis.” Derived from painstaking analysis of years of Europe’s church attendance and religious adherence, the secularization hypothesis claims that Europe’s citizens are slowly but surely moving towards an utter lack of religious iconography and away from even the slightest semblance of religious presence in everyday life. This is evidenced by declining church attendance figures in recent decades and by national polls in which more and more Europeans—and Britons in particular—claim to have “no affiliation with religion.” It’s a widely believed hypothesis, one that leads famous and influential thinkers such as French family sociologist Martine Segalen to claim that European nations are becoming simply “post-religion societies.” Beyond just supply-side stagnation, I argue that many factors contribute to this secularization, one of which worries global religious leaders far more: actual demand-side decline, that “Europe’s religious institutions, actions, and consciousness have lost their social significance.”23 While some hold demand constant, naming supply variations as the origin of both European decline and American growth, others point to industrialization, urbanization, and a conglomerate of constant yet
gradual forces propelling the world into post-modernism and signaling the unavoidable downfall of religion. The view exists that the evolution of European society towards an embrace of science and modernity and towards a gradual sense of the “death of religion” represents the future of all societies across the globe—the ubiquitous progressiveness of Europe has simply led its society to become the first on the globe to literally not need religion. Inevitably, secularization is an “absorbing state—that once achieved, it is irreversible and institutionalized, instilling mystical immunity.”24 With this perspective, the now-thriving American churches can be explained by the idea of “American exceptionalism”—that the American deviation from this so-called norm is a “case of arrested development, whose evolution has been delayed” and, soon, religious demand here will decline just as it has in Europe.25 These theorists claim that the increasing lack of depth or substance in religious services illustrates a decline in public commitment to religion that foreshadows an imminent, rapid decay.
Britain: The True Anomaly?
For centuries since its founding, the Church of England enjoyed consistently high membership, attendance, and tithing from the citizens of Great Britain: this was the period where a traditional—arguably stagnant—church aligned with a similarly traditional society, enabling the majority of Britons to feel that the Anglican Church was relevant to their lives. The first half of the 20th century seemed no different—in fact, general enthusiasm and affection for the church reached an all-time high, and the Church, having remained essentially the same in almost every aspect since its beginnings, felt safe and comfortable with its position within the state. But the cultural shifts of the 1960s onward facilitated an ever-widening gap between British culture and religion that ultimately correlated with a sharp drop-off in church attendance. It was at that moment in a parallel timeline
when the differences between the monopolistic and competitive religious markets become truly noticeable: while in the US, decline of the mainline denominations encouraged the entry of new ‘firms’ into the market and spurred intense competition, realized in the form of changing styles of worship and ultimately in the megachurches of today, in the UK, the Church of England refused to evolve or adapt with society, leaving it an outlier as social change became more and more profound. It is thus in their response to the gradual secularization of society where American and British Christianity differ—and, facing bankruptcy from the government’s own fiscal crisis and rumors that the English church and state could finally be separated upon the ascent of Prince Charles as king, the Church of England has realized, finally, that it has reached a critical turning point, and that the next decade or two will determine whether it dies out completely or manages to adapt and live on.
The Metropolitan Cathedral in Liverpool, UK (photo by Nicole Overley)
Given their aversion to the most stereotypical aspects of American Christianity, I was surprised to find that the leaders of the Anglican Church seem to be gradually acknowledging the idea that American Christianity—or at least its innovative and competitive nature—is the way church is ‘supposed to be,’ and in doing so, they have fundamentally altered the very theology of the stagnant Church that has existed for the past 300 years. For all that time, the Church was viewed as something in society that wasn’t supposed to change—while society changed around it, it was supposed to be a rock or anchor, keeping people in line with ‘true’ spirituality, the way it was intended to be—much the same as the recently deteriorating mainline denominations in the United States have argued. But the competition inherent in American Christianity ensured that, despite the objections of a few, innovation, and not stasis, was the norm. Without that competition, a pathological stagnation was legitimized and justified as a positive trait in the Church of England’s clergy until very recently, the product of Adam Smith’s ‘lazy monopoly’ on religion because the Church simply didn’t need to reach out to the citizens.26 The downfall of that mentality, likewise, resulted from the clearly impending loss of their monopoly: accompanying the realization that change was necessary was the paired realization that church in Britain, since its first inception as the Church of England, had been acting directly contrary to not just the rest of Christendom but to the actual intentions of the Bible itself. Monopolistic Britain, not the
competitive religious market of the United States, has been the anomaly in Christian history. The very reason for the success of the early Christian church lay in its willingness and ability to adapt to each new society or culture it came in contact with, a mandate its leaders of the time viewed as both Biblical in origin and crucial for its survival. Among other things, this tradition explains why, as the church expanded, we celebrate Easter and Christmas when we do—not as arbitrary holidays, but as holy days centered around the original pagan festivals of the Roman Empire. State support of the church—which occurred with the establishment of the Anglican Church in the UK—gave that church such stability that it no longer needed to pay attention to the ways in which the world around it was changing. The central leadership has commissioned groups to implement modern ways of reaching out to its disillusioned populace, many of them patterned after successful American megachurches like Rick Warren’s Saddleback Church and the 20,000-member Willow Creek in Chicago, and with the intent of bringing about a total overhaul of the Church. Alice Morgan of the Church Mission Society in Oxford explained the theological shift from “maintenance to mission” and the importance of fully actualizing this change: the Church can’t deal with its potential ‘death’ by reaching out to the rest of the world—it has to reach inward.27 It is a problem illustrated by the simple fact that there are far more practicing Anglicans in
Nigeria than there are in Britain—this is because, when the Anglican Church first noticed decline at home, it seemed easier to transplant new churches in places that would be more easily accepting of them, rather than to change their domestic approach to church. While in the 1990s the Anglican Church tried unsuccessfully to buffer its decline by planting churches in areas that were perfect copies of the original successful ones, today’s new models focus on the importance of contextual, or ‘bottom-up,’ planting: they utilize community mission, where leaders are sent out into the world as individuals not to reinforce paradigms, but to draw in those around them, “combining scale with micromanagement.”28 Michael Moynagh, an Anglican advisor and academic at Oxford University, cites Neil Cole’s “Organic Church” model in Sheffield, which focuses on the importance of getting beyond the church fringe and planting churches amongst those with no religious background.29 The Mission-Shaped Church Report, published in 2004, and the following Report of Church Army’s Theology of Evangelism one year later detail these problems, solutions, and the overarching new direction for the Anglican Church. Fundamentally, these reports verbalize the unspoken knowledge that the UK has become “a foreign mission field”—re-evangelizing Britain is now a cross-cultural mission.30 They detail the Anglican Church’s acknowledgement that “church must reflect culture”—for how else, ultimately, can it connect to the people? Yet there still exists the danger of church becoming just culture—there must be a palpable underlying base of Christianity. They focus also on the “passive receivers’ problem”—the need to really involve churchgoers in the religious experience, making them committed to reach out to others and
perpetuate their faith, for without that aspect, “even the most radical movement conceivable becomes boring.”31 This method is summarized as the “incarnational church model,” recognizing that individual churches are most successful if they are founded by believers in their own way, as a natural outgrowth of the local culture and its needs—and if those churches continue to respond to how those needs change.32 Today, for the first time in its history, the Church of England faces the legitimate threat of obscurity. As the Church struggles to reinvent itself at this critical turning point, it has been forced to set aside many of its beliefs and assumptions about the way church is supposed to be. I am optimistic that, if the Church continues to embrace innovation and change in the decades to come, it might again capture the hearts and minds of British ‘consumers’ of religion.
References
1. Roger Finke and Rodney Stark, The Churching of America: Winners and Losers in Our Religious Economy (New Brunswick: Rutgers University Press, 1992), 18.
2. Ibid., 18.
3. Rodney Stark and Roger Finke, Acts of Faith: Explaining the Human Side of Religion (Berkeley: University of California Press, 2000), 38.
4. Ibid., 39.
5. Grace Davie, Religion in Britain Since 1945: Believing without Belonging (Oxford: Oxford University Press, 1994), 39.
6. Steve Bruce, “The Social Process of Secularization,” in The Blackwell Companion to the Sociology of Religion, ed. Richard K. Fenn (Malden: Blackwell Publishers, 2001), 252.
7. Finke and Stark, 1.
8. Finke and Stark, 17.
9. Davie, 46.
10. Finke and Stark, 49.
11. Ibid., 51.
12. Davie, 54.
13. Finke and Stark, 88.
14. Ibid., 96.
15. Ibid., 96.
16. Steven Pfaff, Growing Apart?: America and Europe in the 21st Century (Cambridge: Cambridge University Press, 2007), 246.
17. Finke and Stark, 19.
18. Robin Gill, “The Future of Religious Participation and Belief in Britain and Beyond,” in The Blackwell Companion to the Sociology of Religion, ed. Richard K. Fenn (Malden: Blackwell Publishers, 2001), 280.
19. Finke and Stark, 19.
20. Pfaff, 34.
21. Ibid., 52.
22. Finke and Stark, 19.
23. Ibid., 230.
24. Ibid., 230.
25. Ibid., 221.
26. Steve Hollinghurst, interview, Church Army Sheffield Centre, Sheffield, UK, August 20, 2009.
27. Alice Morgan, interview, Church Mission Society, Oxford, UK, August 12, 2009.
28. Ibid.
29. Alice Morgan, interview.
30. Steve Hollinghurst, interview.
31. Ibid.
32. Ibid.
engineering
Fractal Geometry and the New Image Compression
August Sodora / Staff Writer
Fractal image compression techniques, which have remained in obscurity for more than two decades, seek a way to represent images in terms of iterated functions which describe how parts of an image are similar to other parts. Images encoded in this way are resolution-independent: the information stored about an image can always be decoded at a prescribed level of detail, regardless of the size of the decoded image and without the usual scaling artifacts such as pixelation. The size of the resulting encoding is based on the encoding algorithm’s ability to exploit the self-similarity of the image, theoretically leading to more efficient encodings than traditional arithmetical methods, such as those used in the JPEG file format. Despite such advantages, fractal-based image formats have not gained widespread usage due to patent protection and the computational intensity of searching an image for self-similarity. Although decoding an image from a fractal-based format can be performed quickly enough for it to be a potentially suitable format for video playback, encoding an image takes considerably longer. Modern implementations, even with very sophisticated encoding algorithms, have not yet been able to demonstrate the ability to encode images quickly enough to make the technique viable for capturing video. Here, we take a different approach and simplify the encoding algorithm in order to explore the possibility that an execution environment where the algorithm can operate on different parts of the image simultaneously might enable faster image encoding. By reducing encoding time, such an implementation might help close the gap between encoding time and real time as described by the frame rate of American television (approx. 24Hz), and make fractal-based images feasible for video applications. We chose the graphics card as our execution environment, using it for general purpose computing through the OpenCL API. OpenCL is supported by most NVidia and ATI graphics cards less than three years old and has been ported between Windows and *nix, making it an accessible vehicle for computation.
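Since the rest of the article assumes a working OpenCL environment, a minimal host-side sketch may help readers unfamiliar with the API. The snippet below is not taken from the authors' encoder; it simply locates a GPU, creates a context and command queue with standard OpenCL 1.x calls, and prints the device name, with error checking omitted for brevity.

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err = CL_SUCCESS;

    // Grab the first platform and the first GPU device it exposes.
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // A context and command queue are the handles through which kernels
    // (such as a per-block search) would later be compiled and enqueued.
    cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    char name[256];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("Encoding would run on: %s\n", name);

    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}
```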
Applying Fractals to Image Compression
The term “fractal” was coined in 1975 by Benoit Mandelbrot, derived from the Latin fractus meaning fractured. Taken generally, it refers to a “shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole” [5]. A shape that has this property is said to be self-similar. Self-similar shapes can be defined using recursive functions, in which the shape appears in some form in its own definition. For example, WINE, the name of an implementation of the Win32 API for Linux, stands for WINE Is Not an Emulator. One can imagine substituting the expansion for the acronym WINE in itself infinitely many times (WINE Is Not an Emulator Is Not an Emulator…). Thus, the acronym for WINE is self-similar and has a recursive definition.
Another example of a recursive definition is that of the Fibonacci sequence. Each element of the Fibonacci sequence is expressed as the sum of the previous two elements, with the first two elements (referred to as the initial conditions) being zero and one respectively. Thus, the element following zero and one is one; the element following one and one is two; the element following one and two is three; the element following two and three is five, and so on, ad infinitum. In the context of fractals, the process of repeatedly applying a recursive definition on a set of initial conditions is termed an Iterated Function System (IFS). In fact, anything generated by an IFS is guaranteed to be recursively defined and thus self-similar. A famous example of a fractal generated by an IFS is the “Barnsley Fern,” which is created by iterating four linear functions over four points (Fig. 1). The generated “fern” demonstrates how a very small set of simple functions iterated on a few points can generate something with such organic detail. The example also indicates the sensitivity of the result to the initial conditions; compared to the few initial conditions specified, the amount of detail represented is enormous [1]. In a previous example, the acronym WINE was said to be self-similar and could thus be generated by an Iterated Function System. Note, however, that the result would go on forever. The “Barnsley Fern,” on the other hand, although capable of representing detail on an infinite scale, seems to progress toward a particular image with each
iteration. This is because the functions that describe the “Barnsley Fern” are contractive, meaning that with each application, the function converges to a result, or an image in this case, known as the fixed point. The fixed point is what one would ideally like to represent by the IFS. Consider the Fibonacci sequence again. If we take the ratio between each two consecutive numbers in the sequence, we find that as the numbers get larger, this ratio approaches the golden ratio. It may seem surprising that an operation iterated over an infinite sequence of numbers that themselves grow to infinity can converge on a particular value. The concept, however, is akin to that of a limit in mathematics; a function like 1/x approaches zero as x grows to infinity.
Figure 1: The “Barnsley Fern”
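To make the convergence of an IFS on its fixed point concrete, here is a small, self-contained C++ sketch that plots an approximation of the Barnsley Fern. It uses the commonly published fern coefficients and the random-iteration ("chaos game") method rather than the deterministic iteration described above; both approaches settle onto the same attractor.

```cpp
#include <cstdio>
#include <cstdlib>

// One affine map of the IFS: x' = a*x + b*y + e, y' = c*x + d*y + f,
// chosen on each step with probability p.
struct Map { double a, b, c, d, e, f, p; };

int main() {
    // Commonly published Barnsley Fern coefficients.
    const Map maps[4] = {
        { 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01 },
        { 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85 },
        { 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07 },
        {-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07 },
    };

    const int W = 78, H = 40;            // crude ASCII "canvas"
    static char canvas[H][W + 1];
    for (int r = 0; r < H; ++r) {
        for (int c = 0; c < W; ++c) canvas[r][c] = ' ';
        canvas[r][W] = '\0';
    }

    double x = 0.0, y = 0.0;
    for (int i = 0; i < 200000; ++i) {
        // Pick one of the four maps according to its probability.
        double r = (double)std::rand() / RAND_MAX, acc = 0.0;
        const Map* m = &maps[3];
        for (const Map& cand : maps) { acc += cand.p; if (r <= acc) { m = &cand; break; } }

        double nx = m->a * x + m->b * y + m->e;
        double ny = m->c * x + m->d * y + m->f;
        x = nx; y = ny;

        // The attractor lies roughly in x in [-2.2, 2.7] and y in [0, 10].
        int col = (int)((x + 2.5) / 5.0 * (W - 1));
        int row = (H - 1) - (int)(y / 10.0 * (H - 1));
        if (col >= 0 && col < W && row >= 0 && row < H) canvas[row][col] = '*';
    }

    for (int r = 0; r < H; ++r) std::puts(canvas[r]);
    return 0;
}
```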
The application of Iterated Function Systems to image compression comes from the idea that if we can find a set of functions which converge on a particular image, then perhaps we can represent the image solely in terms of the parameters of the functions and their initial conditions. If the functions can describe how each part of the image is self-similar to another part of the image, then as long as the functions are contractive we can regenerate the image simply by iterating them over a prescribed set of initial conditions. While it promises extremely high compression ratios, the IFS approach also poses a daunting problem: how to conduct a search for functions which will closely approximate an image. A simple way to approach the problem involves superimposing a square grid on the image and looking for functions which will take a portion of the image and make it look as close as possible to each cell in the grid. The encoding of an image then amounts to a set of functions which generate each cell in the grid from other parts of the image. There will be exactly one function for each cell, in order to ensure that the IFS as a whole will generate the entire source image [3]. We can simplify this arrangement even further by superimposing a second, coarser grid on the image, which we will use to restrict the parts of the image on which functions in the IFS can operate. The first grid will, for convenience, be defined to be twice as fine as the second grid; so if the first, fine grid contains 4 pixel by 4 pixel sections of the image, the second, coarse grid would contain 8 pixel by 8 pixel cells. The cells in the coarse grid are called domain blocks, while the cells in the fine grid are called range blocks. The functions that make up an IFS for an image will transform domain blocks into range blocks in such a way that they minimize the difference between the transformed domain block and the range block. In order to begin comparing domain blocks and range blocks in an effort to find self-similarity, domain blocks are contracted to the size of range blocks. The collection of contracted domain blocks is called the domain pool and gives us a finite and discrete set of image parts in which to search. The contraction process involves dividing each domain block into 2x2 cells and taking each pixel value in the contracted block to be the average of the pixel values in the corresponding 2x2 cell. Thus, each domain block is made to be a quarter of its original size, the same size as range blocks. Each contracted domain block can further be transformed to approximate a range block as closely as possible. We keep these transformations very simple, restricting them to changes in brightness and contrast. Changing the contrast is analogous to multiplying all the pixels in a block by a certain value, and changing the brightness is analogous to adding a certain value to all the pixels in a block. Technically, the contrast value must be less than one in order to ensure contractivity, but our experiments show that better results are obtained without this restriction. The last transformation is the implicit translation from the location of the domain block to that of the range block.
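As a minimal sketch of the two block operations just described, the fragment below assumes the 8-pixel-by-8-pixel domain blocks and 4-pixel-by-4-pixel range blocks used in the text: contraction averages each 2x2 cell of a domain block, and the adjustment scales by the contrast value and offsets by the brightness value. The type and function names are illustrative, not the authors'.

```cpp
#include <array>

// Pixel blocks stored row-major: an 8x8 domain block and a 4x4 range block.
using DomainBlock = std::array<float, 8 * 8>;
using RangeBlock  = std::array<float, 4 * 4>;

// Contract a domain block to range-block size by averaging each 2x2 cell.
RangeBlock contract(const DomainBlock& d) {
    RangeBlock out{};
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            out[y * 4 + x] = (d[(2 * y) * 8 + 2 * x]     + d[(2 * y) * 8 + 2 * x + 1] +
                              d[(2 * y + 1) * 8 + 2 * x] + d[(2 * y + 1) * 8 + 2 * x + 1]) / 4.0f;
    return out;
}

// Apply the per-block adjustment: contrast scales the pixels, brightness offsets them.
RangeBlock adjust(const RangeBlock& contracted, float contrast, float brightness) {
    RangeBlock out{};
    for (int i = 0; i < 16; ++i)
        out[i] = contrast * contracted[i] + brightness;
    return out;
}
```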
In total, a function in the IFS for an image is composed of a contraction, an adjustment in brightness and contrast, and a translation. There must be one function for every range block, and each function operates on a particular domain block, a value to adjust brightness, and a value to adjust contrast. In order to find the optimal function for a given range block, all we have to do is find the domain block which, when contracted and modified by particular brightness and contrast values, most closely approximates the range block [2]. An error function such as the root mean square (RMS) provides a means of determining how closely a transformed domain block approximates a corresponding range block. More precisely, in an image where each pixel can have a value between 0 and 255 inclusive, the error between two pixels could be defined as the root of the squared difference between the two pixel values. The RMS error between two equal-sized collections of pixels is simply the root of the averaged square difference between corresponding pixels. To find the best domain block to use to approximate a given range block, each domain block is transformed to the range block, and the RMS error between the range block and the transformed domain block is calculated.
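The error measure just described, and one standard closed-form way of choosing the contrast and brightness values that minimize it, can be sketched as follows; the least-squares fit shown here is a common realization of the "optimization techniques" referred to next, not necessarily the one used in the authors' encoder.

```cpp
#include <array>
#include <cmath>

using Block = std::array<float, 16>;   // a 4x4 block of pixel values

// Root-mean-square error between a transformed domain block and a range block.
float rmsError(const Block& a, const Block& b) {
    float sum = 0.0f;
    for (int i = 0; i < 16; ++i) {
        float diff = a[i] - b[i];
        sum += diff * diff;
    }
    return std::sqrt(sum / 16.0f);
}

// Least-squares choice of contrast s and brightness o so that s*d + o best matches r.
void fitContrastBrightness(const Block& d, const Block& r, float& s, float& o) {
    const float n = 16.0f;
    float sumD = 0, sumR = 0, sumDD = 0, sumDR = 0;
    for (int i = 0; i < 16; ++i) {
        sumD  += d[i];
        sumR  += r[i];
        sumDD += d[i] * d[i];
        sumDR += d[i] * r[i];
    }
    float denom = n * sumDD - sumD * sumD;
    s = (denom != 0.0f) ? (n * sumDR - sumD * sumR) / denom : 0.0f;
    o = (sumR - s * sumD) / n;
}
```

An exhaustive encoder would run this fit for every pairing of a range block with a contracted domain block and keep the pairing with the smallest RMS error.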
Optimization techniques can be used to determine the brightness and contrast adjustment values that minimize the error during transformation. Given these optimal transformation parameters, the domain block that produces the smallest error with the given range block is chosen as the domain block that will be used to generate that range block. In this manner, one can determine a set of functions to represent, or encode, a given image. The primary reason this technique for representing images has been intractable is the time it takes to complete an exhaustive search for the optimal domain block to use for each range block. Attempts to mitigate encoding time by classifying domain blocks and improving the heuristics of the search have been moderately successful, but none have achieved the efficiency required for capturing video [6,7,8,9]. The level of detail that can be represented, and the space-efficiency with which we can represent this detail, is intimately tied to the way we segment the image into domain and range blocks. More sophisticated segmentation schemes do not require that blocks be fixed in size, allowing large swathes of color to be represented tersely without sacrificing the ability to represent fine detail when necessary. The framework described in this article is kept simple so as not to obscure how implementing the algorithm on more capable hardware might affect the tractability of the problem. Decoding an image from its representation as an IFS is considerably easier. Recall that each function in the IFS corresponds to a range block in the original image. An arbitrarily sized, arbitrarily colored image is used as the starting point, making sure to adjust the sizes of the range and domain blocks so that the number of range blocks in the decoded image matches the number of functions saved in the encoding. Each range block in the decoded image can then be generated by applying its corresponding
function to the specified domain block. The application of the function consists of contracting the contents of the domain block, adjusting them by the brightness and contrast values, and then moving them to the range block. A single iteration of the decoding process is the application of all the functions in the IFS. After seven or eight iterations, the decoded image should resemble the original image within 0.1 dB of accuracy (Fig. 2). Note that increasing the iterations beyond this number does not significantly improve the quality of the decoded image.
Figure 2: The original image followed by seven iterations of decoding.
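A sketch of one decoding pass, under the same assumptions as the earlier fragments (the BlockMap fields and helper names are illustrative, not the authors' data structures). Reading from the previous image while writing to a second buffer makes the pass independent of the order in which the functions are applied, a property the next section relies on.

```cpp
#include <vector>

// One stored function of the IFS: which domain block feeds which range block,
// and the contrast/brightness adjustment applied on the way.
struct BlockMap {
    int domainX, domainY;   // top-left pixel of the 8x8 domain block
    int rangeX,  rangeY;    // top-left pixel of the 4x4 range block
    float contrast, brightness;
};

struct Image {
    int w = 0, h = 0;
    std::vector<float> px;                        // row-major pixel values
    float  at(int x, int y) const { return px[y * w + x]; }
    float& at(int x, int y)       { return px[y * w + x]; }
};

// Regenerate every range block of `next` from the blocks of `prev`.
// Running this seven or eight times, swapping prev and next between passes,
// converges toward the encoded image regardless of the starting image.
void decodeIteration(const std::vector<BlockMap>& maps, const Image& prev, Image& next) {
    for (const BlockMap& m : maps)
        for (int y = 0; y < 4; ++y)
            for (int x = 0; x < 4; ++x) {
                // Contract the 2x2 cell of the domain block that feeds this range pixel.
                float avg = (prev.at(m.domainX + 2 * x,     m.domainY + 2 * y) +
                             prev.at(m.domainX + 2 * x + 1, m.domainY + 2 * y) +
                             prev.at(m.domainX + 2 * x,     m.domainY + 2 * y + 1) +
                             prev.at(m.domainX + 2 * x + 1, m.domainY + 2 * y + 1)) / 4.0f;
                next.at(m.rangeX + x, m.rangeY + y) = m.contrast * avg + m.brightness;
            }
}
```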
From 0 to 60 in 40ms: Parallel Encoding on the Graphics Card
Over the course of the past decade, graphics hardware has become specialized and powerful enough to greatly exceed the capabilities of a central processing unit (CPU) for certain families of tasks. These tasks were initially very simple, such as performing vector and matrix operations or drawing geometric shapes, but eventually became quite complicated, aiding in intensive lighting and shading calculations. Recently, graphics processing units (GPUs) have been used for general and scientific computing tasks including predicting stock prices and running simulations. The hardware manufacturers have embraced this practice and now software exists that enables one to take advantage of a diverse set of functions. The appeal of computing on the GPU rather than the CPU lies in fundamental differences between the two architectures and the execution models which they support. A CPU and the way in which an operating system allows programs to use the CPU favors a serial execution environment, which means that only one program can be using the CPU at any given time. This ends up working well on desktops and other multitasking systems because
often a program will request data from disk or over the network, and during this time, control of the CPU can be freely given to another program that is not waiting on another device. Graphics applications, on the other hand, often involve doing the same operation on independent pieces of a larger set of data, such as pixels in an image. As a result, GPUs have hundreds of stream processors which are individually less powerful than CPUs and run the same program on many different pieces of data simultaneously. This execution environment is said to be parallel and is most suited to solving “parallelizable” problems involving data that is not interdependent. Both the encoding and decoding algorithms for a fractal image compression format have such parallelizable portions that operate on data which is independent. During the encoding process, for example, the search for the optimal function for one range block does not depend on the search for the optimal function of another range block. Hence, the parallel environment of the GPU allows for a simultaneous search for each range block’s optimal function. This expedites the process of exhaustively checking every combination of range and domain block and helps reduce the impact that the size of the image has on encoding time. The process of contracting domain blocks in both encoding and decoding is also parallelizable as each domain block is contracted independently of the others. Parallelizing the iteration of functions in the decoding process is slightly trickier. Note that when applying each function for each range block sequentially, we raise the possibility that a function for a later range block will end up operating on a domain block that was different at the start of the iteration. Because the functions converge to the image regardless of the order of their application during each iteration, it is only necessary that we ensure that an entire
iteration’s work is complete before proceeding to the next one. An additional level of parallelization can be created during encoding if we not only partition the exhaustive search by range block, but also allow each processor to operate on only a fraction of all the domain blocks. When all of the domain blocks have been searched for a given range block, the results can be reduced down to leave only the optimal result. Our serial encoder and decoder were implemented in C++ and ran on an Intel Core 2 Duo P7450 processor with each of its two cores clocked at 2.13GHz. The parallel versions were also implemented in C++ with the aid of the OpenCL library and ran on an NVidia 9600M GS. The results (Fig. 3) are very encouraging, and demonstrate that with the help of the GPU, fractal image compression can indeed be useful as a resolution-independent video capture format. As long as the encoding can be done in fewer than 40 ms (the frame rate of American television), then the size of the screen on which content is displayed ceases to matter. If a content provider like a cable company can deliver content encoded at a specific level of quality, then it should display correctly on a wide variety of displays, as long as they agree on a common decoding process.
Figure 3: A comparison of encoding on CPU vs. GPU
The usefulness of applying Iterated Function Systems to create resolution-independent representations for data relates to more than just images. The waveforms that make up sound and music can just as easily be searched for self-similarity by replacing contrast and brightness adjustments with Fourier transforms. It is perhaps not obvious what would result if a piece of audio were scaled to occupy a different amount of time, but the decoding algorithm would simply use the available information to attempt to fill in the gaps. The use of IFS in finding a source of patterns within even genetic code is being investigated [4]. No one knows what kind of interesting finds IFS will help detect, but an interesting way to find out might be to use IFS on itself, by analyzing the current body of fractal-inspired research for self-similarity between different techniques and applications, running the resulting iterated function system, and seeing where it converges. If only it were that easy!
References
1. Barnsley, Michael. (1988) Fractals Everywhere. Academic Press, Inc.
2. Fischer, Y. (1992) “Fractal Image Compression”. SIGGRAPH ’92 course notes.
3. Jacquin, A. E. (1992) Image Coding Based on a Fractal Theory of Iterated Contractive Image Transformations. IEEE Transactions on Image Processing, 1(1).
4. Lio, P. (2003) Wavelets in bioinformatics and computational biology: State of art and perspectives. Bioinformatics, 19(1), pp. 2-9.
5. Mandelbrot, B.B. (1982) The Fractal Geometry of Nature. W.H. Freeman and Company. ISBN 0-7167-1186-9.
6. Martínez, A. R., et al. (2003) Simple and Fast Fractal Image Compression. (391) Circuits, Signals, and Systems.
7. Truong, Trieu-Kien; Jeng, J. H. (2000) Fast classification method for fractal image compression. Proc. SPIE Vol. 4122, p. 190-193.
8. Wu Xianwei, et al. (2005) A fast fractal image encoding method based on intelligent search of standard deviation. Computers & Electrical Engineering 31(6), pp. 402-421.
9. Wu Xianwei, et al. (2005) Novel fractal image-encoding algorithm based on a full-binary-tree searchless iterated function system. Opt. Eng. 44, 107002.
Figures
Figure 1. The Barnsley Fern, courtesy Wikimedia Commons.
Figure 2. The original image followed by seven iterations of decoding.
Figure 3. Graphs comparing encoding time on the CPU and GPU.
engineering
Robotic Prosthetic Development:
The Advent of the DEKA Arm
Kyle Baker / Staff Writer
A Brief History of Prosthetic Advancements
The history of amputations dates back as early as Hippocrates in ancient Greece. Amputations have continuously been an element of war, but advancements in prosthetics to improve a soldier’s life post-war have not kept up with the times. Early United States efforts in limb prosthetics began around World War II. In a meeting at Northwestern University in 1945, a group of military personnel, surgeons, prosthetists and engineers collaborated to resolve what needed to be done in the field of limb prosthetics. This meeting resulted in the establishment of the Committee on Prosthetics Research and Development (CPRD), which directed endeavors in the field for over twenty-five years. Between 1946 and 1952, IBM developed some electrical limbs, but they were bulky and difficult for the user to operate. Another device, the Vaduz hand, developed in Germany after World War II, was a system ahead of its time. Prosthetics researcher Dudley Childress explained that “the hand used a unique controller in which a pneumatic bag inside the socket detected muscle bulge through pneumatic pressure, which in turn operated a switch-activated position servomechanism to close the voluntary-closing electric hand.” The most recent advancement in robotic prosthetics is the invention of the “Power Knees.” These innovative mechanical legs are operated by a motor that propels each leg forward. The legs work in tandem to keep the user at a constant speed. Last year, Josh Bleill, who lost his legs in a roadside bomb in Iraq, became the first person to operate the Power Knees in daily life. The development of advancements such as these and the fresh attitude toward improving prosthetics have helped make robotic prosthetics look more and more promising in recent years.
DARPA and the DEKA Arm Project
Since amputations are highly prevalent in war, it is only natural for the military to fund prosthetic research. DARPA (the Defense Advanced Research Projects Agency), which “is the same group that oversaw the creation of night vision, stealth aircraft, and GPS,” funded a $100 million Pentagon project called Revolutionizing Prosthetics to reform limb prosthetics, specifically upper limb prosthetics. In order for the arm to be effective, it must have “sensors for touch, temperature, vibration and proprioception, the ability to sense the position of the arm and hand relative to other parts of the body; power that will allow at least 24 h use; mechanical components that will provide strength and environmental tolerance (temperature, water, humidity, etc.); and sufficient durability to last for at least ten years.” To fulfill these aims, DARPA has been working in conjunction with the DEKA Research and Development Corporation to produce a robotic arm that is no bigger than a human arm and weighs no more than nine pounds. The head of
DEKA is Dean Kamen, who is also the inventor of the “Segway.” According to Colonel Dr. Geoffrey Ling, manager of DARPA’s program, when DARPA approached Kamen about this extraordinary goal, Kamen told them “the idea was crazy.” That did not stop Kamen and his team of 40 engineers. After one year, they came up with the DEKA Arm. Scott Pelley of 60 Minutes interviewed Ling, who remarked, “It is very much like a Manhattan Project at that scope. It is over a $100 million investment now. It involves well over 300 scientists, that is, engineers, neuroscientists, psychologists.” Unfortunately, the prosthetic designs available in the market today remain well behind the times. Currently, a hook developed during World War II that resembles the hook in Peter Pan is the most common option. Two major problems that advanced prosthetics currently pose and that the DEKA Arm must overcome are irritation around the shoulder and excessive weight that tires the users. The cost of purchase is a vital concern as well. Current estimates for the DEKA Arm are around $100,000, which may seem high, but is comparable to what current systems cost.
Figure 1: A man demonstrating the DEKA arm.
The basic approach utilized in the DEKA Arm design parallels Germany’s Vaduz prosthetic system. The DEKA Arm is mounted on the shoulder, and inside the shoulder apparatus (Figure 1) there are tiny balloons placed across the user’s shoulder. These balloons inflate when a muscle flexes and respond by sending signals to the processor inside the arm. In addition to flexing the shoulder, the users, with the help of a set of buttons placed in their shoes, can direct the arm to perform specific functions. In Figure 2, Fred Downs, the Veterans Affairs official in charge of prosthetics, uses his toes to control the grasping of a water bottle. Along with effective arm movements, one needs control of hand sensitivity. As Pelley asked in the interview, “How do you pick up something that you might crush?” To address this, the DEKA Arm incorporates vibration feedback to tell the user how tightly something is being grasped. The shoulder feels this vibration, and the vibration escalates upon a more intense grasp. This function is demonstrated in Figure 3 as Chuck Hildreth picks a grape and eats it. For the DEKA Arm, the next step is to attach the arm directly to the nervous system. The final device Revolutionizing Prosthetics desires is an arm controlled simply by the user’s thoughts. Ling explained that although the arm is gone, the nerves are not necessarily lost. In this case, the brain still has an effect on the movement of the shoulder. This direct connection allows the user to simply think about moving his arm to accomplish the task. Jonathan Kuniholm of Duke University (Figure 4) described that electrical impulses occurring in his arm—generated by merely thinking—allow the computer to make movements in the hand based on these impulses. Eventually, this system of neuron-controlled movements will replace the buttons in the user’s shoes, creating one continuous mechanical arm. An important issue that follows is whether one can learn to fully operate such a mechanical arm. Suppose, for example, that someone is born without either hand or arm. How is this person supposed to think about moving their limb when they have never done this in
their life? Although the nerves exist in the residual limb, the concept of moving something that is not there may seem abstract. To test this, Jacob Vogelstein and Robert Armiger of the Applied Physics Lab (APL) at Johns Hopkins University have developed a spinoff of the very popular video game Guitar Hero. Guitar Hero consists of five colored fret buttons and the strum control. The five colors cross the television screen and the user presses the color on the guitar while simultaneously strumming. APL’s “Air” Guitar Hero eliminates the strum, and so instead of using the standard guitar with its five colored fret buttons, the user simply “thinks” about pressing the colored buttons. This sends signals to electrodes that are placed on the user’s amputated limb. Kuniholm has been involved in this testing process. He has found that without any strumming, the electrodes recognize his muscle movements, and he can enjoy the game like any other user. The goal of Air Guitar Hero, said Armiger, “is to motivate users to practice pattern based signal classification.” Obviously, sensitivity of movements is a major concern, and Armiger and Vogelstein hope to surmount the issue using this game. Continual practice with the game forces the player to think about moving his fingers, which in turn assists the program by calibrating the system. This helps users adapt to a novel robotic process that they may end up using when the prosthetic arm becomes available. According to Armiger, Air Guitar Hero “provides a fun way for users to practice finger control while simultaneously providing ‘training’ data for pattern recognition algorithms. This helps the user learn to control the system and improves the way the system interprets the user’s inputs.” Armiger hopes his research and Air Guitar Hero “will attract bright students” to ultimately refine prosthetic control.
The Big Picture: DEKA Arm and the General Public
Although DARPA’s massive investment of $100 million comes at the expense of taxpayers, many people such as Ling maintain that this technology is absolutely necessary to honor the nation’s commitment to its soldiers. For concerned members of society, Ling reassures that this program is “not a classified, military weapons system. This is an advancement in medical technology.” As a result, other universities and companies such as Johns Hopkins and DEKA can collaborate to develop the most effective solution. Though the DEKA Arm is not available to the public just yet, it is currently being tested at the Department of Veterans Affairs. At this point, the first recipients of the DEKA Arm would be the nearly 200 amputees from Iraq and Afghanistan. The end goal, however, is that the general public will see their money returned to them in the form of the innovative DEKA Arm. In an attempt to increase transparency and cooperation, Kuniholm founded the Open Prosthetics Project which is an
open-source web site "that aims to make prosthetic-arm technology as open source and collaborative as Linux and Firefox." DARPA and Johns Hopkins are on the same page as Kuniholm; they too want the hardware and software behind the research to be "open source so that prosthetic-arm research innovation can evolve organically." Clearly, the engineers and scientists involved understand that this is a massive team effort, not a competition among companies and universities. Collaboration will speed the work along and, in the process, produce the most advanced prosthetic possible. Veterans dominate the discussion of amputation, but that is largely a result of media coverage and research grants; they are certainly not the only people who could use a prosthetic arm. Accidents, disease, birth defects, and war all cause loss of limb, and in fact most amputations are "related to work-related civilian trauma." The DEKA Arm may seem like an unnecessary gadget to those unlikely to receive any immediate benefit from it, but the technology is ultimately in everyone's interest. Since the project comes at the taxpayer's expense and the money is not directly returned to them, the current generation might ask, "Why are we spending so much money on this?" They should consider that this innovation could one day improve the life of their own child; that is why the research is being conducted now. According to DARPA, the technologies being developed are expected to be "readily adaptable to lower-extremity amputees [with] civilian amputees [benefitting] as well as amputee soldiers." Currently, hooks developed around World War II are still being used in place of amputated hands. This is simply unacceptable. The United States has the ability to revolutionize this outdated technology, and it is taking important strides forward. With clinical testing now under way, the DEKA Arm and robotic prosthetics will be available to amputees in the foreseeable future. The DEKA Arm appears very promising, especially with the open-source approach its leading developers have elected to use. Furthermore, with increasing grant
support, the DEKA Arm will likely become the top model for all robotic prosthetics.

References
1. Adee, Sally. "For those without hands, there's Air Guitar Hero." IEEE Spectrum (2008). 12 November 2009 <http://spectrum.ieee.org/consumer-electronics/gaming/for-those-without-hands-theres-air-guitar-hero>.
2. Answers.com. 2009. Answers Corporation. 15 November 2009 <http://www.answers.com>.
3. Armiger, Robert. Email interview. 2 December 2009.
4. Bogue, Robert. "Exoskeletons and robotic prosthetics: a review of recent developments." Industrial Robot: An International Journal 5 (2009): 421-7. 14 November 2009 <http://www.emeraldinsight.com/Insight/viewContentItem.do?contentType=Article&contentId=1806007>.
5. Childress, Dudley S. "Historical aspects of powered limb prostheses." Clinical Prosthetics and Orthotics 9(1): 2-13, 1985.
6. Ellison, Jesse. "A New Grip on Life." Newsweek (2008). 14 November 2009 <http://www.newsweek.com/id/172566>.
7. Graham, Flora. "Disability no barrier to gaming." BBC News 12 March 2009. 16 November 2009 <http://news.bbc.co.uk/2/hi/technology/7935336.stm>.
8. Meier, R. H., and D. J. Atkins, eds. Functional Restoration of Adults and Children with Upper Extremity Amputation. New York: Demos Medical Publishing, 2004. 1-7.
9. Pelley, Scott. "The Pentagon's Bionic Arm." 60 Minutes 20 September 2009: 1-4. Web. 29 October 2009.
10. Pope, David. "DARPA Prosthetics Programs Seek Natural Upper Limb." Neurotech Reports. 14 November 2009 <http://www.neurotechreports.com/pages/darpaprosthetics.html>.
11. United States. DARPA. Revolutionizing Prosthetics Program. February 2008. 16 November 2009 <http://www.darpa.mil/Docs/prosthetics_f_s3_200807180945042.pdf>.
can you see yourself in hurj? share your research! now accepting submissions for our fall 2010 issue. focus: humanities -- science -- spotlight -- engineering
we thank you for reading!
For over half a century, the Institute for Defense Analyses has been successfully pursuing its mission to bring analytic objectivity and understanding to complex issues of national security. IDA is a not-for-profit corporation that provides scientific, technical and analytical studies to the Office of the Secretary of Defense, the Joint Chiefs of Staff, the Unified Commands and Defense Agencies as well as to the President’s Office of Science and Technology Policy. To the right individual, IDA offers the opportunity to have a major impact on key national programs while working on fascinating technical issues.
broaden your perspective: your career • your future • your nation
IDA is seeking highly qualified individuals with PhD or MS degrees in:

Sciences & Math: Astronomy, Atmospheric, Biology, Chemistry, Environmental, Physics, Pure & Applied Mathematics

Engineering: Aeronautical, Astronautical, Biomedical, Chemical, Electrical, Materials, Mechanical, Systems

Other: Bioinformatics, Computational Science, Computer Science, Economics, Information Technology, Operations Research, Statistics, Technology Policy
Along with competitive salaries, IDA provides excellent benefits, including comprehensive health insurance, paid holidays, three-week vacations, and more, all in a professional and technically vibrant environment. Applicants will be subject to a security investigation and must meet eligibility requirements for access to classified information. U.S. citizenship is required. IDA is proud to be an equal opportunity employer. Please visit our website, www.ida.org, for more information on our opportunities. Please submit applications to: http://www.ida.org/careers.php
Institute for Defense Analyses 4850 Mark Center Drive Alexandria, VA 22311