HURJ Hopkins Undergraduate Research Journal
Fall 2010 | Issue 12
from genes to society
how genetic research is affecting society at large
a letter from the editors
We live in a world of data. Companies like Google embody the spirit of the time, one increasingly aware of and dependent on the collection and distribution of all kinds of information. Few fields have benefited from this machinery of data collection as greatly as genetics, most visibly in the completion of the Human Genome Project. Yet that project is only one, if perhaps the most prominent, in a series of successful endeavors to understand our biology at its most fundamental levels. The true test of all this genetic data will come when it is translated into meaningful medical treatments. At the same time, such information has reached a previously unknown level of intimacy, since the chemical basis of any individual can now be determined fairly quickly. The impact of this kind of knowledge reaches far beyond the laboratory, raising crucial ethical questions even as it promises remarkable medical cures.

This issue of the Hopkins Undergraduate Research Journal tackles these questions head-on with five focus articles that explore new research in the field and the questions it raises. One piece discusses the latest research in epigenetics and the kinds of therapy it offers, as well as the obstacles facing gene therapy. Another surveys the personal genomic sequencing options available today, while a third addresses the ethical questions a couple might face in considering conceiving a genetically selected child to save the ailing health of its elder sibling. The fifth article analyzes genetics at the intersection of science and policy and calls for new regulation to ensure that personal freedoms do not become collateral damage of unchecked discovery. Humanities and science research is also presented, highlighting some of the most interesting work undertaken by Hopkins undergraduates.
We would like to thank all the students who have contributed their research to this twelfth issue of HURJ. It is their dedication to their respective subjects that creates the thoughtful work we are privileged to publish. Thanks also to our excellent staff, whose care and guidance brought each of these articles from proposal to print.

Regards,
Johnson Ukken Editor-in-Chief, Content
Paige Robson Editor-in-Chief, Layout
table of contents fall 2010

focus: from genes to society
12 Why Genetic Regulation is Urgently Needed  Paul Grossinger
15 Pre-Implantation Genetic Diagnosis - Hopes & Fears  Maha Haqqani
19 Prospects of Nanoparticle Gene Delivery  Deng Pan
23 Epigenetics: Heritable Regulation of Gene Expression  Dong Kim
26 The Era of the Personal Genome  Anne Kirwan
table of contents

spotlights on research
8 German Health Care: The Importance of Regulation in an Insurance-Based Healthcare System  Calvin Price
9 Finding Social Science Research: What Hopkins Needs to Do for Social Science Undergraduates  Isaac Jilbert
10 Building Your Foundation with a Master’s Degree  Nezar Alsaeedi
11 Atomization and Tradition: A Call for Integration  Zachary Yerushalmi

humanities
29 The Importance of Applying Rule-Based Law to the International Legal System  Anisha Singh
32 President Woodrow Wilson’s Western Tour of 1919: The Formation of Wilsonian Foreign Policy and its Effects on Current International Relations  Wallace Feng
35 Accessibility and Affordability: Redefining Care through the Refugee Experience  Anna Wherry

science & engineering reports
38 Production of a p63 Reporter to Interrogate the Self-Renewal of Prostate Stem Cells  Camille Soroudi
41 Quantitative Analysis of Extinct and Extant Crocodyliform Dental Morphology to Understand Underlying Evolutionary Patterns  Jessica Noviello
44 NMDA Receptor Subtype Expression in Oligodendrocyte Development and the Effect of Neuregulin on Subtype Expression in O4 Oligodendrocytes Linked to Patients with Schizophrenia  Robert Martin
47 Aging Patterns and Sexual Dimorphism in Bone Remodeling within an English Middle Iron Age Population  Trang Diem Vu
50 Robotic Tunneling Worm for Operation in Harsh Environments  Blaze Sanders
hurj 2010-2011: hurj’s editorial board

Editor-in-Chief, Content: Johnson Ukken
Editor-in-Chief, Layout: Paige Robson
Content Editors: Leela Chakravarti, Mary Han, Isaac Jilbert, Andi Shahu, Michaela Vaporis
Layout Editors: Kelly Chuang, Sanjit Datta, Edward Kim, Lay Kodama, Sydney Resnik
Chief Copy Editor: Haley Deutsch
Copy Editors: Paul Grossinger, Kiran Parasher, Jessica Stuyvenberg
PR/Advertising: Javaneh Jabbari
Photographer/Graphic Design: Sarah Frank
hurj’s writing staff: Nezar Alsaeedi, Wallace Feng, Paul Grossinger, Maha Haqqani, Isaac Jilbert, Dong Kim, Anne Kirwan, Robert Martin, Jessica Noviello, Deng Pan, Calvin Price, Blaze Sanders, Anisha Singh, Camille Soroudi, Trang Ngoc Diem Vu, Anna Wherry, Zachary Yerushalmi

about hurj:
The Hopkins Undergraduate Research Journal provides undergraduates with a valuable resource to access research being done by their peers and interesting current issues. The journal comprises five sections: a main focus topic, spotlights, and current research in engineering, humanities, and science. Students are highly encouraged to submit their original work.

disclaimer: The views expressed in this publication are those of the authors and do not constitute the opinion of the Hopkins Undergraduate Research Journal.
contact us:
Hopkins Undergraduate Research Journal
Mattin Center, Suite 210
3400 N Charles St
Baltimore, MD 21218
hurj@jhu.edu
http://www.jhu.edu/hurj
hurj fall 2010: issue 12
can you see yourself in hurj? share your research!
now accepting submissions for our spring 2011 issue
focus -- humanities -- science -- spotlight -- engineering
contact hurj@jhu.edu
spotlight
German Health Care:
The Importance of Regulation in an Insurance-Based Healthcare System
Calvin Price, Class of 2012
International Studies

With America’s recent healthcare reform, its entire healthcare system is in a state of flux. The movement toward universal coverage and tighter regulation of insurance companies, without a public insurance option or publicly funded health care, most closely resembles the German model. Germany has the oldest universal healthcare system in the world, dating from Otto von Bismarck’s Social Legislation in the 1880s. [1] The system is, however, different from the usual American vision of European healthcare. Germany still has an insurance-based system, more like that of the US than those of countries such as Great Britain or Norway.

Still, there are major differences between the American and German healthcare systems, starting with how insurance is paid for. Germans pay roughly 8 percent of their income to a nonprofit, but also non-government, insurance company (called a Krankenkasse, literally “sickness fund”) of their choice, with their employers paying about the same amount. [2] Naturally, this means that wealthier Germans pay more for their insurance than poorer Germans do, and it is at this point that the German model starts to look more “European.” It seems even more so once benefits are discussed. The German sickness funds face much stricter regulation than American insurance companies do, meaning that they are forced to provide more benefits without making patients deal with as much red tape as the American system. For example, the complete lack of deductibles, combined with low co-payments, means that Germans can get treatment whenever they feel they need it. [3] Even with this system, Germany spends about 11 percent of its GDP
on health care, while in the US this number is closer to 16 percent. [4]

Another interesting aspect of the German health care system is that it provides its citizens with a degree of choice. While having insurance is mandatory, paying into the sickness funds is not. Germans can also choose private, for-profit health insurance, which tends to provide better benefits, but whose price is also determined by how healthy the individual is, unlike the sickness funds, which cannot discriminate based on health status. [5] Most Germans still choose the sickness funds. The combination of affordability, good coverage, and a trust in government not seen in the US makes these funds good options for Germans of all income levels. [6]

Although the American and German health care systems have major differences, the Patient Protection and Affordable Care Act (commonly known as “Obamacare”) seems to be a move to greatly increase the similarity between the two systems. In particular, the extra regulation of the American insurance industry, with provisions such as those prohibiting discrimination based on preexisting medical conditions, closely mirrors the German system. While US health insurance will remain a wholly private operation, the increased level of government supervision of the insurance industry marks a surprising change for a country that often seems to fear its own government. Further, although US insurance premiums are still not set as a percentage of income, extra tax credits for poorer families are an attempt to, as the Germans say about their health care system, “leave no man behind.” Our lack of a storied history like Germany’s is evident in the widespread dissent against health care reform in the US.
Maybe in 125 years, the citizens of the United States will feel as content with their universal health care as the Germans do today, but until then, we will have to wait and see whether America is really ready to follow a more European solution to a critical social issue.

References
[1] Holborn, Hajo. A History of Modern Germany: 1840-1945. Princeton UP, 1969. pp. 291-93.
[2] Knox, Richard. “Most Patients Happy with German Health Care.” NPR, July 3, 2008.
[3] Reinhardt, Uwe. “Health Reform without a Public Plan: The German Model.” The New York Times Economix, April 17, 2009. http://economix.blogs.nytimes.com/2009/04/17/health-reform-without-a-public-plan-the-german-model/
[4] Borger C, Smith S, Truffer C, et al. “Health Spending Projections through 2015: Changes on the Horizon.” Health Aff (Millwood), 2006.
[5] Gesetzliche Krankenkassen. http://www.finanztip.de/web/abc-der-krankenkassen/
[6] Knox, ibid.
Per capita government expenditure on health at average exchange rate (US$) 1995-2006. Courtesy of Dionisis Orkoulas-Razis.
Finding Social Science Research: What Hopkins Needs to Do for Social Science Undergraduates

Isaac Jilbert, Class of 2012
International Studies

Johns Hopkins, one of the world’s great bastions of the scientific method, has been the laboratory of the world since its inception. As a university, we have pioneered new ideas, made countless medical advances, and improved the lives of millions through an unwavering dedication to research and an unrivaled passion to understand and improve the world in which we live. Our graduate programs, building on the legacy of our university, continue to be at the cutting edge of research, whether in medicine, chemistry, physics, or engineering. Our research is the pride of our university, and professors, graduates, and undergraduates alike flock to the heart of Baltimore to contribute to that legacy.

Daily, there are students around me scurrying off to their labs at the hospital or to work with graduate students on developing some new technique or discovering something revolutionary in their fields. There is a wealth of opportunities available to those in the natural sciences and medicine. Undoubtedly, this stems from our legacy as home to one of the greatest hospitals and medical schools in the world, as well as our reputation as a “boot camp” for pre-meds. However, the sad truth is that some undergraduates still feel left behind. There is no shame in promoting the natural sciences and medicine, but what of the other disciplines? It often feels as if the University and its administration forget how the social sciences live. Undoubtedly, there are amazing, world-renowned professors and highly qualified graduate students in our social science departments. They conduct some of the most respected research in their fields, but still, Hopkins is more focused on the natural sciences and medicine.
When I say I go to Hopkins, people’s first reaction is to assume I am interested in becoming a doctor, because our reputation is so tied to our hospital. At a university with nine different schools, seven of which are professional, the undergraduate community can feel lost at times, especially social scientists. Our social science departments, while formidable, are small compared to the same departments at other universities. The size of our economics department pales in comparison to Harvard’s, and our sociology department is comparably minuscule. Those who really lose out in this situation are the undergraduates, who cannot take advantage of the resources that SAIS or Krieger-based graduate
schools have to offer, because Hopkins is so decentralized. Undergraduates in these disciplines particularly lose out when it comes to research, as it is often difficult to find research positions in the social sciences. Undergraduate pre-medical students can easily find positions running tests and doing research in one of the seemingly infinite number of labs that Hopkins possesses, but in the social sciences there is a relative dearth of opportunity. It may just be the nature of our research, but joining a professor’s research project as an assistant is rare, often involving much begging, as undergraduates are passed over in favor of graduate students.

Social science undergraduates need more access to research. We came to Hopkins to participate in such research, only to find that these opportunities are often elusive; to fulfill the mission of our university, research must pervade all parts of our community. There are relatively easy ways that we, as a university, can improve this situation. The administration should encourage professors and researchers in the social sciences to take on undergraduates by raising these concerns with faculty and graduate students. It may even be useful to develop programs in which undergraduates help graduate students with their independent research and papers being prepared for publication. It is not for lack of willingness that undergraduates are not involved in social science research; professors are often too immersed in their own projects to see students as a benefit rather than a hindrance. Undergraduates are here to learn, and we want to help. In addition, why not have a website where research opportunities can be posted? There is a website for job openings, but why not one for research, especially since students will often do research for free?
The university could provide a forum which would streamline the hiring process, thus encouraging professors to look for more undergraduate help. There are many positive steps that can be taken, and the university needs to take them. We came here as undergraduates to learn not just the basics of reading and writing, but to develop the skills needed to succeed out in the world. Research provides a valuable learning experience that is not always achieved in a classroom, and Hopkins needs to get its undergraduate social science students involved. If nothing else, it is important to increase research opportunities in social sciences just to stay true to our mission as a university.
Building Your Foundation with a Master’s Degree

Nezar Alsaeedi, Master’s Candidate
Molecular and Cellular Biology

In today’s competitive world, education is the currency by which most of us communicate. Your education determines your job placement, the way you are perceived among colleagues, and your contribution to the improvement of human civilization. However, a worldwide shift has recently deemed a bachelor’s degree insufficient to guarantee success in the working world. Gone are the days of securing a job straight out of college, a notion that predominated for much of the past century. Such old attitudes are quickly washed away by the results-driven and ultra-competitive society we live in today. Yet this is not to say that the competitiveness of today’s world has created a survivalist culture, where the strong triumph and the weak dwindle. Rather, I suggest an alternative notion, in which the circumstances of today’s workforce push applicants and students to search for their own unique qualities and strengths to set them apart from the crowd.

It is up to the scholar to discover his true potential, but it is the responsibility of the institution to guide him. This relationship between student and university is clearly exemplified by the many master’s programs that Johns Hopkins University provides its students. These programs cultivate a culture of independent thinking and innovation that is a testament to this university’s legacy. In addition, students enrolled in these programs have a chance to experiment with different learning styles and build an academic foundation that reaps rewards in the real world. Hopkins’ academic structure is one to be emulated, as it encourages student creativity and gives students the
necessary tools to face the world we live in.

One of the benefits of a master’s program at Hopkins is the opportunity it gives students to learn new skills. Take the master’s degree in Cellular and Molecular Biology, for example. This program requires its students to teach an introductory biology laboratory course to underclassmen. This requirement not only has master’s students deliver scientifically rigorous information in the form of clear and concise instruction, but also gives them an opportunity to review principal concepts and practice the public interaction skills that are a necessary part of any scientific career. Moreover, through this students-teaching-students approach, Hopkins fosters a culture of collaboration and teamwork within its student body. It opens avenues of idea exchange and gives its students a chance to learn from their peers and develop leadership skills.

One of Johns Hopkins University’s main goals is to create leaders out of its students by encouraging them to think independently and innovatively. Most master’s degrees at Hopkins are thesis-based: students must submit a culmination of their research and present their findings to receive their degree. This fifth year of academics represents a different type of learning from the routine classroom lectures or textbook learning that characterized much of the four-year undergraduate education. Research inspires the curiosity of the student, making him hungry for solutions to some of science’s unsolved mysteries. It affords the student the time flexibility of a basic scientist, but at the same time encourages the scholar to take classes specific to his field of interest and to design his own curriculum to advance the goals of his independent project. More importantly, it pushes the student to become an authority in a particular field, able to hold conversations with senior scientists and lecturers in the area of interest.
The student becomes part of a scholarly community, contributing his own creative touches to the canvas of scientific discovery. Interestingly, many graduate schools, including medical schools, have begun to appreciate the importance of student creativity and to integrate such thesis-oriented education into their respective curricula. Hopkins’ master’s programs have continuously produced stellar students who meet the needs of these schools, providing their graduates with a starting point for their future careers. Oftentimes, students who started a project as master’s students at Johns Hopkins continue working on their topics throughout medical school, earning them more credibility and authority in their scientific topics of interest. Other times, the skill set learned from a master’s degree at Hopkins changes the direction of one’s goals, expanding one’s aims from merely practicing science, for example, to researching and teaching it. Most importantly, the master’s degree offered by Johns Hopkins imparts to its students not only the confidence to search for the truth behind life’s many mysteries but also the intrepidness to brave the real world armed with a valuable education.
Atomization and Tradition: A Call for Institutional Integration

Zachary Yerushalmi, Class of 2011
Economics

It is early September again, nighttime to be exact, and Charm City is that same-old two degrees above comfortable that it has been since you got here as a freshman. But you are a senior now, older, wiser, on the cusp of a mystical journey of unemployment. Being that time of year, you and your mates do what generations have done before you and head to the historic Lower Quad. It’s dark outside and memories of your freshman year flow back. You hold the candle with one hand instead of two, the aged daredevil you are. And just when you work up the nerve to say hello to that cute guy who was in your section in CIP freshman year, the new class of 20-whatever comes out.

You probably do not remember having this thought. And the reason why is probably because something created two years ago is not a tradition. Take Princeton, a place so stuffy and bound up in tradition they even know the history of each one. Why is it that every student that has ever graced its halls can, at the drop of a silver dollar, recite the school’s motto? Could the same be said about JHU? It’s ‘semper fi,’ right?

I hear a lot of people complain about a lot of things at Hopkins, a deficit of school spirit being merely one of them. But, and I do not mean to dismiss their claims, a deficiency in pep rallies is reflective of a deeper problem: institutionalism. Universities such as Hopkins are almost genetically predisposed to the disease of atomization, of diverging departments and schools. This problem is made more acute given that the only bottom line, understandably, is research. Individual branches of the school are predisposed to maximizing research efficiency within their own fiefdoms. It is in this pursuit that departments have lost scope and fail to utilize all the resources available to them. And it is in this process that the University loses the goodwill of its undergraduates and the possible pep rallies that may accompany it.

Now, I want to take a moment to qualify that last point. Every Hopkins student is tremendously lucky to have the wealth of opportunities afforded to them here, in the greater scheme of things and relative to other academic institutions in America itself. We are one of the foremost research institutions in the world, and we should cherish this legacy. However, where there is opportunity for improving this legacy, we have a scholarly obligation to take it. The Economics Department, of which I proudly count myself a part, has never provided me with so much as free punch, let alone a real chance to interact with fellow students and professors. It probably has to do with economies of scale. Yet why is it so rare for undergraduates, graduates, and professors to collaborate and discuss? Walls should not exist between different types of students, lest we lose out on the benefits. A willingness to exchange ideas and to learn from each other, not just a textbook, should be prized more. Why are there barriers that hinder independent research initiatives, collaborative work between schools, and the participation of undergraduates?

I spent the past year studying in Beijing and worked for a time in Shanghai and Xinyi. During that time, I participated in a grand total of one event hosted by the Hopkins community. The lack of gatherings represents a systemic problem. On paper, Hopkins should have a fantastic network in Asia, especially so in China. The SAIS Hopkins-Nanjing Center, started in 1984, was literally the first large-scale educational initiative between the People’s Republic of China and a US institution. It remains the largest to this day. Yet Hopkins undergraduates are unable to attend this school. During my time in China, neither did I get a single introduction to the community of students and alumni there, nor was I updated about Hopkins-related events. Is this the ‘norm’ for US institutions abroad? Just focusing on China, Yale, Harvard, Princeton, even Duke all have small, university-operated “campuses.” All leverage these institutions to integrate their students into the PRC and the surrounding alumni population. Though maybe most important of all, the students and alumni themselves reflect their school’s investment in them and do their best to give back, regardless of whether they are in Seattle or Shanghai. The school has the resources to give because we have amazing academic infrastructure. With greater integration and student investment in Hopkins, the university will find that it will receive in kind.

Hopkins is an amazing place. The University’s strength is derived from and relies on its institutional prowess in subject areas ranging from sociology to biomedical engineering, on campuses located across the globe. But this school, like many large academic institutions, has its peccadilloes. Chief among them are a lack of school spirit and inefficiently used resources. This piece merely posits that these two are in some way causally related. Hopkins suffers because there is a mismatch between the strength of existing institutions within the school and the opportunities afforded to students. Though the school may not command an endowment with the heft of its gilded peers, its global infrastructure has the capacity to give motivated students unparalleled opportunities to meet, work, and research with the best. Hopkins needs to do more to facilitate interaction between the students below and the overarching institution above.
focus
Why Genetic Regulation is Urgently Needed

Paul Grossinger, Class of 2011
Political Science
“God and man are of the same race, differing only in their degree of scientific advancement.” While many disagree with this old, unattributed Age of Enlightenment epigram, developing research in the field of DNA testing certainly lends it a degree of credence. After all, if humans are eventually able to identify, change, and eliminate parts of the human genome with impunity, to manipulate life itself, what truly separates us from the divine?

Although the human-divine relationship is an intriguing subject, this particular analysis does not deal in supernatural hyperbole. Instead, it focuses on the dichotomy between the great opportunities and the existential threats posed by this developing field. Certainly, the development of DNA testing and its increasingly widespread application present potentially limitless possibilities for the scientific community. However, while technological progress for the sake of science is always laudable, there are several major issues with the increased application of DNA testing. These issues, namely surreptitious testing, insurance issues, and the field’s assault on personal privacy, need to be understood before research in the DNA testing field develops further. Further application of DNA testing without understanding its potential effects could have profound consequences for the structure and function of our society.

However, before delving into the potentially disastrous consequences of DNA testing, it is important to acknowledge the field’s profound opportunities. The technology has given rise to scientific projects that quite recently would have been relegated to science fiction. Highly accurate genetic testing during pregnancy can uncover any genetic diseases or
defects a child will have, which allows parents to prepare more thoroughly for those eventualities. Additionally, fetal genetic testing can accurately determine parentage and, in the near future, scientists may perfect safe methods to modify the allele structures of an individual’s genetic makeup, effectively manipulating the appearance and characteristics of unborn children.

These incredible possibilities nonetheless have great potential for misuse. This is particularly true because the field of DNA testing remains highly under-regulated and generally misunderstood. The lack of either governmental or internal regulation of the field is of particularly great concern. Even within the sciences, the question “who regulates genetic testing?” is not easily answered. Several different federal organizations, from the Food and Drug Administration to the National Institutes of Health, play some role in the regulation of DNA testing or genetic prescreening, and these various agencies have yet to agree on a standard regulatory procedure for the field. Indeed, according to the independent Genetics and Public Policy Center: “Currently there is no uniform or comprehensive system to assess the analytic and clinical validity of tests before they are offered to patients, and there are no laboratory standards that specifically address molecular or biochemical genetic testing or require laboratories to enroll in proficiency testing programs that assess their ability to perform the tests correctly.” [1]

Essentially, although the science in the field is progressing, regulation is lagging well behind it. Moreover, there has been little effort to educate the public about the dangers of genetic testing and related operations. As a result, patients and consumers rarely understand the implications of the tests they choose to undergo, and the government cannot effectively protect them from the more dangerous and unproven procedures currently entering the market.
This lack of regulation is a serious concern because patients currently have no way to determine whether a specific procedure is spurious or unnecessary. Outside of the DNA testing field, the process of determining clinical and analytical validity and utility is clear and well regulated. Potential products, drugs, and procedures all go through a rigorous testing process and a series of progressive clinical trials overseen by the FDA, which gradually determines whether they fit a demonstrated scientific and consumer need, have a high degree of success, and are safe to use on human patients. For example, the cholesterol drug Lipitor was first synthesized by Bruce Roth in 1985, but it took over a decade of testing and clinical trials before the drug reached the market, and new indications were still being approved as late as 2005 [6]. This sophisticated, regulated trial testing is absent in the rapidly developing DNA field; according to the Genetics and Public Policy Center at Johns Hopkins, “The current oversight system does not ensure the analytic or clinical validity or the clinical utility of genetic tests.” [2] Federal regulators are developing new procedures, including the Centers for Disease Control and Prevention’s (CDC) Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative, which “seeks to establish and evaluate a systematic, evidence-based process for assessing genetic tests and other applications of genomic technology in transition from research to clinical and public health practice.” [3] However, while EGAPP may become a viable regulatory option in the future, it has not yet produced a full set of regulatory tests and trials, and its recommendations on the clinical and analytical validity and utility of products are neither easily accessible to consumers nor binding for laboratories and test manufacturers. Therefore, while the field is developing rapidly, federal regulation has been unable to keep up; consequently, a number of new, unproven, and potentially dangerous procedures are on the market that government agencies have not tested thoroughly enough to vouch for their safety.

Moreover, the regulatory problem extends beyond the boundaries of governmental scientific oversight and into the realm of basic legality. Specifically, these concerns deal with the illegal use of partially developed technology and with access to an individual’s genetic information. The combination of continuous technological development, widespread testing availability, and weak regulatory procedure has created an environment ripe for surreptitious DNA testing. These furtive genetic tests are now directly available to consumers (sometimes even over the internet) for purposes including genetic disorder screening, parentage and ancestry testing, and criminal background checks. Such tests, which require only minor procedures but yield crucial DNA data about the individuals in question, are often conducted by unregulated users. Though it is becoming increasingly widespread, covert DNA testing and its insidious consequences remain largely invisible to the average consumer except in the area of law enforcement. Twenty-first-century police routinely use DNA testing technology to solve crimes, and it is common knowledge – and familiar, sometimes accurate, fodder for television crime dramas – that police also conduct surreptitious tests in order to close cases. The use of DNA testing technology through less than transparent means is a major concern for civil liberties advocates because, beyond law enforcement misuse, such testing can be redirected to discover all kinds of unrelated information about the individual in question. For example, DNA samples acquired by law enforcement are often outsourced to labs and, according to the Genetics and Public Policy Center, “some companies are willing and able to analyze DNA left on discarded items, such as chewing gum or used Q-tips. Assuming DNA can be extracted from the sample, a variety of analyses can be performed, from health-related testing to parentage determination. Such testing could lead to parentage disputes, lawsuits, and other accusations.” [4]

As a result, simple tests easily garner incredibly sensitive data, which suggests that highly stringent legal protections are needed to stop widespread exploitation of the technology. Nonetheless, few legal safeguards are currently in place; indeed, the only comprehensive federal law on the books detailing legal consequences for surreptitious testing is the Genetic Information Nondiscrimination Act of 2008. While the 2008 law gives consumers limited protection against DNA-based discrimination by insurance companies and the government, it provides no legal bulwark against the spread of simple testing kits or the sensitive information they often uncover. The current legal framework is therefore clearly inadequate, and unless it, along with the relevant government regulatory procedures, is thoroughly reworked, widespread surreptitious DNA testing will eventually threaten the basic structure of our society. That last point raises the question: how can poorly understood issues in an emerging and still largely invisible field fundamentally threaten the structure and function of our society as a whole? The answer is that, unless the legal framework and government regulatory structures concerning the DNA testing field are changed, the
widespread availability and inadvertent publicity of genetic information will change how companies hire and insure individuals, destroy the individual privacy that underpins our civil liberties, and quite possibly create what some political theorists have taken to calling a “genetic underclass.”

The increasing use of surreptitious testing could threaten the ability of a large percentage of consumers to obtain affordable health insurance. Today, insurers do not have access to an individual’s genetic information, so companies are essentially forced to make a bet on each client they insure. However, if people’s genetic information, particularly any data on their propensity for genetic diseases, were available to insurance companies, the temptation to use it would be difficult to resist. Currently, the Genetic Information Nondiscrimination Act prevents insurance companies from accessing genetic information through lab tests taken by the individual in question, but as DNA testing companies become increasingly able to legally (or semi-legally) test discarded items without explicit consent, the information from these tests is almost certain to find its way to insurers one way or another [5]. Considering that insurers would likely refuse affordable coverage to an individual known to have a propensity for major genetic diseases, this could leave many individuals uninsurable. Whether this development would constitute the creation of a “genetic underclass” is up for debate, but it would certainly expose genetic information and negatively affect how our society treats certain individuals.

However, while problems with regulation, extralegality, and insurance are all major issues in and of themselves, they collectively contribute to the key problem associated with the unchecked and unmonitored expansion of genetic testing technology: the erosion of cherished personal privacy and the essential civil liberties that underpin our society and values. Admittedly, there is little hard data to support this claim, since the field of DNA testing is still developing and has not yet affected daily life or entered the national consciousness. Nevertheless, the potential for this erosion of privacy and liberal values is real if the field remains so lightly regulated. Ultimately, DNA testing is an emerging field with enormous growth potential. While such enlightening contributions to science are to be applauded, these advances cannot be pursued blindly. Policymakers and ordinary citizens alike need to understand the potential implications of DNA testing and push for the field to be effectively regulated, because without effective regulation, surreptitious testing could erode our health coverage, our privacy, and our civil liberties.

References:
[1] “Who Regulates Genetic Testing,” compiled by Audrey Huang; www.DNApolicy.org
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Genetic Discrimination Issue Brief; www.DNApolicy.org
[6] www.drugs.com/.../lipitor-approved-reduce-strokes-heart-attacks-diabetics-1547.html
Preimplantation Genetic Diagnosis: Hopes and Fears
Maha Haqqani, Class of 2014, Chemistry
Preimplantation Genetic Diagnosis (PGD) is a clinical procedure that allows the screening of fertilized embryos and prefertilized oocytes for genetically inherited diseases. The development of PGD began in 1967, when Edwards and Gardner were able to determine the sex of rabbit embryos without harming the embryos in the process [1]. This set the foundation for the applications of PGD in humans. In 1989 a preimplantation diagnostic test for cystic fibrosis was set up by Coutelle et al., and in 1992 the first healthy girl free of cystic fibrosis was born following successful embryonic screening by Handyside et al. [2]. Over the past twenty years, the development of efficient Polymerase Chain Reaction (PCR) techniques has enabled more accurate, cheaper, and more wide-ranging preimplantation diagnoses. According to the Human Fertilisation and Embryology Authority in the UK, there are, as of October 2010, 159 conditions that can be tested for by PGD [3]. In recent years PGD has been regarded by some as a major scientific breakthrough holding great hope for the future, and by others as a highly controversial procedure. A technique initially designed to screen for genetic defects, PGD also has the potential to create “designer babies” – children specifically selected for superficial, aesthetically pleasing characteristics. When evaluating the benefits and risks of PGD, it is important to consider both the science behind the process and the ethical dilemmas associated with it.
PGD: The Scientific Process
There are three main components to preimplantation genetic diagnosis: obtaining an embryo by stimulating ovarian production of oocytes, taking a biopsy of the embryo, and carrying out a genetic analysis. In PGD, the procedure used to obtain embryos is the same as that used in in-vitro fertilization (IVF). Controlled ovarian stimulation using gonadotrophins leads to the development of follicles, after which hormones are used to trigger oocyte maturation. Transvaginal ultrasound-guided oocyte retrieval is carried out after around 36 hours. The oocytes are then transferred to a culture medium and fertilized. The day after retrieval, embryos are examined for normal fertilization, which is indicated by the presence of two pronuclei. Normally fertilized embryos are separated from abnormal or failed ones and returned to the culture for further development. A sample can be extracted at various developmental stages for biopsy and genetic analysis [4].

The biopsy stage of PGD is subject to significant ethical controversy. Three main biopsy methods are used – cleavage-stage biopsy, polar body biopsy, and blastocyst biopsy – and each raises problems of reliability, questions of morality, or both. Cleavage-stage biopsy involves removing one or two blastomeres (cells) from the early embryo for screening. An advantage of this method is that it is carried out at an early stage, soon after fertilization, so the cells can be screened for genetic conditions inherited from either the mother or the father. However, the results from one or two cells may not represent the embryo as a whole, as chromosomal mosaicism – a condition in which cells within the same organism have different genetic make-ups – is known to occur in cleavage-stage embryos. Another problem is that removing cells can disrupt embryonic development, defeating the purpose of embryonic screening in the first place. Moreover, since screening is carried out on cells from a fertilized embryo, many believe the process is morally wrong, as it involves selecting only one or two healthy embryos for implantation and disposing of the other healthy embryos along with those carrying the defective gene. Nevertheless, it remains the most popular screening method, used in 94% of all screenings.

Polar body biopsy makes use of the two small polar bodies produced during meiotic oogenesis. Polar bodies are by-products of meiosis that are not required for embryonic development, so their removal for screening does not harm the embryo. Another advantage is that because it screens cells before fertilization, the ethical issues associated with removing cells from fertilized embryos do not arise. However, polar body biopsy can be unreliable and inconclusive, as it can only screen for genetic defects passed on by the mother, since the unfertilized egg contains no DNA from the sperm. Despite these limitations, it is a safe and ethical alternative to other biopsy methods and has the potential to be developed further.

Blastocyst biopsy overcomes the main problem associated with cleavage-stage and polar body biopsies: the chance of an inaccurate or misleading genetic diagnosis due to the small amount of material available for screening. Using this method, a biopsy is taken later in development – normally on the fifth or sixth day after fertilization – when the embryo contains up to 300 cells. However, this also means that fewer blastocysts are available for screening. Another drawback is that taking the biopsy later in development leaves less time for diagnosis before implantation. The ethical concerns over blastocyst biopsy are similar to those for cleavage-stage biopsy, though possibly somewhat reduced, as removing a few cells from a more advanced embryo is less likely to disrupt development than removing cells during the 8-cell stage.

PGD Today
Since PGD can be a controversial procedure, the laws governing it vary across the world. In Germany, PGD is prohibited under the Embryo Protection Act of 1990 [5]. In the USA it is controlled privately or by state laws. Elsewhere, PGD is legal but controlled by regulatory authorities, such as the Human Fertilisation and Embryology Authority (HFEA) in the UK. An important aspect of PGD is deciding whether a condition is “serious” enough for screening to become a viable option. PGD is increasingly being used to test for a range of genetic defects and may in the future be developed to test for many more conditions. However, like any other reproductive technique, PGD has its risks, and when choosing whether to undergo the process, couples and clinicians need to weigh the potential benefits against the limited chances of success. Another factor to consider is the high cost of the process – up to $6,000 per cycle. In addition to these social considerations, it is important to understand the ethical implications of the process and consider individual situations and their outcomes. A crucial advantage of PGD over prenatal testing is that it is carried
out before a pregnancy is established, so the uncertainty and potentially disturbing psychological effects of testing during pregnancy, or of terminating a pregnancy after screening positive for a genetic disorder, are avoided. However, as with any assisted reproductive technique, PGD carries risks. Although the embryo biopsy allows genetic defects to be screened and identified, the biopsy itself poses a risk to future development. Moreover, PGD is a relatively new technique, so studies on the long-term effects of embryo biopsy on the development of children born using PGD have been limited. It is therefore important for couples and clinicians, when deciding whether PGD is the appropriate course of action, to consider how serious the genetic condition in question is and what the risk is of the child being born with it. Weighing the possible benefits against the potential risks is difficult, as perceptions of risk vary between individuals. The best course of action when making such a tough decision is for the couple and the clinical experts to weigh not only their personal convictions but also the practical viability of the procedure, even if it may
sometimes result in being denied the chance to “choose” a healthy child.

Ethical Dilemmas
As is often the case with new scientific technology, PGD opens up a plethora of ethical debates. The most pressing of these concerns the status of the human embryo. There is considerable debate regarding the point at which
a fetus can be regarded as a “person”. Some argue that life begins at the moment an egg is fertilized by a sperm, and that embryos should therefore be treated with the same respect, and accorded the same rights, as any child or adult. From this perspective, PGD is ethically unacceptable, because it involves selecting one suitable embryo for implantation and disposing of any unwanted embryos. Others, however, feel it is inappropriate to accord an embryo the same status as a living human being. They hold that embryos discarded following PGD carry less moral weight than a developing fetus terminated after prenatal diagnosis during pregnancy, and that a child who could be born free of a severe disease with the help of PGD counts for more than mere embryos. Another recent phenomenon in the ethics surrounding PGD is the
“savior sibling”, an embryo selected specifically for its ability to become an optimal donor for a child in the family with a serious illness. The most prominent case of savior sibling selection is that of the Hashmis, a family from Leeds, UK, whose 2-year-old son had beta-thalassemia. Their application to the HFEA for a license raised issues because their case involved selecting for a particular desirable trait, rather than against one affecting the potential child’s quality of life [6]. The case gained immense publicity and they were finally granted the license, although even after several attempts the procedure was not successful, ending in miscarriages. In a similar case, the Whitakers wished to create a savior sibling for their child, who suffered from Diamond-Blackfan anemia, but met a different fate: their request for a license was turned down by the HFEA in 2002. This decision was criticized as inconsistent with the ruling in the Hashmi case, and while the HFEA did not give specific reasons for its decision, it appeared to be a measure of caution. These two cases show how difficult it is for regulatory bodies to “draw the line” on a technology such as PGD.

Analyzing mutations
After biopsy, genetic screening for abnormalities is carried out. PGD can be used to diagnose genetic disorders of several types, including the following:
Single-gene disorders: genetic conditions that result from a single mutated gene, such as achondroplasia.
Numerical chromosome abnormalities: also known as aneuploidy; these occur when a chromosome is missing from a pair (monosomy) or when there are one or more additional chromosomes (trisomy, tetrasomy, etc.).
X-linked disorders: caused by mutations in genes on the X chromosome.
Chromosome translocations: abnormalities brought about by an exchange of segments between nonhomologous chromosomes.

The savior sibling itself is also subject to great criticism. One view holds that a child brought into the world to save another may not be valued as a child in its own right [7]. Many people also believe that selecting a savior sibling is akin to creating “designer babies”, since parents choose the child for a particular reason. However, it is unfair to consider the savior sibling and the designer baby in the same light. For one thing, creating savior siblings is not as expensive and resource-consuming as selecting for a baby with perfect facial features. Secondly, there is a significant moral difference between the two. The savior sibling is brought into the world for the purpose of doing good – saving an existing child’s life. The designer baby, on the other hand, is selected for superficial characteristics – the color of his hair or skin, for instance. There is no reason to think that the “savior sibling” will inevitably lead to the “made-to-order baby”; in fact, it has great potential to prevent the suffering, or even the deaths, of many existing children. However, it is becoming increasingly difficult for regulatory bodies to control and limit the use of PGD, and
one can only wonder where the misuse of this technology may lead.

PGD and the Future
With constantly developing technology and the increasing use of PGD, many questions have been raised about its implications for future generations. Two of the most pressing issues are the potential for PGD to yield designer babies and its possible impact on society. The “designer baby” is the most feared possible future outcome of PGD, and according to some the slide towards it has already begun. There are fears that the growing use of PGD will eventually lead to selection for desirable behavioral characteristics or even aesthetically pleasing features. Critics of PGD express concern over the increasing medicalization and dehumanization of reproduction, and they warn that children selected for particular characteristics will, in the future, become mere commodities in an increasingly consumerist society. If the use of PGD really
does extend to selecting for desirable qualities, then we would be making way for eugenics-driven generations holding negative stereotypes of disability, in which the selection of children free of certain conditions would have psychological and social effects on those already living with them. Those who lack the ideal physical or behavioral characteristics could be seen as inferior. Accepting people as they are, whether healthy or sick, attractive or plain, could become a thing of the past, and with it the individuality and sense of identity that make us human. So is PGD a hope or a threat? The debate will rage on, and it will be a long time before we know the future impact of PGD on society as we know it. It may be better to see PGD not as a possible way of creating a superficial, dehumanized society, but as a technology with the potential to make the lives of many children more human. However, it also has the potential to be greatly misused, and until the effects of PGD are allowed to play out across time, researchers, parents, and regulators must proceed with caution.
References:
[1] Edwards RG and Gardner RL. (May 1967) Sexing of live rabbit blastocysts. Nature, 214, pp. 576-577.
[2] Handyside AH, Lesko JG, Tarin JJ, Winston RM, Hughes MR. (Sept 1992) Birth of a normal girl after in vitro fertilization and preimplantation genetic diagnostic testing for cystic fibrosis. NEJM, 327, pp. 905-909.
[3] Human Fertilisation and Embryology Authority (HFEA UK) [Internet]. [updated 2010 Oct 19]. PGD Conditions Licensed by the HFEA; [cited 2010 Oct 19]. Available from http://www.hfea.gov.uk/pgd-screening.html
[4] Braude P, Pickering S, Flinter F & Ogilvie CM. (2002) Preimplantation Genetic Diagnosis. Nature Reviews Genetics, 3, pp. 941-953.
[5] Federal Law Gazette, Part I, No. 69 [Internet]. [updated 2004 Mar 04]. Bonn (Germany): Act for Protection of Embryos (The Embryo Protection Act); [cited 2010 Feb 02]. Available from http://www.bmj.bund.de/files/-/1147/ESchG%20englisch.pdf
[6] BBC Health News UK [Internet]. [updated 2003 Mar 31]. ‘Zain has a right to life’; [cited 2010 Jan 07]. Available from http://news.bbc.co.uk/1/hi/health/2903723.stm
[7] Boyle R, Savulescu J. (2001) Ethics of using preimplantation genetic diagnosis to select a stem cell donor for an existing person. BMJ, 323, pp. 1240-1243.

Additional sources and further reading:
Cai P. (2008) Embryonic Screening: Selecting for the Perfect Child. Harvard Science Review, 22(1), pp. 21-23.
Franklin S & Roberts C. (2006) Born and Made: An Ethnography of Preimplantation Genetic Diagnosis. Princeton, NJ: Princeton University Press.
Scott R. (2007) Choosing Between Possible Lives: Law and Ethics of Prenatal and Preimplantation Genetic Diagnosis. UK: Hart Publishing.
Prospects of Nanoparticle Gene Delivery
Deng Pan, Class of 2012, Biomedical Engineering
Severe combined immunodeficiency (SCID) is caused by the inability of white blood cells to produce proteins required for the proper function of the human immune system. These white blood cells lack certain genes necessary for synthesizing the required proteins, such as adenosine deaminase. In the US, 400 children are born each year with SCID [1]. Currently, the most effective treatment for this disease is a bone marrow transplant, which provides the stem cells the body needs to rebuild its immune system. However, it is very difficult to find matching donors. This is just one of many genetic diseases for which gene therapy may provide an answer.
Gene therapy is the replacement of missing or defective genes in order to treat diseases. Unlike conventional treatments, which seek to alleviate the symptoms of diseases, gene therapy aims to correct their root causes. While initially aimed at curing hereditary diseases, its potential has now been expanded to fighting cancer and other metabolic disorders. For gene therapy to succeed, several difficulties must be surmounted. Blood contains salts and proteins that can dissociate the DNA. A cell’s lipid bilayer is not permeable to charged DNA molecules. The cytoplasm also contains DNases, enzymes that digest linear DNA as part of the cell’s defense against viral infection. These obstacles mean that a vector, or carrier of DNA, is required for successful gene therapy.

The earliest gene-delivery experiments used viruses as carriers for DNA [2]. HIV and related viruses are capable of inserting their own genes into human DNA as part of the infection process. HIV has surface proteins that dock onto specific human cells and trigger endocytosis, and it can then release its contents once it escapes from endosomes [3]. Gene therapy harnesses this potent ability of viruses to infect human cells: the disease-causing genes of the virus are removed and replaced by the genes intended for delivery. The ability of the virus to replicate is also destroyed for the safety of the patient. These manipulated viruses are then injected into the host to infect target cells with “good” copies of the DNA. The major advantage of viruses is their high efficiency [4]. This is partly because viruses possess proteins that bind to the cell membrane, increasing the chance that a virus will be taken up by a cell rather than wandering in the bloodstream. Viruses also have sophisticated protein machinery to help deliver their contents into the cell; some, such as HIV, even have ways to send their packaged DNA into the nucleus. These viruses also possess enzymes capable of integrating the delivered gene into the host genome, resulting in permanent expression of the gene within the cells. As such, it was no wonder that hopes were high that effective gene therapy would soon become a mainstream treatment.

Unfortunately, with continuing research, many difficulties with the viral system have become apparent. The first problem is size. A virus is an intricate system, and natural selection has ensured that no unnecessary parts are present [5]. The result: there is very limited space for therapeutic DNA. More important is the problem of safety. The proteins on these viruses, so effective in binding to host cells, also trigger immune responses from the host. This means that later administrations of viral gene delivery will be less effective than earlier ones. Worse, a systemic immune response can sometimes lead to organ failure or shock, endangering a patient’s life. HIV can readily insert genes into host genomes, but when viruses integrate their packaged DNA into the host, they do so at random. Sometimes they insert the gene into the middle of a host gene, effectively shutting that host gene down. Other times, they may trigger overproduction of certain proteins, such as growth factors, which can lead to uncontrolled cell growth [6].

Because of these problems with viral vectors, researchers have explored other approaches to gene delivery. One emerging method uses nanoparticles built from cationic polymers. DNA is negatively charged, so positively charged polymers and DNA engage in electrostatic interactions that compact the DNA into particles in the nanoscale range of 50 to 200 nm, which can then be injected into the host by a variety of methods. These nanoparticles place far fewer restrictions on the volume of packaged DNA, so a greater variety of genes can be delivered. Because nanoparticles do not carry the “signature” proteins of viruses, they do not trigger a systemic immune response from the host. And since the cations are synthetic, they can be extensively modified through chemical reactions according to the needs of gene delivery. Over the last two decades of research, many different polymers have been proposed, including polyethylenimine, polyphosphoramidate,
chitosan, poly-L-lysine, and many others [7].

Research on gene delivery tends to focus on getting nanoparticles into target cells, but nanoparticles must overcome many obstacles on the way [8]. Nanoparticles are usually delivered intravenously. Currently the two most used delivery methods in mouse models are tail vein injection and hydrodynamic injection – injecting a large volume of a solution of DNA-containing nanoparticles. Hydrodynamic injection, in particular, can lead to high efficiency. In both methods, nanoparticles travel through veins and come into contact with blood, the first barrier to transfection. Blood is high in salt and serum, which means there are many positively charged ions and proteins capable of dissociating the nanoparticles through ionic interactions. Contact with blood can also cause nanoparticles to fuse into aggregates that cannot transfect cells. In addition, macrophages in the blood may recognize the nanoparticles as foreign and actively remove them. Nanoparticles may also need to traverse physical barriers. For example, hepatocytes – the major cell type in the liver and the target for liver gene delivery – are shielded by a fenestrated layer of tissue that acts as a sieve, excluding particles larger than 170 nm. After finally arriving at the target cells, the nanoparticles must traverse the lipid bilayer of the membrane. Because the cationic materials used are charged under physiological conditions, the particles cannot diffuse through hydrophobic membranes. Unlike viruses, which have sophisticated mechanisms mediated by their own proteins, nanoparticles usually enter cells via non-specific endocytosis – the ability of cells to pinch in their own membranes to bring in solutes from their surroundings. A better understanding of the proteins involved in binding and transporting these nanoparticles could improve the efficiency of this step. Once brought in, the nanoparticles must escape from their encasing vesicles to release their contents. However, vesicles tend to have harsh interior environments, such as very low pH, and specific mechanisms are needed for nanoparticles to escape. Finally, because transcription – copying DNA into RNA so that proteins can be made – occurs only in the nucleus, only the DNA that manages to enter the nucleus will be able to express genes. With this multitude of obstacles, it is no wonder that the efficiency of nanoparticle gene delivery is still dwarfed by that of viral gene delivery. Compounding the difficulty is the fact that results from in-vitro experiments often do not accurately represent in-vivo situations.

In the face of mounting difficulties, it is easy to be discouraged about the prospects of nanoparticle-based gene delivery. But thanks to continuous research, we have gained an increased understanding of the mechanisms by which it may work. The high efficiency of polyethylenimine, the second cationic polymer carrier discovered, once baffled researchers. Now we know that because it protonates at a pH close to that of endocytic vesicles, it attracts positive ions (and negative ions, by charge balance) into the vesicle, creating an osmotic pressure that eventually bursts the vesicle membrane. This ability gives polyethylenimine a high rate of endosomal escape, which contributes to its high transfection efficiency. Understanding this mechanism has led to improvements in other gene-delivery carriers, made by attaching pH-sensitive groups to the polymer to simulate this “proton sponge” effect [9].
It is hard to find polymers that are both stable in blood and efficient at transfection, because high efficiency also depends on the carrier's ability to release its DNA when needed. Therefore, instead of searching for the "perfect" carrier, scientists use reversible crosslinkers to reinforce nanoparticles. A crosslinker acts like a string with two sticky ends, tying two linear polymers together so that the nanoparticle forms a net-like structure held together by covalent bonds in addition to the weaker electrostatic interactions. Furthermore, these crosslinkers contain disulphide bonds that are broken down in the reducing environment of the cytoplasm, so the nanoparticles release their DNA cargo only once they are inside cells [10]. Increasing uptake by cells is a trickier problem, since little is known about the exact proteins involved in the process. In a few special cases, however, the problem can be addressed through specific protein-ligand interactions: a given receptor accepts only certain types of ligands. Hepatocytes carry receptors on their surfaces that recognize galactose, so galactose chains have been engineered onto nanoparticles to target liver cells, producing a high local concentration of nanoparticles around hepatocytes. This has led
to increased efficiency in liver-targeted gene delivery. Other, more general types of ligands have also been incorporated for broader gene delivery. For example, anti-CD3, an antibody against the cell surface protein CD3, has been attached to polyethylenimine for gene delivery in a variety of cell lines [11]. A newly emerging trend is the ternary complex: instead of forming nanoparticles from one cationic polymer and DNA alone, an additional layer of anions is used to coat the nanoparticle. The original rationale was that the positive charges on nanoparticles disrupt the negatively charged cell membrane, leading to loss of cell integrity and, eventually, cell death. With an additional negatively charged layer, it was proposed, nanoparticles could be made less toxic at the cost of only a marginal drop in efficiency from the increase in size and other factors. Hyaluronic acid, a gel component in many animals, including humans, has been used to coat various nanoparticles for gene delivery with reduced toxicity. Surprisingly, the extra coating of hyaluronic acid was also observed to increase the efficiency of gene delivery, which has spurred further research into multi-layered nanoparticles [12]. Current research is a concerted effort to use biomaterials to mimic viruses and achieve the stunning efficiency that nature evolved over millions of years. At the same time, scientists are using rational design strategies to reduce toxicity and make gene therapy safe. A recent trend in this area is the incorporation of biologics into nanoparticles; for example, a nuclear localization signal, a short protein peptide, has been used to increase translocation of DNA into the nucleus [13]. Not only the design of delivery vectors has diversified, but also their particular uses and strategies.
Tumors exhibit a property known as "enhanced permeability and retention," which leads to the preferential accumulation of macromolecules in tumor tissue. Polyethylenimine has been used to deliver genes to ovarian cancer in a rat model [14]; these nanoparticles deliver apoptotic genes to cancer cells while largely evading normal tissue. Many other novel strategies have been proposed and continue to add to the diversity of gene therapy.
References: [1] M. P. Jennifer, The Journal of allergy and clinical immunology 2007, 120, 760. [2] M. Cavazzana-Calvo, A. Thrasher, F. Mavilio, Nature 2004, 427, 779. [3] K. Miyauchi, Y. Kim, O. Latinovic, V. Morozov, G. B. Melikyan, Cell 2009, 137, 433. [4] X. Gao, K. Kim, D. Liu, The AAPS journal 2007, 9, 92. [5] H. M. Blau, M. L. Springer, New England Journal of Medicine 1995, 333, 1204. [6] C. Baum, O. Kustikova, U. Modlich, Z. Li, B. Fehse, Human gene therapy 2006, 17, 253. [7] A. Rolland, Critical reviews in therapeutic drug carrier systems 1998, 15, 143. [8] C. Wiethoff, C. Middaugh, Journal of pharmaceutical sciences 2003, 92, 203. [9] A. Akinc, M. Thomas, A. Klibanov, R. Langer, The Journal of Gene Medicine 2005, 7, 657. [10] M. A. Gosselin, W. Guo, R. J. Lee, Bioconjugate Chemistry 2001, 12, 989. [11] M. O’Neill, C. Kennedy, R. Barton, R. Tatake, Gene therapy 2001, 8, 362. [12] M. Hornof, M. de la Fuente, M. Hallikainen, R. Tammi, A. Urtti, The Journal of Gene Medicine 2008, 10, 70. [13] M. Zanta, P. Belguise-Valladier, J. Behr, Proceedings of the National Academy of Sciences of the United States of America 1999, 96, 91. [14] M. H. Louis, S. Dutoit, Y. Denoux, P. Erbacher, E. Deslandes, J. P. Behr, P. Gauduchon, L. Poulain, Cancer Gene Therapy 2005, 13, 367. [15] X. Jiang, Y. Zheng, H. Chen, K. Leong, T. Wang, H. Mao, Advanced Materials 2010, 22, 2556.
hurj fall 2010: issue 12
focus
EPIGENETICS
Heritable Regulation of Gene Expression
Dong Kim, Class of 2013
Molecular and Cellular Biology
Many students learn in high school biology how the theory of Lamarckism lost support in favor of Darwin's theory of natural selection. Jean-Baptiste Lamarck believed that characteristics acquired during a parent's lifetime could be passed on to its offspring. A common example is a mother giraffe passing on a longer neck developed through her struggle to reach the higher branches of trees, which seemed implausible. However, the developing field of epigenetics suggests that Lamarck's idea deserves a second look. To call epigenetics a new field, though, is not quite accurate. The British biologist Conrad Waddington first used the term in his 1957 book The Strategy of the Genes, defining it as the study of how genotypes give rise to phenotypes [1]. Today, scientists commonly define epigenetics as the study of mechanisms that affect gene expression beyond the well-established paradigm of transcription and translation of the genetic code into proteins [1]. It is becoming clear that our knowledge of genetics is much more limited than this definition implies. In fact, epigenetics is such a young field that arguments persist over how to define its phenomena and which mechanisms fall under its purview.
Possibly the biggest implication of epigenetics is its effect on the nature versus nurture argument. This popular debate over whether nature or nurture has the bigger effect on an individual's development offers an overly simplified view of what is actually a complex and dynamic system. For example, the phenomenon of parental imprinting, the epigenetic marking of certain sub-regions of the parental genome, points to the power the environment has over gene expression. A focus of human genetics has been the effect of early childhood development on gene expression. Specifically, epigenetics invokes the concept that gene expression can be turned on or off within the lifetime of an individual. This concept has led to a rethinking of genetic determinism: gene expression may be malleable as a consequence of environmental factors, and genetics may no longer restrict individuals to a certain phenotype. Epigenetics provides a newfound freedom from our genes. An appealing analogy is that epigenetics is the software running on genetics, the hardware. This way of thinking suggests that what truly separates us is the epigenome, not the genome. DNA methylation is one of the best-known and most-studied epigenetic mechanisms. It is a covalent modification of DNA, the addition of a methyl group to cytosine (and, in some organisms, adenine) bases [2]. The information indicating which sites should be methylated is retained and passed on to the next generation of cells. This heritable mark persists through generations of cells and is often referred to as epigenetic memory. Studies are still ongoing to understand the mechanism; the simplest conceivable one depends on semi-conservative copying of the parental strand's methylation pattern onto the progeny strand [2].
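The semi-conservative copying idea can be sketched as a toy model: after replication, each daughter duplex keeps the parental strand's marks, and a maintenance step (carried out in mammals by the enzyme Dnmt1, a detail not from the article) methylates the fresh strand wherever the parental strand is marked. The pattern below is invented:

```python
# Toy model of maintenance methylation: a strand is a list of CpG sites,
# each True (methylated) or False (unmethylated).
def maintain(parental, new_strand):
    # Dnmt1-like step: methylate the new strand at hemimethylated sites,
    # i.e. wherever the parental strand carries a mark.
    return [n or p for p, n in zip(parental, new_strand)]

def divide(marks):
    new_strand = [False] * len(marks)  # replication makes an unmarked strand
    return maintain(marks, new_strand)

pattern = [True, False, True, True, False]
for _ in range(3):                     # three cell divisions
    pattern = divide(pattern)
print(pattern)                         # pattern preserved across divisions
```

The point of the sketch is that the mark survives replication even though the new strand starts blank, which is what makes methylation a heritable, "memory-like" signal.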
DNA methylation marks are typically erased around zygote formation and re-established during development, and there are many more theories on how DNA methylation patterns arise. The mark's popularity in research stems from the fact that it is the most easily measured epigenetic marker for examining epigenetic changes between ages or generations. In 2003, a study by Waterland and Jirtle showed the dramatic consequence environmental exposure can have on epigenetics via DNA methylation marks [3]. The experiment varied the diets of pregnant agouti mice; the agouti gene gives mice yellow coats and a propensity for obesity. The control group received a regular diet, while the experimental group received a methyl-rich diet, which caused methyl groups to attach more
frequently to the CpG sites in the upstream transposable element [3]. A CpG site is a cytosine followed by a guanine on the same DNA strand, joined by a phosphate; the notation distinguishes it from a cytosine-guanine base pair between the two strands. A transposable element, sometimes dismissed as "junk DNA", is a small, mobile stretch of DNA that can move to other parts of the genome. The transposable element upstream of the agouti gene, however, is important because it controls the expression of the gene. The study showed that increased DNA methylation near the element decreased expression of the gene: offspring of mice fed the methyl-rich diet had dark coats and were much less susceptible to obesity. Astonishingly, the genetic sequences remained unchanged; only the DNA methylation patterns differed. This dramatic difference in phenotype due to DNA methylation strongly supports the notion that the lifestyle choices of parents and ancestors affect their offspring. Other studies have tried to correlate DNA methylation marks with age differences. It was not until 2008 that the first study was published strongly linking early-life environmental conditions to adult disease susceptibility. An epidemiological investigation of individuals conceived in the Netherlands during the famine at the end of World War II compared individuals with periconceptional exposure to famine (around the time of conception and early gestation) to same-sex siblings conceived after the famine and therefore unaffected by it. Six decades later, individuals conceived during the famine had significantly lower methylation of the IGF2 gene, a key factor in human growth and development. Epigenetic modification through DNA methylation had significantly affected the offspring. This was the first study to show that early-life environmental conditions can cause epigenetic changes in humans that persist throughout life [4].
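The CpG notation used above is easy to make concrete: a CpG site is simply a C immediately followed by a G on one strand. A short script (the sequence is made up) counts them:

```python
# Count CpG dinucleotides (a C immediately followed by a G on one strand)
# in a made-up DNA sequence. This is the "CpG" of DNA methylation studies,
# not a C-G base pair between the two strands of the double helix.
def count_cpg(seq):
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")

print(count_cpg("TTACGCGGATCGAACG"))  # prints 4
```

Counting (or assaying the methylation status of) such sites across a genome is the basic operation behind the methylation comparisons described in these studies.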
Research on monozygotic twins has also led to many insights in the field of epigenetics. Monozygotic twins have the same DNA sequence, but have been shown in later life to display phenotypic variation and differences in disease susceptibility. In 2005, a collaboration of labs compared DNA methylation between twins with a mean age of 30.6 years and an age range of 3 to 74 years. Twins between roughly 3 and 30 years old showed few differences in DNA methylation levels [5]. Older monozygotic twins, however, showed significant differences in DNA methylation. The study concluded that "future studies should now address the specific mechanisms responsible for the observed epigenetic drift of monozygotic twins." The important lesson from these studies is that only correlation was demonstrated; the epigenetic changes may or may not be linked to changes in disease susceptibility. The paper provided only a small insight into how much the environment can affect phenotypic expression. In addition, the authors raised the possibility that epigenetic differences accumulate with age. In another study, contrary to what was expected, a comparison of DNA methylation across a wide number of genes between 20-year-olds and 60-year-olds failed to show significant correlations between DNA methylation marks and disease susceptibility [6].
Researchers are considering the idea that epigenetics is the missing link to understanding many human diseases. Already, several diseases have been linked to a loss of parental imprinting: loss of imprinting of genes including IGF2 and H19 is associated with Wilms' tumor and other cancers. The NIH has recognized the importance of epigenetics and devoted $190 million to the field. Universities such as Johns Hopkins have established centers focused on connecting different research areas through epigenetic study. In the Department of Biology at Johns Hopkins, Dr. Xin Chen is examining the role of epigenetics in the differentiation of stem cells in the testis of Drosophila melanogaster; it is possible that failures in normal epigenetic regulation result in abnormal development and the genesis of cancer. Johns Hopkins School of Medicine faculty member Dr. Andrew Feinberg has devoted his research to understanding the link between the environment and gene expression, and how this link shapes human disease susceptibility. However, even $190 million is not enough to truly examine the epigenome. The Human Epigenome Project is underway, but it requires funding beyond even that of the Human Genome Project because of the vast number of possible epigenetic sites.
The media has begun taking notice of the broader implications of epigenetics. Time Magazine's in-depth article on epigenetics in January of this year was titled "Why Your DNA Isn't Your Destiny," and Newsweek published an article titled "Beyond the Book of Life." Both focus on the notion that individuals are no longer restricted to a genetic fate; instead, epigenetics may provide a sort of malleable path in genetics. Ethical and legal issues have also been raised as epigenetics has received more attention. For example, there are worries about epigenetic discrimination if individuals can be assessed for disease susceptibility before they are even born [7].
The most recent study supporting the link between epigenetic variation and disease susceptibility comes from Dr. Feinberg's lab. In a study of a small sample of people from Iceland, the authors developed a personalized epigenome for each participant, built from an unbiased genome-scale analysis of roughly 4 million CpGs in 74 individuals using comprehensive array-based relative methylation analysis [8]. The data point to the possibility of an epigenetic strategy for identifying patients at risk of common diseases.
The breadth of applications for epigenetics is seemingly tremendous, but there are several important issues to consider before accepting any definite theories or hypotheses. First, there have not been enough studies to understand the role of epigenetics in humans; extensive studies on humans are difficult, since tracking subjects and obtaining human DNA requires time and money. That brings up the second point: many of the recent studies are epidemiological in nature. These papers looked for correlations rather than investigating the biological mechanisms behind epigenetic modification, or whether the observed differences were actually the result of epigenetic modification. Providing evidence of a causal link is the next step, and many questions about the impact of diet, smoking, and other behaviors will require time and longitudinal studies to answer. Only by understanding the mechanisms by which epigenetic differences arise can there be epigenetics-based applications in medicine, such as new and powerful drugs. Epigenetics is a new and exciting field that holds remarkable potential to shape the way people think about inheritance and disease. However, like every advance, it must be considered with caution and patience. Epigenetics is only the beginning of a shift in the perception of genetics.
References: [1] Bird A. Perceptions of epigenetics. Nature 2007;447:396-398. [2] Bird A. DNA methylation patterns and epigenetic memory. Genes Dev 2002;16:6. [3] Waterland RA, Jirtle RL. Transposable elements: targets for early nutritional effects on epigenetic gene regulation. Mol Cell Biol 2003;23:5293-5300. [4] Heijmans BT, et al. Persistent epigenetic differences associated with prenatal exposure to famine in humans. Proc Natl Acad Sci USA 2008;105:17046-17049. [5] Fraga MF, et al. Epigenetic differences arise during the lifetime of monozygotic twins. Proc Natl Acad Sci USA 2005;102:10604-10609. [6] Eckhardt F, et al. DNA methylation profiling of human chromosomes 6, 20 and 22. Nature Genet 2006;38:1378-1385. [7] Rothstein MA, et al. Ethical implications of epigenetics research. Nat Rev Genet 2009;10(4):224. [8] Feinberg AP, et al. Personalized epigenomic signatures that are stable over time and covary with body mass index. Sci Transl Med 2010;2:49ra67.
Era of the Personal Genome
Anne Kirwin, Class of 2012
Molecular and Cellular Biology
What do Craig Venter, Glenn Close, Ozzy Osbourne, and Archbishop Desmond Tutu all have in common? Beyond their public fame, they are among the first people to have had their genomes sequenced. Twenty years after the birth of the original Human Genome Project, we find ourselves immersed in a world of genes, health, and genetics. Today, over $32 million in federal funding has been channeled into making whole-genome sequencing affordable for the average consumer [1]. Biotech companies like 23andMe and Illumina have been rapidly developing technologies to help people better understand their 3.3 billion base pairs and the importance of those base pairs to their health.
History of our Decoded Genome
Our obsession with decoding the human genome began in the 1990s. At that time, Francis Collins of the National Human Genome Research Institute (NHGRI) and Craig Venter of Celera Genomics raced to be the first to complete a sequence of the human genome. With millions of dollars in hand and immortal fame at stake, they finished a "rough draft" in 2000 and produced the finished product in April 2003 [2]. Both Collins and Venter have since become household names in the research and science world. However, even with the massive amounts of information uncovered by the Human Genome Project, most of our genetic composition remains poorly understood. The array of As, Ts, Cs, and Gs in our genome is meaningless without knowing its functions. Therefore, several other projects have been developed to expand research in this area. In 2002, an international research consortium founded the HapMap Project to expand upon the original human genome findings [3]. Although the DNA of any two humans is very similar, about one out of every 1,200 base pairs will differ between the two individuals. The HapMap (Haplotype Map) project exists to document common variants among humans. Single-base changes in DNA, of which there are about ten million, account for most of the genetic disparities among humans. These single nucleotide polymorphisms, or SNPs, are being mapped across many different populations to find correlations between SNPs, certain genetic traits, and haplotypes [4]. Begun in 2003, the ENCODE project aims to document further complexities in the genome. For this project, the NHGRI coordinates with research facilities across the country to identify functional elements in our DNA that act both at the level of the DNA itself and in the transcription and translation of proteins. Research groups propose the locations of the functional sequences and map them to our genome.
This data has been released to the online database UCSC Genome Browser for public access [5]. After discovering these novel elements in the genome, researchers became very interested in how the many components of the genome interact to produce certain phenotypes. Genome-Wide Association Studies (GWAS) involve scanning the genome as a whole for markers that could predispose to, or indicate, a certain genetic condition. Genetic markers can also be found which, combined with certain environmental exposures, dramatically increase the risk for certain disorders. Research groups do this by comparing DNA from affected individuals with DNA from unaffected individuals, on a scale of hundreds of samples per group. These studies have been able to target groups of genes involved in disease, since many conditions are not caused by single genes [6]. Furthermore, with the age of personalized medicine comes an interest in one's own DNA and the information that can be extracted from it. The cost of genome sequencing has decreased dramatically in the last few years (from $3 billion to about $50,000), which has made such information much more accessible to consumers. It is therefore only natural to see the rise of genome sequencing projects. The Personal Genome Project (PGP) is described as "…a public genomics research study that aims to improve our understanding of genetic and environmental contributions to human traits". This effort, founded in 2006 by George Church, started with ten samples from well-known contributors in the scientific, medical, and research world [7].
Biotech Industry on the Rise
With affordable, marketable whole-genome sequencing in sight, many private companies are stepping up as pioneers in this new market. Two different methods of whole-genome scanning are being marketed to consumers for recreational and clinical use. The method most commonly offered by direct-to-consumer companies is a whole-genome SNP array, which targets the known sites of single-base changes in the human genome and tests them for variants associated with specific conditions. Traits tested by these companies vary from benign characteristics such as eye color to serious diseases like lung cancer. deCODE Genetics lists its starting price at $2,000 [8]. A much more complex and expensive alternative uses the same kind of sequencing as the HGP and PGP. Knome (pronounced "know me") offers exome sequencing, that is, sequencing of all protein-coding regions, for $68,500 a person [9]. These methods can find known SNPs, rare or novel variants, and other unique features of an individual's DNA. This type of sequencing is not readily marketed to consumers; the company suggests it be offered to "physician-directed" families rather than curious consumers [10]. Direct-to-consumer companies are trying to start a consumer revolution that can be described as "predictive medicine" [11]. Although single nucleotide polymorphisms are known to predispose to certain traits, they do not invariably predict that an individual will have a condition. For example, a person can carry a genetic variant associated with obesity, yet environment and lifestyle remain major contributors to developing it. Nonetheless, there are genes for which concrete statistics are available. The Huntingtin gene, for instance, can be characterized by full gene sequencing to accurately predict whether an individual will develop Huntington's Disease [12].
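The Huntingtin prediction mentioned above works because the disease tracks a simple sequence feature: an expanded run of CAG repeats in the gene. A sketch of counting that run follows; the sequence fragment is made up, and the rough "40 or more repeats is fully penetrant" threshold is background knowledge rather than a figure from this article:

```python
# Sketch: find the longest run of consecutive CAG repeats in a made-up
# fragment of a Huntingtin-like gene. In Huntington's disease the CAG
# tract is expanded; roughly 40+ repeats is considered fully penetrant.
# Illustrative only, not a diagnostic tool.
import re

def longest_cag_run(seq):
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

fragment = "ATG" + "CAG" * 19 + "CAACAGCCGCCA"  # a typical unexpanded allele
print(longest_cag_run(fragment))                # prints 19
```

This is why full gene sequencing can be genuinely predictive here, in contrast to SNP-based risk estimates for complex traits.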
Steven Pinker, a professor of psychology at Harvard University and a participant in the Personal Genome Project, decided he would rather not know about his variants of several genes, such as the apolipoprotein E gene, which is associated with a dramatically increased risk of Alzheimer's disease [13]. Genome sequencing is also being used by drug companies to help predict how drugs will affect specific individuals, a new approach called pharmacogenomics. Pharmaceutical companies hope to tailor their drugs to be more effective, stronger, and safer in patient populations. Although this is not yet a reality, clinical trials monitoring the cytochrome P450 proteins have shown promise in this area. These enzymes are involved in breaking down a variety of pharmaceuticals, and the ability of cytochrome P450 to metabolize drugs varies among individuals. Thus, companies can adjust the prescribed amount of a drug based on the performance of this enzyme as determined by individual genetic information [14]. These so-called "designer drugs" are predicted to be the standard model for prescribing medicine, especially in cancer therapy, by 2020 [16]. Whole-genome sequencing has also brought a wealth of information to doctors, patients, advocates, and consumers. Publications and websites such as Online Mendelian Inheritance in Man (OMIM) and GeneReviews are common sources for finding genetic testing locations and research groups interested in certain diseases. The information gleaned from the HGP and other databases has helped uncover the genetic causes of hereditary diseases and promotes research in areas still lacking in knowledge [17].
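The dose-adjustment idea behind pharmacogenomics can be sketched as a lookup from metabolizer status to a dose factor. The metabolizer categories below are standard pharmacogenetic terminology, but the numeric factors and the function itself are invented for illustration, not clinical guidance:

```python
# Hypothetical sketch of pharmacogenomic dose adjustment: map a patient's
# cytochrome P450 metabolizer status (inferred from genotype) to a
# fraction of the standard dose. Factors are made up for illustration.
DOSE_FACTOR = {
    "poor": 0.5,           # slow clearance -> standard dose may accumulate
    "intermediate": 0.75,
    "extensive": 1.0,      # "normal" metabolizer, standard dose
    "ultrarapid": 1.5,     # fast clearance -> standard dose may be ineffective
}

def adjusted_dose(standard_dose_mg, metabolizer_status):
    return standard_dose_mg * DOSE_FACTOR[metabolizer_status]

print(adjusted_dose(100, "poor"))  # prints 50.0
```

In practice the genotype-to-status inference and the dose tables are drug-specific, but the principle is exactly this kind of genotype-conditioned prescribing.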
Ethical Issues
Surprisingly, not many people have shied away from publicly sharing their genetic information. The ten original participants in the Personal Genome Project agreed to let not only their genome sequences but also their complete medical histories be published [18]. Fortunately, in 2008 the Genetic Information Nondiscrimination Act (GINA) was signed into law, making genetic discrimination in health insurance and employment illegal [19]. However, with companies like Illumina promising applications for carrying one's personal genome on portable devices, we cannot automatically assume that our privacy will be respected. Technology has also been moving at such a fast pace that errors in protocols and methods are not caught until it is too late. A recently published example was a study linking specific genes to centenarians using a chip called the 610-Quad; less than a week after publication, criticism arose concerning the reliability of the microarray chip used in the experiment [20]. More genetic studies must be performed before such results can be considered validated. Additionally, we need to consider the ethical concerns of informing someone that they carry traits for a genetic disease or disorder. Direct-to-consumer genetic testing services do not offer psychological consulting or genetic counseling after results are delivered, despite the fact that the average consumer cannot readily interpret genetic sequencing results. People who do receive genetic consultation are often motivated to live healthier lifestyles and attempt behaviors that may reduce their risk for a certain disease. One recent study showed that people with a variant that increases predisposition to Alzheimer's disease are much more likely to participate in healthy lifestyle activities despite having no clear treatment regimen available [21].
Studies like this imply that individuals who seek genetic testing are looking to be informed consumers rather than to simply accept what might be their biological fate. Our decoded genome is the key to discovering therapies for complex disorders, and genetics will be a key player in drug development and therapeutics over the next few decades. Whole-genome sequencing is not yet realistic for the majority of the population, but decreasing costs combined with increasing health
benefits may make it a reality in the near future. For now, scientists must work on translating genomic research into useful tools for genetics and therapeutics, as well as for general knowledge.
References: [1] NHGRI Expands Effort to Revolutionize Sequencing Technologies [Internet]: NIH News; c2005 [cited 2010 October]. Available from: http://www.nih.gov/news/pr/aug2005/nhgri-08.htm. [2] The Human Genome Project Completion: Frequently Asked Questions [Internet]: National Human Genome Research Institute (NHGRI); c2009 [cited 2010 October]. Available from: http://www.genome.gov/11006943. [3] International Consortium Launches Genetic Variation Mapping Project [Internet] Washington: National Human Genome Research Institute (NHGRI); c2010 [cited 2010 October]. Available from: http://genome.gov/10005336. [4] International HapMap Project: About the Project [Internet]: HapMap Data Coordination Center; c2007 [cited 2010 October]. Available from: http://hapmap.ncbi.nlm.nih.gov/whatishapmap.html. [5] ENCODE Project at UCSC: About the ENCODE Data Coordination Center [Internet]: Genome Bioinformatics Group, UCSC; c2009 [cited 2010 October]. Available from: http://genome.ucsc.edu/ENCODE/. [6] Genome-Wide Association Studies [Internet]: National Human Genome Research Institute (NHGRI); c2010 [cited 2010 October]. Available from: http://www.genome.gov/20019523. [7] Personal Genome Project: Overview [Internet]; c2010 [cited 2010 October]. Available from: http://www.personalgenomes.org/pgp10.html. [8] deCODEme Complete Scan [Internet]: deCODE Genetics; c2010 [cited 2010 October]. Available from: http://www.decodeme.com/complete-genetic-scan. [9] It's personal: Individualized genomics has yet to take off. The Economist 2010 Jun 17 [cited 2010 October]. Available from: http://www.economist.com/node/16349402. [10] Knome Frequently Asked Questions [Internet]: Knome; c2010 [cited 2010 October]. Available from: http://www.knome.com/faq.html.
[11] It's personal: Individualized genomics has yet to take off. The Economist 2010 Jun 17 [cited 2010 October]. Available from: http://www.economist.com/node/16349402. [12] http://www.ncbi.nlm.nih.gov/pubmed/17240289 [13] Pinker S. My genome, my self. The New York Times Magazine 2009 Jan 11 [cited 2010 October]. Available from: http://www.nytimes.com/2009/01/11/magazine/11Genome-t.html?pagewanted=4&_r=1 [14] Ingelman-Sundberg M. Pharmacogenetics of cytochrome P450 and its applications in drug therapy: The past, present and future. Trends Pharmacol Sci 2004 Apr;25(4):193-200. [15] Collins F, McKusick VA. Implications of the human genome project for medical science. JAMA 2001 Feb 7;285(5):540. [16] Ibid. [17] Ibid. [18] News in perspective: Postagene.com. The New Scientist 2008 Oct 25:6. [19] "GINA" Genetic Information Nondiscrimination Act of 2008, Information for Researchers and Healthcare Professionals [Internet]: National Human Genome Research Institute (NHGRI); c2009 [cited 2010 October]. Available from: http://www.genome.gov/Pages/PolicyEthics/GeneticDiscrimination/GINAInfoDoc.pdf. [20] Carmichael M. The little flaw in the longevity-gene study that could be a big problem. Newsweek 2010 Jul 7 [cited 2010 October]. Available from: http://www.newsweek.com/2010/07/07/the-little-flaw-in-the-longevity-gene-study-thatcould-be-a-big-problem.html [21] Chao S, Roberts JS, Marteau TM, Silliman R, Cupples LA, Green RC. Health behavior changes after genetic risk assessment for Alzheimer disease: The REVEAL study. Alzheimer Dis Assoc Disord 2008 Jan-Mar;22(1):94-7.
humanities
The Importance of Applying Rule-Based Law to International Legal Systems
Without a rule-based standard, international law becomes unclear territory, relying on individual opinions and inconsistent practices.
Anisha Singh, Class of 2012
International Studies, Economics
There are two basic schools of thought regarding how international legal cases should be arbitrated [1]. Rule-based law, or command law, is a system similar to the domestic legal systems of many countries: it requires judges to make rulings by considering existing international legislation, which in the international system most often means United Nations legislation. Justices at the International Court of Justice consider whether a case has broken any international laws and then determine its outcome. Process-based law, however, is the more popular school of thought. It is the system of evaluating court rulings, academic opinions, and social norms to find a legal precedent. Process-based law is set through the practices of individuals and is not necessarily formalized into codified legislation. Since international legislation is often outdated and slow in its creation, process-based law allows rulings to be made on current issues; its supporters argue that laws need to be able to change with transient global circumstances. The difference between process- and rule-based law is problematic for the international judicial system, since justices wishing to use both would have to try cases with two separate sets of rules and methodologies. Standardization of rule-based law within the international legal system is necessary to promote fairness in the arbitration of international disputes. Rule-based law is fundamental to the international legal system; it sets a standard for arbitration and a basis for international agreements and the settling of disputes. It places substantial value on law and allows rational actors to understand all the obligations of taking part in treaties, agreements, and international institutions. Without a rule-based standard, international law becomes unclear territory, relying on individual opinions and inconsistent practices.
There is, of course, a place for non-rule-based law, often referred to as process-based law. Process-based law, which includes court rulings, professors’ opinions, and social norms, is extremely important in the creation of rule-based law. Without the new ideas found through individual occurrences and opinions throughout the world, law would lack innovation and consequently become outdated. However, process-based law is a process, a step towards the creation of rule-based law. Individual actors contribute their views (via practice) until the view is seen as universal and, therefore, can be translated
into solid legislation with the status of international law. Hence, process-based law by itself cannot be considered international law, because it is too arbitrary and contains too many contrary norms and expectations. Countries in dispute could simply apply whichever legal guidelines fit their case. Judges, rather than looking towards legal precedent, could make rulings based on their individual preferences. Process-based law becomes extremely problematic in an international legal system because there is no standard for arbitration.

Cases involving international disputes between nations are brought to the International Court of Justice (ICJ) at The Hague in the Netherlands. The importance of rule-based law within the ICJ’s system is best exemplified through prominent court cases, such as the 1960 case of Liberia and Ethiopia versus South Africa. Liberia and Ethiopia argued that both countries could bring a
claim to the International Court of Justice, and try South Africa for creating inequality under the Mandate system. However, neither nation had been in the original League of Nations, prompting a dispute as to whether or not these countries had a legal interest in the case. In cases such as this, the historical context is important to understanding the core issues of the case and its connection to process-based law.

After World War I, the League of Nations created a Mandate system to promote decolonization. The system transferred control of a region from a former colonizer to a local country. The purpose of this system was to “realize the well-being and progress of inhabitants of the territory” [2]. At the time, it was believed that, following years of colonization, parts of Africa could not rule themselves. These nations were put under the rule of the Mandate system since the global community believed that they required supervision. South Africa was given a Mandate to rule over South West Africa, a former German colony. In 1960, Liberia and Ethiopia brought South Africa to the ICJ, stating that South Africa had violated the rules of the Mandate system. A 1966 ruling by the ICJ rejected Liberia and Ethiopia’s case, since the countries did not have a legal right to take part in it.

Judge Kotaro Tanaka, one of the presiding judges, wrote the dissenting opinion in the case. In the international judicial system, judges who disagree with the opinion of the majority are entitled to write a dissenting opinion. Although dissenting opinions do not change the outcome of court rulings, they allow judges to elaborate on why they voted the way they did. A majority of the ICJ judges ruled that Liberia and Ethiopia were not the proper parties to bring this case to court. Tanaka, however, disagreed, and backed his opinion with process-based legal theory.
It is through his opinion—and the opinions of other ICJ judges—that the flaws of process-based law emerge, and the legal and political consequences
of
the judicial system are exposed. The first issue of contention is the admissibility of this particular case. Do Liberia and Ethiopia have a legal interest in the case even if the respective countries are not directly involved? Judge Tanaka argues that the countries do not have a legal interest in the case, but that they do have a humanitarian interest. He writes that “The State may become the subject or holder of a legal interest regarding social justice and humanitarian matters...In short, each State may possess a legal interest in the observance of the obligations of other States” [3]. This, however, sets a dangerous precedent, under which most States would be allowed a legitimate claim against other states’ agreements, regardless of their own involvement. States can find a humanitarian problem in most countries, and would then be able to take part in an international legal case.

In this case, South Africa and the League of Nations had an agreement to create a Mandate system. A contract such as this one sets out particular standards and obligations by which all parties must abide. It is through both parties’ respect of the contract’s guidelines that the contract is binding and effective. The agreement between South Africa and the League of Nations did not include other individual nations, so other parties cannot get involved with this contract. Ethiopia and Liberia argued that apartheid was a jus cogens issue—a fundamental humanitarian problem above all other rules of law [4]—and that their claim should therefore be allowed. Tanaka, however, never specifies that only jus cogens humanitarian issues are grounds for valid claims. He believes social and humanitarian issues ought to be reason enough for a breach of standard court practice. Thus, under process-based law, the case should be admitted and considered before the International Court of Justice.
Under rule-based law, the standard for proper arbitration in the International Court system, Ethiopia and Liberia cannot become a part of this case. The 1966 Court ruled that, under the Mandate, individual League member states did not have the right of direct intervention and that only the supervisory organ of the League of Nations was able to intervene [5]. Since the aforementioned contract involved only two groups, a third party could not get involved, even if one of the original parties had dissolved. This principle is essential to maintaining the validity of a contract, and is a way to prevent nations from getting involved in non-domestic political issues such as the merits of apartheid. According to the Court, “... [it] would have to go beyond what could reasonably be regarded as being a process of interpretation and would have to engage in a process of rectification or revision...it had to exceed the bounds of normal judicial action.” [6]

The next point of contention is the transfer of supervision from the League of Nations to the United Nations. The original Mandate required South Africa’s authority over South West Africa to be supervised by the League of Nations, but Judge Tanaka argues that the obligation to the League was simply a formality: since the United Nations replaced the League of Nations, the Mandate automatically shifted under the supervision of the UN [7]. The Mandate, however, made no mention of any transfer of power; under standard contract principles, once the League of Nations was dissolved, so was any contract made with it. Tanaka’s precedent thus interferes with a fundamental component of contracts: unless otherwise specified, if one party to a contract ceases to exist, the contract can no longer apply; power cannot be transferred to other parties. If this principle is not upheld, treaties become increasingly hard to negotiate. For instance, South Africa may not have agreed to the Mandate treaty had it known its supervision could be arbitrarily transferred. Decreasing incentives for international agreements would lead to the breakdown of the international legal system. Since the dissolution of the League of Nations was not anticipated, it is necessary to apply a rule-based standard of law to contracts made under the League.
In the Court’s 1971 Advisory Opinion, it was decided that the application of rule-based law would void contracts in which one party no longer exists [8]. This would enable the establishment of new agreements better suited to new legal and political conditions.

Tanaka also ruled on the system of apartheid, and on the ability of Ethiopia and Liberia to claim that South Africa had breached its contract as a result of the apartheid in South West Africa. While present-day legal bodies consider apartheid a violation of international human rights, the 1920s Mandate system differed in view. Tanaka writes that it was under the United Nations, and not under the League of Nations, that a non-discrimination standard was set. During the formation and construction of the League of Nations, discrimination was not defined as a violation. Tanaka believed that since non-discrimination is a UN standard, South Africa’s establishment of apartheid in South West Africa was a violation of the Mandate system. The problem with this ruling would be a retroactive change of the Mandate. In 1922, the Mandate contracts were looked upon as favors to the international community. South Africa agreed to aid the League of Nations by ruling over South
West Africa. In order to promote the well-being of native people, South Africa was given full control of South West Africa, with the understood supervision of the League of Nations. Under the League of Nations system, apartheid was considered an acceptable means of ruling. Since South Africa’s agreement was not changed when the United Nations replaced the League, South Africa could not be punished for using a method that was accepted by the international community at the time the agreement was created. According to international law, the current view that apartheid is a violation of human rights can have no bearing on treaties established prior to this view. Apartheid, at the time of the Court ruling, was not a jus cogens violation, which would have allowed for international involvement. Tanaka’s ruling makes it possible for any semi-humanitarian question to become grounds for contractual change. The international legal system would become unstable and ineffective if it were subject to rule in accordance with one nation’s demand of another. International law would become solely subject to power relations. Process-based law leaves too much room for interpretation by parties with biased opinions, and thus cannot be used in the international legal system.

Under a rule-based system, the debate over apartheid would not even be addressed. First, the admissibility of the case brought by Ethiopia and Liberia would be declined; and second, the Mandate would be voided at the time of the dissolution of the League of Nations. In order to address any issues of apartheid, the United Nations would have to create legislation outlining the illegality of apartheid. Only once apartheid is made illegal within the international legal community can South Africa be retried for violation of human rights. While this rule-based approach is slower, it provides clearer and equal standards to which all countries can be held. Ultimately, the international legal system is more stable and more effective.
As exemplified in the South West Africa case, process-based law is an ineffective way for the international legal system to make rulings. It is only through a rule-based system that contracts and agreements maintain validity. Since the international legal system is composed of nations with varying political, judicial, and social organization, it is essential that the legal system rule on equal grounds. Concrete legislation, the foundation of rule-based law, is the only way to set an equal precedent with regard to what is, and what is not, acceptable under international law. Through the interpretation and implementation of this legislation, states create a dialogue that keeps law in balance with the transient views that affect the international community. Rule-based law is the best way to legitimize the international legal system, and it thus sets an equal precedent for all states to uphold.

References
[1] Rosalyn Higgins, Problems and Process: International Law. Clarendon Press, 2000.
[2] Tanaka, Kotaro. Dissenting Opinion of Judge Tanaka. Ethiopia and Liberia vs. South Africa. International Court of Justice. 1966. P. 267.
[3] Tanaka, Kotaro. Dissenting Opinion of Judge Tanaka. Ethiopia and Liberia vs. South Africa. International Court of Justice. 1966. P. 253.
[4] “jus cogens.” Encyclopædia Britannica. 2010. Encyclopædia Britannica Online. 03 Oct. 2010 <http://www.britannica.com/EBchecked/topic/308641/jus-cogens>.
[5] South-West Africa Cases (Second Phase). International Court of Justice. 18 July 1966. Print. P. 72.
[6] Ibid.
[7] Tanaka, Kotaro. Dissenting Opinion of Judge Tanaka. Ethiopia and Liberia vs. South Africa. International Court of Justice. 1966. P. 270-274.
[8] Legal Consequences for States of the Continued Presence of South Africa in Namibia (South-West Africa) Notwithstanding Security Council Resolution 276. International Court of Justice. 21 June 1971. Print.
[9] Tanaka, Kotaro. Dissenting Opinion of Judge Tanaka. Ethiopia and Liberia vs. South Africa. International Court of Justice. 1966. P. 288.
[10] Tanaka, Kotaro. Dissenting Opinion of Judge Tanaka. Ethiopia and Liberia vs. South Africa. International Court of Justice. 1966. P. 303.
humanities
hurj fall 2010: issue 12
President Woodrow Wilson’s Western Tour of 1919: The Formation of Wilsonian Foreign Policy and its Effect on Current International Relations

Wallace Feng, Class of 2012
Molecular & Cell Biology

On September 4, 1919, President Woodrow Wilson began one of the most daunting tasks of his presidency: a month-long national speaking campaign dubbed “The Western Tour.” The purpose of this campaign was to convince the American public that the Treaty of Versailles would bring peace to the post-war world. If the American people supported the Treaty, the Senate would stop debating and finally ratify it. Wilson, however, failed to convince the citizens and their representatives that the Treaty would be effective. The Senate’s rejection of the Treaty ultimately led to the U.S.’s refusal to enter the League of Nations. America’s actions both weakened the League and forced a return to isolationist policies. This paper analyzes the way in which Wilson crafted his speeches in order to better understand the failure of his 1919 campaign. In his tour, Wilson stressed two main themes, which were reflective of regional sentiments and contemporary political and economic issues. Although Wilson failed to get the Treaty ratified, Americans in the latter part of the 20th century realized that the ideals he promoted in his speeches had merit, and began to take a more active role in shaping international politics [1].
Wilson’s Words to the American People

Woodrow Wilson was a strong supporter of the Treaty of Versailles and wanted to convey the Treaty’s significance to the American people. During his tour, Wilson’s speeches consistently emphasized two aspects of the Treaty: the formation of a League of Nations and the fulfillment of Articles X and XI, provisions of the Treaty that would help maintain peace. To Wilson, the establishment of a League of Nations in which to settle international disputes was a necessary component of maintaining dialogue and promoting peace. The majority of his addresses focused on how the Treaty would provide liberty to oppressed people around the world through the League of Nations, and how the Treaty would prevent future conflicts.

In order to make the public understand how the Treaty would prevent a recurrence of the Great War, Wilson began his speeches by explaining his views on the causes of WWI. During a speech in Ohio, he stated that German failure to observe territorial boundaries helped bring about the war. Wilson also believed that secret treaties between European countries prior to the war provoked conflicts between nations. Articles X and XI would safeguard against the reemergence of secret treaties and territorial disrespect. In Los Angeles, Wilson told his audience that “under Article X, all the members of the League engage to respect and preserve…the territorial integrity… of the other member states.” [2] By promoting territorial integrity
and abolishing secret treaties, Article X would facilitate international respect and decrease international conflicts. Any nation refusing to submit to these articles would be sanctioned by the League of Nations.

Wilson also appealed to American values of freedom and equality by explaining that the Treaty of Versailles embodied an American sense of liberty. To Wilson, “the American principle is that the weak man has the same legal rights that the strong man has.” [3] In other words, everyone has an equal right to freedom. Ratifying the Treaty would be a way to ensure that other countries would be able to enjoy America’s ideals of liberty and equality, for “the heart of the treaty is that it gives liberty and independence to people who never could have got it for themselves.” [4] Wilson believed that if the Treaty were ratified, the League of Nations would give such liberty to these oppressed people. In addition, Wilson explained that the Treaty provided a Magna Carta for the labor force through a provision which guaranteed equal rights for American and European workers: “[The Treaty] provides that high standards of labor… shall be extended to the workingman everywhere in the world.” [5] Labor rights included a limit on work hours, the elimination of child labor, and the establishment of a labor council to redress grievances. Wilson believed that American sensitivity to issues of labor, equality, and freedom would encourage citizens to support the ratification of the Treaty.
The Appeal to Regionalism
President Wilson was conscious of the vast regional differences throughout America. When his Western Tour made its way through the Midwest, Pacific Northwest, and West Coast, Wilson depended on his two themes to play to regional sentiments. In the Midwest, Wilson’s audiences consisted mostly of people of Polish, Slavic, and Italian descent who had been attracted to metropolises with industrial job openings. In order to appeal to these immigrant populations, Wilson emphasized that the Treaty of Versailles embodied liberty, due to its provision of self-determination for European ethnic groups. He claimed that the Treaty “presented Poland with unity and independence,” [6] and that “Czechoslovakia, Rumania, Yugoslavia—all those nations now have their liberty and independence guaranteed to them.” [7] Wilson invoked regional and ethnic politics to convey that the Treaty was sympathetic to people of European descent. He hoped that his demonstration of such sympathy would translate into regional support for his crusade for treaty ratification.

As his tour moved into the western part of America, Wilson’s audiences were less concerned with European relations and more concerned with Asian relations—particularly Japanese and Chinese relations. The West Coast, given its proximity to Asia, was home to many Japanese and Chinese immigrants. While anti-Japanese sentiments loomed large during Wilson’s tour, he focused on regional pro-China sympathies. Wilson recalled the post-WWI transfer of possession of the Shantung province in China from Germany to Japan. Playing off of centuries-old tensions between Japan and China, Wilson was able to appeal to the Chinese population and to Americans who supported the Chinese by advocating a pro-China policy stance. Wilson asserted that Article XI of the Treaty of Versailles would be favorable for China, telling his audience that Japan’s annexation of Shantung would become unlawful under this Article.
In fact, the Article would abolish secret treaties and declared that “the rights of China shall be sacred as the rights of those nations that are able to take care of themselves by arms.” [8] Speeches playing off of regional sentiments, however, were not the only rhetorical means by which Wilson
attempted to gather American support for the Treaty of Versailles.
Addressing American Demands
While Wilson’s campaign focused on issues pertinent to future U.S. foreign relations, he also recognized the growing need for domestic change. Following World War I, the American workforce initiated numerous labor strikes. There was also a growing political divide in Congress, resulting from ongoing debates over the ratification of the Treaty of Versailles and the role of the United States in the post-war world. In order to connect his cause with more immediate domestic issues, Wilson discussed the necessary economic and political changes.

The first prominent worker strike occurred in 1919 in Seattle, when a shipyard company rejected workers’ demand for higher pay. [9] In the months following, American steel workers went on strike for the same reason. The demand for wage increases soon spread to public sectors of the workforce, and on September 9, the Boston police force went on strike after the city failed to set wages comparable to living costs. In order to address worker concerns, Wilson focused on the Treaty’s Magna Carta provision for laborers. He referred to the provision as “a great guarantee for labor—that labor shall have the councils of the world devoted to the discussion of its conditions and of its betterment.” [10] According to Wilson, ratification of the Treaty was in the best interests of the American people within both international and domestic settings. The Treaty would address the disparity “between those who organize enterprise and those who make enterprise go by the skill and labor of their hands,” by setting employees and employers as equals. [11]

While American workers sought labor equality, the U.S. government did not want equal standing with other nations. The majority of the Senate was concerned with the interpretation of a particular passage in Article X, which stated that nations must “respect and preserve from external aggression the territorial integrity of all states.” [12] The U.S.
government, however, felt that the clause endangered American sovereignty and autonomy, and demanded that the Treaty be amended prior to ratification. Wilson addressed Congressional concerns by explaining that if the League of Nations used Article X to call nations to war, “there would be no necessity on the part of the Congress of the United States to vote the use of force.” [13] Hence, American troops could not be called into action without the government’s consent. The Senate was not satisfied with Wilson’s view, however, and its worry over the Treaty’s effect on the United States escalated. In fact, Republican Senator Hiram Johnson of California started his own western tour advocating against the Treaty, which he felt forced America to protect the war gains of other nations. [14]

Despite the lack of political support in Washington, Wilson maintained that the Treaty was crucial to future military policies for the U.S. He continued to emphasize that the Treaty would essentially prevent war by requiring member countries of the League of Nations to “agree to respect and preserve against external aggression the territorial integrity and existing political independence of the others.” [15] Unfortunately, on September 25, 1919, Wilson suffered a stroke that left him incapable of finishing his tour. Within the next two months, the Senate rejected the ratification of the Treaty of Versailles. In support of the Senate’s decision, Senator William Borah told a crowd outside of Chicago that “you can’t have a League without sacrificing [George] Washington’s policy of no European entanglements.” [16] Paradoxically, Wilson’s tour may have actually contributed to his ultimate failure to convince the people and the Senate about
the necessity of the Treaty. In his speeches, Wilson neglected to show how America could avoid foreign entanglements if the Senate were to ratify the Treaty. Along his tour, he could have emphasized that one way the League of Nations would prevent future wars was by allowing America to remain free of European affairs. Yet, by grounding his speeches in appeals to regionalism and in themes that showed how the Treaty could cater to American desires, Wilson made the Treaty a predominantly domestic issue rather than an issue of foreign policy. Because he neglected to alleviate the fear that the Treaty might force America into European entanglements, Wilson could not quell the concerns of those worried about American involvement in foreign affairs, and consequently he could not win the crucial support of his Senate opposition.
Wilsonian Foreign Policy Legacy

Although Wilson did not succeed in gathering adequate support for the Treaty of Versailles, the political views he expressed during his Western Tour eventually became the basis for America’s foreign policy. This policy, known as Wilsonianism, was based around four main principles: collective security, under the leadership of the United States, in an international governing body; self-determination of oppressed people; respect for the territorial integrity of nations; and the abolition of secret treaties. Following the Second World War, Wilson’s efforts and policies culminated in America’s help establishing, and joining, the United Nations in 1945. During the first General Assembly session of the U.N., President Truman credited Wilson for his previous attempts at establishing an international governing body, telling delegates that they “have given reality to the ideal of…Woodrow Wilson.” [17]

Wilsonian idealism has remained a key component of mid-to-late 20th and 21st century international relations. Collective security emerged as a forefront policy issue during the Cold War, and has remained the most essential factor in current foreign relations. The U.S. responded to the Soviet threat in Europe and North America by joining forces in the North Atlantic Treaty Organization, which “furnished the guarantee of European security that continental nations, particularly France, had been seeking since 1919.” [18] More recently, the ideology behind collective security has resulted in a shift in tactics from multilateral to primarily unilateral. In 1999, Russia and China prevented U.N. intervention against human rights violations in Kosovo. In response, the United States sent the American military into the Balkan region.
As evident in past and present international relations, Wilsonian ideals remain highly incorporated in American policies—indicating that Wilson’s Western Tour was, in fact, successful within a larger context. The
American people have finally taken heed of Wilson’s words, spoken almost a century ago; as we move into the future, Wilsonianism will continue to play a key role in our foreign policy.

Author’s Note: I am indebted to Professor Jeffrey Brooks and to Amy Breakwell for all their help with this paper; your course has taught me a great deal about writing and history. Thank you very much.

References
[1] The United Nations, a modern embodiment of the League of Nations, was established in April 1945.
[2] Woodrow Wilson Foundation, Princeton University, The Papers of Woodrow Wilson, Volume 63, September 4-November 5, 1919, Arthur S. Link, John E. Little, Manfred F. Boemeke, L. Kathleen Amon, Phyllis Marchand, compilers (Princeton: Princeton University Press, 1990). “An After-Dinner Speech in Los Angeles,” 403.
[3] Link, compiler, “An Address in the Marlow Theater in Helena,” 187.
[4] Link, compiler, “A Luncheon Address in San Francisco,” 341.
[5] Link, compiler, “An Address in the Minneapolis Armory,” 137.
[6] Link, compiler, “An Address in the Indianapolis Coliseum,” 25.
[7] Link, compiler, “An Address in the Minneapolis Armory,” 133.
[8] Link, compiler, “An Address in the San Francisco Civic Auditorium,” 332.
[9] Robert Friedheim, The Seattle General Strike (Seattle: University of Washington, 1964), 75.
[10] Link, compiler, “An Address in the Des Moines Coliseum,” 78.
[11] Link, compiler, “An Address in the St. Paul Auditorium,” 146.
[12] Alan Sharp, The Versailles Settlement: Peacemaking in Paris, 1919 (London: Macmillan Education Ltd, 1991), 64.
[13] Link, compiler, “An Address in the Indianapolis Coliseum,” 22.
[14] “Johnson Assails Allied Duplicity,” The New York Times, September 12, 1919.
[15] Link, compiler, “An Address in the Spokane Armory,” 227.
[16] “Chicago Cheers Senate Radicals,” The New York Times, September 10, 1919.
[17] Harry S. Truman, Public Papers of the Presidents of the United States: Harry S. Truman, 1945 (Washington, DC: Government Printing Office, 1961), 144.
[18] Frank Ninkovich, The Wilsonian Century (Chicago: The University of Chicago Press, 1999), 164.
Accessibility and Affordability: Redefining Care through the Refugee Experience
Anna Wherry, Class of 2014
Anthropology, Public Health

As Congress debates healthcare reform, one question becomes crucial: how exactly is quality healthcare defined—by the advancement of medical technology, or by accessibility and affordability? Defining quality healthcare is a difficult task, and to do so, it is important to consider healthcare within a global context. Only through a comparison of healthcare systems in both developed and underdeveloped nations can the U.S. system be analyzed accurately. Through this comparison, one can begin to understand the factors that comprise a good healthcare system, one that ultimately leads to the best outcomes possible. Certain groups of people, like refugees who have experienced care in underdeveloped countries, can provide a window for evaluating health care in this country.

Through personal interviews with 50 refugees, this paper compares the way in which the healthcare system in the United States is perceived relative to the care in each refugee’s home country. The refugees involved in this survey represent the following nations: Bhutan, Burma, Nepal, Iraq, Iran, Liberia, Somalia, Congo, Cameroon, Angola, Burundi, Bosnia, Egypt, Ethiopia, Eritrea, and Togo. The interviewees reflected upon the following aspects of healthcare: the availability of doctors, the quality of health care received, the costs of health services, and their overall experience with each system, both in their native country and in the United States. Ultimately, how do barriers to receiving proper care differ between the U.S. and these nations? The most valuable insight did not come from the answers to the standardized questions, as was initially expected. Rather, each individual story and personal
experience provided unique insight into the fundamental problems with access to healthcare facing refugees. Through listening to their stories, one can begin to make out the true definition of quality care.

Surveys were conducted at Highlandtown Community Health Center in Baltimore, Maryland. Highlandtown is a federally qualified health center located in a medically underserved area. Here, residents without insurance can receive health care at a steeply discounted rate. For example, if an office visit normally costs $100.00, a patient who qualifies for the sliding fee program could pay as little as $10.00. Community health centers such as this are among the few places where people without health insurance can be seen regularly for preventive and routine care. Each year, this health center receives an average of 375 new refugee patients from all over the world. The doctors and staff at Highlandtown Health Center provide physical exams to newly resettled refugees through a contract with the Maryland Office for New Americans. The center has cared for approximately 3,000 patients since the contract began seven years ago. Today, 1,490 refugees receive primary health care at the center. Patients also come to Highlandtown because the center offers other services that are not found in a typical doctor’s office. Because many of the patients are new immigrants or refugees, the center has built an international services program. The program’s staff assists patients in many ways, including interpreting at their appointments, coordinating referrals, and answering insurance questions.

The interviews began with customer service questions: the friendliness of the doctors and staff, how long patients waited to see their doctor, and whether they would recommend their doctor to a friend. The refugees were then asked in-depth questions contrasting their health care now with that of their home country. It was initially assumed that refugees would be more satisfied with the U.S. health system, since many were coming from countries that did not have properly trained doctors, suitable medical equipment, or an adequate supply of prescription drugs. It was also believed that the level of satisfaction would increase the longer each refugee had lived in America; they would have more time to find jobs with health insurance and to become accustomed to the U.S. health care system. Surprisingly, this was not always the case.

The Sharathi* family arrived at Highlandtown for their first doctor’s appointment, having arrived as refugees from Bhutan only one month prior. For 22-year-old Dorji Sharathi* and his two sisters, ages 26 and 33, this experience was monumental; it was not only their first doctor’s appointment in the U.S., but their first doctor’s appointment ever. For the past 17 years they had lived in a refugee camp in neighboring Nepal. According to the Sharathis, if people in the camp became ill, they could not go to a doctor. Instead, they had to wait in long lines to receive over-the-counter medications. The siblings spoke about the emotional trauma of living in a refugee camp. Dorji stated, “There was absolutely no privacy; you lived and slept in a tent with a hundred other people. The camp had no sanitary conditions and human waste often infected the drinking water. Worst of all, the landless, homeless, and jobless conditions of the refugees caused many people to go crazy and often commit suicide.” Dorji and his sisters were powerless as they watched both of their parents suffer from illness. Sadly, both their mother and father died of disease while living in the camp. The Sharathis were extremely grateful for the help they had received since arriving in the U.S. Their humble attitude characterized the other Bhutanese families that were interviewed.
Many patients made statements like “Everything is good here and everyone is kind to us.” They were very satisfied with the health center, the staff, and the doctors. When asked about the care in the U.S. compared to that in Bhutan, each patient had a similar answer: the two countries cannot be compared, because in Bhutan, there was no health care. Several patients from Somalia, who had also spent many years in refugee camps, had similar responses to those from Bhutan. One refugee woman from Somalia said that it was very difficult and expensive to see a doctor in her native country. Another man from Somalia commented on how the government discouraged citizens from seeking medical care unless it was absolutely necessary. They too spent many years in refugee camps similar to the ones in Nepal; both camps had unsanitary conditions with overwhelming disease. One patient from Cameroon said that only well-connected or wealthy people could afford to see a doctor. Another family from Nepal stated that in order to have a medical procedure, one had to travel to another country, a privilege few citizens can afford. However, not all responses were as positive as those of the initial interviews. Refugees from Eastern Europe and a number of African countries responded quite differently and tended to be less complimentary of health care in the United States. These patients had usually been in the U.S. for a longer period of time and had not stayed in
a refugee camp for nearly as long as the Bhutanese or Somali patients. Lena Tarasov*, a refugee from Bosnia, has been in the United States for seven years. She feels very strongly that health care in her home country is far better than what she has experienced in the U.S. Lena shared, “In my country, you had a free health care system. The doctors were good and everyone was taken care of without question. You could also get care from a specialist, such as a radiologist, which is very hard to achieve here without health insurance.” She went on to explain how breast cancer runs in her family, but her American insurance company said she was below the age allowed for a mammogram. If she wanted a mammogram, she would have to pay for it herself. According to Lena, “This would not happen in my country.” A 62-year-old refugee from the Congo, who had been in the United States for 8 years, also insists that his previous health care system was much better. He stated, “I could see the doctor when I wanted and even though I paid up front, it was much cheaper.” This patient said he did not have health insurance and that, if not for the Highlandtown Health Center, he would not get medical care. A 34-year-old female patient, also from the Congo, who has been in the United States for 7 years, feels she has paid too much for health care. When she was in the Congo, she paid in cash, but it was very cheap. Here in America, she cannot afford health insurance because of the expensive rates. Amira Hayat*, formerly a pediatrician in Iraq, has only been in the United States a few months, but appeared somewhat skeptical of the American health care system. Her past experience as a pediatrician gave her a unique lens for comparing healthcare in Iraq with that of the U.S. Amira liked the system of free public care that was available in her home country much better than the expensive care offered in America.
She stated, “Everyone, no matter what income, was able to receive medical care in Iraq. There was also the option to pay for a private clinic if you had the money.” It was found that, even if refugee patients were not satisfied with health care in the United States, they made an exception for the Highlandtown Health Center. As patients at Highlandtown, they were happy with the level of customer service and care they received. Some patients went as far as saying they would not have health care at all if not for the center. The refugee patients were mostly unhappy with their inability to see specialists. For instance, if a Highlandtown doctor referred them to a cardiologist or a dermatologist, they knew they couldn't afford to go and would not even make the appointment. When comparing the responses of the 50 interviews, two interesting patterns emerged. First, refugee patients who had experienced many years in a camp with no health care were very satisfied with the U.S. health system. Second, many of the patients who had been in the U.S. for over one year had had some form of health care before they came; fully half of these viewed health care as having been better in their home country. Furthermore, the dissatisfaction seems to rise over time. Six out of seven of those here for over five years were less satisfied with health care in the United States than with that of their home country. Their answers implicated a single common factor: affordability.
“Of all the forms of inequity, injustice in health care is the most shocking and inhumane.”
– Martin Luther King, Jr.
David Mbeya, manager of International Services at Highlandtown, and Abdalla Siyaad, case manager for Somali refugees, provided insight into this trend. David and Abdalla each immigrated to the U.S. several years ago and have firsthand knowledge of the health care barriers refugees face. They explained that refugees are initially enrolled in Medicaid, a government insurance program. However, this benefit ends after 8 months, or earlier if a refugee finds employment. As with millions of other Americans, many refugees accept employment that does not provide private insurance. According to David and Abdalla, refugee patients are often confused when their Medicaid ends. Tremendous effort goes into convincing refugee patients to continue returning for care. Abdalla commented, “It is important that new refugees have routine doctor visits and adopt this as part of their new life in America.” He says many come to the United States with prior illnesses that require ongoing care. The international staff works hard to obtain specialty appointments for patients while they are still covered by Medicaid. David adds that the sliding fee program allows patients to receive affordable primary health care; however, it is still challenging to see a specialist or have a procedure. Unfortunately, even with the sliding fee program, many refugees do not return for primary care after they lose Medicaid. Seventy-five percent of refugee patients being seen at the health center are still in their initial period of enrollment in Medicaid. Of the remaining 25%, half are using the sliding fee and half have found employment that offers commercial insurance. It was startling to find that some refugees thought they received the same or better quality health care in their home country. The basis of this perception may lie in their definition of quality health care. To them, quality was a matter of simplicity, accessibility, and affordability.
Once in the United States, refugees face the same healthcare barriers as do native-born Americans.
Author's Note: When I began this project, I expected to learn a great deal, but I was not prepared for the emotional impact of this experience. The patients shared extraordinary stories with me about the atrocities that happened to their neighbors, friends, and family members. Two women from the Congo told me tales of the rebel soldiers inflicting fear in their country. Another young Iranian man was persecuted for being part of the Baha'i faith. The Bhutanese and Somali refugees told me unforgettable stories about living life for nearly two decades in refugee camps and losing family members to disease and mental illness. They shared their culture, politics, religion, and important life lessons in courage and surviving hardship. It was humbling to meet each one of these refugees, and I am grateful they allowed themselves to be a window into the suffering in their countries. It was also humbling to realize the limitations of my own country's health care system for anyone who lacks the luxury of health insurance.
Although not a scientific sample, these 50 interviews revealed a number of interesting results:
- Length of time in the United States often led to decreasing satisfaction; for some, the longer they were here, the less satisfied they became.
- Of those who had been here longer than one year, 50% believed health care was as good or better in their home country.
- Refugee satisfaction was often low if these patients previously had access to health care in their home country.
- Whether just arriving in the U.S. or here for several years, refugees were largely satisfied with the Highlandtown Health Center.
- Seeing a specialist or obtaining a procedure presented greater financial challenges for uninsured patients.
- Refugee patients base quality of care on affordability and access, not on factors such as physician training, safety, and advanced medical technology.
- Health insurance coverage appears to have a direct impact on refugee satisfaction with the U.S. health care system.
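The tenure pattern in these findings reduces to a simple cohort tally. As a minimal illustrative sketch (the records below are synthetic stand-ins shaped like the reported responses, not the actual interview data, and the field names are invented):

```python
# Tally preference for home-country care by length of U.S. residence.
# Synthetic example records; fields and values are hypothetical.
records = [
    {"years_in_us": 0.1, "prefers_home_system": False},  # recent arrival
    {"years_in_us": 2,   "prefers_home_system": True},
    {"years_in_us": 6,   "prefers_home_system": True},
    {"years_in_us": 7,   "prefers_home_system": True},
    {"years_in_us": 8,   "prefers_home_system": False},
]

def satisfaction_by_tenure(records, min_years):
    """Return (n_preferring_home_care, n_total) among respondents
    who have been in the U.S. at least min_years years."""
    cohort = [r for r in records if r["years_in_us"] >= min_years]
    preferring_home = [r for r in cohort if r["prefers_home_system"]]
    return len(preferring_home), len(cohort)

preferring, total = satisfaction_by_tenure(records, min_years=5)
print(f"{preferring} of {total} respondents here 5+ years preferred their home system")
```

With the study's actual 50 responses, a tally of exactly this kind is what yields figures such as the "six out of seven of those here for over five years" reported above.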
*Names of the interviewees have been changed.
science & engineering reports
Production of a p63 Reporter to Interrogate the Self-Renewal of Prostate Stem Cells
Camille Soroudi
Dr. Owen N. Witte
Department of Microbiology, Immunology, and Molecular Genetics

Abstract
Prostate cancer is thought to be initiated by a small population of basal cells with stem-like qualities. Therefore, studying the mechanisms that regulate prostate stem cells (PSCs) is crucial for understanding this disease. Transcription factor p63 is expressed in prostate basal cells and is required for formation of the prostate gland. Prostate cells grown in Matrigel give rise to spheroid structures called prostate spheres, which represent a model to interrogate PSC regulation. It has been demonstrated that peripheral cells of these prostate spheres are enriched for self-renewal and, in some cases, selectively express p63. We hypothesize that p63+ cells are responsible for self-renewal and multilineage differentiation in growing prostate spheres and could represent or include the PSC compartment. To investigate the self-renewal capacity of p63+ cells, we used cloning techniques to develop a lentiviral vector in which green fluorescent protein (GFP) was under the regulatory control of the p63 promoter. Prostate cells growing in Matrigel were infected with this lentivirus, and fluorescence-activated cell sorting (FACS) was used to separate GFP+ and GFP- populations to compare their capacities for self-renewal. We will use real-time polymerase chain reaction and immunohistochemistry to confirm that these GFP+/- populations represent p63+/- cells, respectively. This reporter enables us to interrogate the mechanisms of PSCs and their role in prostate cancer development.
Introduction
Prostate cancer is the most frequently diagnosed form of cancer in American males (1). The disease is often treated by organ removal, which is non-ideal because the erectile nerves can be damaged (2). Other conventional treatments include chemoradiotherapy and hormone therapy to target actively cycling cells in the prostate (3). Patients who undergo these treatments often show development of secondary neoplasms years later, which indicates the existence of therapy-resistant tumorigenic cells (3). This ability to generate new tissue is characteristic of stem cells, which are defined by their capacity for self-renewal and multilineage differentiation (6). Functional similarities between stem cells and prostate cancer suggest that the disease is driven by a small population of prostate cancer cells with stem cell-like qualities (6). By studying prostate stem cells, we hope to better understand their ability to promote or hinder cancer cell proliferation. The prostate sits at the junction of the bladder and the urethra and is composed of three types of epithelial cells: luminal, basal, and neuroendocrine (2). Current data suggest that prostate stem cells are localized within the basal compartment due to their relative androgen independence and their abundance after castration (4, 5). Purified basal populations, identified by expression of a combination of surface antigens known as CD45-Ter119-CD31- (lin-)Sca1+CD49f+, have demonstrated increased prostatic tubule structure formation over luminal and stromal cells when grown as renal grafts (5). When transduced with three oncogenes, AKT, ERG, and AR, basal cells have also demonstrated a capacity to give rise to cancerous lesions similar to those found in human prostate cancer samples (10). These assays support the hypothesized location of
stem cells within the basal compartment, necessitating enhanced purification of this stem cell population for further investigation. Several transcription factors, including p53 (known to be critical for coordinating the cell cycle with apoptosis), have been implicated in the development of cancer. Transcription factor p63, a homolog of p53, is of particular interest because it is known to be critical in the proper development of murine epithelial tissue and in the maintenance of adult prostate tissue (7). This suggests cells expressing transcription factor p63 could define or include the prostate stem cell compartment. Moreover, it has been demonstrated that murine prostate cells can give rise to daughter spheres in a 3D matrix and can be hormonally induced to show a hierarchy of cell differentiation (6). Cells in the interior
Figure 1. Immunohistochemical (IHC) stain demonstrating p63 expression is exclusive to peripheral prostate cells.
of these spheres have been shown to be deficient in self-renewal, while the most exterior cells in these structures demonstrate enrichment for self-renewal and multi-lineage differentiation (6). p63 expression has been shown by immunohistochemistry to be exclusive to the exterior-most cells of these spheres (6, Fig 1). These data further support the hypothesis that p63-expressing cells are those responsible for self-renewal and expansion in the prostate. However, this has not been conclusively demonstrated. In order to confirm the self-renewal activity of p63-expressing cells in growing prostate spheres, we report a method by which we selectively isolated p63-expressing cells from growing spheres to test their ability to give rise to new spheres. Confirming the stem-like qualities of these cells will hopefully give rise to several further functional investigations of prostate stem cells.

Materials and Methods

Construction and preparation of lentiviral vectors
Vector pBabe Neo eCFP was a gift from Bill Lowry, Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles (Fig 2A). This vector was used to amplify the 1.8 kb p63 promoter region by PCR using primers p63pro.PacI.For and p63pro.BamHI.Rev (Table 1). The 1.8 kb fragment was cloned into the FUGW vector that was prepared by double digestion with BamHI and PacI restriction enzymes. The resulting vector is referred to as FpGW (Fig 2B). FpGW was subsequently digested with PacI and XbaI to generate a 2.5 kb fragment containing the p63 promoter and green fluorescent protein (GFP). The 2.5 kb fragment was directionally cloned into the FU-CRW vector that was cut with PacI and XbaI to remove the 1.3 kb 5' untranslated region human ubiquitin fragment. The resulting vector is referred to as FpG-CRW (Fig 2C).
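A cloning scheme like this one depends on each fragment carrying the intended restriction sites and no stray internal ones. The snippet below is an illustrative sanity check of that kind: the recognition sequences are the standard ones for these enzymes, but the fragment is a short invented stand-in, not the actual 1.8 kb p63 promoter sequence.

```python
# Check a designed fragment for the restriction sites used in the cloning.
# Standard recognition sequences: PacI = TTAATTAA, BamHI = GGATCC,
# XbaI = TCTAGA. The fragment below is a made-up stand-in sequence.
SITES = {"PacI": "TTAATTAA", "BamHI": "GGATCC", "XbaI": "TCTAGA"}

def find_sites(seq, enzyme):
    """Return 0-based start positions of the enzyme's recognition site."""
    site, hits, i = SITES[enzyme], [], 0
    while (i := seq.find(site, i)) != -1:
        hits.append(i)
        i += 1
    return hits

# Toy PCR product: PacI site on the 5' primer tail, BamHI on the 3' tail.
fragment = "TTAATTAA" + "ACGTGCATGCCTAGGTTCA" * 3 + "GGATCC"

print(find_sites(fragment, "PacI"))   # single site at the 5' end
print(find_sites(fragment, "BamHI"))  # single site at the 3' end
print(find_sites(fragment, "XbaI"))   # no internal XbaI site
```

A check of this kind matters because an unexpected internal PacI, BamHI, or XbaI site would cut the promoter + GFP fragment itself during the digests described above.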
Figure 2. (A) Diagram of vector pBabe Neo eCFP. (B) Diagram of novel vector FpGW. (C) Diagram of novel vector FpG-CRW.
Table 1: Primers used to amplify the p63 promoter region by PCR
Preparation of lentivirus and infection of prostate cells
Lentivirus was prepared as previously reported (9). Dissociated prostate cells were prepared from 6- to 10-week-old mice and were infected by the centrifugation method (9).

Prostate sphere-forming assay
Dissociated prostate cells were suspended in 1:1 Matrigel/prostate epithelial growth medium (PrEGM) in a total volume of 100 μl. Each sample was plated around the rim of a well in a 24-well plate and allowed to solidify for 1 hr before 1 ml of PrEGM was added. The media was refreshed every three days and spheres were allowed to grow for 5-7 days. For passaging of spheres, the media was aspirated and the Matrigel was digested by incubation in dispase solution. Digested cultures were pelleted, resuspended in 0.05% Trypsin/EDTA, and incubated at 37°C for 10 min. Cultures were washed once with PBS and resuspended in PrEGM for replating or for FACS analysis.

Fluorescence-activated cell sorting (FACS) and analysis
To prepare prostate cells for FACS, cells were dissociated as previously described and suspended in PrEGM (9). FACS analysis was performed using the BD FACSAria II Special Order System and BD FACSDiva software (BD Biosciences).

Results
We produced retrovirus from vector pBabe Neo eCFP by transfecting 293T cells growing in culture and used the virus to infect primary murine prostate cells. We plated the infected cells in Matrigel for 5-7 days to allow sphere formation, then dissociated the cells and prepared them for FACS to isolate CFP+ from CFP- cells. Due to technical limitations, we were unable to reliably detect cyan fluorescence; therefore, we used cloning techniques to develop a virus that produced green fluorescence in place of cyan fluorescence. Green fluorescence is easily detectable by available equipment, which facilitated our investigation of p63 expression.
To develop our new construct, we amplified the p63 promoter by PCR from the pBabe Neo eCFP vector, incorporating PacI and BamHI restriction sites at the start and end of the 1.8 kb fragment, respectively. We cut the previously constructed vector FUGW with PacI and XbaI to remove the constitutive ubiquitin promoter in order to reduce false-positive detection of fluorescence. We then cloned the 1.8 kb p63 promoter into the PacI and BamHI sites, generating the novel vector FpGW, which maintains expression of GFP under the control of the p63 promoter. 293T cells were transfected with FpGW and the helper vectors mdl, vsvg, and rev, which encode the viral proteins necessary for virus production. Lentivirus was collected and used to infect dissociated prostate cells. 293T cells in culture do not express p63; thus, we lacked a method to determine our viral titer. In order to resolve this issue, we constructed a novel vector that constitutively expressed fluorescence by which we could titer the virus produced. To develop this construct, we cloned the p63 promoter and GFP regions into the previously constructed vector FU-CRW. Restriction enzymes PacI and XbaI were used to excise the 2.5 kb p63 promoter + GFP fragment from FpGW. The ubiquitin promoter was likewise removed from FU-CRW by digestion with PacI and XbaI. The resulting linear fragment, termed FCRW, was ligated with the
2.5 kb p63 promoter + GFP fragment to generate the novel vector FpG-CRW. FpG-CRW retained GFP expression under the regulation of the p63 promoter and also expressed constitutive RFP under the control of the CMV promoter. Due to this constitutive red fluorescence, we were able to determine that the titer of the lentivirus produced was 2 × 10^8 infectious units/mL. This titer demonstrates that we successfully produced lentivirus that contains a reporter of p63, an assumption we will confirm by quantitative PCR of cells infected with this virus.

Discussion
We used fluorescence to detect p63 expression in growing prostate spheres and FACS to isolate p63+ cells for further investigation. We have successfully produced virus from our reporter and can now use that reporter in prostate sphere assays, which will either support or refute the prediction that p63+ cells define or include the prostate stem cells. The virus produced from FpG-CRW facilitates our investigation of the self-renewal of p63+ cells by allowing immediate detection of red fluorescence upon transfection and allowing p63+ cells to be detected in a single sort by FACS. We will sort cells infected with this virus to isolate three populations: RFP-/GFP- cells, which represent uninfected cells; RFP+/GFP- cells, which represent p63- cells; and RFP+/GFP+ cells, which represent p63+ cells. We will separate these three populations and compare their capacities for self-renewal in a sphere assay. The results of this assay will allow us to investigate the pathways and genes that characterize p63+ and p63- populations. These studies will enhance our understanding of prostate stem cells.

References
(1) "2009 Estimated US Cancer Cases." American Cancer Society. 2009. <http://www.cancer.org/index>.
(2) "Prostatectomy (Surgery)." Prostate Cancer Foundation. 2009. <http://www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4/Home.htm>.
(3) Shen MM, Wang X, Economides KD, Walker D, Abate-Shen C. Progenitor cells for the prostate epithelium: roles in development, regeneration, and cancer. Cold Spring Harb Symp Quant Biol. 2008:529-538.
(4) Longo DL, Chabner B. Cancer Chemotherapy and Biotherapy: Principles and Practice. 4th ed. Lippincott Williams & Wilkins; 2005.
(5) Goldstein AS, Lawson DA, Cheng D, Sun W, Garraway IP, Witte ON. Trop2 identifies a subpopulation of murine and human prostate basal cells with stem cell characteristics. Proc Natl Acad Sci U S A. 2008;105:20882-20887.
(6) Xin L, Lukacs RU, Lawson DA, Cheng D, Witte ON. Self-renewal and multilineage differentiation in vitro from murine prostate stem cells. Stem Cells. 2007;25:2760-9.
(7) Laurikkala J, Mikkola ML, James M, Tummers M, Mills AA, Thesleff I. p63 regulates multiple signalling pathways required for ectodermal organogenesis and differentiation. Development. 2006;133:1553-1563.
(8) Lawson DA, Xin L, Lukacs RU, Cheng D, Witte ON. Isolation and functional characterization of murine prostate stem cells. Proc Natl Acad Sci U S A. 2007;104:181-6.
(9) Xin L, Ide H, Kim Y, Dubey P, Witte ON. Proc Natl Acad Sci U S A. 2003;100(Suppl 1):11896-11903.
(10) Goldstein AS, Huang J, Guo C, Garraway IP, Witte ON. Identification of a cell of origin for human prostate cancer. Science. 2010;329:568-571.
Quantitative Analysis of Extinct and Extant Crocodyliform Dental Morphology to Understand Underlying Evolutionary Patterns
Jessica L. Noviello
Johns Hopkins University

Abstract
Crocodiles have existed on Earth for millions of years, and therefore quantitative analyses of their morphology can yield a better understanding of evolutionary processes. In this study, approximately 940 teeth from extinct Malagasy taxa and extant South American species were measured and analyzed to study evolutionary pathways among crocodylians. Each tooth was individually photographed and measured; the measurements were converted to size-independent ratios and analyzed statistically. The results suggest that dietary adaptations are a greater factor in dental morphology than are the familial relationships among members of Crocodylia. This methodology could be extended to other animal groups in order to better understand evolution and its implications.
Introduction
Evolution is the process by which species adapt to their surroundings, and the evolutionary history of long-extinct animals can be revealed by careful analysis of the fossil record. Because teeth are the hardest and densest parts of skeletons, there is an abundance of teeth that have been fossilized. In addition, dental morphology is affected by both phylogenetic and functional factors, and can therefore yield insights into the evolutionary relationships and life history strategies (e.g., diet) among species. Crocodiles are an ideal group for this type of research because they are one of the few large animal taxa to have survived the massive Cretaceous-Tertiary (K-T) extinction event. They also enjoy a wide range across both time and geography, having changed little over 70 million years. The resulting abundance of teeth, coupled with their continued prominence in certain geographic areas, puts them in a unique position to act as a window deep into the past. The purposes of this study are to determine whether variation in crocodylian teeth can be attributed to taxonomy, and to determine the underlying reasons for any measured differences in dental morphology using a living population. The first stage of this study focused on teeth from six distinct crocodyliform taxa recovered from the Maevarano Formation of northwestern Madagascar, a rock unit formed in the Maastrichtian stage of the Upper Cretaceous (Krause et al., 2006). The second stage of this study focused on the extant crocodylians of South America, because their level of biodiversity most closely matches that of Late Cretaceous Madagascar. Of the seven species studied, six belonged to the subfamily Caimaninae, which minimized the effects of phylogenetic evolution. The diets of these crocodylians are well documented and highly varied, suggesting that if variation in dental morphology is observed in the extant population, it is more likely a result of diet.

This study produced a method for differentiating species of extinct crocodyliforms using dental morphology, the use of which has widespread paleontological and biological implications. The method can be applied to isolated teeth in the fossil record to identify taxa even in the absence of other skeletal elements, such as skulls or vertebrae.

Methods

Jaws and teeth of at least six crocodyliform taxa have been recovered and identified from the Maevarano Formation since 1993 (Krause et al., 2006). Simosuchus clarki was excluded from this study (Buckley et al., 2000). The five crocodyliform taxa investigated here all have conical teeth: Mahajangasuchus insignis; Araripesuchus tsangatsangana (Turner, 2006); Miadanasuchus oblita (Simons and Buckley, 2009); and two other unnamed crocodyliforms, referred to as Longirostrine and Neosuchian, with distinct dental morphologies (Krause et al., 2006). For the extant sample, a total of 10 skulls of seven different species were borrowed from the American Museum of Natural History's Herpetology Department. Those species are Paleosuchus palpebrosus, P. trigonatus, Caiman latirostris, C. crocodilus, C. yacare, Melanosuchus niger, and Crocodylus acutus. A total of 940 teeth were analyzed from the extinct and extant samples in this study. The methods employed by Smith (2005) to assess dental variability in the teeth of theropod dinosaurs were modified for use on the available samples of extant and extinct crocodyliforms. Digital pictures were taken of each tooth in two views using a Zeiss Axiovision V.12 microscopic camera. The images were measured using Zeiss Axiovision software, and a total of five linear measurements from the lingual view and three from the mesial view were collected (Figure 1). All measurements were taken in micrometers.
To account for differences in absolute tooth size between taxa and the variation of teeth within each jaw, six ratios and four angles were computed and analyzed. These ratios and angles are the following: Distal Height (DH)/Crown Base Length (CBL), Crown Base Length/Crown Base Width (CBW), Mid Crown Length (MCL)/¼ Crown Length (QCL), Mid Crown Length/Crown Base Length, ¼ Crown Length/Crown Base Length, Distal Height/¼ Crown Length, Mesial tip (angle A), Mesial angle (angle B), Lingual angle (angle C), and Mesial base angle (angle D). The data were then analyzed using discriminant function analyses (DFA) in the statistical software SPSS. The methodology was first applied to the extinct sample and was then extended, in multiple analyses, to the in situ teeth of the extant sample. Crocodylus acutus, although not a caiman, was included in the extant DFA as the outgroup.
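The size-normalization step above can be sketched as follows. The five linear measurements are those named in the text (DH, CBL, CBW, MCL, QCL); the micrometer values here are invented for a single hypothetical tooth, not data from the study:

```python
# Compute the six size-normalized shape ratios from the five linear
# tooth measurements (in micrometers). Values below are made up.
def tooth_ratios(m):
    """Return the six ratios used to compare tooth shape across taxa."""
    return {
        "DH/CBL":  m["DH"]  / m["CBL"],   # crown height vs. base length
        "CBL/CBW": m["CBL"] / m["CBW"],   # base elongation
        "MCL/QCL": m["MCL"] / m["QCL"],   # taper between crown levels
        "MCL/CBL": m["MCL"] / m["CBL"],
        "QCL/CBL": m["QCL"] / m["CBL"],
        "DH/QCL":  m["DH"]  / m["QCL"],
    }

# Hypothetical measurements for one tooth (micrometers).
tooth = {"DH": 4200.0, "CBL": 2100.0, "CBW": 1750.0,
         "MCL": 1600.0, "QCL": 1200.0}
ratios = tooth_ratios(tooth)
print(ratios["DH/CBL"])  # crown twice as tall as its base is long
```

In the study, these ratios (together with the four angles) were the inputs to discriminant function analyses in SPSS; any standard discriminant-analysis implementation could consume the same per-tooth table.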
The x-axis discriminating variable was most closely correlated with CH/QCL and the y-axis discriminating variable was most closely correlated with the radian measurement of angle A. The second analysis included Paleosuchus trigonatus, Caiman crocodilus, C. yacare, and C. latirostris (Figure 4). In this analysis, there was only one distinct group. In the third analysis, Caiman crocodilus, C. yacare, Melanosuchus niger, and Caiman latirostris were included (Figure 5). Caiman crocodilus and C. yacare were included as outgroups. The fourth analysis examined the dental morphology of the Paleosuchus genus (Figure 6) and included C. crocodilus as an outgroup. The two clusters do not show the two Paleosuchus species grouping with each other; instead, the overlap occurs between P. trigonatus and C. crocodilus.
Results

Tooth morphology of extinct sample
The analysis of the extinct sample included all taxa to test whether variation in dental morphology reflects phylogenetic influence (Figure 2). To reduce overlap, the serrated teeth of Miadanasuchus oblita were excluded from the second analysis. In the second analysis run on the extinct sample, 89.4% of cases were correctly classified when cross-validated. For both analyses, the x-axis discriminating variable was most closely correlated with CBL/CBW and the y-axis discriminating variable was most closely correlated with MCL/QCL.
Tooth morphology of extant sample
The first analysis of the extant sample tested whether variation in dental morphology reflects phylogenetic influence (Figure 3). Rather than forming a cluster for each separate species, the data fall into two distinct groups along the x-axis.
Discussion
The analyses of the extinct crocodyliform dental morphology were conducted to test if different taxa can be differentiated on the basis of dental morphology using quantitative methods. As shown in
Figure 2, each taxon has its own characteristic dental morphology. When Miadanasuchus oblita is excluded on the basis of its serrated teeth, the differentiation between the four remaining taxa becomes more pronounced. This shows that different taxa can be differentiated on the basis of dental morphology. To determine the cause of the variation, the analyses of the extant crocodylians were conducted. The South American crocodylian fauna is known to feed on a wide variety of prey, including but not limited to fish, invertebrates, small terrestrial vertebrates, and even some snakes (Magnusson et al., 1987; Thorbjarnarson, 1993; Santos, 1996; Da Silveira, 1999; Borteiro, 2009). In the first analysis of extant crocodylians, Crocodylus acutus, rather than forming its own group, clusters with the Caimaninae of similar dietary habits, unlike M. niger and P. palpebrosus, which form their own group separate from the other Caimaninae along the x-axis. Both species are known to eat terrestrial invertebrates, fish, and hard-shelled aquatic prey (Thorbjarnarson, 1993). The variation in dental morphology is best explained by adaptations due to similar feeding habits. In the second analysis, the taxa which regularly ingest small terrestrial invertebrates and aquatic prey were analyzed. The results of this analysis show that phylogeny plays a lesser role in dental morphology variation, which is indicated by a low cross-validation percentage. The third and fourth analyses tested phylogenetics versus diet. In the third analysis, M. niger, which is taxonomically closest to C. latirostris but has a different diet, does not group with C. latirostris, C. crocodilus, or C. yacare. This also suggests that diet may be more influential than phylogenetics in driving the evolution of tooth morphology in extant crocodylians. Finally, the genus Paleosuchus was analyzed. While P. trigonatus eats mainly small terrestrial invertebrates and freshwater shrimp, P.
palpebrosus eats a mainly terrestrial diet of invertebrates supplemented with crabs and fish. The diet of Caiman crocodilus most closely resembles that of P. trigonatus, and the two group closely together. Even within a genus, dental morphology is shown to vary dramatically and may be most closely associated with diet.
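The cross-validation percentages referred to above come from a discriminant-style classification of tooth measurements. As a minimal sketch of the idea (not the study's actual method or data), a leave-one-out, nearest-centroid classifier over hypothetical two-variable tooth measurements produces a "correctly reassigned" percentage of the same kind:

```python
import numpy as np

def loo_nearest_centroid(X, labels):
    """Leave-one-out cross-validation with a nearest-centroid rule.

    X: (n, d) array of tooth measurements (e.g. crown height, basal width).
    labels: length-n sequence of taxon names.
    Returns the fraction of specimens reassigned to their own taxon.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i  # hold specimen i out
        taxa = np.unique(labels[keep])
        # Taxon centroids computed without the held-out tooth
        centroids = {t: X[keep][labels[keep] == t].mean(axis=0) for t in taxa}
        predicted = min(centroids, key=lambda t: np.linalg.norm(X[i] - centroids[t]))
        correct += predicted == labels[i]
    return correct / len(X)

# Hypothetical, well-separated "taxa" with distinct dental "territories"
X = [[10.0, 4.0], [10.5, 4.2], [9.8, 3.9],
     [20.0, 8.0], [19.5, 8.3], [20.4, 7.8]]
labels = ["Taxon A", "Taxon A", "Taxon A", "Taxon B", "Taxon B", "Taxon B"]
print(loo_nearest_centroid(X, labels))  # well-separated groups -> 1.0
```

A high score means dental morphology alone separates the groups; a low score, as in the second analysis above, suggests the grouping variable explains little of the variation.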
Conclusions

The method developed, described, and tested here shows distinct dental variation among the taxa of crocodyliforms. When the method is applied to an extinct sample, the taxa cluster separately from one another and form "territories" of distinct dental morphology characteristics. This is a useful tool for scientists, because it suggests that an isolated tooth of unknown affinity can be accurately attributed to a known taxon in the absence of other skeletal remains. When the methodology is expanded to include an extant sample, the underlying evolutionary processes can be more thoroughly evaluated. In this study, the main factor affecting the level of variation was the diet of the extant animal. From analyzing the relationships among the extant sample, and including the extinct taxa individually as unmarked groups, the diet of the extinct crocodyliforms can be inferred. Suggestions for future research include studying different samples of extant crocodylians as well as expanding this methodology to other reptilian groups to study their evolutionary trends over time. The presence of variation, and the level to which it is seen, can determine whether phylogeny or behavior is more responsible for the differentiation within and between animal groups, and give a better understanding of evolutionary patterns.

References

Borteiro, C., F. Gutierrez, M. Tedros, and F. Kolenc. 2009. Food habits of the broad-snouted caiman (Caiman latirostris: Crocodylia, Alligatoridae) in northwestern Uruguay. Studies on Neotropical Fauna and Environment 44: 31-36.
Buckley, G. A., Brochu, C. A., Krause, D. W., and Pol, D. 2000. A pug-nosed crocodyliform from the Late Cretaceous of Madagascar. Nature 405: 941-944.
Busack, S. D., and Pandya, S. 2001. Geographic variation in Caiman crocodilus and Caiman yacare (Crocodylia: Alligatoridae): systematic and legal implications. Herpetologica 57: 294-312.
Da Silveira, R., and Magnusson, W. E. 1999. Diets of spectacled and black caiman in the Anavilhanas Archipelago, Central Amazonia, Brazil. Journal of Herpetology 33: 181-192.
Krause, D., O'Connor, P. M., Curry-Rogers, K., Sampson, S. D., Buckley, G. A., and Rogers, R. R. 2006. Late Cretaceous terrestrial vertebrates from Madagascar: implications for Latin American biogeography. Annals of the Missouri Botanical Garden 93: 178-208.
Magnusson, W. E., da Silva, E. V., and Lima, A. P. 1987. Diets of Amazonian crocodilians. Journal of Herpetology 22: 85-95.
Magnusson, W. E., and Lima, A. P. 1991. The ecology of a cryptic predator, Paleosuchus trigonatus, in a tropical rainforest. Journal of Herpetology 25: 41-48.
Prasad, G. V. R., and Lapparent de Broin, F. de. 2002. Late Cretaceous crocodile remains from Naskal (India): comparisons and biogeographic affinities. Annales de Paléontologie 88: 19-71.
Santos, S. A. 1996. Diets of C. c. yacare from different habitats in the Brazilian Pantanal. Herpetological Journal 6: 111-117.
Simons, E. L. R., and Buckley, G. A. 2009. New material of "Trematochampsa" oblita (Crocodyliformes, Trematochampsidae) from the Late Cretaceous of Madagascar. Journal of Vertebrate Paleontology 29: 599-604.
Smith, J. B. 2005. Heterodonty in Tyrannosaurus rex: implications for the taxonomic and systematic utility of theropod dentitions. Journal of Vertebrate Paleontology 25: 865-887.
Smith, J. B., Vann, D. R., and Dodson, P. 2005. Dental morphology and variation in theropod dinosaurs: implications for the taxonomic identification of isolated teeth. The Anatomical Record Part A 285: 699-736.
Thorbjarnarson, J. B. 1993. Diet of the spectacled caiman (Caiman crocodilus) in the central Venezuelan llanos. Herpetologica 49: 108-117.
Turner, A. H. 2006. Osteology and phylogeny of a new species of Araripesuchus (Crocodyliformes: Mesoeucrocodylia) from the Late Cretaceous of Madagascar. Historical Biology 18: 255-369.
science & engineering reports
hurj fall 2010: issue 12
NMDA receptor subtype expression in oligodendrocyte development and the effect of Neuregulin on subtype expression in O4 oligodendrocytes linked to patients with schizophrenia

Robert Martin1, Manabu Makinodan1, Paul A. Rosenberg2, Gabriel Corfas2
1 Department of Neuroscience, Johns Hopkins University
2 Department of Neurology and Program in Neuroscience, Children's Hospital and Harvard Medical School
Abstract

At the cellular and molecular level, the human nervous system functions by taking advantage of chemical and electrical properties across the cell membranes of neurons, one of the major constituent cell types of the central nervous system. A type of glial cell called an oligodendrocyte facilitates the conduction of the electrical signal, thus making the flow of information possible. This study found that there are characteristic levels of expression of NMDA receptor subtypes throughout the course of oligodendrocyte development in the central nervous system. These findings support the theory that distinct subunits of the NMDA receptor must be expressed at key stages in normal oligodendrocyte development. It is known from previous studies that patients with schizophrenia have NMDA receptor dysfunction coupled with axon dysmyelination. Thus, dysplasia of oligodendrocytes, whose main function is myelinating axons in the CNS, may have deleterious effects on signal conduction and contribute to the pathogenesis of schizophrenia and other neurodegenerative diseases. A neurotrophic growth factor, Neuregulin 1, plays a role in mediating expression of these subtypes in neurons, yet it is still unknown whether it has the same regulatory effect in oligodendrocytes, despite some preliminary data.

Introduction

In order to facilitate neuronal transmission, oligodendrocytes also express some ion channels. The two most abundant ionotropic glutamate receptors in the CNS are the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor and the N-methyl-D-aspartate (NMDA) receptor. These ionotropic, or ligand-gated, receptors work through a specific molecule called a neurotransmitter, which binds to the receptor and causes a conformational change of the transmembrane protein, opening the channel and allowing ions to pass through.
The AMPA and NMDA receptors have to work together, because the NMDA receptor is voltage-gated in addition to being ligand-gated, meaning it requires both a change in the electric potential of the membrane and a ligand to open. The AMPA receptor is activated by the most abundant excitatory neurotransmitter, glutamate, which binds to it and allows an influx of positively charged sodium ions into the cell, causing the negatively charged intracellular space to become more positively charged, hence the necessary depolarization. Once the membrane is depolarized, the magnesium ion blocking the NMDA receptor is expelled, allowing glutamate and another molecule, glycine, to bind to the receptor. Glycine co-activates the receptor so that it can then admit positively charged calcium ions into the cell, further depolarizing the membrane. There are seven distinct receptor subunits that make up the NMDA receptor: NR1, NR2A, NR2B, NR2C, NR2D, NR3A, and NR3B. Of these subunits, only NR1, NR2A, and NR2B have been studied extensively, primarily in neurons (Zhong, 1995). The NR1 subunit gene is expressed and is specific to glycine, whereas the NR2 family of subunit genes is expressed and is specific to glutamate, hence the receptor's co-activation, or its necessity to bind both glutamate and glycine. The ligand-gated,
ionotropic receptor is also voltage-dependent (Wenzel et al., 1997). It had previously been believed that NMDA receptors were not expressed in oligodendrocytes, until the receptor subtypes were found there in 2005 (Karadottir et al., 2005). This study is interested in the NMDA receptor subtypes' expression over the course of oligodendrocyte development and differentiation, especially in the O4, O1, and MBP stages of oligodendrocytes. It is hypothesized that the various subtypes are upregulated or downregulated over the stages of oligodendrocyte development. In addition, neuregulin-1, a neurotrophic growth factor, or protein that has been found to stimulate the growth of neurons, the differentiation of nerve cells, neuroplasticity, and neural signaling, has been shown to upregulate the subunit NR2C in neurons (Ozaki et al., 1997). Interactions between neuregulin and the receptor tyrosine-protein kinase ERBB4 have also been found to play a role in the pathogenesis of schizophrenia. Based on a study which showed that expression of the NMDA receptor subunit NR2C is induced in neurons during synaptogenesis by the Neuregulin-β isoform (Ozaki et al., 1997), we question whether the NR2C subunit is also induced in oligodendrocytes. Therefore, in this study, preliminary testing on O4-stage oligodendrocytes will be done under the hypothesis that expression of the NR2C subunit, as well as of NR1, NR2A, and NR2B, is modulated and induced by Neuregulin-1.

Methods

Oligodendrocytes were cultured in O1, O4, and MBP medium in six-well plates. One well in each plate was given a 48-hour treatment of 3 nM Neuregulin, and one a 48-hour vehicle treatment of 0.1 mg/mL bovine serum albumin (BSA).
The monoclonal antibodies A2B5, O4, and O1 are frequently used to define distinct stages in the maturation of oligodendrocyte progenitors. In general, A2B5+/O4- defines the oligodendrocyte progenitors, O4+/O1- defines the oligodendrocyte precursors, and O1+ defines the immature oligodendrocytes. Mature oligodendrocytes are cells that express myelin basic protein (MBP), the major protein constituent of the myelin they synthesize.
Real-time qPCR: In order to measure the expression of the NMDAr subtypes, each of their primers had to be tested. High-efficiency primer sequences were found in the literature (Floyd et al., 2003). These primers were tested for their proper annealing temperatures (Figure B) using harvested rat hippocampus and cerebellum: NR2A and NR2B are mostly expressed in the rat hippocampus, whereas NR2C is mostly expressed in the rat cerebellum, and NR1 is widely expressed throughout the brain (Wenzel et al., 1997). The best annealing temperature was found to be 62°C using a simple PCR reaction and analysis with gel electrophoresis. A dilution curve was also made to measure the efficiency of each pair of primers on the real-time apparatus for use in analysis of the mRNA expression. The hippocampus and cerebellum, as well as the cell culture samples, were homogenized using a syringe and purified using an RNeasy kit and spin columns. The mRNA density was then checked using a spectrophotometer to calibrate the reverse transcriptase reaction. A thermocycler and BioRad reverse transcriptase setup was run with iScript reverse transcriptase to make cDNA for use in the real-time analysis.

Figure A: Immunochemical characterization and response to OGD-induced toxicity of OL lineage cells. (A) Stage-specific cultures. Cultures were allowed to differentiate for 0, 2, 6, or 10 days and stained for the OL stage-specific markers A2B5, O4, O1, and MBP, as well as the astrocytic marker GFAP. Percent of immunolabeled cells in the total population and characteristic morphology for each stage are shown. (Scale bar = 50 µm.) (Deng et al., 2003)

Figure B: Primer annealing temperature (62°C)

Table 1: Real-time qPCR high-efficiency primer sequences

Results

Real-time qPCR reactions were run using primer sequences for NR1, NR2A, NR2B, and NR2C, as seen in Table 1, that had been pretested for efficiency and annealing temperature on rat hippocampus and cerebellum tissue.
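Two pieces of standard qPCR arithmetic underlie these measurements: primer (amplification) efficiency is read off the slope of Ct against log10 of the template dilution, with a slope near -3.32 indicating perfect doubling each cycle, and expression relative to a calibrator stage such as O4 is commonly computed with the 2^-ΔΔCt method. A sketch with invented Ct values (illustrative only, not the study's data):

```python
import numpy as np

def primer_efficiency(log10_dilutions, ct_values):
    """Efficiency from a standard (dilution) curve: E = 10**(-1/slope) - 1.
    A slope of about -3.32 means the template doubles every cycle (E ~ 1.0)."""
    slope, _intercept = np.polyfit(log10_dilutions, ct_values, 1)
    return 10.0 ** (-1.0 / slope) - 1.0

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: target-gene expression in one stage relative to a calibrator
    stage (e.g. O4), each normalized to a reference gene's Ct."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Ten-fold dilution series with ideal doubling: Ct rises 3.32 per step
dilutions = [0.0, -1.0, -2.0, -3.0]
cts = [18.0, 21.32, 24.64, 27.96]
print(round(primer_efficiency(dilutions, cts), 2))  # -> 1.0 (100% efficient)

# Hypothetical O1-stage target Ct values against the O4 calibrator
print(relative_expression(24.0, 18.0, 27.0, 18.0))  # -> 8.0
```

The efficiency value from the dilution curve is what makes the Ct-based fold changes comparable across primer pairs.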
Figure 1 shows the relative expression of the respective NMDA receptor subtypes over three stages of oligodendrocyte development (O4, O1, and MBP). The O1 and MBP data have been normalized to the O4 expression. For the NR1 subtype, expression is significantly decreased, by about 90%, from O4 to O1, and is again significantly reduced, by about 68%, from the O1 to the MBP stage. NR2A behaves in a similar manner to NR1, as its expression is reduced from the O4 to O1 and from the O1 to MBP stages. For NR2B and NR2C, a different course of expression emerged. Based on these data, the expression of NR2B shows a relative increase of about 20% from the O4 to O1 stage, and is then significantly reduced from the O1 to the MBP stage, to less than 7% of its expression in O1. NR2C also exhibits high expression levels during the O1 stage. Figure 2 reveals that preliminary results show no significant increase in NMDA receptor subtypes after Neuregulin treatment in O4 cells.

Discussion

The results from this study have revealed the expression of NMDA receptor subtypes in oligodendrocytes over the course of their development. It was found that the NR1 and NR2A subtypes follow similar expression levels, in which they are high in the O4
stage and steadily drop off in the O1 and MBP stages. NR2B and NR2C exhibited different courses of expression: levels of NR2B are high during the O4 and O1 stages but are low in the MBP stage. It is interesting to note the course of NR2C expression, in which the subtype has a characteristic spike during the O1 stage. Normal NMDA receptor subtype expression and distribution is critical, as the NMDA receptor is one of two main receptors activated by glutamate, the most abundant excitatory neurotransmitter. Under normal conditions, when glutamate binds to the receptor
Figure 1: Course of NMDAr subtype expression. Courtesy M. Makinodan

Figure 2: Neuregulin treatment of O4 cells. Courtesy M. Makinodan
at slightly polarized or resting membrane potential, very few ions flow through the channel because the opening is blocked by Mg2+. Under such conditions, the excitatory potential is mediated by the AMPA receptor. However, the AMPA receptor can depolarize the membrane enough to expel the Mg2+ ions from the NMDA pores, thus allowing the NMDA receptor to respond to glutamate and activate second messengers and signaling cascades within the cell. It is important to closely observe oligodendrocytes throughout the course of their development because a multitude of mental health problems, namely schizophrenia, result from a combination of genes and developmental factors relating to NMDA receptor subtype expression. Schizophrenia is known to run in families, which points to the disease's strong genetic basis. However, the disease has also been found in sets of identical twins in which only one twin has the disease (Ratner, 1982). Thus, it is believed that, despite a robust genetic linkage, there are still developmental factors that contribute to the development of the disease, hence the second phase of this experiment, which involves treatment with the neurotrophic growth factor neuregulin. One of the leading hypotheses is that schizophrenia develops in the early stages of the brain's development, significantly before the symptoms of the disease start to appear in the late teens and early twenties (Ho et al., 2003). Another hypothesis tied to the disease is that dysmyelination, or the inability of oligodendrocytes to properly wrap around and support neurons in their signaling processes, is also one of the causes of schizophrenic symptoms. Thus, the results
in our study provide some insight into which receptors, namely the NMDA receptor subtypes, are present in the critical stages of development of oligodendrocytes. It seems that the NR1 and NR2A subunits are present, and possibly necessary, for oligodendrocytes early on in the O4 stage, whereas NR2B and NR2C are upregulated later, in the O1 stage (Figure 1). Thus, if NR1 and NR2A are negatively affected or not expressed at all early on, the consequences may lead to symptoms of schizophrenia. Nevertheless, more work must be done to further investigate this hypothesis. Another dimension added to this study was the effect of neuregulin on each of the three stages. In previous studies, the neuregulin protein was discovered to play a key role in development, namely assisting in shaping the brain and its structures in early development. Mice deprived of neuregulin experienced problems forming their dendritic spines (sites of excitatory synaptic transmission on neurons), which tended to degrade over the course of development (Mueller et al., 2009). The protein is also known to have a role in the differentiation of nerve cells, neuroplasticity, and neural signaling, all of which contribute to the pathogenesis of neurodegenerative diseases (Mueller et al., 2009). Although there were no significant fluctuations in NMDA receptor subtypes in O4 oligodendrocytes, there is still the possibility of neuregulin affecting other stages. Additionally, in order to see the effects of neuregulin, very strict conditions are likely required, involving potassium, glycine, and glutamate added to the cell culture medium, which has yet to be explored at the mRNA level. In the Ozaki experiment, glycine (3 mM) and KCl (20 mM) were added, as these modifications were necessary to observe the neuregulin effects on the NR2C subtype (Ozaki et al., 1997).
Thus, before proceeding with this study, it may be important to take a step back and explore the physiological conditions under which neuregulin has the greatest effect on oligodendrocytes at both the protein and mRNA levels.

References

Deng W, Rosenberg PA, Volpe JJ, Jensen FE (2003) Calcium-permeable AMPA/kainate receptors mediate toxicity and preconditioning by oxygen-glucose deprivation in oligodendrocyte precursors. PNAS 100:6801-6806.
Floyd DW, Jung KY, McCool BA (2003) Chronic ethanol ingestion facilitates N-methyl-D-aspartate receptor function and expression in rat lateral/basolateral amygdala neurons. The Journal of Pharmacology and Experimental Therapeutics 329:3:1020-1029.
Ho BC, et al. (2003) Schizophrenia and other psychotic disorders. In RE Hales, SC Yudofsky, eds., Textbook of Clinical Psychiatry, 4th ed., pp. 379-438. Washington, DC: American Psychiatric Publishing.
Karadottir R, Cavelier P, Bergersen LH, Attwell D (2005) NMDA receptors are expressed in oligodendrocytes and activated in ischaemia. Nature 438:1162-1166.
Micu I, Jiang Q, Coderre E, Ridsdale A, Zhang L, Woulfe J, Yin X, Trapp BD, McRory JE, Rehak R, Zamponi GW, Wang W, Stys PK (2006) NMDA receptors mediate calcium accumulation in myelin during chemical ischaemia. Nature 439:988-992.
Mueller U, et al. (2009) Impaired maturation of dendritic spines without disorganization of cortical cell layers in mice lacking NRG1/ErbB signaling in the central nervous system. Proceedings of the National Academy of Sciences, Feb 17, 2009.
Ozaki M, Sasner M, Yano R, Lu HS, Buonanno A (1997) Neuregulin-β induces expression of an NMDA-receptor subunit. Nature 390:691-694.
Ratner C (1982) Do studies on identical twins prove that schizophrenia is genetically inherited? Int J Soc Psychiatry 28:175-178.
Salter MG, Fern R (2005) NMDA receptors are expressed in developing oligodendrocyte processes and mediate injury. Nature 438:1167-1171.
Ulbrich MH, Isacoff EY (2008) Rules of engagement for NMDA receptor subunits. PNAS 105:37:14163-14168.
Wang C, Pralong WF, Schulz MF, Rougon G, Aubry JM, Pagliusi S, Robert A, Kiss JZ (1996) Functional N-methyl-D-aspartate receptors in O-2A glial precursor cells: a critical role in regulating polysialic acid-neural cell adhesion molecule expression and cell migration. J Cell Biol 135:1565-1581.
Wenzel A, Fritschy JM, Mohler H, Benke D (1997) NMDA receptor heterogeneity during postnatal development of the rat brain: differential expression of the NR2A, NR2B, and NR2C subunit proteins. Journal of Neurochemistry 68:2:469-478.
Xi D, Keeler B, Zhang W, Houle JD, Gao WJ (2009) NMDA receptor subunit expression in GABAergic interneurons in the prefrontal cortex: application of laser microdissection technique. Journal of Neuroscience Methods 176:172-181.
Zhong J, Carrozza DP, Williams K, Pritchett DB, Molinoff PB (1995) Expression of mRNAs encoding subunits of the NMDA receptor in developing rat brain. Journal of Neurochemistry 64:2:531-539.
Aging patterns and sexual dimorphisms in bone remodeling within an English Middle Iron Age population

Trang Ngoc Diem Vu†, Evan Garofalo‡, Heather Garvin‡, Christopher Ruff‡
†Johns Hopkins University
‡Center for Functional Anatomy & Evolution, Johns Hopkins School of Medicine

Abstract

Biological anthropological studies have found that bone loss and sexual dimorphism in bone aging patterns have been increasing in modern populations. These trends have been associated with technological advancement and the resultant decrease in physical activity and its bone-strengthening benefits. This study examined patterns of bone remodeling with age in a pre-industrial population from Bradford, England and compared the findings to a similar previous study of a modern American industrial population. Measurements and molds of leg bones from the archaeological individuals were used with radiographic measurements to generate cross-sectional images from which geometric parameters were derived. The results were examined with linear and quadratic least-squares regressions on age. ANCOVA (analysis of covariance) was used to test for differences between the sexes in the effects of age on the parameters. The quadratic regressions revealed similar aging trends between the sexes of the archaeological population, and none of the ANCOVA values reached significance, indicating low sexual dimorphism in aging trends. The archaeological population also displayed lower percent loss in bone area than the American population. In sum, the Bradford population exhibited low sexual dimorphism in aging and less bone loss than the modern American sample, possibly due to relatively greater levels of physical activity among both sexes. These findings underscore the effect of physical activity on patterns of bone loss due to sex and age.

Introduction
In response to the currently growing impact of osteoporosis and other types of bone loss, some researchers have turned to archaeological records for answers. Biological anthropologists and osteologists analyze the bones of past populations for age-related bone loss patterns, which may reveal potential lifestyle changes or treatments with possible benefit to bone health in today's populations. Studies have found that bone rigidity relative to body size has been dropping exponentially within the last million years [10]. Some studies have confirmed the increasing gracility of the skeleton as people change from hunter-gatherers to agriculturalists to industrialists. Sexual dimorphism has also been found to decrease over generations. Some researchers associate these trends with changes in physical activity: as societies "advance," their people become more sedentary and experience weakening of the bones [8]. As men and women become equal in society, the differences in workload between them also decrease, so their bones lose sexual dimorphism. The effect of physical activity on bone loss is not yet certain, and more archaeological samples are being analyzed to provide evidence. This study examined patterns in bone remodeling with respect to age in a pre-industrial population from Yorkshire, England. These findings were compared to a similar study of a modern Western industrial population. The archaeological population was expected to exhibit lower sexual dimorphism and less bone loss with age than the modern population, due to the archaeological population's active lifestyle. The results confirmed these expectations, indicating low sexual dimorphism in aging patterns and less change in bone structure per decade relative to the modern population, thereby supporting the effects of physical activity on bone health.

Materials and Methods
Measurements and molds were taken from the midshafts of femora and tibiae of sixty individuals, ages 20 to 65. The specimens had been buried during the Middle Iron Age at a cemetery in Wetwang Slack, a location in East Yorkshire, UK [2]. These individuals date to between the second century BC and the first century AD, before Romanization of the surrounding Yorkshire Wolds area [1]. Digital images of the molds of the midshaft sections were used with radiographic measurements, as described by Trinkaus and Ruff [11], to create a midshaft cross-section complete with medullary cavity. Once acquired, the cross-sections were entered into image analysis programs that calculated geometric properties. The parameters calculated included cortical area (CA), total subperiosteal area (TA), medullary area (MA), percent cortical area (percent CA/TA), the polar second moment of area (J), second moments of area (S.M.A.s) about the M-L and A-P axes (Ix and Iy), and maximum and minimum S.M.A.s (Imax and Imin).
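For intuition about these parameters, an idealized midshaft can be modeled as a circular annulus, for which every quantity has a closed form: TA = pi*R^2, MA = pi*r^2, CA = TA - MA, and the polar moment J = Ix + Iy. The study measured real digitized cross-sections, so the sketch below (with illustrative radii, not measured values) only shows how the measures relate:

```python
import math

def annulus_properties(R, r):
    """Geometric section properties of an idealized long-bone midshaft:
    a circular shaft of outer (subperiosteal) radius R with a concentric
    medullary cavity of radius r. Lengths in mm give mm^2 areas, mm^4 moments."""
    TA = math.pi * R ** 2                      # total subperiosteal area
    MA = math.pi * r ** 2                      # medullary area
    CA = TA - MA                               # cortical area
    Ix = Iy = math.pi * (R ** 4 - r ** 4) / 4  # bending rigidity about either axis
    J = Ix + Iy                                # polar moment: torsional rigidity
    return {"TA": TA, "MA": MA, "CA": CA, "pctCA": 100.0 * CA / TA,
            "Ix": Ix, "Iy": Iy, "J": J}

# Illustrative radii: endosteal resorption (larger r) thins the cortex, while
# periosteal apposition (larger R) raises I and J sharply via the R**4 term.
props = annulus_properties(R=12.0, r=5.0)
print(round(props["pctCA"], 1))  # percent cortical area -> 82.6
```

The R^4 dependence of the moments is why a cortex of the same area pushed outward by subperiosteal expansion resists bending and torsion much better.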
Figure 1. Basic illustration of three geometric parameters on a tibial cross-section.
The CA represents the bone’s resistance to compressive loadings, the TA and MA illustrate the apposition or resorption of
the subperiosteal and endosteal surfaces, J represents torsional rigidity (resistance to twisting), the second moments of area (the I values) represent bending rigidity within an indicated plane, and Z represents bending strength. Age trends in the parameters were examined with both linear and quadratic least-squares regressions on age, in men and women separately. ANCOVA (analysis of covariance) was used to test for differences between the sexes in the effects of age on the cross-sectional areas.
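The regression step can be sketched in a few lines: fit first- and second-degree polynomials of a parameter on age, and read the age of peak (or minimum) bone quantity off the quadratic's vertex at -b/(2a). The ages and values below are synthetic, for illustration only:

```python
import numpy as np

ages = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65], dtype=float)
# Synthetic percent cortical area peaking at age 35 and declining after
pct_ca = 80.0 - 0.02 * (ages - 35.0) ** 2

lin = np.polyfit(ages, pct_ca, 1)    # [slope, intercept]: net linear trend
quad = np.polyfit(ages, pct_ca, 2)   # [a, b, c] of a*age**2 + b*age + c

a, b, _c = quad
peak_age = -b / (2.0 * a)            # vertex of the fitted parabola
print(round(peak_age, 1))            # -> 35.0 for this synthetic series
```

An ANCOVA-style comparison then asks whether male and female curves fitted this way differ significantly in slope or intercept; in this study, none did.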
Results

None of the ANCOVA values approached significance, revealing that sex had a negligible effect on bone remodeling patterns. This is also reflected in the graphic representations of the regressions. Linear regressions are displayed in Figure 2. The similar slopes of the linear regression lines illustrate the minimal effect of sex on aging patterns. Most quadratic regressions also showed similar aging trends between men and women. The peak of femoral percent CA occurs during the early 30s for women and at 40 for men (Figure 3).
Figure 3: Quadratic regression showing change with age in percent cortical area of the femur.
In femoral MA, an inverse pattern is evident; the minimum is in the late 30s for men and in the late 20s for women (Figure 4). This shows that most of the change in bone remodeling occurred around age 40 in men and age 30 in women.
Figure 4. Quadratic regression showing change with age in medullary area of the femur.
Discussion
Figure 2. Linear regressions showing change with age in percent cortical area of the tibia (top) and total subperiosteal area of the femur (bottom). Female trend line is dashed; male trend line is solid. Female values are squares, male values are circles.
Like any other organ system, the skeletal system constantly responds to stimuli. According to Wolff's Law, bone grows to adapt to external mechanical forces [6]. It follows that stronger muscles exert greater forces on bones, stimulating bone remodeling [8]. From this it is clear that physical activity strengthens both muscle and bone. This reasoning has been tested by biological anthropological studies, including this study, which compares the bone health of
populations with different lifestyles. It is expected that archaeological societies with more physically active lifestyles will have stronger bones, while modern, sedentary populations will exhibit weaker bones. It has also been found that sexual dimorphism in aging patterns increases as societies transition from hunter-gatherer to agricultural to industrial. Higher levels of physical activity in more active populations compensate for effects on bone strength from differences in hormonal changes between the sexes. In modern, sedentary populations, women are subject to greater bone loss after menopause because of rapid declines in female sex hormones. Therefore, less sexual dimorphism in aging patterns is expected for earlier populations than for modern populations. The percent losses in cortical bone among the Yorkshire people were much smaller for both sexes, in both the femur and the tibia, in comparison to those of a 1988 cadaveric sample of white Americans [9]. The data from this study are presented alongside the data from the 1988 study in Table 1. In these data, it is clear that greater increases in medullary area in the American population (despite only slightly higher increases in subperiosteal area) cause a much more rapid expansion of the medullary cavity and thinning of the cortex among the Americans. This explains why the Yorkshire population has a percent change in cortical area among both sexes that is consistently less than that of the modern Americans [9]. The comparison with the 1988 sample of white Americans was the most applicable in this case, but these findings could have been compared with many other modern populations, as weakening of bones is found among most industrial, developed nations. A study of Hong Kong women [4] found that the rate of hip fracture incidence increased sharply over a period of rapid industrialization.
A Swedish study [3] found that a rural population experienced greater physical activity and lower fracture incidence in comparison to an urban population. The Yorkshire population's lower sexual dimorphism in aging patterns relative to the American population was also as predicted. Many of the linear regressions and some quadratic regressions reveal similar aging trends between the sexes. Sex had a greater effect on the aging of bone in the Americans than in the English. The lower levels of bone loss and lower sexual dimorphism in aging patterns evident in the Wetwang Slack population may have been a result of their transitional society. These individuals came from the pre-Romanization Middle Iron Age [1]. Domesticated animals were an important part of their diet, including pigs, whose bones and charred remains were found in the Wetwang Slack graves [2]. But the absence of marine foods [2] indicates that these people relied more on livestock than on hunting and fishing. Structures and
pits suitable for grain storage were also found near the graves [1]. These signs characterize the Yorkshire people as agriculturalists. The associated physically strenuous lifestyle would explain the low dimorphism in aging patterns. The constant physical labor caused greater stimulation of bones by mechanical forces than in sedentary modern people. This results in less bone loss with age as well as diminished effects of sex hormones on bone aging patterns. There have been previous observations of similar effects of society and lifestyle on bone health. Robling and Stout [7] found evidence of bone loss in the mid-femur of individuals from a prehistoric Peruvian village. They attributed these results to a change in lifestyle as the economy transitioned from hunting-gathering practices to more sedentary maritime subsistence [7]. A study of 18th-19th century Spitalfields Englishmen [5] showed low rates of bone loss in the female proximal femur. The researchers observed that the archaeological females of that population spent long hours weaving and walked everywhere they needed to go [5]. Therefore, the findings of this study and other similar studies confirm the benefits of physical activity on bone remodeling with age.

References
1. Dent JS. 1982. Cemeteries and settlement patterns of the Iron Age on the Yorkshire Wolds. Proceedings of the Prehistoric Society. 48:437-457.
2. Jay M, Richards MP. 2006. Diet in the Iron Age cemetery population at Wetwang Slack, East Yorkshire, UK: carbon and nitrogen stable isotope evidence. Journal of Archaeological Science. 33:653-662.
3. Jonsson B, Gardsell P, Johnell O, Sernbo I, Gullberg B. 1993. Life-style and different fracture prevalence: a cross-sectional comparative population-based study. Calcif Tissue Int. 52:425-433.
4. Lau EM, Cooper C, Wickham C, Donan S, Barker DJ. 1990. Hip fracture in Hong Kong and Britain. International Journal of Epidemiology. 19:1119-1121.
5. Lees B, Molleson T, Arnett TR, Stevenson JC. 1993. Differences in proximal femur bone density over two centuries. The Lancet. 341:673-675.
6. Martin BR. 2003. Functional adaptation and fragility of the skeleton. In: Agarwal SC, Stout SD, editors. Bone Loss and Osteoporosis: An Anthropological Perspective. New York: Kluwer Academic/Plenum Publishers. p. 121-138.
7. Robling AG, Stout SD. 2003. Histomorphology, geometry, and mechanical loading in past populations. In: Agarwal SC, Stout SD, editors. Bone Loss and Osteoporosis: An Anthropological Perspective. New York: Kluwer Academic/Plenum Publishers. p. 189-205.
8. Ruff C. 2006. Gracilization of the modern human skeleton. American Scientist. 94:508-514.
9. Ruff CB, Hayes WC. 1988. Sex differences in age-related remodeling of the femur and tibia. Journal of Orthopedic Research. 6:886-896.
10. Ruff CB, Trinkaus E, Walker A, Larsen CS. 1993. Postcranial robusticity in Homo, I: Temporal trends and mechanical interpretation. Am J Phys Anthropol. 91:21-53.
11. Trinkaus E, Ruff CB. 1989. Diaphyseal cross-sectional morphology and biomechanics of the Fond-de-Forêt 1 femur and the Spy 2 femur and tibia. Bull Soc Roy Bel Anthropol Préhist. 100:33-42.
Table 1. Linear regression data from this study (bold) and the data from the 1988 study of cadaveric Americans (italicized). Note lower values of percent change (%Δ) in the archaeological sample compared to the modern sample and in males compared to females.
science & engineering reports
hurj fall 2010: issue 12
Robotic Tunneling Worm for Operation in Harsh Environments
Michael J. Kuhlman*, Blaze D. Sanders†, Lafe Zabowski‡ and Jessica A. Gaskin§
*University of Maryland, College Park †Johns Hopkins University ‡Embry-Riddle Aeronautical University §NASA Marshall Space Flight Center
Abstract
Though man has set foot upon the Moon, we still have little understanding of its origin and history. The authors have conceptualized and initiated prototyping of a novel worm-like robot that is designed to burrow deep within the lunar regolith. The overall design of the platform consists of an ultrasonic drill, a conical auger, and multiple elongating segments mimicking the peristaltic motion of an earthworm. The goal of this robot would be to collect and return scientific samples at various depths, make in situ measurements, or act as a sensor deployment system. The ultimate goal for this system is to become an instrumental platform in the scientific exploration of the Moon as well as other extraterrestrial bodies. This paper provides a detailed focus on each of the sub-systems developed to effectively drill and burrow into the Moon, the construction of sub-system prototypes, and the subsequent preliminary testing performed to obtain the data necessary to prove the viability of such a platform.
Introduction
The exploration of the Moon has once again piqued the interest of space agencies worldwide. One of the most important tasks is to determine how the Moon was formed and to understand its evolutionary path. Drilling deep within the lunar regolith at various locations on the Moon to analyze the composition of the underlying material is one way to accomplish this. During the moonwalks of the final Apollo missions, drilling and coring of the regolith were considered a high priority [1]. Lunar regolith generally exhibits a very jagged and sharp geometry, even on the submicron scale.
This not only provides an extremely abrasive quality, but also allows the particles to interlock and clump together [1]. This, combined with compression fusing from meteorite impacts, can make the lunar regolith exceedingly dense and difficult to drill through, rendering traditional drilling techniques impractical. To overcome such properties and successfully drill and burrow deep within the Moon, the authors propose a worm-like robot, which consists of a piezoelectric ultrasonic drill, a conical auger, and multiple elongating segments which mimic the peristaltic locomotion of an earthworm. Similar robots designed to perform such tasks include the Moon Mars Underground Mole (MMUM) [2], an autonomous burrowing screw robot [3], and other worm-like robots as conceptualized in [4].
Fig. 1: Conceptual design of lunar wormlike robot.
Concept Design Overview
The robotic worm design consists of three sub-systems which work as an assembly line to pulverize, remove, and push through
the lunar regolith. These include a piezoelectric ultrasonic drill, a conical auger, and multiple elongating segments which exhibit the same peristaltic motion utilized by earthworms. These three independent drilling and locomotion techniques, proposed in [3][5], combine into a single platform so that the capabilities of each subsystem complement each other in the drilling process, providing a robust and versatile platform for scientific sampling beneath the surface of the Moon. This proposed system, pictured in Fig. 1 and outlined in Fig. 2(a), operates under several assumptions pertaining to its deployment, power source, and data transfer. It is assumed that the wormbot will be deployed via a boom from either a lander or rover, which will act as a "home base." The home base will also provide power, data transmission, and retracting capability by means of a flexible and durable tether.
A. Ultrasonic Drill
A piezoelectric motor powers the ultrasonic drill at frequencies greater than 20 kHz. This vibration is directly transferred to a horn, which then excites a free mass. The free mass resonates between the horn tip and the top of the drill stem; the impacts then propagate to the regolith being drilled, generating subharmonics that are critical to ultrasonic drill performance [5]. This existing JPL-developed system is lightweight, consumes a small amount of power, requires a low preload force, can withstand a large temperature gradient, and is operational in a vacuum. For the proposed worm-like robot, the ultrasonic drill will fit within the conical auger and will be the first point of contact with the lunar regolith.
B. Conical Auger
The conical auger was custom designed with three important qualities in mind: a conical shape, variable pitch of the auger blades, and logarithmic spiral geometry. The conical shape is necessary to form a smooth transition from the ultrasonic drill bit to the first segment of the body.
The variable pitch of the auger blades and the logarithmic spiral geometry are needed to facilitate the movement of material and to prevent clogging by regolith. The main function of the auger in this design is to carry loose material created by the ultrasonic drill out and around the first segment of the body.
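The blade geometry described above can be sketched parametrically: a logarithmic spiral wrapped onto a cone naturally produces a variable pitch, since the spacing between successive turns grows with radius. The specific dimensions and growth rate below are illustrative assumptions, not the prototype's actual design values.

```python
import math

def conical_log_spiral(a=0.005, b=0.08, half_angle_deg=20.0,
                       n_turns=4, samples=200):
    """Return (x, y, z) points along a blade path (meters) following a
    logarithmic spiral r = a * exp(b * theta) wrapped onto a cone.
    Because r grows with theta, the axial spacing between successive
    turns grows too, giving the auger its variable pitch."""
    pts = []
    for i in range(samples):
        theta = 2 * math.pi * n_turns * i / (samples - 1)
        r = a * math.exp(b * theta)                      # spiral radius
        z = r / math.tan(math.radians(half_angle_deg))   # height on the cone
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

path = conical_log_spiral()
```

The widening channel toward the base is what helps material flow outward without packing, consistent with the anti-clogging rationale given in the text.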
Fig. 2: Overview of Robot Concept Design
C. Body Segments and Peristaltic Motion
Five body segments are the propulsive mechanism for the system. Internal segment actuators enable the segments to contract or expand in the sequence shown in Fig. 2(b) to mimic peristaltic motion. This motion is advantageous for three reasons: it provides the necessary preload force required for effective operation of the ultrasonic drill, propels the robot, and displaces loosened regolith around and behind the platform.
Experimental Prototype
A single segment and conical auger, without the ultrasonic drill, have been designed and built to determine the feasibility of the design, given the required actuation capabilities and the forces required for burrowing. It is important to note that this system can be scaled to any size. Many of the components used at this stage of prototyping were chosen for their high availability and low cost. The necessary testing apparatuses and experiments have also been developed in order to obtain performance parameters.
A. Conical Auger
The conical auger, Fig. 3, has a major diameter of 16 cm at the base and is 25.4 cm in height. It was rapid-prototyped out of acrylonitrile butadiene styrene (ABS) plastic using Fused Deposition Modeling (FDM) in the Rapid Prototyping Facility at the NASA George C. Marshall Space Flight Center in Huntsville, AL. A 10,000 rpm brushless DC motor with a 264:1 planetary gearbox, nested within the auger to keep the assembly compact, drives the conical auger.
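The contract/expand sequencing described under Body Segments can be sketched as a traveling contraction wave. The five-segment count comes from the design; the one-contracted-segment-at-a-time gait below is an assumption for illustration, since Fig. 2(b) itself did not survive extraction.

```python
# Hypothetical sketch of a peristaltic gait sequencer for the
# five-segment worm robot: a single contraction wave travels along the
# body, so the contracted segment anchors against the tunnel wall
# while its neighbours elongate and advance.

def peristaltic_wave(n_segments=5, steps=None):
    """Yield per-step segment states: 'C' = contracted (anchoring),
    'E' = elongated (advancing)."""
    steps = steps if steps is not None else n_segments
    for t in range(steps):
        # One segment contracts at a time; the wave index moves forward.
        yield ['C' if i == t % n_segments else 'E' for i in range(n_segments)]

for state in peristaltic_wave():
    print(''.join(state))
```

A real gait would likely contract several adjacent segments per step to raise anchoring force, as the segment-testing section later suggests ("segments can be actuated in unison").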
Fig. 3: ABS prototype of conical auger.
B. Worm Segment
The worm segment prototype contains two 11.31 x 11.31 cm square end plates, with two side plates per side connected to each other by a side plate hinge, for a total segment height of 25.4 cm. The geometry of the rigid links and hinges constrains the end plate motion along their normal axis, forcing the end plates to always remain parallel. Four Dynamixel AX-12+ servos inside the segment are attached directly to individual side plate hinges, using pistons to apply force. The servos must move with the assembly during actuation to allow the pistons to align with the side plate hinges. Rotation of the entire servo assembly is constrained by a square centering rod.
C. Control System
The Arduino Mega microcontroller serves as the embedded control system because of its functionality (it can control both the AX-12+ servos and the auger), its ease of programming, and its numerous online resources and tutorials. The Arduino is controlled by a personal computer (PC) and can transmit test data such as command sequences, status packets (feedback), and measured auger velocity. A dedicated half-duplex serial bus synchronously controls the four AX-12+ servos.
Experimental Setup
A. Auger Testing
The purpose of the auger test bed was to determine whether the auger could transport material up and away from its tip without clogging the auger channels (the open space between auger spirals) and provide propulsive force. The auger test apparatus consisted of a 3-foot-diameter by 3-foot-high cylindrical container of compressed, bleached flour from ConAgra and a tripod mount to stabilize the auger assembly during drilling. The auger assembly consisted of the conical auger and the brushless DC motor with planetary gearbox, and was rigidly attached to the end of a 1.5-inch-diameter PVC pipe. This assembly skids through a PVC collar/sleeve in the center of the tripod.
A cut channel in the PVC pipe and a bolt system constrained auger pole rotation imparted by motor torque and also imposed stroke limits on drilling depth so that the auger did not contact the solid bottom of the test apparatus. The collar and PVC pipe were sanded to reduce parasitic friction against the preload. To control the preload force, additional weight was added to a rigid container fastened to the PVC pipe. Penetration depth was measured using 1 cm tick marks labeled on the PVC pipe. During testing, a video camera recorded the speed at which the auger penetrated the test bed. For each trial, the recorded measurements allowed the drilling performance (i.e., the specific energy) to be calculated. The prescription for this calculation follows from [6] and is outlined in (1), where SE is the specific energy, the energy required to remove a unit volume of material (J/m³); E is the energy required per minute to auger (J/min); V is the volume of material removed per minute (m³/min); fprop is the propulsive force (N); D is the diameter of the auger hole (m) [0.16 m]; ω is the rotational speed (rpm); m1 is the mass of the robot (kg); g is gravitational acceleration (m/s²); fpre is the preload force (N); and PS is the penetration speed (m/min).
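The exact form of the paper's equation (1) did not survive extraction, but the stated definition SE = E/V can be sketched directly. The energy model below (rotary work plus work done by the propulsive force per minute) is an assumption for illustration, not the authors' formula; only the variable definitions come from the text.

```python
import math

def specific_energy(torque_Nm, omega_rpm, f_prop_N, ps_m_per_min, d_m):
    """Hedged sketch of specific energy SE = E / V [6]: the energy
    required to remove a unit volume of material (J/m^3).

    E: energy expended per minute (assumed here to be rotary work
       2*pi*omega*torque plus propulsive work f_prop * PS), J/min.
    V: volume of material removed per minute for a hole of
       diameter d_m advancing at penetration speed ps_m_per_min."""
    E = 2 * math.pi * omega_rpm * torque_Nm + f_prop_N * ps_m_per_min  # J/min
    V = math.pi * d_m ** 2 / 4 * ps_m_per_min                          # m^3/min
    return E / V

# Illustrative numbers: the 0.16 m hole diameter is from the text;
# the 18.5 in / 185 s trial gives PS ~ 0.15 m/min; torque and
# propulsive force here are placeholders, not measured values.
se = specific_energy(torque_Nm=1.0, omega_rpm=3.5,
                     f_prop_N=30.0, ps_m_per_min=0.15, d_m=0.16)
```

Note that slower penetration at fixed power drives SE up, which is why the under-load rpm shortfall reported in the results matters for drilling efficiency.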
Roughly five hundred pounds of all-purpose flour was used for this preliminary testing. The flour filled a test bed that was 3 feet in diameter, or roughly 6 times the diameter of the auger, and 3 feet deep. The properties of flour that correlate well with lunar simulant
include both the average particle size and compaction characteristics. To determine the average particle size of the flour, the authors placed a small amount into an FEI Quanta 600 FEG scanning electron microscope and ran a particle size distribution algorithm using EDAX's Genesis X-ray microanalysis software. The flour particles ranged in size from 4 μm to 143 μm, with an average particle size of 18 μm. The median lunar regolith mean grain size for the 236 cm deep core sample taken from the Apollo 15 site (designated 15001-15006) was reported to be ~50 μm [1]. Though the shape of the average flour grain is dramatically different from that of the lunar regolith, both have a similar compression index. For loose lunar soil, the estimated mean compression index is 0.3 [1], compared to that of wheat flour, which is estimated to be around 0.2 [7].
B. Segment Testing
Segment testing was a particularly important element of this effort. There are two performance parameters that must be met to ensure feasibility of the design. One performance requirement is that this worm-like robot must be able to hold its own weight by contracting its segments, generating normal forces against the tunnel
walls. Additionally, the segments must also be powerful enough both to enable locomotion and to generate the necessary preload force for drilling. Both of these properties can be bounded by conducting initial testing on a single worm segment prototype. The authors thus developed two experiments, one to measure the normal/expanding force and another to measure the downward/preload force. In addition, testing highlighted mechanical construction problems that caused jamming of the servos. Parameters such as side wall length tolerance, servo horn alignment, and the use of a single square guiding column would result in jamming if not properly designed.
1) Normal Force Generation: The instantaneous force produced at varying angular positions of a servo with respect to the test bed was calculated. Fig. 4 depicts the geometry of the rotational axis of the servo and the moment arms that generate the normal force pushing the segment side wall out. Given the servo characteristics, equation (6) predicts the theoretical normal force output of an individual side of a segment. The tested segment consisted of the bottom half of a complete segment, with the two non-force-measured side walls removed. Regions of low force can be avoided and/or segments can be actuated in unison to produce additional force, creating a higher degree of compaction and grip on the tunnel walls.
2) Preload Force Generation: Preload force generation tests determined maximum segment preload capabilities as a function of servo angular position. Equation (7) predicts the theoretical preload force of the segment given the individual servo measurements from the first experiment. This testing involved placing a free mass on top of a fully constructed segment, setting the servos to the desired angular position, and commanding the servos to actuate to a fully contracted state. The test conditions ensured that the AX-12+ built-in P controller saturated servo torques, ensuring constant torque throughout testing. In Fig. 5, the maximum preload force is defined as the maximum load the segment can react against (push against) before failure.
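The tests above command the AX-12+ servos to goal positions over the shared half-duplex bus. The paper does not show its command code, but the Dynamixel Protocol 1.0 framing these servos use is well documented, and a minimal packet builder looks like this (the servo ID and goal position in the example are illustrative):

```python
def ax12_write_packet(servo_id, address, *params):
    """Build a Dynamixel Protocol 1.0 WRITE_DATA instruction packet,
    the framing the AX-12+ uses on its half-duplex serial bus:
    0xFF 0xFF | ID | LEN | INSTR | params... | checksum."""
    WRITE_DATA = 0x03
    payload = [address, *params]
    length = len(payload) + 2          # instruction byte + checksum byte
    body = [servo_id, length, WRITE_DATA, *payload]
    checksum = (~sum(body)) & 0xFF     # low byte of bitwise-NOT of the sum
    return bytes([0xFF, 0xFF, *body, checksum])

# Example: command servo ID 1 to goal position 512 (centre of the
# 0-1023 range); register 0x1E is Goal Position, sent little-endian.
pkt = ax12_write_packet(1, 0x1E, 512 & 0xFF, 512 >> 8)
```

On the real bus, a SYNC_WRITE instruction (0x83) would update all four servos in one packet, which is presumably how the "synchronous" control mentioned in the text is achieved.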
Results & Analysis
A. Auger Testing
The experimental auger assembly successfully drilled through 18.5 inches of flour in 185 seconds at a no-load speed of 20 rpm with a 3 kg preload. The target auger rotational velocities of 20, 47, and 68 rpm were not reached under load, and the recorded rotational velocity data were noisier than expected. We suspect that this was because the speedometer software code was tested only under no-load conditions. The data and observations suggest that the built-in P controller of the auger controller saturated maximum torque limits in all trials. If this is the case, the steady state auger rotational velocity can be extrapolated from the cleanest data set to other trials. This steady state velocity is estimated to be ~3.5 rpm, obtained using a median filter with a 5-sample window and then taking the mean of all non-noise filtered rpm data points (rpm values less than 10).
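The steady-state estimation procedure just described (5-sample median filter, then the mean of sub-10-rpm readings) can be sketched directly; the sample data in the test are invented for illustration, not the authors' recordings.

```python
from statistics import median, mean

def steady_state_rpm(samples, window=5, noise_floor=10.0):
    """Estimate steady-state rpm from noisy speedometer samples:
    median-filter with the given window (5 samples, as in the text),
    then average the filtered readings below the noise floor.
    Raises statistics.StatisticsError if no reading survives."""
    half = window // 2
    filtered = [median(samples[max(0, i - half):i + half + 1])
                for i in range(len(samples))]
    valid = [r for r in filtered if r < noise_floor]
    return mean(valid)
```

The median filter rejects the isolated spikes that a mean-based smoother would smear into the estimate, which is why it suits data corrupted by intermittent sensor glitches.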
Fig. 4: Diagram highlighting the moment arm geometry that relates AX-12 servo position to force generation.
B. Segment Testing
Segment testing determined the maximum static normal force the segment prototype could exert on the tunnel walls to maintain position. Additionally, the maximum static and continuous preload forces achievable by a segment were determined. Preliminary requirements were highly dependent on the final robot mass, number of segments, and number of segment side walls, with a goal of sufficient normal force to anchor against the tunnel walls and provide 5 N of preload force for the ultrasonic drill. Preload data showed that those requirements were exceeded, with a maximum measured force of 286 N, as illustrated in Fig. 5. The normal force generated was measured to be within a workable range (24 to 40 N) and is summarized in Fig. 6. Additional simulation and applied frictional testing are needed to substantiate these findings.
Fig. 6: Preliminary results of normal force test data.
Conclusion
The authors of this paper introduced a conceptual worm-like robotic platform designed for drilling and tunneling into the surface of the Moon. The design and purpose of each subsystem were described, and a prototype body segment and conical auger were manufactured. Testing determined that both the segment and the auger performed in a manner that supports the feasibility of the proposed platform. The authors have secured support, including from a sponsoring professor, to further develop the subsystems over the next two years.
Future Work
Future work should include improved ultrasonic drill bit geometry and the development of a sensor suite to detect regolith density, in order to avoid nearby boulders, and regolith particle resonance, in order to adapt the drill's sub-harmonic frequency or free tip velocity [5] to best cut through the regolith. Dynamic simulation of the ultrasonic drill bit and auger interaction with the lunar regolith, utilizing techniques outlined in [8], will be paramount in developing space flight hardware. Another important task will be to enable three-dimensional tunneling motion by using multiple antagonistic pairs of linear actuators per segment, creating a much more robust and modular platform capable of many types of tasks. The determination of an appropriate method of auger actuation, while accommodating the ultrasonic drill in the center of the auger body, is also a necessary area of future research.
References
[1] G. Heiken, D. Vaniman, and B. French, Lunar Sourcebook: A User's Guide to the Moon. Cambridge University Press, 1991.
[2] C. Stoker, A. Gonzales, and J. Zavaleta, "Moon/Mars underground mole," in NASA Science Technology Conference, accessed online September 2007, pp. 07-0117.
[3] K. Nagaoka, T. Kubota, M. Otsuki, and S. Tanaka, "Experimental study on autonomous burrowing screw robot for subsurface exploration on the Moon," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), 2008, pp. 4104-4109.
[4] A. Fukunaga, J. Morookian, K. Quillin, A. Stoica, and S. Thakoor, "Earthworm-like exploratory robotics," California Institute of Technology, Jet Propulsion Laboratory, Amherst, MA, NASA Tech Brief, 1999.
[5] Y. Bar-Cohen, S. Sherrit, B. Dolgin, N. Bridges, X. Bao, Z. Chang, A. Yen, R. Saunders, D. Pal, J. Kroh et al., "Ultrasonic/sonic driller/corer (USDC) as a sampler for planetary exploration," in IEEE Aerospace Conference Proceedings, vol. 1, 2001.
[6] H. Rabia, "Specific energy as a criterion for drill performance prediction," Int. J. Rock Mech. Min. Sci. Geomech. Abstr., 1982.
[7] J. Malave, G. Barbosa-Canovas, and M. Peleg, "Comparison of the compaction characteristics of selected food powders by vibration, tapping and mechanical compression," Journal of Food Science, vol. 50, no. 5, pp. 1473-1476, 1985.
[8] A. Hasan and K. Alshibli, "Discrete element modeling of strength properties of Johnson Space Center (JSC-1A) lunar regolith simulant," Journal of Aerospace Engineering, vol. 1, p. 19, 2010.