Note from the Editorial Board

Dear Readers,

We are pleased to present the Fall 2007 issue of the DUJS, which includes a variety of fascinating articles produced by our staff of intrepid science journalists. This issue was developed in the midst of great scientific events on campus, including the Thayer School's Nanomaterials Symposium, the Biology Department's Life Sciences Symposium, and the Thayer and Tuck Schools' Energy Symposium. We were also fortunate to have Nobel laureate Thomas Cech serve as a Montgomery Fellow and deliver a public lecture entitled "Exploring the Edges Between Scientific Disciplines." Current president of the Howard Hughes Medical Institute, Cech earned the Nobel Prize in Chemistry in 1989 for his work characterizing the catalytic properties of ribozymes. All of these events offered the Dartmouth community a glimpse of the latest information at the forefront of scientific development. As Fall 2007 draws to a close, we at the DUJS hope that you have gotten the chance to engage in the scientific world at Dartmouth and beyond.

In this issue, you'll find epidemiological analyses of two great killers of mankind. Nicholas Ware '08 discusses the impact of irrigation development on malaria infection rates in different communities in sub-Saharan Africa. Chad Gorbatkin '08 compares different techniques used to prevent the spread of cholera.

Other articles examine the mysterious workings of the human mind. Grace Chua '07 addresses the causes and risk factors associated with childhood Post-Traumatic Stress Disorder (PTSD), and the implications of these factors for intervention and treatment. Rukayat Ariganjoye '10 studies the incompletely understood mechanism of infantile amnesia, the loss of early autobiographical memory.

In the earth sciences, Lauren Edgar '07 investigates the environmental deposition of calcium carbonate, an analysis that can help us interpret ancient deposition patterns. Meanwhile, Laura Myers '08 explores the history and properties of garnet, as well as its many uses.

Boyd Lever '10 inspects a method of disturbing the cell cycle progression of HIV, reducing its viability. Drawing on research performed for Chemistry 63, Bailey Shen '08 et al. examine the production of greenhouse gases by car exhaust. Laura Myers discusses the potential use of stem cells to create treatments for diabetes, and Tim Shen '08 takes a look at current and future developments in prosthetic devices designed to replace lost limbs. Finally, Shreoshi Majumdar '10 provides an insightful interview with several student interns in the Women in Science Project (WISP), who presented the results of their research projects at the Wetterhahn Symposium this past spring.

We hope you enjoy these articles, and we encourage you to look for upcoming science events on campus, such as the "Polar Connections" exhibit in Baker Library, which will illustrate Dartmouth's history in the research and exploration of the polar regions of the globe. Thank you for reading!

Sincerely,
The DUJS Editorial Board
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.
EDITORIAL BOARD
President: Frank Glaser '08
Editor-in-Chief: Laura Sternick '08
Managing Editor: Nida Intarapanich '08
Managing Editor: Laura Myers '08
Managing Editor, Images: Tim Shen '08
Asst. Managing Editor: William Schpero '10
Layout Editor: Anthony Guzman '09
Secretary: Benjamin Campbell '10
Publicity: Edward Chien '09
STAFF WRITERS
Rukayat Ariganjoye '10
Grace Chua '07
Lauren Edgar '07
Chad Gorbatkin '08
Boyd B. Lever '10
Shreoshi Majumdar '10
Bailey Shen '08
Nicholas Ware '08

FACULTY ADVISORS
Alex Barnett, Mathematics
Ursula Gibson, Engineering
Marcelo Gleiser, Physics/Astronomy
Gordon Gribble, Chemistry
Carey Heckman, Philosophy
Richard Kremer, History
Leslie Sonder, Earth Sciences
Megan S. Steven, Psychology
Samuel Vélez, Biology

SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Provost's Office
Whitman Publications
Private Donations
The Hewlett Presidential Venture Fund
Women in Science Project
Cover photograph courtesy of the Centers for Disease Control and Prevention. Background manipulations by Tim Shen '08.
FALL 2007
DUJS@Dartmouth.EDU
Dartmouth College
Hinman Box 6225
Hanover, NH 03755
(603) 646-9894
www.dartmouth.com/~dujs
Copyright © 2007 The Trustees of Dartmouth College
DUJS: Fall 2007, Volume X, No. 1

In this issue:

Features
Stem Cells: A Potential Cure for Type I and Type II Diabetes, by Laura Myers '08 (p. 4)
Remember When...?: Infantile Amnesia, by Rukayat Ariganjoye '10 (p. 6)
Garnet: A Tour Through Barton Mine, North Creek, NY, by Laura Myers '08 (p. 11)
The Life and Death of the Cholera Pathogen Vibrio cholerae, by Chad Gorbatkin '08 (p. 13)
Resilience in Child Post-Traumatic Stress Disorder: Implications for Treatment, by Grace Chua '07 (p. 17)
Breaking the Mold: Moving Toward More Functional Prostheses, by Tim Shen '08 (p. 21)
WISP and Wetterhahn: Undergraduate Women In Science, by Shreoshi Majumdar '10 (p. 25)

Research
Geographic and Demographic Considerations: Does Irrigation Development Decrease Local Malaria Infection Rates?, by Nicholas Ware '08 (p. 31)
Ooid Production and Transport on the Caicos Platform, by Lauren Edgar '07 (p. 36)
Incorporation of Fluorinated Nucleotide Analogs into HIV-1 TAR RNA, by Boyd Lever '10 (p. 40)
It's Getting Hot in Here: Analysis of Prius Carbon Dioxide Emissions by FT-IR Spectroscopy, by Bailey Shen '08, Constantinos Spyris '09, Benjamin Blum '09, Bryan Chong '09, and Daniel Leung '09; advisors: Siobhan Milde and Charles Ciambra (p. 43)

Image credits, from left to right: National Institutes of Health, Centers for Disease Control and Prevention, Laura Myers '08, Shreoshi Majumdar '10, Ossur Americas, Boyd Lever '10, and Tim Shen '08.
medicine
Stem Cells:
A Potential Cure for Type I and Type II Diabetes
Laura Myers '08

Diabetes is a growing health concern in the United States. Of the estimated 20.8 million Americans who suffer from diabetes, only about two-thirds have actually been diagnosed with the disease. Diabetes occurs when the body cannot properly metabolize sugar, resulting in high glucose levels in the blood. Normally, the hormone insulin is released into the blood by specialized β islet cells in the pancreas in response to high blood glucose levels. Insulin binds to receptors embedded in the cell membranes of muscle and fat cells, which allows insertion of glucose transporters into the membranes of these cells and the uptake of glucose. The disease is caused by insulin deficiency or insulin resistance and can have serious health consequences, including blindness, kidney failure, heart disease, and stroke.

There are two primary forms of diabetes: Type I and Type II. Type I diabetes, or juvenile diabetes, occurs when the body's immune system attacks its own β islet cells, resulting in extreme insulin deficiency. To compensate for the lack of insulin-producing cells, Type I diabetics inject themselves with insulin multiple times per day. Although recent technological breakthroughs like the insulin pump and the insulin inhaler have made it easier for Type I diabetics to regulate their blood sugar levels, they remain completely insulin dependent.

Frequent blood testing to monitor blood glucose levels is necessary for many people with Type I diabetes. Image courtesy of the National Institutes of Health.

Type II diabetes, or adult-onset diabetes, occurs when β cells fail to secrete enough insulin and/or insulin-sensitive cells build up a resistance to insulin molecules. Type II diabetes is the most common form of the disease. Its onset has been linked statistically with age, racial background, genetics, and lifestyle; in particular, a direct correlation has been shown between obesity or inactivity and the likelihood of developing Type II diabetes. Generally, Type II diabetics rely on supplement pills that help the pancreas make more insulin, keep the liver from overproducing glucose, help cells use insulin more effectively, or slow carbohydrate digestion. With careful insulin regulation and medication, diabetics can lead relatively normal lives, but there is nevertheless a huge motivation to cure diabetes and improve their quality of life.

In recent years, improvements in medical transplantation have made it possible for a small number of Type I diabetics to receive pancreatic islet transplants. In one study, insulin production was seen early on in 43 percent of patients, and in 50-80 percent of patients at a later time point. However, there are many complications with this procedure. While it is a minimally invasive surgery, patients are required to take immunosuppressive drugs, which increase the risk of contracting communicable diseases. Additionally, many surgeons refuse to perform the surgery without also transplanting kidneys, because the kidneys become so compromised during insulin deprivation. The demand for pancreatic transplants is also higher than the supply of willing or capable donors. Lastly, there is the possibility of immune rejection of the pancreas.

Because of the limited supply of pancreatic organs for transplant and the concern over immune rejection, scientists are investigating ways to improve pancreatic islet transplantation. A potential source of these cells is stem cells, which have the ability to self-renew and differentiate. It was originally thought that a single embryonic stem cell transplanted into a patient could differentiate and give rise to a β cell. However, it has been shown that β islet cells arise only from pre-existing β cells. Thus, the current hope is to culture embryonic stem cells in vitro, providing the proper conditions for them to differentiate into insulin-secreting cells that can respond to the presence of glucose. Creating patient-specific β cells from embryonic stem cells for transplantation is a long-term goal because it would overcome the hurdle of rejection by the immune system.

At the same time, islet transplantation is only a temporary treatment, because it does not address the autoimmune destruction of β cells that is the primary cause of Type I diabetes. After two or three years, the autoimmune response will attack the β islet cells and prevent the normal production of insulin once again. One potential solution to this problem could be the implantation of insulin-secreting cells in other organs of the body. This may prevent the immune system from locating and destroying the cells, which could lead to a long-term cure for diabetes.

The first step toward being able to culture these cells in vitro is understanding the basic developmental biology that controls the process of differentiation. A complex array of transcription factors controls the expression of certain genes at certain time points in this process. If we can understand which genes are being turned on or off at each point, we can work toward directed differentiation of embryonic stem cells into β islet cells. Recent data show that we are now able to culture human embryonic stem cells in vitro, but with minimal efficiency. More work must be done to verify these results and improve the yield in order for transplantation to become a reality.

Research geared specifically toward treatment of Type I diabetes includes small-molecule inhibitors that block the autoimmune destruction of β islet cells. In studying this link, we might be able to develop drugs that act at the molecular level to prevent cell death and, in turn, reverse the symptoms of diabetes (8).

Another way of trying to prevent the autoimmune destruction of β cells is to perform autologous bone marrow transplants alongside β islet transplants. Bone marrow transplantation in diabetic mice has been shown to repress the autoimmune destruction of β cells. Combining this therapy with a transplant of pancreatic cells from the same donor should prevent rejection of the cells. While this technique has much promise, many long-term problems must be addressed before it is used in humans. First, the chances of obtaining both a pancreas and bone marrow from the same donor are slim; stem cells, however, could potentially be differentiated into both types of tissue so that their genetic identities match each other and the patient. Second, this technique seems to work only in very young mice. In humans, diabetes may take years to set in, and at that point it may be too late to perform this therapy (9). More research is needed before stem cell technology can treat diabetes in a safe, reliable, and inexpensive way. Nevertheless, the current research is promising, and many are working toward making this prospect a reality.

The insulin protein. Image courtesy of the Jena Library of Biological Macromolecules.

References
1. Diabetes Overview (2006). Available at http://diabetes.niddk.nih.gov/dm/pubs/overview/index.htm (5 July 2007).
2. R. Bretzel, et al., Langenbeck's Archives of Surgery 392 (3), 239 (2007).
3. Y. Dor, et al., Nature 429, 41 (2004).
4. S. Gangaram-Panday, et al., Trends in Molecular Medicine 13 (4), 164 (2007).
5. Q. Zhou, et al., Developmental Cell 13, 103 (2007).
6. W. Jiang, et al., Cell Research 17, 333 (2007).
7. C. Sia, et al., Review of Diabetic Studies 3, 39 (2006).
8. S. Ikehara, et al., PNAS 82, 7743 (1985).
neuroscience
Remember When...? Infantile Amnesia
Rukayat Ariganjoye '10
Introduction
Memory. It is the internal scrapbook that defines one's individuality, a sense of self that is crucial to the human psyche. Nevertheless, memory is a difficult term to define, due in large part to its longevity. It appears to be ineffaceable, a solid entity of the past, yet it frequently eludes its owner, making one question its validity and transiency. These large gaps in the narrative of memory are difficult to seal, creating a disconnect between one stage of life and another.

One of the most common and largest gaps in the human memory bank consists of recollections of the first few years of infancy and babyhood. Ever wonder why it is a struggle to remember an event prior to the age of two? Or even the day you were born? One wonders why such a significant event as one's birth fades into oblivion, never to be recovered. This inevitable condition is known as infantile amnesia, a term coined by Sigmund Freud when he began studying the phenomenon in 1905 (1).

Significant light has been shed on the nature of infantile amnesia since Freud's era, yet many mysteries remain. For example, how does autobiographical memory shape infantile amnesia? Autobiographical memory, or personal memory, is the branch of memory responsible for encoding, storing, and retrieving the events and experiences that construct one's personal past. Psychologists have demonstrated that autobiographical memory minimizes the constraints of this particular amnesia, but one question remains unclear: what inhibits autobiographical memory? It is a question that has puzzled psychologists for nearly half a century. This paper will investigate the different and oftentimes opposing theories that help explain the elusive phenomenon of infantile amnesia.
Mapping Memory—A Brief Overview
Memory is a complex entity of the human brain. Declarative and non-declarative memories are central, but distinct divisions exist that contribute to our notion of forming memories. Declarative memory is devoted to processing names, places, events, and facts. Semantic and episodic memories are the two subtypes of declarative memory. Semantic memory supports retention of "knowledge of facts and data that may not be related to any event" (2), while episodic memory relates more heavily to unique events linked to place and time. Perceptual and motor skills contribute to non-declarative memory, which does not necessitate the deliberate recall that declarative memory requires. Furthermore, autobiographical memory is a component of declarative memory that is particularly episodic because it is linked spatially and temporally to unique events.
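Since the rest of the paper leans on these distinctions, the short sketch below restates the taxonomy as a nested data structure. It is purely a reformulation of the paragraph above; the classification is the article's, while the representation and the brief subtype descriptions are our illustrative paraphrase.

```python
# The memory taxonomy described above, restated as a nested dictionary.
# The classification is the article's; the data-structure form is illustrative.
memory_taxonomy = {
    "declarative": {  # names, places, events, and facts; deliberate recall
        "semantic": "facts and data that may not be related to any event",
        "episodic": "unique events linked to place and time",
        # autobiographical memory: a declarative, particularly episodic subtype
        "autobiographical": "one's personal past, linked spatially and temporally",
    },
    "non-declarative": {  # no deliberate recall required
        "perceptual skills": "memory expressed through perception",
        "motor skills": "memory expressed through movement",
    },
}

def subtypes(system: str) -> list[str]:
    """List the subtypes recorded under one top-level memory system."""
    return sorted(memory_taxonomy[system])

print(subtypes("declarative"))  # ['autobiographical', 'episodic', 'semantic']
```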
Memory—A Fallacy?
In any study, there is the likelihood of experimental error. In fact, errors are expected because 100% accuracy is virtually impossible. But in case studies concerning infantile amnesia, are the methods faulty, or are components of the human subconscious at fault? Could both play an integral role in the results?

Stuart Zola explores the reliability of memory in his 1998 case studies on infantile amnesia. People who have recovered memories help researchers understand the biological and behavioral factors of memory. Zola's objective was to answer two questions: 1) do memories for traumatic events change over a period of time; and 2) can memories be created for traumatic events that never occurred? Infantile amnesia is the inability to recollect memory from certain stages of infancy. One explanation that Zola gives for infantile amnesia points to the slow maturation of the cortical areas after birth: the processing and storing of information necessary for the reconstruction of conscious memories develops later. According to Zola, infantile amnesia places a restrictive bound on recovered memory. Yet there are some cases in which recovered memories from early infancy were described in great detail (3).

Recently, researchers attempted to "create" memories for subjects by collaborating with their families, who were instructed to "remind" the participant of an early event that had never actually occurred. The participants were, of course, unaware of their family's involvement in the experiment. They were asked several questions, and their answers were tape-recorded and replayed to their respective families for further analysis of accuracy and specificity. Results showed that 25% of the subjects "recalled" the false events and even furnished additional, fabricated details when asked by researchers to retell the event. These data confirm that false infantile memories can be fabricated (3).

Zola's other studies have shown that there are other memory systems in the brain that may affect memory recall. The overall conclusion drawn from Zola's report speaks to the pliable nature of memory. Memory is
the materialization of ideas distributed across the brain, not localized in one region. Zola claims that memory corresponds with regions of the brain that help dictate the real from the imagined, and these regions establish the temporal position of the memory. In his report, Zola makes note of the biological basis of memory and its distortions. Zola provides anecdotal evidence as well as experimental evidence for his hypotheses, all of which seem to support his conclusion. His report informs future studies that it is important to take several types of memory into consideration, because relying on just one would give misleading results, memory being controlled by multiple brain systems. And because it is difficult to understand clearly the causes of infantile amnesia, researchers should not assume that the recollections provided by participants in their research are complete and valid (3).

Memory: An inflated representation of the left hemisphere showing frontal brain regions involved in memorization. Image courtesy of Professor William Kelley, adapted from Wig et al., 2004, Journal of Cognitive Neuroscience.

Revelations about Autobiographical Memory and its Relation to Language

One of the most definitive features of autobiographical memory is that it has the potential to be verbalized. If a memory can be expressed verbally and understood, others are given the opportunity to internalize that memory in a way that will help them associate it with the individuality of others. But to what extent is verbalization a factor in autobiographical memory? In their 1999 case experiment, Keryn Harley and Elaine Reese focused on the cognitive and social factors of autobiographical memory in 1.5 to 2.5 year-old children. According to Harley and Reese, autobiographical memory functions to facilitate individuals' learning about others by relating past experiences. Harley and Reese reviewed the experiments of self-knowledge theorists who argued that a lack of understanding of the personal self contributes to infantile amnesia. With the acknowledgement of the cognitive self, greater stores of autobiographical memory are produced. As awareness of self becomes stronger, there is more interplay among the different types of memory, which allows us to develop individuality (4).

Harley and Reese drew on earlier studies demonstrating that parents who elaborate information about past events expand their children's memory information. Parents' "talk" assists children's "memory talk" because it helps develop children's ability to describe autobiographical memories (4). In a 1993 study, Harley and Reese used several methods in a novel experiment that showed the interplay of Howard and Courage's self-knowledge and parental styles of communication (4). To assess children's verbal memory and maternal styles of reminiscing, mothers were asked to choose several one-time events that they had shared with their children (e.g. feeding ducks at the local pond, airplane flights). The events were then introduced into a conversation between mother and child. Conversations were videotaped and audio-taped by researchers in order to maintain a profile for each mother-child case. During designated monthly time intervals, children would be prompted to discuss past events with their mother. The purpose of the experiment was to determine how much the child's elaboration of events depended on the mother's reminiscing style. They hypothesized that the extensiveness of the mother's language style would result in stronger memory elaborations by the child. Mothers facilitated recall by giving three event-specific cues to prompt more detailed responses. Event cues were observed to be effective if they overlapped with information already stored in memory. The mothers were strongly discouraged from providing further assistance beyond the cues (4).

According to Harley and Reese, the role of the parent in interacting with the child is not only crucial to the child's communication skills, but also significantly impacts the child's autobiographical memory. Parents who provide greater detail when speaking about the past with their children, and who follow their child's conversation cues, tend to engage their child in discussions more than parents who do not provide elaborative conversations. Parents who do not engage in discussions inhibit their children's narrative construction. Children with
parents who are more expressive and vocal develop the ability to recall and vocalize past events more lucidly, especially during the window of infantile amnesia. Thus, maternal stylized elaborations encourage children to synthesize narratives heavy in diversified content and breadth as early as age three. This has lasting impressions on children because it increases and reinforces the amount that older children recall from earlier recollections. Children who were capable of vocalizing thought at the time of an experience connect memory and language more naturally, because events can be encoded verbally (4).
Speculations about Early and Late Memories
Harley and Reese's experiments strongly suggest that skilled verbalization of recollections aids the continuity of autobiographical memory. But one cannot help questioning whether there are disparities between memories of early life and those of later years. In 1999, Tiffany West and Patricia J. Bauer pioneered a study in which they attempted to characterize the differences between early and late memories. It is known that memory recall for most adults during the first years of life is limited. The few memories that remain available for recall have strong emotional impact on the individual. These memories, interestingly, are recalled in the third-person, rather than the first-person, perspective, most likely because recognition of self has not yet fully developed; as later memories form, the perspective shifts sharply to the first person.

West and Bauer studied the differences in memory storage at two stages of human development, from as early as age 5 to as old as age 10 (1). Using the same methods with male and female test subjects, they found few differences between early and later memories in both sexes when three variables were accounted for: emotion, perceptual information, and perspective. In two sessions, West and Bauer asked male and female students at the University of Minnesota to report events they recalled from their childhood. In the first session, students had to report four memories from before the age of seven; in the second session, students were asked to report on memories from after the age of seven. Both sessions were conducted in one sitting, and everyone had one week to complete the reports. It was important that accuracy and detail be accounted for equally (1).
Delineating Infantile Amnesia
Memory can be difficult to interpret if the context in which it is described is not understood entirely by another party. In a 2005 study, Darryl Bruce examined the depth of memory expression in young adults by focusing on the nuances between first memory fragments and first event memories. Bruce defines memory fragments as isolated memory moments that have no event context and are remembered as images, behaviors, and emotions. First-fragment memories are believed to form earlier in life than first event memories. Bruce and his team concluded that childhood amnesia is gradually supplanted by fragments of early childhood experiences, and not by episodic memories.

In Bruce's experiment, the research team designed a web survey taken by 185 students from Saint Mary's University. Students were asked to provide information about their lives, in addition to their earliest memories. The students indicated their age at the time each memory had taken place and their confidence in that approximation. Participants believed that they were younger at the time of their earliest fragment memory than at the time of their earliest event memory. These results are reflected in the calculated mean ages of the participants, who had been divided according to the information they were asked to provide. The data showed consistency regarding the expected early timeframe of fragmental memory. The results not only showed significant uncertainty in the estimated timing of fragmental memories, but also poorer content quality (5).
Autobiographical Memories Across a Wide Age Range
Studies of infantile amnesia frequently examine adult subjects, but few have used adolescents. In a 2005 study conducted by Peterson, Grant, and Boland, memory recollection was investigated in 136 young children and adolescents in the 6-19 age range. Involvement of parents was crucial, because they provided the facts that would help the researchers attain more accurate results (6). Over several years, researchers visited the homes of the recruited participants to conduct the investigations. Each participant had to answer several questions regarding the earliest recollections of their childhood. A parent or guardian provided confirmation of details as needed. If parents believed certain memories were incorrect, that information was discarded. Also, if parents believed that certain recollections were based explicitly on photos and videos, those memories were excluded. The research team then developed a scoring system that took into account age of earliest recollections, nature of the event, emotional tone, structure, and social orientation surrounding the memories. Children who were between six and nine had earlier memories than older children. In contrast, older children tended to have more emotional memories than younger children, and there were few gender differences across all the children’s ages (6).
Culture and Infantile Amnesia
Zola, Bruce, Harley, Reese, and Bauer studied different factors that affect infantile amnesia, yet none took into account cultural influences on memory. In 2003, Qi Wang provided evidence that childhood environment can significantly affect autobiographical memory and infantile amnesia. Just as different maternal styles of elaboration affect memory, broader factors such as cultural difference also affect the degree of memory recall. In a recent cross-cultural study, Wang examined the differences in memory between Eastern and Western culture groups. Wang and his team compiled memory narratives from American and Chinese mothers and their three- to four-year-old children, asking 154 Chinese and American children a series of questions relating to early memories and documenting the answers in tape recordings. On average, Caucasians in Western nations have a relatively good recollection of events by 3.5 years of age. Asians attained similar levels of recollection, but generally several months later (7).

Not only does the average age of earliest memory vary between cultures, but the nature of event recall also seems to be quite different. Wang's testing of international subjects demonstrated that Western childhood memories tended to be self-focused and emotionally detailed, while Chinese childhood memories were more generic and less focused on the individual. The results stem from different child-rearing practices and beliefs: Western nations tend to place more emphasis on socializing children than Eastern nations, which explains why American children provided more emotionally detailed memories than their Chinese counterparts. Traditionally, Chinese families do not introduce social skills until the child reaches six years of age; by that age, children are disciplined in societal and filial duties (7).
Where the Problems Lie—The Issue of Infantile Amnesia
Factors that are important in autobiographical memory formation are multitudinous and complex. Although the studies described above have different experimental focuses, they collectively show that autobiographical memory is dynamic and closely related to infantile amnesia. Before one can draw conclusions about infantile amnesia, one must first question the validity of autobiographical memory, and thus the results of the experiments above. Stuart Zola's study is most relevant in resolving this issue, because he addresses the fallacies of memory. We are all susceptible to different kinds of memory distortion. At times, distinguishing between the real and the imagined leads to confusion; such confusion is the result of the dynamism of memory. And so, memory cannot be categorized as a simple, well-understood entity. Storage of autobiographical memory, for example, is affected by new events. Consequently, certain components of a memory may come to differ from their original versions, facilitating the loss of some information. Memory recall will therefore be not only distorted but also incomplete, due to the consistent synaptic changes that occur with every recollection (3).

Our bodies are changing constantly, from the ions that exit protein channels to the reproduction of the cells that make up our skin tissue through mitosis. So it should come as no surprise that our neural connections are just as dynamic as other regions of the body. If these synapses are constantly changing, then wouldn't the stored information being transmitted change as well? It is important to recognize that the reorganization of neural configurations happens gradually with time (3). Memory does not change quickly within one period of development; if it did, complications in learning would arise and information would be highly unreliable. But since humans are always learning, even during infancy, information changes do not occur too quickly in any one stage of development.

Thus, when juxtaposing Zola's analysis with Bauer and West's results, one observes sharp contrasts. Bauer and West's results showed that there are no substantial differences between early and late memories. One may be inclined to accept the validity of the data; however, it is difficult to reconcile that memories from two distinct stages of human development (early, age five, and late, age ten) are considerably consistent. Developmental differences must play a critical role in memory. Older people tend to have more understanding and awareness of the world, allowing them to evaluate cause and effect in their lives, and a higher degree of understanding allows for a greater level of recollection. Furthermore, older children and adults have the unique ability to rationalize events well. The interpretation of the results may have been right, but the results themselves could have been flawed by the subjectivity of the participants, who may have unintentionally favored one version of an event over another. West and Bauer considered such possibilities in their final discussion, showing that they were aware of potential sources of error. If all potential errors are set aside and the results are legitimate, then West and Bauer's discovery of similarities between stages of memory, and its implications for autobiographical memory, is telling (1).

Furthermore, there is a discrepancy between Peterson, Grant, and Boland's 2005 results and West and Bauer's 1999 results, even though both studied similar subjects and their early and later memories. In the 2005 study, the researchers found that older children's memories are likely to follow an articulate narrative form, unlike younger children's memories, which tend to capture specific moments in time. They concluded that the earliest memories of younger children reflect less elaborative narrative skill, which is quite similar to Harley and Reese's findings in their 1999 study. Basically, their results suggested that differences do exist between early and later memories (6).

In many of the studies described above, researchers and participants alike had been reared according to Western ideologies and practices. But what implications do their results suggest on a global scale? As suggested earlier regarding Qi Wang's report, there may be flaws in contemporary theories of infantile amnesia because they are rooted primarily in Western views, which do not correlate exactly with other cultures. Qi Wang details how infantile amnesia is not a standard condition, because nurture (religion, behavioral lessons and expectations, philosophy, and politics) plays a role. Autobiographical memory is not only an expression of the individual, but also a product of culture. Results on infantile amnesia should be considered carefully and with great caution, especially when other cultural standards may play a role (7).
Conclusion
Though we now know more about infantile amnesia than ever before, the data remain inconclusive. Memory is not simply an object one can place in a Petri dish and observe under a microscope; it has too many possible distortions and too many unaccountable gaps to support incontestable conclusions. Still, we cannot allow the unreliability of memory to deter further study and consideration of experimental data such as West and Bauer's 1999 experiment. These studies help us achieve a greater understanding of the elusiveness of memory. We could also learn more about ourselves, since infantile amnesia is tied so closely to autobiographical memory. We understand some of the neurological, cognitive, social, and cultural factors that explain why almost every human being has so few recollections of the first years of life, but we need to explore the issue further. Our scrapbooks still contain missing pages, but with more carefully conducted experiments, we will be able to reconstruct our pasts and live for a more enlightened future.

References
1. T. A. West, P. J. Bauer, Memory 7, 3 (1999).
2. J. E. Zull, The Art of Changing the Brain: Enriching the Practice of Teaching by Exploring the Biology of Learning (Stylus Publishing, Sterling, 2002), p. 81.
3. S. Zola, Clinical Psychology Review 18, 8 (1998).
4. K. Harley, E. Reese, Developmental Psychology 35, 5 (1999).
5. D. Bruce, Memory & Cognition 33, 4 (2005).
6. C. Peterson, V. V. Grant, L. D. Boland, Memory 13, 6 (2005).
7. Q. Wang, Memory 11, 1 (2003).
Have you ever done anything like this? Have you ever played with one of these? Submit your research to the DUJS!
Images courtesy of the Centers for Disease Control and Prevention and Tim Shen '08.
science history

Garnet: A Tour of Barton Mine, North Creek, NY
Laura Myers '08

Garnet, the January birthstone and official gem of New York State, is a mineral prized for its beauty but often unrecognized for its practical applications. Garnet jewelry was historically worn by the nobility and upper-middle classes; garnet necklaces dating back to the Bronze Age have been found in the graves of upper-class citizens and in the tombs of some of Egypt's oldest mummies, used to ward off evil spirits when the deceased entered the next world. For centuries, this stone has been a symbol of faith and trust, and gifts containing garnet are seen as a token of loyalty. Besides its aesthetic value, however, many of garnet's qualities, like hardness, specific gravity, chemical inertness, nontoxicity, and angular fractures, have lent themselves to practical applications such as sandpapering, drilling, and polishing glass.

The stone can take on a variety of appearances. Its color can vary from red-brown to gold and light green. However, one clue that can be used to identify garnet is location. Garnet is found in gneisses and schists as well as contact-metamorphic deposits in crystalline limestones, pegmatites, and serpentinites (1). Locations in the United States where garnet is mined include New York, Idaho, Montana, New Hampshire, North Carolina, and Oregon (2). The largest garnet reserve is a crystalline gneiss found in North Creek, NY, which I visited this past summer.

In the late 1800s, Mr. Barton of North Creek began mining garnet on his land on Gore Mountain, using picks and chisels to etch out the deposits of red-brown stone from the surrounding hornblende and feldspar. He soon developed a small garnet company called the Barton Mines Co. (3). In order to extract large amounts of stone faster to be shipped out to waiting jewelry-makers, he began blasting the sides of the mountain with dynamite. Within a year, however, the miners hit the water line, and three deep lakes sprang up unexpectedly. Since these lakes blocked access to the quarry, Mr. Barton and his workers moved to another mining site five miles away on Ruby Mountain. Garnet mining is still active at that site, and the original site on Gore Mountain is open to tourists (3).

Garnet was initially found in this area. The water line was accidentally damaged, resulting in flooding that prevented further garnet mining on this mountainside. Image courtesy of Laura Myers '08.

When discussing the composition of a mineral, it is important to consider its physical properties. Garnet is a silicate with the general formula A3B2(SiO4)3, where A can be Mg2+, Fe2+, Mn2+, or Ca2+ and B can be Al3+, Fe3+, or Cr3+ (4). It has a crystalline structure of alternating large divalent and small trivalent ions; each divalent ion is surrounded by eight oxygen atoms, which form a distorted cube around it. Garnet comes in many colors, but jewelry is usually made with the reddish-brown variety. The stone appears dark brown when extracted from metamorphic rock but can be polished to a reddish tint by jewelry-makers. Garnet is a hard material, rating 6.5-7.5 on the Mohs scale of hardness (4). (Diamond, with a Mohs rating of 10, is considered the hardest material known.)

Garnet can be classified into two general groups based on elemental composition: pyralspite, which contains iron, and ugrandite, which contains calcium. Almandine, for example, belongs to the pyralspite group and is the most common variety of iron-containing garnet. In fact, almandine is the most abundant variety of garnet in the Barton Mine, where it appears as reddish-brown crystal deposits within large boulders of metamorphic rock. While on our tour of the Barton Mines, we were able to mine garnet ourselves and purchase the unpolished stone at the bulk price. Our tour guide showed us how to pour a bucketful of water over the ground and look for pieces of stone bigger than a dime (4). The water helped wash away dirt and made the stones shinier and easier to locate. We were able to find two pieces of reddish-brown garnet approximately 0.5 inches in diameter.

Many industries, including ceramics, glass-making, shipbuilding, and wood furniture operations, have used garnet as an abrasive or polishing agent (5). Crushed garnet is made into sandpaper by applying glue to a paper surface, spreading garnet powder over it, and allowing it to dry. The powder is usually very fine and dark reddish-brown, and many tools have been developed for grinding garnet into powder. Garnet is used in a variety of other applications, such as grinding plate glass and polishing optical lenses, because it is fairly inexpensive and extremely hard. In fact, a piece of garnet from the Barton Mine was used to polish the Hubble Telescope (4). Garnet is also used in the scratch-free lapping of semiconductor materials and is currently replacing silica sand in blast cleaning media because it does not produce dangerous airborne dust. Finally, it is used for blast cleaning aluminum surfaces on aircraft and cleaning drill pipes in the petroleum industry (5).

The industries that currently consume the most garnet are those that use water jet cutting to cut materials such as steel (5). Water jet cutting uses a jet of water at high velocity or pressure, together with an abrasive, to cut through material; essentially, an accelerated version of water erosion is used to make metal parts for machinery. Garnet is added to increase the abrasive power of the stream. This technology has reduced the difficulty and cost of cutting hard materials, and is used in mining and in cutting parts for household appliances and motors.

Recent statistics show that the US continues to be one of the leading consumers of industrial garnet. In 2005, the US accounted for over 35 percent of global garnet consumption, which totaled 68,600 metric tons (5). Since the 1990s, however, the amount of garnet bought from foreign exports in domestic markets has begun to exceed the amount bought from domestic production. Prices of garnet vary depending on application, quality, quantity purchased, source, and type. In 2005, the price of crude garnet ranged from $58 to $120 per metric ton with an average of $96 per metric ton, while the price of refined garnet ranged from $61 to $298 per metric ton with an average of $268 per metric ton (5). It is thought that demand for the garnet polishing powders commonly used on television and monitor screens will decline as flat-screen technology takes hold of the television industry, since flat screens do not need to be polished with garnet during manufacturing (5).

On the other hand, recent increases in petroleum prices have created high demand for new sources of oil, which may, in turn, increase the demand for garnet, as it is used in cleaning drill pipes (5). Since oil became an important source of energy, however, other types of fuel have been developed, such as vegetable oil and hydrogen. It will be interesting to see which sources of fuel power the new millennium. The steam engine powered most of the Industrial Revolution, so it is possible that another fuel source (a newly refined oil, a controlled form of hydrogen fusion, or a material as yet undiscovered) may fuel a second Industrial Revolution in the future. If oil becomes the fuel of choice, garnet will be valuable as a component of the cleaning media for drill pipes.

Rock samples showing pockets of garnet that must be removed for use in drill tips or in jewelry. Image courtesy of Laura Myers '08.

While new uses for garnet are being developed every day, it is still common in jewelry. In fact, a small jewelry store adjoining the Barton Mines Museum sells necklaces, earrings, and rings made of garnet. Some pieces combine garnet with other gems, such as diamond, mounted on silver or gold. Prices for the finished product vary according to the size and purity of the garnet as well as the value of the stones surrounding it. For example, a necklace mounted in silver with a precious garnet the size of a pinky fingernail cost about $24; the same necklace mounted in gold was $37. Of course, this price accounts for the extraction, processing, polishing, and mounting of the stone, and includes the amounts charged for the garnet, the mounting, and the labor that went into producing the finished product. A cost analysis of the jewelry sold in this store was conducted based on estimates from July 8, 2006. On our tour we found that stone carried out of the mine cost $1/lb, and we can estimate that a stone of the size in the aforementioned necklace would weigh about 3 g (0.007 lb), so if mined directly, the garnet itself should cost no more than a penny (a short script verifying this arithmetic appears after the references). Thus labor and mounting cost far more than the stone itself, illustrating that while pure garnet is a high-quality stone, it is great for budget-conscious consumers.

The Gore Mountain Mine is one of the world's important garnet reserves, located in an area not yet too industrialized. The museum is a classic rural upstate New York building without air conditioning; the tour was given by a student whose summer job was to work at the museum store; and the road leading to the mine is not yet paved and winds around the mountain with hairpin turns every 50 meters. The area has remained a place of natural beauty, so far spared from seizure of its material resources. It is valuable to visit the Barton Mine in order to understand how materials can be extracted from the earth and put to use for both practical and aesthetic purposes. Garnet may even be used in more diverse applications as new technology continues to develop.

References
1. O. Johnson, Minerals of the World, Princeton Field Guides (Narayana Press, Princeton and Oxford, 2004), pp. 29-32, 68-69.
2. O. Johnson, Minerals of the World, Princeton Field Guides (Narayana Press, Princeton and Oxford, 2004), p. 68.
3. Garnet Mines Tour, Barton Mines, Gore Mountain, Long Lake, NY, July 8, 2006.
4. O. Johnson, Minerals of the World, Princeton Field Guides (Narayana Press, Princeton and Oxford, 2004).
5. D. Olson, Minerals Yearbook, 2005, pp. 28-32. Accessed online on August 15, 2006 through Kresge Physical Sciences Library's mineralogy resources.
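The cost comparison in the jewelry paragraph above can be checked mechanically. The snippet below is a minimal sketch using only the figures quoted in the article ($1 per pound of rough stone at the mine, the article's ~3 g estimate for a pinky-fingernail-sized garnet, and the $24 silver and $37 gold necklace prices); the grams-per-pound constant is standard.

```python
# Compare the cost of the raw garnet to the finished necklace, using only
# the figures quoted in the article above.
GRAMS_PER_POUND = 453.6

rough_price_per_lb = 1.00   # dollars per pound of unpolished stone at the mine
stone_mass_g = 3.0          # article's estimate for a necklace-sized stone (~0.007 lb)

stone_cost = rough_price_per_lb * stone_mass_g / GRAMS_PER_POUND
print(f"Raw stone: ${stone_cost:.4f}")  # about $0.0066, i.e. under a penny

for metal, necklace_price in (("silver", 24.00), ("gold", 37.00)):
    print(f"{metal} necklace: ${necklace_price:.2f}, "
          f"roughly {necklace_price / stone_cost:,.0f} times the raw stone cost")
```

As the output shows, the setting, mounting, and labor, not the garnet itself, dominate the price of the finished piece, which is exactly the article's point.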
Vibrio cholerae in the human intestine can cause increased mucus production, as shown; severe diarrhea is the result. Image courtesy of the Centers for Disease Control and Prevention.
biology
The Life and Death of the Cholera Pathogen Vibrio cholerae
Chad Gorbatkin '08
Dirty water has a profound effect on the lives of the global majority. Waterborne disease takes lives directly (1) and further weakens or kills those affected by HIV/AIDS, malaria, and malnutrition (2, 3). Viral, bacterial, prion, fungal, and protozoan waterborne diseases can be eliminated simultaneously with the provision of clean drinking water (3). In regions where reliable sanitation systems do not yet exist, local environmental conditions may dictate which pathogens will be more abundant. This article will focus on the cholera pathogen Vibrio cholerae, a bacterium that causes extreme physical pain with high mortality, primarily in Southeast Asia, Africa, and Latin America, although it occurs throughout the world (4, 5, 6).

V. cholerae is a gram-negative bacterium responsible for 180,000-500,000 cases of cholera infection annually (7, 8). V. cholerae has at least 155 serogroups, and of these, only two (O1 and O139) contain strains that are currently responsible for epidemic and endemic cholera (9). Strains within these two serogroups vary in production of cholera toxin, colonization factors, surface antigens, and polysaccharides contributing to chlorine resistance (10). Since 1817, eight pandemics of cholera have taken lives across the world, spanning all continents except Australia and Antarctica (6). The exchange of V. cholerae between regions can occur by several vectors, including infected individuals and contaminated food or water (6, 11).

Pathogenic strains of V. cholerae can remain viable in fresh, brackish, or salt water for months before disease outbreaks actually occur, typically when water temperatures rise (12). In these aquatic systems, the bacterium can attach to many different organisms, including zooplankton (e.g. copepods) (13), phytoplankton (14), or amoebas (15), and benthic organisms such as oysters (11). The bacterium can also form biofilms on these hosts (16, 17). Influxes of rain or nutrients may cause seasonal maxima, first of phytoplankton populations and then of copepods (6, 11). With a bloom of organisms that serve as V. cholerae attachment sites, and with additional resources provided directly to the bacteria, the bacterial populations thrive (6). Untreated water therefore has a greater probability of containing sufficient toxin-producing cells to initiate an infection, especially in children, the elderly, or those with compromised immune systems (3, 8).

Once an infectious dose of approximately 10⁴-10⁶ cells is ingested (6), the cholera toxin produced acts primarily on the epithelial cells of the intestine (8, 18). After the toxin binds to a cellular receptor, its most significant effect is inhibition of the guanosine triphosphatase (GTPase) activity of a signaling protein subunit (Gαs) (19, 20). As part of the signaling pathway, when Gαs is bound to GTP (Gαs-GTP), it activates adenylate cyclase, which then produces cyclic adenosine 3',5'-monophosphate (cAMP) (18). The secondary messenger cAMP activates chloride export by the cystic fibrosis transmembrane conductance regulator (CFTR) and inhibits sodium absorption (8). The cells therefore continue to lose salts, which are followed by water through osmosis. Normally, the hydrolysis of GTP on Gαs stops the signaling pathway; the cholera toxin inhibits this GTPase activity. The result is the rapid onset of diarrhea, with patients losing 10-30 liters of fluids in the first three days (5). The severe volume depletion and electrolyte imbalance that follow can result in dehydration, shock, and acidosis (8). Acidosis occurs after the loss of water, and therefore of blood volume, impairs normal kidney function (21): the tubule cells become less efficient at sodium reabsorption, lowering the blood's anionic base concentration, and less efficient at ammonium (i.e. net hydrogen) excretion. The result is an overall lowering of blood pH.

Each step along this signaling cascade represents a target for cholera treatment for those with access to healthcare (8). Possibilities include inhibition of the cellular receptor for the cholera toxin, deactivation of the CFTR protein, or, most importantly, oral rehydration solutions. Without proper medical treatment or electrolyte replacement, the mortality rate is approximately 50% (5).

Modern science has the tools to prevent and treat cholera and other waterborne diseases, but for approximately 2.5 billion individuals worldwide, economics prevents those technologies from ever reaching them (1). Even if international aid for water supply and sanitation increases fifty-fold, universal access to clean water will not be achieved until after 2025 (22, 23). Until that day, makeshift technologies will be necessary to reduce waterborne disease (3). Simple boiling is often effective, but fuel is too expensive for many households. With sufficient finances, an intermediate water treatment system may address waterborne disease at the scale of local communities (e.g. sedimentation tanks followed by slow-sand filtration) (24); when community-level systems are inaccessible, however, cost-effective technologies addressing specific local pathogens such as V. cholerae are useful at the household level (7, 25). A review of four general household prevention techniques follows the brief illustrative sketch below.
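Before turning to prevention, the toy model below illustrates the cascade just described. It is a qualitative sketch, not a physiological simulation: the rate constants, time step, and variable names are illustrative assumptions. It contrasts cumulative chloride loss when the Gαs "off switch" works normally with the case in which cholera toxin blocks GTP hydrolysis.

```python
# Toy model of the cholera toxin's effect on the Gs-alpha signaling cascade.
# All rate constants and units are illustrative assumptions, not measured values.

def simulate(toxin_present: bool, steps: int = 100, dt: float = 0.1) -> float:
    g_gtp = 1.0          # active Gs-alpha (bound to GTP), arbitrary units
    camp = 0.0           # cyclic AMP produced by adenylate cyclase
    chloride_lost = 0.0  # cumulative Cl- exported through CFTR

    # Cholera toxin blocks the GTPase "off switch" on Gs-alpha.
    hydrolysis_rate = 0.0 if toxin_present else 0.5

    for _ in range(steps):
        g_gtp -= hydrolysis_rate * g_gtp * dt       # GTP hydrolysis shuts off the signal
        camp += 1.0 * g_gtp * dt - 0.2 * camp * dt  # adenylate cyclase vs. degradation
        chloride_lost += 0.8 * camp * dt            # high cAMP keeps CFTR exporting Cl-
    return chloride_lost

print("Cl- lost, healthy cell:     ", round(simulate(False), 2))
print("Cl- lost, intoxicated cell: ", round(simulate(True), 2))
# With the toxin, Gs-alpha stays locked "on," cAMP stays high, and chloride
# (followed osmotically by water) keeps leaving the cell.
```

Each term in the loop corresponds to a named step in the article's cascade, which is why every step (receptor binding, Gαs, adenylate cyclase, CFTR) is also a candidate drug target: interrupting any one of them halts the runaway chloride and water loss.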
The Accessible, Cost-Effective Prevention of Cholera
When fuel for boiling drinking water is too expensive, a useful cholera prevention technique is filtration through a household fabric, specifically the Bangladeshi sari cloth (7). When the cloth is folded so that water is filtered through four layers, anything over 20 μm is removed, including particles and plankton to which the cholera pathogen may be attached. In the study by Colwell et al. (2003), household filtration through sari cloth reduced clinical cholera cases by 48% in a study of approximately 133,000 individuals in Bangladesh (7). Filtration through the cloth is an accessible technology for many communities with endemic cholera, and could be a vital part of an effective cholera-prevention program by removing V. cholerae reservoirs from drinking water. The major drawback is that free-swimming bacteria, including V. cholerae and other pathogenic species, are not removed from the water; however, this could be solved by combining simple filtration with one of the techniques discussed below.

In areas with high incidence of solar radiation, household batch reactors (i.e. containers constructed from recycled household materials) can utilize the bactericidal effects of both temperature and ultraviolet (UV) light to clean water. Martin-Dominguez et al. (2005) constructed batch reactors from two-liter plastic water bottles over a reflective metal (e.g. aluminum) that deactivated 100% of coliform bacteria after four hours in the sun (25). Bottles painted half-black maintained temperatures higher by 15-25 °C and may further deactivate pathogens by pasteurization (25). Mani et al. (2006) showed that conditions of low solar radiation require the use of a fully transparent bottle over a reflective surface (26), which could be particularly important, for example, when using batch reactors during the monsoon season in areas of Southeast Asia with endemic cholera. With sub-optimal solar radiation, the water's exposure to UV must be optimized, because the bactericidal properties of UV are more important than thermal inactivation (26). V. cholerae is vulnerable to both UV and pasteurization (27, 28); however, further research should investigate the exact V. cholerae deactivation capabilities of a given type of batch reactor before the prototype is presented to a community. Batch reactors are an accessible, cost-effective technology that can be useful in areas without clean drinking water (25), including areas with endemic V. cholerae. Because UV radiation deactivates viruses, bacteria, and protozoa (29, 30), these batch reactors may be useful in confronting multiple waterborne pathogens simultaneously.

Halogen-releasing agents, specifically those containing chlorine, have long been important in sanitizing water (3), particularly in areas with endemic cholera (31). Chlorine-releasing agents include N-chloro compounds, chlorine dioxide, and sodium hypochlorite; although they act similarly, these compounds deactivate microbial contaminants by slightly different mechanisms (32). Sodium hypochlorite (NaOCl) is found in household bleach (3-6% NaOCl by mass) and is therefore the most useful, because bleach is cheap and abundant even in rural communities. In water of pH 4-9, NaOCl ionizes to Na+ and OCl- and is found in equilibrium with hypochlorous acid (HOCl). HOCl is the source of active chlorine. In the presence of a bacterial cell, HOCl will chlorinate nucleotide bases, disrupt oxidative phosphorylation, and prevent growth and proliferation by inhibiting up to 96% of DNA synthesis and between 10-
Halogen-releasing agents, specifically those containing chlorine, have long been important in sanitizing water (3), particularly in areas with endemic cholera (31). Chlorine-releasing agents include N-chloro compounds, chlorine dioxide, and sodium hypochlorite, and although they act similarly, these compounds deactivate microbial contaminants by slightly different mechanisms (32). Sodium hypochlorite (NaOCl) is found in household bleach (3-6% NaOCl by mass), and is therefore the most useful because bleach is cheap and abundant even in rural communities. In water of pH 4-9, NaOCl ionizes to Na+ and OCl- and is found in equilibrium with hypochlorous acid (HOCl), the source of active chlorine. In the presence of a bacterial cell, HOCl will chlorinate nucleotide bases, disrupt oxidative phosphorylation, and prevent growth and proliferation by inhibiting up to 96% of DNA synthesis and between 10 and 30% of protein synthesis (32). Some experts have attributed the 1991 cholera epidemic in Peru partially to an antichlorination campaign based on NaOCl's carcinogenic properties (31). Chlorination of drinking water using household bleach is regularly practiced in many rural Peruvian communities, with government signs proclaiming "No hay diarrea con agua clorada" ("There is no diarrhea with chlorinated water"), alongside directions to add two drops of bleach per liter of water. Chlorination can produce hazardous disinfection byproducts (DBPs; e.g. trihalomethanes) that are linked to cancers and birth defects (33); however, in many regions, chlorination programs have successfully prevented waterborne diseases and saved lives for many years (3, 32).

Clorox: One effective and low-cost method of sanitizing drinking water is to add small amounts of bleach.

When used properly (i.e. 2-4 drops of 5.25% NaOCl per liter of water), chlorination is more reliable than solar radiation batch reactors because it acts as a residual disinfectant for viruses, bacteria, and protozoa (3). Because V. cholerae can be found in biofilms (17), of which chlorine is only able to deactivate the external layer (16), filtration pre-disinfection is an essential complement to this technique. If drinking water is especially murky, additional chlorine can be safely added until there is a slight chlorine odor, but total chlorine should never exceed 5 mg/liter (34). Chlorination, or another halogen-releasing agent, is vital in the rapid sanitation of water, and is an important element even in advanced water cleaning systems.
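As a rough check on those directions, the drop arithmetic can be scripted; the drop volume (~0.05 mL) and bleach density (~1.0 g/mL) below are round-number assumptions, and the computed figure is the applied dose, not the residual chlorine that remains once the water's chlorine demand has been consumed.

    def applied_dose_mg_per_liter(drops, naocl_fraction=0.0525,
                                  drop_ml=0.05, density_g_per_ml=1.0):
        """Approximate applied NaOCl dose from drops of household bleach.

        Round-number assumptions: one drop is ~0.05 mL and bleach density
        is ~1.0 g/mL; real drops and bleach strengths vary.
        """
        bleach_g = drops * drop_ml * density_g_per_ml
        return bleach_g * naocl_fraction * 1000.0  # mg per liter treated

    for drops in (2, 3, 4):
        print(drops, "drops ->",
              round(applied_dose_mg_per_liter(drops), 1), "mg/L")
    # 2 drops -> ~5.2 mg/L applied; 4 drops -> ~10.5 mg/L applied.
    # Part of the applied dose is consumed by organic matter (the
    # water's chlorine demand), which is why the 5 mg/L ceiling in the
    # text is best read as a limit on the chlorine remaining in the water.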
Vaccines for cholera have also been used for over a century, but with minimal efficacy in general because the duration of the immunogenic response is very short-lived, especially for children under five (8). Further difficulties arise from the diversity of pathogenic strains within the O1 and O139 serogroups (35). Cholera vaccine research currently focuses on the development of a single-dose vaccine that could be affordable and rapidly immunogenic for all ages, from infant to adult (35, 36). The most promising candidate is Peru-15, a live oral vaccine of the El Tor Inaba strain in the O1 serogroup. In the 2007 study of Qadri et al. (35), 84% of Bangladeshi children two to five years old given the large-dose vaccine developed a vibriocidal antibody within seven days. The vaccine has already been shown to be safe and immunogenic in adults (36), and together, the two findings suggest that the Peru-15 vaccine could be useful in high-risk areas before or during a cholera outbreak. Further research will be necessary to determine the duration of this immune response and its effectiveness against different strains of V. cholerae (35). Currently, cholera vaccines may not be sufficiently accessible or long-lasting to be primary components of cholera prevention programs. In the future, an advanced understanding of immunological mechanisms may enable the rational synthesis of a universal cholera vaccine that elicits a stronger and longer-lasting immune response (37). Vaccinology is a dynamic science that will continue to evolve with the understanding of immunological mechanisms, but as of yet, there are no universal vaccines (i.e. ones that cover about 90% of the target population) for many bacterial pathogens, including V. cholerae.

Conclusion

Every year, V. cholerae infects hundreds of thousands of people with cholera, a rapid and debilitating disease. In areas without formal water sanitation or adequate medical supplies, there are effective, accessible technologies that can prevent the disease. Simple filtration, solar radiation batch reactors, and household bleach can save lives in regions of endemic cholera, while at the same time removing other waterborne pathogens from drinking water. Education programs currently are and will continue to be essential in bringing these technologies to high-risk areas until the development of permanent sanitation infrastructure.

Image courtesy of Chad Gorbatkin '08.
Acknowledgments
I would like to thank Professor Kathryn Cottingham for her generous insights and advice.

References
1. A. Fenwick, Science 313, 1077 (2006).
2. R. T. Bryan, Clinical Infectious Disease 21, 62 (1995).
3. World Health Organization, Emerging Issues in Water and Infectious Disease (Geneva, 2003).
4. D. R. Allton et al., Southern Medical Journal 99, 765 (2006).
5. G. H. Rabbani et al., Journal of Infectious Disease 191, 1507 (2005).
6. R. R. Colwell, Science 274, 2025 (1996).
7. R. R. Colwell et al., Proceedings of the National Academy of Sciences USA 100, 1051 (2003).
8. J. R. Thiagarajah and A.S. Verkman, Trends in Pharmacological Sciences 26, 172 (2005).
9. S. M. Faruque, M.J. Albert and J.J. Mekalanos, Microbiology and Molecular Biology Reviews 62, 1301 (1998).
10. J. F. Heidelberg et al., Nature 406, 477 (2000).
11. K. L. Cottingham et al., Frontiers in Ecology and the Environment 1, 80 (2003).
12. A. Huq et al., Applied and Environmental Microbiology 71, 4645 (2005).
13. A. Huq et al., Applied and Environmental Microbiology 45, 275 (1983).
14. M. L. Tamplin et al., Applied and Environmental Microbiology 56, 1977 (1990).
15. H. Abd et al., FEMS Microbiology Ecology 60, 33 (2007).
16. J. W. Costerton, P.S. Stewart and E.P. Greenberg, Science 284, 1318 (1999).
17. P. I. Watnick and R. Kolter, Molecular Microbiology 34, 586 (1999).
18. D. Cassel and T. Pfeuffer, Proceedings of the National Academy of Sciences USA 75, 2669 (1978).
19. D. Cassel and Z. Selinger, Proceedings of the National Academy of Sciences USA 74, 3307 (1977).
20. S. J. Cook and F. McCormick, Science 262, 1069 (1993).
21. C. P. Anthony, The American Journal of Nursing 63, 75 (1963).
22. The United Nations, Universal Declaration of Human Rights (Geneva, 1998).
23. P. H. Gleick, Science 302, 1524 (2003).
24. J. Davis and R. Lambert, Engineering in Emergencies: A Practical Guide for Relief Workers (ITDG Publishing, Warwickshire, 2002).
25. A. Martin-Dominguez et al., Solar Energy 78, 31 (2005).
26. S. K. Mani, R. Kanjur, I.S. Bright Singh, R.H. Reed, Water Research 40, 721 (2006).
27. M. Abbaszadegan et al., Water Research 31, 574 (1997).
28. M. D. Johnston and M.H. Brown, Journal of Applied Microbiology 92, 1066 (2002).
29. C. A. Suttle et al., Applied and Environmental Microbiology 58, 3721 (1992).
30. D. E. Huffman et al., Water Research 36, 3161 (2002).
31. J. Tickner and T. Gouveia-Vigeant, Risk Analysis 25, 495 (2005).
32. G. McDonnell and A.D. Russell, Clinical Microbiology Reviews 12, 147 (1999).
33. M. J. Nieuwenhuijsen et al., Occupational and Environmental Medicine 57, 73 (2000).
34. U.S. Environmental Protection Agency, Emergency Disinfection of Drinking Water (Washington D.C., 2006).
35. F. Qadri et al., Vaccine 25, 231 (2007).
36. F. Qadri et al., Journal of Infectious Diseases 192, 573 (2005).
37. S. H. Kaufmann, Nature Reviews: Microbiology 5, 491 (2007).
Interested in science writing or research? Being on the DUJS staff is a great way to experience all aspects of science writing, research, and publication. Blitz "DUJS" for more information.
psychology
Resilience in Child Post-Traumatic Stress Disorder: Implications for Treatment GRACE CHUA ‘07
Introduction
On March 13, 1996, a gunman shot and killed sixteen kindergartners and one adult at a primary school in Dunblane, Scotland. Other children who witnessed the massacre experienced post-traumatic stress disorder (PTSD), an anxiety disorder that persists for months or even years after a traumatic event or series of events. Likewise, many children who survived events such as 9/11 or the Indian Ocean tsunami also developed the symptoms of PTSD. Yet a significant subset did not. Some research even suggests that resilience is less a rare exception than an “especially effective form of normal adaptation” - what one researcher termed “ordinary magic” (1, 2, 3). What protective factors – environmental, cognitive, and neurophysiological – serve to enhance such resilience? And what implications does such a multifactorial analysis have for resilience-promoting interventions and treatment?
What is PTSD?
Post-traumatic stress disorder (PTSD) is classified as an anxiety disorder, but unlike other anxiety disorders, does not have an element of irrationality. It occurs after a traumatic event, which the DSM-IV-TR defines as involving "actual or threatened death, serious injury or the threat to self or others" (APA, 2000) (Appendix 1). These traumas can be acute – one-off, unique and unanticipated events such as 9/11, the Virginia Tech school shooting, or house fires. Or they can be chronic – involving continued exposure to horrific events such as chronic child abuse. In children, reactions to trauma include intense fear, helplessness, horror, agitation or disorganised behaviour; they may re-enact the traumatic event or have nightmares about it, for example. PTSD has three symptom clusters: re-experiencing, avoidance and increased arousal (4). Children may re-experience the event by participating in repetitive play that revisits aspects of the trauma; they may have intrusive, distressing recollections of the event that are triggered by traumatic reminders – for instance, a child who survived the Indian Ocean tsunami may see a film about the sea and shriek in terror. Young children might also have nightmares as a result of the traumatic event without any recognisable trauma content. In addition, children may avoid stimuli associated with the traumatic event, choosing not to talk about it. Avoidance may also be expressed as numbing – for example, the child who has lived through a serious car accident may feel detached from her friends and cease to enjoy playing with them. Finally, the third symptom cluster involves heightened physical and
emotional arousal. The PTSD sufferer may be unable to sleep, have difficulty concentrating on his schoolwork, or be unusually angry or irritable. Thus PTSD in children must be distinguished from other anxiety or mood disorders, and clinicians should note that PTSD symptoms may manifest differently depending on a child’s age and developmental stage.
Vulnerabilities, Risks and Protective Factors in PTSD
According to the diathesis-stress model of psychopathology, individuals with certain genetic vulnerabilities are more susceptible to developing psychopathology when they are exposed to environmental stresses. Some biological vulnerabilities, such as an overactive hypothalamic-pituitary-adrenal (HPA) axis, translate into cognitive or behavioural vulnerabilities like hypervigilance or a tendency to dissociate – to 'split off' emotionally – when faced with trauma. However, no single factor, whether biological, psychological or environmental, is necessary and sufficient for an individual to develop PTSD (2, 5, 6). In addition, it may be worthwhile to include protective factors (that increase resilience) in a model of psychopathology. Hoge et al. (5) believe that resilience-enhancing factors are more than the inverse of vulnerabilities – for example, a small hippocampus may be a vulnerability to PTSD, but it does not logically follow that having a normal-sized hippocampus is protection from the disorder. Rather, resilience-enhancing factors are processes and mechanisms that offer protection (5). The effects of such protective factors are often tangible only when they interact with other stresses – for instance, an inner-city child of divorced parents who has an adaptive coping strategy may excel in school and stand out amongst her peers. This section of the paper will examine vulnerabilities, risks and protective factors from three standpoints: biological, psychological and environmental.
Biological Factors
One symptom of PTSD is the tendency to re-experience the trauma in flashbacks or nightmares. In order for re-experiencing to occur, the trauma must be encoded in an individual's emotional memory. Researchers thus suggested that people with higher levels of memory-modulating chemicals such as cortisol would be more prone to PTSD (5). Conversely, those with higher levels of DHEAS (dehydroepiandrosterone), which has an antagonistic relationship with cortisol, are less susceptible to stress (5).
The 2D chemical structure of cortisol. Cortisol's role in memory formation may make those with a higher level of cortisol more susceptible to PTSD.

Besides such biochemical vulnerabilities and protective factors, genetics also plays a part in susceptibility to PTSD. In one study, maltreated boys with a long version of the gene encoding monoamine oxidase A (MAOA) were less likely to commit violent crimes and score high on tests of aggression, while those with a short version of the gene were much more likely to do so (7). And in twin studies, monozygotic twins of subjects with anxiety disorders were more likely to have symptoms of anxiety (8). Such findings about the biochemical and genetic factors in PTSD have important implications for the type and timing of drug therapy for the disorder. Finally, neurophysiology can increase or decrease susceptibility to PTSD. Brain areas such as the anterior cingulate, which is concerned with emotion processing and working memory, and the dorsolateral prefrontal area, which modulates cognitive control and helps inhibit emotional arousal, can influence the cognitive and behavioural components of a child's response to stress (6). Similarly, a smaller-than-average hippocampus and an overactivated HPA axis may be vulnerabilities to PTSD (9) as the hippocampus is involved in the formation of conditioned fears, and a smaller hippocampus may lead to poorer regulation of the HPA axis (10). However, this research faces some limitations. Something that looks like a vulnerability may in fact be a consequence of PTSD or vice versa, depending on the perspective from which the study is conducted. For example, a smaller-than-average hippocampus has been implicated as a potential vulnerability to PTSD and other anxiety disorders (4, 10), but also as a consequence of trauma (11). Ultimately, research on the relationship between PTSD and hippocampal size has been inconclusive (12, 13). Likewise with the child's intelligence quotient (IQ) – it is commonly believed that low IQ is another potential vulnerability to PTSD, but at the same time, the expression of PTSD symptoms may have an adverse effect on IQ test scores. (In any case, IQ is only a reflection of a child's cognitive potential in areas such as executive function.) In sum, there is a chicken-and-egg problem – causality is difficult to determine for many biological and psychological aspects of PTSD.

Psychological Factors
Young children with PTSD may be susceptible to cognitive distortions that heighten their distress, such as believing they could have prevented the trauma. Similarly, a lower IQ may be a cognitive vulnerability, as children with a lower IQ may be more prone to such cognitive distortions and misattributions. (However, Wenar and Kerig (4) also indicate that "in order for an event to be traumatic, it must be perceived as such"; one might expect greater cognitive ability – the capacity to grasp the enormity of trauma – to be maladaptive in this case.) Likewise, poorer emotional adjustment is a vulnerability – children who are unable to self-soothe and tend to blame themselves are more likely to have severe post-traumatic reactions (4). In addition, dissociation during trauma may also predict later PTSD symptoms (Berg et al., 2005, cited in 5). More research has been done on the psychosocial aspects of resilience than the biological ones. Some psychological resiliency factors are associated with temperament – that is, children with a sunny disposition and secure attachment to their caregivers may be less likely to develop PTSD to begin with. (Securely-attached toddlers are willing to explore and engage with strangers in the presence of their caregivers, become distressed when caregivers leave and are soothed when caregivers return.) Alternatively, children with these characteristics can attract and take better advantage of social support networks, drawing empathy from family and community members. Perhaps the protective psychological factors most relevant to PTSD intervention are the affective, cognitive and behavioural ones. A high level of executive function is important for inhibiting emotional arousal and regulating behaviour, which may mitigate the child's response to trauma (6). During or after trauma, a child with a sense of hopefulness (the notion that "things will get better") and a tendency for positive self-talk is likely to function better than a child without such characteristics (14, 15), as these cognitions may mitigate the perceived severity of trauma. Finally, a child who has an internal locus of control (the belief that one can shape one's life) and who is competent in various tasks (arts, sports etc.) is less likely to succumb to "learned helplessness" and is more able to cope with stress (5). All these protective factors have implications for cognitive-behavioural therapy.
Environmental Factors
Arguably, environmental risk factors are the most significant predictors of PTSD. In one longitudinal study, children in high-risk environments had much higher rates of psychopathology than children in low-risk environments, even when individual resilience factors such as social-emotional competence and intelligence were taken into account – that is, low-resilience children in low-risk environments did better later in life than high-resilience children in high-
risk environments (16). Consequently, even though social support networks can be enhanced or mitigated by individual characteristics (5), the level of parental and social support is the most significant risk factor and perhaps the most important target for PTSD interventions. Environmental risk factors besides parental support include a high incidence of traumatic events, chronic rather than acute trauma, and trauma that could conceivably be repeated, rather than a fluke event such as a car crash (9). Conversely, while it is clear that environmental factors such as family cohesion, socioeconomic status and lower levels of life stress can be protective, some researchers believe it is "individuals' contribution to these factors that confers their status as characteristics of resilience" (5). Still, the presence of positive environmental factors like family cohesion and support networks can help moderate PTSD in survivors. For instance, Hyman et al. (17) found that different levels (and perceptions) of social support had different effects on PTSD outcomes in survivors of childhood sexual abuse. Four types of social support were examined: appraisal support (advice in coping with problems), self-esteem support (increasing the individual's self-percept), tangible support (availability of material resources) and belonging support (being part of a social group). Self-esteem support and appraisal support were ultimately most effective in reducing PTSD symptoms.
Implications for Intervention
Given what we know about resilience factors in PTSD, how can we use this understanding to treat children with the disorder or to develop preventive interventions? Biochemical vulnerabilities and protective factors are useful for pharmaceutical research on drugs to treat PTSD, or to be administered as soon as possible after trauma and before the onset of PTSD. For instance, glucocorticoids administered after trauma may interfere with the retrieval of traumatic memories, while emotional arousal can be blocked with a beta-blocking (beta-adrenergic receptor antagonist) drug known as propranolol (Roozendaal, 2003 and Cahill et al., 1994, cited in 5). In addition, there may be critical sensitive periods during which drugs can affect processes of neural plasticity (such as the formation of traumatic memories), and more research should be done on this. Cognitive-behavioural therapy (CBT) is perhaps one of the most commonly used forms of treatment for PTSD, and such therapy should be tailored to a child's developmental stage. Also, some types of therapy may be indicated for different symptoms of PTSD – Mancini & Bonanno (15) suggest that adult grief therapy focusing on insight, for example, may be most recommended for problems of internalisation such as self-blame, hopelessness and sadness, and skill-focused interventions are more suitable for externalising problems. This may extrapolate well to child PTSD, but again, more research needs to be done. Stallard (18) reviewed CBT interventions in children with sexual-abuse-related PTSD. These interventions focused on various aspects of PTSD: decreasing sexualised (re-enacting) behaviour, helping children communicate about their abuse (decreasing avoidance), and revising abuse-related cognitions
like self-blame, stigmatisation and powerlessness. While each intervention was itself effective, the relative effectiveness of each type of CBT intervention still needs to be assessed. Post-trauma therapies seem successful in treating children, but pre-emptive interventions can help improve the resilience of children at risk for PTSD. One form of resilienceenhancing intervention involves developing competencies and empowering children. Researchers conducted a study of structured activities in Palestinian children living in war-torn areas of Gaza and the West Bank. These included activities like sports and arts, and were aimed at developing skills and competencies to help children feel successful. It was found that the intervention improved children’s internalising and externalising problem scores, but did not increase children’s hopefulness (though hopefulness scores did not decline – still a positive sign in an environment where war was ongoing) (14). Another type of resilience-promoting intervention focuses on improving executive function (EF) in school-aged children (6), based on the premise that executive function is important for appraising a traumatic event and its emotional meaning, regulating emotions to solve problems and gather more information, and responding behaviourally to a traumatic event. The intervention improved children’s inhibitory control and verbal fluency in comparison to a control group and decreased children’s internalising and externalising problems. However, the impact of improved EF on PTSD may be difficult to measure without a corresponding traumatic event or a high-risk environment. One might expect such resilience-enhancing pre-emptive interventions to also be of use in treating childhood PTSD. Finally, given that environment and social support networks may be the most important factors in preventing or alleviating PTSD, one should consider family therapy in the treatment of PTSD, establishing therapeutic alliances where possible. Forms of social and family support should focus on improving individuals’ self-percepts, helping people ‘belong’ within the family or community, and offering guidance in coping with problems.
Conclusion
An ounce of prevention is worth a ton of cure; naturally, it is preferable to conduct pre-emptive interventions to increase the resilience of children in high-risk populations. These interventions can be informed by our increasing knowledge about the physiological and psychological factors that improve resilience in children. At the same time, it should be kept in mind that resilience does not equal invulnerability to trauma – that no matter how resilient a child is, environmental risks still play a significant part in his or her development. However, our knowledge of resilience, coupled with an understanding of developmental psychology, can and should also inform the treatment of children with PTSD – such as child survivors caught in the Indian Ocean tsunami in 2004, which was a severe and horrifying fluke event with no known risk factors to predict it.
Appendix 1: DSM-IV-TR diagnostic criteria for 309.81 Posttraumatic Stress Disorder

A. The person has been exposed to a traumatic event in which both of the following were present:
(1) the person experienced, witnessed, or was confronted with an event or events that involved actual or threatened death or serious injury, or a threat to the physical integrity of self or others
(2) the person's response involved intense fear, helplessness, or horror. Note: In children, this may be expressed instead by disorganized or agitated behavior

B. The traumatic event is persistently reexperienced in one (or more) of the following ways:
(1) recurrent and intrusive distressing recollections of the event, including images, thoughts, or perceptions. Note: In young children, repetitive play may occur in which themes or aspects of the trauma are expressed.
(2) recurrent distressing dreams of the event. Note: In children, there may be frightening dreams without recognizable content.
(3) acting or feeling as if the traumatic event were recurring (includes a sense of reliving the experience, illusions, hallucinations, and dissociative flashback episodes, including those that occur on awakening or when intoxicated). Note: In young children, trauma-specific reenactment may occur.
(4) intense psychological distress at exposure to internal or external cues that symbolize or resemble an aspect of the traumatic event
(5) physiological reactivity on exposure to internal or external cues that symbolize or resemble an aspect of the traumatic event

C. Persistent avoidance of stimuli associated with the trauma and numbing of general responsiveness (not present before the trauma), as indicated by three (or more) of the following:
(1) efforts to avoid thoughts, feelings, or conversations associated with the trauma
(2) efforts to avoid activities, places, or people that arouse recollections of the trauma
(3) inability to recall an important aspect of the trauma
(4) markedly diminished interest or participation in significant activities
(5) feeling of detachment or estrangement from others
(6) restricted range of affect (e.g., unable to have loving feelings)
(7) sense of a foreshortened future (e.g., does not expect to have a career, marriage, children, or a normal life span)

D. Persistent symptoms of increased arousal (not present before the trauma), as indicated by two (or more) of the following:
(1) difficulty falling or staying asleep
(2) irritability or outbursts of anger
(3) difficulty concentrating
(4) hypervigilance
(5) exaggerated startle response

E. Duration of the disturbance (symptoms in Criteria B, C, and D) is more than 1 month.

F. The disturbance causes clinically significant distress or impairment in social, occupational, or other important areas of functioning.

Specify if: Acute: if duration of symptoms is less than 3 months; Chronic: if duration of symptoms is 3 months or more
Specify if: With Delayed Onset: if onset of symptoms is at least 6 months after the stressor

(DSM-IV-TR, American Psychiatric Association 2000)
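Read operationally, the criteria above reduce to a symptom-count rule, sketched below in Python as an illustration of the appendix's logic (not, of course, a clinical instrument).

    def dsm_iv_ptsd_status(exposed, fearful_response, n_b, n_c, n_d,
                           duration_months, impairment):
        """Symptom-count reading of the DSM-IV-TR criteria in Appendix 1.

        n_b, n_c, n_d are the numbers of endorsed symptoms from clusters
        B, C and D; thresholds follow the appendix (B >= 1, C >= 3,
        D >= 2), with duration over one month and significant impairment.
        """
        criteria_met = (exposed and fearful_response            # Criterion A
                        and n_b >= 1 and n_c >= 3 and n_d >= 2  # B, C, D
                        and duration_months > 1                 # Criterion E
                        and impairment)                         # Criterion F
        if not criteria_met:
            return "criteria not met"
        return "PTSD, acute" if duration_months < 3 else "PTSD, chronic"

    print(dsm_iv_ptsd_status(True, True, n_b=2, n_c=3, n_d=2,
                             duration_months=4, impairment=True))
    # -> PTSD, chronic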
References
1. A.S. Masten, Am. Psychol. 56, 227 (2001).
2. "Resilience." Harvard Mental Health Letter [Cambridge, MA] (December 2006).
3. W.E. Copeland, G. Keeler, A. Angold, and E.J. Costello, Arch Gen Psychiatry 64, 577 (2007).
4. C. Wenar and P. Kerig, Developmental Psychopathology (5th Edition), McGraw-Hill: Boston, 2006.
5. E.A. Hoge, E.D. Austin, and M.H. Pollack, Depression and Anxiety 24, 139 (2006).
6. M.T. Greenberg, Ann. N.Y. Acad. Sci. 1094, 139 (2006).
7. D. Cicchetti and J.A. Blender, Ann. N.Y. Acad. Sci. 1094, 248 (2006).
8. S. Taylor, D.S. Thordarson, K.L. Jang, and G.J.G. Asmundson, World Psychiatry 5(1), 47 (2006).
9. J. Scheiner, in-class lecture: Psychology 52.2, Dartmouth College, May 9, 2007.
10. M.W. Gilbertson, M.E. Shenton, A. Ciszewski, K. Kasai, N.B. Lasko, S.P. Orr, and R.K. Pitman, Nature Neuroscience 5, 1242 (2002).
11. L.M. Shin, S.L. Rauch, and R.K. Pitman, Annals of the New York Academy of Sciences 1071, 67 (2006).
12. C.L. Pederson, S.H. Maurer, P.L. Kaminski, K.A. Zander, C.M. Peters, L.A. Stokes-Crowe, and R.E. Osborn, Journal of Traumatic Stress 17(1), 37 (2004).
13. L.A. Tupler and M.D. De Bellis, Biological Psychiatry 59(5), 523 (2005).
14. M. Loughry, A. Ager, E. Flouri, V. Khamis, A.H. Afana, and S. Qouta, Journal of Child Psychology and Psychiatry 47(12), 1211 (2006).
15. A.D. Mancini and G.A. Bonanno, Journal of Clinical Psychology 62(8), 971 (2006).
16. A.J. Sameroff and K.L. Rosenblum, Annals of the New York Academy of Sciences 1094(1), 116 (2006).
17. S.M. Hyman, S.N. Gold, and M.A. Cott, Journal of Family Violence 18(5), 295 (2003).
18. P. Stallard, Clinical Psychology Review 26(7), 895 (2006).

Diagrams for this article were created in-house by Tim Shen '08.
engineering
Breaking the Mold:
Moving Toward More Functional Prostheses TIM SHEN ‘08
The Ossur Proprio Foot. Image courtesy of Ossur Americas.
The loss of a limb is a life-changing and often devastating event. As a result, the use of artificial approximations as replacements has persisted for both cosmetic and functional reasons. Historically, the technology used to produce these important artificial limbs has always failed at mimicking either the lost limb's appearance or movement. Artificial limbs even today are often difficult to use, painful, and of limited help to the user. In fact, the execution of many simple tasks is fatiguing and difficult simply because of the hobbling effect and primitive state of the artificial limb. Fortunately, with a recent surge of interest and funding, these barriers are beginning to crumble in the face of renewed research efforts driven in the United States by demand from wounded veterans of the Iraq War (1). Artificial limbs today are marching steadily toward a future in which limb replacement will mean much more than merely dampening the traumatic effects of losing a limb, and truly means regaining functionality. The research arm of the U.S. Department of Defense, known as the Defense Advanced Research Projects Agency (DARPA), has been awarding grants worth millions of dollars to various laboratories and companies as part of its "Revolutionizing Prosthetics" program. This program aims to create an artificial arm and hand that interfaces directly with the nervous system by 2009. This prospective prosthetic would be "fully functional", replicating the natural arm's range of movement and giving sensory feedback (2). It would thus allow for a near-perfect replacement of the lost limb (2).
Two of the most lucrative recent grants in this project have gone to DEKA Research and Development Corporation in Manchester, New Hampshire, and to Johns Hopkins University. DEKA was awarded an $18.1 million grant for the creation of a prosthetic arm that would mimic a real arm in both cosmetic appearance and strength, while researchers at the Applied Physics Laboratory at Johns Hopkins received a $30.4 million grant for their work (3). The Applied Physics Laboratory at Johns Hopkins University rapidly developed a prototype called "Proto 1" within a single year of receiving the grant. Proto 1 provides sensory feedback and eight degrees of freedom in movement. Although Proto 1 represented a large leap ahead for prosthetics technology, there is more progress to be made. The Applied Physics Laboratory is almost ready to unveil a second prototype with 25 degrees of freedom, the speed and the strength of a natural arm, and many more sensors for feedback (4). Proto 1's ability to provide sensory feedback and neural control of movement is made possible by a cutting-edge technique known as "Targeted Muscle Reinnervation" (4). Dr. Todd Kuiken of the Rehabilitation Institute of Chicago developed Targeted Muscle Reinnervation in order to transplant nerves that had once innervated the amputated limbs to a new area. In the case of one patient, Jesse Sullivan, Dr. Kuiken redirected the nerves for the removed arm that were running through the shoulder to several different muscle groups across the proximal pectoralis muscles (4, 5). By six months after the surgery, electrical signals were detected from the transplanted nerves (1). Contractions in these muscle
groups were also felt on the surface (5). The effectiveness of this transplantation was then tested by fitting the patient with a new experimental myoelectric prosthesis that would exploit the transplanted nerves. The patient subsequently showed increased dexterity and speed of movement with the prosthesis, and reported that the movement was much easier and more natural when compared to previous prostheses (5). In fact, the transition was almost effortless. The first test was conducted by doctors who simply asked the patient to try to open his absent left hand. The connected prosthetic test piece immediately uncurled its hand (1). Both the Proto 1 prosthesis and the effectiveness of the Targeted Muscle Reinnervation technique were demonstrated during subsequent clinical evaluations. Jesse Sullivan immediately displayed an increased capability for fine muscle control. Sullivan was able to quickly manipulate the prosthetic hand and deftly remove a credit card from his pocket. He was also able to display the natural force feedback of Proto 1's control interface by stacking cups with a gentle, carefully controlled grip. Lastly, but of no less importance, the cosmetic covering for Proto 1 featured a photorealistic layer that was fabricated according to images of the patient's arm before amputation (4). Proto 1 is just the beginning for this method of prosthetic control. Sullivan's chest muscles hold much potential at this point. The nerve currently used to control the closing of the artificial hand actually innervates at least 20 muscles. Those 20 muscles are currently serving to propagate only two signals. With more development and higher resolution, this nerve (and the other three that were reinnervated) can serve many more purposes (1). In fact, a second much more advanced prototype is already nearing completion. The researchers at the Johns Hopkins Applied Physics Laboratory have been hard at work on "Proto 2," which will feature over 25 degrees of freedom in movement, as well as strength and speed approximating a human arm, and over 80 sensors to provide feedback for touch, temperature, and position in space. Proto 2 may also feature the use of "Injectable MyoElectric Sensors" rather than the surface electrodes used to control Proto 1. These sensors will allow Proto 2 to interface with the necessary nerves through implanted or injected devices rather than exposed surface electrodes, and will also help ensure the reliable transmission of commands from nerve to prosthetic (4).
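To give a feel for what a myoelectric controller does with reinnervated-muscle signals, here is a minimal decoding sketch; the two-channel setup, window contents, and threshold are illustrative assumptions, not details of the Proto 1 system.

    import math

    def rms(window):
        """Root-mean-square amplitude of one window of EMG samples."""
        return math.sqrt(sum(x * x for x in window) / len(window))

    def decode_hand_command(open_channel, close_channel, threshold=0.2):
        """Map two reinnervated-muscle EMG channels to a hand command.

        Illustrative scheme: the channel whose RMS amplitude exceeds the
        threshold and is larger wins; quiet signals hold the current pose.
        """
        a_open, a_close = rms(open_channel), rms(close_channel)
        if max(a_open, a_close) < threshold:
            return "hold"
        return "open hand" if a_open > a_close else "close hand"

    # A strong contraction at the 'open' recording site, noise at the other:
    print(decode_hand_command([0.5, -0.6, 0.4, -0.5],
                              [0.05, -0.02, 0.04, -0.03]))  # -> open hand

The appeal of the reinnervation approach is visible even in this toy: because the transplanted nerves drive real muscle, the controller only has to read ordinary surface EMG, which is why the patient's intent to open his absent hand translated so directly.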
Technological advances toward this kind of neural control of machines have been long in coming. While science-fiction writers in the past have often dreamed of the ability to directly interface with machines for augmentation, replacement, or other purposes, it was only recently that monkey studies proved that these types of direct neural linkages were possible. For instance, one study performed at the Duke University School of Medicine and published in 2000 showed that electrodes implanted in an owl monkey's brain were able to impart upon the monkey the ability to control a robotic limb. The owl monkey's brain signals were monitored in order to initially identify the particular brain signals correlated with specific arm movements. Once this had been accomplished, a computer and robotic arm were attached. The robotic arm was directed by a processing computer that monitored the owl monkey's brain signals for the previously identified patterns of neural firing. When a pattern was identified, the processing computer would relay the appropriate instructions to the robotic arm to cause a similar movement. This system successfully allowed the robot arm to mimic the actual arm movements of the owl monkey (6).
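The Duke experiment amounts to learning a mapping from recorded firing rates to arm kinematics. The linear read-out below is a toy stand-in for such a mapping; the weights, baselines, and rates are invented for illustration, and the actual study fit its decoder from recorded movement data.

    def decode_velocity(firing_rates, weights, baselines):
        """Toy linear decoder: firing rates (spikes/s) -> (vx, vy) in cm/s.

        Each neuron contributes its rate, minus a resting baseline, scaled
        by a per-neuron weight pair fit beforehand from training data.
        """
        vx = vy = 0.0
        for rate, (wx, wy), rest in zip(firing_rates, weights, baselines):
            vx += wx * (rate - rest)
            vy += wy * (rate - rest)
        return vx, vy

    # Three hypothetical motor-cortex neurons with previously fit weights:
    weights = [(0.8, 0.1), (-0.2, 0.9), (0.5, -0.4)]
    baselines = [10.0, 12.0, 8.0]
    print(decode_velocity([22.0, 12.0, 8.0], weights, baselines))
    # -> (9.6, 1.2): only the first neuron fires above baseline, so the
    # arm is driven mostly along that neuron's preferred direction.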
Although Dr. Kuiken's aforementioned muscle reinnervation technique works wonders for upcoming prosthetics of similar design, the method used for the owl monkey – connecting the artificial limbs directly to the brain – holds much potential. Brown University's Brain Science Program director, John Donoghue, also serves as the chief scientific officer for Cyberkinetics Neurotechnology Systems, located in Foxboro, Massachusetts. Cyberkinetics Systems is currently developing a small square chip, two millimeters on each side, for implantation in the primary motor cortex. This chip, known as "BrainGate", sends motor cortex signals to an external processor for interpretation. This processor in turn operates the prosthetic. The system has been successfully tested on a paralyzed man named Matt Nagle. Nagle was able to use the BrainGate interface to move a cursor on a screen, and even to open the hand of a prosthetic arm. However, BrainGate still requires wireless capability and practical portable power sources to be truly useful (1).

While upper-body prosthetics require fine motor control for handling different objects, prosthetics for the legs require careful and timely responses in order to maintain balance, navigate different types of terrain, and allow for natural movement. In order to truly maintain balance, the wearer must also be informed of the prosthetic's location in space. Feedback for the legs is just as important as feedback for the arms. Possibly more important for legs, however, is the issue of power. Without a boost from the prosthetic, even walking becomes a very difficult and exhausting task. Motorized prostheses would ideally be able to dampen the forces received and modulate the power delivered in order to fit the terrain or the task, whether the wearer is walking uphill, going up stairs, or even running (1). The multiple and complex requirements necessary for leg prostheses have driven one leading company, Ossur, to develop knee and foot prostheses separately. Ossur's Power Knee and Proprio Foot products both include motorized movement to help propel the user, and both are capable of swinging naturally to help impart a natural gait. The two devices work in sync to perform the complex tasks normally performed by the leg. The Power Knee provides the motorized support to lift the user from seated positions, up stairs, and up inclines as the Proprio Foot adjusts to these differing types of terrain and shifts beneath the user as they stand up. Both the Power Knee and the Proprio Foot work to lift the foot a proper height off the ground when walking. Ossur appears to have put much thought into the teamwork necessary for a natural walking motion (7, 8).

However, for leg prostheses, there are more upcoming options than such bulky and robotic attachments. Current research at Walter Reed Army Medical Center and Arizona State University is focusing on a project that uses lightweight springs to store energy. The prosthesis, dubbed "SPARKy" for "Spring Ankle with Regenerative Kinetics", is meant to store enough energy and provide proper ankle motion to make it comparable to a natural leg. The teams researching SPARKy have also put much effort into learning about natural gait. The simplicity of SPARKy is a direct result of the research that reduces walking to a controlled "series of falls" (9). One heel swings forward and touches the ground. As the load is transferred to this leg, the springs in SPARKy store the energy. As the body's weight continues forward, the springs begin to release the stored energy, providing a forward and upward push as the heel leaves the ground again. This heel will swing forward to catch the falling weight of the body to start the cycle anew. All that is needed is a small motor to tune the springs for optimal performance. SPARKy weighs only about two pounds, and has been successfully tested and shown to provide a natural walking gait. By 2009, it should be completed with additional functionality to allow for everyday use (9).
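The "controlled series of falls" picture translates into a small energy budget: a spring loaded during roll-over returns E = 0.5·k·x² at push-off. The spring constant and compression below are round illustrative guesses, not SPARKy's specifications.

    def spring_energy_joules(k_n_per_m, compression_m):
        """Energy stored in an ideal spring: E = 0.5 * k * x**2."""
        return 0.5 * k_n_per_m * compression_m ** 2

    # Round illustrative guesses: a stiff ankle spring (~80 kN/m)
    # compressed ~3 cm as the body's weight rolls over the foot.
    print(spring_energy_joules(80000.0, 0.03), "J stored")  # 36.0 J
    # Able-bodied ankle push-off work is on the order of tens of joules
    # per step, so a passive spring can return a large share of it; the
    # small motor only retunes the spring between steps rather than
    # supplying the push-off energy itself.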
Focusing on the leg would be of limited use without similar focus on the intricacies of the foot. At the moment, "no prosthetic foot has yet been produced that can imitate the natural sequence of movements during walking," according to Dr. Urs Schneider of the Fraunhofer Technology Development Group TEG of Stuttgart, Germany (10). Even the Ossur Proprio Foot mentioned earlier does not have the three-axis flexibility of the natural foot; instead, it replaces the entire foot with a flexing pad that cannot reliably duplicate the flexibility and adaptability of the natural foot. As a result, Dr. Schneider and his team have worked to develop a complex mechanical prosthetic foot that can imitate the shifting of the foot during movement without the aid of any computers. This reliable reproduction of the foot's actions promotes natural walking and reduces the time necessary for users to acclimate to their prosthetic. In fact, testing of the device has been so successful that outside observers generally do not recognize that the user is wearing a prosthetic foot (10).

The Ossur Power Knee. Image courtesy of Ossur Americas.

The challenge of leg prostheses grows more complex, however, with the question of feedback information. Despite all the motorized, computerized, or spring-loaded technology of the lower prosthetic, without sensory feedback mechanisms the user lacks the necessary innate sense of where the leg is in space (1). Without this particular sense, a natural gait is much more difficult to achieve. It becomes difficult to coordinate one leg with the opposite limb when the natural gait is achieved by automatic swinging motions of the artificial leg. Presumably, it would not be possible to turn on the organic leg and expect the mechanical version to properly place itself in the new direction. It therefore may not even be possible to turn sharply without first stopping; at the least, it seems that turning will be difficult at best. The solution to this problem may lie in further developments in feedback and increased articulation in the ankles.

Leg and arm prostheses also share the complication of attachment points. Most weight-bearing attachment points today for leg prostheses are painful sockets that fit rather poorly and often impede natural movement. Arm prostheses may be strapped awkwardly around the body, and again may fit poorly or come loose due to the amount of motion. Despite innovative socket designs that attempt to compensate for poor fit, the very existence of the socket creates unnecessary complications (1). The solution to these complications is to simply fuse the prosthetic directly to the remaining bone. However, although the technology to fuse bone to the metal of the attachment rod exists, the attachment point that must protrude out of the skin prevents this solution from being realized on a large scale. The skin generally fails to heal around the protruding attachment point, and infections subsequently invade the area (1). This is perhaps the largest weakness for future prosthetics. No matter how complex the prosthetics may become, they will always have to compensate for poor attachment. There is still hope, however, that a material or method will be found that resolves this issue. Jeffrey Morgan, a molecular biologist at Brown University, notes that metal and skin should be able to seal naturally, as they do with pierced noses (1).

In the future, prosthetics may be limited in their advancement by the advent of new technologies. Left alone, the prosthetics industry could eventually produce prostheses that are nearly indistinguishable from their organic counterparts, but they may yet be supplanted by an even better replacement: organic replacements grown from stem cells. Although it still sounds far-fetched, lab-grown replacement limbs would spell the downfall of the prosthetics industry. Even now, with hand transplants and face transplants being successfully performed around the world, technology is approaching the day when a lost limb may be grown to order or regenerated.
Whether the prosthetics industry would fully disappear upon development of the lab-grown replacement limb is debatable. Initially, of course, prosthetics would clearly be the more economical choice, but as the price for replacement limbs falls, the decision becomes a matter of personal choice. Some would likely prefer the natural choice of regaining an organic limb, particularly if issues surrounding rejection could be resolved. However, others may prefer a prosthetic because recovery from a limb replacement surgery could be extensive and difficult. It is also possible that prosthetics will become more than just replacements for lost limbs, but rather augmentations of the human body, granting greater capabilities to the user. However, despite the possibility that prosthetics will ultimately fade out in favor of organic replacements, prosthetics research today is still extremely worthwhile. Organic replacements, if feasible, would be complex due to both ethical dilemmas and scientific uncertainties. The concept of organic replacements is still too distant for patients to rely on. Amputees today still need the prosthetics industry, and the industry must continue to advance because much work needs to be done before prosthetics can truly approximate the limbs they supposedly replace. The goals for prostheses in the near term are few and obvious. Unfortunately, each entails finding, researching, and implementing a myriad of complex solutions. Much information must travel through the prosthetic-organic junction, and so control interfaces and schemes must be devised to allow for the complex array of sensory feedback and simultaneous multidimensional motions possible with organic limbs. Only after this has begun to improve can prostheses begin to incorporate the additional sensors and degrees of freedom of movement necessary to be as useful as limb replacements. Additionally, prosthetics must be less painful and less exhausting to use. A fully functional prosthetic that causes severe pain and fatigues the user is fully functional in name only. Lastly, direct bone attachment may be a neat near-term solution for the problem of prosthetic attachment since it is used extensively for internal prostheses like joint replacements. Fortunately, it appears that in the short term, much of this work will be completed. It is likely that the discomfort of prosthetics will soon be resolved with direct bone attachment. The advent of both targeted muscle reinnervation and direct brain interfaces like BrainGate give great hope to the short-term possibility of instinctive, comfortable neural control of prostheses. These interfaces will also likely be able to handle much of the data flow for sensory feedback and simultaneous complex multidimensional movements. The Defense Advanced Research Projects Agency may not have their fully functional arm prosthesis by 2009, but the new prostheses that year will be very close to fully functional. The future of prosthetics is bright, with many avenues of research left to explore. There is much to accomplish in
terms of realistic appearance, natural range of movement, and the ever-complex sensory feedback. The ultimate goal, of course, is to replace the missing limb with a high degree of fidelity so that the user will be able to function normally. As science surges forward in the fields of mechanics, biomedical engineering, computer science, and neuroscience, bionics will follow. Hopefully, if we cannot learn to regenerate and heal such extensive injuries as the loss of a limb, at the least we can learn to produce a suitable replacement in the future.

References
1. S. Sataline, Popular Science, July 2006, p. 68.
2. DARPA Defense Sciences Office – Revolutionizing Prosthetics. Available at http://www.darpa.mil/dso/thrust/biosci/revprost.htm (25 May 2007).
3. D. Miles, DARPA's Cutting-Edge Programs Revolutionize Prosthetics (2006). Available at http://www.defenselink.mil/news/newsarticle.aspx?id=14914 (25 May 2007).
4. New Prosthetic Limbs Allow For Eight Degrees of Freedom (2007). Available at http://www.news-medical.net/?id=24306 (25 May 2007).
5. T. Kuiken et al., Prosthetics and Orthotics International 28(3), 245 (2004). Available at http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=15658637&dopt=Citation.
6. J. Stephenson, Journal of the American Medical Association 284, 2987 (2000). Available at http://jama.ama-assn.org/cgi/content/full/284/23/2987-a.
7. Bionic Technology – Ossur Power Knee (2007). Available at http://www.ossur.com/bionictechnology/powerknee (25 May 2007).
8. Bionic Technology – Ossur Proprio Foot (2007). Available at http://www.ossur.com/bionictechnology/propriofoot (25 May 2007).
9. Next Generation of Powered Prosthetic Devices Based On Lightweight Energy Storing Springs (2007). Available at http://www.whatsnextnetwork.com/technology/index.php/2007/05/02/next_generation_of_powered_prosthetic_de (25 May 2007).
10. Artificial Limbs That Walk Naturally (2006). Available at http://www.gizmag.com/go/5298/ (25 May 2007).
interview
WISP and Wetterhahn: Undergraduate Women in Science SHREOSHI MAJUMDAR ‘10
The 16th annual Karen E. Wetterhahn Science Symposium was held on Thursday, May 24, in Fairchild Tower. Co-sponsored by the Dartmouth Undergraduate Journal of Science, it showcased the incredible amount of research in the physical and life sciences that is conducted at the undergraduate level at Dartmouth. The symposium was named after the late Karen E. Wetterhahn, professor of chemistry, who cofounded the Women in Science Project (WISP) in 1990. Initially a means for the WISP interns to present their research, the symposium now encompasses research by Howard Hughes Medical Institute fellows, Senior Honors Thesis students, Presidential Scholars, and many others. Dr. Lisa Graumlich, Director of the School of Natural Resources at the University of Arizona-Tucson, delivered the keynote address, "Sustainability Science: An Interdisciplinary Journey," for this year's symposium. Her address was complemented by a special exhibit on sustainability in Kresge Library, and was followed by the poster presentation in which over 100 undergraduates from an array of scientific disciplines displayed their work. About 60 of these posters were produced by WISP interns. Through paid internships (Internship Program) and mentoring (Peer Mentoring Program), WISP has been supporting first-year women in scientific research since the early nineties. On completion of an application process, interested first-years are matched with faculty sponsors who guide them through what, for many, is their first taste of actual research. A number of interns go on to pursue science further, through internships, courses and independent study. Some may continue in the same lab or project for their entire Dartmouth career, building upon first-year research to culminate in a Senior Honors Thesis. DUJS had the opportunity to interact with these "Women in Science" and ask them about their experiences.
Carla J. Williams ’09
Carla was a Howard Hughes Medical Institute fellow. She is continuing her first-year WISP internship research. She is a biology major and a chemistry minor. "Using ionomics to investigate metal homeostasis in Arabidopsis thaliana."

SM: How did you get interested in science? CW: I've always enjoyed science. I had great science teachers in freshman and senior year of high school and I also took a genetics course in high
school that really got me interested in genetics. I've taken a lot of biology and chemistry courses at Dartmouth, and I am working in a plant genetics lab. SM: Could you describe your project? CW: Our lab is involved in identifying and characterizing genes involved in metal homeostasis. The overall goal of understanding these genes is [to improve] iron uptake [in plants], [which can also] be used to remove pollutants from the soil. Ninety percent of people suffering from iron deficiency are in the developing world. If we understand the genes involved in iron uptake, we can improve nutrition in plant-based diets. As far as pollutants are concerned, industrial contaminants and sewage have contaminated farmlands with toxic metals like cobalt, nickel, and cadmium, and if we know genes involved in their regulation, then we can grow plants that can remove pollutants and can aid in bioremediation. My project is to identify genes in metal homeostasis. To do this, we started with a list of singleton genes identified that encode membrane proteins that may or may not be involved in metal homeostasis. But we don't know what their function is and that is what we are trying to determine. I have been given plants that contain t-DNA inserts in these genes. The inserts disrupt the gene expression. What I've been doing is screening these mutant lines using PCR and gel electrophoresis to find ones that are homozygous for these t-DNA inserts. Once I isolate homozygous plants, I grow them in soil and send the seeds off for ICP-MS [Inductively Coupled Plasma Mass Spectrometry] analysis. We get back complicated graphs that show differences in metal uptakes in these plants compared to wild-type Columbia plants. I found one interesting mutant type involving the gene sqd2. SQD2 (the protein) is a sulfolipid synthase: it attaches sulfate to glycolipids. If you look at a metal profile, the sqd2 mutant has a prominent spike in nickel which means it has increased nickel in the shoot compared to the wild type. To validate ionomic data, we tried to find a nickel-tolerance phenotype in the mutant. If you compare wild type and mutant plants, [grown] on normal growth media, there is no difference in growth. However, when I grew both varieties on media with toxic levels of nickel added, the sqd2 mutants grew much better. This means that the loss of the sqd2 gene confers tolerance to high levels of nickel.
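For readers curious how such a homozygote screen is typically called, here is a minimal sketch of the standard two-reaction t-DNA genotyping logic; the reaction setup described in the comments is the general method, not a detail taken from the interview.

    def call_genotype(wt_band_present, insert_band_present):
        """Call a t-DNA insertion genotype from two PCR reactions.

        General method (not a detail from the interview): a gene-specific
        primer pair spanning the insertion site amplifies only from an
        intact allele, while one gene-specific primer plus a t-DNA border
        primer amplifies only from an allele carrying the insert.
        """
        if wt_band_present and insert_band_present:
            return "heterozygous"
        if insert_band_present:
            return "homozygous for the t-DNA insert"
        if wt_band_present:
            return "wild type"
        return "no call - repeat the PCR"

    print(call_genotype(wt_band_present=False, insert_band_present=True))
    # -> homozygous for the t-DNA insert: keep this plant and grow seed
    #    for ICP-MS analysis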
The assembled WISP interns. Image courtesy of Shreoshi Majumdar ‘10
Next year, as a Presidential Scholar, I am going to localize where in the plant the gene is expressed by making sqd2 GUS constructs. Also, I will create an overexpression line which should hopefully show increased sensitivity to nickel – the opposite phenotype of the t-DNA mutant. This is just one example of interesting metal homeostasis phenotypes we’ve found. There are other knock-out lines that show different changes in metals: some show increased manganese, cadmium, or decreased iron.
SM: What is ICP-MS? CW: It is a way to look at the amount of metals in the shoots and roots of a plant. We don’t do ICP-MS at our lab. We get the data from other labs as part of a collaborative project.
SM: Is sqd2 the only gene responsible? CW: I'm not sure. There are probably many more genes that are related to nickel tolerance. But the sqd2 phenotype is so dramatic; it is likely that it's very important. If you knock that gene out, nickel tolerance increases. But it is hard to speculate how it all works because there could be a number of ways sqd2 could play a role in increased growth on [high concentration] nickel plates.

SM: How have two years of research changed your perception of science? CW: I'd never done research before. I applied for WISP my freshman year, but didn't know what to expect. It has been a really good experience. I work with a graduate student, Joe Morrissey, who has taught me a lot.

SM: Did you find your research really different from introductory lab courses? CW: I had some lab experience in the introductory genetics lab, but the molecular research we do in the [Guerinot] lab is really different. I'm doing PCR reactions, cloning my gene of interest into vectors, and transforming E. coli. I'd read about different research techniques, but when you're in lab, you realize that it's not that simple or black and white. With labs in science courses, you know what's supposed to happen, you do it, and it usually works. In a research lab, it doesn't work that way at all. When an
experiment doesn't work, you have to go back and figure out why and what went wrong. It's a much longer process.

SM: What part of research do you like best? CW: I like the hands-on part of research. I like setting up PCR reactions, running gels, and working to make GUS constructs – when it works. It's very frustrating when something doesn't work. Finding out new stuff – just being part of it. The discovery is fun.

SM: What did you learn about science and research from your experience? CW: I've learned that the scientific process takes a very long time and it can be frustrating when something doesn't work and you have to go back to square one. I've also learned that there are many genes that we don't know the function of. Although the Arabidopsis genome has been sequenced, we don't know what many of the proteins do. There is still a lot of work to be done.
Lauren Alpeyrie ’10
Lauren was a WISP intern with Prof. Chris Levey (Engineering) "Biomimetics: Engineering an artificial thistle" SM: How did you get interested in science and in WISP? LA: I used to do a lot of physics and chemistry in high school. When I came here, engineering was always in the back of my mind. It was something I was curious about. I think I got into WISP just because a lot of people were getting involved with it. It is a very accessible program. I didn't come in knowing the field of study that I was particularly interested in. I just looked at stuff that was appealing. Overall, I knew I wanted to do undergraduate research, but I wanted to keep an open mind. SM: Why did you choose this project? LA: When I interviewed with Chris Levey, I was just really interested in this project. It is really fascinating work. Also, part of the reason was my initial curiosity about engineering. I really enjoyed the project. I thought it was a good experience, going through all the steps of production. SM: What was your project? LA: Professor Levey thinks that WISP interns, who haven't had a lot of engineering background, have really fresh ideas. So over winter break, he had me brainstorm about the things I would think were really cool to make at a very small size. Then we had a couple of brainstorming sessions to come up with the project idea. Going from that step to a finished model and learning how to make it was really fascinating.
It really showed me a lot about what engineering is all about. Basically, what we ended up doing after our brainstorming sessions was biomimetics – engineering an artificial thistle. Biomimetics is imitating biology. We wanted to look at what a thistle did to hook onto clothing. We used SEM (Scanning Electron Microscopy) to look at cotton and a thistle: how the thistle blends into the cotton cloth, and the size and proportion. We came up with a process using photolithography and layering, which he uses to make micro-robots. We manufactured double-ended metal hooks to accomplish the same process as a thistle attaching to cloth. It differs from Velcro in that Velcro has two parts to it. The hook has just two sides and grabs onto fabric. What we want it to do is to attach stuff to fabric. We are still thinking about the application: it depends on what direction we take, what the final product will look like. SM: What techniques did you use to make them? LA: There is this program called L-Edit: layout editing. We make the whole layout, multiply our hooks, put hundreds on disks, and send them off to get printed on a really fine printer. Using photolithography, you imprint layouts onto silicon wafers. You end up with a silicon wafer sketch of little hooks. Then you apply layerings of photoresist and you can etch a pattern through the wafer. Then you electroplate to apply metal. The tricky thing is that it is done at such a small scale. SM: What do you think about WISP? LA: It's a great program. I think it helps undergraduate women get exposure to the sciences from the beginning. For most of the projects, I interviewed with men – there aren't enough women in science; more than half of the WISP sponsors are men. I think it also really shows you what you would do in the field of study or profession. SM: Have you taken any introductory science classes? Do you find working in the lab different from those courses? LA: I've taken Physics 13 and 14. It is very different. You're working on a real problem, researching something that people don't understand. It's not a science class; you are not going in to learn the thermodynamics or physics that you need to make something. You learn how it works as you go. SM: What excites you about research? LA: You solve current problems that need addressing. [I enjoy] the newness of everything: exploring uncharted territory in some way, and taking science from the classroom and applying it to the real world. In engineering especially, your goal is to make products that can be used by people.
SM: How would you describe your experience? LA: I kind of went in with a blank slate, to experience what I could and to see what it was like. I think my favorite part was when we used the SEM and I got to look closely at cotton and thistles, and at the hooks we made. It's really fascinating to look at something this small and to make it yourself, facilitated by this really powerful microscope.
Stephanie Trudeau ’09
Stephanie, a Computer Science major and engineering minor, was a WISP intern with Prof. Sean Smith (CS). "Big money and inside jobs: Protecting Wall Street through effective authorization management" SM: What was your project about? ST: My internship was in the Computer Science department, in the PKI [Public Key Infrastructure] lab. Sean Smith is the faculty advisor, and there are a bunch of grad students and post-docs. I worked with a post-doc named Clara Sinclair. I get to work with a lot of people, which is nice. I think they've done a good job with WISP. I will continue doing research here in the future, and I'm going to a conference in Virginia. In the PKI lab, there are five WISP interns. They have us talk about our projects with each other weekly, even though all of us have different projects. We get practice talking about [our research], and they want all of us involved in writing at least one paper. I think it's really cool how they haven't limited what we do and have allowed us to go in any direction we're interested in. I'm working on security in the financial industry. The problem we're working on is corporate fraud. Every financial institution, e.g. an investment bank, has a computer system that needs to give permission to all the workers, who need access to the applications that allow them to do their jobs. Different projects need different accesses, what we call 'entitlements.' You gather entitlements as you go along, but as someone is promoted or moves around internally within the organization, they might gain another entitlement that allows them to misuse the information they were allowed to access.
This is called a toxic combination – when you have different entitlements that together allow you to commit fraud. We're working on reducing toxic combinations. There are a lot of complications, such as the massive size of these places. On Wall Street, a firm can have 10,000 employees located around the world. It's a very large-scale problem. Another problem with the financial corporations specifically is the annual turnover rate of 25%, which is a huge number. [And the number of] people moving around internally in the organizations [is huge]. You constantly need to keep updating system accesses, and if you fire someone, you don't want them to have access to your information, especially if they're going to go work for your competitors. How fast do you need to update the system? One [potential] solution is using role-clustering algorithms. It is too complicated to look at each individual person and grant them entitlements. You can't monitor every single person, because there isn't just one person who has that job. It's a good idea to look at everyone who has the same job – we call it a 'role.' Everyone who's working on project A needs entitlements X, Y and Z. So we're going to grant these entitlements to everyone who works on project A. It gets complicated when you introduce more information. There could be a company in Hong Kong that has rolled out a new application that other companies aren't using yet. This might mean that they have different people with these entitlements, or you might have people with new roles. Or they might not have the same roles any more.
You can't just change entitlements for everyone who has that role in the corporation, because only the company in Hong Kong has changed. So, you need to look at each company individually. But you can't do that either, because it's very common to have consultants come in and work on one project and then leave. Once they finish, you want to revoke their accesses, but you can't revoke those of everyone in the company. So, within each group, you need to separate the consultants from the people who are in the company permanently. With more information, you need to look at people more and more individually. We are rapidly approaching individual entitlement-clustering, which is not effective. We're trying to figure out if it is feasible to define roles in such a way that you can grant accesses and entitlements based on someone's role. We have partners in the financial industry, and we're hoping we can get information from one of them, run through a simulation, and see what problems come up. Maybe we'll find that our program just isn't the reasonable way to approach this problem, and then we'll have to go at it from a different angle.
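The role-clustering idea can be illustrated with a toy example. The sketch below simply groups users who hold identical entitlement sets into candidate roles; it is an assumed, minimal formulation, not the PKI lab's algorithm, which must also cope with the consultants, regional differences, and turnover described above.

```python
# Toy role mining: users with identical entitlement sets form one
# candidate role, and the role is then granted that shared set.
from collections import defaultdict

users = {
    "alice": {"trade_entry", "market_data"},
    "bob":   {"trade_entry", "market_data"},
    "carol": {"payroll_write"},
}

roles = defaultdict(list)
for user, entitlements in users.items():
    roles[frozenset(entitlements)].append(user)  # key = entitlement set

for i, (entitlements, members) in enumerate(roles.items(), 1):
    print(f"role {i}: members={members}, entitlements={sorted(entitlements)}")
```

Real role mining has to tolerate approximately matching entitlement sets, which is exactly where it gets hard: the more exceptions you admit, the closer you drift to the per-person clustering Stephanie warns is ineffective.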
SM: How exactly do you fight corporate fraud? ST: [Corporate fraud] occurs when you have a toxic combination. Let me give an example: if you're allowed to write data to the system, then maybe that's one entitlement: to write data. Maybe another person needs the entitlement to read the data from the system, and they can read that data; [however], if you become the person that can both read and update it, you might be able to misuse that. Another example would be if you're the payroll person writing all the checks and the boss has to sign the checks. If somehow you are granted the permission to write the checks and sign them, then you can just write checks to yourself. That's what we mean by a toxic combination: something that allows you to misuse the entitlements you've been given. The way to fight that is to be able to update as accurately as possible, giving [employees] enough, but not more than they need to do their job. As the corporations move through time, they are going to change – the way they do business, what jobs people do – so the system needs to change too.
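Stephanie's check-writing example maps directly onto a separation-of-duties rule. A minimal sketch, with invented entitlement names:

```python
# Minimal separation-of-duties check: flag any user whose entitlements
# contain a known "toxic" pair, such as the check-writing example.
TOXIC_PAIRS = [
    {"write_check", "sign_check"},   # could pay themselves
    {"read_data", "update_data"},    # could read and alter the same records
]

def toxic_combinations(entitlements: set) -> list:
    """Return every toxic pair fully contained in a user's entitlements."""
    return [pair for pair in TOXIC_PAIRS if pair <= entitlements]

print(toxic_combinations({"write_check", "sign_check", "market_data"}))
# Prints the write/sign pair (element order within the set may vary).
```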
SM: Why did you do this internship? ST: What I found so interesting about my internship, when I read the description, was that it was a very real-world problem. [It's great] to be able to fix things that people use now, to make them better, helping people out and solving problems.
Ashley Walker ’10
Ashley was a WISP intern with Prof. James LaBelle (Physics) "Developing tools for disseminating radio receiver data over the Internet" SM: What's your project? AW: I'm working with James LaBelle from the Physics and Astronomy Department. He's been working with Aurora Borealis/Australis [phenomena] for 10-15 years. He basically interprets the radio signals he picks up from auroras at certain times. I was looking at radio receiver data, picked up from the auroras, from a particular site in Canada. I'm also trying to get a bunch of stuff organized and put it all online, so I wrote some [computer] programs to be able to do that on Linux. Basically, I've never taken computer science or physics. This was all new for me. So I'm a big champion of the whole idea of WISP, because for me it was such an educational experience. I was taught how to use Unix/Linux. I'd sit at a computer in the library and write programs. Shell scripting is a pretty easy format for programming. The good thing about Linux is that it has file-sharing systems, so you can log in no matter where you are. I'd just log in to his computer and access all his data, make it into a format that is accessible, and put it online. SM: What really excited you about the project? AW: Initially, I guess because the aurora is an exotic topic for me. I've never seen an aurora in real life. Initially, when I talked to Professor LaBelle and I looked at this diagram of a radio emission from an aurora, I thought it was just scribbles and blips on a radar screen. Then [I] learned that it actually explains how and why auroras work, and that you can learn the structure and characteristics of the aurora from it. Eventually I started to find them beautiful. Earlier they were black and white pictures of nothing.
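Ashley's pipeline – pull raw receiver files off the lab machine, convert them, and post them online – was built from shell scripts; the sketch below restates the same idea in Python purely for illustration. Every file name, format, and path here is hypothetical.

```python
# Illustrative only: walk a directory of raw receiver data files,
# write a converted copy into a web-served directory, and build a
# simple HTML index so the results can be browsed online.
from pathlib import Path

RAW_DIR = Path("receiver_data/raw")    # assumed location of raw dumps
OUT_DIR = Path("public_html/aurora")   # assumed web-served directory
OUT_DIR.mkdir(parents=True, exist_ok=True)

links = []
for raw in sorted(RAW_DIR.glob("*.dat")):
    out = OUT_DIR / (raw.stem + ".csv")
    # Stand-in "conversion": real code would parse the receiver format.
    out.write_text(raw.read_bytes().hex()[:64])
    links.append(f'<li><a href="{out.name}">{out.name}</a></li>')

(OUT_DIR / "index.html").write_text(
    "<html><body><ul>" + "".join(links) + "</ul></body></html>"
)
```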
SM: Has this heightened your interest in science? What do you now think about research in the sciences? AW: Definitely. I feel like I have the capacity to learn science, which is a nice feeling. I learned that [research] is a very multi-faceted process. Originally, I thought it was just mice in a lab. But now I know that scientists like Professor LaBelle do all sorts of different things. SM: How do you see what you did in the lab being applied to the real world? AW: I learned that one burst of aurora produces as much power as the entire energy grid of North America. It's very powerful. If we could harness that energy – which we probably cannot do in the next 100 years – it would be amazing. It represents a lot of unharnessed, poorly understood energy. I think that it could happen, and the only way we could make it happen is by understanding what the aurora actually is. I feel any time we try to understand the natural world, it can only be beneficial to human society. SM: What is your favorite part of the project or your research? AW: I enjoyed writing programs and working with the data. I now know the earth has a magnetic field and I can calculate various things about it. In the second portion, I was looking to see where the auroras were and where they were coming from. It's really cool that we can do that from the ground. I've learned a lot about theoretical physics and playing with equations. But working with LaBelle, finding out that there are people willing to work with me and teach me, and that it can be fun, would probably be the best part. I think that the relationship I've developed with my sponsor is the most important thing. He is amazing.
The author would like to thank all the interns who were interviewed for sharing their time and their experiences.
References
1. Women in Science Project at Dartmouth College [Internet]. Hanover (NH): WISP; c2007. Wetterhahn Undergraduate Science Poster Symposium [cited 2007 Sept 7]. Available from: http://www.dartmouth.edu/~wisp/wetterhahn/index.html
Environmental sciences
Geographic and Demographic Considerations:
Does irrigation development decrease local malaria infection rates?
NICHOLAS WARE '08
An Anopheles minimus mosquito feeding on a human host. Note the penetration of the proboscis, as well as the blood already ingested. An. minimus is one species responsible for spreading P. falciparum. Image courtesy of the Centers for Disease Control and Prevention
Abstract
In sub-Saharan Africa, arid soil coupled with a rapidly increasing demand for food has driven the development of small and large-scale irrigation schemes. Irrigation development has the potential to increase or decrease local malaria infection rates, and this paper uses two conflicting case studies to identify four factors that largely control the effect of irrigation development on malaria infection rates: 1) the baseline (pre-irrigation) characteristics of malaria transmission, 2) the baseline length of seasonal malaria outbreaks as determined by temperature and rainfall patterns, 3) the local population composition and distribution, and 4) the effect of the irrigation scheme on the population's socioeconomic status. This paper suggests that as governments, farmers and citizens increasingly fund and support irrigation development as a way to increase food security and promote economic growth, steps should be taken to foresee the effects that proposed projects may have on local malaria infection rates.
Keywords: Malaria; poverty; irrigation development; urban migration; rainfall; rice irrigation; sub-Saharan Africa
Introduction
Malaria kills more than one million children annually.
The parasitic disease also hinders the physical and social development of children, decreases worker productivity and household income, suppresses economic growth, affects the movement of people and supports the stabilization of other ailments within populations. Malaria remains both a cause and a result of poverty, and the poorest 40% of the world's people are chronically at risk of contracting malaria (see appended Figure 1).
Unrestricted access to insecticide-treated bednets (costing about 10 USD per net) has decreased all-cause under-5 mortality by 60% within some African countries (1). Fully 90% of the worldwide deaths due to malaria occur in sub-Saharan Africa, where arid soil coupled with a rapidly increasing demand for food has driven the development of small and large-scale irrigation schemes. Nearly 50% of sub-Saharan Africa's land receives too little precipitation to sustain rain-fed agriculture and, in 2001, irrigation schemes existed for only 4% of Africa's arable land. Between 1990 and 2020, the area of irrigated land in sub-Saharan Africa is projected to increase by 33% (2).
Small and large-scale irrigation development has the potential to influence local malaria infection rates. Analyzing the effects of irrigation development on local malaria infection rates demands that we consider many geographic and demographic factors. I will approach the issue by 1) providing a brief epidemiological background of malaria as it pertains to irrigation development, 2) considering two sharply conflicting case studies which demonstrate that irrigation development can decrease or increase local malaria infection rates, and 3) identifying and offering a potential resolution to the controversy which emerges from the aforementioned case studies. Irrigation development affects local malaria rates differently depending primarily upon the baseline (pre-irrigation) characteristics of malaria transmission, the baseline length of seasonal malaria outbreaks as determined by temperature and rainfall patterns, the local population composition and distribution, and the effect of the irrigation scheme on the population's socioeconomic status.
Epidemiology of Malaria and Irrigation Development
Humans and mosquitoes play essential roles in the malaria cycle (see Figure 2). The disease is transmitted to humans when an infected female Anopheles mosquito takes a blood meal and parasites enter the human's bloodstream. Of the four types of human malaria parasite, Plasmodium falciparum is the most deadly and also the most common in sub-Saharan Africa. Irrigation systems in Africa mainly support the growth of rice, sugar cane and cotton. Rice is grown on flooded ground, which provides an ideal breeding ground for mosquitoes and often significantly increases the Anopheles mosquito population. Similarly, cotton, sugar cane and wheat irrigation schemes can also increase Anopheles populations when systems are not properly maintained and operated (2). An increase in the Anopheles mosquito population density has the potential to cause a surge in infection rates. Within sub-Saharan Africa, the prevalence of malaria varies greatly, but a strong correlation exists between a region's annual rainfall and the length of its longest season of transmission (see Figure 3). Much of sub-Saharan Africa experiences a hot, dry season, in which malaria transmission rates are significantly lower than during the cooler, rainy season. This is largely because P. falciparum dies inside the mosquito when exposed to extreme heat. In fact, a study in The Gambia found that when daytime temperatures reached 40 °C, few infective mosquitoes were captured (3). Interestingly, irrigation development can either shorten or extend the length of a population's longest malaria transmission season. For instance, the introduction of basic treadle pump irrigation, which uses bamboo for suction to pump water from shallow aquifers, raised annual incomes by more than 100 USD per year for the average participating household in eastern India and Bangladesh.
The length of the longest transmission season significantly decreased in areas using the pumps, while annual rainfall patterns remained relatively constant. Furthermore, and more importantly, overall local malaria infection rates decreased nearly threefold in the regions that economically benefited from using the treadle pumps (4). By contrast, the infamous Gezira-Managil irrigation scheme of Sudan extended the length of the region's longest malaria transmission season by permitting the irrigation of wheat throughout the dry season and thus creating new mosquito breeding grounds. Before irrigation development, the land of Gezira was mostly left unsown during winter months and malaria was not a problem during any part of the year. However, the Gezira scheme led to a staggering 20% increase in the local malaria infection rate, and the disease began to plague the population year-round (5). The Gezira scheme, introduced into an area of low baseline malaria transmission, draws attention to the epidemiological role that a lack of immunity to P. falciparum can play. When irrigation development occurs in low-malaria or malaria-free areas, such as some of the desert fringes and highlands of sub-Saharan Africa (see Figure 3), malaria infection rates often increase because the population lacks immunity to the parasites (2). While an increase in Anopheles population density instigated a malaria problem in Gezira, large Anopheles populations are also associated with lower local infection rates, due to factors that the first case study will now explain.
Case Study One, Lower Moshi, Tanzania: Irrigation Development Associated With Lower Rural Malaria Infection Rate
A study conducted by Ijumba et al. (6) examined three relatively isolated villages located on the foothills of Kilimanjaro in the Lower Moshi area of Tanzania. Each rural village practiced a single type of agriculture, and the study found that the two villages which utilized irrigation had the lowest malaria infection rates. Infection rates in children under 5 years of age were determined for the three villages, which practiced 1) traditional, rain-fed maize agriculture (TMA), 2) irrigated sugarcane agriculture (ISA) and 3) irrigated rice agriculture (IRA). Within the ISA and IRA villages, the prevalence of malaria was lower than in the TMA village. A significantly higher Anopheles mosquito density was recorded in the two villages with irrigation, and the mosquito density was especially high in the flood-irrigated IRA village. The annual infection rates were recorded as follows: TMA 29.4% infected, ISA 16.9% and IRA 12.5% (6). During the rainy season, malaria rates significantly increased in the TMA village, slightly increased in the ISA village and did not increase in the IRA village.
While the community with rice irrigation had a relatively constant incidence of malaria year-round, the malaria pattern of the maize-producing community was closely correlated with the region's seasonal rainfall distribution (6). Several factors may explain why the IRA village, despite its high Anopheles population density, demonstrated resistance to the seasonal malaria outbreaks that negatively affected the other two villages. Numerous studies have shown that ricefield irrigation communities in sub-Saharan Africa and Southeast Asia often have relatively high levels of access to anti-malarial medicines, insecticide bed nets and medical care (2). Rural ricefield irrigation development often economically benefits the population near the irrigation scheme. In the Lower Moshi case study, for instance, the rice-producing village was indeed the wealthiest of the three studied, and 65% of people living in the IRA village were rice farmers. Improved nutritional status could also have played a role in the IRA community's ability to resist seasonal malaria outbreaks and the disease in general. The children of the IRA village were heavier and taller than the children of the savannah village and likely had access to more nutritious foods (6). While the IRA village was found to be significantly wealthier than the TMA village, a drastic economic divide was not found when comparing the IRA and ISA villages. However, the ISA village used considerably fewer insecticide-treated bednets than the IRA village. In addition, migrant workers from outside regions may have increased the malaria rate in the sugarcane-producing village.
Worldwide distribution of Anopheles malaria vectors.
The study found that migrants, who were employed by the local sugarcane industry, composed more than 60% of the ISA village's population. Immigration can significantly contribute to malaria transmission, especially when immigrants introduce new strains of parasites that are resistant to anti-malarial drugs and foreign to the local population (7). The comparative analysis of the three villages in the Lower Moshi is not flawless. The TMA population, which was interpreted as having high malaria rates due to its relatively poor socioeconomic status, is located near a dam. This may contribute to the local Anopheles population density and thus increase the TMA village's infection rate. Secondly, without local baseline infection rate information, the effect that the introduction of irrigation development had on malaria rates in the IRA and ISA villages cannot be ascertained. Perhaps the malaria rates were lower in the IRA and ISA communities before irrigation development occurred. However, the study conducted in Lower Moshi remains useful because it demonstrates that, on a regional scale, a high density of potentially infective Anopheles mosquitoes can be negatively correlated with the incidence of malaria infection, and thus irrigation development can be associated with lower local infection rates.
Image courtesy of the Centers for Disease Control and Prevention
Case Study Two, Kumasi, Ghana: Irrigation Development Associated With Higher Urban Malaria Infection Rate
In Ghana, malaria accounts for 45% of hospital visits and nearly 25% of under-5 mortality (8). A study conducted by Afrane et al. (8) in Ghana's second largest city classified ten select areas of Kumasi into three land types and examined the rate of infection within each land type. Malaria rates were determined through adult sampling and household surveys in 1) urban areas without agriculture (UW), 2) urban areas with agriculture (UA), and 3) peri-urban areas with some rain-fed agriculture (PU). As in the case of the Lower Moshi, denser Anopheles populations were found near the irrigation schemes. In Kumasi, however, a significantly higher incidence of malaria occurred near the dense Anopheles populations. Five times more malaria occurred on UA land than on UW and PU land during Kumasi's dry season (8). The positive correlation between Anopheles density and malaria prevalence may be due to socioeconomic factors. For instance, people living on UW and PU land types owned far more mosquito screens than people living on the UA land. The mosquito-screened windows and doors, which the researchers studied in detail, offered significant protection against malaria for all age groups (8). Additionally, people who contracted malaria while living on UA land were by and large citizens who had no involvement with the nearby irrigation development. Urban malaria is more complex and heterogeneous than rural malaria because the urban poor often live within isolated pockets of malaria. The urban poor often suffer when living near irrigation development because they do not benefit from the agricultural operations and therefore do not gain access to anti-malarial measures (9). Hydro-geographic factors could also help explain the positive correlation between Anopheles density and malaria presence in Kumasi, where many farmers involved with irrigation occupy lowlands and obtain water by redirecting streams into manmade wells. The wells undoubtedly provide an ideal breeding ground for Anopheles mosquitoes. However, because the irrigated urban agriculture of Kumasi is situated on the low-lying, wettest regions of the city, the lack of known baseline malaria infection characteristics becomes especially detrimental to this case study. The UA land in Kumasi would almost certainly have the most vector potential even if no irrigation development had occurred on any of the studied areas. Additionally, the analysis of the data failed to consider the location of any study area in relation to the Subin, an urban river which runs through Kumasi (10). A demographic factor which likely affects local malaria rates of infection in Kumasi is immigration. Kumasi, with its population of 1.2 million people, is central to a growing timber industry, as evidenced by the increasing number of sawmills, plywood plants and furniture factories within the city. The industrial growth and its associated labor force have attracted many immigrants from the Ashanti region of Ghana (11).
Ashanti has a high incidence of malaria, and drug-resistant strains of Plasmodium falciparum thrive in the region. In 2004, a study conducted during the rainy season found that 48.8% of the population living in Ashanti was infected with P. falciparum (12). Perhaps the low-income community near the irrigation scheme attracted more migrants than the higher-income areas considered in the case study. A thorough demographic analysis is required to determine the effect of migration on malaria transmission in the land type areas of Kumasi. Unlike the Lower Moshi study, the Kumasi study did not consider migration or attempt to determine the composition of any sampled populations. However, despite its several shortcomings, the Kumasi case study demonstrates that, on a regional scale, a high density of infective Anopheles mosquitoes can be positively correlated with the incidence of malaria infection, and thus irrigation development can be associated with higher local infection rates.
Controversy
Under what circumstances does irrigation development decrease local malaria infection rates? Investigation of this controversial question through the two conflicting case studies revealed the complexity of the inquiry. Without knowing a population’s baseline malaria infection characteristics, the question proves impossible to answer in retrospect. Research strongly suggests that the geographic and demographic characteristics of an area are paramount in determining how the introduction of irrigation development will affect the local population’s malaria infection rate. Given that malaria kills nearly one million African children each year (1) and that (between 1990 and 2020) the area of irrigated land in sub-Saharan Africa is projected to increase by 33% (2), the ability to make logical and accurate predictions regarding how an irrigation development project will alter malaria infection rates within a population could save thousands of lives within sub-Saharan Africa alone.
Possible Resolution of Controversy
When planning to introduce an irrigation scheme into an area, malaria-related irrigation development research must be considered. Determining whether or not an added source of stagnant water will likely induce a decrease in local malaria infection rates requires the consideration of the aforementioned geographic and demographic factors: (1) the baseline characteristics of malaria transmission, (2) the baseline length of seasonal malaria outbreaks as determined by temperature and rainfall patterns, (3) the local population composition and distribution, and (4) the effect of the irrigation scheme on the population's socioeconomic status. When the baseline rate of malaria infection within a community is very low, as was the case in Gezira, Sudan, the local population generally has limited immunity to P. falciparum, and increasing the Anopheles mosquito population density should be expected to increase local infection rates.
Even if irrigation development economically empowers a population with a low baseline infection rate and allows individuals with limited immunity to access anti-malarial measures, the incidence of malaria cannot significantly decrease compared to the low baseline rate. The seasonal distribution of malaria outbreaks (Figure 3) within a population plays a key role in determining whether or not a local infection rate decrease will occur. If the irrigation scheme will provide mosquitoes with stagnant water during a naturally dry, low-malaria season, and individuals employ the same protective measures against Anopheles bites before and after the irrigation development occurs, then infection rates may be expected to increase. This phenomenon may have occurred in Kumasi, where the second case study found the local rate of malaria infection near the urban irrigation scheme to be especially high during the dry season compared to non-irrigated areas. The local infection rates were more consistent across Kumasi during the cooler, wet season (8). Alternatively, if accompanied by increased access to anti-malarial measures, irrigation development can decrease dry season infection rates despite the increased volume of stagnant water and higher-than-baseline Anopheles density. This phenomenon occurred when treadle pump irrigation was introduced to regions of India and Bangladesh (4). The local population composition and distribution also plays a role in determining how irrigation development will affect local malaria infection rates. For instance, if an irrigation scheme is likely to attract migrant workers or is introduced into an area with migrant workers, drug-resistant strains of P. falciparum may increase local infection rates (7). In addition, the identity of the population living near the irrigation scheme must be considered. In Lower Moshi, Tanzania, over half of all community members were farmers. In Kumasi, however, while urban agriculture produces 90% of the lettuce, cabbage and spring onions eaten in the city, those living near the irrigation schemes were not involved with the agriculture. This dichotomy has socioeconomic significance, as discussed below. Of all the important geographic and demographic considerations, predicting the effect of population composition on the introduction of irrigation development may prove most problematic. The effect that irrigation development will have on a community's socioeconomic status is perhaps the single most critical determinant of whether or not the scheme will decrease local malaria infection rates. For instance, if the population living in the vicinity of a proposed irrigation scheme will fail to benefit from the agriculture, then malaria rates should not be expected to decrease in the population. However, if the population would benefit from the increase in agricultural production, then local malaria rates may be expected to decrease due to improved access to anti-malarial measures and a more nutritious diet.
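The qualitative logic of these four factors can be summarized as a rough screening checklist. The sketch below encodes the paper's stated expectations as simple rules; the flag names are illustrative inventions, and a real assessment would require a field study rather than a script.

```python
# Rough encoding of the paper's four factors as qualitative rules.
# Inputs are judgments a planner would form about a proposed scheme.
def expected_malaria_trend(low_baseline_transmission: bool,
                           extends_dry_season_breeding: bool,
                           attracts_migrant_workers: bool,
                           locals_benefit_economically: bool) -> str:
    if low_baseline_transmission:
        return "increase likely (little immunity, cf. Gezira)"
    if extends_dry_season_breeding and not locals_benefit_economically:
        return "increase likely (longer season, no added protection)"
    if attracts_migrant_workers and not locals_benefit_economically:
        return "increase possible (imported, possibly resistant strains)"
    if locals_benefit_economically:
        return "decrease possible (bednets, medicine, better nutrition)"
    return "no clear change expected"

# A Lower Moshi-like scenario: endemic area, locals farm the rice and profit.
print(expected_malaria_trend(False, False, False, True))
```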
As African governments, farmers and citizens increasingly fund and support irrigation development, steps should be taken to foresee the effects that proposed schemes may have on local malaria infection rates. The primary purpose of introducing irrigation into an area is to produce more food, not to decrease the prevalence of malaria. However, basic research of an irrigation development proposal, involving the examination of baseline infection characteristics, may offer qualitative, yet life-saving insight.
References
1. J. Sachs and P. Malaney, Nature 415 (7), 680 (2002).
2. J. Ijumba and S. Lindsay, Medical and Veterinary Entomology 15 (1), 1 (2001).
3. S. Lindsay, et al., Journal of Tropical Medicine and Hygiene 94 (5), 313 (1991).
4. T. Shah, et al., Research Report Series 45 (2000). Available at http://www.iwmi.cgiar.org/Publications/IWMI_Research_Reports/PDF/Pub045/Report45.pdf.
5. J. Keiser, et al., American Society of Tropical Medicine and Hygiene 72 (4), 392 (2005).
earth sciences
Ooid Production and Transport on the Caicos Platform
LAUREN EDGAR '07
Abstract
Limestone is common in rocks from all geologic periods of the Phanerozoic eon as well as in many Proterozoic assemblages (1). The mineralogic and fabric character of these limestones generally reflects the complex biological, physical, and climatic character of the depositional systems under which they were created. Of particular interest are ooids – nonskeletal grains that precipitate predominantly in the warm, shallow waters of the tropics (1). Ooids are defined as spherical, concentric accretions of calcium carbonate, usually less than 2 mm in diameter, developed around a nucleus of some previously existing particle. By understanding modern ooid-producing environments, we can begin to interpret and understand ancient carbonate facies and sediments that include such particles. In a field research trip to Providenciales, Turks and Caicos, samples of oolitic limestone were collected at a series of successive Pleistocene and Holocene beach ridges, and brought back for analysis. Compared to samples from the most recent beach ridges close to the modern shore, samples from older beach ridges further inland, about 1.2 km away, show a general trend of decreased porosity, increased weathering, and higher coral content, with minor variations in ooid size. Observations were also made regarding ooid transport direction, visible as highly reflective sub-aqueous sand bodies in multispectral scanner satellite and high-resolution aerial photos. The ooids studied in this project are produced along the southeastern coast of Providenciales and transported westward by longshore currents along the coast. This allows for a progressive sorting of ooid size along Long Bay Beach, with larger, coarser particles observed near the northeastern end of the beach, and smaller, finer ooid particles observed at the southwestern end.
Introduction
Ooids
Ooids are spherical or ellipsoid concretions, usually less than 2 mm in diameter, of calcium carbonate (commonly aragonite) crystals arranged around a nucleus – typically a small particle such as a quartz grain or shell fragment (2, 3). The crystals can be arranged radially, tangentially, or randomly. Figure 1 illustrates the difference between radial and tangential classifications. The arrangement of the calcium carbonate within the ooid is generally dependent upon the process by which it was formed: physical processes produce ooids with concentric lamellae, and chemical processes produce ooids with radiating crystals (4). Some geologists recognize a third type of ooid structure – the "recrystallized structure." In this type of ooid, large irregular crystals either converge toward the center or have no special orientation (5). In addition to variations in structure, ooids can also be classified as lacustrine or marine in origin. A prime example of lacustrine ooid formation is the Great Salt Lake in Utah. Lacustrine ooids are primarily composed of aragonite, and are often dull, have a radial fabric, and may have a bumpy surface, known as a "cerebroid" surface (1). Marine ooids are generally found in shallow, warm marine environments, often at low latitudes. These ooids usually consist of aragonite rods without crystallographic terminations, oriented tangentially (parallel to the ooid lamination). Aragonite rods in marine ooids have an average length of 1 micron and a maximum length of 3 microns (5).
Marine ooids, as opposed to lacustrine ooids, tend to have higher surface polish and tangentially arranged crystals.
Formation
Marine ooids are formed in shallow marine environments, such as the tropics, in waters that are supersaturated with calcium carbonate. These waters are approximately two meters in depth, although ooids can form in waters of up to 10 to 15 meters in depth (6). Formation begins when a small sand grain, shell fragment, or other particle becomes coated with calcium carbonate and the particle is kept in suspension by wave or tidal action; this process is sometimes referred to as the "growth phase" (7). The growth phase alternates with a period known as the "temporary resting phase," in which no new material is accreted. Ooids gain successive layers by alternating between growth and resting phases. During the growth phase, ooids are suspended in the water due to turbulence and collide with other particles. Researchers have determined that the mass lost per impact with another object increases as the cube of the radius, but the mass gained from growth is proportional to the square of the radius. The net result is that eventually, the mass loss will equal or exceed the mass gained, limiting the size of the ooid (8).
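This scaling argument can be written out explicitly. In minimal form (the coefficients and the impact frequency below are assumed notation for this sketch, not quantities from the cited study), the mass balance per unit time reads:

```latex
% Accretion adds mass over the surface (~ r^2); each of the f collisions
% per unit time removes mass scaling with volume (~ r^3).
\frac{dm}{dt} = \underbrace{k_g\, r^{2}}_{\text{accretion}}
              - \underbrace{k_l\, f\, r^{3}}_{\text{collision loss}}
\qquad\Longrightarrow\qquad
r_{\max} \approx \frac{k_g}{k_l\, f}.
```

Growth stalls at the radius where the two terms balance; both the accretion coefficient and the impact frequency depend on agitation, so the observed size records which of the two rises faster.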
Figure 1: Two main types of ooids. Left: tangentially arranged crystals; Right: radially arranged crystals.
Numerical models produced by Sumner and Grotzinger (1993) indicate that ooids have a larger radius in higher-velocity environments. In other words, the size of ooids is limited primarily by agitation from wave or tidal action. Ultimately, the ooid falls out of suspension and is washed ashore, where it becomes part of the modern beach.
Ooid-Producing Environments
As previously noted, ooids are generally formed in shallow marine environments, when waters become supersaturated with calcium carbonate. Monaghan and Lytle (1956) determined that the most desirable concentration range for ooid formation is between 0.002 moles/liter and 0.0167 moles/liter of calcium carbonate. In addition to concentration, agitation plays a key role in the formation process. Ooids are typically formed "in agitated waters where they are frequently moved as sandwaves, dunes and ripples by tidal and storm currents, and wave action" (9). Examples of ooid-producing environments include the Great Bahama Bank, the Gulf of Suez, the Persian Gulf, the Yucatan shelf of Mexico, and Shark Bay, Western Australia. Another prime example, and the basis of this study, is the Caicos Platform, located southeast of the Bahamas. In 1989, geologists Harold Wanless and Jeffrey Dravis led a field trip to Turks and Caicos and established four basic settings on the Caicos Platform in which concentric ooids occur. The first is the shallow subtidal swash zone, on the interior of the platform; this refers to areas such as the southeastern coast of Providenciales. The second is an area along a large shoal (the Ambergris Shoal) which experiences both wind-wave and tidal agitation. The third setting refers to shallow tidal shoals along the southwestern edge of the platform, and the fourth is the shoreline, including sediment and beach-swash (10).
Field Research in Turks & Caicos
A field research trip similar to the one led by Wanless and Dravis (1989) was conducted in February of 2006, on the island of Providenciales.
The goal was to investigate models of mixed carbonate-evaporite deposition as well as the effects of tectonics and climate on modern sedimentation. Furthermore, the field trip aimed to study modern ooid-producing environments as a basis for interpreting and understanding ancient carbonate facies and sediments that include such particles.
Methods
The samples used in this study were collected along Long Bay Beach, Providenciales, at successive Pleistocene beach ridges (Figure 2). Loose oolitic sand was collected by hand, and oolitic limestone samples were taken at road cuts using a rock hammer. The samples were then impregnated with epoxy, sliced and polished into thin sections, and analyzed using an optical microscope.
Findings
Compared to samples from the most recent beach ridges close to the modern shore, samples from the older beach ridges further inland show decreased porosity, increased weathering, and higher coral content, with minor variations in ooid size. The samples span a distance of about 1.2 km from the modern shore to the oldest beach ridge, and represent up to six beach ridges. Additionally, samples along Long Bay Beach were compared with samples at Extreme Point to characterize ooid transport in this region.
Figure 2: (Left) Thin-section photomicrographs taken under plane-polarized light (left) and cross-polarized light (right). Photomicrographs shown here are taken from beach ridges (TC6 R1-3) with loose oolitic sand (TC6LBB) for comparison. Note the decreasing porosity, increased weathering and higher content of coral fragments in successive samples. Scale is the same for all photomicrographs. (Right) Photomicrographs of older beach ridges (TC6 R4-6). Scale is the same for all photomicrographs.
Porosity and Cementation
The younger beach ridges (Figure 3, samples TC6 R1 and TC6 R2) have notably greater porosity and less cementation than the older beach ridges. The youngest samples have approximately 40% porosity, while the older ridges exhibit as little as 5% porosity. Consequently, the older samples contain much more calcium carbonate cement than the younger samples. This progression is evident in samples TC6 R1-6. It is also interesting to note that TC6 R1 (Figure 5) revealed signs of banding – zones of more cement and zones of less cement (more pore space).
Weathering and Ooid Structure
As expected, the older ridges show more signs of weathering, whereas the younger ridges feature more intact ooids. The younger ridges have clear outlines and unfragmented particles, as opposed to the broken and dissolving cement and particles found in sample TC6 R6.
Coral Fragment Content
In addition to increased cement and increased weathering, the older beach ridges contain more coral fragments. This is evident in samples TC6 R5 and TC6 R6, as well as a sample collected on a previous research trip from a much older ridge, T&Cp1 (Figure 4). Only about 5% of sample TC6 R6 is composed of debris from coral reefs, but this amount is significantly greater than in samples TC6 R1-4, which contain no coral fragments.
Ooid Size
Although there are variations in ooid size among the successive ridges, there is no distinct trend. Average sizes range from 0.3 to 1.5 mm in diameter. However, there is a notable size difference between samples found along Long Bay Beach and at Extreme Point. Ooids in the Extreme Point sample (Sample TC6 EP+20, not shown) are visibly much smaller than those found further to the northeast along Long Bay Beach: approximately 0.25 mm in diameter at Extreme Point, compared to approximately 0.5 mm at Long Bay Beach.
Transport
Finally, observations were made regarding ooid transport direction, visible as highly reflective sub-aqueous sand bodies in multispectral scanner satellite and high-resolution aerial photos. Observations revealed that the ooids are transported westward along the coast.
Figure 3: (above) Thin-section photomicrographs taken under plane-polarized light (left) and cross-polarized light (right). T&Cp1 was collected in 2001, and represents a different population of ooids, hence the smaller size. T&Cp1 is the oldest beach ridge examined in this study, and shows a greater content of coral fragments.
Figure 4: (right) Thin-section photomicrographs taken under plane-polarized light (left) and cross-polarized light (right). Examples of decreasing ooid size, and increased weathering.
Discussion
Findings
The results were consistent with our predictions. Porosity was expected to decrease with age, as the ridges become more compacted. Cement was also expected to increase with age because of the time it takes to form this chemically precipitated material. The young ridges are relatively uncemented and look similar to loose oolitic sand. The extent of weathering and ooid structure is also in line with our predictions. As with any rock exposed to the vadose environment, the degree of weathering increases over time due to physical and chemical processes. As a result of weathering, the structure of the ooids within the rock becomes increasingly less defined due to compaction and dissolution. Coral fragments were expected in the old beach ridge samples. The material most likely came from the barrier reef to the north, incorporated into older beach ridges as the island rose relative to sea level. Younger beach ridges and ooids on the modern shore do not contain these fragments because they have been isolated from the barrier reef by the island, as the land mass increased. Therefore, more recent beach ridges appear more pristine. The size of the ooids in the beach ridge samples is also worthy of note. There is no distinct trend between successive beach ridges, but there are clear variations between samples. The average size of the particles, 0.3–1.5 mm in diameter, is fairly consistent with ooids found in the Bahamas and on the Caicos Platform (10). The most striking difference is not in the beach ridge samples but rather in the samples taken along Long Bay Beach and at Extreme Point. The Extreme Point ooids were visibly much smaller (~0.25 mm) than those found further to the northeast along Long Bay Beach (~0.5 mm), suggesting that the size variations are due to sorting as a result of transport.
The transport of ooids along the southeastern coast of Providenciales is due to longshore currents that carry ooids to the west. This allows for a progressive sorting of ooid size along Long Bay Beach, with larger, coarser particles observed near the northeastern end of the beach, and smaller, finer ooid particles observed at the southwestern end. The findings in the samples are consistent with predictions made from aerial photos – the westward migration of particles suggests that the particles become increasingly smaller the further they are transported. Overall, findings were consistent with Wanless and Dravis (10), and samples were similar to those collected in an earlier field research trip, led by Johnson (2001).
Areas of Future Research
The current research provides a solid foundation for future studies of ooid production and transport on the Caicos Platform. In particular, the correlation between characteristics of ooid-producing environments and the size, type, and composition of the ooids formed is important to explore. Variations in nucleus composition, water depth, water current, or even the concentration of carbonate in the water could be investigated at other ooid-producing environments on the Caicos Platform described by Wanless and Dravis (10). Additional research may also be conducted on the zones of cementation in the beach ridge samples, such as ridge 1, to determine what factors caused the cement to form in bands rather than continuously throughout the sample. Future research in these areas would serve to broaden our understanding of the Caicos Platform, as well as modern ooid-producing environments.
Significance
Limestones commonly occur in rocks of every geologic period of the Phanerozoic and many from the Proterozoic (1). The mineralogic and fabric character of these limestones reflects the complexities of the depositional systems under which they were created, serving as indicators of biological, physical, and climatic variations. By understanding modern ooid-producing environments, we can begin to interpret and understand the ancient carbonate facies and sediments that included such particles.
References
1. M. E. Tucker, Sedimentary Petrology (Blackwell Science, United States, ed. 3, 2001).
2. J. Donahue, Journal of Sedimentary Research 39, 1399 (1969).
3. M. E. Tucker, V. P. Wright, Carbonate Sedimentology (Blackwell Science, United States, 1990).
4. C. Le Goff, Ooids and Oolite, http://www.brookes.ac.uk/geology/sedstruc/ooid/webpage.html (5 March 2006).
5. T. Peryt, Coated Grains (Springer-Verlag, Berlin, 1983).
6. L. Simone, Earth-Science Reviews 16, 319-355 (1981).
7. N. D. Newell, E. G. Purdy, J. Imbrie, Journal of Geology 68, 481 (1960).
8. D. Y. Sumner, J. P. Grotzinger, Journal of Sedimentary Petrology 63, 974 (1993).
9. P. H. Monaghan, M. L. Lytle, Journal of Sedimentary Petrology 26, 111 (1956).
10. H. R. Wanless, J. J. Dravis, Carbonate Environments and Sequences of Caicos Platform (American Geophysical Union, Washington D.C., 1989).
Volume X!
It’s the DUJS’s 10th Anniversary! We would like to thank all of our loyal readers, advisors, and sponsors for their interest, help, and support over the years. Look for a DUJS retrospective in the spring issue! Also, keep an eye out for special events related to this milestone!
biology
Incorporation of Fluorinated Nucleotide Analogs Into HIV-1 TAR RNA
BOYD LEVER '10
Abstract
Tat is an 86 amino acid virally encoded protein vital to the human immunodeficiency virus type 1 (HIV-1) life cycle: it stimulates transcription initiation and increases the processivity of ribonucleic acid (RNA) polymerase II. Tat is introduced to the endogenous transcription machinery upon binding the RNA stem-loop TAR. The Tat-TAR interaction is limited to a 6 nucleotide loop and a 3 nucleotide bulge. Given these modest base-specific requirements, TAR tertiary structure must serve as the scaffold for Tat binding. A means to study nucleic acid tertiary structure is nuclear magnetic resonance (NMR). Fluorine NMR has a chemical shift range 100-fold larger than that of the more commonly utilized proton NMR. To generate TAR RNA for 19F-NMR, fluorinated adenosine and uridine nucleotide analogs were incorporated by in vitro transcription in a pUC 19 TAR-Hammerhead expression vector system. Upon transcription with 2F-ATP, 5F-UTP, CTP and GTP, Hammerhead ribozyme autocleavage was inhibited, precluding the production of TAR RNA. Conversely, transcription with 5F-UTP, ATP, CTP and GTP resulted in a 31 nt TAR transcript, a 57 nt Hammerhead transcript, and a visible amount of un-cleaved 88 nt TAR-Hammerhead transcript, implying decreased ribozyme autocleavage efficiency. Despite decreased autocleavage, milligram quantities of 5F-UTP TAR RNA were synthesized for 19F-NMR ligand binding studies. A model other than the pUC 19 TAR-Hammerhead system must be employed to generate 2F-ATP, 5F-UTP TAR RNA.
Introduction
Tat is an 86 amino acid virally encoded protein vital to the human immunodeficiency virus type 1 (HIV-1) life cycle: it stimulates transcription initiation and increases the processivity of ribonucleic acid (RNA) polymerase II (1,2). Tat is introduced to the endogenous transcription machinery by binding the RNA stem-loop encoded by the trans-activating response element (TAR) (3-5). The TAR element is positioned just distal to the transcription start site, conferring a TAR RNA hairpin located at the 5' ends of viral mRNAs (6). HIV-1 TAR contains two prominent structural features: an apical 6 nucleotide loop and a 3 nucleotide bulge, both requirements for Tat binding (Figure 1) (7). Through extensive mutagenesis it has been shown that Tat binding is localized to the bulge (7) and cellular cofactors bind the loop (8). TAR RNA tertiary structure is dynamic. In the absence of ligands like Tat, an open and accessible major groove is prominent, but a more tightly packed structure is conferred once TAR folds around basic side chains emanating from the Tat protein (9). Given the relatively modest base-specific requirements (only U23 makes a base-specific interaction with Tat), one infers that TAR tertiary structure must play an active role in protein recognition and RNA-protein interface formation (10). A means to study nucleic acid tertiary structure is Nuclear Magnetic Resonance (NMR) spectroscopy. Fluorine NMR has an extremely wide range of chemical shifts, nearly 100-fold larger than that of proton NMR, making 19F NMR a valuable tool in ligand binding studies such as Tat-TAR interactions. The first step in employing fluorine NMR for ligand binding studies is the incorporation of fluorinated nucleotides. In addition, TAR RNA transcripts with defined 5' and 3' ends are required for NMR analysis. To satisfy this requirement, we employed a TAR-Hammerhead construct, in which the Hammerhead ribozyme autocleaves itself from the transcription product, resulting in a TAR transcript and a Hammerhead transcript (11).
The experimental goal of this study is to incorporate 2F-Adenosine and 5F-Uridine into HIV-1 TAR RNA by in vitro transcription using the pUC 19 TAR-Hammerhead plasmid.
Figure 1: TAR RNA secondary structure. The nucleotides involved in Tat recognition are highlighted (U23, G26, A27, U38, C39). Numbering is relative to the transcription start site.
Methods and Materials
Template DNA and Fluorinated Nucleotides
Fluorinated Adenosine (2F-Adenosine) and Uridine (5F-Uridine) nucleotide analogs were obtained from colleagues at the Scripps Research Institute in La Jolla, California (Figure 2). The 2F-Adenosine, 5F-Uridine, Cytidine and Guanosine solution concentration was 114.1 mM.
solution was acquired containing independent 5F-Uridine. The pUC 19 test plasmids carried a 31 nucleotide HIV-1 TAR sequence (GGCCAGATTTGAGCCTGGGAGCTCTCTGGTC) and a 57 nucleotide Hammerhead ribozyme sequence (GACGGCTTCGGCCGTCCTGATGAGTCCGTCCTGATGAGTCCGTGAGGACGAAACCAGAGAGCTCCGGATCC) in the multiple cloning site downstream of the T7 polymerase promoter sequence (TAATACGACTCACTATA). The plasmid solution was acquired from colleagues at the Scripps Research Institute in La Jolla, California.
Unfluorinated in vitro Transcription: Control
The control transcription reaction mixture (15 ml) contained 750 μg template DNA, 21 mM standard nucleotides, 30 mM MgCl2, 10 mM dithiothreitol (DTT), 75 μl RNase Out (5000 U, Invitrogen), 3112.5 μg T7 RNA polymerase, and 1X transcription buffer (400 mM Tris pH 8.0, 100 mM DTT, 10 mM spermidine (FW 145.25), 0.1% Triton X-100), diluted with 18.2 MΩ·cm water. The transcription reaction mixture was incubated for 3 hours at 37 °C. Control RNA transcripts were fractionated by electrophoresis on 15% polyacrylamide gels containing 8 M urea, 0.89 M Tris base, 0.89 M boric acid, and 20 mM EDTA (pH 8.4). The gels were Coomassie stained and photographed on a Kodak 4000 MM Image Station.

Fluorinated Transcriptions
The 5F-UTP, 2F-ATP, GTP, CTP transcription reaction mixture (20 μl) contained 4.15 μg template DNA, 21 mM fluorinated nucleotide solution, 30 mM MgCl2, 10 mM DTT, 0.1 μl RNase Out (5000 U, Invitrogen), 4.15 μg T7 RNA polymerase, and 1X transcription buffer, diluted with 18.2 MΩ·cm water. The 5F-UTP, ATP, GTP, CTP transcription reaction mixture (15 ml) contained 750 μg template DNA, 5.25 mM each of ATP, CTP, GTP, and 5F-UTP, 30 mM MgCl2, 10 mM DTT, 75 μl RNase Out (5000 U, Invitrogen), 3112.5 μg T7 RNA polymerase, and 1X transcription buffer, diluted with 18.2 MΩ·cm water. The fluorinated transcription reaction mixtures were incubated for 3 hours at 37 °C. Fluorinated RNA transcripts were fractionated and photographed by the methods described above.

Transcription Condition Optimization
Prior to the synthesis of milligram quantities of TAR, 24 pilot transcriptions (20 μl each) were run in order to determine the optimal nucleotide and magnesium conditions for RNA synthesis. The optimization followed a 3 x 4 matrix: three nucleotide concentrations (21 mM, 24 mM, and 27 mM) were varied across four magnesium concentrations (21 mM, 24 mM, 27 mM, and 30 mM). The optimal conditions for RNA synthesis were determined qualitatively by fractionating the transcription products using 15% polyacrylamide gel electrophoresis.
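The optimization grid can be made concrete with a short script. This sketch is ours, not part of the original protocol; the 3 x 4 grid gives 12 unique conditions, so we assume the 24 reported pilot reactions correspond to duplicates of each condition.

```python
from itertools import product

# Illustrative sketch of the 3 x 4 optimization matrix described above.
# Assumption: the 24 pilot transcriptions are duplicates of 12 unique conditions.
ntp_conc_mM = [21, 24, 27]          # nucleotide concentrations
mgcl2_conc_mM = [21, 24, 27, 30]    # magnesium concentrations

conditions = list(product(ntp_conc_mM, mgcl2_conc_mM))
for ntp, mg in conditions:
    print(f"pilot transcription (x2): {ntp} mM NTPs, {mg} mM MgCl2")
print(f"{len(conditions)} unique conditions")
```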
Results
TAR-Hammerhead Synthesis and Autocleavage
Gel A contains transcripts from the unfluorinated control transcription, with the 31 nt TAR and 57 nt Hammerhead products separated, implying complete Hammerhead ribozyme autocleavage. Gel B contains transcripts from the fluorinated nucleotide transcriptions, with lane 1 as a size marker and lane 2 as an unfluorinated control. Lane 3 shows transcription products from the 5F-Uridine transcription: near the bottom is the 31 nucleotide TAR RNA; in the center is the 57 nucleotide Hammerhead ribozyme; and, interestingly, near the top is the 88 nucleotide TAR-Hammerhead construct, indicating less efficient cleavage of the Hammerhead ribozyme than in the unfluorinated control. Conversely, lane 4 shows only the 88 nucleotide TAR-Hammerhead construct, implying no autocleavage activity by the ribozyme when 2F-Adenosine and 5F-Uridine are incorporated.
Discussion and Conclusion
TAR is a virally encoded RNA that binds the viral
86 amino acid Tat protein. Once bound, the RNA-protein interface associates with the endogenous transcription machinery, making the Tat-TAR interaction an integral part of the HIV life cycle. Indeed, Frankel et al. have shown that the Tat-TAR interaction and the subsequent association with the transcription machinery are absolutely required for efficient transcription and, ultimately, viral replication. Given the integral role of the Tat-TAR interface in the HIV life cycle, understanding the physical dimensions of the interaction could potentially provide insights for anti-HIV therapy.
TAR-Hammerhead expression construct and the 88 nt TAR-Hammerhead transcript. The transcript has 12 highly conserved nucleotides that are absolutely required for efficient cleavage. Of the 12, 5 are Adenosine and 2 are Uridine, the nucleotides of interest. Fluorination of these conserved nucleotides interferes with ribozyme autocleavage activity.
Image of Gel A: Unfluorinated, control RNA transcripts. Lane 1: Oligonucleotide size ladder (75, 61, 51, 40, 25). Lane 2: Unfluorinated TAR and Hammerhead RNA.
Gel image courtesy of Boyd Lever ’10
Image of Gel B: Lanes 1 and 2 are the same as in Gel A. 5F-Uridine transcription products are in Lane 3, and 5F-Uridine, 2F-Adenosine transcription products are in Lane 4.
Gel image courtesy of Boyd Lever ’10
The incorporation of fluorinated nucleotide analogs into TAR RNA is the first step toward utilizing fluorine NMR in ligand binding studies. Incorporating fluorinated Adenosine and Uridine into TAR allows for the positioning of fluorine probes in the major and minor grooves of the TAR structure. Given that RNA polymerases in general, and the T7 RNA polymerase utilized in this study in particular, have non-templated transferase activity, a transcription model that generates TAR RNA with defined 3' and 5' ends is necessary. Given the Hammerhead ribozyme's autocleavage activity and production of defined ends, a construct exploiting this ribozyme is ideal for NMR studies. However, when TAR is transcribed with 2F-Adenosine and 5F-Uridine in the pUC 19 TAR-Hammerhead system, no autocleavage by the Hammerhead ribozyme occurs, precluding the production of TAR RNA; such a transcription yields only the uncleaved 88 nt TAR-Hammerhead transcript. Conversely, transcription with standard GTP, CTP, and ATP plus 5F-Uridine results in the production of the 31 nt TAR transcript, the 57 nt Hammerhead transcript, and a visible amount of uncleaved 88 nt TAR-Hammerhead transcript. The variable cleavage activity of the Hammerhead ribozyme reflects the interference of fluorine with the highly conserved Hammerhead cleavage site. Of the 12 conserved nucleotides, 5 are Adenosine, increasing the possible interference of fluorine with Hammerhead autocleavage when 2F-Adenosine is present, compared to fluorinated Uridine, which constitutes only 2 of the conserved nucleotides. Given the relative abundances of each nucleotide in the conserved cleavage site, one would expect that transcriptions with concurrent 2F-Adenosine and 5F-Uridine would
show greater cleavage interference than the use of 5F-Uridine alone. Indeed, we have shown that (i) 5F-Uridine incorporation decreases Hammerhead autocleavage efficiency and (ii) concurrent incorporation of 2F-Adenosine and 5F-Uridine inhibits Hammerhead cleavage completely. Without Hammerhead autocleavage, no TAR RNA is produced in this system. The cleavage inhibition thus precludes the concurrent use of the TAR-Hammerhead construct with the 2F-Adenosine, 5F-Uridine analog combination in TAR RNA studies, and a different model must be employed to synthesize 2F-Adenosine, 5F-Uridine TAR RNA for ligand binding studies. However, despite the decreased autocleavage efficiency, we successfully synthesized milligram quantities of 5F-Uridine TAR RNA for NMR imaging and future ligand binding studies (data not shown).
Acknowledgments
I wish to thank Mirko Hennig for his generous hospitality in allowing me total access to his laboratory, and the Medical University of South Carolina's Summer Undergraduate Research Program for their support and instruction.

References
1. M. B. Feinberg, D. Baltimore, and A. D. Frankel, PNAS 88, 4045 (1991).
2. S. Kao, A. F. Calman, P. A. Luciw, and B. M. Peterlin, Nature 330, 489 (1987).
3. N. J. Keen, M. J. Gait, and J. Karn, PNAS 93, 2505 (1995).
4. C. Dingwall et al., PNAS 86, 6925 (1989).
5. C. Dingwall et al., EMBO Journal 9, 4145 (1990).
6. A. D. Frankel, Current Opinion in Genetics and Development 2, 293 (1992).
7. M. G. Cordingley, PNAS 87, 8985 (1990).
8. R. A. Marciniak, M. Garcia-Blanco, and P. A. Sharp, PNAS 87, 3624 (1990).
9. F. Aboul-ela, J. Karn, and G. Varani, Nucleic Acids Research 24, 3974 (1996).
10. A. D. Frankel, Protein Science 1, 1539 (1992).
11. H. W. Pley, K. M. Flaherty, and D. B. McKay, Nature 372, 68 (1994).

Diagrams for this article were created in-house by Tim Shen '08.
Chemistry
It’s Getting Hot in Here:
Analysis of Prius Carbon Dioxide Emissions by FT-IR Spectroscopy
Bailey Shen '08, Constantinos Spyris '09, Benjamin Blum '09, Bryan Chong '09, and Daniel Leung '09
Advisors: Siobhan Milde and Charles Ciambra
Abstract
Since the Industrial Revolution, the carbon dioxide concentration in the atmosphere has been steadily increasing, most likely due to the burning of fossil fuels. Carbon dioxide absorbs infrared radiation, and its increase is partly responsible for the rise in global temperatures. Because of rising fuel prices and an increasing concern over global warming, fuel-efficient hybrid vehicles such as the Toyota Prius have become more popular. In this study, the concentrations of carbon dioxide emitted from the 2004, 2005, and 2007 Prius, the 2007 Camry, and the 2002 Corolla were measured. Samples were taken at 40 kph and 100 kph, as well as while the cars were idling. Carbon dioxide emissions were quantified by measuring the absorbance of the carbon dioxide bending peak with an FT-IR spectrometer and comparing it with a calibration curve. The Corolla hot idle run produced 317% more CO2 and the Camry hot idle run produced 223% more CO2 than the Prius averages. For the 40 kph runs, the Corolla and the Camry produced 151% and 79% more carbon dioxide than the Prius, respectively. The Corolla produced 23% more CO2 and the Camry produced 9% more CO2 than the Prius for the 100 kph run. The results were consistent with the miles per gallon data given by the EPA. FT-IR spectroscopy was very useful for calculating CO2 emissions, and in the future, FT-IR spectroscopy could be employed to quantify other pollutants in vehicular exhaust.
Introduction
In 1827, one of the pioneers of climate research, Jean Baptiste Joseph de Fourier, proposed that atmospheric gases were trapping the Sun's heat and that the Earth's atmosphere was like a "hothouse" (1). While he did not understand the mechanism or the identity of the gases, Fourier suggested that humans could disturb the natural climate (2).

The idea that molecules could trap heat was revolutionary. John Tyndall was the first person to attempt to quantify the absorption of various gases (2). Tyndall determined that water vapor, CO2, and methane trapped IR radiation, but N2 and O2 did not (2). Further evidence that molecules in the atmosphere could increase the Earth's temperature came from determining the Earth's steady-state temperature mathematically and experimentally. The mathematical calculation of the Earth's steady-state temperature, which assumes that all of the energy that enters the Earth is reflected back into space, predicted the Earth's temperature to be -19 °C (2). However, the experimentally determined value of the Earth's temperature was 15 °C (2). This discrepancy between the theoretical and experimental temperatures is attributed to what is now called the Greenhouse Effect: the increased temperature caused by molecules in the atmosphere that trap some of the heat that reaches Earth. These molecules, which include CO2, CH4, and N2O, are known as greenhouse gases.

Carbon dioxide has been a greenhouse gas of major concern because of its abundance in the atmosphere. Additionally, CO2 emissions have been steadily increasing since the Industrial Revolution, coinciding with a significant increase in the Earth's average temperature. Prior to 1750, the average atmospheric concentration of carbon dioxide was approximately 280 ppm; in 2000, this concentration had risen to 358 ppm (3). According to a 2005 National Oceanic and Atmospheric Administration report, the concentration of carbon dioxide is increasing by about 1.5 ppm per year (4).

An exponential increase in atmospheric CO2 will create a real problem for the Earth's future. First, an average global temperature increase will lead to a disruption in the natural seasonal variation as well as in the variety of natural habitats and ecosystems (5). Furthermore, the average water level of the Earth will rise drastically, presenting major problems for populations living near the coastline (5).

Table 1: Car specifications
                        '02 Corolla   '07 Camry   Prius
LOA (mm)                4,420         4,806       4,445
Width (mm)              1,694         1,821       1,725
Height (mm)             1,384         1,461       1,476
Curb Weight (lbs.)*     2,445         3,680       2,890
Engine Power (HP)       125           155         110 (Gasoline), 76 (Electric)
Engine Volume (L)       1.8 L4**      2.4 L4      1.5 L4
Coefficient of Drag     0.31          0.28        0.26
*Curb weight is defined as the total weight of the car, tires, oil, fuel, and all other accessories needed for a normal operating car.
**1.8 L4 refers to a gasoline engine that has 4 in-line (L) cylinders, the total volume of which is 1.8 liters.

According to the Department of Energy, transportation is responsible for about a third of all U.S. carbon dioxide emissions (6). The equation for combustion of hydrocarbons explains how the burning of fossil fuels leads to an increase in atmospheric CO2:

[1] CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O
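To make the stoichiometry of equation [1] concrete, the following short calculation (our illustration, not part of the original article) applies it to octane (C8H18), a representative gasoline hydrocarbon; roughly 3 g of CO2 are produced per gram of octane burned.

```python
# Worked example of equation [1] for octane: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol

x, y = 8, 18                             # C and H atoms in octane
o2_moles = x + y / 4                     # 12.5 mol O2 per mol fuel
fuel_mass = x * M_C + y * M_H            # ~114.2 g/mol of octane
co2_mass = x * (M_C + 2 * M_O)           # ~352.1 g CO2 per mol fuel

print(f"{o2_moles} mol O2 per mol octane")
print(f"{co2_mass / fuel_mass:.2f} g CO2 per g octane")   # ~3.08
```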
Gasoline is one of the primary fossil fuels due to its ease of transport (2). Gasoline engines themselves, however, are very heavy (7). An alternative to the gasoline engine, the electric motor, is much lighter, but does not provide as much power. A hybrid car, such as the Toyota Prius, combines the best of both worlds. At low speeds, the job of the gasoline engine in the Prius is to charge the battery, which in turn runs the electric motor. At high speeds, the electric motor and the gasoline engine both run. The Prius's impressive gas mileage is due to the fact that it carries a smaller gasoline engine than its non-hybrid counterparts. The Toyota Prius also boasts a regenerative braking system, an efficient parallel hybrid powertrain, and an aerodynamic body (7). The goal of these innovations, like the hybrid system itself, is to improve gas efficiency and lower carbon dioxide emissions.

Since CO2 has vibrational modes that absorb infrared radiation (Figure 1), FT-IR (Fourier Transform Infrared) spectroscopy is often used to analyze vehicular emissions (8, 9). All FT-IR spectrometers are based on the Michelson interferometer (Figure 2), which consists of a heat source, a detector, a beamsplitter, a fixed mirror, and a mobile mirror (10). Depending on how far the mobile mirror is from the beamsplitter, the light that bounces off the fixed mirror and the light that bounces off the mobile mirror may interfere constructively or destructively before hitting the detector. A graph showing the intensity of light that hits the detector versus the distance between the mobile mirror and the beamsplitter is called an interferogram. A computer Fourier transforms the interferogram (a time-dependent function) into a spectrum (a frequency-dependent function) (10). Before the advent of FT-IR in 1969, chemists used diffraction grating spectrometry to scan the entire region of interest one resolution element at a time (10); a spectrum between 4000 cm-1 and 400 cm-1, for example, would take several minutes to scan. FT-IR spectroscopy, however, can scan all elements simultaneously, allowing for a much shorter processing time for a given sample.

Figure 1: CO2 infrared spectrum. (Image courtesy of Bailey Shen '09)
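The interferogram-to-spectrum step can be illustrated numerically. The sketch below is ours (the real instrument uses the Perkin Elmer software): a cosine interferogram produced by a single spectral line at the CO2 bending wavenumber transforms back into a peak at that wavenumber.

```python
import numpy as np

# A monochromatic line at wavenumber nu0 yields an interferogram
# I(x) ~ cos(2*pi*nu0*x), where x is the optical path difference (cm)
# set by the mobile mirror.
nu0 = 668.4                           # cm^-1, the CO2 bending peak
x = np.linspace(0.0, 0.25, 4096)      # sampled path differences, cm
interferogram = np.cos(2 * np.pi * nu0 * x)

# The Fourier transform recovers the spectrum; frequencies come out in cm^-1.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(x.size, d=x[1] - x[0])

print(f"peak recovered at {wavenumber[spectrum.argmax()]:.1f} cm^-1")  # ~668
```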
Infrared spectroscopy, like other spectroscopies, can be used to quantify the amount of a chemical in a given sample. Beer's Law describes the relationship between concentration and absorbance:

[2] Absorbance = absorptivity (a) × cell length × concentration

If a calibration curve is made, an experimenter can determine the absorptivity a, a constant, and calculate a compound's concentration (10).

In our study, we compared the concentration of carbon dioxide emitted from a hybrid four-door sedan, the Toyota Prius, to that of its conventional counterpart, the Toyota Camry, through FT-IR spectroscopy. We also evaluated the emissions of an older, "environmentally friendly" sedan, the Toyota Corolla. The objective of this study was to find the carbon dioxide emission ratio between a gasoline/electric hybrid and a traditional gasoline-powered car, and to compare this ratio with the findings of the Environmental Protection Agency (11).
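Building such a calibration curve amounts to a least-squares line through (partial pressure, absorbance) points. Here is a minimal sketch of ours, with invented data points for illustration; the study's actual fit appears as equation [3] in the Results.

```python
import numpy as np

# Hypothetical calibration points: absorbance of the 668.4 cm^-1 bending
# peak at known CO2 partial pressures (values invented for illustration).
pressure_torr = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
absorbance = np.array([0.95, 1.18, 1.56, 1.94, 2.32])

# Fit a first-degree polynomial, i.e. a straight calibration line.
slope, intercept = np.polyfit(pressure_torr, absorbance, 1)
print(f"Absorbance = {slope:.4f}/torr * P(CO2) + {intercept:.2f}")
```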
Methods
Exhaust samples were taken from three different classes of cars. Two 2002 Toyota Corollas, two 2007 Toyota Camrys, and a 2004, 2005, and 2007 Toyota Prius were chosen to represent the prototypical economy car, the new midsize sedan, and the hybrid car, respectively. Between these three classes of cars, three different strategies for achieving low emissions were represented. The 2002 Toyota Corollas used small car and engine size, the 2007 Toyota Camrys used a
Figure 2: Diagram of a Michelson interferometer, the basis of the Fourier Transform Infrared spectrometers used to analyze vehicle emissions in this experiment. (Image courtesy of Bailey Shen '09)
Super Ultra Low Emissions Vehicle (SULEV) exhaust system, and the Toyota Prius used a gasoline/electric hybrid system (12). All cars tested were equipped with automatic transmissions and standard features. Table 1 lists the most important physical characteristics of the three classes of cars.

Exhaust samples were taken from all seven cars at speeds of 40 kilometers per hour (kph), 100 kph, and while the car was idling with a warm engine (hot idle). Between three and four different samples were taken for each car under each engine condition to be run separately on the FT-IR spectrometer. The roads used for collecting the samples had no major changes in elevation, though some smaller hills were present. All samples were taken on sunny days with dry roadways.

The samples of exhaust were taken using a Supelco pump (Figure 3). A metal probe was inserted about 30 cm into the exhaust pipe of the vehicles and secured using tape. A hose about 3.8 meters long connected the metal probe to the pump. The tubing was secured to the vehicle using tape; it went up to the bumper, forward along the car, and into the vehicle through the right rear window. For most of the tests, the right rear window was halfway down, and all the other windows were fully closed. The pump, which was kept inside the vehicle, was used to fill 1-liter plastic sample bags equipped with simple valves. Each bag took less than 30 seconds to fill.

In the lab, a large ice bath was prepared using ice, water, and NaCl to achieve a temperature below 0 °C. The bags were then placed on top of this ice water bath in order to freeze the water vapor, which was a large component of the exhaust and had an IR absorption that would obscure many of the peaks of interest. The bags were connected to a manifold, which was in turn connected to a 12 cm long IR glass cell with salt windows. The IR cells were evacuated, and sample was transferred from the bag to the cell. The FT-IR spectrometer was used to observe the absorbance at 668.4 cm-1, which is a characteristic peak of carbon dioxide.

The IR spectra were taken with a Perkin Elmer 1605 Series FT-IR spectrometer equipped with a lead sulfide detector and a tungsten iridium lamp. A spectral resolution of 2 cm-1 was used. A background scan of the evacuated gas cell was taken before every 12 scans.

Figure 3: One-liter Supelco pump used to obtain exhaust samples for this experiment. (Image courtesy of Bailey Shen '09)

Calibration Curve
The CO2 calibration curve was created by adding known amounts of pure carbon dioxide to the gas cell, bringing the gas cell to room pressure, taking the cell's IR spectrum, and recording its bending peak (668.4 cm-1) absorption. The partial pressure of CO2 in the gas cell was measured with a mercury manometer. The CO2 content in air was ignored.

Results
A sample IR spectrum can be seen in Figure 1. The bending peak absorbance of each sample is shown in Table 2. The uncertainty in absorbance was not mentioned in the 1605 Series manual, so the uncertainties of the bending absorbances are not listed in that table. The CO2 calibration curve can be seen in Figure 4. The equation of the least-squares line is

[3] Absorbance = (0.0152 ± 0.002)/torr × partial pressure of CO2 + (0.8 ± 0.1)

The hot idle, 40 kph, and 100 kph averages for each car are shown in Figure 5, and the corresponding averages for each class of car are shown in Figure 6.
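Given equation [3], a measured absorbance can be converted back to a CO2 partial pressure. The following sketch (ours, with the reported uncertainties omitted) applies the inversion to the '02 Corolla hot idle average from Table 2.

```python
# Invert the least-squares calibration line of equation [3]:
# Absorbance = 0.0152/torr * P(CO2) + 0.8   (uncertainties omitted here).
SLOPE = 0.0152       # absorbance per torr
INTERCEPT = 0.8      # absorbance

def co2_partial_pressure(absorbance: float) -> float:
    """Estimated CO2 partial pressure in the gas cell, in torr."""
    return (absorbance - INTERCEPT) / SLOPE

print(f"{co2_partial_pressure(1.9828):.0f} torr")   # '02 Corolla hot idle, ~78 torr
```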
Discussion
As recently as five years ago, the most economically feasible option for consumers looking for an environmentally friendly car was a small vehicle (the 2002 Corolla). However, improvements over the last five years in materials, aerodynamics, engine efficiency, exhaust systems, and hybrid technology now present consumers with better options. New conventional cars come with advanced exhaust systems, which reduce emissions of pollutants, as well as advanced engines, which burn cleaner and run more efficiently. Lightweight materials are also used nowadays to reduce weight. The 2007 Toyota Camry, which features these technologies, has become the first car with a conventional gasoline engine to be rated a Super Ultra Low Emissions Vehicle by the State of California (12). Similarly, hybrid cars have benefited from improved engineering over the past five years, and their prices are now comparable to those of conventional cars (13). Hybrids remain the most fuel-efficient option available on the roads today for "city" driving.

The comparisons of the Toyota Priuses to the Camrys and Corollas validated our theory that the Prius would emit less carbon dioxide when hot idling or moving at 40 kph than either the Camry or the Corolla. We also speculated, based on the EPA highway mpg data, that the carbon dioxide emissions of the Prius would not be much lower than those of the other cars at the high speed runs; our data confirmed this too (11). The Corolla hot idle run produced 317% more CO2 and the Camry hot idle run produced 223% more CO2 than the Prius averages. For the 40 kph runs, the Corolla and the Camry produced 151% and 79% more carbon dioxide than the Prius, respectively. The Corolla produced 23% more CO2 and the Camry produced 9% more CO2 than the Prius for the 100 kph run.

We deduced that the Prius did not offer the same CO2 advantage at high speeds on the highway, where the Prius's gasoline engine is operating at full speed. For the slower runs, the Prius utilizes its electric motor more, and the gasoline engine does not need to provide as much horsepower to the wheels; this allows the Prius to emit significantly lower amounts of CO2 at slower speeds. For the hot idle runs, the Prius exhibited much lower CO2 emissions because the gasoline engine shuts off while the car is stopped for short periods of time; furthermore, the car display, radio, air conditioning, etc. are all powered by the electrical system when the car is at a standstill.

For the hot idle and 40 kph runs, the two Camrys had very similar absorbances. The Corollas had similar hot idle and 40 kph absorbances as well: a 24% difference was measured in the hot idle runs and a 5% difference in the 40 kph runs. For the 100 kph run, there were large differences between cars of the same class: a 36% difference between the Camrys and a 59% difference between the Corollas. The differences at the high speed runs might be due to variable acceleration; at higher speeds, this variable acceleration error is magnified. Also, the difference between the two Corollas might be due to differences in how they were maintained since production. We noticed that the absorbances for each Prius varied greatly during the same run (see Table 2). This is probably due to the Prius's variable use of the electric and gasoline engines during our runs. When the Prius's battery is fully charged, the car draws mostly from the electric engine; if the battery is low during a drive, then the gasoline engine will be used (13).
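The percentage comparisons above follow directly from the class averages in Table 2; the short script below (ours) reproduces the hot idle figures, up to rounding and the exact averaging choice.

```python
# Hot idle per-car averages from Table 2.
prius   = [0.3097, 0.6291, 0.3476]
camry   = [1.3319, 1.4429]
corolla = [1.5977, 1.9828]

def mean(values):
    return sum(values) / len(values)

def pct_more(car, baseline):
    return 100.0 * (mean(car) / mean(baseline) - 1.0)

print(f"Corolla: {pct_more(corolla, prius):.1f}% more CO2")  # ~317.5, quoted as 317%
print(f"Camry:   {pct_more(camry, prius):.1f}% more CO2")    # ~223.6, quoted as 223%
```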
Figure 4: CO2 calibration curve computed for this experiment. (Image courtesy of Bailey Shen '09)

Error Analysis
One possible source of error in our study was the status of each car when we took our runs. Though we attempted to maintain a consistent setup, in some cases the air conditioner was running or extra windows were open, both of which may have caused the car to draw more gasoline: running the air conditioner at a high setting uses a significant amount of engine horsepower, and keeping an extra window open increases aerodynamic drag. The weight of the passengers in the car could also have affected our data; in some runs we had an extra passenger in the vehicle, which would require the engine to work harder to maintain the same speed, leading to slightly more CO2 being emitted when the cars were heavier. The road surface we took our samples on may also have skewed the data; though we attempted to take our samples over flat terrain, some roads were slightly sloped. Also, since some of the cars did not have cruise control, we often had to control the speed by foot. This undoubtedly led to small variations in speed during some runs and to variable amounts of acceleration so that the desired speed could be maintained. Yet another source of error could be leaks in the exhaust bags; some of our sample bags were only half-full before we were able to run IR spectroscopy on them. The time from when we took the exhaust samples to when we ran the samples on the IR varied from an hour to 48 hours.

Figure 5: Sample averages. (Image courtesy of Bailey Shen '09)
Figure 6: Vehicle class averages. (Image courtesy of Bailey Shen '09)

Comparisons to Previous Literature
We compared our data to www.fueleconomy.gov in order to see the ratios of CO2 emitted between the cars we sampled. The website showed that the Corolla emitted 70% more CO2 than a Prius, while the Camry produced 93% more CO2. These percentages are comparable to the percentages that
we obtained when we averaged our 40 kph and 100 kph runs. The sole difference between our data and that of the website is that fueleconomy.gov showed the Corolla emitting less CO2 than the Camry. We concluded that this variation could be attributed to the difference in mileage between the 2002 Corollas and the new 2007 Camrys from the dealership (fueleconomy.gov tests brand new cars only) (11).

Extensions of Our Project
Follow-up experiments include using long path IR spectroscopy to analyze the exhaust from automobiles. With a longer IR cell, we would be able to detect gases that are present at extremely low concentrations in exhaust, such as CH4, NO2, and SO2 (9). We could also use different techniques to analyze the exhaust, such as gas chromatography followed by mass spectrometry, or flame ionization to measure the different types of hydrocarbons emitted in exhaust (8). Also, by measuring the flow of exhaust, we could determine the amount of a given gas each car emits in grams per kilometer (9).
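As a sketch of that last extension (with entirely hypothetical numbers, since exhaust flow was not measured in this study), the ideal gas law converts a CO2 partial pressure and an exhaust flow rate into a grams-per-kilometer figure:

```python
# Hypothetical grams-per-kilometer estimate from a CO2 partial pressure,
# an exhaust volumetric flow rate, and the road speed (ideal gas law).
R = 62.364           # gas constant, L*torr/(mol*K)
M_CO2 = 44.01        # g/mol

def co2_grams_per_km(p_torr, flow_L_per_min, speed_kph, temp_K=300.0):
    mol_per_min = p_torr * flow_L_per_min / (R * temp_K)   # n = PV/(RT), per minute
    grams_per_hour = mol_per_min * M_CO2 * 60.0
    return grams_per_hour / speed_kph

# Illustrative values only (not measured in this study):
print(f"{co2_grams_per_km(78, 1500, 100):.0f} g CO2 per km")   # ~165
```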
Conclusion
Compared to its counterparts, the Toyota Prius produces significantly less CO2 when idling and when moving at 40 kph. The ratios of CO2 concentration between the Prius and the other four-door sedans studied agreed with the EPA's miles per gallon data. FT-IR spectroscopy offers a quick and precise way to measure carbon dioxide emissions, and it should continue to be used to analyze the emissions of new cars.
Acknowledgments
We would like to thank Mitch Piper of White River Toyota for providing us with many of the cars used in this study.
References
1. C. Chen and E. T. Drake, Annual Review of Earth and Planetary Sciences 14, 201 (1986).
2. S. Milde, Environmental chemistry lecture, Dartmouth College, Hanover, NH (2007).
3. K. Bennett and K. Willis, Global Ecology & Biogeography 9, 355 (2000).
4. After Two Large Annual Gains, Rate of Atmospheric CO2 Increase Returns to Average, NOAA Reports (2005). Available at: http://www.noaanews.noaa.gov/stories2005/s2412.html.
5. M. Mastrandrea and S. Schneider, Global Warming (2005). Available at: http://www.nasa.gov/worldbook/global_warming_worldbook.html.
6. U.S. Carbon Dioxide Emissions from Fossil Fuels Virtually Unchanged in 2005 as Price Increases Dampen Energy Demand (2006). Available at: http://www.eia.doe.gov/neic/press/press272.html.
7. J. Layton and K. Nice, How Hybrid Cars Work. Available at: http://www.howstuffworks.com/hybrid-car.html.
8. EPA Motor Vehicle Aftermarket Retrofit Device Evaluation Program (1998). Available at: http://www.p2pays.org/ref/20/19356.pdf.
9. F. Reyes, M. Grutter, A. Jazcilevich, and R. González-Oropeza, Atmos. Chem. Phys. Discuss. 6, 5773 (2006).
10. P. R. Griffiths and J. A. de Haseth, Fourier Transform Infrared Spectrometry, 2nd ed. (John Wiley & Sons, Hoboken, NJ, 2007), pp. 1-172.
11. Fuel Economy. Available at: http://www.fueleconomy.gov/.
12. Toyota Camry 2007. Available at: http://wheels.fosfor.se/toyotacamry-2007.
13. Toyota Prius 2007. Available at: http://www.toyota.com/prius/.
Table 2: Absorbances at 668.4 cm-1

Hot idle
Car           Sample 1   Sample 2   Sample 3   Sample 4   Average
'04 Prius     0.1672     0.4118     0.3534     0.3062     0.3097
'05 Prius     1.8614     0          0.3171     0.3377     0.6291
'07 Prius     0.2875     0.3124     0.3751     0.4152     0.3476
'07 Camry     1.2111     1.3859     1.3987                1.3319
'07 Camry     1.4583     1.4791     1.3914                1.4429
'02 Corolla   1.9882     1.9283     0.8767                1.5977
'02 Corolla   2.0335     2.1023     1.8127                1.9828

40 kph
Car           Sample 1   Sample 2   Sample 3   Sample 4   Average
'04 Prius     1.8        1.4        0.4501     0.6379     1.0720
'05 Prius     0.3993     0.7365     0.7015     0.5486     0.5965
'07 Prius     0.3        0.95       1.1733     0.5398     0.7408
'07 Camry     1.0208     1.5876     1.5361                1.3815
'07 Camry     1.4946     1.447      1.5472                1.4963
'02 Corolla   1.9314     1.8781     2.0936                1.9677
'02 Corolla   2.07       1.9832     2.1164                2.0565

100 kph
Car           Sample 1   Sample 2   Sample 3   Sample 4   Average
'04 Prius     1.8461     0.6912     0.8827     0.7496     1.0424
'05 Prius     0.7687     1.7464     2.1549     2.1922     1.7156
'07 Prius     0.8192     1.2768     1.8549     0.7854     1.1841
'07 Camry     1.3529     1.2483     1.0529                1.2180
'07 Camry     1.5188     1.93       1.507                 1.6519
'02 Corolla   1.1418     0.7776     1.8325                1.2506
'02 Corolla   2.0024     1.9632                           1.9828
DUJS Submission Form

What are we looking for?
The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research
This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review
A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay or Editorial)
Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Technical guidelines:
1. The length of the article must be 3000 words or less.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.
For more examples of these details and specifications, please see our website: http://www.dartmouth.edu/~dujs For information on citing and references, please see: http://www.dartmouth.edu/~sources Specifically, please see Science Magazine’s website on references:
http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml
DUJS Submission Form

Statement from student submitting the article:
Name: __________________
Year: ______
Faculty Advisor: _____________________
Email: __________________ Phone: __________________
Department the research was performed in: __________________
Title of the submitted article: ________________________________________________________________________
Length of the article: ____________
Program which funded/supported the research (please check the appropriate line):
__ The Women in Science Program (WISP)
__ Presidential Scholar
__ Dartmouth Class (e.g. Chem 63) - please list class ______________________ __Thesis Research
__ Other (please specify): ______________________
Statement from the Faculty Advisor:
Student: ________________________ Article title: _________________________
I give permission for this article to be published in the Dartmouth Undergraduate Journal of Science:
Signature: _____________________________ Date: ______________________________
Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.
Please answer the following questions about the article in question. When you are finished, send this form to HB 6225 or blitz it to "DUJS."
1. Please comment on the quality of the research presented:
2. Please comment on the quality of the product:
3. Please check the most appropriate choice, based on your overall opinion of the submission:
__ I strongly endorse this article for publication
__ I endorse this article for publication
__ I neither endorse nor oppose the publication of this article
__ I oppose the publication of this article
Spring 2007 Vol. IX | No. 2
Decoding the Language of Proteins
Microscopic Arms Race: A Battle Against Antibiotic Resistance

Write • Edit • Submit • Design